
Test-Driven Development is Stupid - henrik_w
http://geometrian.com/programming/tutorials/testing/test-first.php
======
hoorayimhelping
> _You are writing code to test something that doesn 't even exist yet. I am
> not rightly able to apprehend the kind of confusion of ideas that could
> provoke such a method._

Yeah, the fact that you can't comprehend why people do this is very clear; if
you could, you wouldn't have written this terrible rant. It feels like the
author is criticizing this before coming anywhere close to understanding why
people do it.

I really wanted some kind of point to disprove, but there aren't any in
this post. The strongest proof the author offers for his point is tautological
repetition of what he just said: "Tests don't work because they just. don't.
work."

The more I read, the more I realized this post isn't about testing, it's about
the author letting the world know how fucking awesome and smart they are. They
build huge software suites (from the sound of it, completely alone) with no
tests, and everything works out fine, even better than fine, spectacular. It
seems like the author deleted this line from the post in response to ridicule
in the comments:

> "As it happens, I do write code others depend on--as it happens, a lot of it
> --and, my code has never, even once, failed in production: a record I am
> extremely proud of."

I don't even particularly subscribe to TDD, but the arrogance, dismissiveness
and contempt of this guy made me want to see him proved completely and utterly
wrong.

~~~
theseatoms
>You are writing code to test something that doesn't even exist yet.

To piggyback on this... sure, the _code_ doesn't exist yet, but the project
specs do. And unit tests can help by making the required specification
explicit.

~~~
eutectic
In any other field, specifying a function by its value at a handful of points
would be a bad joke.

If you don't know what you want your code to do then tests will just make it
harder to experiment, and if you do know then there's no harm in writing them
after the fact.

~~~
dllthomas
Bench vices pin down a board at a handful of points. Other things about the
environment (shape of board, material properties of wood) help me make sure it
generalizes in the ways I care about.

------
mabbo
> I am against writing unit tests in general, since experience shows that they
> actually prevent high-quality code from emerging from development

And he's lost me in his first sentence.

Unit testing, good unit testing at least, is not so much about development as
it is about preventing regressions. A unit test that runs on every build
ensures that something is true and that it stays true forever.

~~~
zeveb
I think I see part of his point, though. By its nature, a unit test is (often,
albeit not necessarily) tightly coupled with the thing it is testing--which
means that anyone who changes the implementation must change the test, which
increases the complexity of code changes.

I know that in my own work, the single most important thing for me is to be
able to massively refactor, restructure and redesign my and others' code, as
I'm working with it (this is probably why I like dynamic languages and macros
so much, and probably why I benefit from static typing so much); anything
which gets in the way of deleting, reorganising, clarifying, duplicating,
altering, reducing and shifting code is going to slow me down and make the
resulting code worse.

Perhaps, though, there is a middle ground: while the internals of a library
must be free to mutate, the external interface ought not to change nearly as
much. Perhaps all unit tests should be written to test the external interface,
not the internal details. That might also help avoid the tests-which-test-
that-the-code-does-what-it-does disease.

I do really appreciate tests, and they do help detect and prevent certain
types of regressions.
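
To make "test the external interface" concrete, something like this toy sketch
(Python; names invented) is what I have in mind - the test never touches the
internal list, so I'm free to restructure it:

    # shopping_cart.py -- the internal storage is free to change at any time
    class ShoppingCart:
        def __init__(self):
            self._items = []          # internal detail, never asserted on

        def add(self, name, price, quantity=1):
            self._items.append((name, price, quantity))

        def total(self):
            return sum(price * qty for _, price, qty in self._items)

    # test_shopping_cart.py -- exercises only the external interface
    def test_total_sums_all_items():
        cart = ShoppingCart()
        cart.add("apple", 0.50, quantity=4)
        cart.add("bread", 2.00)
        assert cart.total() == 4.00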

~~~
ed_blackburn
Which is why testing behaviour, not implementation, is so important. Admittedly
this sounds easier than it is in practice. Personally I've found that in-
memory, coarse-grained tests more akin to integration tests, which test from
the outside in, are helpful and easier to maintain because they tend to be
less concerned with the how. I can then choose to write finer-grained unit
tests where appropriate.

~~~
mabbo
Agreed! A code base I recently inherited (and exorcised) contained unit tests
with roughly 90% coverage... And not a single assert. They didn't understand
the idea of testing behavior, they just copied the implementation as tests.

I reduced 20,000 lines of tests to 5,000, and caught a dozen bugs.
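
For anyone who hasn't run into that anti-pattern, the difference is roughly
this (a made-up Python sketch): the first "test" earns coverage but can never
fail, only the second one pins down behaviour:

    def apply_discount(price, percent):
        return price * (100 - percent) / 100

    def test_apply_discount_no_assert():     # gets "coverage", zero protection
        apply_discount(price=100, percent=10)

    def test_apply_discount_behaviour():     # actually pins the behaviour down
        assert apply_discount(price=100, percent=10) == 90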

------
franciscop
I love the first quote for how (ironically) true it is:

"Trying to improve software quality by increasing the amount of testing is
like try[ing] to lose weight by weighing yourself more often."

Speaking as someone who has lost over 15kg in the past few months, one of the
things that helped most was starting to weigh myself, which I didn't do
previously. It kept reminding me that I wasn't there yet and gave me more
motivation.

~~~
tdkl
What is "more often" ? Weighing yourself daily is useless, since the weight
fluctuates too much. Not to mention that "weight" is useless as well, while
you're actually trying to loose FAT. Fat percentage and resulting lean body
mass (body weight minus body fat) are the metrics to measure. But still a
waste of time doing it daily, twice per week to gain insight on the delta and
where it's trending is enough.

~~~
TeMPOraL
Weighing yourself daily is _not_ useless. The idea of doing it weekly is being
spread explicitly because GenPop doesn't understand the concepts of a moving
average and a low-pass filter - instead, they freak out over those daily
fluctuations. If you weigh yourself daily and remember to always look at the
average of the last few samples, you get a more useful indicator of your
current weight trend.
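
The "low-pass filter" here is nothing fancier than a trailing average, e.g.
(illustrative Python, numbers invented):

    # Daily weigh-ins fluctuate; a 7-day trailing mean shows the actual trend.
    daily_weights = [82.4, 83.1, 82.2, 82.9, 82.0, 82.6, 81.9, 82.3, 81.7, 82.1]

    window = 7
    trend = [
        sum(daily_weights[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(daily_weights))
    ]
    print(trend)  # much smoother than the raw daily numbers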

------
alanfranzoni
There're already people speaking about the limits of a too strict TDD:

[http://david.heinemeierhansson.com/2014/tdd-is-dead-long-
liv...](http://david.heinemeierhansson.com/2014/tdd-is-dead-long-live-
testing.html)

And yes, it may be stupid to test before a design emerges - but only if you
start with a very fine-grained test. Usually when I'm coding from scratch I
write a very, very, VERY coarse-grained test that "tests something", and when
I reach the point of passing it (which may involve creating and designing
multiple classes) I probably have a working design and I may begin creating
other, smaller unit tests for individual components. The initial test may
disappear or become an integration or acceptance test.
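
For the sake of concreteness, such an initial test can be as vague as this
sketch (Python; the names are placeholders and none of the code behind them
exists yet):

    # A deliberately coarse first test: all it pins down is "given this input,
    # the program as a whole produces roughly this output". It starts out red;
    # generate_report and everything behind it gets designed while making it pass.
    def test_report_mentions_every_customer():
        report = generate_report(customers=["alice", "bob"])
        assert "alice" in report
        assert "bob" in report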

By the way there's little content in the article. It's just a rant. And the
article about not writing test cases at all is simply ridiculous - since an
error can exist in a test, we shouldn't write test cases?!?

~~~
Scarblac
> Usually when I'm coding from scratch I write a very, very, VERY coarse-
> grained test that "tests something", and when I reach the point of passing
> it (which may involve creating and designing multiple classes) I probably
> have a working design and I may begin creating other, smaller unit tests for
> individual components.

That is exactly how I start coding something from scratch, except I don't have
that initial test. I don't think such a vague starting test adds anything
real.

~~~
alanfranzoni
It offers the value of having an initial target, otherwise you risk coding too
much without stopping.

------
struppi
If you're doing TDD (or software development) like this, you're doing it
wrong. Yes, yes, I know. No True Scotsman [1]. But when many - I mean LOTS OF
- people say "I get benefits form [Technique]", you can't just say: "It cannot
work. I tried it, it sucked.".

I mean, you can say that. But doing so makes you look... ignorant - at best.
You know, there is a possibility that you just got it wrong.

So, many great programmers say that they get benefits from TDD. _They_ get
benefits. Not the suits. Not their co-workers. _They_.

TDD is hard. I had to learn it and practice it. And I'm still learning and
practicing it, even though I'm now also teaching it to others and helping
teams implement it. But I get benefits from it. Writing tests helps me to
think about problems and to _actually improve my designs_ [2]. And writing
tests helps me to know when to stop - To not gold-plate my designs.

Sorry, but this "article" is just an angry rant, with no real arguments.
Please, don't give up on TDD early because of rants like this. If you have any
questions about TDD or need help getting started, feel free to ask (here or in
private - you'll find my email address in my profile)...

[1]
[http://davidtanzer.net/no_true_scotsman_in_agile](http://davidtanzer.net/no_true_scotsman_in_agile)

[2] But you have to refactor ruthlessly. And learn and practice refactoring,
which is hard.

~~~
meowface
>If you're doing TDD (or software development) like this, you're doing it
wrong.

What do you think is the right way to do it, then?

~~~
struppi
My point is: If TDD leads you to _bad design_ you're doing it wrong. You are
probably not listening to your tests and you are probably not taking care to
refactor towards a better design. Maybe you are even writing bad unit tests
[1].

BTW, if _any technique_ in software development leads you to bad design, and
you don't stop and try to improve something, you're doing _software
development_ wrong. If a technique does not help you, you have two
possibilities: Trying to do it better (maybe with outside help), or trying
something different.

[1] [http://www.makinggoodsoftware.com/2012/01/27/the-evil-
unit-t...](http://www.makinggoodsoftware.com/2012/01/27/the-evil-unit-test/)

~~~
TeMPOraL
But that's a general counterargument for criticism against anything - "if it
doesn't work for you, you're doing it wrong"!

Ultimately, we're too young a field to be able to replace common sense with a
process.

------
TeMPOraL
_Way_ stronger than I'd write, and I don't agree with it entirely, but the
author has a point.

Personally, what annoys me the most about TDD I've seen in the wild are two
things: designing for tests instead of actual problems, and tests affecting
the structure of "real" code.

Designing for tests - the standard TDD approach, first we write tests, then we
write code to pass the tests. Quite often the consideration of _the problem
being solved_ disappears. It's a fine approach when your task is to write a
small black box that takes some data in and transforms it into something else.
But I've never seen a case where someone made it work for complex tasks. It
always ends up the same - your tests become more complicated than the tested
code. It happens with any non-trivial problem, because the thinking you have
to do to write tests that make sense is _the same_ as the thinking you need to
do to solve the problem in the first place. So you're basically writing the
program twice, only in a convoluted way, and without considerations for global
design.

Tests affecting the structure - this is IMO a strong code smell. If you're
modifying your design to accommodate tests, by e.g. adding superfluous
dependencies, hooks or injection points, you've screwed up. It only makes the
code more complicated and less reliable.

The only tests I've found valuable so far (in terms of effect for effort
spent) are regression tests - the ones you write to catch bugs in order to
make sure they won't happen again. Everything else in TDD seems to be easily
replaceable by proper iterative programming.

Maybe that's why TDD is popular in the languages without a sane REPL.

~~~
Freak_NL
> Tests affecting the structure […] is […] a strong code smell

On the other hand, code being hard or impossible to test is often thought of
as a code smell as well. Code that is easy to test is easier to understand —
not in the least because there are tests demonstrating its use. Techniques
such as dependency injection help a lot here.
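
A minimal sketch of the dependency-injection point (Python, names invented):
the collaborator is passed in, so a test can hand the code a fake clock
instead of the real one:

    import datetime

    class GreetingService:
        def __init__(self, clock=datetime.datetime.now):
            self._clock = clock          # injected; production uses the real clock

        def greeting(self):
            return "Good morning" if self._clock().hour < 12 else "Good afternoon"

    def test_afternoon_greeting():
        fake_clock = lambda: datetime.datetime(2015, 6, 1, 15, 0)   # fixed fake time
        assert GreetingService(clock=fake_clock).greeting() == "Good afternoon"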

~~~
Jach
Techniques like dependency injection can be really useful -- see Angular --
but too often I see a perfectly understandable piece of code expand into a
mess of multiple constructors (only one ever called in production) and helper
methods, all so that strict unit testing can be done. The post's commit story
was unsurprising. If TDD is being done, this mangling often just happens
upfront. DI is great in the same way interfaces are great, but if your code
_actually_ just cares about a particular implementation that you instantiate
wherever, or that is even built into the language, it's easiest to reason
with that implementation. Taken to the extreme, you get Enterprise FizzBuzz.

------
ghshephard
Just wait until he has to write a system with several hundred web services
that have been documented to perform in a very precise manner given particular
data sets, and have hundreds of customers who have integrated with that API,
and absolutely _require_ that there be no variance in the output of the API,
or their (extraordinarily expensive) system integration will fail, at great
cost to their business systems.

I'm not saying TDD makes sense everywhere, but being able to confirm that your
3000+ continuous integration tests all passed green before shipping a new
version to all of your customers is a huge way to avoid embarrassingly
predictable bugs. (Leaving, of course, the opportunity to ship the
embarrassingly non-obvious bugs)

And, honestly, it's not clear that he was opposed to TDD so much as he was
opposed to being constrained to writing huge test frameworks during the
exploratory phase of code development, in which you may want to have a bit
more freedom to write/discard while you feel out requirements/solutions.

~~~
TeMPOraL
What you described is _having tests_, not _doing TDD_. That is, it's important
to have those test cases verifying that your APIs still work, but that doesn't
mean you have to _design_ the APIs by writing tests, as opposed to designing
by thinking about what you actually want to achieve.

~~~
ghshephard
Fair point. I was more referring to his comment here: _I am against writing
unit tests in general, since experience shows that they actually prevent high-
quality code from emerging from development--a better strategy is to focus on
competence and good design._

~~~
TeMPOraL
I disagree about that with the author too. I think _testing_ and _TDD_ should
be clearly separated as two different things.

------
elros
I genuinely wanted a good critique (It's nice to get dissenting viewpoints)
but reading text so charged with anger and so scant in information is
stressing me out.

~~~
cricketer9923
agreed - there are certainly some disadvantages to TDD, and I was hoping for
a discussion raising some ideas I'm yet to formulate fully.

Instead there were lots of parallels that I don't recognise.

It was disappointing the author didn't seem to try and find the advantages in
TDD and discuss a counter argument or alternative, such as modifying a foreign
code base.

------
zamalek
There have been one or two HN posts somewhat recently indicating that
developers should stop calling themselves "engineers." Enter examples of how
engineered solutions generally work the first time round, and how software
almost universally does not. There's a fundamental reason for that: it's
computer _science._

One of the facets of science is that you have a hypothesis. Writing a unit
test before you start is analogous to having the hypothesis: "the solution I
come up with for this problem will work." The rest of the exercise of TDD is a
scientific process of creating a solution and _proving_ that your solution
works (and adjusting your hypothesis if you find it to be incorrect), albeit
in a slightly strange way.

I could have a hypothesis about how earth is really traveling through the
stars on the back of a turtle. We know that is not true because Einstein came
up with a solution, followed by him and others _proving_ that solution.
Relativity is only valuable because _proof_ of it exists.

If computer science is a science (which it is) we must put our solutions to
the same degree of rigor that other scientists do.

While I no longer follow the strictest form of TDD (starting with tests that
fail to compile) I'll never forget the singular most important lesson that it
has taught me:

    
    
        A solution is worthless if you cannot prove that it works.
    

I'd take this article more seriously if Mr. Mallett provided an alternative
for correctness proofing. I can't honestly sell software to someone if I don't
know if it works myself.

~~~
TeMPOraL
I disagree on multiple levels. First of all, programming is _not_ computer
science. Sure, we probably do not deserve to be called "engineers", but what
we do goes in completely other direction - _away_ from science and towards
art.

Secondly, I wouldn't try to fit unit testing into scientific process - because
if the test is a hypothesis, then what you're doing is the exact opposite of
how science is done. You do not design your experiment to make your hypothesis
come out true!

While the quote you cited is interesting, I'd treat it with a grain of salt,
given that you can't _prove_ that your solution works - and if you think you
did, it usually turns out the _proof itself_ is wrong. It happened even for
formally proven algorithms.

Testing gives you increased confidence. It's a worthwhile goal. But IMO,
driving your design by tests is going a bit too far, and something you don't
need to do to have tests that ensure your code works.

~~~
jerf
"Secondly, I wouldn't try to fit unit testing into scientific process -
because if the test is a hypothesis, then what you're doing is the exact
opposite of how science is done. You do not design your experiment to make
your hypothesis come out true!"

True, but I think that objection can be trivially fixed by simply... doing
that. I write tests that confirm the code does what I designed, but I also
write code that tries to break my design, and tests that verify that it errors
as expected. I do approach it with a scientific mindset. (And I tend to
consider "scientific mindset" to be the more important part of science, vs.
overprivileging some checklist of specific techniques, which ought to have
come from the scientific mindset in the first place.)

Of course it remains true that you can't do that perfectly, but that's a null
objection in the end. Nothing ever can be, but at least I try.

I can't even count how many times I've tested a code's error case, only to
discover that it unexpectedly "worked". Usually that's because there's a bug
and I need to fix the error case... every once in a while it turns out my code
corrects my own understanding when it reveals what I thought was an error case
is actually perfectly valid and sensible, though. It's important to _try_ to
break the code.
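
The error-case tests I mean are nothing more elaborate than this sketch
(Python with pytest; the function is made up):

    import pytest

    def parse_port(text):
        port = int(text)                      # raises ValueError on garbage
        if not 0 < port < 65536:
            raise ValueError("port out of range: %d" % port)
        return port

    def test_valid_port():
        assert parse_port("8080") == 8080

    def test_garbage_is_rejected():           # verify it errors as expected
        with pytest.raises(ValueError):
            parse_port("not-a-port")

    def test_out_of_range_is_rejected():
        with pytest.raises(ValueError):
            parse_port("70000")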

~~~
TeMPOraL
> _I do approach it with a scientific mindset. (And I tend to consider
> "scientific mindset" to be the more important part of science, vs.
> overprivileging some checklist of specific techniques, which ought to have
> come from the scientific mindset in the first place.)_

True. If you just follow the checklist without following the spirit of the
scientific method, you end up doing socio^H^H^H^H^Hcargo-cult science.

As for scientific mindset in programming, I think it's a very valuable thing
to have on both larger scale - in various forms of testing - and smaller
scale. I found that, when running your code, it's good to ask yourself
beforehand what exactly you expect to happen, and if you see _any_
deviation, immediately go figure it out, or at least note it down. If a
program does something unexpected, it means you don't understand it.

------
ayberkt
> Let me emphasize: you write the test cases for your program, and then you
> write your program. You are writing code to test something that doesn't even
> exist yet. I am not rightly able to apprehend the kind of confusion of ideas
> that could provoke such a method.

I don't understand this. What is so absurd about specifying facts about the
program you will write? When we have tools that can prove facts, we will write
formal specifications instead of random sampling. But still, testing is a way
of statistically specifying facts about your program, and it seems sensible to
write such specifications before the program.

~~~
mojuba
Because like the author says, there is no substitute for competence. If you
can't write good code, then your assumptions about the future design will be
wrong and/or bad, and your tests will be as poorly written. In other words,
TDD (anecdotally) doesn't improve your code but only introduces extra levels
of complexity. Which in the hands of an incompetent developer becomes an even
bigger problem than if there was no TDD.

~~~
veidr
If you can't write good code, then you will not write good code.

But, there is another common case where unit tests (written before or after)
also come in tremendously handy: if you can't write good mistake-free code
_100% of the time_.

Which describes all programmers who have ever existed.

------
meesterdude
Honestly this doesn't belong on the front of HN. There is no discussion value,
or sense of professionalism. It's just a rant riddled with misplaced anger.

There are valid reasons to be against TDD, but it is not stupid. To call a
methodology "stupid" is needlessly judgmental and really has _NO_ place in the
pragmatism and trade offs that so often go along with writing software. Anyone
that calls a software methodology "stupid" so casually has no business writing
software at all, nor blog posts about software.

I don't care how smart you are, or how right you are. If you want people to
listen to you and consider your ideas, don't write posts like this, in this
tone, at this length, with so little actual substance.

------
alebaffa
When I read these kinds of articles I'm always curious to see the professional
background of the author. Not to criticize, but to see if he/she's talking
about something he/she saw at scale or not. Because if you're working with a
very small code base then I may even understand sentences like "I am against
unit tests in general". I've never met people who work (or worked) in very
large companies who are against, at least, unit tests. Where hundreds or
thousands of people touch the same code, not having unit tests is, in the long
term, suicide.

~~~
ghubbard
None. He's still in university. [0]

[0] [http://geometrian.com/about/cv.pdf](http://geometrian.com/about/cv.pdf)

~~~
zimpenfish
I think you're being slightly disingenuous there with the "still in
university" dismissal. That CV is a lot more impressive than many people I've
worked with in the "real world".

------
henrik_w
A while back I described the benefits I think unit tests give you:

1. Well-tested parts

2. Decoupled design

3. Rapid feedback

4. Local context to test in.

At the same time, there are many cases where they don't help much. More here:
[http://henrikwarne.com/2014/09/04/a-response-to-why-most-
uni...](http://henrikwarne.com/2014/09/04/a-response-to-why-most-unit-testing-
is-waste/)

~~~
tomp
I think you're missing the most important one: ensuring you don't unexpectedly
break your own code in the future (when you come back in 6 months and forget
why exactly it's arr[1:n-1] not arr[0:n-1] or arr[1:n]).
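
I.e. a tiny regression test that pins the non-obvious choice down. A
hypothetical Python sketch (the sentinel story is invented purely for
illustration):

    def trim_sentinels(arr):
        # First and last elements are sentinel markers, not data,
        # which is why the slice is arr[1:n-1] and not arr[0:n-1] or arr[1:n].
        n = len(arr)
        return arr[1:n - 1]

    def test_trim_sentinels_drops_both_markers():
        assert trim_sentinels(["START", 1, 2, 3, "END"]) == [1, 2, 3]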

~~~
falcolas
Sounds like a well placed comment would be of more value than a test there.

~~~
CamatHN
Comments are easy to miss though. We aren't perfect about going through our
code, and testing gives us objective measures.

~~~
falcolas
A test won't say why code is the way it is; it will simply provide some sample
input and expected output to that code. This may be enough, but not always.

------
krstck
His experience might be a lot different from mine, but I've found TDD
enormously helpful when tackling legacy systems. Maybe that's not "test-driven
development" and more like "test-driven refactoring", but working on complex
legacy systems where the original developers are basically all gone is _scary_
, and TDD has helped me make some sense of it and feel a lot better about
making changes.

------
hhandoko
I don't necessarily use TDD all the time, but I think it provides significant
value. A key one is that it helps developers break down complexity.

This is quite apparent especially when I conduct pair programming interviews.
Developers who were exposed to TDD (or to projects with significant test code)
approach the problem in a far more structured manner, and their code is much
more pleasant to look at :)

------
toothrot
No you're stupid. :P

This is just a rant. I was hoping for an actual study, but the title should
have given it away.

------
bguthrie
I'm not clear what technique this article is trying to describe, but anyone
that spends two weeks pre-writing tests is not performing TDD. That simply
isn't how it works.

The cycle is: red, green, refactor, repeat. That's per test. It shouldn't take
long. It works nicely.
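
To give a sense of the scale of one cycle, a throwaway Python sketch:

    # red: write one small failing test
    def test_slug_replaces_spaces():
        assert slugify("Hello World") == "hello-world"

    # green: the simplest code that makes it pass
    def slugify(title):
        return title.lower().replace(" ", "-")

    # refactor: clean up, keep the test green, then repeat with the next test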

------
snarfy
TDD assumes you know what all the interfaces are going to be beforehand. If
you discover that an interface needs to change you have to refactor all of
your tests. Unit tests are momentum against change.

It also assumes a bug free app with 100% code coverage is the objective,
regardless of the cost it takes to write all that test code. An app that has a
few bugs but takes half the resources to develop can make better business
sense.

~~~
sulami
> Unit tests are momentum against change.

I get the point, but feel differently. When a project becomes large enough,
changes can result in breakage in the weirdest places. Proper testing can help
discover this breakage before shipping. For me, tests are a safety net for
changes in the tested code.

Large projects without any tests are effectively unmaintainable, oftentimes
even for the original author, and most certainly for others.

~~~
snarfy
I do agree with this, and have felt it from both ends. If you need to change a
core class or interface which cascades throughout the system, those unit tests
are sure nice to have. I'm not sure I'd be confident enough to change the core
class without the unit tests. In that sense, lack of unit tests is momentum
against change.

------
pjc50
Buried in the rant are a couple of sound points:

> - You're trying to make a design before you learn anything about it.

> - People write something one way, but then are afraid to change it because
they'll have to rewrite the testing code that goes along with it.

Both of these suggest tests of the wrong granularity; that people are, as he
says, writing silly little fencepost error checks for every single function.

------
ramblerman
> Week 3-4: Write tests.

> Week 5-10: Write code.

??? what

The feedback cycle is: write one test, implement code. Not: implement a test
suite for the entire program, then implement your code base.

------
odabaxok
Please, correct me if I am wrong, but the process he described is completely
against TDD and no wonder it did not work (they wrote the function first, and
added tests later, lots of tests; in TDD you would instead have more, shorter
functions with a few tests for each):

 _- Function A is 147 lines long. It is the simple core of the program.

- Function A is committed to the repository on June 26th, 2002. Function A
has four test cases. Nevertheless, a bug is found in Function A on the 28th
and a patch is uploaded on July 6th. It contains two new test cases.

- This continues a bit. However, by August 2002, function A is mostly stable
and has no fewer than thirteen test cases--mainly for fencepost errors and
other idiotic things anyone can find with a stack trace. Except for a blip in
early 2003, function A, now 152 lines long, is unchanged until mid-2006._

------
marcosdumay
I'm writing an SMTP server, and when I started it I looked at the problem, and
for the first time since I stopped being a student, I had a well specified
problem where I could try TDD.

At the time I rationalized it and said - nah, I'm too lazy to do that on a
side project. But as soon as I tried to use it in the real world, I discovered
that all those RFCs are just crude approximations of how people actually use
the server (most of them are even incomplete). There's absolutely no compliant
client or server out there, and TDD would have caught none of the bugs I
discovered at the time. And that's with email - a protocol that people have
been working on standardizing since before the Internet existed.

Sorry, but nowadays I'm extremely skeptical about TDD having any application
at all. Not even for reinventing the wheel.

------
ramblerman
TDD is truly cult-like in its extremes. Usually it's managers and slightly
weaker technical folk with a scrum certification that spew the absolutist TDD
path.

I'm a contract Java developer and I know how to 'play the game'. The hypocrisy
shows in the number of fizzbuzz clones I've written in a TDD fashion in
interviews, only to then see that the production code has little to no unit
tests.

I once got negative feedback on an interview because I wasn't TDD enough,
after expressing the opinion that TDD works great for most use cases but there
are limits, e.g. positioning of a front-end element isn't always best done
with TDD. The guy interviewing was non-technical, and of course any dissent
from TDD meant I was a bad fit.

Anyways - end rant. Like I said, I just learnt to play the game.

------
auggierose
It's not really a rant, it is a very strongly communicated opinion. And I
agree with a lot of it.

I don't mind writing a few test cases during my work or after I am done with
it. This is mostly to keep someone (including me) from changing my perfect
design after I have found it, though ;-)

------
zaargy
Oh look another article with a provocative (e.g. link-bait) title. I'm sure
this will be a well-balanced, nuanced piece on the costs and benefits of TDD,
when it makes sense, when it does not make sense, and will leave me with a
better understanding of the subject.

Oh.

------
ilitirit
Does anyone have a good counter-point worth reading? I understand that it's
more of an anecdotal rant, but it's not the first time I've read something
like this about TDD.

I've tried TDD, but I've found that it just stifles my productivity too much.

------
bartuz
TDD isn't about writing tests. It's about designing architecture.

[http://www.drdobbs.com/tdd-is-about-design-not-
testing/22921...](http://www.drdobbs.com/tdd-is-about-design-not-
testing/229218691)

------
ruraljuror
It seems a lot of these discussions surrounding testing are very anecdotal;
this link being a prime example. I remember hearing an interview with Greg
Wilson regarding his book _Making Software,_ [1] the premise of which is to
apply a more rigorous methodology to understanding what makes software work.

If I remember correctly from the interview (I think it is here[2]), one
conclusion was that TDD doesn't have a clear benefit when you add it to a
project. On the other hand, in a survey, TDD projects are more likely to
succeed because it is a habit common to good developers. I hope I am capturing
the subtlety there. Essentially, TDD is not a silver bullet, but rather a good
habit shared by many good developers. That was enough to convince me of the
merits.

It's another problem altogether to try to institute TDD for a project,
especially for a team. Like so many things in programming, TDD could be used
and abused. The same could be said for JavaScript or [insert proper noun
here]. If misunderstood or used incorrectly, TDD could be a drain on the
project. A benefit--and this ties back into the idea of TDD as a habit--is
that it forces the code you write to have at least one other client. This
requirement would alter the way you write code and arguably for the better.

[1]
[http://shop.oreilly.com/product/9780596808303.do](http://shop.oreilly.com/product/9780596808303.do)

[2] [https://blog.stackoverflow.com/2011/06/se-
podcast-09/](https://blog.stackoverflow.com/2011/06/se-podcast-09/)

------
grhmc
> I was once one of these newly educated kids. So, for a few years, I worked
> exclusively with the Test-First strategy. When it was over, the results were
> undeniable. The code--all of it--was horrible. Class projects, research
> code, contract programs, indie games, everything I'd written during that
> time--it was all slow, hard to read, and so buggy I probably should have
> cried. It passed the tests, but not much else.

Sounds like most everyone's first years :)

------
mannykannot
TDD has always looked to me like a good idea taken to ridiculously dogmatic
extremes (a very common occurrence in software development, IMHO), but I think
many of this author’s indiscriminate potshots miss the most important
problems.

In order to write a test for something, you need to know what it is supposed
to do, and testing will not tell you that. In order to make something that
passes your tests, you have to design it, and a test does not tell you what
that design should be - it can tell you if you failed in a specific way, but
not what to do about it. There is a whole lot of analytical reasoning and
technical judgement to software development that is ignored by TDD, and while
thinking about test cases can help with this, it is an insufficiently powerful
method to complete the job.

Agile methods have not changed this. Insofar as they form a complete
development methodology (and I am not sure they do), they offer an empirical
approach to discovering your requirements (which may or may not be
appropriate to your situation), and the use of short-cycle iteration to gauge
how you are progressing, but they are largely silent on the issues I raised
above.

------
lootsauce
Well, it is not a great rant, but ultimately if you just read the conclusion
he does have some good points: there really is no substitute for competence,
the process can and does hinder good design, and it is used as a false crutch
against incompetence. And I do think that test-first, and in general a large
overburden of existing tests, does subconsciously limit refactoring and better
design.

That's not to say all testing is bad. I write some unit tests and some
functional/integration tests, but by no means do I strive for some arbitrary %
of coverage as if that means anything. You can have 100% coverage with
completely brainless, useless tests that are testing lots of simple, low-risk
code. Any sense of security from that is beside the point if your team is
incompetent, the overall design calcifies and becomes a mess, and nobody fully
understands how it all works.

Targeted testing, I guess, is how I think of it - very targeted. I work on a
small project in a small team; in a much larger project with lots of devs I
would maybe have to rethink that, but then again, with properly sized teams
(two pizzas) that situation doesn't even come into existence.

------
klink008
Ok so I read the first sentence and felt it set the tone of the paper.
Continued reading and it pretty much played out exactly as I thought it would.

>"Trying to improve software quality by increasing the amount of testing is
like try[ing] to lose weight by weighing yourself more often."

So this is really funny to me because I just read an article that in fact
states that weighing yourself more often does help people lose weight.
[http://www.sciencedaily.com/releases/2015/06/150617134622.ht...](http://www.sciencedaily.com/releases/2015/06/150617134622.htm)

Just by looking at the page and the style of writing, you can tell this is
someone who has not progressed in their career skills since the early 90's.

------
k__
In my first big project after university, an RPC API, I used test-driven
development and it felt really good. I had about 300 tests in the end, and if
I changed something, a few would blow up and I could fix it.

But it slowed development down massively.

At the start, because I had to set up the testing just as I had to set up the
real software.

Then I had to mess around with the testing framework just as I had with the
project's frameworks.

Also I had to write features AND write their tests.

And later in the project, when new features broke old stuff, not only the
features had to be fixed but also the tests.

I don't know, but I had the feeling that the time I spent on the test code
was the same I would have spent fixing possible bugs later.

------
insanebits
In my opinion it's maybe too harsh to dismiss TDD. It's certainly good in
some cases. But if you start using it everywhere, everything will look like a
nail when all you have is a hammer. Clearly not everything is easily unit
testable.

What I like about unit testing is that it forces you to separate irrelevant
code and prevents you from writing spaghetti code.

In my personal opinion you should test core functions, as someone already
said, which validates 90% of the code with 10% of the time, and leave out the
parts which can fail gracefully in case of a bug. It's not practical (or even
possible) to test 100% of the use cases of a big codebase. But you can test
the critical parts.

------
ejk314
Starting out with a demonstrably false quote...

[http://www.hindawi.com/journals/jobe/2015/763680/](http://www.hindawi.com/journals/jobe/2015/763680/)

~~~
return0
Ha, that is hilarious. I am actually trying to gain some weight and I find I
weigh myself too often. I think I could come up with a study that shows the
opposite effect. After all, it doesn't have to be true, just significant.
------
lmorris84
> I am against writing unit tests in general

> How often have you seen a program crash? If it was developed by a large
> software company, chances are it was written using TDD. Clearly, TDD is not
> a magic bullet. So, TDD does not "prove your code works".

> Developing software is like a painting commission

Several reasons I gave up a third of the way in. I rarely write tests first,
but I can appreciate that it works for plenty of people - my brain just
doesn't work that way.

It's hard to take anything in this article seriously because of the nerdrage
and the dismissal of anything he disagrees with as "stupid".

------
veidr
This is a bad article, based on a fundamental misunderstanding of TDD, as
virtually all the comments on the blog post itself and here on HN attest.

People don't spend weeks writing tests for all the functionality and then
write the code. That's not what even the most die-hard TDD advocates do.

I think many people here (like me) clicked through to read a well-reasoned
article about how TDD enthusiasm may have gone too far, but sadly, this isn't
that.

HN readers would do well to just move on, and the author would probably be
well served by the advice of some of the commenters on his blog to take this
post offline.

------
varelse
The tone of the article is way over the top, but I 100% agree with this quote:
"[E]xperience has shown repeatedly that good designs arise only from
evolutionary, exploratory interaction between one (or at most a small handful
of) exceptionally able designer(s) and an active user population--and that the
first try at a big new idea is always wrong."

I agree so much that the best systems I've built have always been the 2nd
revision of a build-one-to-throw-away prototype.

That said, I like to surround my code with lots of unit tests to eliminate
absolutely stupid bugs.

------
crocal
There is a way out that he has not found (yet): design with complete
testability as a design goal. TDD with no supporting design can indeed lead to
disaster. Hope he can paddle past the rant pond.

------
josteink
> "Trying to improve software quality by increasing the amount of testing is
> like try[ing] to lose weight by weighing yourself more often."

I realize this is probably here to provoke a reaction, but it's still retarded
and it's obviously wrong for the obvious reasons.

> You are writing code to test something that doesn't even exist yet. I am not
> rightly able to apprehend the kind of confusion of ideas that could provoke
> such a method.

I agree TDD can be taken too far and be enforced too strictly (although I've
yet to encounter that in practice).

That said: It's not stupid to write tests before the code which you are
supposed to test. The order of these things is there for a reason: If you
don't write your tests before you have working code, how do you know your
tests will detect a failure mode, and thus can be used to prove that your code
is now working?

Just the other day I thought I had fixed a bug, and _then_ proceeded to write
a unit-test for it. The unit-test went green and I was happy.

But then I decided to comment out my fix and re-run the test. I assumed this
step probably wasn't needed - I was confident, after all, that I had "fixed
the bug".

But that way I could at least say I had followed the TDD mantra, which claims
to be there for a reason: 1. write a test to reproduce the bug, 2. write the
fix, 3. rerun the test, and if green, 4. commit.

Upon commenting out my fix and "needlessly" rerunning the tests, lo and
behold: The test was still green.

My test had failed to detect the error condition, which meant that my patch
had probably not fixed the reported bug. Quality had not improved.

I rewrote the tests and managed to make them go red, that is, detecting the
failure mode. Uncommenting my fix and rebuilding, my test was still red. My
fix was indeed invalid!

Another round of investigation showed that I had misinterpreted the error
condition and produced a "fix" which didn't solve the real problem.

And the "stupid" principles of TDD helped me detect a invalid fix, find the
real issue and verify a real fix for it. TDD helped me increase quality.
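
If it helps, the whole workflow is no more magic than this (a made-up Python
sketch, not my actual code):

    class Order:
        def __init__(self, total):
            self.total = total
            self._discounted = False

        def apply_discount(self, percent):
            if self._discounted:          # the fix: ignore repeated applications
                return
            self.total = self.total * (100 - percent) / 100
            self._discounted = True

    # 1. First write only the test below, run it, and watch it fail (red).
    def test_discount_is_not_applied_twice():
        order = Order(total=100)
        order.apply_discount(percent=10)
        order.apply_discount(percent=10)   # the reported bug: applied twice
        assert order.total == 90

    # 2. Then write the fix (the `if self._discounted` guard above) and re-run: green.
    # 3. Comment the guard out again: the test must go back to red,
    #    otherwise the test isn't really detecting the failure mode.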

Stupid indeed, eh?

------
acaloiar
> "Trying to improve software quality by increasing the amount of testing is
> like try[ing] to lose weight by weighing yourself more often."

Giving the benefit of the doubt that this quote is in fact an accurate
metaphor - to begin with this quote is to begin with the assumption that TDD's
single motivation is improving software quality. TDD is a fantastic practice
for ensuring conformity to a specification, which one might describe as
software's correctness - a more objective metric than "quality".

------
brlewis
What the author describes, writing the tests for an entire program before
writing any of the program, may actually be stupid. But it is not TDD.
According to Wikipedia it's a feature at a time, not a program at a time. I
usually see it done one class or function at a time.

If the author had bad results writing tests for an entire program before
writing any of the program, I'm not terribly surprised. But since doing so
isn't TDD, it doesn't add even a single data point to the TDD discussion.

------
mrweasel
>So this is the first reason TDD fails: You're trying to make a design before
you learn anything about it.

Well yes, I use my unit tests to learn what works and what doesn't. You're
allowed to rewrite or scrap tests if they are no longer relevant. In the end
I feel that having unit tests results in a better final design.

Yes, sometimes unit tests are a little contrived. But they can also help you
design cleaner interfaces and increase your code reuse.

------
edpichler
"I am against writing unit tests in general..."

After this sentence I abandoned the article.

What kind of serious software doesn't need automated tests? Maybe the kind no
one uses.

------
kul_
I tend to agree with this. First, no, I am not against writing unit tests;
they are a great way to trap regressions. But test-driven DESIGN? Really?

Do you drive your car (software) by banging against guard rails (unit tests)
to reach a destination? They are there so that you don't accidentally drive
off the road, not to guide your way.

------
fideloper
I'd love to hear both sides of this story from experienced people (and not the
ones who speak loudly one way or another).

We'll probably find experience on both sides of "good" and "bad". I'm curious
about the "bad" and how that comes about!

------
pc86
> _Week 3-4: Write tests._

> _Week 5-10: Write code._

I'm not necessarily a TDD advocate but I've literally _never_ heard anyone say
you should spend two solid weeks writing tests and then six solid weeks
writing code against those tests.

I mean, that's just idiotic.

------
UK-AL
I don't recognise most of this criticism. TDD is designing up front, but
you're designing only a little bit up front. You implement the design; if it
doesn't fit, you scrap it and try again. That doesn't matter since it's only a
little bit.

------
brudgers
_Painting is a science and should be pursued as an inquiry into the laws of
nature. Why, then, may not a landscape be considered as a branch of natural
philosophy, of which pictures are but experiments?_ \-- John Constable.

------
bartuz
TDD is about design, not writing tests.

[http://www.drdobbs.com/tdd-is-about-design-not-
testing/22921...](http://www.drdobbs.com/tdd-is-about-design-not-
testing/229218691)

------
CBABIES
Good observations! At that company, sure, TDD is ridiculous.

But other companies that require TDD don't do it that way, so the article took
a narrow, biased view without considering pros and cons and different
scenarios.

------
anthonybsd
Let's see the author's bio. Only a few years of programming experience -
check. Never had a job programming outside of academic research - check.
Everything checks out, folks!

------
k8tte
just a sorry rant about how OP tried TDD and failed, and then he points to
some project from 2004 that to me seems to be missing code ownership (people
afraid of changing code they don't fully understand), but to OP seems to be
doing TDD wrong.

it's a waste of time to read, as it doesn't bring anything new to the table. i
wish the OP would at least attempt to reflect on what TDD is, what it's not,
and why oh why so many devs seem to like it.

------
sigmonsays
I hope I never work with the author..

------
rafadc
> I am against writing unit tests in general, since experience shows that they
> actually prevent high-quality code from emerging from development--a better
> strategy is to focus on competence and good design.

False dilemma. Are competence and good design exclusive to people that don't
do TDD?

> The basic idea--amazingly, one of the most popular methods of software
> engineering (and growing in popularity!)--is that, after you figure out what
> you want to do and how you want to do it, you write up a set of small test
> programs--each testing some tiny piece of the program that will exist. Then,
> you write the program.

No. You are wrong. It is not that.

> Let me emphasize: you write the test cases for your program, and then you
> write your program. You are writing code to test something that doesn't even
> exist yet. I am not rightly able to apprehend the kind of confusion of ideas
> that could provoke such a method.

You are writing specifications for a program that doesn't exist yet!? Nuts.

> The most important argument is a practical one: Test-First doesn't work.

It doesn't work because it doesn't work. Sad argument.

> I was once one of these newly educated kids. So, for a few years, I worked
> exclusively with the Test-First strategy. When it was over, the results were
> undeniable. The code--all of it--was horrible. Class projects, research
> code, contract programs, indie games, everything I'd written during that
> time--it was all slow, hard to read, and so buggy I probably should have
> cried. It passed the tests, but not much else.

So it didn't work for you so it is everybody else's problem but yours.

> So this is the first reason TDD fails: You're trying to make a design before
> you learn anything about it.

No. You are not. Indeed, depending on how you decide to do your tests, you
should be able to change your design and the underlying implementation and no
test should break. Please, take a look at the outside-in approach.

> Week 3-4: Write tests.

> Week 5-10: Write code.

Oh my god. This is terrible. And no, this is not TDD either. This is inverse
waterfall :P

> Even if you somehow succeed, TDD prevents incremental drafts by functionally
> requiring all tests for a module to pass before you get any real results.

Probably this is a misconception too. You don't write all the tests before.
One test, small implementation pass, another test, small implementation
pass... You get the results incrementally.

> There is literally no substitute for competence. If your coders don't have
> it, TDD won't fix it.

I fully agree on this. And probably this is a really common misconception. But
false dilemma again. I don't think TDD came into this world to turn bad
programmers into good programmers. It is just another tool in the toolset. You
can have terrible programmers that do TDD and excellent programmers that
don't.

My recommendation: [https://vimeo.com/68375232](https://vimeo.com/68375232)

------
thom
Up next, on 2003's hottest takes...

------
jasode
_> I am against writing unit tests in general, since experience shows that
they actually prevent high-quality code from emerging from development_

The essay doesn't have the high quality of content I'd expect from a PhD
candidate in computer science. I'd expect fewer outbursts of passion and more
discussion of pros/cons/tradeoffs.

First, without getting into "TDD" itself, let's just isolate whether "tests"
are valuable. I think the examples of SQLite's rock-solid reliability
(extensive Tcl regression tests[1]) and NASA's disciplined software
verification[2] prove that tests help discover bugs and increase code quality.

Tests are valuable -- but they also have a "cost" to write. Let's cover the
cost issue at the end.

When to write the tests. If the sequence is: write the code first and then the
tests, you might call them regression tests. If you write the tests first and
then the code (e.g. iterations to turn red FAILED into green PASSED), you can
call it "TDD". And those TDD tests can still function later as regression
tests.

To me, TDD acts as a "design" step or "outline" - the 10,000-foot view. Before
any plumbing code is written, what do you want the REST API or group of
functions to "look like"? Is there unity and coherence to the collection of
functions modeling the abstraction? The subsequent fleshing out of TDD with
"expects()" and "asserts()" is just mechanical work to glue the edit+compile
cycles to a verification target, but it's not the most interesting aspect
philosophically.
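
In other words, the interesting TDD artifact at this stage is just a sketch of
how the calls should read (hypothetical Python; none of these names exist
yet), and the asserts are almost incidental:

    # Outlining what the API should "look like" before any plumbing is written:
    def test_account_api_shape():
        bank = Bank()                                 # doesn't exist yet
        account = bank.open_account(owner="alice")
        account.deposit(100)
        account.withdraw(30)
        assert account.balance == 70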

However, even though tests have a benefit, there is a cost. The cost-benefit
works in some cases but not others:

In my experience, I'm completely sold on TDD (or regression tests) for
foundational library type of code. If you're writing a core string library
that 100 developers at the company (or open source community) will link into
their projects, I prefer seeing extensive regression tests covering all edge
cases that prove that it actually works the way the developer intended. It's
not strange at all if the code for regression tests outnumbers the actual code
10-to-1.

On the other hand, TDD that is mostly UI verification is extremely brittle. If
you have TDD that simulates mouse clicks and has "expects()" on reading
webpage UI elements to check if things like the sales tax calc are correct,
you could easily get overwhelmed by all the extra work that synchronizing the
actual code and the TDD scenarios generates. (E.g. a UX designer moves an icon
2 pixels or adds a row to a table and ends up breaking the entire TDD
validation suite for developers.) I could see where TDD at that level would be
counterproductive.

[1][https://www.sqlite.org/testing.html](https://www.sqlite.org/testing.html)

[2][http://www.fastcompany.com/28121/they-write-right-
stuff](http://www.fastcompany.com/28121/they-write-right-stuff)

------
moron4hire
The "problem" with TDD is that it can only tell you that your expectations
have been met. It can't tell you if you are doing the right thing. Only that
the things you guessed at are working in the way you decided they should work.

If, on the other hand, your problem is not very well defined, then your tests
have a good chance of eventually becoming a liability. If you ever discover
that your domain model, interfaces, or even selection of algorithms are
insufficient (or just plain _wrong_ ), it's more likely that your pre-existing
test code will be unusable, rather than the sort of guide for refactoring that
they're touted to be. If you had guessed at the interface or algorithms
correctly, but merely implemented them incorrectly, yes, the tests will guide
you back to correctness, but I think that tends to be a big "if". Given that
you don't understand the problem, the likelihood that you've made mistakes in
the design is high.

Of course, this isn't the fault of TDD; that's exactly what it's meant to be.
The problem is people thinking that it is equivalent to verification.

TDD is ultimately a design tool, one that is useful in cases where we have a
very good idea of constraints and requirements of a problem, on where the
problem is very, very well defined.

And that's why I say it's a code smell. Problems that are well-defined are
often that way because several people have created several implementations of
them already. If I'm TDDing something, it usually means I'm rewriting code
that already exists somewhere.

Now, I might have good reason to do that. Perhaps I have constraints that
nobody else has ever considered. I generally hate the phrase "don't reinvent
the wheel". I can think of at least 3 times off the top of my head that the
wheel was successfully and usefully reinvented in the 20th century alone. But
it's very important that you understand that is what you're doing. If you are
aware you're reimplementing a known solution, you can now choose to study your
forebears and get an even better understanding of the problem.

Personally, I think saved REPL sessions are better than TDD for problems that
are not very well understood or well defined. If I were to try to build an
MPEG encoder in JavaScript (for whatever reason), I'd certainly use TDD,
because MPEG encoders have specifically well-known inputs and outputs. But if
I am trying to invent a completely new UI paradigm for virtual reality, then
TDD is just not an applicable tool.

Also, it's a lot easier to tell myself to just discard a saved REPL session
that ceases to be valuable after a major rethink in how the problem works than
it is to discard tests. Plus, I find them to be a little more informative in
terms of demonstrating behavior to other developers than test code.

------
Walkman

        If you want to change what a function does, you have to change all the tests you wrote for it.
    

This guy doesn't know how to write good tests at all. The whole point of
testing is that you can change those functions without any fear of breaking
anything, because the tests will prove it works the same way as before.

