
TDD did not live up to expectations - kiyanwang
https://blogs.msdn.microsoft.com/ericgu/2017/06/22/notdd/
======
atonse
In the earlier days of the ruby community, I feel TDD was seen as gospel. And
if you dared say that TDD wasn't the way (which I always felt), you'd feel
like you were ostracized (update: just rewording to say, you'd worry that it
would hurt you in a job search, not that people were mean to you). So I never
spoke up. I feel like I was in the TDD-bad closet.

I absolutely think _tests_ are useful, but have never found any advantages to
test-DRIVEN-development (test-first).

But part of that is probably my style of problem solving. I consider it
similar to sketching and doodling with code until a solution that "feels
right" emerges. TDD severely slows that down, in my experience, with little
benefit.

What I've found works really well is independently writing tests afterwards to
really test your assumptions.

~~~
moron4hire
Like the author in the original article, I used to be all about TDD. Now, I
like to tell people that "I'm apostate in the ways of TDD". I specifically use
the language of religion because I think strict adherence to TDD is itself a
religion.

I somewhat disagree with the statement he makes, "we know that most developers
do not have great design/refactoring skills". I've certainly worked at places
that all but ordered me to never refactor code, and I suspect my experience is
very far from unique. They thought "refactoring" was a made-up word that
programmers used to cover up dicking around and wasting time. From a non-
programmer's perspective, all they see is the programmer spending several
hours with the end result being that nothing has visibly changed. Mind you, no
visible change is the ideal outcome of refactoring, so you have no easy way to
explain why that perception is wrong.

He says that TDD only works if the programmer is good, but I think it's more
the case that test-first only works in cases where we have very well-defined
problems. For these TDD trainers that he's talking about, they've gone through
the same problem set over and over again. For an experienced programmer
encountering the same sorts of problems, you know what the pitfalls are and can
look ahead; even without violating YAGNI, you can still do yourself some
favors.

To put it to a specific metric, if you have a specification from which you're
working, you basically have an answer sheet for your code. You can then write
code that automatically verifies that your code matches that answer sheet. The
more like "having a spec" your problem is, the more likely TDD will work very
well for you.

But I don't do that kind of work very frequently. Most of my work is extremely
experimental, fluid, and in a lot of ways, its behavior is arbitrary. Does a
padding of 5px or 10px look better? Should the surface of a polygon be more or
less smooth? Should running over the ground be 3m/s or 5m/s? Should the user
have two hand tools that are identical, or should they be different tools? A
lot of it has to be visually verified, because it would otherwise require a
computer vision algorithm of some kind to know that the code I wrote to go
from text to screen pixels worked correctly.

It's much more _design_ than _development_ , just that design is expressed in
code rather than in Photoshop PSDs.

As a result, I focus much more on REPLs, saved REPL sessions, demo code, and
making the iteration time between code change and test run as narrow as
possible. What I think TDD provides (poorly) for people in this situation is
this REPL-like behavior, in languages that don't particularly provide a good
REPL (let us consider RoR a separate "language" from Ruby, in this case, as
the expectations tend to be completely separate).

So if I'm writing an implementation of a rope data structure as part of a
syntax highlighting text editor that renders into a WebGL texture, yeah, I
have tests for that. Ropes and text editors are well-understood data
structures, there's lots of information available about them, and there are
right and wrong answers to what operations each should have and how they
should behave. But that's not where the work ends. I still need to figure out
how to let people interact with that text editor, be it with touch gestures or
motion controllers or a keyboard or a gamepad or what have you, and that has
no easy-mode.
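
To make "well-understood, with right and wrong answers" concrete, here is a toy
sketch (hypothetical Python, not the editor code being described) of why
test-first falls out naturally when the problem has an answer sheet: a rope
must simply agree with the flat string it represents.

```python
# A minimal rope: a binary tree whose leaves hold string fragments.
# Concatenation and indexing have unambiguous right answers, which is
# exactly what makes writing the tests first feel natural here.
class Rope:
    def __init__(self, text="", left=None, right=None):
        self.left, self.right = left, right
        if left is not None:          # internal node: weight = size of left subtree
            self.weight = len(left)
            self.text = None
        else:                         # leaf node
            self.weight = len(text)
            self.text = text

    def __len__(self):
        n = self.weight
        if self.right is not None:
            n += len(self.right)
        return n

    def __str__(self):
        if self.text is not None:
            return self.text
        return str(self.left) + str(self.right)

    def index(self, i):
        if self.text is not None:
            return self.text[i]
        if i < self.weight:
            return self.left.index(i)
        return self.right.index(i - self.weight)

    @staticmethod
    def concat(a, b):
        return Rope(left=a, right=b)

# The "answer sheet": every operation must match the flat string.
r = Rope.concat(Rope("hello, "), Rope("world"))
assert str(r) == "hello, world"
assert len(r) == len("hello, world")
assert all(r.index(i) == "hello, world"[i] for i in range(len(r)))
```

The interaction-design half of the work (gestures, controllers, keyboards) has
no comparable oracle, which is the point being made above.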

~~~
Joeri
_They thought "refactoring" was a made-up word that programmers used to cover
up dicking around and wasting time. From a non-programmer's perspective, all
they see is the programmer spending several hours with the end result being
that nothing has visibly changed._

The thing is, many refactoring efforts make code worse, not better.
Refuctoring is something I've observed many times over, by people who consider
themselves quite skilled at the art. The end result of their efforts is that
the code is fancier or trickier, not more robust or easier to understand.

So, from a manager's perspective, how are you to discern between a programmer
whose refactoring efforts save time down the road, and one whose efforts lose
time down the road? Of course, normally you have technical leadership that can
make this distinction and makes sure those people get a cold hard reality
check, but many organizations discourage technical people from gaining enough
power that they can stop other technical people in their tracks before they
mess things up.

~~~
charlieflowers
Did you intend to say "Refuctoring"? It could be a typo or it could be clever
wordplay. And it leads to two drastically different interpretations of what
you're saying :)

~~~
nickpsecurity
I think I'm going to start using that one. Especially on ports of legacy
software from one, crappy language to another crappy language. I'd especially
guess that the COBOL to OOP COBOL translations involve a lot of refuctoring. I
just gotta remember to give Joeri credit for coming up with it.

~~~
mdpopescu
Greg Young also uses that word :)

------
agentultra
Maybe the title should be: _TDD did not live up to my expectations_?

I too, like the author, have been practicing TDD for > 10 years. Test,
implement, refactor, test... that's the cycle. When that workflow is followed,
I've never seen it do anything to a code base other than improve it. If you
fail on the refactor step, as the author mentions, you're not getting the full
benefit of TDD and may, in fact, be shooting yourself in the foot.

I've read studies that have demonstrated that whether you test first or last
doesn't really have a huge impact on productivity.

However it does seem to have an impact on design. Testing first forces you to
think about your desired outcomes and design your implementation towards them.
If you think clearly about your problem, invariants, and APIs then you will
guide yourself towards a decent system.

The only failing I've seen with TDD is that all too often we use it as a
specification... and a test suite is woefully incomplete as a specification
language. A sound type system, static analysis, or at the very least,
property-based testing fill gaps here.
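
As a sketch of how property-based testing fills that specification gap
(hand-rolled here with the stdlib; libraries like Hypothesis automate the
generation and shrinking), a single property pins down a whole family of cases
that example-based tests only sample:

```python
import random

# A property states part of the specification itself: sorting must be
# length-preserving, ordered, a permutation of its input, and idempotent.
# Example-based TDD tests can only sample this space one case at a time.
def check_sort_properties(sort, trials=200):
    rng = random.Random(42)
    for _ in range(trials):
        xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 20))]
        ys = sort(xs)
        assert len(ys) == len(xs)        # length preserved
        assert all(a <= b for a, b in zip(ys, ys[1:]))  # ordered
        assert sorted(xs) == ys          # permutation of the input
        assert sort(ys) == ys            # idempotent
    return True

assert check_sort_properties(sorted)
```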

But for me, TDD, is just the state of the art. I've yet to see someone suggest
a better process or practice that alleviates their concerns with TDD.

~~~
eropple
_> Testing first forces you to think about your desired outcomes and design
your implementation towards them. If you think clearly about your problem,
invariants, and APIs then you will guide yourself towards a decent system._

The way I put it: "if I don't know the domain and range of my function, I
shouldn't be writing code yet, I should be investigating the problem."

TDD is for unit tests, and unit tests best test functional, stateless code--
the functional, central logic of your application is what you should be
specifying and writing unit tests for, not all the imperative stuff wrapping
it for IO and failure catching and the like (Gary Bernhardt[1] calls it scar
tissue).

[1] -
[https://www.destroyallsoftware.com/talks/boundaries](https://www.destroyallsoftware.com/talks/boundaries)
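
A minimal sketch of that split, with hypothetical names: the pure "functional
core" is what gets test-driven, while the imperative IO wrapper stays thin and
outside the unit-test loop.

```python
# Functional core: pure and stateless, so it is trivial to test-drive.
def summarize(lines):
    """Pure logic: count non-empty lines and total words."""
    nonempty = [l for l in lines if l.strip()]
    return {"lines": len(nonempty),
            "words": sum(len(l.split()) for l in nonempty)}

# Imperative shell (the "scar tissue"): IO and failure handling, kept
# thin enough that it needs no unit tests of its own.
def summarize_file(path):
    try:
        with open(path) as f:
            return summarize(f.readlines())
    except OSError:
        return None

# Unit tests target only the core:
assert summarize(["a b", "", "c"]) == {"lines": 2, "words": 3}
```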

~~~
tieTYT
> TDD is for unit tests

Never heard anyone say this. Kent Beck has said the opposite on podcasts.

~~~
eropple
I think it probably depends on what your definition of "unit" is. As I stress
functional units that encapsulate logic and data model but don't permute
state, my units are probably significantly larger (and simpler) than those of
people writing more imperative/traditional-OO units (where something under TDD
may encapsulate many units due to mounting complexity/complication and so
require further decomposition).

------
dcherman
I've also never found TDD to be very beneficial except for the most trivial
utility libraries.

Most of the time, I have an idea of where I want to go, but not necessarily
exactly what my interface will look like. Writing tests beforehand seems to
never work out since, nearly always, there will be some requirement or change
that I decide to make that'd necessitate re-writing the test anyway, so why
write it to begin with?

The extent of my tests beforehand these days (if I write _any_ before code)
are generally in the form of (in jasmine.js terms)

it('should behave this particular way', function() { fail(); });

Basically serving as a glorified checklist of thoughts I had beforehand, but
that's no more beneficial to me than just whiteboarding it or a piece of
paper.

That said, _all_ of my projects eventually contain unit tests and if necessary
integration tests, I just never try to write them beforehand.

~~~
eropple
_> necessitate re-writing the test anyway, so why write it to begin with_

Because now you have an unambiguous record of the conscious decision to change
your interface, because the test demonstrates the correct change of that
interface.

I don't write tests first for _everything_ I do, but I try very hard to when
I'm writing code that other people may read for this exact reason. Otherwise
they have to divine my intent through less significant means.

~~~
moron4hire
> the test demonstrates the correct change of that interface.

No, they don't. Tests never prove correctness. They prove that assumptions are
met. Whether or not those assumptions are correct is unprovable.

The problem is, there are changes to interface that could make the entire
approach of the existing tests completely meaningless. If all you're doing is
changing the name of a method, that can be practically automated away in most
cases. But if you're changing fundamental behavior, like "we no longer add up
a bunch of numbers and show them to the user here, now we print out a bunch of
labels", then your leftover, failing tests are completely useless.

I've seen far too many cases in-the-wild where people either just deleted
those tests, or faked-it-till-they-made-it to force the tests to pass,
without any thought put into whether or not the tests prove anything
important.

~~~
eropple
If there are changes to an interface that make irrelevant those tests, then
you change those tests (and, if you're semantically versioning like you should
be, you bump the major version number). I don't really get what you're
objecting to. Implicit in doing TDD effectively _is_ adopting the (light)
burden of refactoring.

Yeah, people will delete tests because they refuse to refactor them. They're
bad programmers. Don't work with them?

------
willvarfar
No article about TDD, particularly one that shouts out to the respected Ron
Jeffries [http://ronjeffries.com/](http://ronjeffries.com/), is complete
without mentioning the TDD Sudoku Fiasco :)

Ravi has a nice summary: [http://ravimohan.blogspot.se/2007/04/learning-from-
sudoku-so...](http://ravimohan.blogspot.se/2007/04/learning-from-sudoku-
solvers.html)

Peter Norvig's old-fashioned approach is excellent counterbalance:
[http://norvig.com/sudoku.html](http://norvig.com/sudoku.html)

~~~
Cephlin
Uncle Bob always has some clever things to say on TDD too:
[http://blog.cleancoder.com/uncle-bob/2017/03/03/TDD-Harms-
Ar...](http://blog.cleancoder.com/uncle-bob/2017/03/03/TDD-Harms-
Architecture.html)

and: [http://blog.cleancoder.com/uncle-bob/2016/11/10/TDD-
Doesnt-w...](http://blog.cleancoder.com/uncle-bob/2016/11/10/TDD-Doesnt-
work.html)

~~~
hota_mazi
I don't find most of what Uncle Bob writes to be clever, especially when he
writes about TDD.

He's just too dogmatic to be credible to anyone working in the industry (and
make no mistake, Uncle Bob knows absolutely nothing of the software industry
in the 21st century).

His writings are only here to promote himself, his books and his consultancy.
That's it.

~~~
neuromantik8086
I weakly agree with this statement. While I think there is merit to TDD under
some circumstances (that others have described quite elegantly and succinctly
elsewhere in this thread), my main takeaway from the Clean Coder was that
if I ever got super burned out I should just start a consultancy.

[addition]: I also find it weird how the Clean Coder seemed to encourage
burnout by prescribing that you use your off-hours to hone your craft. While I
agree that software engineering should be treated more like a craft (in
particular, I'm thinking of the apprenticeship and craftsman ship culture that
is prevalent in Germany / possibly other former Hansa areas), I don't think
that it's reasonable to assume that people should sacrifice their personal
time for it. I understand that sometimes this might be necessary (the
proverbial night class to get up to speed with some new domain of knowledge),
but his implying that surgeons constantly practice surgery during their off-
hours (and really, short of illegally exhuming bodies, how would they do
this?) seemed a bit of a naive and unrealistic ideal.

------
austenallred
The problem with TDD is that we flawed humans are writing the tests in the
first place. If I suck at writing code there's no reason to believe I wouldn't
suck at writing tests to check that code.

I use it on occasion as a good sanity check to make sure I didn't break
anything too obvious, but this idea that TDD is a panacea where no bugs ever
survive didn't ever make sense to me in the first place.

~~~
_puk
"I use it on occasion as a good sanity check to make sure I didn't break
anything too obvious"

That sounds more like unit tests than TDD. With TDD you should already know
that you didn't break anything.

If you suck at writing code, then TDD should help you determine whether your
sucky code produces the correct output for a given input; it is not however
going to determine if you did that in the best possible way.

~~~
eropple
This is a good and cool post. TDD is less susceptible to "sucky code" for two
reasons:

1) It should be a direct port of your specification. If you don't have a
specification, you can't do TDD.

2) It should be written by someone who won't be writing the code. Or, at
minimum, checked over by someone who won't be writing the code.

Writing tests after you've written code strongly encourages you to write
implementation tests rather than specification tests. Both are important and
useful, but for separate things (if you have to change your specification
tests it's probably because you need to bump a minor or major version number
for your software).

~~~
lgunsch
Regarding your second point: not only is that very difficult, it isn't how TDD
was designed to work by its original creators at all. The tests are
specifically not written by other people.

TDD, as outlined by the one who created the technique, as well as other big
proponents such as Robert Martin, is done in tight 2 or 3 minute loops. As
Kent Beck says, if you can't write your test in 5 minutes, you're doing too
much.

~~~
eropple
How "its creators" designed it to work, and how I find it to work in practice,
are pretty different. I've found that having two people understand a spec to
the level where they can reason about it (and that the best artifact for
demonstrating that reasoning is a test suite) is the most straightforward way
to build reasonably correct code.

(I mean, heck. Theoretically, Agile's "creators" designed it to be useful and
helpful to developers. Cue sad trombone sound.)

------
nostrademons
TDD failed for economic reasons, not engineering ones.

If you look at who the early TDD proponents were, virtually all of them were
consultants who were called in to fix failing enterprise projects. When you're
in this situation, the requirements are known. You have a single client, so
you can largely do what the contract says you'll deliver and expect to get
paid, and the previous failing team has already unearthed many of the "hidden"
requirements that management didn't consider. So you've got a solid spec,
which you can translate into tests, which you can use to write loosely-
coupled, testable code.

This is not how most of the money is made in the software industry.

Software, as an industry, generally profits the most when it can identify an
existing need that is currently solved without computers, and then make it
10x+ more efficient by applying computers. In this situation, the software
doesn't need to be bug-free, it doesn't need to do everything, it just needs
to work better than a human can. The requirements are usually ambiguous:
you're sacrificing some portion of the capability of a human in exchange for
making the important part orders of magnitude cheaper, and it's crucial to
find out what the important part is and what you can sacrifice. And time-to-
market is critical: you might get a million-times speedup over a human doing
the job, but the next company that comes along will be lucky to get 50% on
you, so they face much more of an adoption battle.

Under these conditions, TDD just slows you down. You don't even know what the
requirements are, and a large portion of why you're building the product is to
find out what they are. Slow down the initial MVP by a factor of 2 and
somebody will beat you to it.

And so economically, the only companies to survive are those that have built a
steaming hunk of shit, and that's why consultants like the inventors of TDD
have a business model. They can make some money cleaning up the messes in
certain business sectors where reliability is important, but most companies
would rather keep their steaming piles of shit and hire developers to maintain
them.

Interestingly, if you read Carlota Perez, she posits that the adoption of any
new core technology is divided into two phases: the "installation" phase,
where the technology spreads rapidly throughout society and replaces existing
means of production, and the "deployment" phase, where the technology has
already been adopted by everyone and the focus is on making it maximally
useful for customers, with a war or financial crisis in-between. In the
installation phase, Worse is Better [1] rules, time-to-market is crucial,
financial capital dominates production capital, and successive waves of new
businesses are overcome by startups. In the deployment phase, regulations are
adopted, labor organizes, production capital reigns over financial capital,
safety standards win over time-to-market, and few new businesses can enter the
market. It's very likely that when software enters the deployment phase, we'll
see a lot more interest in "forgotten" practices like security, TDD, provably-
correct software, and basically anything that increases reliability & security
at the expense of time to market.

[1]
[https://dreamsongs.com/RiseOfWorseIsBetter.html](https://dreamsongs.com/RiseOfWorseIsBetter.html)

~~~
taeric
No. TDD failed for engineering reasons in addition to economic ones.

The main failing of TDD is the assumption that you know all of the tests that
you will need before you know your software. This is true in bridge building
as much as it is in anything else. Until you fully know the domain of what you
are building, you cannot possibly know all of the things you need to test for.

To that end, _if_ TDD were focused around building things to break them and
analyze them, it would be wise. However, in software it is often taught to
build a wall of failing tests, and you can then use that as a sort of burn
down chart for progress. Note that you don't learn about the software you are
building from this burndown. Instead, you only get a progress report.

And obviously, this is all my opinion. I'm highly interested in exploring the
ideas. And I'm highly likely to be wrong in my initial thoughts. :)

~~~
s73ver
I've never heard TDD being that you write all of your tests up front, before
you've written any code. I've always heard it as: When you're writing a
class/unit, you write a few tests for what that unit is going to do. You then
make those tests pass. You add some more tests, make those pass, and so on and
so on.

~~~
taeric
You are still writing the tests first, with no real learning from them other
than "they are now passing."

It is thought this is like engineering. You design an experiment and then pass
it. However, that is not how experiments work. You design an experiment, and
then you collect results from it. These may be enough to indicate that you are
doing something right/wrong. They are intended, though, to be directional
learnings. Not decisions in and of themselves.

So, I probably described something in more of a strawman fashion than I had
meant. I don't think that really changes my thoughts here, though.

~~~
nostrademons
You're often learning the optimal API as well. I've had several experiences
where I write a library, write some tests for it, and then realize that the
library's API is inconvenient in ways X, Y, and Z, and that I could have a
much simpler API by moving a few tokens around. When I write the tests first,
I tend to get a fairly usable API the first time around, because I'm designing
it in response to real client code.

(This all depends on having a clear enough idea of what the library needs to
do that I can write realistic test code...I've also had the experience where I
write the tests, write the library, and then find that the real system has a
"hidden" requirement that isn't captured by the tests and requires a very
different way of interacting with the library.)

~~~
mayoff
It's funny that you say "I'm designing it in response to real client code."

To me, a test is not _real_ client code. _Real_ client code is code that calls
the API in service of the user. E.g. the real client code for the Twitter API
is in your preferred Twitter client, not in the tests that run against the
Twitter API.

~~~
nostrademons
That's the caveat I mention in my second paragraph.

But yes, if the tests are well-designed and built with actual real-world
experience, I do treat them like real client code. Someone looking to use the
library should be able to read the unit tests and have a pretty good idea what
code they need to write and what pitfalls they'll encounter. And when the
library is redesigned or refactored, the tests are first-class citizens; they
aren't mindlessly updated to fit the new code, they're considered alongside
other client code as something that may or may not have to change but ideally
wouldn't.

------
ryanmarsh
My day job is teaching TDD.

Just like other agile rhetoric I've found the benefits are not what the
proponents advertise.

I teach it through pairing and here's what I find.

TDD provides two things.

1. Focus

Focus is something I find most programmers struggle with. When we're starting
some work and I ask, "ok what are we doing here" and then say "ok let's start
with a test" it is a focusing activity that brings clarity to the cluttered
mind of the developer neck deep in complexity. I find my pairing partners
write much less code, and much better code (even without good refactoring
skills) when they write a test first. Few people naturally poses this kind of
focus.

2. "Done"

This one caught me by surprise. My students often tell me they like TDD
because when they're done programming they are actually done. They don't need
to go and begin writing tests now that the code works. They like the feeling
of not having additional chores after the real task is complete.

~~~
lgunsch
I would agree with those two statements. They are even mentioned by Kent Beck,
who created the technique in the first place, in his book Test Driven
Development: By Example.

He spends a whole bunch of time hammering home the point that TDD is to help
you manage complex problems and focus. He even mentions that if you think you
can just power through the code and write it correctly in one swoop, then just
do it. Skip TDD - it's not helping you then.

There are also other techniques he talks about regarding focus. For example,
leaving a failing test as your last "bookmark" to where you left off for when
you come back the next day. That allows you to jump right into where you left
off, no ramp-up time at all.

~~~
wolco
TDD is designed for developers with advanced refactoring skills. By the time
you reach that level you stop needing the organization TDD provides.

------
chubot
_The tests get in the way. Because my design does not have low coupling, I end
up with tests that also do not have low coupling._

Not to be smug, but I feel like this is a rookie mistake I learned 10 years
ago immediately after starting TDD.

The slogan I use in my head is that _testing calcifies interfaces_. Once you
have a test against an interface, it's hard to change it. If you find yourself
changing tests and code AT THE SAME TIME, e.g. while refactoring, then your
tests become less useful, and are just slowing you down.

Instead, you want to test against stable interfaces -- ones you did NOT
create. That could be HTTP/WSGI/Rack for web services, or stdin/stdout/argv
for command line tools.

Unit test frameworks and in particular mocking frameworks can lead you into
this trap. I've never used a mocking library -- they are the worst.

There are pretty straightforward solutions to this problem. If I want to be
fancy then I will say I write "bespoke test frameworks", but all this means
is: write some simple Python or shell scripts to test your code from a coarse-
grained level. Your tests can often be in a different language than your code.
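
One sketch of that coarse-grained style: drive the program through
argv/stdin/stdout via subprocess, so the test pins down observable behavior
only. The inline one-liner below is a stand-in for a real CLI binary.

```python
import subprocess
import sys

# Test through the stable interface (argv/stdin/stdout), not against
# internal functions. The command here is a stand-in for something like
# ["./mytool", "--upper"]: it uppercases stdin.
def run_tool(stdin_text):
    proc = subprocess.run(
        [sys.executable, "-c",
         "import sys; sys.stdout.write(sys.stdin.read().upper())"],
        input=stdin_text, capture_output=True, text=True)
    assert proc.returncode == 0, proc.stderr
    return proc.stdout

# This test survives any internal rewrite -- even a change of
# implementation language -- because only behavior is pinned down.
assert run_tool("hello\n") == "HELLO\n"
```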

The last two posts on my blog are about this:

"How I Use Tests":
[http://www.oilshell.org/blog/2017/06/22.html](http://www.oilshell.org/blog/2017/06/22.html)

"How I Plan to Use Tests: Transforming OSH":
[http://www.oilshell.org/blog/2017/06/24.html](http://www.oilshell.org/blog/2017/06/24.html)
-- I want to change the LANGUAGE my code is written in, but preserve the
tests, and use them as a guide.

And definitely these kinds of tests work better for data manipulation rather
than heavily stateful code. But the point is that testing teaches you good
design, and good design is to separate your data manipulation from your
program state as much as possible. State is hard, and logic is easy (if you
have isolated it and tested it.)

Summary: I use TDD, it absolutely works. But I use more coarse-grained tests
against STABLE INTERFACES I didn't create.

~~~
daxfohl
This is terrific. You know, I've spent the last ten years perfecting coding
strategies around unit test frameworks and mocks. I'm really really good at
that. Really. I won't be humble; if somebody wants this, then I'm one of the
best in the business. Yet, I'm slowly coming to the realization that that's a
perfectly useless skill. Perhaps, even a negative skill.

It depends on the project, but for most projects I've worked on, the most
difficult parts are the integrations with external systems. Figuring out what
headers you need to pass in a call to Facebook, or what certificates you need
to access some third-party API, or what data to send over USB to activate some
device. Unit tests / mocks let you blithely ignore all those things. You mock
them out, hide behind an interface, write your "application code" that uses
these interfaces to do whatever your application does, make unit tests with
mocks that behave the way you'd like, and voilà, your app is done! With almost
100% code coverage even! And it's even fairly well-designed with fairly low
coupling. Except, it doesn't work at all, the hard work is still ahead of you,
and your interfaces are all probably leaky abstractions that you're going to
have to change substantially to make it work for real.

Anyway that's the hole I've dug for my current project. I'm pretty quickly
coming to the conclusion that I need to unlearn quite a bit, retrain my
instincts. I like your posts. Not even much for the content, but mainly for
the concept. Rather than cargo culting onto "unit-testing-in-framework-X-to-
achieve-100%-code-coverage-because-that's-how-you-make-sure-your-app-
works,-right?", it's more of an intentional approach. Step 1: determine what
we _really_ need to make sure of, step 2: determine the best way to achieve
that.

Ultimately, what I think is wrong with TDD as most people know it, in a word,
is that it's a shortcut. It's easy. You populate your mocks with fake data and
it's infinitely repeatable so yay. Populating an actual database with fake
data and making sure it's deleted/refreshed/whatever between tests is
difficult. But that doesn't mean it's not the right thing to do.

~~~
bluGill
I can argue both sides of this (in fact I have).

I would respond here that you need to consider the facade design pattern. You
should have a facade that you test your code against, with careful mocks and
fakes. Then the facade translates between your code (which now works) and the
API. Particularly if the third party is infamous for breaking changes all the
time, you want to ensure that when things break you only have to figure out
what they changed and fix the glue code.
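
A minimal sketch of that facade arrangement (all names hypothetical, not a
real vendor SDK):

```python
# Facade over a third-party payment API. Application code and tests
# depend only on this narrow surface; when the vendor breaks their
# interface, only the facade (glue code) changes.
class PaymentGateway:
    """What the application actually needs from the vendor."""
    def __init__(self, client):
        self._client = client  # real vendor SDK, or a fake in tests

    def charge(self, cents, token):
        # All vendor-specific shape-shifting lives here.
        resp = self._client.create_charge(amount=cents, source=token)
        return resp["status"] == "succeeded"

# In tests, a hand-written fake replaces the vendor client entirely:
class FakeClient:
    def create_charge(self, amount, source):
        return {"status": "succeeded" if amount > 0 else "failed"}

gw = PaymentGateway(FakeClient())
assert gw.charge(500, "tok_test") is True
assert gw.charge(0, "tok_test") is False
```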

Of course if all you have is glue code (this is likely: a lot of real world
problems are just getting data from one system to another) there is nothing to
test.

------
sevensor
TDD has worked well for me exactly once: porting a library from Python to C. I
had a very clear idea of what every function was supposed to do and I
could write tests first. Due to the nature of the library I was able to write
tests that generated lots of random inputs and check the properties of the
outputs. This was a great experience --- it was very easy to change the
internals without fear of breaking something. Ordinarily writing C is a bit of
white-knuckle experience, but this made it quite pleasant.

------
norswap
TDD never worked for me, I believe because of the nature of my work: research
code, very explorative in nature. I do not know in advance how the interface
will turn out, so it's hard to anticipate the interface in my tests (or it
leads to a lot of wasted effort).

Nowadays I mostly test with "redundant random generation testing": generate
random but coherent input, run logic, then... Either I can reverse the logic,
and I do that and verify that I get back the original input. Or I can't and
then I simply write a second implementation (as simple as possible, usually
extremely inefficient). This finds bugs that classical unit and integration
testing would never find.
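
Both flavors described above -- reversing the logic, and comparing against a
deliberately simple second implementation -- can be sketched like this
(run-length encoding stands in for the actual research logic):

```python
import random

# Flavor 1: round-trip -- apply the logic, reverse it, expect the input back.
# Flavor 2: oracle -- a trivially-correct second implementation must agree.
def rle_encode(s):
    out, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append((s[i], j - i))
        i = j
    return out

def rle_decode(pairs):                  # the reverse of the logic
    return "".join(ch * n for ch, n in pairs)

def naive_length(pairs):                # dead-simple, obviously-correct oracle
    return sum(n for _, n in pairs)

rng = random.Random(0)
for _ in range(500):
    s = "".join(rng.choice("ab") for _ in range(rng.randint(0, 30)))
    enc = rle_encode(s)
    assert rle_decode(enc) == s         # round-trip property
    assert naive_length(enc) == len(s)  # agrees with the naive oracle
```

Random coherent inputs plus either check will surface edge cases (empty input,
maximal runs) that a handful of hand-picked unit-test examples tend to miss.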

~~~
neuromantik8086
My former lab tried to use TDD. I've come to the impression that it's not the
right approach for science / prototyping and just bogs you down. I don't think
that science should eschew testing completely- I just think that something
like your "redundant random generation testing" is more in line with what
needs to go on. The trickier issue is changing your lab culture so that it
actually _understands_ the need to run tests of some kind consistently / with
discipline.

~~~
daxfohl
This is funny, because as a consultant, I think to myself "TDD really doesn't
work for me, because I've got so many 3rd-party dependencies that I have to
mock out in my tests (which often makes unit tests seem more like a test of
your mocks than of your actual application); TDD must really be best for
researchy things where they don't have to deal with those problems".

~~~
bluGill
Why are you mocking those 3rd party things? I find in most cases I can test
with the 3rd party thing in an integration style test. When the data I'm
testing is trivial those 3rd party things can give me an answer in less than a
millisecond and so my whole test is fast enough. I find that even when I need
to do things like database query that setting up a database with fake data is
enough that mocking the database isn't worth it.

As a bonus if there is a bug in the 3rd party code I'll find it before we go
to production. My boss doesn't want me to point fingers at some 3rd party
code when we are losing millions of dollars because of a software bug: he
doesn't want to lose those millions of dollars in the first place!
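The "set up a database with fake data" approach can be sketched with an in-memory SQLite database (the `orders` schema and query are invented for illustration): the test runs the real query against a real database engine instead of mocking it.

```python
import sqlite3

def orders_over(conn, threshold):
    """Code under test: find order ids whose total exceeds a threshold."""
    cur = conn.execute(
        "SELECT id FROM orders WHERE total > ? ORDER BY id", (threshold,))
    return [row[0] for row in cur.fetchall()]

# Integration-style setup: a real (in-memory) database with fake data,
# rather than a mock of the database layer.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (id, total) VALUES (?, ?)",
                 [(1, 50.0), (2, 500.0), (3, 19.99)])

assert orders_over(conn, 100.0) == [2]
```

Because the SQL actually executes, a typo in the query or a misunderstanding of the database's behaviour surfaces in the test run, not in production.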

~~~
daxfohl
See my other comment on this thread:
[https://news.ycombinator.com/item?id=14668402](https://news.ycombinator.com/item?id=14668402)

------
dkarl
_When you look at the TDD evangelists, all of them share something: they are
all very good – probably even great – at design and refactoring. They see
issues in existing code and they know how to transform the code so it doesn’t
have those issues, and specifically, they know how to separate concerns and
reduce coupling._

I think one of the selling points of TDD, and something I hoped for from TDD,
was that causation went the other way, and writing tests would result in code
being refactored to separate concerns and reduce coupling. Sadly, I've seen
that it is possible to write code that is highly testable but is still a
confused mess. What's more, TDD as promoted encourages people to confuse the
two, resulting in testability being used as a reliable indicator of good
design, which produces poor results because it's much easier to make code
testable than it is to make it well-designed.

I've also seen people mangle well-factored but untestable code in the process
of writing tests, which can be a tragedy when dealing with a legacy codebase
that was written with insufficient testing but is otherwise well-designed. A
legacy codebase should always be approached with respect until you learn to
inhabit the mental space of the people who wrote it (always an incomplete
process yet very important), but TDD encourages people to treat untested code
as crap and start randomly hacking it up as if it were impossible to make it
worse.

This unfortunate (and lazy) habit of treating testability as identical with
good design is not a mistake that good TDD practitioners would make, but I
think they did make a mistake in the understanding of their process. My guess
is that when those people invested effort into refactoring their code for
testability, they were improving the design at the same time, as a side effect
of the time and attention invested combined with their natural tendency to
recognize and value good design. They misunderstood that process and gave too
much credit to the pursuit of testability as naturally leading to better
design.

I do think the idea of TDD is not entirely bankrupt, because the value of
writing tests is more than just the value of having the tests afterwards, but
I think its value is overblown, and people who believe in the magical effect
of TDD end up having blind confidence in the quality of their code.

~~~
agentultra
> I've also seen people mangle well-factored but untestable code in the
> process of writing tests, which can be a tragedy when dealing with a legacy
> codebase that was written with insufficient testing but is otherwise well-
> designed.

Have you read Michael Feathers' _Working Effectively with Legacy Code_? [0]

His definition of _legacy code_ is any code that has no test coverage. It's
a black box. There are errors in it somewhere. It works for
some inputs. However you cannot quantify either of those things just by
"inhabiting the mind of the original developers." The only way to work
effectively with that code base in order to extend it, maintain it, or modify
it is to bring it under test.

This is far more difficult with legacy code than with greenfield TDD, for
the aforementioned reasons: there are unquantified errors
and underspecified behaviours. You can't possibly do it in one sweeping effort
and so the strategy is to accept that tests are useful and to add them with
each change, first, before making that change and using the test to prove the
change is correct.

Slowly, over time, your legacy code base surfaces little islands of well
tested code.

You have to be deliberate and careful. You have to think about what you're
doing.

This is a much different experience than writing greenfield code. TDD is
effortless and drives you towards the answer in this case.

[0] [https://www.amazon.com/Working-Effectively-Legacy-Michael-
Fe...](https://www.amazon.com/Working-Effectively-Legacy-Michael-
Feathers/dp/0131177052)

~~~
dkarl
Yeah, I've read it, I know the definition, and I understand the intention
behind adding tests to legacy code. Unfortunately, TDD (meaning TDD as I've
encountered it in print and in practice) encourages people to think that "no
tests" is tantamount to "no value to preserve" and therefore "no risk of harm
from refactoring." Maybe that's an exaggeration, but certainly some TDD
practitioners think that they can't possibly harm a codebase by adding tests.
Unfortunately, testing requires refactoring, refactoring is redesigning, and
if you don't understand the design you're modifying, your changes can make the
code _less_ understandable, not more. Tests added after the fact impose the
test-writer's understanding of how the system works, which results in chaos if
their understanding isn't compatible with the understanding embedded in the
existing design.

Also, in spite of that definition, there's a lot more to "legacy" than not
having tests. Not having access to the original designer and not having access
to the requirements or domain understanding that influenced development
are important handicaps of legacy code that are entirely separate from tests.
Certainly tests added during development can capture some of this knowledge,
but adding tests after the fact does not automatically recreate it.

These are both examples of the danger of trying to elevate one aspect of
software development to a primary and sufficient role. "Take care of X and
everything else will work out" has no known solution for software development,
and any methodology is harmful to the extent that it encourages people to
think that way.

------
gkop
There are at least several other advantages to TDD the article misses:

* Faster development feedback loop by minimizing manual interactions with the system

* The tests are an executable to-do list that guides development, helping you stay focused and reminding you what the next step is

* Provides a record of the experimentation taken to accomplish a goal, which is especially useful when multiple developers collaborate on work-in-progress

~~~
thinkingkong
Except in order to have a checklist you have to have your design and
implementation up front, which is out of reach for most developers.

If we're testing high-level results in multiple cases, that's cool. But a lot
of the time you're going to try different approaches that may modify or
involve changing your output.

~~~
hyperpallium
TDD is waterfall

~~~
darylfritz
Care to clarify this? I've seen plenty of Agile shops that practice TDD.

~~~
hyperpallium
I was summarizing the parent. Although, a few other comments here better put
my point, that TDD and unit testing in general, freeze your interfaces.
Because interfaces define the functionality of modules, this forces your
choice of modularity upfront.

This is fine if you already have a fair idea of how to architect your
problem. But if you don't, it makes it harder to refactor your interfaces (in
the same way that clients depending on your interfaces does). Of course, you
_can_ rewrite the tests, since you have control over them; it's just harder.
And you don't have the TDD raison d'etre reassurance of working tests for
_this_ refactoring, of interfaces. Higher level function tests, yes; unit
tests, no.

------
zorked
And another generation of programmers learns that there is No Silver
Bullet[1].

This will happen again and again.

[1]
[http://faculty.salisbury.edu/~xswang/Research/Papers/SERelat...](http://faculty.salisbury.edu/~xswang/Research/Papers/SERelated/no-
silver-bullet.pdf)

~~~
a_imho
Indeed, as long as there are people caring enough to seek out ways to improve
their product/craft, there will be an ample supply of silver bullet merchants
to satisfy the demand. Predictably with discussions along the same fault lines
too (works for me, hate being forced to use it, you are doing it wrong etc.)

------
namuol
The main benefit of TDD:

It strongly encourages you to think of your code as several input/output
problems.

When you apply this model of thinking at scale it tends to lead to a much
simpler (read: less-complex) codebase.

~~~
sowbug
This is the top-level comment that resonated most with the point I wanted to
make.

TDD is good for training you to recognize untestable code. "Oh, yeah, that's a
Law of Demeter thing... maybe we should just ask for the narrow object rather
than the giant one that knows how to get the narrow one," etc.

Once you have that skill ingrained, it's reasonable to stray from strict TDD
and simply remember to defend your code with adequate tests.
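The "ask for the narrow object" idea can be sketched like this (the class names are invented for illustration):

```python
# Demeter-unfriendly: the code reaches through a whole object graph, so a
# test must construct (or mock) Company -> Finance -> Budget just to check
# some string formatting.
class ReportBefore:
    def __init__(self, company):
        self.company = company

    def headline(self):
        return f"Budget: {self.company.finance().budget().amount()}"

# Demeter-friendly: ask for the narrow value directly; the check below
# needs no object graph and no mocks.
class ReportAfter:
    def __init__(self, budget_amount):
        self.budget_amount = budget_amount

    def headline(self):
        return f"Budget: {self.budget_amount}"

assert ReportAfter(1200).headline() == "Budget: 1200"
```

The refactor is trivial, but recognizing the "giant object" smell in the first place is the skill TDD trains.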

------
dyarosla
tl;dr: TDD is not, on its own, effective. If code is highly coupled, tests
become highly coupled and a nightmare. The author advocates that learning to
properly refactor and create low-coupled code should be a first priority ahead
of following TDD blindly.

IMO, nothing really profound here.

~~~
tonyedgecombe
It's interesting where this leads you though, deep into dependency injection,
inversion of control containers and patterns like MVVM.

I'm not saying these things are bad but they do have a cost.

~~~
eropple
It doesn't necessarily lead you there at all. "Dependency injection" is a
fancy phrase for "pass stuff a function needs into the function", only it does
so by adding state to objects where that state probably shouldn't exist. You
don't have to faff around with IoC or MVVM to properly isolate dependencies;
indeed, it's the core idea behind something like Gary Bernhardt's "Functional
Core, Imperative Shell" (which, if you're not understanding this
_instinctively_ , makes me fear for your code):

[https://www.destroyallsoftware.com/talks/boundaries](https://www.destroyallsoftware.com/talks/boundaries)

[https://www.destroyallsoftware.com/screencasts/catalog/funct...](https://www.destroyallsoftware.com/screencasts/catalog/functional-
core-imperative-shell)

~~~
AstralStorm
Sometimes state is an actual desirable thing and strategy can be a dependency.

Heck, higher order functions (functions taking functions as argument) are
dependency injection personified.
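A minimal sketch of that point (names invented): the dependency is just a function argument, so production code injects the real strategy and a test injects a stub, with no container or framework involved.

```python
# The 'strategy' (how to look up a price) is a dependency passed in as a
# plain function -- dependency injection via a higher-order function.
def total_price(item_ids, fetch_price):
    return sum(fetch_price(item) for item in item_ids)

# Production wires in a real lookup; a test injects a trivial stub.
catalog = {"apple": 3, "pear": 4}
assert total_price(["apple", "pear"], catalog.__getitem__) == 7
assert total_price(["apple", "apple"], lambda _item: 10) == 20
```

No object holds the dependency as state; it exists only for the duration of the call.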

~~~
eropple
State is a desirable thing in the code that _wraps_ your business logic, for
sure! Otherwise you're not actually doing anything. That's distinct from state
_in_ your business logic, though.

------
exabrial
TDD takes discipline, planning, and creates stability; I think in the day and
age of "rewrite it using the latest framework" it just doesn't coincide with
the insatiable thirst of developers to use the latest bleeding edge XYZ.

------
sametmax
Another reason for the failure of TDD is the fact you need to be a very good
programmer to be productive with it. Indeed, it requires you to be able to
think your general API and architecture ahead.

Junior and 9-to-5 programmers suck at this. They are much better at tinkering
until it forms a whole, then shape that into something that look decent and
works well enough.

And we live in a world where they represent a good part of the work force.

You can't expect everyone to be a passionate dev with 10 years of experience,
skilled in code ergonomics and architecture design, and good at
expressing him/herself. That's delusional. And harmful.

~~~
UK-AL
Not really. The core tenets of TDD are: failing test -> get it working ->
refactor into something nice.

~~~
sametmax
But you need to know what the working API must look like. Creating an API is a
much harder task than you think. You are used to it so it seems natural to
you, but I assure you I keep giving training in IT month after month, and
most people are not very good at it.

HN is full of 10x devs and they live in an expert bubble.

------
flashdance
I develop software for radio towers. This was a very confusing headline and
article. I only figured out halfway through that I was thinking of the wrong
TDD.

Test driven development is one thing. Time division duplexing is very
different. I'll have you know that the latter did in fact live up to its
expectations!

Showerthought: I wonder if our TDD codebase is TDD?

------
wyldfire
Sorry for the aside but I find it humorous that the headline that should read
"TDD--, Refactoring++" instead shows "TDD—, Refactoring++".

This is emblematic of that frustrating AutoFormat behavior that replaces
double dashes with an emdash. Probably not a coincidence that this appears on
MSDN -- perhaps it was drafted on Outlook or Office or some other tool w/this
same AutoFormat.

This feature is responsible for countless miscommunications between colleagues
à la "I copied and pasted your command just as you had it in the email"...

~~~
austenallred
Someone should have written a test to check for that

------
thewoolleyman
Having done TDD mostly full-time for well over a decade, I have to agree, and
it hits home with some past experiences.

You can end up with fully tested, TDD'd code, that is not well-designed - i.e.
unnecessarily coupled, and not cohesive. Cohesion and coupling are the basis
of most everything in good software design - e.g. all the letters in SOLID
boil down to those two things.

The premise of TDD is that it's supposed to make that too painful to do to a
damaging extent. But, if you keep ignoring the pain, perhaps in the name of a
"spike" solution, or because you just don't have the experience or background
to know what good design is, you will end up with a tested mess of spaghetti.

And that's even harder to untangle and refactor than untested code, because
you have to figure out which tests are useful, useless, or missing. That just
slows you down as you work towards a better design.

In these situations, scrapping the whole module, including tests, and starting
over, is sometimes faster in the long run than trying to refactor
incrementally with the safety net of existing tests (another of the main
values of TDD).

-- Chad

(edit: typo)

------
fanpuns
I appreciate that many of you (including the article author) are coming at
this question with a lot of experience. I, however, knew very little about
coding a year ago and learned with TDD as part of how I build almost every
project. Although I think it's always the case that I might "be doing it
wrong", it's hard for me to imagine now writing code without first writing
tests. Part of this is, admittedly, that I'm still uncertain if my solutions
or code will even work and writing tests helps me to both organize what I want
to do and also verify that I haven't made silly syntax mistakes.

Was it harder to learn this way? Absolutely (at least I think so, but my
sample size is 1). I can't tell you the state of despondency I was sometimes
in learning test suites and trying to figure out how to test certain things
all while knowing that I could just write the stupid code and inspect the
output to see if it was right.

Also, I love to refactor. How do you refactor if you don't have tests to catch
you when you break something?

~~~
oelmekki
Not doing TDD does not mean not having tests, it just means writing them after
writing the feature (this has been the usual way for a long time before TDD,
although automated testing only became very popular a few years before
TDD).

I too became a big fan of TDD when I was introduced to it by rspec, then a
brand new testing tool. But those latest years, I grew tired of keeping fixing
my tests despite having no regression in my codebase - just because
implementation changed. I quit doing TDD, but mainly because I quit doing unit
testing.

Nowadays, I only do integration testing. I've made a chrome devtool extension
to generate capybara tests while I'm browsing, so that I can write my feature,
then generate a test for it while doing visual QA in literally 5 minutes (this
was something selenium IDE was doing in its time, I just made something a bit
more modern to replace it). And since this is integration testing, my test
won't break as often if I change the underlying implementation to do the same
thing. I would even argue it makes my refactoring more pleasant: if I
refactor correctly and the end result is not altered, I don't even have to
edit my tests (if a test breaks, it may actually be a regression).

That being said, what matters is what makes _you_ more productive. Don't take
our word for it, it may not apply to you. Just keep wondering how you could
make things better.

------
dccoolgai
Contract-driven development is the best model for building stable and reusable
systems at scale. The flaw in TDD is that it tries to make the tests _be_ the
contract instead of _supporting_ the contract.

------
mledu
If anything I think this article makes a great case for TDD. If your
developers aren't good at design and refactoring and that is showing up in
your tests, that is an indication that your design needs to be refactored to
be less coupled. TDD isn't a panacea, developers have to have some level of
sense and see the signs of a less than optimal design. Pain in test creation
is a great way of showing that as it simulates client code.

I also don't understand people thinking that you have to write the entire test
suite up front. You build your test along with your code. You start simply and
build up, this way if you don't have concrete specs your tests are helping you
with the design by thinking about consumption as well as implementation.

------
romanovcode
Oh, finally the TDD fad is dying. Never got into it and always thought it is a
complete waste of time.

I advocate to write tests only for critical algorithmic calculations and
nothing else.

Integration tests matter 100x more (at least in webdev).

~~~
hdi
> Never got into it and always thought it is a complete waste of time. I
> advocate to write tests only for critical algorithmic calculations and
> nothing else.

What great advice! While we're at it, let's advocate that surgeons not use
sterile instruments because it's a waste of time... well, maybe only for
heart transplants.

~~~
romanovcode
Yeah, I bet you are one of those who also test your repositories and DB
connections while mocking the DB response essentially testing database
operations without even having an actual database in the back-end.

------
briandear
I disagree with the assertion that TDD takes more time. TDD takes less time if
you factor in the reduction of errors TDD helps prevent.

This article should be renamed, “TDD doesn’t work if you don’t do it right.”

This article seems to argue for Big Design Up Front. Ok, if you do that, then
why not write the tests for those designs after you make the design – then the
code you write conforms to the design.

I don’t think anyone advocates that writing tests is the same as the design
process. Tests are the result of design not the other way around. The gray
area is not design up front – but how much design up front.

~~~
ericgunnerson
Original author here.

I get your point, but it's not really about not doing TDD right. It's about
TDD evangelists - and I used to be one of those - advocating that people
should use TDD despite those people not having the skills to do TDD "right".
And my experience is that not only is doing TDD without those skills not a fun
experience, the resulting code is worse than if you let those same developers
write the code the way they are used to, and maybe spend some teaching time on
refactoring because it's a more basic skill.

------
borplk
Unfortunately TDD has become a band-aid for lack of constructs that should be
a part of the language in the first place.

A language that allows you to express the spec could be so much more useful.

~~~
UK-AL
If you implement the feature in the language, surely that language is
expressing the spec.

~~~
borplk
That's not what I meant.

A language could give you first-class-citizen constructs for things like tests
and constraints that could be compiler-enforced.

Some of that can be a part of the type system such as the ability to express
"positive odd prime integer between 7 and 197".

Some of that could be in the form of other constructs such as function entry
and exit guards.

You can "implement" all of that on your own as part of the program but that is
not the same thing because semantic meaning and intention is lost.

However when it is a first-class language-level construct it has cascading
benefits and tooling can also make use of it.
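In a language without such constructs you can only approximate them at runtime. A rough Python sketch of entry/exit guards (the `guarded` decorator and all names are invented for illustration; a real first-class construct could check these statically and expose the intent to tooling):

```python
from functools import wraps

def guarded(pre, post):
    """Approximate entry/exit guards as runtime checks. This carries the
    semantic intent in user code only; a language-level construct could
    surface it to the compiler and tooling."""
    def deco(fn):
        @wraps(fn)
        def wrapper(n):
            assert pre(n), "entry guard violated"
            result = fn(n)
            assert post(result), "exit guard violated"
            return result
        return wrapper
    return deco

# "Odd integer between 7 and 197" expressed as an entry guard.
@guarded(pre=lambda n: isinstance(n, int) and 7 <= n <= 197 and n % 2 == 1,
         post=lambda r: r >= 0)
def distance_from_seven(n):
    return n - 7
```

As the comment says, this is not equivalent: the constraint lives in ordinary code, so its meaning is invisible to the type system and to tools.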

------
js8
I think people should in fact test data, not code. Looking at it from purely
functional point of view, functions should be either proved that they do what
they should, or they should be asserted and QuickCheck-ed. But what really
needs to be tested is that the input parameters (i.e. data) conform to some
"hidden" assumptions that we had when we wrote the functions. Because as we
modify the program, or even why we modify it, is that these assumptions have
changed.
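One way to read this in practice (function and field names are hypothetical): keep the function's logic simple enough to trust or property-check, and assert the hidden assumptions about the incoming data explicitly.

```python
def monthly_totals(rows):
    # The 'hidden' assumptions about the input data, made explicit. When
    # the program has to change, it is usually these that have changed.
    assert all(r["amount"] >= 0 for r in rows), "amounts must be non-negative"
    assert all(1 <= r["month"] <= 12 for r in rows), "month out of range"

    totals = {}
    for r in rows:
        totals[r["month"]] = totals.get(r["month"], 0) + r["amount"]
    return totals
```

When a caller starts passing refunds (negative amounts), the assertion fails at the boundary, pointing at the changed assumption rather than at some downstream symptom.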

------
pif
A.k.a.: no methodology will turn a coding monkey into a great developer. What a
surprise :-)

~~~
johansch
This (implied or not) seems to be the premise of many of these fads.

On the flipside: I think TDD is great for people who love writing code more
than they love getting results. They get to write twice as much code!

------
pwm
I think testing is the next best thing short of proving correctness.
It won't guarantee that your code is correct but ideally it greatly helps
reducing bugs. TDD, as far as I understand, seems to promise more than just
the benefits of testing. It promises an emergent design that produces a
solution to your problem. I think this works well for some problem domains,
like a lot of web/CRUD/LOB apps, and not so well for others, eg. the Sudoku
solver mentioned in this thread. On the other hand a lot of real world
problems can be successfully solved by solutions that are adequate but not
optimal, i.e. good enough solutions, and TDD seems to be a viable strategy for
these. I personally yearn to work on problems where TDD-based emergent design
is not enough and human ingenuity/intelligence/creativeness is needed.
Sometimes I bump into these, but at the same time I realise that most of my
day-to-day job involves problems that are solvable by TDD alone. That said,
while all my production code has extensive tests, probably less than 50% has a
test driven design and I'm content with that.

------
maruhan2
"The tests get in the way. Because my design does not have low coupling, I end
up with tests that also do not have low coupling. This means that if I change
the behavior of how class <x> works, I often have to fix tests for other
classes. Because I don’t have low coupling, I need to use mocks or other tests
doubles often. Tests are good to the extent that the tests use the code in
precisely the same way the real system uses the code. As soon as I introduce
mocks, I now have a test that only works as long as that mock faithfully
matches the behavior of the real system. If I have lots of mocks – and since I
don’t have low coupling, I need lots of mocks – then I’m going to have cases
where the behavior does not match. This will either show up as a broken test,
or a missed regression."

Simply comment them out temporarily?

"Design on the fly is a learned skill. If you don’t have the refactoring
skills to drive it, it is possible that the design you reach through TDD is
going to be worse than if you spent 15 minutes doing up-front design."

I don't quite understand how TDD means you skip up-front design.

------
alkonaut
His argument is basically that with tight coupling, TDD is too hard and time
consuming to pay off.

But part of the point of TDD is ensure that all code is testable, and testable
means loosely coupled.

So you can't start TDD'ing on a bad and tightly coupled legacy codebase. You
can do it on a greenfield project however. Greenfield is very much the "lab
environment" he talks about. You control everything.

With greenfield projects comes another reality though: you often have to
explore and sketch a lot. TDD does _not_ work well for writing a dozen sketch
solutions to something and throwing out eleven.

And that to me is the main drawback of TDD: it works poorly for very young
code bases and it works poorly for very old ones (that weren't loosely coupled
to begin with). It's a very narrow window where you can start using TDD in a
codebase and that's when the architecture is first set, but the codebase
hasn't yet grown too coupled. Such a narrow window means it's not very
popular, for good reason.

------
lucidguppy
I'm not really convinced by this article or by the comments.

You can do exploratory code and TDD at the same time - you just have to write
down what you expect the code to do first.

These criticisms of TDD are very weak because they don't spell out an
alternative - every critic's vision of proper testing is different - and will
respond "that's not what I meant".

------
thegigaraptor
I hope nobody uses this article as ammo against TDD. The benefits are not felt
immediately but when time comes for maintenance/updates, I'm working on my
second port with a company. The first app had fantastic testing and I was
confident in the work I delivered. This second app however was led by a
developer who "needed to get things done" and I now have to wrap the v1 app in
functional tests to validate that I'm delivering a solid port. If the company
had enforced better practices sooner, they would have saved the time I'm
spending on retesting the original app. This second iteration is test driven,
hopefully the next dev has a better experience.

Also testing helps alleviate QA's workload by ensuring developers have not
broken any tests and regressed functionality before we hand off to QA.

If you're hacking on an idea or learning, I can understand not testing, but if
someone is paying you to deliver code, deliver it with tests, period.

------
kevwil
This logic is flawed, and I'm not surprised it's coming from Microsoft. If the
expectation is that (repeatedly?) making blind guesses quickly and
(optionally) cleaning up the mess later is better than expressing an
understanding of the problem domain before writing 'real' code, then yes TDD
will not live up to those expectations.

~~~
seanmcdirmid
Note the site is
[http://blogs.msdn.microsoft.com](http://blogs.msdn.microsoft.com). Any MS
employee can start a blog here, and none of the content posted by an employee
goes through a vetting process.

That being said, the logic is not really flawed at all. There will always be
the No True Scotsman fallacy to deal with (what is real code? When does code
become real?). TDD promotes writing tests first, whereas you might not even
know what the spec is until you've gone through multiple iterations of the
code, and along the way you'll have to refactor and rewrite to reduce coupling
and reflect better understanding of the spec. Tests aren't bad, but writing
the tests at the beginning doesn't really make sense in that context.

------
peterburkimsher
TDD supports the paradigm of Software Engineering as an Engineering field.
Design, plan, build, test.

Chartered Engineers have qualifications for their skills to do this - whether
it's building bridges, designing circuits, or making cars.

Most programming is not Engineering. It's scripting. Hacking together a quick
solution to meet the user's immediate needs.

Huge businesses (including the company I work for) have some really weak
points in their production flow. They're planning factory operations using
some shoddy macros in Microsoft Excel thrown together by some businessperson
with no programming experience. Management won't change it, because "it
works".

Other fields of Engineering (civil, electronic, mechanical) have serious life-
threatening consequences if they fail. Software rarely has that risk. (Insert
comment about healthcare systems and WannaCry here).

For times when software carries serious risk, then TDD is still important! The
rest of the time, it's a burden.

------
cyberpanther
Sometimes lowering your expectations is a good thing for everyone. Now we
know the pros and cons and can use it appropriately. No one particular habit
is going to solve all your problems.

[https://www.youtube.com/watch?v=7gt4StirOzc](https://www.youtube.com/watch?v=7gt4StirOzc)

------
richardknop
Would it still be TDD when talking about functional tests? So no mocking. Or
does the strict TDD definition only include unit tests?

Because the general principle of writing a test case and then writing/editing
code still applies with functional / integration tests.

And I have always preferred to use functional tests to test bigger components
/ packages based on their public interface than to write a unit test for
every small function inside the package.

Unit tests seem to be much more useful in situations when you know exactly
what your inputs and outputs should be, for example if you are writing a
function to transform data from one type/object to another. This is where unit
testing shines.

But a lot of development usually involves integrating / gluing together
several higher level components and passing data between these components, and
I much prefer functional tests there.

------
S_A_P
One thing I've noticed. TDD takes longer. It just does. You can argue that you
are racking up less technical debt in the long term but _every_ consulting gig
I've been on where TDD was the "directive" often deteriorated because the
business does not want to factor in between 40 and 100% extra time to allow
proper TDD coding. They want the same somewhat arbitrary and bonus driven
deadlines that they always do, and in order to meet them, we usually end up
tossing out TDD halfway through and reverting to just having skilled
developers get the job done as quickly as possible.

THIS is the economic reality of TDD failing. A manager wanting to reduce
quarterly spend so he gets his bonus doesn't care that TDD will cost him less
over 5 years, he cares that he can get a project delivered on time and under
budget...

~~~
jldugger
> One thing I've noticed. TDD takes longer.

It's not just you, there's ample evidence[1][2] that TDD is a tradeoff between
delivery time and quality. I think most experienced practitioners would agree
this makes sense: when the schedule slips, you can cut testing but you can't
cut implementation. Works at all trumps works in all cases.

In the TDD model, you make that decision up front, as well as the investment
in tests. When the schedule slips in development, there's nowhere to cut but
the most painful part, so you don't. The schedule slips.

To take a contrarian position, TDD fails because it doesn't allow management
to revisit decisions as reality deviates from the plan N months ago. This
seems like a surprisingly non-agile approach at the high level.

[1]:
[https://pdfs.semanticscholar.org/4dcf/5e7eed29c6707a8e1a415c...](https://pdfs.semanticscholar.org/4dcf/5e7eed29c6707a8e1a415c5a6713a23c1d91.pdf)

[2]: [https://www.infoq.com/news/2009/03/TDD-Improves-
Quality](https://www.infoq.com/news/2009/03/TDD-Improves-Quality)

------
colomon
It always startles me when people assume that one programming technique either
works for every type of programming or doesn't work for every type.

Working on Perl 6 compilers, an extensive set of tests was our best friend. It
was (probably still is, I haven't had time to help the last few years) utterly
routine to write tests first and then write the code to make them work. It was
a perfect way of working on it.

On the other hand, one of my personal projects in Perl 6 is abc2ly. It has
lots of low level unit tests, great. But almost everything really interesting
the program does is really hard to test programmatically. How would I write
tests to make sure the sheet music PDF generated has all the correct notes and
looks nice? That problem is significantly harder than generating the sheet
music in the first place!

~~~
dtech
I think TDD works great if you have a very clearly designed input or
interface. A programming language falls under this category: "here's a piece
of code, make sure it compiles and returns the correct output" is a perfect
test, and one you can define beforehand and that is achievable to write.
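That compiler-style workflow can be sketched in miniature. Everything below
(the `evaluate` function, the test cases) is a hypothetical illustration, not
code from any real compiler: the spec is a table of (source, expected output)
pairs written up front, and the implementation exists to make them pass.

```python
# Hypothetical sketch of compiler-style TDD: each test case is a piece of
# source plus the output it must produce, defined before the implementation.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}


def evaluate(src: str) -> int:
    """Evaluate +, -, * arithmetic over integer literals."""
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError(f"unsupported syntax: {node!r}")

    return walk(ast.parse(src, mode="eval").body)


# The spec, written up front: "here's a piece of code, make sure it
# returns the correct output."
CASES = [("1 + 2", 3), ("2 * 3 - 1", 5), ("(1 + 2) * 4", 12)]
for src, expected in CASES:
    assert evaluate(src) == expected
```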

However, programmers are often not in such a luxurious position. Business
requirements are unclear, it is unclear what the solution even should look
like, and the requirements will change over time and are changed based on the
software's progress. In this case TDD only slows you down. It's also why
short iteration times (i.e. agile) seem to work relatively well in software
development.

~~~
lgunsch
It's exactly the opposite. TDD was put forward as a solution to unclear,
changing business requirements. It allows you to adjust course very quickly
when the business realizes its requirements were off and a new set of
requirements is needed.

Robert Martin has even mentioned that it's a waste of time if you have a full
100% specification available to you, or when you already know what the final
solution is and simply need to implement it.

~~~
dtech
I'm not an expert on TDD, but how does TDD help in this situation?

If your requirements change, your existing tests and code become (partly)
useless. This would indicate the most efficient way is the exact opposite
where you test minimally until the requirements have stabilized.

~~~
lgunsch
The goal of TDD is not to have tests, or a huge test coverage. Those are just
important side-effects. One goal, ignoring how it improves focus and divides
complexity up nicely, is to make cleaner code. If your code respects the
single-responsibility-principle, is not duplicated, and doesn't have
dependency issues, you can adjust the requirements faster. The test suite
allows you to be confident your changes to some parts of the system don't
break others.

You can keep more code, not even having to touch many parts of the system.
You know that by changing x, you don't also have to change w, y and z. Of
course code still must change, but it's vastly simpler and quicker to change.

Now, that doesn't mean you can't have those benefits without TDD; it's just
that they come much more easily and clearly with it.

------
alexeiz
This looks more like "Misunderstood TDD did not live up to expectations." It's
obvious from the article that tests were written after the code. That explains
the excessive coupling in the code and tests being hard to write and
constantly getting in the way of code development. This is not "test driven
development". This is bolting tests on top of already (poorly) designed and
implemented code. Tests are an afterthought. They did not drive the design. No
wonder it doesn't work well. The high coupling in the code has to be repeated
in tests and it's a rather painful and fruitless exercise. Had tests been
written first, it would have been clear which design led to lower coupling:
the one it's easier to write tests for.

------
makecheck
It’s important to not assume that tests and code will be written by _the same
person_.

When tests are being created early, it’s actually a good excuse to have at
least a couple of minds looking at the same problem, instead of bottling it up
into one person who ends up quitting next month. It’s an excuse to not just
discuss the approach to use but have some code, where each person may realize
that they hadn’t really thought about the whole problem or maybe didn’t
understand it at all.

Other criticisms in this thread are still fair. It is certainly possible to
waste a lot of time on tests for instance, and to build something that is too
restrictive. Ultimately though, if you’re more than a one-person project,
_some_ form of “sketch it out first” is a good thing.

------
paupino_masano
I think it all depends on the context of what you're writing. For example, we
use TDD when making additions and modifications to a tax engine. For this use
case it's incredibly useful as the relative inputs and outputs are both
predictable as well as repeatable.

------
crucialfelix
Some code is perfectly suited to TDD. Other code not so much.

I was just working on something with a bunch of tangly functions that measure
and remap data and prepare it for sonification as sound events.

I keep Jest running and the workflow is much quicker and more satisfying than
any other way of hacking.

------
velox_io
TDD is a nice idea; however, it can add quite a bit of upfront overhead,
before writing any code. This isn't a problem if it is justified/needed.
Plus, the tests can become more cumbersome when code changes: more baggage to
carry.

Testing often becomes a KPI, and therefore is commonly gamed: doing the bare
minimum to tick '100% code coverage!'. I'm a fan of contracts to test the
spec and boundaries (whether human or other application) of software.

TDD requires discipline and experience; you could spend an infinite amount of
time writing tests and never deliver, or at the other extreme become
incapacitated fighting bugs.

Our first priority should be crash-free software; THEN we can start thinking
about making it bug-free.

------
_Codemonkeyism
I love unit tests, they give me a good vibe and find some kinds of bugs -
mostly b/c I think differently about a problem.

TDD never worked for me, b/c the code goes through many refactorings until I'm
happy and it always felt tedious refactoring the TDD tests.

------
he0001
For me TDD is mainly three things. Firstly, it's about testing myself: have I
understood the task properly before writing any code at all? In this step I
can also tell if my code is doing too much and therefore whether my method or
function conforms to the "single responsibility" rule. Secondly, it's about
maintainability and logical reasoning. In a codebase where I don't know, or
have forgotten, what the code is supposed to do, I can rely on the tests to
skip the parts that aren't interesting, letting me move faster. Thirdly, it's
about the ability to refactor and therefore evolve the design. Even though
this is a step in TDD, you will always need to refactor, since requirements
change over time. The solution you had is no longer the optimal solution, so
you must refactor anyway. Evolving design is a strength: you continuously
strengthen your code while getting work done faster and faster, since you can
offload the reasoning onto the tests, which cover your back.

AFAIK TDD is actually the only way to produce code in a systematic way. When
reading TDD-driven code I can make certain assumptions which I cannot make
with randomly produced code (there's no system to it). TDD code is developed
with tests in mind and is always a lot easier to test when you need to, and
you always need to. (I would argue that some code is not testable as is
unless you refactor it, and then you don't know whether the code is still
doing the same thing it did before.)

If your tests get coupled with the code, I'd say it's because either your
method/function is doing too much or it's a language problem: the language
isn't giving you the necessary tools to ignore implementation details, which
mocks are usually an indication of.

Since TDD is a systematic way of producing code (at least more systematic than
not doing it), code which isn’t produced with TDD will not play well with TDD
produced code since it won’t follow the same conventions, designs and
possibilities.

TDD doesn’t automatically make the code more bug free, but I don’t believe
that TDD cause more bugs just because you use TDD.

If programmers cannot learn or deal with TDD, you have a different problem on
your hands.

------
hdi
I like the general assumption that TDD failed.

Failed at what exactly? Who would think 1 methodology would give them all they
need to be a great software engineer?

If TDD supposedly failed, can we hear about the process that succeeded where
TDD failed? Please extend an olive branch and enlighten the rest of us.

Because I tell yea, I can't even count the number of "senior software
engineers" I've encountered who deploy untested production code daily to
systems that help you guys buy your coffee in the morning and manage your
money and pensions. Oh yea, and they all seem to think TDD is bollocks too.

When that percentage decreases and engineers like that become a rare
occurrence, then we can talk again. Peace.

------
elliotec
Very relevant 3 year old DHH article:
[http://david.heinemeierhansson.com/2014/tdd-is-dead-long-
liv...](http://david.heinemeierhansson.com/2014/tdd-is-dead-long-live-
testing.html)

------
AngeloAnolin
This.

"developers are quick to pick up the workflow and can create working code and
tests during classes/exercises/katas – and then failing in the real world."

The trap of TDD is that people equate having well-defined test cases with a
solid product that delivers the solution the user wants. I have seen far more
software projects boasting >85% code coverage for tests, but still failing
spectacularly.

TDD failed because it was assumed to be the magic wand that aligns the end
product with what's on spec and with the process it would cover, but the
reality is that human behavior cannot simply be reduced to test cases.

------
corpMaverick
Title should be: "TDD did not live up to MY expectations".

TDD is more of an art than a science. You have to know when and how to do it.
You also have to know how much, as the marginal utility diminishes very fast.

------
didibus
I remember an old Microsoft analysis where they measured empirically time to
delivery and actual defects and found TDD to not reduce defects while
increasing time to delivery. Can't find it anymore.

------
weberc2
The main benefit for TDD in my mind is that it mostly makes it painful to
write spaghetti code. When I'm reviewing something that looks too integrated,
I just ask for a unit test for that particular piece of functionality, and the
author is effectively forced to go back and refactor. After going through this
a few times, they learn to think about their design before they write code. Of
course, many dynamic languages defeat this by offering hacks like Python's
mock.patch which let you nominally test spaghetti code...

~~~
shubb
Interesting that you mention dynamic languages. I think that a lot of why
people do TDD is that you can't reliably know if a program in a dynamic
language has basic issues unless you run it.

The same is partially true for manual memory managed languages.

You need to push towards 100% coverage in order to replicate the advantage you
get in a statically typed language like Rust.

~~~
weberc2
Yes, the better static analysis you have, the less you need to depend on
runtime checks, but Rust makes a lot of tradeoffs which ultimately make it a
poor choice for most application development (steep learning curve
notwithstanding). There's a whole suite of static functional languages that
give you the same strong static guarantees, but without the headaches of
borrow checkers, lifetimes, boxes, etc, etc. Beyond that, there is Go which
gives you some rudimentary static typing (a 99% improvement over dynamic
languages), world-class tooling, a great deploy story, etc. In my mind, Go is
the sweet spot for the demands of modern application development.

------
PaulKeeble
Microsoft's own study of TDD showed that it definitely improved defect rates,
and they went fully into developing with it for Vista, which is part of the
reason for the delays. Nowadays large chunks of the API are automatically
tested, and this has allowed Microsoft to release changes much more often
with a lot less manual testing.

So while this individual is finding that his local team isn't getting the
full benefits, Microsoft appears to be, based on its own report on the
technique and on the changes to its outward software release cycle.

~~~
mandude
Wasn't Vista one of the Windows versions that was known for having bugs?

------
gedrap
Personally, I found that TDD works very well for small modules / classes, etc,
when there's little design to be done. In this case, you can focus on writing
down the spec (test cases) and be fairly confident that it works by the time
all test cases pass. Also, I agree with the author about complexity involved
if one decides to go down the TDD way for large, complex systems. So,
essentially, it boils down to picking the right tool for the job, and TDD is
just a tool, like any others.

------
garganzol
TDD works exceptionally well. The secret sauce is to find the correct scope
for tests. In my experience, integration tests are the most suited kind of
tests for successful TDD.

------
StevePerkins
Any programming language suitable for business application development is
going to have static analysis tools that can reveal your percentage of test
coverage.

As long as 95% (or whatever) of your logical branches are covered by tests, I
don't really care whether you wrote the tests beforehand or after the fact.

However, TDD being hard is not a justification for not writing the test
coverage at _some_ point in the dev cycle. Too many developers, and managers,
make that fallacious leap.

~~~
eropple
_> As long as 95% (or whatever) of your logical branches are covered by tests,
I don't really care whether you wrote the tests beforehand or after the fact._

Do you care that those tests reflect the business goal or just the
implementation? 'Cause TDD is way, way more likely to do the former than test-
last development.

~~~
StevePerkins
What are you talking about? TDD generally deals with unit testing. Which by
definition is more fine-grained than end-to-end integration tests against non-
mock dependencies.

All of which is unrelated to UAT. Where you show the alpha to the business,
and it's only after seeing it live for the first time that they start to get
on the same page about what their business goals were to begin with.

~~~
eropple
TDD totally deals with unit testing. Are your units not reflective of business
goals? Mine tend very strongly to be, perhaps because I extract IO and other
state crap out into the less functional part of my program (and I don't really
unit test that).

------
perlgeek
When reading the title, I had hoped for data, like when Microsoft analyzed
their own developers' data to find out if remote work impacted productivity
or bug counts.

Instead, it's just another anecdote. Sure, an anecdote spanning 15 years, but
still not what I hoped for.

Doesn't Microsoft have hundreds of dev teams, and can compare things like
development speed and bug counts, and correlate with whether those teams
practice TDD? I'd read that article immediately!

------
blunte
[https://en.wikipedia.org/wiki/No_Silver_Bullet](https://en.wikipedia.org/wiki/No_Silver_Bullet)

------
kgilpin
Also, when a test relies on mocks it doesn't test the real thing, and doesn't
guarantee proper behavior in the real world. I suppose this is obvious from
the nature of mocks. And yet, if you can figure out a clean and fast way to
test something without mocks, I think you're better off.

Along with the coupling problem mentioned in the article, these are the two
reasons why I am writing a lot fewer mocked tests (e.g. Rspec) than before.

------
krmboya
What about a kind of middle ground: doing a 'spike' when figuring out stuff,
what kind of thing you should build, what the design should look like, etc.,
then following up with TDD to stabilize the identified interfaces and produce
tests that act as a system health check?

Ok, for consultants, perhaps they end up doing the same kind of things for
different clients to the extent that they can just jump in doing things TDD
from the very beginning.

~~~
poushkar
That's exactly how I work (web development) and I am really happy with it so
far. First, I am playing with a throw-away code to understand how I want it to
be. Then, I write some tests based on the acquired understanding and then
finally working code. I can even copy-paste some parts of the throw-away code
if they were correct initially.

I do see my designs being damaged by tests sometimes, though in tolerable
amounts, and I am happy to compromise.

------
Debugreality
I've seen TDD used once really well in a university setting where it was used
only for shared libraries/services that could be used by multiple other teams
or departments but not on individual (front facing) projects.

Probably because only the best developers on the team worked on the shared
services, it eliminated the refactoring issue as well as ensuring that shared
services could be updated a lot more reliably and safely.

------
blackoil
TDD should not be taken as religious dogma. The way I like it: central
business logic as pure functions, which have tonnes of unit tests, while
integration with other services and components sits at the edges, covered by
integration tests instead of unit tests. If I have a key piece of code I want
to test but it would require lots of mocks, it is time to refactor.
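A minimal sketch of that split, with hypothetical names: a pure
`apply_discount` core that unit tests exercise directly, and a thin
`checkout` edge that takes its I/O as function parameters and is left to
integration tests.

```python
# Hypothetical sketch: pure business logic in the core, I/O pushed to the
# edges as plain function parameters.


def apply_discount(price: float, loyalty_years: int) -> float:
    """Pure core logic: no I/O, trivially unit-testable."""
    rate = 0.05 if loyalty_years >= 3 else 0.0
    return round(price * (1 - rate), 2)


def checkout(order_id: str, fetch_order, save_total) -> float:
    """Edge code: wires the pure core to storage, and is covered by
    integration tests rather than unit tests."""
    price, years = fetch_order(order_id)
    total = apply_discount(price, years)
    save_total(order_id, total)
    return total


# The pure function gets "tonnes of unit tests" directly, no mocks needed.
assert apply_discount(100.0, 5) == 95.0
assert apply_discount(100.0, 1) == 100.0
```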

------
pcarolan
Defining the interface before you write the code is the major advantage of
test-driven development, and what it added to that way of thinking was very
valuable, especially to novices. It also makes your code more modular and
reusable. Writing code as if other people were going to use it is something
we don't talk about enough.

~~~
brango
I found it just flew in the face of iteration. I get something simple working,
then iterate a few times, adding complexity. If you have to iterate your tests
as well, you're just walking in treacle.

Either that or you end up with a load of stuff in your interface that you
thought you'd need, but when you actually got to it you either didn't, or it
wasn't important to implement at the outset.

------
99_00
>The tests get in the way. Because my design does not have low coupling, I end
up with tests that also do not have low coupling. This means that if I change
the behavior of how class <x> works, I often have to fix tests for other
classes.

At this point you should be realizing your code is untestable and needs to be
refactored.

------
alexandercrohde
TL;DR: When we treat unit tests as an end in themselves, we end up writing
clunky tests for clunky code. If an engineer doesn't understand modular,
reusable code, then that engineer won't be able to write code that can be
tested easily. Thus understanding design is a prerequisite to effective TDD.

------
henrik_w
One of the key benefits for me is the mindset (fostered by TDD) to make as
much as possible of the code (unit) testable. This naturally leads to less
coupled code, because otherwise it is not possible to test it in isolation. So
the fact that you start with the aim of unit-testability leads to better
designs.

------
faragon
Tests are good for detecting code that is not working as expected, treating
them as an investment/insurance based on a _budget_. However, in my opinion,
TDD is often more of an obsessive-compulsive religion built on the wishful
thinking that programmers reach excellence by writing tests ad nauseam.

------
luord
This article made me feel good. Ever since I started doing TDD, I refactor a
lot more and my code looks nicer.

Hopefully, I'm not falling into the other trap he mentions and getting into
design that would be worse than 15 minutes up-front.

Sadly, I can't comment on anything else as TDD isn't practiced _at all_ in my
area.

------
remotehack
Software obeys its own dynamics. Some things work well, some not quite the
same. It's nice to see someone admitting that testing is good while rigid
TDD, like rigid Agile or rigid Waterfall, is bad. Corollary: what our parents
said still holds true; too much of a good thing is bad.

------
deweller
I grant the premise that TDD has drawbacks. But are they really worse than the
drawbacks of not writing tests?

Code with no test coverage will have more defects and will be more prone to
regressions.

For many projects TDD is the best we've got until something comes along that
replaces it.

~~~
globuous
I thought TDD meant writing tests _before_ implementing code, rather than
simply testing code.

So I always assumed someone testing their code wasn't necessarily TDDing, and
that someone not TDDing wasn't necessarily not testing; they just test after
implementation.

Is my understanding of TDD wrong?

~~~
deweller
You are correct.

I incorrectly interpreted "don't do TDD" as "don't do tests".

------
buckbova
Seems a little arrogant to say most developers don't know how to refactor or
do it poorly. Maybe it's true. I really can't say one way or the other because
what I see is most devs believe they don't have time to refactor.

~~~
ericgunnerson
Original author here...

Let me ask you - and by "you", I mean everybody reading this - a question.

If you had poor design or refactoring skills, how would you know?

Let's just say you've finished working on some code that involved writing new
code. How do you know whether it's well-designed?

Well, you start by using your own sense of what is good design. But all of us
can only see the things that we already know about, so our thinking that the
code has a decent design is only a sign that it is up to our level of
understanding. This is a fundamental part of how knowledge works.

Code review can give some additional design feedback, but 1) code reviewers
_generally_ don't have time to give in-depth design feedback and 2) "you
should redesign this" is generally not well received (see previous point
about everybody thinking they are good designers), and I both have to work
with my teammates and their comments factor into the kind of review I get. So
code review is not a good way to improve design, even assuming that you have
people on your team who have useful design advice to pass on _and_ you are in
the kind of environment that lets you be thoughtful about design.

If you are in a team with decent designers and your team pairs, then you are
in luck; you have a good chance to expand your design skills. Leverage this at
every opportunity.

Finally, it is possible to spend time (almost always your free time) learning
more about design, doing katas, going to code retreats, etc. This also works.

And I'm sorry if this sounds arrogant; I've tried to find other ways of saying
it.

------
partycoder
TDD forces you to design with testing in mind, also known as construction for
verification.

If testing gets in the way, it is because the design doesn't emphasize
testing.

Then, if you have high coupling, you have high complexity and a stronger
reason to test.

------
baybal2
As I remember, Microsoft was one of the original TDD pushers. One person who
worked there for over 20 years told me that "the peak TDD" at Microsoft was
right around the time of the Win ME release.

~~~
ern
_As I remember, Microsoft was one of the original TDD pushers._

You may be mistaken: There was a well-known incident back in 2005 where
Microsoft published guidance for TDD that was flat-out wrong (or at least not
fitting with the mainstream understanding), and they had to retract it (in
fact it was so far-out that they had to expunge the article from MSDN).

[http://codebetter.com/jeremymiller/2005/11/18/microsoft’s-re...](http://codebetter.com/jeremymiller/2005/11/18/microsoft’s-recommendations-
for-test-driven-development-are-wrong/)

 _One person who worked there for over 20 years told me that "the peak TDD"
at Microsoft was right around the time of the Win ME release_

Of course, Microsoft is big, and the broken guidance may not have reflected
practices elsewhere in the company, but it seems unlikely that they were early
TDD pushers if they hadn't grasped the concept (as commonly understood) by
2005.

------
hughperkins
Since when did TDD fail? Which is not to say it needs to be applied
systematically to everything. But there are often bits of code which are
better off being correct, and TDD works well for those.

------
tomelders
Off topic: but who's in charge of typography at Microsoft? The typesetting of
their blogs and documentation is horrific. And what little I see of Windows
looks just as bad.

------
jv22222
I'm not sure if the OP is advocating against using tests and CI completely,
or just against the process of writing tests first and then code... Anyone
got any thoughts on that?

------
AdmiralAsshat
Stylistic critique of the article: is it too much to ask that you spell out
your acronym at least _once_ throughout the entire article? The acronym "TDD"
appears 16 times throughout the article, and not once do we get "Test-driven
development" spelled out.

I get that it's a technical blog, but "TDD" isn't exactly a household name.
You can't utter it in the same breath as SSL or RSA and expect people to know
what it means without context.

As a test (no pun intended), try reading the article with the premise that you
have NO idea what "TDD" stands for. Can you reasonably infer it from the rest
of the article?

~~~
briandear
What’s RSA?

~~~
marcosdumay
It's an encryption algorithm... And a proper name, not an acronym¹, detracting
from the GP's point.

1 - Ok, it's technically an acronym, with a meaning different from the thing
it names.

------
MichaelMoser123
If requirements are not clearly understood, then the tests will not be
complete - now, does that invalidate the need to write tests? I don't think
so.

Now this problem is amplified when writing tests on top of mocks; if you don't
understand the requirement of the next level (mocked level) then your tests
will be very incomplete.

Still, having unit tests that are run with each build is much better than
having no tests at all.

------
cphoover
TDD failed? What? No it didn't. When was the last time you depended on an
untested library for a major project?

~~~
runald
I've seen this too many times... TDD != Automated testing

------
dc2
Writing TDD tests made me a better developer because it pointed out just how
coupled my applications were.

------
throw7
too bad about the RE on the REOI. But TDD really failed because it didn't
identify SLG parameters. I don't know if I'd call it a failure though, SLG
parameters are usually hard to know before the start of a project and, even,
throughout.

------
jasonkostempski
Could have figured that out much sooner if someone had written tests for the
expectations.

------
emperorcezar
Gonna throw the baby out with the bathwater because bad programmers write bad
tests.

------
haskellandchill
TDD works for me and the rest of Pivotal Labs, maybe 1000 or so people. YMMV

------
24gttghh
Test Driven Development? All these acronyms and not once is it spelled out in
the article or the discussion! Is it like saying HTTP or DNS to most people?
I'd honestly never heard of it...but the concept seems logical from a high
level.

------
zubairq
I wish I could upvote this article X10000000... I totally agree!

------
matchagaucho
Salesforce.com Developers are the highest paid in the IT industry and TDD is
hardcoded into their development process. (All code is _required_ to have 75%
unit test coverage).

Correlation is not causation... but maybe it is?

------
mdpopescu
TLDR: TDD done badly doesn't work well.

~~~
falcolas
I'd say a better summary is that TDD won't transform OK programmers into great
programmers.

------
dreamdu5t
TDD: For people who've never used a decent type system.

Writing a Solidity test to add two integers today really drove home the point.

~~~
eropple
The language I've been most effective with when using TDD is Scala.

~~~
dreamdu5t
That's great. In my experience, most TDD has looked like "We're going to need
a function or class to do this, so let's write a test that calls all those
methods first." Which is absolutely ridiculous when we could just be using
languages that have concise semantics to describe what they actually do.

For example, the last HTTP API I wrote in Haskell has no tests. The proper
request/response is ensured by the types alone. I've had multiple jobs where I
spent weeks writing tests to do the equivalent thing with a nodejs API (the
body should contain this, the headers should look like that, etc.)

------
soared
Yeah, I could google it, but it's common practice to spell out an acronym the
first time you use it.

------
geebee
With over 405 comments, I am late to the party here, especially since this is
just my 2 cents.

I think TDD "failed" largely for creative reasons. And it didn't actually
fail.

The reason I was willing to use the word failed, in quotations, is that I do
think that TDD is dead in the sense that it's a stick that can be used to beat
people into submission. There was a long series of debates on youtube with DHH
and a few TDD advocates, titled "Is TDD Dead", and it's funny that I think DHH
largely won the debate considering that I believe the answer is, clearly, "no,
TDD is not dead". TDD remains relevant and useful.

And yet, I think the TDD proponents suffered a severe setback in that debate,
severe enough that I'd consider it a pretty bruising defeat.

Why? Because the debate showed that debate is reasonable. That the position
that TDD is dead can actually be defended. Here's the thing - a lot of TDD
proponents denied the existence of a legitimate debate. There was right, and
wrong. Blog posts saying that people who even question the value of TDD should
be unemployable, that TDD might rightly one day have the force of law behind
it, that questioning TDD is the modern day equivalent of medieval doctors
denying the importance of sanitary conditions and washing hands. That sort of
thing. I think that by the end of the debate, there were too many cracks in
the TDD argument to deny that not doing TDD may, in fact, be a good way to
write software. TDD may not be dead (or even close), but that sort of
browbeating certainly was put to rest.

Some of it was what DHH called test driven design damage. But the biggest one
was creative. TDD may simply not work for the creative flow required for many
types of software development. It's like, to contrive an analogy of my own,
requiring a writer to outline the next page before writing the current one.
It's just too disruptive. You can justify it a hundred ways from Thursday, but
if people doing it can't write software as well as people who don't, TDD will
lose.

None of this is to understate the importance of test coverage. But write some
code, write some tests, repeat - yeah, I think that works. Trying to force
everyone to do TDD through a campaign of shaming and intimidation was a
horrendous fail. That was the outcome of the youtube debates - TDD advocates
actually did defend the practice quite well, but they fell far short of a
standard that would mean DHH shouldn't be employable because he questioned
TDD.

Perhaps not all TDD advocates were that extreme, but it was a strong enough
faction in the TDD movement that I don't think I'm finding extremists and
using them as a straw man. That sort of browbeating really was part of the TDD
culture, and I think that even the good parts of TDD, the parts we should keep
and even evangelize as developers, are harder to defend because of these early
tactics.

------
mncharity
Boston used to have a software craftsmanship meetup. One month, on the train
going home, a few of us discussed "how to describe TDD". Someone had a
teaching gig coming up. That night, I attempted to distill the views expressed
by this couple of experienced TDD folks. Here it is, FWIW.

# What is TDD?

TDD is JIT-development, built on tests.

It's not developing things before you need them. That's too easy to get wrong.
It's not built on reviews and approvals. They're too slow and fragile.

## TDD's JIT development with tests is:

1\. Live in the present.

Focus effort on what is clearly useful progress now, not speculation.

Don't do planning or development before you need to. Because later, you will
better know what is actually needed, if anything. Be restrained but thoughtful
in judging how much of what needs to be done now.

Don't put off integration. Until then, usefulness and spec are only
speculative.

2\. JIT-spec

Capture each behavior you care about as a test.

Keep them simple, small, and clear. A new spec is a failing test. A passing
test means "done with that -- next!".

Don't stuff your mouth. Don't do lumps. Keep it bite-sized.

Don't spec it until you need it, even if you speculate that you know where you
are going later.

Don't worry about the spec having to change later. Specs usually do. If the
speced behavior is clearly useful now to make progress, that's good enough. If
it's something you don't really care about long-term, you can remove it later.

3\. JIT-implementation

Keep implementation minimal.

Don't create speculative code. Do reactive implementation and refactoring. If
you "might need it later", write it later, when you have clearer need and
spec, and more tests available.
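The JIT-spec and JIT-implementation steps above amount to one "red-green" iteration. A minimal sketch in Python; the `slugify` function is a hypothetical example of mine, not something from the thread:

```python
# JIT-spec: capture exactly one behavior we need right now.
# Before slugify exists, this is a failing test -- a new spec.
def test_slugify_replaces_spaces():
    assert slugify("hello world") == "hello-world"

# JIT-implementation: the minimal code that makes the test pass.
# No speculative handling of unicode, punctuation, or casing --
# each of those gets its own failing test when (and if) it's needed.
def slugify(text):
    return text.replace(" ", "-")

test_slugify_replaces_spaces()  # passes: "done with that -- next!"
```

The point of keeping the implementation this thin is the "don't create speculative code" rule: the next failing test, not a guess about the future, decides what gets written.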

## About tests.

Programs have a few behaviors you care about, and many more that are
implementation details. Test the behaviors you care about.

Tests redistribute development flexibility and speed.

* Behaviors pinned down by tests are harder to change, because you have to update the tests too. They're transparent but rigid, and change is slower.

* All other behaviors become much easier to change. Because everything you care about is tested, implementation changes can be made energetically, without excessive caution or fear of accidental hidden breakage. They're opaque but flexible, and change is faster.

Distribute your transparency and flexibility wisely.
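One way to read that trade-off in code: pin down the behavior you care about, and deliberately leave the implementation opaque so refactoring stays cheap. The `dedupe` function here is a made-up illustration:

```python
def dedupe(items):
    # Implementation detail: dict keys preserve insertion order (Python 3.7+).
    # Swapping this for a set-plus-list, or even an O(n^2) scan, changes
    # nothing the tests below can observe.
    return list(dict.fromkeys(items))

# Pinned (transparent but rigid): the behavior we actually care about.
def test_dedupe_keeps_first_occurrence_order():
    assert dedupe([3, 1, 3, 2, 1]) == [3, 1, 2]

# Deliberately NOT tested (opaque but flexible): which data structure
# is used internally.
test_dedupe_keeps_first_occurrence_order()
```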

Some topics I'm notably unclear on include test refactoring and management.

* when does one delete tests?

* how are lines of development pivoted?

* how are different classes of tests handled? (eg: external spec commitment; less critical spec I still care about; sentinel spec, which I don't mind changing, but I don't want it to happen accidentally/silently; spec that's transient development scaffolding, and should be removed later; and so on)

Opportunities include:

* broader coverage of the strategy, test, and implementation layer activities

* description of how test suites and implementations change longer-term

* specific discussion of cross-cutting issues like risk mitigation

* tighter characterization of core (eg, all tests and code are a burden, and start with a high time discount, so create and retain only those which are clearly and currently useful)

------
programminggeek
This is why TDD failed...

People don't like doing it.

So they don't.

The end.

------
kmicklas
Most developers are bad at refactoring because most of the tools for it are
terrible. Even at Google, something as trivial as renaming a function can be a
monumental task.

------
draw_down
I don't really have solid arguments against it, I just never found it
particularly helpful, or ended up with a result that seemed to justify always
working this way, or even defaulting to working this way.

------
joeblau
DR

~~~
wolco
TDD could be the reason for the bad system design. If you are writing code
just to pass a test and you do not refactor, you end up with a mess. Why not
write it properly as a first step and add the tests later in the process?

------
im_down_w_otp
PBT (property-based testing)-centric TDD has worked very well for us.
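A property-based test asserts an invariant over many generated inputs rather than hand-picked cases. A minimal stdlib-only sketch of the idea (libraries like Hypothesis or QuickCheck do this far better, with shrinking of failing inputs); the run-length encoder is a hypothetical system under test:

```python
import random

def rle_encode(s):
    """Run-length encode: 'aaab' -> [('a', 3), ('b', 1)]."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def rle_decode(runs):
    return "".join(ch * n for ch, n in runs)

# Property: decoding an encoding returns the original string,
# for arbitrary inputs -- not just a few hand-written examples.
random.seed(0)
for _ in range(200):
    s = "".join(random.choice("ab") for _ in range(random.randrange(20)))
    assert rle_decode(rle_encode(s)) == s
```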

