
Test-Driven Development Is Fundamentally Wrong - psalminen
https://hackernoon.com/test-driven-development-is-fundamentally-wrong-hor3z4d
======
rubyn00bie
Like all ideologies, TDD has holes, and cannot be perfectly applied to the
real world.

I think the author is doing what TDD (or BDD) is fundamentally trying to
get people to do, by writing such detailed requirements and specs: think about
shit before they write the code.

When I take the time to properly spec something out down to the interface and
calls, implementing it is almost always a cake-walk. If I do TDD to describe
the interface, implementing it is almost always a cake-walk. But! I can only
do these things if I truly understand the problem and its domain, if I have
unknown unknowns my spec and tests will be wrong. C'est la vie.

Is TDD a panacea for software development problems? Nope. Does it help? It
sure can.

Personally, I start writing something using BDD until it's hobbling along
(~40% done), and then I switch to TDD for the rest of it since I think it
allows me to write correct software faster.

~~~
deeviant
> I think the author is doing what TDD (or BDD) is fundamentally trying to
> get people to do, by writing such detailed requirements and specs: think
> about shit before they write the code.

I hear this a lot, but I wonder if the people writing or saying it have ever
considered that TDD does not have a monopoly on "thinking about shit". Back in
the day, we used to create a blueprint before we manufactured the metaphorical
plane that is software. We had design reviews, not just code reviews; we
_thought about shit_ far more than is commonly done today in the Agile
software reality we live in.

I also find it ironic that many of the developers/managers I have worked with
who most ardently supported TDD also tended to ardently support Agile/scrum,
which I find to be polar opposites. Agile development, to me, is the "fuck it,
we're doing it live" development methodology, where planning and other sorts
of "thinking about shit" are dogmatically attacked ("Don't go chasing
waterfalls, take a bite and make some progress", etc.), only to be invariably
followed 6-12 months later by a retrospective bullet point that reads
something like, "X didn't take Y into consideration, causing considerable
delay/trouble/bad-shit with Z".

------
DerpyBaby123
> when the tests all pass, you’re done

> Every TDD advocate I have ever met has repeated this verbatim, with the same
> hollow-eyed conviction.

My experience has been much different, in that I've never heard this mantra.
What I have heard for years is 'Red, green, refactor'.

I question what it is the author is railing against, as it doesn't seem to be
the TDD that I'm familiar with.

------
Driky
I would laugh if it weren't so crazy that someone who doesn't know how to do
TDD was writing an article to say it sucks. EDIT: after reading the blog post
a second time, I even think the author doesn't know how to write more
classical unit tests.

~~~
gmiller123456
What is it the author said that makes you think he didn't know how to do TDD?
Or write unit tests?

~~~
ergothus
> Note that (4) may happen dozens of times in the course of a large project,
> and that every single revisit of the TDD tests is 100% wasted time.

^ That is what makes me (Note: I'm not the one you asked the question of)
suspicious of the author's understanding.

This is the scenario they were describing:

> 1. write the TDD tests

> 2. begin implementation

> 3. discover an unanticipated consideration

> 4. rewrite the tests

Since the tests themselves didn't reveal the unanticipated scenario, that
means it was in using a dependency, not in the API/interface of their code.
Fair enough. But the argument that the tests are wasted is effectively saying
"making sure the API/interface I was going to use actually worked" was wasted
time.

I'm a fan of TDD, but hardly a fanatic, and honestly I've had limited
opportunities to practice it lately. I think one of the major benefits of TDD
is not the practice itself, but the behaviors it teaches you - confirming your
interface, making each piece small and decoupled - because the things that
make something testable happen to also be best practices for coding. This
author isn't focused on the areas where actually doing TDD becomes complex;
this author is tearing down those best practices by calling them "100% wasted
time".

That's a much harder sell, and this article isn't convincing me.

~~~
gmiller123456
> "making sure the API/interface I was going to use actually worked" was
> wasted time.

That's not how I interpret his statement at all. I think he's saying the specs
changed, so writing the tests for the previous version of the specs was a
waste of time.

~~~
ergothus
> I think he's saying the specs changed, so writing the tests for the previous
> version of the specs was a waste of time.

...and instead they'd prefer to write code that was a waste of time?

If the issue is that time is being wasted, the tests aren't the problem. What
the tests do (define and confirm the interface, ensure modularity and
decoupling) is part of writing good code, and saying "but if I skip that part
it's faster" is an illusion, because you're skipping ahead to lower-quality
results.

When I first did TDD, it took about 6 months before I was as productive as I
was before. After that, though, I was roughly equal in productivity (or
faster), AND my code was better quality (this is anecdotal and hard to
quantify, but my coworkers and the job switch that followed serve to confirm
it). When I deal with problems where TDD is bad (exploratory throwaway code,
code highly coupled to an external data source that is complex to mock, and
browser rendering) I still follow the same ideas as TDD - often writing a test
that can't run, but that defines the interface.

I don't recall the exact quote, but there's a saying something like
"programmers code, great programmers think about code". Tests are thinking. If
you're not using your tests to think, you are in fact wasting time.

The author is pointing to the fact that they're wasting time and blaming TDD.
Consider what their code looks like if they find the test-writing to be so
wasteful. I'd guess they're either a genius-level coder with fantastic
instincts... or they write code that is hard to test in the first place
because of poor practices. As I and most of us don't have perfect coding
instincts, if the author does have such ability, their practices aren't useful
to me. And if they instead write poor code, why am I taking their advice?

------
bbody
I think TDD is overhyped; it is something many people seem to treat as a
silver bullet. That being said, I'm not sure it is fair to say it is
fundamentally wrong. As with many "silver bullets" it has its place. I've
found it particularly useful when I'm writing a complicated function: it
forces me to focus on inputs and expected outputs, and I code to that
specification. As for a changing specification, that is a problem regardless
of when or who writes the tests; it is a part of life.

~~~
Udik
Of course, if both your input and output assertions are written in stone from
the beginning, and you're writing a single piece of code transforming an input
into the output, then why not. But this is hardly the general case.

The general case is more that you'll discover both your requirements and your
solution while coding, many times over. Writing a test that is tightly coupled
with a solution you might discard anyway one hour, a day or a week later is
pretty pointless.

On the other hand, I can understand that it's a good practice, while coding,
to keep asking yourself "how will I test this piece of code?" - as it enforces
a decent architecture of well-isolated parts.

~~~
bbody
Like I said, that was the only time I found it particularly useful; I wasn't
implying that it was the general case.

TDD isn't a substitute for architecture, but I think that if throwing away so
much code and tests is a problem, maybe planning some architecture at the
start of the project might be required.

------
al2o3cr

> With this approach I write the tests after the odyssey of discovery, so the
> tests are only written to the final design

Or if your manager tells you there's another DOUBLE SUPER IMPORTANT TOP
PRIORITY thing to do, the tests are written _never_.

Strict TDD is a technological solution to a management problem.

~~~
mv1
More generally, waiting until the end to write tests is a great way to get
poor code coverage and test cases that are very hard to debug. Writing unit
tests as you go along is the way to go. If you must, reserve system testing
for the end.

------
coorasse2
This article is so full of bullshit that listing all the wrong things Chris
Fox wrote would make an even longer article. And no, this time I'll not start
making such a list, because it would be a complete waste of time. This guy is
completely ignorant and a very bad developer. Read books before you start
writing such shit.

------
jdlshore
Back in 2005, Microsoft published an article about TDD that was wrong. Not
just a little bit wrong, completely and utterly wrong. I wrote about it at the
time:

[https://www.jamesshore.com/Blog/Microsoft-Gets-TDD-Completely-Wrong.html](https://www.jamesshore.com/Blog/Microsoft-Gets-TDD-Completely-Wrong.html)

The authors of that article described TDD the same way the OP's polemic does:
1) write all your tests, then 2) implement.

_But that's not how TDD works._

Every complaint the author has stems from this misunderstanding.

If you're interested in how TDD and related practices _actually_ work, my talk
from last month's Pacific Northwest Software Quality Conference has been
getting a lot of praise on Twitter. The whole thing's worth watching, but the
TDD-specific part starts at 15:21.

Whole video:
[https://www.youtube.com/watch?v=_Dv4M39Arec](https://www.youtube.com/watch?v=_Dv4M39Arec)

TDD part:
[https://youtu.be/_Dv4M39Arec?t=921](https://youtu.be/_Dv4M39Arec?t=921)

~~~
zestyping
I watched the video segment. I really appreciated the presentation style and
visuals—the explanation of your procedure is very clear.

But I'm having trouble understanding how this works in the real world.

In your example, the thing that took 62 seconds to build and test four times
is "invoke an empty constructor in another file". That is the sort of thing
that I think of as a single task, perhaps taking 10 to 15 seconds. Dividing it
into four tiny tasks would only generate work for me; testing it four times
would provide no benefit because the task is so simple. The example feels to
me like a toy example.

I'm having difficulty seeing how to extend this technique to non-trivial
tasks. The moment I do "real work" (e.g. match a string against a regular
expression), writing a series of tests that verifies enough cases to establish
correctness does not take 10 seconds; it can take 2 or 5 or 20 minutes.

And that's where the author's complaint starts to make sense. It may be that
when I write the code, the requirements I have in mind are underspecified or
incorrect (e.g. I don't yet know whether I need whitespace to be significant
because I haven't designed the rest of the program yet, so I plan to write the
regular expression without allowing extra whitespace).

This is where I get stuck. In situations like this:

(a) If I write tests that verify only the requirements that I am absolutely
certain will not change, then I risk ending up with a program that has lots of
incomplete tests and bugs going undetected.

(b) If I write a test that completely verifies the behaviour of the code I'm
about to write, then I risk ending up with tests that overconstrain or
incorrectly constrain the code, so I get the problem the author described: as
I'm building the rest of the program, I realize that I need to make
adjustments (e.g. it becomes clear that I should ignore extra whitespace),
which means I now need to go back and change the test as well as my code, and
repeat.

It's not possible for the requirements to always be 100% complete and
perfectly correct in my mind in advance. The type of situation the author is
describing happens all the time because the process of constructing the
program is a significant part of how the requirements become clear. This is
what the author is getting at, I think.

Have I deeply misunderstood TDD?

~~~
jdlshore
It _is_ a trivial example, no question. It's part of a bigger tutorial example
you can find here:

[https://www.letscodejavascript.com/v3/comments/tdd_intro/5](https://www.letscodejavascript.com/v3/comments/tdd_intro/5)

There's a different, written example here (scroll down to "A TDD Example"):

[https://www.jamesshore.com/Agile-Book/test_driven_development.html](https://www.jamesshore.com/Agile-Book/test_driven_development.html)

Those examples are for beginners, and not totally representative of the real
world. For real-world TDD, check out my "Let's Play TDD" or "Let's Code
JavaScript" screencasts, which are listed in my profile.

> Have I deeply misunderstood TDD?

Maybe? A defining feature of TDD is that you iterate, so you wouldn't take
2-20 minutes to write a series of tests. Instead, you'd write one test, get
that to work, modify it or write the next test, get that to work, etc.

Part of the skill of doing TDD well is figuring out which tests to write
first, so that this iterative cycle forms a smooth path from beginning to end,
while still allowing you to discover new things about your requirements and
design as you go.

Another part of the skill is testing the behavior of your code, not its
implementation, so that you don't overconstrain the implementation.
Implementation changes that don't affect behavior shouldn't require test
changes. This is hard and many people struggle with this.

TDD is easy to learn but hard to master. I personally find it very worthwhile.
The confidence it gives me in my code is very freeing, and I like not having
to spend much time debugging. TDD isn't perfect, nothing is, but the problems
the OP described don't match my experience.

~~~
zestyping
The place where I'm getting stuck is the claim that all programming can be
done in steps that small. That just doesn't seem realistic.

For instance, how could one possibly write a complete test for the behaviour
of a regular expression that matches a C comment in less than 2 minutes or
even 5 minutes?

The test has to be understandable to other readers, so I would easily spend a
few minutes just documenting it carefully so that other readers could convince
themselves that the test is complete.

~~~
jdlshore
You build it up gradually. For example, let's say we're writing an isComment
function (and please excuse any misunderstandings about C comment syntax):

Test:

      it("starts and ends with slash-star", function() {
        assert.isTrue(isComment("/**/"), "empty comment");
        assert.isFalse(isComment(""), "empty string");
        assert.isFalse(isComment("foo"), "not a comment");
      });

Code:

      function isComment(text) {
        return /^\/\*\*\/$/.test(text);  // matches exactly "/**/" and nothing else
      }

That's the first TDD loop. Takes less than a minute. But it's obviously
incomplete, so now we build it up, step by step.

Add another test:

      it("can contain text", function() {
        assert.isTrue(isComment("/* foo */"));
      });

Modify the code:

      function isComment(text) {
        return /^\/\*.*?\*\/$/.test(text);
      }

That also takes less than a minute. We continue in this way, step by step,
handling more and more specific edge cases, until we think we've handled all
the cases. For example, the next test might be _it("can span multiple
lines")_.

~~~
zestyping
Hmm. Okay, so my incorrect assumption was that the test has to be correct. In
this process, it is okay for the test to be incomplete.

I've been pondering why this feels so counterintuitive to me. I think it's
because, in traditional testing, the test is treated as authoritative. Your
test is supposed to be 100% correct, and then you make the code good enough to
pass the test. The word "test" suggests a teacher grading an exam: the teacher
must be 100% correct.

So, I wonder if it is helpful when explaining this approach to explicitly set
aside this assumption. The program and the test can both be incomplete. Would
it be an overstatement to say it seems like the test and the program have more
of a symmetric relationship, rather than an asymmetric one?

Thanks for taking the time to explain this!

~~~
jdlshore
You're welcome! I'm not sure what you mean by symmetric vs. asymmetric
relationships, but if you mean that they're developed in parallel, both as
first-class citizens, with each informing the other, and each taking about the
same amount of time and code, then yes. Or you could also call it a symbiotic
relationship.

~~~
zestyping
Yes, all those things. ("Symbiotic" doesn't quite get at it because it doesn't
imply this kind of equality on both sides.)

Based on my personal experience and observations of others so far, I think a
large fraction of programmers experience unit tests as

(a) taking more time to write than the code under test;

(b) requiring much more code than the code under test; and

(c) relied upon to be correct much more than the code under test.

(To be clear, I'm talking about unit tests, not integration or system tests.)

In order to understand what you meant by TDD, I needed to unlearn all these
things, and I suspect others will need to as well.

------
dhagz
The only reason I like writing tests before code-complete is that I feel less
likely to write my tests to the code. But really, that just amounts to
defining the functionality of the app beforehand, by way of unit/system
tests rather than some design document.

------
agsilvio
I think TDD is fundamentally appropriate (and a blessing). I see it as
generating proofs for claims that your software does X,Y,Z. This is invaluable
to me and has given me confidence in rollouts to production.

