
How Does TDD Affect Design?
http://www.jamesshore.com/Blog/How-Does-TDD-Affect-Design.html
======
hibikir
There are two parts to TDD: what it does for you when you are writing a new
piece of code, and what it does when you come back to it.

IMHO, the advantages when you are writing new code are vastly overstated. A
former employer mandated TDD, and I did my best to follow their practices. It
didn't lead me to better modularization, more maintainable code, or, really,
fewer bugs.

However, going back to code that actually has a real test suite around it is
invaluable. It's not really because the tests prevent you from breaking the
code; more often than not, it's the test that needs fixing. The value comes
from the tests behaving as living documentation. They let me see why someone,
at some point, wanted the code there. Then I can evaluate whether the reasons
are still valid, whether the old requirements still make sense, and whether
my updated requirements are going to be an issue because someone forgot some
byzantine case that the tests just reminded us of.
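To make the "living documentation" idea concrete, here is a minimal sketch in
Python. Everything in it (the shipping rule, the country codes, the courier
contract) is invented for illustration; the point is that the test names and
comments record why an odd branch exists:

```python
# Hypothetical shipping rule with one "byzantine case" baked in.
def shipping_cost(order_total, country):
    """Free shipping over 50, except remote regions always pay a surcharge."""
    if country in {"IS", "GL"}:      # remote regions: flat courier surcharge
        return 15.0
    return 0.0 if order_total >= 50 else 5.0

def test_remote_regions_never_get_free_shipping():
    # Documents WHY the branch exists: a (hypothetical) courier contract
    # charges a flat fee for these countries regardless of order size.
    assert shipping_cost(100, "GL") == 15.0

def test_free_shipping_threshold_is_inclusive():
    assert shipping_cost(50, "DE") == 0.0
    assert shipping_cost(49.99, "DE") == 5.0
```

A reader touching this code years later learns from the first test that the
surcharge is intentional, not a bug, before they "fix" it.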

Now, the question is whether we are better off getting here through TDD and a
quest for full test coverage, or through something higher level, like
Specification by Example. But either way, if the domain is complex enough,
either approach covers a need that static, non-executable documentation
handles far worse, because old-school paper documentation goes stale very
quickly.

So how does TDD affect design? Mostly by making sure that you aren't writing
pieces of code that are so complicated they are untestable, and frankly, you
should probably be avoiding that in the first place.

~~~
Retric
IMO what TDD does right is provide documentation that people are forced to
maintain. It's not uncommon to see the same bug show up regularly because
something that works is fragile. As long as coders don't do X, everything
works, and fixing it properly would take weeks or months, so it's left alone.
So, of course, every new developer does X, generally 2-3 times.

And that's the secret: TDD is almost useless for simple projects, but grow
some cruft and tests stop seeming so pointless. Which, IMO, suggests you
should add tests mostly around that cruft.

------
claudiusd
TDD doesn't lead to better or worse design, your testing strategy does. TDD
changes your approach to testing, but if you choose a poor test strategy then
TDD or not, you're going to have a bad time. I hate to say it, but this is yet
another misleading article that is doing the programming public a disservice.

Here's my testing strategy (YMMV):

1. Feature tests for the "happy path" - make sure the system requirements
work as described from the user's perspective with full integration. I usually
just write a couple.

2. Integration tests for high-level system components - these are more
functional in nature but capture all boundary cases for the components that
sit closest to the user. These are my saving grace when refactoring.

3. Unit tests for methods and other low-level components - these primarily
serve my "integration tests" by factoring out the functionality of low-level
components and allowing me to focus on their integration. Here I'll use
mocking if it's convenient, and when refactoring these usually get re-written.

Notice that this strategy says nothing about TDD, but I would argue that
following the principles of TDD makes this strategy strictly work better.
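One possible shape of those three layers, sketched in Python on a toy sign-up
flow (all names and rules here are invented, not from the article):

```python
def normalize_email(raw):            # low-level component -> unit tests
    return raw.strip().lower()

def register(raw_email, store):      # high-level component -> integration tests
    email = normalize_email(raw_email)
    if "@" not in email:
        raise ValueError("invalid email")
    store[email] = {"active": True}
    return email

# 3. Unit test: pins down low-level behaviour, gets rewritten freely.
def test_normalize_strips_and_lowercases():
    assert normalize_email("  Bob@Example.COM ") == "bob@example.com"

# 2. Integration test: components cooperating, boundary case included.
def test_register_rejects_garbage():
    try:
        register("not-an-email", {})
        assert False, "expected ValueError"
    except ValueError:
        pass

# 1. Feature test ("happy path"): the user-visible requirement end to end.
def test_signup_happy_path():
    store = {}
    assert register(" Ann@Example.com", store) == "ann@example.com"
    assert store["ann@example.com"]["active"]
```

The middle layer is the refactoring safety net: it survives rewrites of the
low-level pieces as long as the component boundary holds.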

~~~
nawitus
>TDD doesn't lead to better or worse design, your testing strategy does.

That's an empirical question. Do you know any studies that actually test this?
It's certainly possible that TDD leads to worse designs or leads to better
designs.

------
isuraed
Ultimately, the only tests that matter are system tests (feature tests). These
are the tests that ensure you're delivering the correct result to the
customer. In practice I find unit level testing a pointless exercise.

~~~
matthewmacleod
To be blunt, that means you've not been using it right.

I totally agree that feature or behavioural tests validate that customer
requirements are met. But you shouldn't (and probably can't) test more
in-depth behaviour in these.

For example, I wrote a shopping basket system for a site last year. There were
feature tests in there - "When I click the 'add to basket' button then I
should see the item in the basket" sort of thing. Those are great. But I also
wrote a whole bunch of unit tests for this - checking that calculations were
correctly performed, and the adding and removing items worked correctly, and
that tax was applied according to the correct rules, and so on. These tests
are super-quick to run and provide a lot of confidence that the API contract
is being adhered to. We could have completely switched out the back-end
storage for a third-party API or something, and the tests would still be
applicable.

There are loads of reasons to test behaviour in layers - I agree that you can
easily over-invest in effectively pointless tests, and I've seen that
everywhere. But don't discard all unit tests as worthless.
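A hedged sketch of what those basket unit tests might look like, in Python
(the class, the tax rule, and the numbers are invented stand-ins, not the
actual system described above):

```python
class Basket:
    def __init__(self, tax_rate=0.20):
        self.items = []                  # list of (name, price) pairs
        self.tax_rate = tax_rate

    def add(self, name, price):
        self.items.append((name, price))

    def remove(self, name):
        self.items = [(n, p) for n, p in self.items if n != name]

    def total(self):
        subtotal = sum(p for _, p in self.items)
        return round(subtotal * (1 + self.tax_rate), 2)

# Tests like these run in microseconds and only exercise the API contract,
# so they stay valid if the back-end storage is swapped out.
def test_tax_is_applied():
    b = Basket(tax_rate=0.20)
    b.add("book", 10.0)
    assert b.total() == 12.0

def test_remove_is_by_name():
    b = Basket()
    b.add("book", 10.0)
    b.add("pen", 2.0)
    b.remove("book")
    assert b.total() == 2.4
```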

~~~
rpedela
Based on my very limited understanding of your problem from your brief
description, it seems like thorough and properly designed feature tests would
have achieved the same thing. In other words, if there is a button in the UI
for removing items from the basket, that could have been heavily tested with
feature tests instead of unit tests. And those feature tests would serve well
if you need to completely rewrite the code at any layer that implements the
"remove item from basket" button.

~~~
barries
There are several issues with relying solely on feature tests.

Feature tests often need to span large parts of an application, so there is
often a significant amount of overhead (both code and test time) in repeating
scenarios that are identical except for one value.

Feature tests often can't test the corner cases of internal code. For example,
one hallmark of quality software is that it degrades gracefully in the
presence of unexpected inputs. So, while the UI might prevent out-of-range
values, programmers often choose to also check value ranges at, for example,
the top of a stored procedure. This means that you can't test that code with
an app-level feature test because a correct UI won't let you enter values to
trigger the stored proc's failure case.

Another big issue is the combinatorial explosion. If you have a processing
pipeline, like filters in a sound or image processing app or validation and
authorization checks in a line-of-business app, the number of configuration
and data values that need to be tested for each stage multiplies together if
you only feature test. Unit testing allows you to make sure each of the
stages works "well enough"[1], then you can use far fewer integration and
feature tests to make sure that the stages cooperate properly and that system
requirements are met (two overlapping, but different, concerns).

[1] However the engineer, team or industry defines "well enough".

[Edit: split up the wall 'o text]
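The combinatorial point in miniature, as a Python sketch (the three stages
are invented stand-ins for real filters): feature-testing every combination
costs the product of the per-stage cases, while unit-testing each stage keeps
the cost additive.

```python
def clamp(x):   return max(0, min(255, x))   # stage 1: range check
def invert(x):  return 255 - x               # stage 2: invert
def halve(x):   return x // 2                # stage 3: downscale

PIPELINE = [clamp, invert, halve]

def run(x, stages=PIPELINE):
    for stage in stages:
        x = stage(x)
    return x

# Unit tests: each stage "well enough", in isolation (additive cost).
assert clamp(-5) == 0 and clamp(300) == 255
assert invert(0) == 255 and invert(255) == 0
assert halve(7) == 3

# One integration test checks that the stages cooperate, instead of
# re-testing every per-stage corner case through the whole pipeline.
assert run(300) == 0    # clamp -> 255, invert -> 0, halve -> 0
```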

~~~
rpedela
The internal code argument is weak in my opinion. What does it matter if a
stored procedure works 100% when, like you said, that code path will never be
executed by the user? There will always be bugs in code, so the goal, in my
view, should be to make the application as bug-free as possible from the
user's point of view, not literally bug-free, which is an unattainable goal.

The time-consuming nature of feature tests can be an issue, but it is often
mitigated by automated testing on commit, merge, etc. But not always, of
course.

I agree about the combinatorial explosion; however, it can sometimes be
mitigated by procedurally generating tests.
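One way to procedurally generate cases, sketched in Python (the validator and
its rule are hypothetical): enumerate the input grid in code instead of
hand-writing each test, with the expected result restated independently.

```python
import itertools

def accepts(quantity, in_stock):
    # Invented rule: an order succeeds only for a positive quantity we can fill.
    return 0 < quantity <= in_stock

def generated_cases():
    quantities = [-1, 0, 1, 5]
    stock_levels = [0, 1, 5]
    for q, s in itertools.product(quantities, stock_levels):
        yield q, s, (q >= 1 and s >= q)   # expected result, restated as a spec

failures = [(q, s) for q, s, want in generated_cases() if accepts(q, s) != want]
assert failures == []    # 12 generated cases, all checked in one loop
```

Frameworks offer the same idea natively (e.g. pytest's parametrization), but
even a bare loop like this turns N*M hand-written cases into a few lines.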

~~~
jdlshore
In a complex app, there are many code paths that _are_ executed by users that
are easier to test with unit tests than feature tests. In general, unit tests
are easier and faster to write than feature tests if (1) you're experienced
writing unit tests (there _is_ a learning curve) and (2) your application
design supports good unit tests.

Feature tests are fine for simple or small apps, but I wouldn't rely on them
for the bulk of my testing in any significant app.

------
morganherlocker
Just about everyone writes tests first. You have some sort of script that you
run against the real code you are working on to see if it is doing what it
should be doing. The only real difference between true "TDD" and everyone
else is that the TDD folks save this script to be run later.

I might be so far down the TDD trail that I have lost perspective, but do
people really write huge swaths of code without writing scripts to run the
code and see what it is doing? Those little scripts are tests, and with a tiny
bit of formatting, they can be made to output something consistently useful
like TAP[1], instead of inconsistent console logs. This whole debate seems
like strawmen fighting strawmen.

[1] [http://testanything.org/](http://testanything.org/)
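As a sketch of how little formatting that takes, here is a minimal TAP
emitter in Python (a toy, not a real TAP library; the checks are invented):

```python
def tap(checks):
    """checks: list of (description, bool). Returns a minimal TAP stream."""
    lines = ["1..%d" % len(checks)]               # the TAP plan line
    for i, (desc, ok) in enumerate(checks, 1):
        lines.append("%s %d - %s" % ("ok" if ok else "not ok", i, desc))
    return "\n".join(lines)

report = tap([
    ("parses empty input", [] == list("")),
    ("addition still works", 1 + 1 == 2),
])
print(report)
# 1..2
# ok 1 - parses empty input
# ok 2 - addition still works
```

Any TAP consumer can then aggregate those little scripts, which is exactly
the "tiny bit of formatting" upgrade over ad-hoc console logs.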

~~~
nawitus
I haven't heard about people writing some sort of scripts to run "against" the
real code. Most people do manual integration testing by running the
application, and doing some light debugging, and then when it starts to work,
write a few tests (if any).

~~~
taeric
Imagine if they could codify their manual integration tests in such a way that
they could be automated? Not necessarily a unit test, but still a test. Likely
a valuable one.

~~~
nawitus
Yes, obviously we do that, but only after the feature is working. It's not
practical to write it before implementing the feature, at least with the
stack I'm currently working on. We use Selenium+Protractor for integration
tests, and these tests are implementation-specific.

------
overgard
I'm generally for testing, but I have noticed a phenomenon that happens a
lot: the tests start influencing the actual design, and not in a good way. To
get the tests working, you end up having to write a lot more infrastructure
and abstraction to support them. So you end up with 6 classes where one would
do, because you end up needing an extra interface, and then an impl, and then
mocks, and dependency injection, and on and on. Not to mention extra dev
infrastructure like mock REST servers and so on. This seems more prevalent in
languages where the type system is pretty primitive and concise code isn't as
valued (i.e. Java). It seems like mostly a non-issue in dynamic languages
like Python and Ruby (where, honestly, I think the tests are way more useful
anyway).
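The "non-issue in dynamic languages" point in miniature: in Python a
collaborator can be swapped at test time by plain assignment, with no extra
interface, impl class, or DI container. The function names below are
invented stand-ins.

```python
def fetch_price(sku):
    raise RuntimeError("would hit the network")   # real client would go here

def quote(sku, markup=1.1):
    return round(fetch_price(sku) * markup, 2)

def test_quote_without_network():
    global fetch_price
    original = fetch_price
    fetch_price = lambda sku: 100.0               # one-line "mock"
    try:
        assert quote("ABC-1") == 110.0
    finally:
        fetch_price = original                    # restore for other tests

test_quote_without_network()
```

In Java-style code the same swap typically requires an interface, a second
implementation, and wiring, which is where the 6-classes-for-1 effect comes
from.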

To me, the problem is that each line of code should always be treated as a
cost. More code is more cost. And if you're writing 6 classes where one or two
could do, you're creating a lot of extra code. And we'll just ignore the time
spent writing the tests and assume that it'll pay itself back in the future;
I'm just talking about the extra maintenance cost of having 6 classes hanging
around and the extra time it takes to run those tests every time you compile.

I guess what it comes down to is not testing for dumb things. You should be
writing tests against stuff you probably think could break. I'm not really big
on the idea of 100% test coverage. I think once you're testing "most" of the
code the cost/benefit of way more tests is mostly just in satisfying the OCD
types.

~~~
lectrick
> To get the tests working, you end up having to write a lot more
> infrastructure and abstraction to support it. So you end up with 6 classes
> where one would do, because you end up needing an extra interface, and then
> impl, and then mocks, and dependency injection and on and on.

I have found this to be true only when having to work with code that was not
written with TDD.

For example, if you're writing a new Rack middleware, it will be EXTREMELY
easy to TDD and unit test. The entire Rack design (while not devoid of
criticism) is sort of ingeniously simple and was clearly written with TDD/unit
testing in mind as a design priority.

However, if you're interested in TDD'ing a Rails controller without loading
your entire stack... Good luck with that. But Rails was NOT written with TDD,
nor unit testing, in mind. It's evident right in the Rails lingo: a "unit
test" in Rails is (was?) a test of the model code (and everything underneath).
A "true" unit test doesn't test dependencies (so for example, an ACTUAL "unit
test" of model code would abstract out the entire database and mock out all
the CRUD operations).
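A sketch of a "true" unit test in that sense, in Python rather than Ruby (the
model, repository interface, and rule are all invented): the database is
abstracted behind a fake, so only the model logic is exercised.

```python
class User:
    def __init__(self, repo):
        self.repo = repo                       # CRUD dependency, injected

    def rename(self, user_id, new_name):
        row = self.repo.read(user_id)
        if not new_name.strip():
            raise ValueError("name required")
        row["name"] = new_name.strip()
        self.repo.update(user_id, row)
        return row

class FakeRepo:                                # stands in for the database
    def __init__(self, rows): self.rows = rows
    def read(self, user_id): return dict(self.rows[user_id])
    def update(self, user_id, row): self.rows[user_id] = row

# The test never touches a real database, yet verifies both the validation
# rule and that the write went through the CRUD boundary.
repo = FakeRepo({1: {"name": "old"}})
assert User(repo).rename(1, "  Ann ")["name"] == "Ann"
assert repo.rows[1]["name"] == "Ann"
```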

So I guess I could paraphrase what you're saying as, "If you have to TDD and,
in the course of this particular development work, you have to work with code
that was _not_ written via TDD, you're going to have a bad time."

------
arthurjj
An issue I have with TDD is that the time spent supporting it is often better
spent thinking about the design.

~~~
Pacabel
Is that a problem with TDD, or is that just a problem with teams choosing to
use bad tooling and processes?

I've seen teams waste a lot of time trying to get various CI systems
integrating with various TDD frameworks, generating lots of reports that
nobody really ever looks at. Or they waste a lot of time writing elaborate
frameworks that build upon some existing TDD framework.

But all of that has basically nothing to do with TDD. It has nothing to do
with the act of writing tests first, and developing the actual code based on
the success or failure of those tests.

~~~
crdoconnor
Writing tests takes time and is an investment; it should be treated as such.
Too many zealots try to pretend that it's a zero-cost design habit.

~~~
skj
I think that they would argue it's a net-positive design habit.

~~~
crdoconnor
The OP mentions a number of antipatterns that only come about due to TDD. So
no, not necessarily. Not unless it's done correctly and in the right
circumstances.

------
vjeux
Jest[1], the JavaScript testing framework used at Facebook, attempts to
lessen two of the mentioned impacts of TDD:

> TDD works best with fast tests and rapid feedback. In search of speed, some
> people use mocks in a way that locks their production code in place.
> Ironically, this makes refactoring very difficult, which prevents designs
> from being improved.

Jest automatically generates mocks for you. This reduces the cost invested in
writing mocks and therefore lessens the burden of refactoring your code. But
if you change the interface of your module, the generated mock will change
and break all the tests that depend on it. Which is good!
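Jest itself is JavaScript, but the same idea exists elsewhere. As a rough
Python analogue (not Jest, and the payment function below is invented),
`unittest.mock.create_autospec` derives a mock from the real signature, so an
interface change fails loudly instead of passing silently:

```python
from unittest.mock import create_autospec

def charge(card, amount, currency):        # real function: talks to a gateway
    raise RuntimeError("talks to a payment gateway")

# The mock is generated from charge's actual signature.
fake_charge = create_autospec(charge, return_value="ok")
assert fake_charge("4242", 100, "USD") == "ok"

# A call with the old arity (say, before `currency` was added) is rejected:
try:
    fake_charge("4242", 100)               # missing `currency`
    assert False, "autospec should reject this call"
except TypeError:
    pass
```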

> Also in search of speed, some people make very elaborate dependency-
> injection tools and structures, as well as unnecessary interfaces or
> classes, just so they can mock out dependencies for testing. This leads to
> overly complex, hard to understand code.

By mocking at the module-system level, most of your code can be mocked
without significant refactoring.

[1] [http://facebook.github.io/jest/](http://facebook.github.io/jest/)

------
jacknews
"TDD doesn't create design. You do."

Sorry, but this is akin to the fallacious "guns don't kill people" argument.
Well, of course they don't, but they certainly make it easier.

So does TDD make good design easier, or harder? I'm afraid that, for me, the
article didn't provide an answer, or even anything but the most superficial
insight into the issues.

For example, coding is often about exploring a problem, and in these cases I
think TDD is a hindrance. But whatever the overall argument, TDD certainly
changes how things are designed, and what kinds of things are easy or
difficult to achieve. Indeed, writing tests can sometimes be more challenging
than the code itself.

Now that would be something worth further discussion.

~~~
benihana
> _But whatever the overall argument, TDD certainly changes how things are
> designed_

I think you missed this point completely, and you made it hard to ever get it
when you compared TDD to a highly contentious issue like guns - rather than
this just being an issue of coding methodology, it's now a political issue as
well.

Regardless, TDD doesn't change how things are designed. TDD is a methodology,
a way of doing things. It's a tool - it can't change things without human
input. TDD does not change design; it can't. It only amplifies a person's
ability to do that by making poor abstractions and weak boundaries clear, and
that is the power of it as a methodology.

~~~
the_af
First, let me say I agree with you. TDD is a methodology, a tool, and it's
been found helpful by many practitioners.

That said, the problem is that TDD is often presented as The One True Way.
Even people who should know better say "I'm not saying TDD is the only way,
but if you don't do it, show me what you use that is as good". (I'm looking at
you, Uncle Bob). This has the unstated assumption that TDD is the best tool
available, and that you're doing something wrong if you don't use it, and that
you have the burden to explain why you do something else. Again, _even_ people
who claim "use the right tool for the right job" will usually look at you with
skepticism if you claim you've found a job for which TDD is not good, somewhat
negating their claim that they're open-minded about it.

I've heard the following (somewhat contradictory) claims about TDD:

- "Every single line you write must be covered by TDD, otherwise you're doing
it wrong."

- "TDD is about design, not about testing. Testing is merely a nice side-
effect."

- "You cannot refactor safely without TDD."

- "If your test isn't red first, you're doing it wrong."

- "TDD makes your code more testable."

- "TDD can drive design and lead you to a solution."

- "A reasonable way to develop a mathematical function like Fibonacci is to
use TDD." (I can _never_ tell if the person writing this knows that's not the
case but forgets to explain it, or if they truly believe it. Either way, they
hurt the case for TDD.)

Each of these assumptions is either demonstrably wrong or at least arguable.
But TDD zealots (again, not talking about commenters on HN, but TDD's well-
known evangelists) will make you feel bad if you doubt a single one. IMO, this
is the single thing driving most of the backlash against TDD. If we instead
demote TDD from its privileged place to "one methodology, not formally
proven, that may or may not work for you", I presume the backlash will go
away. And so will the people who tell you that if you're not doing TDD, you're
doing development wrong.

~~~
SoftwareMaven
Software development has a long and infamous history of The One True Way to
Write Code(tm). Object orientation, agile, dynamic typing, and TDD[1] have
all had their moments where it was "known" that they were the only way to
efficiently
create quality, maintainable software. All of them have their place, but none
have managed to maintain the level of importance their evangelists promoted.
I've often wondered why software development is so prone to evangelical
thinking. I think it is related to egos and the relative immaturity of the
field.

I've gotten to where I filter out the value from the hype. There are important
lessons to learn from the hype, but my experience is people become more
concerned about the religious practice than the actual software.

1. TDD is simply the latest. And, like the others, TDD will have a long-
lasting impact on software development. Eventually, reasonable testing will be
part of the collective software culture (it's getting closer, but not quite
there), and tests will be written for most projects. At that point, the TDD
evangelicals will have nothing but strawmen to fight against, and the next
religion will form.

------
SoftwareMaven
It's funny that many people here seem to think TFA is a criticism of TDD,
which is telling, IMO. It seems many people feel that TDD is _always_ a net
benefit to the software development process, regardless of how it is
implemented. I saw the same thing with agile development and object oriented
programming: just follow the buzz and software will be better.

The author is not arguing against TDD. Instead, the author is simply stating
it is possible to do it poorly. Everybody should be looking at every process
they implement and deciding if it provides value and how it can be done
better. Doing anything else is cargo cult development and doesn't lead to
better software.

------
tszming
The last sentence of the article actually agrees with what DHH wanted to
say :)

>>So pay attention. Think about design. And if TDD is pushing you in a
direction that makes your code worse, stop. Take a walk. Talk to a colleague.
And look for a better way.

~~~
jdlshore
Author here. I agree with DHH that people have created bad designs in the name
of testability. I disagree with his conclusion that TDD should be thrown out,
or that slow end-to-end tests are an adequate replacement.

By "stop," I meant, "stop programming for a bit and think about a better way
to design," not "stop using TDD."

(I might agree that TDD is a bad choice for typical Rails apps, though. That's
a pretty narrow case, and it says more about Rails than TDD.)

