
Is TDD dead? - shutton
http://martinfowler.com/articles/is-tdd-dead/#
======
phpnode
TDD is not dead, but hopefully the practice of foisting rigid, prescriptive,
quasi-religious software development methodologies on creative individuals is.

TDD, like all of these things (eXtreme Programming, agile), has good points
and bad ones. Listen to what these people have to say, and then take what works
best for you.

To me, TDD is about writing my code and my tests at the same time, not test
first. As soon as I write just enough code to do something, I'll write a test
to see if it actually works. I'll then expand on that test and perhaps add a
few more as I refactor that piece of code.

This means I still get the benefits of exploratory programming, unconstrained
by test-first, but I also get the security of knowing that my changes didn't
break anything. It feels very natural to develop this way: as the code base
grows, your confidence in its quality (lots of passing tests) grows with it,
rather than the inverse. Also, a fast test suite which runs on file save is
_addictive_; I cannot imagine returning to F5 Driven Development now.

~~~
analyst74
I can't overstate how much I dislike the quasi-religious part of TDD. Trying to
establish a logical argument about trade-offs with someone who "believes" in TDD
ranges from difficult to impossible.

The arguments always end up along the lines of "You just have to be better/be more
confident/etc.", or in blunter terms, "I'm a better programmer than you,
that's why TDD works for me but not for you".

------
tunesmith
The embedded debate about "hexagonal" architecture is unfortunate.

I think one of the very common sticking points about refactoring and
testability is that when trying to demonstrate techniques, you have to pick
simple examples to isolate the techniques.

But in reality, those simple examples don't _need_ those levels of
abstraction or indirection. DHH's gist is a perfect example of demonstrating
something that - by itself - doesn't need to be refactored in that manner.

But it completely misses the point. Those refactoring techniques are essential
for when your codebase is 100 or 1000 times more complex. (Jim W. mentions
this explicitly in his youtube video.) But the problem is you'd never show
that complex a codebase in a demonstration, because the mental load in
understanding the prerequisites (before getting to the actual refactoring) is
too large.

I think another common sticking point is that people talk about refactoring to
make a codebase more testable. And that is a worthy aim and all, but making it
more testable isn't the end goal. Making it more testable forces you to
separate your concerns/layers, and _that's_ the goal. The reason that is the
goal is that as your code grows, you still want the product to be easily
changeable so it can be quickly responsive to customers' needs. And if you
need to switch out your data repository implementation on short notice due to
your site being suddenly popular, that'll be easier/faster if that
implementation is separated out.

~~~
mreiland
> Making it more testable forces you to separate your concerns/layers, and
> that's the goal.

This exactly. I've made this same point many times over the years: testing is
never the goal.

------
andymatuschak
I really enjoyed Gary Bernhardt's response to this discussion:
https://www.destroyallsoftware.com/blog/2014/test-isolation-is-about-avoiding-mocks

You should read the whole article, but by way of motivation:

> This post was triggered by Kent's comment about triply-nested mocks. I doubt
> that he intended to claim that mocking three levels deep is inherent to, or
> even common in, isolated testing. However, many others have proposed exactly
> that straw man argument. That argument misrepresents isolated testing to
> discredit it; it presents deep mocks, which are to be avoided in isolated
> testing, as being fundamental to it; it's fallacious. It's at the root of
> the claim that mocking inherently makes tests fragile and refactoring
> difficult. That's very true of deep mocks, but not very true of mock-based
> isolation done well, and certainly isn't true of isolation done without
> mocks.

> In a very real sense, isolated testing done test-first exposes design
> mistakes before they're made. It translates coupling distributed throughout
> the module into mock setup centralized in the test, and it does that before
> the coupling is even written down. With practice, that preemptive design
> feedback becomes internalized in the programmer, granting some of the
> benefit even when not writing tests.

------
zwieback
Inside DHH's arguments is also a general argument against what I call
frameworkitis - the desire to overgeneralize an application in the vague hope
that the code will get used in a different way in the future. In the middle of
my career I was prone to that, but in the last 5 years or so I've been trying to
resist that impulse. I now try to just write the application, and if another
similar application comes around I might pull out reusable portions. On the
whole this approach has been a win for me.

Not that Beck or Fowler are advocating anything different but I think DHH is
the one with his finger on the pulse of current development trends.

~~~
encoderer
I'm a full YAGNI convert as well. And I've developed a strong belief that some
amount of duplication is almost always better than the _wrong_ abstraction.

~~~
codeulike
_some amount of duplication is almost always preferred to the wrong
abstraction_

This. This concisely expresses something I've learnt in the last few years but
have struggled to write down. Generalising is great until you generalise in
the wrong direction, then it creates extra complexity in all the other
directions.

~~~
mreiland
I agree, see my reply to the post you were replying to.

------
politician
It seems like there is a general realization that TDD is useful in
increasingly narrow contexts: creating business logic where a generally
accepted architecture already exists.

In other words, the controllers of typical web apps, which AFAICT is where the
technique originated and where a lot of the introductory examples come from.

It's not a silver bullet, but what is?

~~~
Touche
> It seems like there is a general realization that TDD is useful in
> increasingly narrow contexts

There is? I haven't seen this "debate" take place anywhere other than the
Rails community. I work on open-source and I can't imagine releasing a library
where its APIs are not unit-tested. That's a different context than the one
you mention. And I don't think I'm alone, every library I run into has
automated testing.

~~~
kenjackson
Unit testing does not equal TDD.

~~~
krisdol
Sure, but I usually write tests once I have good confidence in the API of my
class, even if that API isn't implemented yet. I tend to have more biases and
assumptions if I write the tests later, so I find this works better for me.
So in practice, my unit testing often ends up being TDD.

------
shutton
Some interesting conversations between DHH, Kent Beck and Martin Fowler.

DHH can be quite outspoken at times but I think he's right to have this
discussion and to have it in public so we can all learn something.

~~~
zwieback
I agree, I was negatively biased against him based on what I'd read in the
past but he comes across as the one with the interesting ideas and passion
around them. Beck and Fowler deserve respect but I think DHH is making a very
valid point.

------
tunesmith
I'm interested in how Martin and Kent say they don't actually use a lot of
mocking.

Let's say you have production code where A will call B, and B will call C.
Basic layer separation. It might be a controller calling a service method,
which calls a dao.

If you want to test B, then in my mind, that means you want to write a test
that would call B (similar to how A does). And then since you don't want C
confusing things, you'd create a mock of C (returning a canned result), and
assign it to B.
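
Concretely, the setup I have in mind is something like this toy Ruby sketch
(all the names are made up):

```ruby
# Toy layering: B holds business logic and gets its collaborator C
# injected, so a test can hand it a canned stand-in instead.
class ServiceB
  def initialize(c)
    @c = c
  end

  def perform(id)
    @c.lookup(id) ? :found : :missing
  end
end

# A hand-rolled mock of C with a canned result -- no framework needed.
FakeC = Struct.new(:result) do
  def lookup(_id)
    result
  end
end

b = ServiceB.new(FakeC.new(true))
b.perform(42)  # => :found, without ever calling the real C
```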

So... how would you test B without mocking C? Are Martin and Kent simply
content to let B call C, which would imply that it is an integration test
rather than a unit test? Or is there some other design technique they're
using?

~~~
mreiland
When you test with mocks, most of the time you're not testing reality, you're
testing your theory of reality.

Think of it like this: How many times have you deployed an app in production,
or against production data, only to have it crash and burn because something
somewhere in the data wasn't what you expected? No one would really be
surprised by it happening, I think most of us have experienced it a time or
two in our careers.

Here's the thing: no seasoned developer is going to recommend that you write a
migration process, create a DB with purely dev-created test data, and, if it
passes, immediately push it into production/into the hands of customers. Most
experienced devs would tell you that's dangerous, and you really need to test
it against actual production data first.

In this case, the dev-created test data is a 'mock', and the production data
is 'reality'. In this context, no one would claim the dev-created test data is
enough to ensure proper functioning.

But many people will tell you that a mock object is enough to ensure proper
functioning. This is not true, and the investment required to make it true
tends to be a lot higher than people expect, for the same reason that
battle-tested code tends to be more complicated than most people would expect
is necessary to solve the problem.

That doesn't mean that mocking doesn't bring value, but because of the cost of
making a mock accurately mimic reality, you should reserve it for those times
when the alternatives are even more prohibitive.

This is why you'll see a lot of recommendations for using mocks to do things
like simulate failed connections at specific points in a protocol handshake.
Getting the connection to break at the correct time using an actual network is
prohibitively difficult. Getting a mock to do it consistently at the right
time is a lot easier to do.
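
For example, something like this (a toy Ruby sketch, not any real networking
library):

```ruby
# A fake connection that raises on exactly the Nth read, so a test can
# exercise the error-handling path at a precise point in a handshake.
class FlakyConnection
  def initialize(responses, fail_on:)
    @responses = responses
    @fail_on = fail_on
    @reads = 0
  end

  def read
    @reads += 1
    raise Errno::ECONNRESET if @reads == @fail_on
    @responses.shift
  end
end

conn = FlakyConnection.new(["HELLO", "ACK"], fail_on: 2)
conn.read  # => "HELLO"
# The second read raises Errno::ECONNRESET, deterministically.
```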

These are the sorts of situations that mocks were originally created to solve.
The issue is that somewhere along the line people started abusing mocks in
situations that did not warrant them, either because the cost of actually
getting the mock correct was too high, or because they didn't actually pay
that cost, resulting in a false sense of security.

I don't think any reasonable developer will tell you that mocks are worthless,
but I also don't think any reasonable developer will tell you that you should
be mocking a whole lot.

~~~
tunesmith
Well, you're more explaining why not to mock, rather than what they mean by
not mocking... but I have a hard time agreeing with this, for properly
designed code. Referring to my earlier example of B calling C, what if B's
business logic is complex, but makes no outbound calls except to C, and C (for
simplicity's sake) is also complex, but returns only a boolean result?

Mocking has real value there because you can control what C returns when
testing B. You might write tests for when C returns true, when C returns
false, when C returns null, and when C throws some general exception. There,
you're testing that B behaves correctly given all possible responses from C.
At that point, you've controlled the interface between B and C, and limited
the ways that C can respond. Because of that, you can be sure that if your
test for B fails, it's because of a problem with B.
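
In code, that might look something like this (a contrived Ruby sketch; B and C
here are stand-ins, not real classes):

```ruby
# B's logic with C injected; C's whole contract is one call.
class BizLogic
  def initialize(c)
    @c = c
  end

  def decide
    case @c.check
    when true  then :approve
    when false then :reject
    when nil   then :unknown
    end
  rescue StandardError
    :error
  end
end

# One-line stubs, one per canned response from C.
stub = ->(value) { Object.new.tap { |o| o.define_singleton_method(:check) { value } } }

BizLogic.new(stub.(true)).decide   # => :approve
BizLogic.new(stub.(false)).decide  # => :reject
BizLogic.new(stub.(nil)).decide    # => :unknown

boom = Object.new
def boom.check
  raise "C fell over"
end
BizLogic.new(boom).decide          # => :error
```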

If you instead don't mock C, then an integration test that exercises both B
and C (since B is actually calling C) becomes a lot more complex - probably
making it more than twice as hard to nail down a bug.

The trick there is that by focusing on the unit test, you've written a clear,
simple interface between B and C, when you might not have otherwise. I think
that's an example of how mocking can really help, and how it doesn't lead to
the nightmare of mocks-calling-mocks that a lot of people seem to criticize.
And rather than being only useful in cases like protocol handshakes, it seems
frequently useful, such as in wanting to mock a dao layer when testing a
business logic layer.

At any rate, that makes it sound like by not mocking, they're merely doing
multi-level integration tests. I just would have expected them to be more
focused on unit tests, which to my mind requires mocking if you have a multi-
layer application.

Jim W. made the same point about "not mocking" in his youtube video though...
and in the same breath said that he _does_ , however, mock at the boundaries.
Which makes me wonder if this is all just a terminology confusion. Because for
instance, if I'm testing a public method in business logic, I'm not going to
mock the private methods that it calls in that same business logic layer -
that just seems crazy to me. And yet I would still call that a unit test,
since it's still limited to one layer, if not to one single-layer-of-
abstraction method. I will mock the dao layer in those cases, though, because
that strikes me as a completely separate layer/concern. But I don't know,
maybe someone else would say that I "don't mock all that often".

~~~
mreiland
I didn't post that to give you something concrete to attack, I posted it
because you claimed to not understand what they meant when they said they
don't mock that often.

I will respond to one point you made because I feel it's the point you're not
understanding.

> Referring to my earlier example of B calling C, what if B's business logic
> is complex, but makes no outbound calls except to C, and C (for simplicity's
> sake) is also complex, but returns only a boolean result?

Theory and practice. In theory, the only thing you need to rely on is the
return value. In practice, that code also relies on the runtime
characteristics of the code it's calling, and this includes things like
unexpected slowness, as well as unexpected exceptions (depending on the
technology you're working in).

This is the entire point about a mock being a theory of reality, and getting
that right is difficult.

------
aikah
In my opinion, this debate has a lot to do with the language used, the tools,
the IDE...

Java/C#/etc. can be heavily assisted by the right IDE (test suite generation,
mock/stub generation...), so the cost of testing can stay light no matter how
big the codebase is.

In JS or Ruby, one actually has to write everything by hand. Therefore, one is
less tempted to write heavily decoupled code with a huge graph of function
constructors/classes. While Ruby has a lot of sugar for a wide range of
things, that's not the case for JavaScript.

So in my opinion, the cost of writing and managing test suites is higher in
languages such as JavaScript or Ruby, and therefore one is tempted to write as
little code as possible, sacrificing decoupling and sometimes testability.

In one hangout, DHH comes in with a snippet of code that is highly decoupled
and testable. But does Ruby encourage this kind of architecture? It's more
code to test and a bigger test suite to manage (and more tests to rewrite when
refactoring).

So isn't it more a language/tooling question than a TDD/mocks vs. no-TDD/mocks
question?

------
kragen
I'd really like to hear what Hansson, Fowler, and Beck have to say, but not so
much that I'm willing to sit through videos. The summaries are good (if you
click the "more..." links) but I'd really like to read transcripts. Are there
transcripts?

------
shawnps
I once talked to someone from GitHub who said that the way he sells people on
TDD is to show them the tests running automatically when they save a file. I
thought this was a pretty neat idea, and have used it on a couple of projects.
If you write Go for example, you can run fswatch . "clear && go test" in your
project's folder and get this result.

~~~
GFK_of_xmaspast
"TDD" is not "lots of little tests running all the time". You can (and I try
to) have the latter without doing the former.

------
jamieb
"TDD creates bad architecture".

What is good architecture? I subscribe to the idea that we are doing software
engineering and as such there are some "generally" understood principles such
as SOLID, the Law of Demeter, Cyclomatic Complexity, etc that provide
objective measures of "good" architecture (I apply SOLID all the way up the
architecture hierarchy not just on classes).

What I've noticed is that TDD results in code that scores well against these
measures, while code that scores well is easy to test (i.e. after writing the
code).

Therefore, I think the argument that TDD creates bad architecture is false.

About 25 minutes into the talk we get to the crux of DHH's complaint: Hexagon
is an alternative to Active Record (which he created), and the only reason
Hexagon exists is to allow TDD. Hexagon requires throwing away the really
useful code that is Active Record.

Hexagon appears to be an attempt to introduce sound software engineering
practices (SOLID etc.) into the Ruby world (with what success I do not know).
Active Record, and Rails in general, is really useful if what you want is what
it does, but sometimes it's not. The implied claim that Hexagon is a bad
architecture is false. The claim that Hexagon only exists to facilitate TDD is
false.

"Mocks returning mocks returning mocks"

I use mocks. Fowler and Beck said on the whole they don't use them, which
genuinely surprised me. They cited examples of code where the test actually
enforced implementation rather than purpose. I think that's probably how I
wrote tests for the first few years. Code that results in mocks returning
mocks returning mocks is code that violates the Law of Demeter. It's bad
code. It happens to be really hard to test, and it happens to be _really_ hard
to write tests first for. Universally, I've only ever seen tests like that
when the tests were written after the code. TDD doesn't produce code like
that, because it's easier to refactor it than to keep digging that hole.

Mocks returning mocks returning mocks is a symptom of _not_ doing TDD.

"My mind works differently... I have to write code first"

Spike. Problems that I don't know how to solve, I spike first (I write code
with no tests, or with tests only as drivers of execution). That's easy. The
hard question is, "Now I have all this code, do I have to throw it away and TDD
it?" That's pretty hard to stick to in a business environment. I choose to
write tests after for all those pieces of code that _already_ meet SOLID
metrics, and rewrite the code (using TDD) for the pieces that don't. The
pieces that don't are very difficult to write tests for afterwards, and they
also happen to be the pieces where I find bugs (for example, I'll cut and
paste a bit of logic and find it's wrong for one set of inputs).

"All code should have full coverage of automated tests"

All three agreed that this is the case. Fowler: "If you have a full suite of
tests I don't care how you got it [TDD or not]". I don't know about you, but
I'm still fighting this battle. I also have to deal with teams that have a
"full suite of tests" and 80% test coverage, but where every single one of
those tests simply executes code. No actual "test" occurs. Indeed, in one
particularly memorable case, I managed to delete 70% of the lines of code and
all the tests passed (including deleting the one line that was the main
purpose of the method). Approximately 90% of all the tests were complete
garbage: they reported success as long as the code didn't throw an exception.

~~~
mreiland
> I subscribe to the idea that we are doing software engineering and as such
> there are some "generally" understood principles such as SOLID, the Law of
> Demeter, Cyclomatic Complexity, etc that provide objective measures of
> "good" architecture

Cyclomatic Complexity is not a principle; it's a specific measurement of
_potential_ complexity. The Law of Demeter is a _guideline_. SOLID is an
object-oriented-centric set of principles.

These are three _completely_ different things.

You then go on to state using these makes software more testable. Since TDD
can only exist in "testable" code, TDD is good design because these
measurements/guidelines/principles are good design.

It's a non-sequitur of epic proportions.

I guaran-goddamned-tee you I can write a piece of software with less
cyclomatic complexity than the Linux kernel that is not nearly as well
designed. Conflating those two things is what it means to be a PHB.

CC is simply a measurement of a specific type of complexity, and like all
measurements, it means jack shit without context. For example, take the
measurement of 8 inches. Is that number high, low, or normal? Or put another
way, are we measuring a man's penis, a man's leg, or a man's hand?

 _this_ is why so much software turns to shit. This right here. The process by
which you make decisions. If it's a good process, you'll have a tendency to
make good decisions. If not, as illustrated by the post above, you will have a
tendency to make bad decisions. A series of mostly good decisions will result
in acceptable to good software. And the opposite results in unacceptable to
bad software.

You want to learn how to write good, stable software? Learn to examine your
thought process, and be explicit in your attempt to make good decisions. Be
willing, and able, to identify bad decisions, _why_ they're bad, and what you
should have done instead.

What you did was believe the conclusion (TDD is good design) and then
constructed the argument for it. DO NOT DO THIS. This is the stuff of bad
decisions and software design failures, and this will happen regardless of
which process you subscribe to.

~~~
CountSessine
I'm kind of shocked by how hostile your reply is! Not even 'passionate' -
it's condescending, presumptuous, and isn't especially constructive!

The only part of this post that I think really deserves a good response is

 _You then go on to state using these makes software more testable. Since TDD
can only exist in "testable" code, TDD is good design because these
measurements/guidelines/principles are good design. It's a non-sequitur of
epic proportions._

That's not really what he said. He was saying that essentially there is a sort
of design isomorphism - like a mathematical dual - between code test-ability
and other desirable properties like compose-ability, low coupling, re-
usability, and maintainability. The argument in favor of TDD says that
building code out of a TDD process that requires testing means making design
decisions that have this nice property of building highly compose-able code.

~~~
mreiland
Hostile? Or blunt with a bit of dramatic flair? You'll take your pick based
upon whether or not you agree with my position; that has nothing to do with
me.

> That's not really what he said.

It is what he said, here's his wording, verbatim

_What I've noticed is that TDD results in code that scores well against these
measures, while code that scores well is easy to test (i.e. after writing the
code). Therefore, I think the argument that TDD creates bad architecture is
false._

You're interpreting things that were not said.

The entire argument is a non-sequitur. It doesn't even matter whether you
agree with him; an honest evaluation of what he stated shows that the
conclusion absolutely does not follow from the argument.

The only thing that can consistently produce good design is good decision
making. CC doesn't tell you anything without context, SOLID can be misapplied
and is very OO-centric, and the Law of Demeter isn't even a principle, it's a
guideline. It's like saying 'prefer composition over inheritance unless it's a
clear win'. Ok great, but that doesn't actually result in a good design, it's
cautionary guidance on what tends to be the better decision.

 _none_ of these things necessarily results in good design, and none of these
things is _required_ for good design. Hell, even the idea of 'good design' is
nebulous and changes from one project to the next, and over time within the
same project. Good design in a mobile app, where energy is of the utmost
importance, is not the same as good design in a scientific application, where
correctness and verifiability are of the utmost importance.

The problem here is that you have yet another person coming to a conclusion
and then working backwards in order to justify it. This is the sort of thing
that consistently results in bad design, _regardless of how many acronyms you
follow_.

There is absolutely nothing in those ideas that intrinsically results in good
design, or even intrinsically avoids bad design. That too was a part of DHH's
point, one that a lot of people seemed to miss.

The conclusion does not follow because the conclusion came first. You
mischaracterizing me as angry doesn't change that, but it is another
indication of a flawed thought process (that the validity of the argument
somehow stems from me being angry or not). Which brings us full circle back to
the sort of thought process required for good decisions.

------
n1ghtmare_
This is a strange debate these days. For me TDD made me a much better
programmer. I'm not sure how I used to work without it. It saved me on many
occasions. It does lead to better quality (from my experience). Frankly I
don't see a single downside. Seriously.

Is it hard ? Sure.

~~~
fixermark
The most significant downside is opportunity cost; all code is eventually
throwaway code, and if you are doubling your authoring burden by writing code
and tests on a throwaway prototype, you are iterating half as fast as you
could be.

But in my personal experience, most programmers err _heavily_ on the side of
failing to test thoroughly something that will become a long-term solution,
not on the side of over-testing their throwaways.

~~~
jeremysmyth
It sounds like you write your tests based on the code you write rather than
its expected behaviour.

You can write a good set of tests when you know the initial requirements, and
not all requirements iterate at the same rate as the code. As you learn new
requirements, you write new tests. The old ones don't go away (nor do the
early requirements if they're good), even if you completely replace the
implementing code.

~~~
fixermark
> You can write a good set of tests when you know the initial requirements

Throwaway prototypes are part of the requirement discovery process.

------
suckprogrammer
Coming from the furthest thing from open-source - all we have are TDD
evangelists; we have 0 implementations. The one die-hard TDD practitioner I've
seen is so unproductive it's painful to the business. Is this just something
that's enjoyable to kick around?

------
crazytony
I don't know, that gist just seems contrived. I'm totally not a rails guy so I
have to ask: is that the only way you can implement that in rails land?

In (most)JS and/or python I would just monkey patch the save method and/or
constructor on the employee object to get the isolation and branch coverage.

I would probably need to do some of the same crazy abstraction for Java
though.
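
(As it happens, Ruby makes the same monkey patch easy. A toy sketch, nothing
Rails-specific - `Employee` here is a stand-in, not the actual model:)

```ruby
# Monkey-patching save on a single instance for test isolation:
# the class's real save is never touched for other objects.
class Employee
  def save
    raise "should not hit the database in this test"
  end
end

employee = Employee.new
saved_args = nil
employee.define_singleton_method(:save) { saved_args = :called }

employee.save
saved_args  # => :called, and the real save never ran
```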

~~~
npinguy
Nothing crazy required:

    public EmployeeController(EmployeeMapper mapper) { ... }

    public Employee create() {
        ...
        mapper.save(employee);
        return employee;
    }

Done.

All you do now is create a mock EmployeeMapper in your tests, and you can
verify just what you want in the Controller instead of anything else.

You've got:

* Single responsibility classes. The Controller manages employee creation
business logic; the Mapper is a dumb database wrapper. If you want to go even
further, use a Repository instead of a Mapper to introduce one more layer of
abstraction over the data model - that way you can slide in caching,
in-memory replacements, or what have you at will.

* Clean dependencies. There's nothing hidden about what this class needs to
get its work done.

~~~
crazytony
So the "damaged" class is not really damaged. It's what you'll wind up with if
you started with a small rails project and you had to scale it as it got
popular? hmmm.

------
zinxq
Much like Agile, there are good concepts in there, but it seemed particularly
prone to people taking things far too far (maybe you could even use the word
"too extreme").

Guice is a representative programmatic example. Reasonable idea but once you
trickle it into your code, it becomes a cancer quite quickly.

------
jedp
I clicked on this article thinking it was about Telecommunications Devices for
the Deaf, wondering what the alternatives are now that installed payphones
with TDD consoles are not to be seen anywhere. I must look into the smart- and
feature-phone alternatives.

------
andywood
Nope. I know people who are still using it.

I believe it comes down to the team. Who is on it, how the team works
together. If there are a lot of people on the team who enjoy working that way,
they'll use it. If not, they don't. It lives on.

------
progx
If TDD is the best thing on this planet, why do we have to discuss it? Why
doesn't everybody use it?

Answer: obviously, it is not the best thing on this planet.

Like "use the right tool to solve a problem", use TDD if it solves a problem.

~~~
kasey_junk
"If TDD is the best thing on this planet, why do we have to discuss it? Why
doesn't everybody use it?"

That's a pretty easy logical fallacy. For instance, I've encountered
developers who don't use version control. That doesn't imply that you
shouldn't use version control because everyone doesn't.

~~~
notastartup
if you use an IDE with local history support like PyCharm or WebStorm, you
can view every single save and every line of code you changed. It works just
like git diff.

~~~
jyu
Now what happens when you are working with one other person using only local
history support? What about ten people? What if someone is a bozo programmer
and screws up your code accidentally?

~~~
ithinkso
Do not generalize here, what if he is working alone?

------
ing33k
Don't miss this part of the video.
[http://www.youtube.com/watch?v=JoTB2mcjU7w&t=29m20s](http://www.youtube.com/watch?v=JoTB2mcjU7w&t=29m20s)

------
markrages
"living" vs. "dead" seems like the wrong dichotomy to me.

How about dialing back the religiosity and asking "is TDD useful?"

------
programminggeek
One reason Ruby makes it needlessly difficult to do great software
architecture and sane TDD is the lack of contracts or strong parameters.

It is easy to take for granted that when you have a function like this:

    function foo(int x, string y) {
        int count = x * 100;
        print y + " " + count.to_string();
    }

...that if you send in a string for x or an int for y the function will blow
up. And if you are using a compiled language, it will yell at you before you
even attempt to run the code.

In ruby the equivalent code would run just fine, even with generally bad input
and it is totally up to you (the programmer) to police yourself to write code
that doesn't send across bad data.

A Ruby programmer (and probably the same goes for PHP and Python) will end up
writing a fair amount of tests that simply ensure that the boundaries are
enforced properly and sanely in the code.

There is a lot to love about Ruby, but if you are going to write safe,
dependable, bug-free code, it is up to the programmer to enforce boundaries,
do validations, and so on, and then write tests that validate that they are
still correct over time.
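
For instance, a boundary-enforcing check and the kind of test it implies might
look like this (contrived sketch, names made up):

```ruby
# A boundary the language won't enforce, so the code (and a test) has to.
def format_count(x, y)
  raise ArgumentError, "x must be an Integer" unless x.is_a?(Integer)
  raise ArgumentError, "y must be a String" unless y.is_a?(String)
  "#{y} #{x * 100}"
end

format_count(2, "total")  # => "total 200"
# format_count("2", "total") raises ArgumentError --
# the check a Java compiler would have done for free.
```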

Part of the reason this argument isn't happening so much in other languages is
that there is maybe not as strong a culture of TDD elsewhere, but also simply
that some of the problems don't exist in Java, Scala, C#, etc. because of a
type-checking compiler and stronger boundaries between components.

A lot of the things that we consider "good architecture" are simply the way we
name things, the way we put files in folders, and where we place boundaries in
our code. We could write everything as a single file that ran sequentially and
jumped to different locations (which is largely what happens anyway), but to
make things reasonable and understandable we use patterns like MVC, or
composition based functional programming, or MVVM, or various OOP patterns,
and so on to express program logic and create logical, sensible boundaries in
our code.

If you see the world through the lens of naming things, file structure, and
conceptual boundaries, you will see this argument about TDD is more about how
strong the boundaries in our code should be and where they should live, not
whether TDD as a tool is a good thing or a bad thing.

Rails MVC sees the world as three tightly coupled things - model, view,
controller. And in that tightly coupled world (along with weak Ruby
boundaries), TDD is painful and probably not worth your time. DHH is 100%
right about that.

If you don't structure your project as Rails MVC does, TDD can be a very
pleasant way to build your project and have confidence as you change it over
time. Ruby is not the best tool for that job, but it can be made to work. In
general, the weaknesses of ruby's boundaries will mean this argument will keep
coming up over time and will never be solved.

------
fireWalker
TDD is good. I think "Test-First" is dead. Few places I've been at have been
able to do test-first.

Small groups trying to deliver on aggressive schedules waste too much time
with test-first.

Having a competent leader and skilled team that can follow SOLID principles
removes the need to be test-first.

TDD is not dead. Test-first probably is.

~~~
kens
Aren't those both the same thing? (Not being sarcastic; I'm genuinely
confused.)

http://stackoverflow.com/questions/3192090/test-driven-development-vs-test-first-development

~~~
ZoFreX
The way I was taught it, TDD is writing a test, making it pass, repeat -
the red-green-refactor cycle. Test-first could include "write 200 tests that
all fail, then write all the code to make them pass". Somewhat substantiated
by this answer:
http://stackoverflow.com/questions/334779/is-there-a-difference-between-tdd-and-test-first-development-or-test-first-prog
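
A single turn of that cycle, in a toy Ruby sketch:

```ruby
# Red: the test exists before the code it tests (run now, it would fail).
def test_adds_two_numbers
  raise "expected 5" unless add(2, 3) == 5
end

# Green: just enough code to make the test pass.
def add(a, b)
  a + b
end

test_adds_two_numbers  # passes; now refactor, then write the next test
```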

------
compsci_mofo
Is TDD dead? No, it just smells that way. Like all cargo cults, only the
clueless follow it religiously. You can spot them a mile off - with a book
they bought off pragprog, their whole test-before-breakfast mantra, and
usually their crappy, brittle, test-forged code.

------
andyl
In my experience, if you are a competent TDD practitioner, you have a leg-up
on those who are not. Sometimes a big leg up.

Yeah, it takes a long time to become competent.

------
moepstar
Betteridge says No

