
Test-induced design damage - petercooper
http://david.heinemeierhansson.com/2014/test-induced-design-damage.html
======
kasey_junk
I suspect my disagreements with DHH's last couple of blog posts have more to
do with what each of us has seen in the wild than with any actual
disagreement in principle.

For instance, in my experience I more frequently encounter places that have
way fewer tests than necessary, and no consideration of how to verify
requirements at all.

Further, in my experience, the GUI and database layers are the least
interesting parts of the systems I work with. They truly are parts that can
and do get swapped out with some regularity.

I suppose if I worked mostly on systems that were in essence GUIs on top of
databases with little logic in the middle, I wouldn't want to isolate those
concerns either. It would be more trouble than it is worth.

For instance, when I write small unixy command line utilities I very rarely
test anything in unit tests. What would be the point? I can easily define the
entirety of the specification in example tests that utilize the utility as a
black box. I still do it first though...
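For what it's worth, that style of black-box test can be tiny. A sketch in
Ruby, using the standard `wc` utility as a stand-in for a small custom tool
(the test names and scenario are illustrative, not from any real project):

```ruby
require "minitest/autorun"
require "open3"

# Black-box example tests: drive the utility only through its
# command-line interface, asserting on stdout and exit status.
# `wc -l` stands in here for a small custom unixy utility.
class WcBlackBoxTest < Minitest::Test
  def test_counts_lines_on_stdin
    out, status = Open3.capture2("wc -l", stdin_data: "a\nb\nc\n")
    assert status.success?
    assert_equal 3, out.strip.to_i
  end

  def test_empty_input_counts_zero_lines
    out, status = Open3.capture2("wc -l", stdin_data: "")
    assert status.success?
    assert_equal 0, out.strip.to_i
  end
end
```

Because the utility is exercised as a black box, the tests survive any
internal refactoring of the tool.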

~~~
enoptix
Is swapping out the database layer so common you really need a complicated
abstraction like the Repository model he mentioned? We've been running an app
for 3 years and have never dropped ActiveRecord or Postgres.

~~~
sanderjd
Spinning this around on you: I worked on a project where swapping data stores
around wasn't common _because_ of tight coupling with AR. We considered
swapping data stores around for different and evolving use cases and never did
largely because we had so much code that we felt was tied up so tightly with
AR that it would have been very expensive to swap. I think it definitely
decreases flexibility, but that it's pretty hard to know if you want more
flexibility ahead of time.

Pretty much all of these pattern discussions seem to be this way to me - "just
do it the simple way! YAGNI!" versus "crap this one time I _did_ need it and
it was difficult to change by then! Maybe I should design things more flexibly
from the start next time!". It's pretty easy to get burned going either
direction, and depends a lot on things like what the project is, what
organization is building it, and the level of success it ends up having. The
closer a project is to a simple-CRUD, small team/unproven-company, prototype
with limited success, the more sense YAGNI makes, and the further from each of
those criteria a project is, the more it makes sense to design for more
flexibility.

~~~
jarrett
> It's pretty easy to get burned going either direction

Quite true, though I'd argue that YAGNI is still true as a probabilistic
maxim. You'll make the "will I need it" decision many thousands of times in
your career. If you follow YAGNI consistently[1], it will help you more often
than it hurts, and you'll come out ahead in the long run.

[1] But nobody is saying you should ignore concrete evidence that you _will_
need something later. That's its own cargo cult. If there's good reason to
believe YAGNI doesn't apply in a particular case, don't follow it in that
case.

~~~
sanderjd
I think this is a dangerous line of thinking, but I suppose I wouldn't modify
it very much. What I would say is that YAGNI should perhaps be _weighted
higher_, but that the probability of it being wrong in particular cases
should be considered carefully.

------
Jtsummers
Can all this be summed up with "Avoid all dogma, even this."?

Any time we buy into a dogma at the expense of rationality, we lose. This has
been demonstrated throughout history in human interactions with each other
(via religion, politics, legal systems), the development of science and
technology (see Galileo, Copernicus, the 19th century US doctors ignoring germ
theory and killing a president).

Sometimes we create dogmas to try to move things away from bad ideas towards
better ideas. Dijkstra's "Go To Statement Considered Harmful" was one such
effort. Gotos, as used at the time, were fucking terrible. They were used
instead of higher-level constructs like if/then/else, for, do/while, and
function calls. But by the time I was in college (early 2000s) the refrain
was tired and wrong (or misapplied). Sometimes, in some languages, gotos can
in fact be very useful, so long as their use is chosen deliberately and with
care (see the C idiom of using gotos to jump to error-handling/reporting code
in functions).

In the end, nearly every development process runs the risk of becoming a
dogma. Avoid that. Study the process, practice the process, and reason about
where the process should actually be applied. And we already know that the
answer isn't "everywhere and every time".

------
programminggeek
I feel like MVC is being treated like the one true pattern to design your web
app with, and that's simply not true. Rails is MVC, and maybe that's all it
should be for what it is intended for. Other projects might not be a great
fit for such a simplistic view of the world, and maybe that means Rails is
not a great fit for projects that don't fit into the MVC abstraction.

In my experience MVP, MVVM, super-thin Sinatra APIs, hexagonal architecture,
functional programming, and other sort-of-weird approaches fit certain
projects much better than the standard Rails MVC approach.

Also, not every project is a web app and there are plenty of times where
various testing approaches make a lot more sense than they do in Rails. It's
too bad that a whole line of thinking about software quality is being
disparaged because it isn't a good fit for Rails as DHH sees it.

TDD is a useful tool in the right context. Maybe that context isn't Rails.

It seems unwise to be telling a lot of smart people who care about software
quality to "get off my lawn" so to speak, but I've never run a successful OSS
project as big as Rails, so I probably don't have a clue about how to lead a
community as big as Rails is.

~~~
ryanbrunner
I think one important thing to keep in mind is that TDD is not necessarily
synonymous with "software quality". In some cases it's a very useful tool for
ensuring the quality of your code, but quality isn't even the stated goal of
TDD, and a focus on TDD as the "one true path" to software quality ignores
that some things are more effective (not necessarily simpler) to test using
more of a "test later", integration-focused approach.

I agree that there are projects where a simplistic MVC approach doesn't
completely fit. That doesn't mean that every software project needs to be
built to the standards of the most complex software, or even that aspects of a
project that _do_ require this complexity can't be solved with a more
straightforward, simple MVC approach.

At the end of the day, I think the main message I get from DHH's recent
series of blog posts is that treating anything as a silver bullet or a
universally beneficial pattern is harmful - and this applies as much to
MVC-for-everything as it does to a complex, hexagonal architecture.

~~~
cwbrandsma
Bad programmers will write bad code no matter the methodology, pattern,
language, tooling, or best practice.

~~~
mwcampbell
That's not useful though. What makes programmers bad? In some cases, at least,
it's the methodology, patterns, or best practices they use.

~~~
collyw
More often than not it is lack of any methodology, patterns or best practices.

------
ascendantlogic
The overriding message here is one of pragmatism. TDD, like a lot of
methodologies before it, became gospel and people began practicing it in a
dogmatic fashion without thinking about the best way to apply the principles
to whatever problem is at hand. The spirit of TDD is that you have a safety
net of tests to protect you from making changes in class A and breaking
something in class B. If those are acceptance, integration or unit tests,
great. If the code is cleanly organized and readable, great. Don't let zealots
on either side of the aisle convince you to do anything beyond what makes
sense to solve the problem that is sitting in front of you.

~~~
Toenex
Well said. Methodologies should be treated as patterns and as such you need to
understand what they aim to achieve, the trade-offs in terms of risks and
benefits and most importantly how to adapt the pattern to what you are doing.
You open yourself up to problems when methodologies and patterns are adopted
without thought.

------
darrencauthon
Does anyone know of a public example of a Rails application that does testing
in the way that DHH says is good?

I'm tired of the talk talk talk talk talk talk of "proper" testing in Rails,
yet the examples always seem to be hidden away behind company firewalls. I've
only seen a couple Rails apps with Rails-Way test suites, and they were
nightmares that took many minutes to run. But I have seen dozens of Rails apps
written by opinionated Rails devs with strong views about what proper testing
was... and the apps had no tests at all.

~~~
techdragon
If your tests take 10 seconds, how much did you really test?

The point it sounds like he's trying to make is that if you say things like
"they were nightmares that took many minutes to run", you may be approaching
testing from the wrong point of view. He sounds like he wants to say "let the
tests take 5 minutes", and I agree with him; that's what CI is for. Commit
your code, mark the issue you're fixing, let CI tell you if it's done or not,
take your pomodoro break, coffee break, etc., then sit back down, pick back
up with your test results on the CI server, and repeat the cycle. A 5-minute
test suite is NOT A BAD THING...

If you think 5 minutes is terribly long, spare a thought for us deployment
engineers... my test suite involves building and tearing down entire VMs or
PXE-booted machines and, depending on what software is being built and tested
through deployment, can take an hour or more.

~~~
awj
> If your tests take 10 seconds, how much did you really test?

I see your point, but time-to-test is a horrible proxy for quality of tests.
Business logic isolated from external systems can run _incredibly_ fast, so
ten seconds worth of testing can mean an awful lot in that case. The nature of
TDD basically _demands_ that you structure your code that way to remain
productive. Otherwise it's like using a text editor that takes five minutes
every time you try to save a file.

That's my inherent frustration with this argument. Neither side is arguing
_for_ its methodology; each is arguing _against_ the byproducts of the
other's methodology.

~~~
karmajunkie
Excluding the acceptance tests (written with Capybara and Spinach), the test
suite for my current client takes less than three seconds on my machine,
including the run time (excluding the PhantomJS boot time) of the suite of
UI-exercising JavaScript tests, and they are nearly comprehensive, testing
every component contract from the client to the backend. The ATs run in a
little under a minute, covering the major integration points. There is very
little mocking in any of the suite, and no direct database access.

Testing hurts when you do it poorly or naively. I know because I've done it
both ways, and when I find something harder than it ought to be I invariably
find some point of coupling beneath the surface. When my design is good, my
tests are fast and easy. If you listen to DHH, you're going to have problems
testing - not because such problems are inherent to writing software, but
because he's already made decisions for you that are bad or highly coupled.
Don't fall for the straw man. There are better ways to do it.

------
gregwebs
Maintaining giant test suites and trying to keep them running fast is why I am
so glad to not be using Rails anymore. Dynamic languages don't scale well for
me because the testing is difficult to scale. With Yesod (a Haskell web
framework I help maintain) I have a fraction of the need for unit tests. The
compiler already gives me the equivalent of 100% code coverage for catching
basic errors. I can focus efforts on testing application logic and integration
testing.

~~~
sanderjd
Are you able to find consistent work building web applications with Haskell?
It seems that many organizations are reluctant to build on those sorts of
technologies (not quite sure what else I'm including - maybe OCaml?) for very
rational reasons - it is hard and expensive to find employees who are capable
of being productive in them.

~~~
nbouscal
There seems to be a very common perception that it is hard and expensive to
find employees who can be productive in less mainstream languages, but I
rarely see evidence to back that up. I can't speak for Greg, but in my
experience, hiring is hard in _every_ language, but not materially more so in
Haskell.

~~~
sanderjd
It's interesting, I meant my question to be about finding _companies_ using
Haskell or willing to hire contractors that build projects for them using it,
but of course I did take it in the hiring direction, so it's my own fault.
Rephrasing: Is it easy to find work using Haskell, despite the perception
(whether deserved or not) that it is difficult to hire for, which may limit
the number of companies willing to build in it?

~~~
nbouscal
It is very interesting, because the dominant narrative seems to be both that
it's hard to hire for and hard to find jobs in, which can't both be true
without massive communication inefficiencies, which I'm fairly sure don't
exist. My experience has been that it's somewhat harder to find a job writing
Haskell than it is to find a job in a mainstream language, but that it is
still very doable.

~~~
sanderjd
This doesn't seem inconsistent at all. It's a classic feedback loop - it is
hard to hire because most people don't want to invest in becoming experienced
in a technology that not many companies are using, and it is hard to find a
job because companies don't want to invest the resources in using a technology
that not very many potential employees are experienced in. Very similar to
social network chicken/egg problems. Seems to usually be solved by either
investment from one or more large companies with an interest (Sun, Oracle), or
a "killer app" (Web browsers, Rails, college students for Facebook). My little
theory here is woefully inadequate to explain Python's and Go's success, so
maybe another way out of the trap is "lots of people just really like it". Not
sure if that will or won't work for Haskell...

~~~
nbouscal
My point is that "hard to hire" implies more jobs than candidates, whereas
"hard to find jobs" implies more candidates than jobs. They can't both be
true. You can say things like "there are few candidates" or "there are few
jobs" in an absolute sense, by comparison to other languages, but that
actually isn't very relevant.

This is an oversimplified model because it doesn't take into account engineer
skill level, which actually does seem to be the primary problem. Companies
want skilled engineers, but it's hard to become skilled without having a job
in the language first. So we end up with several companies trying to hire
seniors, and several juniors looking for jobs.

~~~
taeric
Both "hard to hire" and "hard to find jobs" are satisfied by a disjoint set of
job locations to candidate locations.

That is, there could be plenty of X jobs in City Y, but that does little for
the X candidate in Z city. Flip candidate/job as desired.

~~~
nbouscal
That's true, and there does seem to be a disproportionate number of Haskell
jobs in Singapore, but overall I don't think it has a huge effect.

~~~
taeric
More likely you are just seeing a fairly common network effect, where once
you are in, it is easy to see many connections.

So, if there are some fairly good quantitative treatments of this, I'd be
interested. I suspect it isn't too shocking. Probably more than the parent
poster and friends think. Probably less than you do. :)

~~~
sanderjd
I actually really don't have much of a hypothesis at all. I asked my original
question out of pure curiosity, and my reply was along the lines of it seeming
very plausible for there to be both a shortage of candidates to hire and a
shortage of companies hiring. But I also think the opposite is plausible. I
think so far the most specific answer to my initial question of "is it easy to
find Haskell work?" is "yes, in Singapore it is".

------
leorocky
My least favorite example of tests damaging an API design is dependency
injection. Much of the time there is little need to resort to dependency
injection to create a good API with an easy-to-understand architecture, but
it gets abused because it makes testing easier. You can supply your API with
mock and stub classes at every turn if you use dependency injection
everywhere, but the consequence is a more difficult-to-use API that requires
the programmer consuming it to understand more arbitrary, unnecessary,
implementation-specific details.

For example, maybe I just want to open an encrypted TLS TCP socket to a
server. From a user perspective this could be really basic: you provide a
library's API with a server address, port, and handlers. It could be as
simple as a few lines of code. But the dependency injection version of this
might require creating an SSL factory, which requires an X.509 certificate
provider, which requires a certificate storage locator. Then instead of an
address you must provide an IP address factory and a protocol factory, which
requires a list of available protocol implementors. Then 200 lines later you
want to actually manage your connection, and you must provide a connection
manager and a byte buffer which itself involves tons of cruft.

Sometimes dependency injection is like a person walking around with their
organs hanging outside of the body. When two people want to make babies they
don't have to know low level biological mechanics of how sperm sends signals
to a ready to be fertilized egg. They don't have to read and learn pointless
documentation. They just insert the thing and everything usually works
although under the hood it is maybe one of the most complicated processes in
biology. That's how an API should work: making complicated things simple.

~~~
WickyNilliams
Could you not have the best of both worlds by writing a facade over the
public-facing API? I have used this approach in the past and it works well.
The facade just handles grunt work of wiring the myriad of types together, as
you described. It can be verified by blackbox testing or simply by eye (as it
should not contain any logic beyond instantiation).
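A sketch of that facade idea in Ruby; every class name here is a hypothetical
stand-in for the DI-heavy TLS API described upthread, not a real library:

```ruby
# Hypothetical collaborators a DI-heavy TLS library might expose.
class CertStore
  def cert
    "default-cert"
  end
end

class SslFactory
  def initialize(cert_store)
    @cert_store = cert_store
  end

  def build(host, port)
    "tls://#{host}:#{port} (#{@cert_store.cert})"
  end
end

# Facade: one obvious entry point that does the grunt wiring.
# It contains no logic beyond instantiation, so it can be verified
# by eye, while tests can still inject their own collaborators.
class SecureSocket
  def self.open(host, port, ssl_factory: SslFactory.new(CertStore.new))
    ssl_factory.build(host, port)
  end
end

SecureSocket.open("example.com", 443)
# => "tls://example.com:443 (default-cert)"
```

Callers get the few-lines-of-code experience, and the test suite can still
pass a stub factory through the keyword argument.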

~~~
TylerE
While there are valid use cases, this sort of thinking is why apps are no more
responsive today than they were 20 years ago... everything is running through
20 translation layers.

~~~
humanrebar
While that is a risk, it's not a necessity. Many of the classes in the C++
standard library are designed in exactly this way without runtime performance
issues. All of the wrapping and indirection get inlined at compile time.

------
mattgreenrocks
Maybe the real problem is that we have crappy tools for hexagonal
architectures, especially in Rails. Classic Rails style dictates that
ActiveRecord is Good Enough for your domain logic. This creates a sort of
framework lock-in: inheritance is one of the strongest forms of coupling
there is, especially when you inherit from classes you do not control. The
framework superclass is likely to be a relic of current-gen frameworks that
we will not tolerate in the future.

The technological way out is to use a Data Mapper pattern ORM to isolate the
domain logic and the persistence. But this approach won't catch on, because
Rails devs have tasted the simplicity of ActiveRecord and aren't about to do
more work to get the same result.

It is telling that many language communities eventually head towards
amalgamating a collection of really good libraries in a low-coupling manner.
This is still a fringe movement in Ruby, currently.

~~~
ryanbrunner
If ActiveRecord allows you to do less work for the same result, at least for
some less complicated applications, isn't that a good thing? I think that's
DHH's whole point - we shouldn't be pursuing some mythical perfectly testable
architecture where it doesn't make sense. If you can write clearer, more
concise code for less effort that doesn't fit into the purely separated,
easily testable TDD approach, is that really such a bad thing?

------
revscat
In Java, the most obvious example of testing affecting the design of a class
is the necessity of avoiding private methods in order to facilitate testing.
While there are ways around this -- reflection, PowerMock, probably others --
they all tend to be ugly and hackish.

This has an effect upon the design of classes, because the easiest path is
simply to make private methods package private. This is frequently not the
ideal design, and taken to its logical extreme means that you will have no
private methods.

I think unit testing is important, and do use it. The line for me, though, is
similar to DHH's here: when the drive for unit testing affects the design of
the software, that's when I tend to become less enamored.

~~~
dragonwriter
> In Java, the most obvious example of testing affecting the design of a class
> is the necessity of avoiding private methods in order to facilitate testing.

IMO, this is unnecessary and a failure to understand the _point_ of unit
testing. Unit testing means testing the _public interface_ of the
unit-under-test in isolation from other components, so there is no reason to
avoid private methods to facilitate testing: private methods are, _ipso
facto_, not part of the public interface of the unit under test; they are
called by methods in the public interface and tested by testing the methods
they serve. Making private methods public and directly testable makes unit
tests _more brittle_ and refactoring _more expensive_, which is exactly the
opposite of what you should be striving for with unit testing.
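A minimal Ruby illustration of that stance (the class and its logic are
hypothetical): the private helper is exercised only through the public method
that calls it, so it can be renamed or inlined without touching the tests.

```ruby
require "minitest/autorun"

# Illustrative only: the private helper is covered entirely via the
# public method that depends on it.
class PriceCalculator
  def total(amounts)
    amounts.sum + tax(amounts.sum)
  end

  private

  # Covered indirectly through #total; no test reaches in here.
  def tax(subtotal)
    (subtotal * 0.1).round(2)
  end
end

class PriceCalculatorTest < Minitest::Test
  def test_total_includes_tax
    assert_equal 110.0, PriceCalculator.new.total([40, 60])
  end
end
```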

~~~
Strilanc
I disagree. For example, a method to generate all the permutations of a
sequence is easy to get wrong and should be tested whether or not the library
using it exposes it.

Testing an internal method by itself, instead of indirectly through the public
API, gives you the same scope reduction benefits that testing a unit instead
of the entire program gives (but less pronounced).

Personally I think the solution is to scope unit tests into the thing they are
testing. So tests of a private method would be scoped to that method. That way
your decisions about what to test aren't constrained, though they can be
guided, by what is visible.

~~~
kerkeslager
> For example, a method to generate all the permutations of a sequence is easy
> to get wrong and should be tested whether or not the library using it
> exposes it.

There are three possibilities here:

1\. If your language or common utility libraries have a permutations()
method, you shouldn't be rolling your own permutations() method, because one
exists in libraries.

2\. If you're in an environment that doesn't have a built-in permutations(),
you should group these kinds of very generic, hard-to-get-right functions
into some sort of utility module (in which case the function would
necessarily already be public).

3\. If you're in a language that doesn't have a built-in permutations() and
permutations() is in the class which uses it, you have a very generic
function on a more specific class, where it has no business being, so it
should be moved to a utility class.

In all three cases, the solution isn't just "make it public". If you find that
you're just making something public to unit test it, this usually points to a
much larger problem with your design.

~~~
Strilanc
1\. Agreed. (Assume it doesn't.)

2\. Why am I making my library's utility methods public? It's a frob library,
not a generic utility method library. I don't want clients depending on my
utility methods. I don't want to support a separate utility library just to
avoid testing private methods. I would prefer not to take on an external
dependency for a single simple method. Having it private and tested is the
best tradeoff here.

3\. Agreed.
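In Ruby, at least, the private-and-tested tradeoff is cheap to take, since a
test can reach a private method with `send`. A sketch with hypothetical names
(`Frob` and its hand-rolled permutations helper are stand-ins for the "frob
library" above):

```ruby
require "minitest/autorun"

# Hypothetical frob library with a private, hand-rolled permutations
# helper worth testing on its own, even though clients never see it.
class Frob
  def best_arrangement(items)
    # Placeholder public behavior that depends on the helper.
    permutations(items).first
  end

  private

  # Easy to get wrong, so it gets its own scoped tests below.
  def permutations(seq)
    return [[]] if seq.empty?
    seq.flat_map do |x|
      permutations(seq - [x]).map { |rest| [x] + rest }
    end
  end
end

# Tests scoped to the private method, reaching it via #send.
class FrobPermutationsTest < Minitest::Test
  def test_matches_rubys_builtin
    ours = Frob.new.send(:permutations, [1, 2, 3]).sort
    assert_equal [1, 2, 3].permutation.to_a.sort, ours
  end
end
```

The method stays out of the public interface, yet its tricky cases are still
pinned down directly.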

------
adamors
> the simple controller is forbidden from talking directly to Active Record
> [..] This is not better.

It is. The controller layer should be as dumb as possible, it shouldn't
contain your (entire) application logic. It's a matter of single
responsibility if anything.

Also, I find it very sad that we're still discussing the usefulness of the
active record pattern. Other than convenience, it has none. It's a pain to
maintain an application that uses it once it reaches a certain level of
complexity.

And not just because of testability: it's a pain in the ass to replace or
fine-tune certain queries if you're calling active record methods in your
controller.

~~~
sanderjd
Honest question: what pattern for database access is better, and what are the
best tools for said pattern? I don't necessarily mean just in Rails, but
everywhere. ActiveRecord-the-ruby-library is incredibly mature and convenient
to use, and for better or worse, encourages active-record-the-pattern. The
repository pattern seems nicer in theory to me, but in practice, the best tool
(in ruby) to implement it with still seems to be ActiveRecord, and then I find
that I'm mostly delegating to the underlying AR object (because it already
does everything!) and wondering what I've really gained. I was hoping
DataMapper2/ROM[0] would be a more straightforward but high-functioning
replacement for AR, but it seems there has been no progress on it for quite
some time.

tl;dr: I'm wondering how you actually _do_ this. Firstly in Rails, but other
acceptable answers are "other technologies do it in this other way, which is
better than how Rails does it for these reasons".

[0]: [http://rom-rb.org/](http://rom-rb.org/)

~~~
adamors
I'm not using Rails (nor Ruby for that matter) so I can't comment on that
part, but I found the repository pattern to be really useful. Using it with an
active record is something I've seen other people do, and at least it gets the
active record calls out of the controllers. It's not an optimal solution of
course, but IMO it still beats plain active record.
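A plain-Ruby sketch of that arrangement, with an in-memory `UserRecord`
standing in for an ActiveRecord model (all names here are hypothetical): the
rest of the app talks only to the repository, so the persistence calls stay
out of the controllers.

```ruby
# In-memory stand-in for an ActiveRecord model.
class UserRecord
  @store = {}
  class << self
    attr_reader :store

    def create(attrs)
      store[attrs[:id]] = attrs
    end

    def find(id)
      store[id]
    end
  end
end

# Repository: the only object the rest of the app talks to about
# persistence. Swapping the backing record class touches one place.
class UserRepository
  def initialize(record_class = UserRecord)
    @record_class = record_class
  end

  def add(attrs)
    @record_class.create(attrs)
  end

  def find(id)
    @record_class.find(id)
  end
end

repo = UserRepository.new
repo.add(id: 1, name: "Ada")
repo.find(1) # returns the stored attributes hash
```

It isn't optimal (the repository mostly delegates), but it gives the
application a seam that plain active record calls in controllers don't.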

~~~
sanderjd
Are you willing to say what language you're using the repository pattern in
and what tooling you've found useful in supporting it? I think my use of DAOs
in Java struts projects was somewhat repository-pattern-esque, but regardless
of the frustrations I have with ActiveRecord, it is _way better_ than that
was, so I'm really curious what, if anything, is better than either of them.

~~~
adamors
I'm using PHP and Doctrine 2.

~~~
sanderjd
Thanks! I'll check it out.

------
enoptix
I echo his sentiment. Integration testing, especially when you have a JS
frontend, makes much more sense. I never saw the point of controller tests and
making sure a controller assigns variable @widgets with [widget] and all that
nonsense. An integration test will identify all those problems and then some.

~~~
sjtgraham
I agree. Instance variable assignment is an implementation detail, IMO it's
more useful to describe their behaviour with a request spec for example.

~~~
enoptix
Yes, exactly. We use request specs to test regular HTTP requests to regular
pages or our API. And then we use feature tests with Capybara and PhantomJS
to load the app, click through it, and make sure it loads as it would in a
customer's browser. This covers most of our application. And then we use
regular unit tests for anything not customer-facing, such as background
jobs. But these unit tests make up only a small fraction of our test suite.

------
blazespin
Decoupling is one of the fundamental tenets of development, and provides far,
far more benefits than TDD alone. If he thinks decoupling is about TDD, he's
missed out on architectures that can be easily fixed when bugs show up by
isolating causes in changed code, on being able to extend without modifying
core code (the open/closed principle), and on managing regressions in
general. How do you scale software to a team of developers without
decoupling?

The only argument I've ever seen against decoupling is performance, and it's
rare that argument makes sense in all but the most real time of applications.

~~~
ebiester
I don't think that DHH is arguing against decoupling, but rather that certain
decoupling practices encouraged by TDD interfere with the readability of the
application.

~~~
dlisboa
I've never seen a Rails application that suffered from too much decoupling. If
someone out there wrote one of those, call me because I have a job for you.
99% of all things written in Rails are monolithic messes of business logic
encrusted with persistence and response handling. So ridiculously coupled that
you don't talk to objects by themselves, but bring in a whole family of
resources to models and controllers that violate almost every SOLID principle.

The whole "You're not gonna need it" argument works until you actually do need
it. Which, unless you're not doing a good job, is going to happen. Then you
have no discriminated interface to pry your application blocks away from each
other and can't persist a model without dozens of unintended side effects.

There's no readability penalty to decoupling. The more you decouple the less
you need to read to understand an application.

------
ajmurmann
I don't understand why he has to equate TDD with the mockist approach without
clarifying that he is talking about the mockist approach and not TDD in
general. Pivotal Labs, for example, is obviously a huge proponent of TDD, but
has historically been hesitant about true, isolated, heavily stubbed and
mocked unit tests.

That makes me wonder if he just doesn't have a differentiated enough view of
TDD or if he omitted that on purpose to get more attention. I am also not
sure which answer would be more disappointing.

------
ch4s3
" but does so by harming the clarity of the code through — usually through
needless indirection and conceptual overhead"

This argument feels a bit thin and unsubstantiated for the general case. I
can see his criticism of hexagonal design applied to Rails, but he's using
that as a straw man to attack TDD. I think he could better criticise the
limitations of TDD by directly examining applications of RGR
(red-green-refactor) and other TDD principles.

~~~
sjtgraham
It's not really a straw man. Driving your application with tests at the unit
level doesn't make a lot of sense, in the case of a web app at least. The BDD
approach makes more sense to me. It's how I work and in my experience tends to
inform design a lot better.

~~~
ch4s3
I think saying hexagonal design is bad therefore TDD is bad qualifies as a
straw man.

Person 1 has position X. Person 2 disregards certain key points of X and
instead presents the superficially similar position Y. The position Y is a
distorted version of X.

Here X = "TDD is good", Y = "hexagonal design could be good for Rails", and
Y =/= X.

I'm no TDD zealot, I just think DHH's argument here is weak.

------
vinceguidry
I was unaware that people were actually trying to unit test controllers. That
to me just seems like a recipe for endless frustration. Mock out a web
request? Please don't.

Everything I've ever read about Rails refactoring indicates that your
controllers should be skinny, implying they don't need to be unit tested:
push all complex logic out into helper functions, lib classes, or models, and
unit test those.
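A framework-free sketch of that shape (hypothetical names throughout): the
model owns the logic and gets the unit tests; the controller only builds the
model, delegates, and chooses a response.

```ruby
# The model owns the complex logic, where it is trivial to unit test.
class Order
  def initialize(items)
    @items = items
  end

  def total
    @items.sum { |i| i[:price] * i[:qty] }
  end

  def free_shipping?
    total >= 50
  end
end

# Skinny controller: nothing here is worth a unit test of its own;
# an integration test covers the wiring.
class OrdersController
  def show(items)
    order = Order.new(items)
    { total: order.total, free_shipping: order.free_shipping? }
  end
end

OrdersController.new.show([{ price: 20, qty: 3 }])
```

With this split there is nothing left in the controller that would tempt you
to mock out a web request.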

~~~
jboggan
I found myself in that exact situation recently, struggling with finer points
of Capybara, trying to nail down critical behavior in my controllers . . .
then I realized, why had all of this crap crept into my controller in the
first place? I refactored a ton of things into the models where they really
belonged and ended up with much tighter code and better reuse of functions.
Now testing is way easier - the only thing my controllers are really doing now
is routing after certain conditions and serving up error/success messages.

Sometimes I feel like Steve Martin when I'm getting more sophisticated with a
framework . . . I've got a googlephonic stereo with a moonrock needle, but
maybe the problem is the shocks in my car:
[https://www.youtube.com/watch?v=Cjjsz14hL48](https://www.youtube.com/watch?v=Cjjsz14hL48)

------
conanbatt
TDD is supposed to affect design, in good AND bad ways. TDD doesn't claim to
produce the best design, only the most testable one. The first time I read
about TDD it basically said testability > clarity.

Succinct code whose behavior you can't verify is worse than more verbose code
whose behavior you can easily check.

I do think that specifically with Rails, tests become so plentiful that they
take long to run, which threatens the whole process. Balancing the weight of
model/controller/integration tests has bitten me before; in particular, I now
write fewer integration tests and more model tests, because integration tests
can be flimsy and an order of magnitude slower.

Since my first web programming job, in 100% of my projects the test suites
grew so big they took minutes to run, making me nostalgic for the speed of the
Java tests at my first programming job.

~~~
marcosdumay
> The first time I read about TDD it basically said testability > clarity.

Now you scared me. I never tried TDD, and if that's a required tenet, I never
will. This is completely upside-down.

Tests cannot verify that a program is correct.

~~~
conanbatt
Only formal proof can verify that a program is correct, and proof is so
cumbersome and expensive that it is reserved for rare critical operations.

Tests give you the ability to know how a certain code behaves in specific
circumstances.

Clarity makes it easy to understand the general case.

So if the clarity of the general case gets a little worse in exchange for
being able to test the outlier cases as well, the TDD philosophy would welcome
that trade. Or at least, that's how I understood it.

A crude, unpolished example of clarity vs. testability:

A)

    
    
      def complicated_algorithm(input)
        mod_input = Math.sqrt(input)
        mod_input = input / mod_input
        ...
      end

      def division_in_complicated_algorithm_test
        # stub the intermediate result so the division blows up
        Math.should_receive(:sqrt).with(1).and_return(0)
        assert_raises(ZeroDivisionError) { complicated_algorithm(1) }
      end
    

-----------------

B)

    
    
      def complicated_algorithm_testable_version(input)
        mod_input = Math.sqrt(input)
        mod_input = divide(input, mod_input)
        ...
      end

      def divide(a, b)
        a / b
      end

      def divide_test
        assert_raises(ZeroDivisionError) { divide(1, 0) }
      end
    

Overall, the point of writing tests first is that you don't hack the tests
together with complex dependency injection after the fact; to save work you
naturally write the easy test, at some expense to the final code.

In the example, version A is less verbose, having one fewer method, but its
test is more complex and fickle, because it was written to fit the existing
code.

It's not a fantastic example; we could argue that the two tests check
different things, and that division is too trivial to pull into a separate
method. The point is that the first version happens when you write code first
and test later, and the second happens the other way around. TDD advocates
testability over clarity.

~~~
marcosdumay
I'll argue that clarity makes it easier to understand the code. All cases of
it. Not just the general case.

Yet, I can see how one would want to sacrifice a small bit of clarity to gain
a big amount of testability. Thanks for the example.

------
badman_ting
I really identify with some of the points he's making, they're observations
I've made myself so it's nice to see someone with his clout bringing them up.

I wonder about the design thing though - our code is in some ways a document
of the circumstances surrounding it. Does it make sense to have it conform to
some Platonic ideal, which we corrupt when we alter it to make it more
testable? I'm really not sure about this, but I doubt it. Code ultimately
needs to work in a given set of ways and that's our primary concern with it.
Making the code "pure" (or just "easy to read" if you like) is a service to
other developers who come along later. So, the tradeoff is testability for
intelligibility. I can imagine a lot of scenarios where that tradeoff is a
rational one.

~~~
collyw
We were having a discussion about a collaborating group's architecture. The
guy said something couldn't be done because it would involve coupling two
components. Decoupling is a nice ideal to aim for, but surely functionality
comes first.

------
vjeux
Kent Beck's response: [https://www.facebook.com/notes/kent-beck/rip-tdd/750840194948847](https://www.facebook.com/notes/kent-beck/rip-tdd/750840194948847)

~~~
jshen
That was a terribly thought-through reply. It sounds like the sort of bullshit
you hear on Sunday morning political shows: "Let's pretend I didn't hear my
opponent's points, and throw back my own talking points, which fail to address
them."

------
j_baker
I don't really see DHH giving any arguments as to why designing for tests
leads to poor design decisions. I suppose I can buy the argument that there
are cases where this isn't true, but I can't think of any and he's not giving
any. I would argue that Angular is a good example of how designing for
testability creates good design decisions.

Secondly, I don't buy the idea that you should focus on integration tests over
unit tests. Integration tests are important, but they're also the most
expensive tests in terms of maintenance. Unit tests you can run with every
code submit. You can run them multiple times per code submit. Integration
tests take too much time for this to be practical.

In all, I'm tired of people making decisions based on what they're _against_.
DHH is just being negativistic and defining his code design strategy around
being against TDD and test-driven design. That's ok. But what design
strategies does he _support_? He starts giving more information about that at
the end, but I'm still left scratching my head and wondering what design
philosophy he's actually advocating rather than what design philosophy he's
bashing.

~~~
romaniv
_I don't really see DHH giving any arguments as to why designing for tests
leads to poor design decisions._

It results in pointless levels of abstraction that aren't used to abstract
anything in real code, but that destroy readability and break static analysis
tools. It also results in over-splitting entities to the point where they no
longer represent anything resembling the problem domain. Finally, it
encourages "old stuff plus this addition" design. (For example, using a
switch statement to cover seven different cases for days of the week, rather
than using a math formula.)
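
The parenthetical day-of-the-week example might look like this in Ruby (a
contrived sketch, not code from the comment):

```ruby
# The "old stuff plus this addition" shape: one branch per case.
# (0 = Monday .. 6 = Sunday)
def weekend_case(day_index)
  case day_index
  when 0 then false
  when 1 then false
  when 2 then false
  when 3 then false
  when 4 then false
  when 5 then true
  when 6 then true
  end
end

# The same rule expressed as arithmetic on the index:
def weekend_formula(day_index)
  day_index >= 5
end
```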

------
briantakita
Black-box (functional) testing is the way to go. I created a flow style of
testing which allows "Fast & Thorough Testing". It's a JavaScript & Jasmine
extension, but the concept can be applied to other languages.

[http://briantakita.com/articles/fast-and-thorough-testing-with-jasmine-flow/](http://briantakita.com/articles/fast-and-thorough-testing-with-jasmine-flow/)

The nice thing is the testing does not have a large effect on the
implementation, so you have the freedom to change the implementation without
the tests failing.

The test suite scales, since edge cases can be grouped together into a single
flow. This removes the extraneous runtime burden of recreating the same
context for each individual edge case.

I find that I don't need to be performing TDD as often.

------
vendakka
I certainly do agree about integration tests being important. I've also
started moving towards using a live database for my tests. I set up a postgres
database by copying over a master copy to a temporary directory and running a
postgres daemon from there. It takes ~100ms and with fsync turned off it makes
for snappy tests. If it starts getting to be slow I can always move it to a
ramdisk.

Here's a library I wrote for golang which wraps it all up in a convenient
package:

[https://github.com/surullabs/ghostgres](https://github.com/surullabs/ghostgres)

------
platz
This may be a naive comment, but is DHH simply being defensive about a
perceived movement away from Rails i.e. "decoupling from Rails"?

* edit - of course, he could be defensive and right; they aren't mutually exclusive

------
mwcampbell
I don't see why TDD proponents make a big deal about not touching the
database. It's as if they haven't heard of SQLite's in-memory option, in which
the database is just another data structure in RAM, which is all that their
extra layers of objects are. True, with that setup, you're using SQLite for
tests and, say, PostgreSQL in production. But is that any worse than using
your own mock objects in tests? What am I missing?
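
A minimal sketch of that setup in a Rails `config/database.yml` (the app name
is invented; `adapter: sqlite3` with `database: ":memory:"` is the in-memory
option being described):

```yaml
test:
  adapter: sqlite3
  database: ":memory:"

production:
  adapter: postgresql
  database: myapp_production
```

As the replies point out, the trade-off is that SQLite's semantics differ
from PostgreSQL's, which is exactly what's being debated here.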

~~~
mickeyp
SQLite is not <your actual rdbms>... it's SQLite. A database that does not
enforce column types and lacks most of the advanced features of a real rdbms.
Using SQLite will work great right up until the point where it won't and you
get your fingers burnt.

If you separate your concerns properly you won't need to mock the database
layer either. Mocking is just one part of the trifecta of good testing, along
with Stubbing and Faking.

For most things it would make more sense to _fake_ the database layer or
_stub_ the database layer in your "logic" layer.

However, if your application makes heavy use of the RDBMS then you should test
that layer too, in your integration tests and not your unit tests. Most places
that interact with an RDBMS treat it like a black box rather than a business
layer of its own. You really need integration tests to ensure things like
constraints and business rules are captured properly; most people never
bother.

The real problem with TDD and its methodologies isn't TDD itself; it's people
shoehorning about 10% of what proper testing should be into two narrow groups:
stuff that you can do with "unit tests" and "things we can mock." There's a
lot more to it than just those two things.
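
The fake/stub/mock distinction above can be illustrated with a hypothetical
in-memory fake (names invented): unlike a mock, a fake has real working
behavior, just a simpler implementation than the RDBMS-backed one, so
logic-layer tests can exercise it like the real thing.

```ruby
# A hypothetical in-memory fake of a repository. It actually stores and
# retrieves rows, so tests against it exercise real behavior without a
# database -- the "fake" corner of the trifecta.
class FakeUserRepository
  def initialize
    @rows = {}
    @next_id = 1
  end

  def save(attrs)
    row = attrs.merge(id: @next_id)
    @rows[@next_id] = row
    @next_id += 1
    row
  end

  def find(id)
    @rows.fetch(id) { raise KeyError, "no user with id #{id}" }
  end
end
```

A test hands `FakeUserRepository.new` to the logic layer wherever the real
repository would go; no stubbed query methods, no mocked connection.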

------
MartinCron
Kind of off-topic, but it is nice to read a thoughtful post that isn't over-
the-top flame inducing. Less emotional rhetoric than some recent TDD
discussions.

~~~
lmm
Seriously? It's full of snarky swipes at "true believers". I don't think DHH
has made a sincere attempt to engage with the other side; this is knocking
down a strawman again.

------
emsy
I would have a repository layer even if I didn't write any tests. I don't
want an ORM interfering with my "business", so this could just as well be a
post against ORMs. Still, I think the author is right: TDD does enforce an
architecture, and that architecture isn't necessarily the best one. TDD
zealots often run around preaching their cult, blind to any drawbacks, as if
there were none.

------
smrtinsert
I wonder if this isn't a maintainability issue in disguise.

I have never had a problem with unit tests or integration tests. As a rule I
never use mocks, and everything fits into one of those two areas: either you
have real data sources available (such as an in-process DB) or you make it a
module that can be easily unit tested.

It's clear he is against TDD first, and looking for reasons second. I feel
other factors are at play.

------
ranit
>> … it's a mistake to try to unit test controllers in Rails (or similar MVC
setups). The purpose of the controller is to integrate the requests from the
user with the response from the models within the context of session.

Well said … Now, if somebody from Salesforce.com could understand this and
stop forcing their customers to write these useless tests for the controllers.

------
jrockway
One thing I like about Java programmers is that they realize everything a
class depends on needs to be passed to that class's constructor. There's
really no way around it. Change concrete instances to interfaces and you have
a nicely testable class. Write an integration test, write a unit test:
they're both easy.
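
The same constructor-injection idea carries over to Ruby; a hypothetical
sketch (all names invented): dependencies arrive through the constructor as
anything responding to the right method, so a test can hand in a fake.

```ruby
# Constructor injection: the mailer receives its gateway rather than
# constructing it, so tests can substitute a fake for the real thing.
class InvoiceMailer
  def initialize(mail_gateway) # any object responding to #deliver
    @mail_gateway = mail_gateway
  end

  def send_invoice(address)
    @mail_gateway.deliver(to: address, subject: "Your invoice")
  end
end

# In a unit test, no real mail server is needed:
class FakeGateway
  attr_reader :sent

  def initialize
    @sent = []
  end

  def deliver(message)
    @sent << message
  end
end
```

The integration test wires in the real gateway; the unit test wires in
`FakeGateway` and inspects `sent`. Both are easy for exactly the reason the
comment gives.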

~~~
sanityinc
s/Java programmers/good Java programmers/.

------
agentultra
DHH is arguing from the perspective of a Rails developer working on a Rails
application. It's no small kingdom but to discredit TDD as a practice for all
software developers is short-sighted. There are enough counter-examples of the
benefits of TDD in my own experience to make the claim invalid as a universal
truth.

------
jbb555
Tests... good. TDD... sometimes (often) good.

But unit tests can lead to an overly abstracted design that harms the quality
of the code.

Test what you can with unit tests, but don't compromise your code to do so
when there are other ways to achieve a suitable level of testing.

------
michaelmior
Curious why DHH is in the title. This doesn't seem to happen with posts from
others well-known in the tech community.

~~~
Jtsummers
Because it's in the title on the site; however, it should be removed and
treated like a "- NY Times" suffix.

Just checked: it has now been removed here.

~~~
michaelmior
Ah, didn't notice this was on the title on the original site. Thanks for
pointing that out.

------
slavoingilizov
Rails, Rails, Rails. Everything is about Rails. David is a good guy, but I'm
starting to wonder if he's ever built a moderately complex system involving
integrations, message queues, several data stores, a handful of third-party
libraries and APIs, and deployed on more than 3 machines.

TDD is a tool to manage complexity. It's advice, not a recipe. Like any
technology, it isn't a substitute for thinking.

~~~
dhh
Everyone has their own definition of complexity. I make no secret of the fact
that a decade of developing Basecamp is where I draw my primary experience
from.

That system is small by web scale standards -- only 70 million requests/day,
1.5 terabyte of DB data, half a petabyte of file storage, two data centers,
and about 100 physical machines -- but probably still larger than 97% of all
Rails apps.

Also, plenty of data stores (memcached, redis, multiple MySQLs, solr), many
3rd party libs, job servers, integrations, and more.

So no, it's no Facebook or Yahoo or Google. But it also isn't a toy system,
except in the sense that we're still having so much fun playing with it.

~~~
slavoingilizov
I am not dismissing Basecamp. I am just saying that in a large portion of the
software world, applications are waaaay more complex than normal Rails apps,
and in that context TDD makes sense if only to manage complexity. Even if you
are not Facebook but, say, Airbnb: if their tests were not fast enough and
they could not trust them to make decisions, they wouldn't be able to deploy
in a reasonable time. And when slow tests lead to infrequent deployments,
that's when the real problems begin. (Airbnb is an arbitrary choice off the
top of my head, not anything specific.)

My gut feeling is that >50% of software development happens in those complex
apps and not rails apps. So dismissing TDD is just yet another extreme
viewpoint, which many people will unfortunately take for granted.

~~~
edu
AFAIK Airbnb uses Ruby and Rails to some extent. A current job posting lists
it as a requirement:
[https://www.airbnb.com/jobs/departments/position/2192](https://www.airbnb.com/jobs/departments/position/2192)

