
The Failures of "Intro to TDD" - davemo
http://blog.testdouble.com/posts/2014-01-25-the-failures-of-intro-to-tdd.html
======
akeefer
I think this is a great explanation of a lot of the obvious pitfalls with
"basic" TDD, and why so many people end up putting in a lot of effort with TDD
without getting much return.

I personally have kind of moved away from TDD over the years, because of some
of these reasons: namely, that if the tests match the structure of the code
too closely, changes to the organization of that code are incredibly painful
because of the work to be done in fixing the tests. I think the author's
solution is a good one, though it still doesn't really solve the problem
around what you do if you realize you got something wrong and need to refactor
things.

Over the years I personally have moved to writing some of the integration
tests first, basically defining the API and the contracts that I feel like are
the least likely to change, then breaking things down into the pieces that I
think are necessary, but only really filling in unit tests once I'm pretty
confident that the structure is basically correct and won't require major
refactorings in the near future (and often only for those pieces whose
behavior is complicated enough that the integration tests are unlikely to
catch all the potential bugs).

I think there sometimes needs to be a bit more honest discussion about things
like:

* When TDD isn't a good idea (say, when prototyping things, or when you don't yet know how you want to structure the system)

* Which tests are the most valuable, and how to identify them

* The different ways in which tests can provide value (in ensuring the system is designed for testability, in identifying bugs during early implementation, in providing a place to hang future regression tests, in enabling debugging of the system, in preventing regressions, etc.), what kinds of tests provide what value, and how to identify when they're no longer providing enough value to justify their continued maintenance

* What to do when you have to do a major refactoring that kills hundreds of tests (i.e. how much is it worth it to rewrite those unit tests?)

* That investment in testing is an ROI equation (as with everything), and how to evaluate the true value the tests are giving you against the true costs of writing and maintaining them

* All the different failure modes of TDD (e.g. the unit tests work but the system as a whole is broken, mock hell, expensive refactorings, too many tiny pieces that make it hard to follow anything) and how to avoid them or minimize their cost

Sometimes it seems like the high-level goals, i.e. shipping high-quality
software that solves a user's problems, get lost in the dogma around how to
meet those goals.

~~~
ArbitraryLimits
> When TDD isn't a good idea (say, when ... you don't yet know how you want to
> structure the system)

(Apologies in advance as I can't figure out how not to sound snarky here.)

Isn't that called "the design? And is there any meaningful way in which, if
"test-driven design" fails if you don't already have the design, it's worth
anything at all?

~~~
akeefer
Sure, you can call that structure the design, or the architecture, or whatever
you like. Either way, it's a fair question.

As a point of semantics: TDD generally stands for "test-driven development,"
not "test-driven design," though the article here does make the claim that TDD
helps with design.

To reduce my personal philosophy to a near tautology: if you don't design the
system to be testable, it's not going to be testable. TDD, to me, is really
about designing for testability. Doing that, however, isn't easy: knowing
what's testable and what's not requires a lot of practical experience which
tends to be gained by writing a bunch of tests for things. In addition, the
longer you wait to validate how testable your design actually is, the more
likely it is that you got things wrong and will find it very painful to fix
them. So when I talk about TDD myself, I'm really talking about "design for
testability and validate testability early and often." If you don't have a
clue how you want to build things, TDD isn't going to help.

If you take TDD to mean strictly test-first development... well, I only find
that useful when I'm fixing bugs, where step 1 is always to write a regression
test (if possible). Otherwise it just makes me miserable.

The other thing worth pointing out is that design for testability isn't always
100% aligned with other design concerns like performance, readability, or
flexibility: you often have to make a tradeoff, and testability isn't always
the right answer. I personally get really irked by the arguments some people
make that "TDD always leads to good design; if you did TDD and the result
isn't good, you're doing TDD wrong." Sure, plenty of people have no clue what
they're doing and make a mess of things in the name of testability. (To be
clear, I don't think the author here makes the mistake of begging the
question: I liked the article because I think it honestly points out many of
the types of mistakes people make and provides a reasonable approach to
avoiding them.)

~~~
couchand
I think you're spot on here - TDD is great as long as you're not too obstinate
about it. It's a trade-off, just like every interesting problem.

One point I'd like to draw out. _If you don't have a clue how you want to
build things, TDD isn't going to help._

This is exactly right. If you find yourself completely unable to articulate a
test for something, you probably don't really know what it is you're trying to
build. I think that's the greatest benefit to TDD: it forces you to stop
typing and think.

~~~
bphogan
Exactly. This is the whole purpose behind the "spike": make a branch, write
a crap implementation of some code to help understand the problem, put it
aside. Then go write the production version TDD style. Once you understand the
problem, you can use TDD to create a good design to solve that problem.

Sounds crazy, but this is how I do everything I don't understand. And my
second implementation is usually better than my first.

~~~
couchand
Or, in the words of Fred Brooks, build one to throw away. I'm always amazed at
how prescient he was.

Unfortunately, I find all too often that the spike project finds its way into
production for one reason or another. Now I only spike in Befunge.

------
usea
I have tried many times to do TDD. I find it extraordinarily hard to let tests
drive the design, because I already see the design in my head before I start
coding. All the details might not be filled in, and there are surely things I
overlook from the high-up view, but for the most part I already envision the
solution.

It's difficult to ignore the solution that is staring my brain in the face and
pretend to let it happen organically. I know that I will end up with a worse
design too, because I'm a novice at TDD and it doesn't come naturally to me.
(I'd argue that I'm a novice at everything and always will be, but I'm even
more green when it comes to TDD)

I have no problem writing unit tests, I love mocking dependencies, and I love
designing small units of code with little or no internal state. But I cannot
figure out how to let go of all that and try to get there via tests instead.

I don't think that I'm a master craftsman, nor do I think my designs are
perfect. I get excited at the idea of learning that the way I do everything is
garbage and there's a better way. If I ever learn that I'm a master at
software development, I'll probably get depressed. But I don't think my
inability to get to a better design via TDD is Dunning-Kruger, either.

I want to see the light.

~~~
sethkojo
Maybe you're overthinking it? It sounds like you're already doing the right
things.

 _All the details might not be filled in, and there are surely things I
overlook from the high-up view, but for the most part I already envision the
solution._

The design part of TDD is just the expectations. So if you were to test an add
function, for example, you might write something like

      assertEqual(add(5,2), 7)
      assertEqual(add(-5,2), -3)
      assertEqual(add(5,-2), 3)

before actually implementing the function. So here the design is that the add
function takes 2 arguments. That's it.
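(For completeness: the "green" step that follows is just the simplest
implementation that satisfies those expectations - a Python sketch, though the
assertions above are language-agnostic.)

    def add(a, b):
        # Simplest thing that makes all three assertions pass
        return a + b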

For other things like classes, your expectations will also drive the design of
the class -- what fields and methods are exposed, what the fields might
default to, what kinds of things the methods return, etc. Your expectations
are the things you saw in your head before you start coding. So it's pretty
much the same as what you do already. The benefit of TDD is in knowing that
you have a correct implementation and you can move on once things are green.
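For instance, expectation-first design of a small class might look like this
(a hypothetical Python sketch; the Invoice class and its fields are invented
for illustration):

    # Written before Invoice exists: these expectations pin down its
    # public shape - the fields it exposes, their defaults, its methods.
    def test_new_invoice_defaults_to_unpaid():
        invoice = Invoice(total=100)
        assert invoice.paid is False

    def test_marking_an_invoice_paid_flips_the_flag():
        invoice = Invoice(total=100)
        invoice.mark_paid()
        assert invoice.paid is True

    # A minimal implementation that satisfies them:
    class Invoice:
        def __init__(self, total):
            self.total = total
            self.paid = False

        def mark_paid(self):
            self.paid = True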

One thing that's easy to misinterpret is that TDD doesn't mean writing a bunch
of tests before writing any code... that's pretty much waterfall development.
TDD tends to work best with a real tight test-code loop at the function level.

~~~
ams6110
Incidentally for functions like that, if you have an environment that supports
a tool like QuickCheck[1], it's a great thing to use. "The programmer provides
a specification of the program, in the form of properties which functions
should satisfy, and QuickCheck then tests that the properties hold in a large
number of randomly generated cases."

1:
[http://www.cse.chalmers.se/~rjmh/QuickCheck/](http://www.cse.chalmers.se/~rjmh/QuickCheck/)
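Ports and analogues exist outside Haskell, too; in Python, for example, the
Hypothesis library plays the same role. A sketch of what property-based tests
for the add function above might look like (my example, not from the comment):

    from hypothesis import given, strategies as st

    def add(a, b):
        return a + b

    # Properties the function should satisfy, checked against a large
    # number of randomly generated cases
    @given(st.integers(), st.integers())
    def test_add_is_commutative(a, b):
        assert add(a, b) == add(b, a)

    @given(st.integers())
    def test_zero_is_an_identity(a):
        assert add(a, 0) == a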

------
richardjordan
The comments section today looks like a support group for
beginners/intermediates who struggled with TDD and gave up, and so want to
explain why it's all bunk. I get this. I am not a great programmer. I'm
self-taught like a lot of you. I had tremendous difficulty grokking TDD and for the
longest time I'd start, give up, build without it.

But I'm here as a you-can-do-it-too. You might not think you want to, but I'm
so glad I DID manage to get there.

Feel free to ignore because I respect that everyone's experience differs. But
the real problem is that there are few good step-by-step tutorials that teach
you from start to competent with TDD. Couple that with the fact that it takes
real time to learn good TDD practices, and the vast majority of TDDers in their
early stage write too many tests, bad tests, and tightly coupled tests.

Just as it's taken you time to learn programming - I don't mean hello world,
but getting to the competent level with coding you're at today - it'll take a
long time to get good with TDD. My case (ruby, ymmv) involved googling every
time I struggled; lots of Stack Overflow; plenty of Confreaks talks; Sandi
Metz' POODR...

Like the OP says - at different stages in the learning cycle you take
different approaches because you're better and it's more instinctive to you. I
thought I understood the purpose of mocks/doubles, until I actually understood
the purpose of mocks/doubles. When used right they're fantastic.

The key insight that everyone attempting TDD has to grok, before all else, is
that it's about design, not regression testing. If you're struggling to write
tests, and they're hard to write, messy, take a lot of setup, are slow to run,
too tightly coupled etc. you have a design problem. It's exposed. Think
through your abstractions. Refactor. Always refactor. Don't do RED-GREEN-GOOD
ENOUGH ... I did for a long time. It was frustrating.

This is a good post. Don't dismiss TDD because you're struggling. Try to find
better learning tools and practice lots and listen to others who are
successful with it.

It's true that sometimes fads take hold and we can dismiss them as everyone
doing something for no reason. But cynicism can take hold too and we can think
that of everything and miss good tools and techniques. TDD will help you be a
better coder - at least it has me. If your first response to this post was TDD
is bullshit, give it another try.

~~~
collyw
"If you're struggling to write tests, and they're hard to write, messy, take a
lot of setup, are slow to run, too tightly coupled etc. you have a design
problem."

This is my problem exactly, and I wouldn't say I have a design problem. My
application is a Django app that returns complex database query results.
Creating the fixtures for ALL of the edge cases would take significantly
longer than writing the code. At this stage it is far more efficient to take a
copy of the production database and check things manually. It helps that my
app is in-house only, and so users will report straight away when something
isn't working.

But to say that I have a design problem because tests are going to be
difficult to implement is just plain wrong.

~~~
richardjordan
Sure... it can be more broadly stated as "you have a design problem and/or
you're testing the wrong things".

------
hcarvalhoalves
The approach outlined actually makes much more sense without OO. I guess the
WTF comes from forcing yourself into a world of "MoneyFinder",
"InvoiceFetcher", etc. Makes it look a lot more complicated and prone to error
than it is, because you're now supposed to mock objects that may have internal
state. Otherwise it's the usual top-down approach with stubs.

~~~
MoosePlissken
Yeah I think it's interesting that the final approach with "logical units" and
"collaboration units" mirrors a functional approach with "functions" and
"higher-order functions". The advice to write small "logical units" could also
just be "write pure functions". The complex class hierarchy in the final
example could probably be avoided entirely if you were using a language with
first class functions. As a bonus, in a functional language the "collaboration
units" have probably already been written and tested for you.

~~~
searls
I'm not sure if the current draft still says so, but at some point I talked
about how the logical units ought to be "pure functions" most of the time.

------
mattvanhorn
I think that Red-Green-Refactor is as much about learning to habitually look
for and recognize the refactoring opportunities as it is about being
meticulous in reacting to those opportunities.

It's true that nothing forces you to refactor - but I think wanting that is a
symptom of treating TDD as a kind of recipe-based prescriptive approach. It is
not a reflection of the nature of TDD as a practice or habit.

It's a subtle difference, but important:

A recipe says "do step 3 or your end result will be bad"

A practice says "do step 3 so you get better at doing step 3"

------
danso
The more I try to explain TDD, the more I realize that some of my favorite
concepts, like the ability to mock functionality of an external process
because the details of that process should be irrelevant... are just beyond the
grasp of most beginners. That is, I thought/hoped that TDD would necessarily
force them into good orthogonal design, because it does so for me...but it
seems like they have to have a good grasp of that _before_ they can truly grok
TDD.

Has anyone else solved this chicken-and-egg dilemma?

~~~
searls
This was indeed my motivation for writing the post. I think the _next_ step to
take if you agree with my premise is that we need to come together with ideas
for how to best _teach_ TDD to beginners/novices. Exercises that promote these
concepts, lines of reasoning to take, tools to get people started without any
unnecessary cognitive overhead, etc.

I agree that teaching TDD exactly how I do it today can be a bit overwhelming
from a tooling perspective currently, but conceptually I think visualizing it
as a reductionist exercise with a tree graph of units is pretty simple.

~~~
bphogan
One thing I do with my beginning programming students (since their programs
are tiny) is make them write out "test plans" on paper before they can write
their program code.

They have to write the inputs and then the expected results.

It gets them thinking about the concept of using tests as part of the design
practice.

Later, I give them the unit tests and they have to write the code. This is
usually a rewritten version of a previous program so they see the text-based
test plans in action as unit tests.
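To make that concrete, here's a hypothetical example of the progression (the
converter function is my invention, not from bphogan's materials):

    # Stage 1 - the paper test plan (inputs and expected results):
    #   to_fahrenheit(0)   -> 32
    #   to_fahrenheit(100) -> 212
    #   to_fahrenheit(-40) -> -40

    # Stage 2 - the same plan handed back as unit tests to code against:
    def test_to_fahrenheit():
        assert to_fahrenheit(0) == 32
        assert to_fahrenheit(100) == 212
        assert to_fahrenheit(-40) == -40

    # Stage 3 - the implementation the students then write:
    def to_fahrenheit(celsius):
        return celsius * 9 / 5 + 32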

Then I might give them the empty test and an empty implementation, asking them
to fill in the test first, then the implementation.

Finally I ask for a completely new feature, and they have to figure out how to
write the test. And I ask them to go about it with a test plan.

After a few semesters of this, I think I'm ready to say that this is
successful for getting the "beginners" there.

It doesn't address everything, but I think it's a good start.

------
Nimi
I wonder about these workshops (even asked Uncle Bob Martin about them in a
recent thread). I can't shake the feeling they are the exact opposite of
agility (obviously, he is better qualified than me to judge that). Their
limited time schedules, which is essentially a bound over the amount of
contact between the client and the supplier, seems analogous to the infamous
"requirements document". Also, there doesn't appear to be a "shippable"
product at the end - the developers apparently don't end up practicing TDD.

I used to be an instructor for a living, and I kind of equated lectures to
waterfall and exercises to XP. There is even a semantically analogous term in
teaching research, problem-based learning (each word corresponds to the
respective word in test-driven development - cool, right?). Is there anyone
else who sees these analogues, or am I completely crazy here?

~~~
platz
Uncle Bob responds:
[https://news.ycombinator.com/item?id=7139961](https://news.ycombinator.com/item?id=7139961)

------
mrisse
Might one of the problems be that we place too much importance on the
"symmetrical" unit test? In your example, the child code is still covered when
it is extracted from the parent.

As a developer who often prefers tests at the functional level, the primary
benefit of tests for me is to get faster feedback while I am developing.

~~~
searls
The trouble with abandoning symmetrical unit tests is that:

* The unit is no longer portable and can't be pulled from the context it was first used in (e.g. into a library or another app) without becoming untested. And adding characterization testing later is usually more expensive.

* A developer who needs to make a change to that unit needs to know where to "test drive" that change from, which requires that they know where to look for the parent's test that uses it. That's hard enough, but it completely falls over when the unit is used in two, three, or more places. Now a bunch of tests have to be redesigned and none of them are easy to find.

* Integrated unit tests like this lead to superlinear build duration growth b/c they each get slower as the system gets bigger. This really trips teams up in year 2 or 3 of a system.

~~~
mrisse
Unless I'm missing something, wouldn't the child dependency be enough to
prevent the unit from being dropped into another library or app? That's a good
point you bring up about knowing where to "test drive" the changes from,
though usually on the apps I've worked on, they've been small enough that the
relevant integration test could be found without much detective work.

I guess I haven't been involved in too many 2-3 year monolithic projects.
Maybe that's when a stricter symmetrical unit test policy makes the most
sense.

What other levels of tests do you end up running besides your unit tests? Do
you have any integrated unit tests? Functional tests? End to end tests?

~~~
jasonkarns
The author is stating that the _child_ dependency cannot be extracted to
another library or app. If it is extracted, it is untested, because the only
tests wrapping the child dependency are actually testing the child's original
parent. (Which is likely to not exist in whatever other library/app to which
the child component is moved.) And then, to retroactively add tests to the
child component in order to facilitate moving it to another library, is
painful.

Having symmetrical tests enables components to be moved more easily to other
libraries/apps, because the tests can move with the unit under test.

------
richardjordan
Shout out for Sandi Metz's book POODR, and her RailsConf talk "The Magic
Tricks of Testing", if you're a rubyist (though the principles hold true for
non-ruby OO programmers too).

[https://www.youtube.com/watch?v=URSWYvyc42M](https://www.youtube.com/watch?v=URSWYvyc42M)

~~~
Groxx
+1 for POODR - very (_very_) well written, goes down multiple pathways
reasonably (rather than "this is how you solve that" without any clue _why_
you solve it that way), and gives some decent tools for any project. I only
wish it were longer.

------
mattvanhorn
I agree with the general approach suggested in the article (in tests,
write/assume the code you wish you had).

But one detail ran counter to my personal practice.

I don't believe that "symmetrical" unit tests are a worthy goal. I believe in
testing units of behavior, whether or not they correspond to a method/class.
Symmetry leads to brittleness. I refactor as much as possible into private
methods, but I leave my tests (mostly) alone. I generally try to have a decent
set of acceptance tests, too.

Ideally, you specify a lot of behavior about your public API, but the details
are handled in small private methods that are free to change without affecting
your tests.
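A minimal sketch of that split (invented names, and an assumed 10% tax rate
purely for illustration):

    class PriceQuote:
        # Public contract: this is what the tests specify
        def total(self, amounts):
            return self._subtotal(amounts) + self._tax(amounts)

        # Private details: free to change without touching the tests
        def _subtotal(self, amounts):
            return sum(amounts)

        def _tax(self, amounts):
            return round(sum(amounts) * 0.10, 2)

    # This test survives any refactoring of the private methods
    def test_total_includes_tax():
        assert PriceQuote().total([10.0]) == 11.0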

~~~
searls
I understand the concern, but I value consistency and discoverability, so
symmetry of thing-being-tested to test itself is (so far) the best way I've
found to make sure it's dreadfully obvious where a given unit's test is.

This approach is not concerned with brittleness or being coupled to the
implementation because each unit is so small that it's easier to trash the
object and its test when requirements change than it is to try to update both
dramatically.

~~~
mattvanhorn
I suppose that if you do keep things that small, it could work well to trash
and rewrite. Plus it has the benefit of making you consider explicitly what is
going/staying.

Personally, I like my tests to be pretty clearly about the behavior of the
contract, and not the implementation, which is hard when you require every
method have a test.

I'd also be concerned that other team members would be reluctant to delete
tests - this is a dysfunction I see often, and one I try to counteract with
varying degrees of success.

------
viggity
Yes! I've always hated the common kata, because for every dev writing software
for a bowling alley, there are 200,000 devs writing software that sends
invoices or stores documents.

When I'm teaching TDD, the kata I have everyone go through is a simple order
system.

The requirements are something like:

A user can order a case of soda

The user should have their credit card charged

The user should get an email when the card is charged

The user should get an email when their order ships

If the credit card is denied, they should see an error message

(etc....)

This way they can think about abstracting out dependencies: an IEmailService,
an ICreditCardService, etc. There are no such dependencies for a Roman Numeral
converter.
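A minimal Python sketch of where that kata tends to lead (the interface names
in the comment are C#-style; everything below is invented for illustration):

    from abc import ABC, abstractmethod

    # The dependencies the kata forces you to abstract away
    class EmailService(ABC):
        @abstractmethod
        def send(self, to, subject, body): ...

    class CreditCardService(ABC):
        @abstractmethod
        def charge(self, card, amount): ...  # returns True if approved

    class OrderProcessor:
        def __init__(self, cards: CreditCardService, email: EmailService):
            self.cards = cards
            self.email = email

        def place_order(self, user_email, card, amount):
            if not self.cards.charge(card, amount):
                return "Your card was declined."
            self.email.send(user_email, "Receipt", f"Charged ${amount}")
            return "Order placed."

In a test you hand OrderProcessor a hand-rolled fake of each service and
assert on the interactions - exactly the dependency-abstraction exercise a
Roman Numeral kata never demands.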

------
GhotiFish
I like the way he broke things up, but something bothers me about his
technique.

All his classes ended in "er".

He's not writing object-oriented software, he's writing imperative software
with objects.

~~~
searls
Yes I am.

~~~
GhotiFish
Fair enough. Do you think TDD and OOP are mutually exclusive practices?

~~~
searls
TDD as I practice it does, but I think OOP as it's traditionally taught
encourages developers to tangle mutable application state and behavior, which
leads to all sorts of problems. The more I practice, the more I learn that
life is better when I separate whatever holds the state from whatever has the
behavior.
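One common way to read that advice (my sketch, not necessarily the author's
exact practice) is to keep state in dumb value objects and behavior in
functions that produce new values:

    from dataclasses import dataclass

    # State lives in a plain value object...
    @dataclass(frozen=True)
    class Account:
        balance: int

    # ...behavior lives in functions that return new state rather than
    # mutating it, so each side stays trivially testable on its own
    def deposit(account: Account, amount: int) -> Account:
        return Account(balance=account.balance + amount)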

~~~
artsrc
> life is better when I separate whatever holds the state from whatever has
> the behavior

If you are not doing what is traditionally taught as OO, and you are doing
something better, why not say that?

I wonder why you don't say: "OO is an inferior design because it tangles
mutable state with behavior"

"Not OO" should not be pejorative. OO is definitely sometimes wrong.

------
ChristianMarks
This is probably the first reasonably sophisticated attempt to describe a
test-driven design/development process I have read.

The observation that "[s]ome teachers deal with this problem by exhorting
developers to refactor rigorously with an appeal to virtues like discipline
and professionalism" reminds me of E. O. Wilson's remark that "Karl Marx was
right, socialism works, it is just that he had the wrong species."

If test-driven design were the programming panacea its proponents sometimes
make it out to be, Knuth would have written about it in TAOCP. Instead, Knuth
advocates Literate Programming. TDD seems to attract a cult-like following,
with a relatively high ratio of opinion to cited peer-reviewed literature
among proponents.

TDD as it is commonly understood seems to me like the calculational approach
to program design (cf. Anne Kaldewaij, _Programming: the derivation of
algorithms_), only without the calculation and without predicate
transformers. Still, it can be a useful technique.

There is no "right" way to program. This was evident from the beginning, when
Turing proved the unsolvability of the halting problem. (Conventions are
another matter.)

------
zwieback
Sure, but if the end result is "lots of little objects/methods/functions",
maybe there's a simpler way of getting there, e.g. prescriptive design rules.
After all, that's what every design method, including stuff from the waterfall
era, attempted.

I'd like TDD to be more than just another way to relearn those old rules,
especially if we arrive at the same conclusions on a circuitous path. Perhaps
the old design rules, object patterns, etc. have to each be integrated with a
testing strategy, e.g. if you're using an observer you have to test it like
this and if you refactor it like that you change your tests like so.

The general rules are easy to understand and your post makes perfect sense but
once you formulate your new design approach you'll have to find a way to teach
it precisely enough to avoid whatever antipattern is certain to evolve among
the half-educated user community, which usually includes myself and about 95%
of everyone else.

------
searls
Hey HN, I just wanted to thank you for the overall very positive, constructive
comment thread. Thanks to you, this post got over 22k page views, and I
didn't receive a single vitriolic comment or bitter dissent. All I got were
thoughtful, earnest, and honest replies. Made my day.

------
tieTYT
OK but after you "Fake It Until You Make It" and you have to add a new feature
to that class structure, aren't you just going to start over with all the
failures he brings up?

\---------

I haven't designed code the way he's advocating, but I have attempted TDD by
starting with the leaves first. Here are the downsides to that:

1) Sometimes you end up testing and writing a leaf that you don't end up
using/needing.

2) You realize you need a parameter you didn't anticipate. E.g.: "Obviously
this patient report needs the Patient object. Oh crap, I forgot that there's a
requirement to print the user's name on the report. Now I've got to get that
User object and pass it all the way through."

Maybe these experiences aren't relevant. As I said, I haven't tried to "Fake
It Until You Make It".

~~~
s73v3r
"1) Sometimes you end testing and writing a leaf that you you don't end up
using/needing."

So what? Just delete it. Your version control system should have a record of
what it was if you end up needing to go back to it.

------
radicalbyte
Excellent post, I've had exactly the same experience and come to exactly the
same conclusion.

I still follow the old Code Complete method: think about the problem, sketch
it out, then finally implement with unit tests. The results are the same, and
it's a lot less painful than greenhorn-TDD.

~~~
searls
Time at a white board breaking down a problem is rarely wasted :)

~~~
collyw
I completely agree with this. In fact, when I have a bigger architectural
problem to think about, I like to sit on it for a day or two, thinking about
one or two designs that would work. It takes a while to see the strengths /
flaws in each design, and if you jump straight into code you won't realize the
problems until you have something half implemented.

------
ChuckMcM
I suspect if they had called it _Architecture_ Driven Development (ADD) rather
than _Test_ Driven Development (TDD) it might contextualize better. Basically
what the author explains is that you can design an architecture top down from
simple requirements, deriving more complex requirements, and then providing an
implementation strategy that lets you reason about whether or not you are
"done."

But that 'test' word really puts people in the wrong frame of mind at the
outset.

~~~
searls
Yeah, the common implications of the word "test" have always been problematic.
The BDD movement did a good job bringing that to light, but I didn't want to
re-litigate that all in my post just to make a point about semantics. Totally
agree, though.

------
julie1
TDD and agile have been an effort at breaking with an old must-have for code,
ISO 9001: the code should behave according to the plan, and if it doesn't
conform, the plan must be revised when the tests fail. The Plan-Do-Check-Act
mantra. Now they find themselves facing the consequences of not respecting
the expectations of the customers, and they whine because "it was not applied
correctly, because no one cared."

So now they re-formalize exactly the supposedly "rigid" ISO 9001 they were
trying to throw out.

What an irony.

------
searls
Apologies for the downtime, folks; this post is proving a little too popular
for us. Would love to see some folks' reactions to the post in the comments.

------
Arnor
> ...TDD's primary benefit is to improve the design of our code, they were
> caught entirely off guard. And when I told them that any regression safety
> gained by TDD is at best secondary and at worst illusory...

Thank you! Details of this post aside, this gave me an Aha! moment and I feel
like I'm finally leaving the WTF mountain.

------
vegar
Ian Cooper has a good talk that's relevant to this blog post. It's called
'TDD, where did it all go wrong?' and a recording from NDC 2013 can be found
here: [http://vimeo.com/68375232](http://vimeo.com/68375232)

------
tempodox
Those guys must really hate their readers. That crappy web site is not
zoomable! In the 21st century? In the era of “responsive web design”? Mega
fail. Did they use TDD?

~~~
jasonkarns
What's your definition of zoomable? I'm able to adjust text size just fine.
More details on your specific issue? If you mean layout, it is responsive. The
text column narrows and the images never exceed 100% width.

------
asfa124sfaf
What about tools like Typemock? How does that fit in?

~~~
vegar
Tools like Typemock help you make bad decisions that you will regret later
on...

Isolating things is very important: it makes them easier to test and lowers
the risk of tests breaking when you change other parts of the system.
Sometimes isolating one part from another is hard work. Typemock makes it
easier, but at the same time it ties you closer to the part you are trying to
isolate from.

E.g. a database: you want to test something that eventually should store
something in a database. You can either make a thin layer abstracting away
your database, so that you can test the functionality without depending on the
database, or you can couple more tightly to the database and use tools
like Typemock to get rid of it in test mode. If you want to change the way you
store data, you now have production code tightly coupled to the current
storage strategy AND tests tightly coupled to the current storage strategy...

Typemock can be of great help sometimes, but really you should strive to find
better designs instead.
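For contrast, here's a sketch of that "thin layer" alternative in Python (all
names invented; this is not Typemock's API, which is a .NET isolation
framework):

    # Production code depends on a small abstraction you own...
    class OrderStore:
        def save(self, order):
            raise NotImplementedError

    # ...the real implementation couples to the database in one place...
    class SqlOrderStore(OrderStore):
        def __init__(self, connection):
            self.connection = connection

        def save(self, order):
            self.connection.execute(
                "INSERT INTO orders (id, total) VALUES (?, ?)",
                (order.id, order.total),
            )

    # ...and tests use a hand-rolled double, no isolation framework needed
    class InMemoryOrderStore(OrderStore):
        def __init__(self):
            self.saved = []

        def save(self, order):
            self.saved.append(order)

If you change storage strategies, you write a new OrderStore implementation;
neither the tests nor the calling code have to change.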

------
glittershark
Hello there, Heroku error page

~~~
searls
Apologies for the continued downtime; we're trying to get a CDN in front of
the (static Apache) Heroku app. In the past, not having any dynamic language
in the background was enough to stay up, but not today, apparently.

