
Dependency injection is not a virtue - hmart
http://david.heinemeierhansson.com/2012/dependency-injection-is-not-a-virtue.html
======
javajosh
Am I the only one who finds this post confused and unnecessarily mean?
Roughly, the post goes like this:

    
    
        1. DI is bad, mkay?
        2. Ruby offers the best alternative to DI.
        3. BTW, programmers *are* their languages. 
        4. Hence Ruby programmers are the best. QED.
    

What is odd is that #3 is offered almost as an aside, with nothing more than a
link to the wikipedia article on "linguistic relativity" for support, and yet
it poisons this post and turns it into a mean-spirited rant. It makes the
entire post a personal attack on anyone who dares to disagree with its
assertion.

This is not okay! I happen to dislike DI even more than the OP, but to _attack
people_ on a personal level for liking it is just plain mean, and
unnecessarily so.

When you assert the identification of self with technology preference, you
reinforce a damaging idea that has no merit, which is indeed the very idea
that ensures that such discussions often have more heat than light.

Shame on you, David, for using your position to promulgate an idea that is not
only useless, but actively damaging to the community of programmers.

~~~
bretthopper
I'm not sure where you're getting #4 from. While dhh might actually believe
Ruby programmers are the best, this post does not say or even imply that.

He's saying that sometimes a design pattern isn't universal to every language.
You need to shape your thinking to the language you're using.

In general, I'd say that's a good thing to keep in mind (disregarding anything
specific about DI from that post).

------
h2s
Testability isn't the only good thing about dependency injection. It's much
easier to reason about OO code when objects only tend to interact with objects
that they have been explicitly given. Too much willingness to introduce static
dependencies increases the risk of creating classes with too many
collaborators and/or responsibilities. This is a good example of how a
testable design is often also a just plain good design.

Sure, the given examples of "Time.now" or "new Date" are innocuous and I agree
that it's a good thing that Ruby allows us to write this type of code without
creating a testability issue. But too many people abuse static dependencies
for things like database connections which are:

1. well-suited to being represented as proper objects that are passed around

2. better off confined to a small area of the codebase

This is a complex issue and there's a delicate balance that needs to be
struck. Otherwise you can end up with a codebase where a few classes are all
up in everyone else's shit. That's when you find yourself in embarrassing
situations such as some code not working without a database connection even
though there's no real need for that beyond the tangled web of static
dependencies chaining it to the database class.
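
To make the contrast concrete, here's a rough sketch (the class and method
names are made up, nothing here is from the article) of handing the connection
in as a collaborator instead of reaching for something static:

    
    
        class ReportMailer
          def initialize(db)   # the connection is an explicit collaborator
            @db = db
          end
          def overdue_count
            @db.execute("SELECT COUNT(*) FROM reports WHERE overdue = 1").first
          end
        end
        # In production you wire it up once, near the edge of the app:
        #   ReportMailer.new(Database.connect(ENV["DATABASE_URL"]))
        # In a test you pass anything that responds to #execute; no database needed.
    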

------
ssmoot
Calling out DI is a bit of a misdirection, at least as most people have come
to know DI, I think.

What he's really arguing against here is more basic: Composition.

Pragmatically, the example makes some sense. Why not stub out any global like
this though? Because it breaks encapsulation and makes the program harder to
understand.

Instead of lifting dependencies up to the level of method parameters with good
composition, now you have to truly grok the entire method before you can have
any confidence you even know what to stub out, and whether that's actually
going to produce the expected results.

So while I fully endorse this specific, localized, easy to understand example,
the poor composition of many Ruby libraries is what tends to make working with
Ruby code so damn hard (IMO, see: Bundler). It's not enough to read method
signatures. There are no explicit interfaces. Only implicit ones you divine by
understanding the full breadth and scope of how parameter A is used in method
B, all the way down the stack trace.

Fundamentally here DHH isn't talking about "Dependency Injection". He's
talking about Composition and a pragmatic example of breaking Encapsulation.
Sure, there are 101 ways in which breaking Encapsulation can be a useful,
pragmatic technique to employ for the seasoned code-slinger in the small, but
in the large it makes for code that is more difficult to understand and
therefore less maintainable.

I find many of these recent posts by DHH a bit ironic considering the subtext
of a guy who read the PoEAA, then went off and wrote an MVC web-framework,
packed with a nice Table Data Gateway, then proceeded to confuse Unit for
Integration tests, and soap-box on the evils of Design Patterns in general.

[EDIT]

PS: The obvious example for making it easy to test, without breaking
encapsulation, would simply be to avoid globals and use a default parameter.

    
    
      def publish!(current_time = Time::now)
        self.update published_at: current_time
      end
    

TA-DA. So while trivial examples might show how a very shallow bit of monkey-
patching can be a nice convenience, you also have simple "fixes" that actually
take _less_ code to implement.

You could easily come up with deeper stacks, presenting more difficult
problems, but then you're not really making a great case for the beauty and
simplicity of monkey patching if I have to have such a deep understanding of
the side-effects in your code before I can even start making sense of your
tests.

~~~
jdminhbg
I don't like adding that parameter at all. When someone else comes across your
method, will they know whether #publish! is meant to have a changeable time in
real use? You don't want test-specific functionality leaking into real code.

~~~
yummyfajitas

        def publish!(current_time_TEST_USE_ONLY = Time::now)
            self.update published_at: current_time_TEST_USE_ONLY
        end
    

Another way to do it (what I do) is simply run the test, and compare
obj.published_at to Time::now. If they differ by more than 100ms, something
has gone terribly wrong (if you use Ruby, maybe replace 100ms with 1s).

~~~
tiziano88
the perfect recipe for a flaky test!

~~~
jufo
If you have to do this, you could check that the published time was >= the
time at which you started the test, and <= the time at which you are doing the
check.
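
Something like this, in Minitest-ish terms (assume a hypothetical post object
whose publish! stamps published_at):

    
    
        before = Time.now
        post.publish!
        after = Time.now
        # the stamped time must fall inside the window in which the test ran
        assert post.published_at.between?(before, after)
    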

------
hackinthebochs
I'm sure my opinion is in the minority here, but all the pages and pages of
blog posts that we have collectively created to rail against various design
patterns make it seem like we're just trying so hard to justify bad
programming practices.

I don't know Ruby, so the example given wasn't exactly clear, but it seems like
you're able to (temporarily I hope) override the method Time.now to return
specifically what you want. While that certainly seems reasonable for testing
purposes, I would hate to have that kind of stuff happen in production code.
The fact that methods can be overridden at any point in such a manner is a
detriment to predictability, on the level of longjmp itself.
It's hard for me to believe that people actually advocate this stuff as a
"good thing".

Yes, it's true that dependency injection is a substitute for missing language
features, namely modifying private data or globally scoped objects at any
point. This is a good thing! Dependency injection allows you to have the same
power while still giving you the predictability that truly private scope
brings. If the cost of this is a few extra lines of code and some indirection
in the form of interfaces, I'd say it's well worth it.

~~~
martinced
_"Yes its true that dependency injection is a substitute for missing language
features, namely modifying private data or globally scoped objects at any
point. This is a good thing!"_

The very fact that there are globally scoped objects is an abysmal failure.
That there are private data whose state can change "by magic" is also a major
defect.

There are much more functional ways to deal with such issues and some people
are starting to see the light.

And of course watching people argue over globally modifying what a function
does or modifying by injection what is basically a global var is a bit like
watching a blind man and a one-eyed man argue over who can see better ; )

~~~
yummyfajitas
_That there are private data whose state can change "by magic" is also a major
defect._

It's only a major defect if it changes "by magic" in a way that breaks things,
i.e. if the magic stuff can actually affect the result of your tests.

Consider internal state like a cache - this is harmless. It might magically
change, but that shouldn't affect your output.

------
swanson
There is a camp in Ruby that thinks monkey-patching (extending or modifying
behavior at run time) is a great feature and defining characteristic of the
language that should be exploited.

There is also a camp that thinks it is powerful, but dangerous and makes it
harder to reason through a program (since your objects can be changed out from
under you).

If you put yourself into the mindset of the pro-monkey-patch camp, it is easy
to see how DI comes across as an over-engineered solution to a non-problem.
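
For that camp the whole "problem" goes away by reopening the class inside the
test, something like this bare-bones sketch (no particular mocking library
implied):

    
    
        class Time
          class << self
            alias_method :real_now, :now
            def now
              mktime(2012, 12, 27)   # every Time.now call now returns a fixed instant
            end
          end
        end
        # (and remember to swap :real_now back in when the test is done)
    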

One thing I am finding more and more important is to try to understand the
perspective of the author of a given blog post. While it is more difficult to
do (since no one puts a disclaimer stating their views at the top of the
posts), it ultimately helps to improve my understanding of competing arguments
and how they can align with the problems I face in my own projects.

It is okay to come to the conclusion that an argument is not right or wrong in
the absolute sense, but right or wrong for me and my work at the given time.

~~~
adamjernst
Use monkey patching in tests, never in production.

Monkey patching _is_ fragile, but in tests, who cares? If it breaks, you see
why and you fix it. Tests don't have to be held to the same level of
reliability as production code.

~~~
swanson
"Tests don't have to be held to the same level of reliability as production
code"

I personally take issue with that statement. If your tests are not reliable
and robust, you will stop trusting them. At that point, why even have tests if
you cannot trust them to verify correctness of your program (or at least
increase your confidence that the program is not broken)?

~~~
Silhouette
_If your tests are not reliable and robust, you will stop trusting them._

This seems to be the paradox in much advocacy of strategies like TDD. We start
with the premise that our code is likely to contain bugs. In order to detect
those bugs, we write a lot of tests, perhaps doubling the size of our code
base. Now, we can do whatever we like to the production half of our code base,
as long as our tests in the other half all continue to pass when we run them,
because magically that testing half of our code base is completely error-free.

~~~
nfm
That's not TDD. You define the expected behaviour first (which can still be
wrong, now or in the future, since it's based on your current
assumptions/spec/other mutable thing), then ensure the code produces output
that matches your expectations. Your output is now as correct as your
understanding of the solution.

~~~
a_c_s
Your output is still only as correct as your specification of your initial
assumptions.

For example, let's say you make a mistake in your test so you are checking
(result = expected_result), an assignment that evaluates truthy whenever
expected_result does, instead of (result == expected_result). Now when you
write your code, you run the test and it passes.

In this case your code may or may not be correct, and the test, which contains
a bug, does not catch it. But the bug is not a fundamental misunderstanding of
the problem, rather a simple mistake in writing the test. Following strict TDD
doesn't prevent this.
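
A tiny self-contained illustration of that kind of slip:

    
    
        expected_result = 42
        result = 41                   # wrong on purpose
        if result = expected_result   # bug: '=' assigns, so this branch runs
          puts "test passed"          # whenever expected_result is truthy
        end
    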

~~~
swanson
"Following strict TDD doesn't prevent this"

Actually, following strict TDD does prevent this.

You must see a test fail before you make it pass. It goes "Red -> Green ->
Refactor" not just "Green -> Refactor". Even if a valid test case passes on
the first time, one should modify the test (or the code) in a known way to
cause an expected failure.

Your premise (tests are only as correct as the spec of initial assumptions) is
correct, but your supporting example is not.

------
nbevans
This article is just so biased and full of odd opinions.

"If your Java code has Date date = new Date(); buried in its guts, how do you
set it to a known value you can then compare against in your tests? Well, you
don't. So what you do instead is pass in the date as part of the parameters to
your method. You inject the dependency on Date. Yay, testable code!"

Seriously? I've never seen things done in this way, ever. The correct way is
to have a TimeSource interface and forego the direct use of Date. Simple. At
this point it feels like the premise of the author's article has been
invalidated.

I don't like using the terms DI or IoC. I prefer to talk about "composition"
and "componentisation". Because those are the real goals. Testing is almost a
secondary concern in this regard; it is just a nice side effect and bonus of
writing well designed software. The primary reason for designing your software
in the style of composition and componentisation is for the separation of
concerns and the achievement of all the SOLID principles. But let me guess,
Ruby hipsters think those are bad too huh?

As an interesting note, I once went to a Hacker News meetup in London and not
a single one of the Ruby developers I spoke to even knew what DI or IoC were.
Most didn't even know what static typing was either. Or even type inference.
This is not really a good sign of a healthy community.

------
kevinpet
His recommended solution prevents parallelizing your unit tests. Dependency
injection is not just testable globals. It's about declaratively defining what
global-ish things your class depends on.

You also don't need to provide the date as a parameter to your methods. You
can make the class depend on a clock. A clock is a natural thing for a service
to depend on, and clearly indicates to outside users that the class needs to
know about time.
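
For what it's worth, the clock version costs almost nothing even in Ruby; a
rough sketch with names of my own choosing:

    
    
        class Publisher
          def initialize(clock = Time)   # anything that responds to #now will do
            @clock = clock
          end
          def publish!(post)
            post.update published_at: @clock.now
          end
        end
        # In a test, hand it a stub clock:
        #   Publisher.new(Struct.new(:now).new(Time.mktime(2012, 1, 1)))
    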

If it isn't a service, and you're updating some sort of entity, then why
should the entity figure out the current time itself? The model should just
record that I updated such and such field at such and such time. If it's the
current time, that's someone else's responsibility.

~~~
mcphage
> His recommended solution prevents parallelizing your unit tests.

How so?

~~~
kevingadd
Because monkeypatching is global. It shouldn't require more than a moment of
thought to realize this...

~~~
mcphage
Eh, Ruby doesn't do well with parallelism anyway. If you're running your tests
in parallel, you're usually running them in different processes. Or on
different machines entirely. Either way, monkey patching one of those
processes won't affect other ones.

Besides, if there is a testing library that runs multiple tests in parallel in
the same process (maybe there is and I'm just not aware of it), making your
mocking library smart enough to handle that isn't difficult. Stubbing only for
a particular thread, for instance.

~~~
kevinpet
Yeah, ruby doesn't do well with parallelism, and if you want parallel tests,
you're probably doing it in separate processes, because the ruby community
chooses to completely ignore everything that's known about how to write code
that behaves well in parallel because "eh, I don't do it, so it must not be
important".

------
vii
Dependency injection subordinates readability and simplicity of data flow to a
narrowly defined notion of testability. Even in languages like Java, it's
possible to substitute class definitions for mocked ones without having to
thread a weird sort of test monad through code that otherwise serves a
purpose.

I pretty fundamentally disagree with making production code more complex to
make unit tests easier to write. Make the tests more complex instead and think
about the natural units for testing. Maybe the natural unit for testing isn't
exactly one class.

This is counter to the conclusion of the article: it's true one shouldn't
force Java idioms on Ruby but also if the Java idiom doesn't translate well,
maybe it's a bad programming idiom in general.

~~~
crymer11
Fair, except that DI in Ruby is pretty dang simple.

    
    
        def some_method(dependency = SomeClass.new)
          dependency.do_something
        end
    

I'd argue that injecting the dependency gives you even more expressiveness and
improves readability since you now can name the dependency whatever you want
(assuming you give your variables meaningful names and think it's a good
thing).

Sandi Metz really does the topic justice in her talks on SOLID design:
[http://www.confreaks.com/videos/240-goruco2009-solid-
object-...](http://www.confreaks.com/videos/240-goruco2009-solid-object-
oriented-design)

------
Eduard
"In languages less open than Ruby, [...]."... It's better to not use a metric
so vague and ambiguous as "open" to introduce your opinion post.

------
adamjernst
Amen. Same goes for Objective-C: dependency injection simply isn't needed when
you can swizzle out any method (including +alloc) in tests.

------
jfb
"Design patterns are missing language features."

~~~
Uchikoma
Yes, I wonder what language features monads as a pattern to solve certain
problems are a replacement for.

------
tel
Languages shape the way that you think:

And so this is just killer to my mind. Time.now is an impure effect and should
be isolated so that testing can occur easily in the pure code.
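
A rough Ruby sketch of what I mean (my names, not the article's):

    
    
        # Pure core: takes the instant as plain data, trivially testable.
        def publication_attributes(current_time)
          { published_at: current_time, status: "published" }
        end
        # Thin impure shell: the only place that touches the real clock.
        def publish!(post)
          post.update publication_attributes(Time.now)
        end
    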

------
tarr11
I'm not as familiar with Java, but in C# you can use "shims" to accomplish this
same result, without DI.

[http://www.peterprovost.org/blog/2012/04/25/visual-
studio-11...](http://www.peterprovost.org/blog/2012/04/25/visual-
studio-11-fakes-part-2/)

<http://msdn.microsoft.com/en-us/library/hh549176.aspx>

~~~
nahname
The View Model has a dependency on a repository that is injected in via
constructor injection. I don't think you understand what DI means.

~~~
tarr11
Shims do not require DI, ViewModels, or Repositories. The second link from
MSDN explains this in more detail.

The first one just uses those things in their test examples, but they are not
required to use shims.

Here's an example:

    
    
        // Make DateTime.Now always return midnight Jan 1, 2012
        ShimDateTime.NowGet = () => new DateTime(2012, 1, 1);

------
deafbybeheading
The thing is, unit tests are another "use case" for a given piece of code.
Many people here are saying, "I wouldn't do monkey patching in production, but
it's not really a problem for stubbing in test code." And what happens when
you want to make different use of that code in production? Sure, "YAGNI,
rewrite as necessary" and so on, but ruthlessly applying YAGNI leads to code
so inflexible that you need to rewrite a whole component to make a change in
one class (or start playing with monkey patching in production code, but I
have not seen anyone advocate doing that liberally).

------
mattrepl
Adding function parameters solely for testing purposes is bad. However,
poorly-designed functions are often difficult to unit test.

I don't know the entire backstory, but it appears someone wrote a _publish!_
method that took a _publish_time_ argument that was only intended to be used
in testing. The problem is that the original code didn't properly support some
_publish_time_ values.

This is a good example of the library vs. framework distinction. Frameworks
favor opinion over composition.

------
ExpiredLink
"Dependency Injection" Considered Harmful:
<http://www.natpryce.com/articles/000783.html>

------
powermockr
"If your Java code has Date date = new Date(); buried in its guts, how do you
set it to a known value you can then compare against in your tests? Well, you
don't."

You don't if you aren't testing your code properly. If you are, one option is
PowerMock.

    
    
        whenNew(Date.class).withNoArguments().thenReturn(someDateWeKnowAbout);
    

------
mdpm
When looking at code, I like to separate design-time concerns from runtime
ones. Tests are a design-time affair, and if you are substantially altering
the run-time composition of your application to accommodate such things, the
approach is likely wrong.

------
shadowmint
I can't be bothered talking about DI much, because it's a non-issue. Use it if
it's appropriate. You use ruby? Big deal. You're not special; if there's a
situation where DI is helpful, use it. If not, don't. This isn't a complicated
idea.

I was mildly interested in the idea that 'language shapes the way we think'.

My initial response was: WAT? O_o

After all, there are some pretty detailed criticisms of the idea
([http://www.quora.com/What-are-the-main-criticisms-of-
Whorfs-...](http://www.quora.com/What-are-the-main-criticisms-of-Whorfs-
theory-of-linguistic-determinism-and-relativity)), and that's just with actual
languages that we think and talk in, not programming languages.

After thinking about it for a bit, there's some basis for this idea, perhaps.

You see, the central idea of whorfianism is basically:

- Your language affects the type of mental constructs you use.

- People with different mental constructs behave differently.

So for example (classic example), if you have a language with no concept of
sub-day time units, you'll end up with a society that isn't fussed about
punctuality. "Turn up in the afternoon"; ok, I'll do that. No need to ask
"what time?" because your virtual model doesn't break your calendar up into
hour sized blocks; just into day sized ones.

It's a believable theory. There's a great book about this topic
(<http://en.wikipedia.org/wiki/A_Man_Without_Words>) which broadly speaking
supports the idea that language is basically a set of representative symbols
we use to model concepts.

The issue up for debate is really to what extent language influences
behaviour, compared to, for example, other factors. That's a _much_ harder
question to answer (and as yet unresolved, I believe).

 _Now_

The idea that a programming language can do that?

Well.... it's not totally rubbish. I mean, software doesn't exist in the real
world; to work on it at all, we have to create a mental model of it.

So it's conceivable that our language provides us with symbols to
conceptualise the code we work on, and so we'll behave in a different way if
we have different models.

For example, if you have a deep understanding of assembly and lower level
programming languages, your model will have more _stuff_ in it, compared to
the model of someone who only knows a high level language, whose model will
drop down to 'black box' components when it gets down to a certain level.

...and I can totally believe that makes a difference to how you write code.

This applies, however, only to _concepts_ (aka. words, aka. representative
mental symbols) and _not to programming languages_.

See, this is the issue; your _domain knowledge_ helps inform how you
generalize and problem solve. That's what ruby is. It's a knowledge domain.

I'm extremely dubious that it provides _novel concepts_ that you don't get
from any other programming language. Perhaps, in that it's a dynamic language,
it gives you a few different ones than, say, a C++ programmer might have, but
broadly speaking:

 _You are not a unique and special snowflake because you use ruby_

...or Python. Or C++. Or Scala. Or C. Using a different programming language
_DOES NOT_ change the way you think.

Learning new words and concepts in your existing language (ie. the one you
_speak_ ) does that. And sure, using a language with new concepts will teach
you those new concepts. Like DI for example, that's a _concept_.

...but "I'm a ruby programmer"?

Just go away. You're an idiot.

------
michaelochurch
At the risk of being offensive, I've always felt that this "design patterns"
cargo cult is Revenge of the Not-Nerds. The people who couldn't hack the
harder CS classes because of all the math are striking back with something
designed to be as incomprehensible to us as mathematics was to them.

Take the Visitor pattern. I mean, really? I already know how to work with
trees. Lisp is all about trees. OCaml lets us build tagged-union types and
pattern match to our hearts' content. Do we really need to dickshit the
program with _Visitor_? WTF does that even _mean_? Who is visiting and why? Is
this the French meaning, where to "visit" someone is to patronize a prostitute?
(In French, you "pay a visit to" someone, or _rendre visite à quelqu'un_. You
don't "visit" your sister.)

The design patterns cargo cult is horrible. It has such a "how the other half
programs" smell about it that I cannot shake the belief that it was designed
to make us Smart People pay for something. Anyway, how can it be "best
practices" if I can't REPL the code and make function calls and see how the
fucking thing works? If you can't interact with the damn thing, you can't
really start to understand it, because it's almost impossible to understand
code until you know what you're looking at. IDEs just give people a false
sense of security about that.

Personally, I like functional programming because it has _two_ design
patterns: Noun and Verb. Nouns are immutable data. Verbs are referentially
transparent functions. Want side effects? You can have them. Those are a
separate class of Verbs: Scheme denotes them with a bang (e.g. set!) and
Haskell has them in the IO monad. Real-world functional programming isn't
about intolerantly eschewing mutable state, but about _managing_ it.

Now, I'll admit that mature codebases will often benefit from some solution to
the Adjective problem, which is what object-orientation (locally interpreted
methods instead of globally-defined functions) tries to solve. OO, at its
roots, isn't a bad thing. Nor is it incompatible with FP. The Alan Kay vision
was: _when complexity is necessary_ , encapsulate it behind a simpler
interface. That's good stuff. He was not saying, "Go out and build gigantic,
ill-defined God objects written by 39 commodity programmers, and use OO jargon
to convince yourselves that you're not making a hash of the whole thing." No,
he was not.

~~~
contravert
The real use case for the visitor pattern is to simulate multiple dispatch in
a language that only has single dispatch. In a language like Java, if you want
to traverse a tree where each node can be a different type, you don't really
want to use a series of if-statements for every single type, so the visitor
pattern is used in this case. The visitor pattern allows you to use method
overloading for each different type instead.
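
Here's the shape of it, sketched in Ruby just to keep it short (the names are
made up, and the pain it addresses is of course far more visible in Java):

    
    
        # Double dispatch: each node hands itself back to the visitor, so the
        # visitor picks the right method without a chain of type checks.
        class NumberNode
          attr_reader :value
          def initialize(value); @value = value; end
          def accept(visitor); visitor.visit_number(self); end
        end
        class AddNode
          attr_reader :left, :right
          def initialize(left, right); @left, @right = left, right; end
          def accept(visitor); visitor.visit_add(self); end
        end
        class Evaluator
          def visit_number(node); node.value; end
          def visit_add(node); node.left.accept(self) + node.right.accept(self); end
        end
        tree = AddNode.new(NumberNode.new(1), NumberNode.new(2))
        tree.accept(Evaluator.new)   # => 3
    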

~~~
michaelochurch
Why do you need all this complexity in order to do that?

I feel like one of Java's problems is that these design patterns have taken it
away from the language's intended design. It wasn't intended to be a dynamic
language. Most of Java's ugliness is an extremely incompetent Greenspun's
Tenth Rule: solving of problems where the solution is "use a different
language, like Clojure or Scala".

There are occasions where Java and C++ are the right languages to use, but
most of the time when people are using these over-complex frameworky
solutions, it'd be much more elegant to use a different language, and the code
would be more legible.

~~~
wcarss
Mentally rewriting the beginning of your last paragraph to "Sometimes Java or
C++ is the right language to use, but most..." made it make considerably more
sense to me.

As it stands, I think the double-aren't is colloquial or a mistake; I honestly
couldn't follow the meaning of the overall triple-negative. :) Not trying to
be a jerk - I love reading your stuff, but I spent more than a minute thinking
about that sentence before just guessing it from the rest of what you said.

~~~
michaelochurch
Good catch. It was a mistake. Two or four negations would have been
semantically correct (if ugly). Three is wrong.

------
martinced
It's an anti-pattern. It's a workaround for a serious language defect.

I've written my own DI back in the days (way before Guice existed) but...

Using DI is just a glorified way to have "globals".

One better solution in TFA's example is to pass a higher-order function whose
"functionality" is to provide the time.

Wanna _unit_ test? Pass in a modified function giving back the time you want.
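
Roughly (a sketch, my names):

    
    
        # The time provider is just a callable handed in from outside.
        def publish!(post, clock = -> { Time.now })
          post.update published_at: clock.call
        end
        # In a unit test:
        #   publish!(post, -> { Time.mktime(2012, 1, 1) })
    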

That's just one way to do it.

DI is a fugly beast disguised as "good practice" but it really ain't. It
complicates the code and is basically a hack that lets you more or less test
non-functional code. Really sad.

\-- _"Patterns means 'I've run out of language"_

~~~
Peaker
But passing in a function providing the time is a form of "dependency
injection". It is a shitty name for the practice, but that's at least one way
I've seen the term "dependency injection" used.

~~~
lusr
This may be completely left field, but are you the same Peaker from
#programmers on DALnet years ago?

~~~
Peaker
Yeah, many years ago...

~~~
lusr
Ah cool, I was foozy... Hit me up with an email by picking any word at my
username .org :)

