
Dependency Injection is Evil (2011) - paralelogram
http://www.tonymarston.net/php-mysql/dependency-injection-is-evil.html
======
spion
What is evil in our profession is that everyone professes some technique or
another that has worked for them, without mentioning any specifics about the
exact situation in which that technique worked. In fact, most programmers go out
of their way to avoid mentioning specific experiences, confident that their
grand designs generalize across all projects, platforms and languages. Just
look up the recent TDD debate with Bob Martin and DHH and see if anyone
mentioned one specific project / example.

As a result we end up with long diatribes of barely useful "best practice"
advice that is spread by means of cargo-culting and that causes problems down
the line.

What we need to collectively do is get off our high horses and start talking
about specific projects, situations and problems. Start asking ourselves: Why
did our solution work? What was the specific problem and situation? Which
properties of the problem made our solution the right one?

------
awinder
From the post:

    
    
      I do not use mock objects when building my application, and I do not see
      the sense in using mock objects when testing. If I am going to deliver 
      a real object to my customer then I want to test that real object and 
      not a reasonable facsimile. This is because it would be so easy to put 
      code in the mock object that passes a particular test, but when the 
      same conditions are encountered in the real object in the customer's 
      application the results are something else entirely. You should be 
      testing the code that you will be delivering to your customers, 
      not the code which exists only in the test suite.
    

I don't think the argument for unit testing is that you should ONLY do unit
testing. By stubbing/mocking and removing dependent systems from unit tests,
you're usually left with:

    
    
      1. Tests that run very well in isolation
      2. Tests that run very quickly
      3. Tests that are very stable (as long as the code isn't changing, 
         results shouldn't change from run to run)
    

All of these are great qualities for tests that will help you, the developer,
continue to safely refactor code as your application ages. Functional tests
where you're not stubbing / mocking will help you ensure your application
continues to function as expected from a user interface perspective. Both are
valuable to different audiences and for different purposes :-).

~~~
jonhohle
I agree.

I like to take an onion-layered approach to software quality:

    
    
      * unit tests
      * release consistency (build artifacts and
        dependencies tracked as an atomic, reproducible unit)
      * some type of deployment/startup sanity tests
      * development environment integration tests
      * staging environment integration tests
      * statistical anomaly detection on one production box
      * application and infrastructure metric monitoring
    

This is not a prescriptive list; just something I've found works well for
continually deployed systems.

------
benvan
I would argue that "dependency injection" at its purest is little more than
the side-effect of designing with intent to minimise module responsibility.

When writing a class, as a rule of thumb I'd posit there's a general advantage
to outsourcing units of behaviour or complexity to other modules. Those other
modules could be constructed by the class itself, but this is likely to make
the current class concerned with the construction details of those modules.
One of the simplest things to do, for example, is to ask for those modules to
be provided to the class on construction.

This sounds like a great idea until somebody comes along and calls it
"dependency injection" and a bunch of us lose our minds.

~~~
sparkie
Fowler has a habit of inventing "special terms" for what people have long
considered "programming", and people get sucked into some fundamentalism. I
don't like how the author of this article appeals to authority when trying to
justify the use of a service locator over DI.

> So if Martin Fowler says that it is possible to use a service locator
> instead of DI in unit testing, then who are you to argue otherwise?

This argument is the same as "It is possible to use a global variable instead
of an argument when unit testing a procedure." That's really what a service
locator is - a facade around global variables. Unless, of course, you use DI to
inject the service locator, in which case you've not really gained anything, just
inserted another layer of indirection.

And that brings us back to why we even use techniques which are now labeled
"DI" in the first place - they're basically there to avoid the use of globals
(and hence, tight coupling). Interfaces are in place to keep implementations
decoupled while providing everything necessary for them to interact.
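
A rough sketch of what I mean (Python, names made up) - the locator is just a
dict of globals with a nicer name, while the injected version states its
dependency up front:

    # a service locator is essentially a global dict behind a facade
    _services = {}

    def register(name, service):
        _services[name] = service

    def locate(name):
        return _services[name]

    class InvoiceViaLocator:
        def total(self, amount):
            return amount * (1 + locate("tax").rate())   # hidden, global dependency

    class InvoiceViaDI:
        def __init__(self, tax):
            self.tax = tax                               # dependency is explicit
        def total(self, amount):
            return amount * (1 + self.tax.rate())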

~~~
sago
I see this argument pop up from time to time and it confuses me. It's almost
as if someone read "global variables are evil" and understood it to mean
"global data is evil".

Singletons aren't global variables. Neither are service locators. Nor
databases. Nor screens. Nor keyboards. Global variables are global variables.

Unless they all are. And if you're going that way, you have a single main
function, somewhere.

~~~
sparkie
"Global variables" is really language specific terminology for global data.
What I mean by "globals" in the scope of testing is "side-effects" \- anything
which can affect the behavior of some unit which you're trying to test beyond
the test target and the values contained in your test. All of your listed
"non-global variables" are examples of it - they make it more difficult to
perform isolated unit tests because you need to bring the global state into
the test, then you're no longer performing unit tests, but whole systems
tests.

A database is a good example. One might have some class "Person" with some
business related logic in it. If your goal is to simply test this business
logic of a person, then why would your test be concerned about whether it can
establish a connection to a database? By definition, this is no longer a unit
test, but a systems test, and the test surface is much larger because now you
need to be concerned about whether or not a connection can be established and
a myriad of other possible problems, all of which could be tested separately,
and by knowing which tests succeed/fail, we can have immediate feedback on
where some problems might exist, rather than having to debug an entire system
to find an issue.
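
As a sketch of the distinction (Python, names made up) - the unit test feeds
the Person its data directly, and connecting to the real database is a
separate, integration-level concern:

    class Person:
        def __init__(self, name, hours, hourly_rate):
            self.name, self.hours, self.hourly_rate = name, hours, hourly_rate

        def monthly_pay(self):
            # the business logic we actually want to unit test
            return self.hours * self.hourly_rate

    def test_monthly_pay():
        # no database, no connection, no firewall - just the logic
        assert Person("Ada", 160, 50).monthly_pay() == 8000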

In the case of the service locator - you can't really perform "unit tests" on
individual blocks of code which have dependency on the SL, because the SL is
mutated in arbitrary places throughout the codebase. The SL acts to increase
coupling, because now instead of depending on just a specific interface, you
depend on the whole runtime data of your application.

~~~
sago
> "Global variables" is really language specific terminology for global data.

Strange, I thought it was terminology for global variables.

> What I mean by "globals" in the scope of testing is "side-effects"

Which is a whole different kettle of mackerel. Minimising side-effects is, of
course, excellent advice, and can help with testing. But global variables and
SL are not the same thing as side-effects.

Your example (of testing a database) seems very confused to me. You're talking
about coupling now. Not global variables. Why would you suddenly need a
database connection? Why does the existence of global mutable state mean that
nothing in your code can be tested independently? You seem to be imagining the
worst case of coupling as an argument against service lookup.

The last paragraph seems blatantly false. You're again straw manning this
version of an SL that is mutated in arbitrary places through the code-base and
therefore can't be tested in isolation, and neither can the components that
use it. That is a strange version of SL you have here, and one that DI
wouldn't help with.

~~~
sparkie
>But global variables and SL are not the same thing as side-effects.

They're examples of side effects. It's not good enough to set the values of a
global variable or register some service purely for the purpose of a test,
because the test then does not reflect the runtime behavior of the code. The
benefit of a unit test is to assert that code behaves the same way all the
time - not just for specific values you use at the time of testing.

> Your example (of testing a database) seems very confused to me. You're
> talking about coupling now. Not global variables. Why would you suddenly
> need a database connection? Why does the existence of global mutable state
> mean that nothing in your code can be tested independently? You seem to have
> a strange idea of how software works.

Global variables increase coupling - code which consumes a global variable now
has a dependency on all of the code which mutates it. You simply cannot test
the consuming code in isolation without regard for the code mutating the
variable, unless your test is exhaustive of every possible value which the
global variable may contain.

My example was not of testing a database, it was about testing algorithms or
logic that might exist inside some class named "Person", but which has a data
dependency on an actual person (held in a database). If one wants to test the
logic only, then mock data must be supplied instead of the real data from the
database - else you're not testing only the person, but also testing that the
database is connected and querying it is successful. The correct way to test
this is to decouple Person from the database, usually by means of a mock
object, or by passing the mock data into the person directly. Either way, it
seems the blog author does not do such unit tests, as he doesn't use mock
objects.

> I do not use mock objects when building my application, and I do not see the
> sense in using mock objects when testing. If I am going to deliver a real
> object to my customer then I want to test that real object and not a
> reasonable facsimile. This is because it would be so easy to put code in the
> mock object that passes a particular test, but when the same conditions are
> encountered in the real object in the customer's application the results are
> something else entirely. You should be testing the code that you will be
> delivering to your customers, not the code which exists only in the test
> suite.

The problem with the author's philosophy is that it means when problems do
arise in his applications, he must perform whole system testing/debugging to
find them. He is missing perhaps the main benefit of unit tests - which is
that, when a bug arises, you can quickly eliminate many possible causes
because unit tests against those parts of code have succeeded (unless your
unit tests were wrong to begin with, which will more or less be the case if
they're testing against code which depends on globals).

~~~
sago
You're making me feel very dumb. Because several of these seem to be the
opposite of what I've observed.

Testing with a mock object implies that the mock can generate all the
output the real object could generate that might have some effect
on the consuming code. Not only that, but it assumes that the mock object
generates the correct data in ways that cannot generate false positives in the
test. This doesn't mean you're only testing the client logic. You're now
testing the client logic using services that are ad-hoc and aren't guaranteed
to behave like the real thing. You're testing a fantasy.

It is far better to test against the real database. Using a fixture, or a
transaction, or some way to use the actual system with representative data.
Mocks have their place in very complex services where this is practically
impossible. But they don't suddenly make things better for testing, or more
atomic. IMHO, when you have to use a mock, it should be as a last resort, when
you have to sacrifice fidelity for tractability. Your code is coupled in
behavior to the services it uses; pretending it isn't is just fooling
yourself.

I have very much the same problem with people who write unit tests against,
say SQLite databases, rather than the full DBMS. The complexity of 'making
sure the database is connected and can be queried' is pretty trivial compared
to the complexity of mocking a whole RDBMS interface. Good software
engineering will, of course, limit the number of places the database
interfaces with (I'm not suggesting code with SQL statements in strings
everywhere, that's a straw man). But I'd not accept mocked tests that exist
just to avoid a database connection or because the developer doesn't
understand how to write a transaction.

So I don't understand. Either you're advocating a very bizarre, and seemingly
pathological development style, or you're consistently muddying the waters by
comparing good programming in your chosen methodology with bad programming in
mine, which just misses the point.

Here's an example then. Take your Person object, on a platform with reasonable
transaction/fixture support (like Django): is it better to write your unit
test using a mocked ORM layer, or a fixture with the test data in it?

> He is missing perhaps the main benefit of unit tests - which is that, when a
> bug arises, you can quickly eliminate many possible causes because unit
> tests against those parts of code have succeeded

I've no idea why this is somehow impossible. I write unit tests at various
levels of abstraction. If I have module A, calling module B which calls module
C, then I need tests for C, B(+C) and A(+B+C). If I get a failure in A, I make
sure that there is a test in B that corresponds to the way A is using B, if
so, it is a problem with A, not B. If B and C were mocked, I'd have no way of
knowing if the problem was with the mock logic without having to test C,
C-mock, B+C-mock, B-mock, A+B-mock.

> now has a dependency on all of the code which mutates it

This seems a bizarre claim. Does your code have a dependency on everything
else that can possibly change what's on the screen? If so, how do you deal
with that?

That's why pretending 'global variables' = 'all central resources' seems
foolish to me.

~~~
sparkie
I probably have quite a fundamentalist view on unit testing because I write
primarily in purely functional code these days - where a "unit" is a pure
function, and it's clearly an isolated unit. Even when I'm back in OOP world
though, I basically avoid static variables/globals like the plague. Even where
the framework or some library makes use of them, I'll tend to wrap them up and
pass them into my code via Main, to make sure that no statics are globally
accessible throughout the code.

If I were testing a salary calculation which takes values from a database, and
I named my test "Test_salary_calculation_correct", where instead of using some
sample data which could easily cover the range of values I need to test
against, I instead relied on a database connection, and this test failed
because the database was not accessible - I've only confused the developer who
picks up my shit where "Test_salary_calculation_correct" fails, and he thinks
there's a problem with my calculation rather than a misconfigured firewall
somewhere else. The firewall has nothing to do with my salary calculation - why
should it have any effect on the test passing?
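
Something like this is all I mean (Python, made-up numbers) - the sample
values live inside the test, so a broken firewall can never make
Test_salary_calculation_correct go red:

    import unittest

    def net_salary(gross, tax_rate):
        # the business logic under test, isolated from where the inputs come from
        return round(gross * (1 - tax_rate), 2)

    class TestSalaryCalculation(unittest.TestCase):
        def test_salary_calculation_correct(self):
            # sample data chosen to cover the range of values we care about,
            # instead of whatever happens to be in a live database
            self.assertEqual(net_salary(1000, 0.25), 750.00)
            self.assertEqual(net_salary(2000, 0.25), 1500.00)
            self.assertEqual(net_salary(0, 0.25), 0.00)

    if __name__ == "__main__":
        unittest.main()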

The way I see unit tests is this: If you write a test and it passes on your
machine, then some other developer takes your code and the same test fails -
it's a fuckup on your part. Unit tests should not depend on the environment
in any way. Actually, by definition, a unit test is a test of a single "unit"
- including database access in it goes well beyond the scope of unit
testing, into integration testing.

To me it seems you're skipping unit testing and just going onto integration
testing with your unit testing framework. I'm not sure what you've observed or
where, but I can tell you it's certainly not standard or best practice in the
industry. It might possibly tell you something about your own code style
though - are you writing units which can be treated in isolation? (Certainly
not if you depend on a SL, which is a global context of services with no clear
boundary)

Ideally a codebase should be designed to maximize unit-testability and reduce
the need for integration testing to as little as possible - since this is
where most of the "unexpected", or "out of my control" problems are most
likely to occur. This testing is more a case of "am I handling all the
relevant exceptions" than getting green lights to pass in a unit testing
framework. It doesn't really help to make unit tests against code which is
expected to fail out in the wild due to whatever circumstance - what matters
here is that your code is prepared for the worst and knows how to recover.

It's these cases where mock classes are particularly useful - because you can
forcefully simulate any behavior from the external service and make sure your
code is working correctly for all the potential circumstances. Having to rely
on divine intervention to trigger some event that may only happen 1% of the
time in the real-world situation is hardly practical. Unfortunately testing in
the wild is often like this - everything works fine 99% of the time.
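
For example (Python's unittest.mock, names made up), forcing the 1% failure
path deterministically instead of waiting for it to happen in the wild:

    from unittest import mock

    class PaymentGateway:
        def charge(self, amount):
            ...  # talks to a remote service in production

    def checkout(gateway, amount):
        try:
            gateway.charge(amount)
            return "ok"
        except TimeoutError:
            return "retry-later"

    # simulate the rare failure and check the recovery path
    gateway = mock.Mock(spec=PaymentGateway)
    gateway.charge.side_effect = TimeoutError
    assert checkout(gateway, 100) == "retry-later"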

Even for cases where you're arguing for a fixture with real test data in it
(from a database), the reasonable thing to do is extract this data beforehand
and encode it into the unit testing language (which is fairly trivial to do).
Now you have a reliable test which will continue to work as you update the
code. Testing against live data is giving a false sense of security to begin
with anyway. Imagine the scenario where you have a bunch of data in the
database, you run your unit test against it with all green flags - then after
deployment, somebody inserts into the database a value which your code doesn't
expect. The unit test shouldn't be testing against real world data, but
against data representative of the possible values it should accept (i.e.,
include all the obvious edge cases which should fail too, but are not likely
to exist in the real world DB).

------
_pmf_
I'm not sure whether this whole article is just an elaborate troll or if the
author has never worked on anything but the greenest of greenfield
applications.

DI is not a pattern.

The argument that DI breaks encapsulation does not make any sense at all; in
fact, it's an argument in favor of DI, since it cleanly allows configurable
behavior (the alternative requiring some way of configuring the host object to
behave in a certain way using constructor arguments or setters).

Dependency injection is a direct application of coding to interfaces instead
of implementations (or to abstractions). In the absence of polymorphic
constructions, the only way to achieve this is to either inject the dependency
or a factory for the dependency.
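
A minimal sketch of the two options (Python, names made up) - inject the
dependency itself, or a factory for it when the object needs to create
instances on demand:

    class Connection:
        def query(self, sql):
            return []   # stand-in for a real driver

    class ReportService:
        def __init__(self, connection_factory):
            # the factory is injected, so the service codes to an abstraction
            # and never hard-codes which Connection it creates
            self._new_connection = connection_factory

        def run(self, sql):
            return self._new_connection().query(sql)

    service = ReportService(Connection)          # production wiring
    # service = ReportService(FakeConnection)    # test wiring swaps the factory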

------
pornel
The article is overly dramatic about DI. Parts of it sound like a mere "get
off my lawn!" rant against OOP in general (godwinning just few paragraphs in).

There's praise for the Singleton antipattern, and for variables with an "obj"
prefix, as if objects were an odd thing to watch out for. There's an example of
polymorphism achieved by setting a global variable before including a script.
PHP (v4!) written like that would be icky even in 2011.

There is some good advice in there too, but overall the design it advocates
seems to reduce OOP to mere namespacing of global variables and functions. I'm
assuming the author doesn't use unit tests either (given that TDD is
"poppycock!"). It's not surprising that DI doesn't fit that way of building
programs, but I don't think that makes DI evil. It's just a tool for a
different job.

------
nicksellen
The conclusion is a lot more moderate than the title, the author actually uses
dependency injection:

 _While DI has benefits in some circumstances, such as shown in the Copy
program, I do not believe in the idea that it automatically provides benefits
in all circumstances. This is why I consider that the application of DI in the
wrong circumstances should be considered as being evil and not a universal
panacea or silver bullet. In fact, in the wrong circumstances it could even be
said that Dependency Injection breaks encapsulation. Although I have found
some places in my framework where I have made use of DI because of the obvious
benefits it provides, there are some places where I could use DI but have
chosen not to do so._

I think using the word _evil_ is far too dramatic; evil should mean _do not
ever use_.

~~~
xtrumanx
Well at least he didn't title it "Dependency Injection Considered Harmful".

~~~
jschwartzi
Articles Titled Considered Harmful Considered Harmful

------
_throwmaybe_
There is a good reason to force people to use dependency injection. At least
you're sure that their code will use an interface to describe a dependency,
which is a huge advantage compared to letting everybody write their own code
as they want.

~~~
kaens
The issue with this, in my experience, is that if you don't understand the
driving principles behind DI patterns (prefer to depend on an interface or set
of behaviors, prefer to be given something that satisfies that dependency
instead of instantiating it yourself, with the aim of not doing things you
aren't responsible for), you can end up with what amounts to either rather
messy global state, or with systems that are still _very_ highly coupled but
have good paint.

I can't help but see discussions about DI as suffering from sounding fancier
than it is and also from less-experienced devs thinking that they must be
writing "correct" code because it has the general shape of the DI pattern they
read about. If it were my choice, I'd ditch the terminology and best practices
altogether in favor of actively and critically thinking about the
dependencies, responsibilities, and assumptions of code being written or read.

------
KyeRussell
> Those statements imply that if you are not using Dependency Injection (DI)
> then you are not doing OO "properly", that you are an idiot. I take great
> exception to that arrogant, condescending attitude.

Stuff like this makes me question if this guy has some deeper issues he needs
to work out before he can write blog posts about design patterns. The entire
thing reeks of clickbait and narcissism.

------
hoodoof
Really thoughtful, clearly articulated, backed with facts and diagrams,
whining.

------
elchief
Mr. Marston is the type of writer / programmer I dislike.

Brevity is the soul of wit.

Here's a summary:

Dependency injection's primary benefit is to aid unit testing.

~~~
pan69
Which makes Dependency Injection all the more worthwhile to use, but since the
author doesn't unit test, the value of DI is obviously lost on him. So,
big rant follows.

------
ane
"If you know you will never change the implementation or configuration of some
dependency, there is no benefit in using dependency injection.".

It all boils down to this.

Here's the catch, though: with dependencies, it is really, _really_ hard to
know whether a dependency's implementation will change or not. What is more, in
most languages implementing dependency injection is so trivial that it is
always worth it. Conversely, the work of changing implementations that weren't
built with some form of loose coupling - that is, designed around DI - is in
most cases non-trivial.

~~~
jblow
Nope. Nope nope. This is the same argument that early OO people used to
justify the idea that you should use getter/setters everywhere rather than
directly accessing your variables. You never know if the implementations of
those ideas will change!!!!!11

That's programming. It is always possible that anything might need to change.
That is how it is. This does not justify calcifying your code by adding extra
unnecessary structure, because what that in fact does is make the program
harder to change later (while requiring you to do more work up front). Also,
as the author of the article notes, it requires one to keep more pieces of
information clear in one's head in order to work with code of equivalent
complexity, something that is almost always a big lose.

In a good language, if a dependency implementation changes, you know this
because your program does not compile. (Well, of course because you are not a
noob, you are linking things that are versioned in the first place, so this
should not ever even be an issue unless you are actively upgrading outside
code and are expecting it.) When your program does not compile, you want the
compile error to be at the site that uses the dependency, because that tells
you exactly where the thing is that you need to fix. Adding excess verbiage
around it, and distancing the site that instantiates the dependency from the
site that uses it, only causes more work.

If you are using a language/system that doesn't allow you to program this
directly and clearly, then maybe _that_ is the problem...

~~~
ane
It seems to me that you think designing for DI is much more complicated than
it actually is. In most modern languages it is trivial to implement.
Especially if you have ad-hoc polymorphism available, it comes at virtually no
price. Some languages make it more difficult than others, but in most modern
languages it's really easy.

Your example of getters and setters is a bit of a red herring. I understand
what you mean by it, but it doesn't apply in this case. This is because the
getter and setter is an abstraction that derives from encapsulation. But it is
also an antipattern. In many cases, getters and setters are just a type of
needless complexity--a distraction, overengineering! On the other hand, DI
solves a real, practical problem, and if your language is intelligent, it is
really simple to implement.

------
PaulHoule
I dunno.

I write a lot of scripty programs that make subjective decisions that, when
they work well enough, go into production.

Consistent use of DI means that I never check my AWS keys into my source code.
In particular I can do experiments that change out any module without having
to touch the source code and that is pretty important.

------
barrkel
I have some sympathy for the idea that DI is harmful to good software design,
but this article isn't an argument for it.

My specific issue is that DI, and a number of other things, including single-
implementation interfaces and mocks in testing, are normally used as a means
to an end: testable fragments of code. Individually testable fragments of
code, taken to their logical conclusion, convert every function into a class,
possibly implementing an interface, and taking dependencies (i.e. the other
methods it calls) as instance arguments, either directly to the method, or as
arguments to the constructor (in a kind of OO partial application).

You then end up with an atomized library of classes with names like ThingDoer
and methods like doTheThing(). All the methods are now testable in isolation,
since you can mock all the dependencies, and there's no risk of any pesky
static references reaching out and pulling in stuff you can't easily mock.

Splitting everything up so aggressively means that somebody now needs to put
all the pieces back together. Some automated help (DI, IoC) is used. That's
where the DI comes in.

Some of the problems created by this style:

* Cognitive overload: turning every dependency into a pluggable modularization point greatly inflates the number of concepts required to understand the code, especially from outside a library, because all the subcomponent parts all too often end up in the same namespace as the outer coordinating parts.

* Far harder to understand without debug stepping: runtime composition of code and extra levels of indirection impede IDE code navigation - go to definition on a method, and you find out it's actually just on an interface, then you have to look up the class hierarchy, find the concrete implementation - only one if you're lucky - before you can trace things through.

* Over-modularization / over-abstraction: since the code is split up into so many tiny bits, there's an illusion that reuse or modification of the code is possible by simply adding an extra implementation of one of the single-implementation interfaces. But extensibility needs to be designed in; pervasive, mandatory abstraction boundaries are unlikely to be good fits for ad-hoc future extension.

* Brittle tests: because module boundaries go all the way down, and are individually tested, a refactoring that modifies the implementation of a library is made far more painful. Slightly chunkier tests - not quite integration, but unit-testing at the library level, the semantics that library clients actually care about - go a long way to reduce this. But once you go in this direction, the whole reason for the edifice's existence - individually unit-testable atoms of code - is called into question.

This is also my problem with mainstream Java code style.

My preferred style is to write support libraries that are individually
testable at a slightly higher level, or are functional-style static methods
that are generic and wholly testable with simple stubs, and write the main
business code such that it uses the libraries in a fashion that's as
close to obviously correct as possible. Isolate any complicated logic into a
testable functional static method, or a testable general (but not necessarily
complete) library. Then integration-test this higher-level business logic.

A common problem I see with many junior Java devs is that they write
effectively procedural code split into method-per-class classes, and they zip
together business logic and more complex implementation logic alongside one
another. Rather than building abstractions that make their business logic
simple and free of complex implementation, you end up with a procedural call
tree that has a gestalt - the complex implementation - spread across and
intermixed with business logic, and all of it tied together via indirected
runtime composition, because testing.

That's fairly abstract, so I'll make it concrete. Consider a spreadsheet
report generator over data coming from entities in a database. Using a
spreadsheet library (e.g. Apache POI) is typically quite thorny because it
needs to try and support all the features, so you end up with complex logic
dealing with each master row, then other methods that have complex logic
dealing with each detail row. Code that has detailed knowledge about the
business domain is intermixed with code that has detailed knowledge about the
spreadsheet library's model. Let's not even talk about tests.

An alternative approach - and a refactoring I made - was to create a
reporting-oriented write-only facade for the spreadsheet manipulation. The
business logic was then consolidated from multiple complex classes into a single
simple class that had straightforward code using the spreadsheet writer.
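
Roughly the shape of that refactoring (sketched in Python with openpyxl rather
than POI, and with made-up method names):

    from openpyxl import Workbook

    class ReportWriter:
        # write-only facade: business code never touches the spreadsheet
        # library's cell/row/style model directly
        def __init__(self):
            self._wb = Workbook()
            self._ws = self._wb.active

        def master_row(self, *values):
            self._ws.append(list(values))

        def detail_row(self, *values):
            self._ws.append([""] + list(values))   # indent details one column

        def save(self, path):
            self._wb.save(path)

    # the business logic stays simple and spreadsheet-agnostic
    report = ReportWriter()
    report.master_row("Order 1001", "ACME Corp")
    report.detail_row("Widget", 3, 9.99)
    report.save("orders.xlsx")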

~~~
ufo
I wonder if there is a more meta-programming oriented way to achieve the same
kind of dependency injection. When working on the code you only see the
concrete instantiations, but you have some kind of automated tool that replaces
those with mock objects if you want to.

~~~
skwirl
If the ecosystem you are in provides good tools and you know how to use them,
this is actually not really a problem. I work with C#/Visual
Studio/Resharper/Moq. I generally don't have static mock object
implementations because I can use Moq to dynamically create them. Resharper
gives me the ability to press ALT+END to go to the concrete implementation of
a method. If there is only one implementation of that method that it knows of
it will go straight to it - which is the case much of the time since I don't
have static mock implementations. If there are multiple implementations it
will let me choose which to go to.

------
pohl
I suspect that — late at night, tucked in for a slumber after toiling all day
in the processor — injected dependencies dream they are first class functions,
their service interfaces dream they are function signatures, and the objects
into which they are injected dream of being curried functions to which some
dependency arguments have already been partially applied.
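
In Python terms, something like (names made up):

    from functools import partial

    # the "service" is just a function...
    def send_email(smtp_host, recipient, body):
        print(f"via {smtp_host}: {recipient} <- {body}")

    # ...and "injecting" its dependency is partial application
    notify = partial(send_email, "smtp.example.com")
    notify("user@example.com", "hello")   # host was already supplied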

~~~
arielby
Except that an interface's signature is more complicated than a single
function.

~~~
yebyen
> their service interfaces dream they are function signatures

In Go, at least, an interface is nothing more than a collection of function
signatures (with the type Foo interface{...} wrapped around them, they become
the interface known as Foo.)

------
V-2

        I do not use mock objects when building my application,
        and I do not see the sense in using mock objects when testing. 
        If I am going to deliver a real object to my customer 
        then I want to test that real object and not a reasonable facsimile
    

This reveals a fundamental misunderstanding of the purpose of mocking.

If I test object A (and mock B for this purpose), I'm testing A - not the
mock.

Another test verifies behavior of concrete object B (while mocking A).

Mocks themselves are not what is being tested.

Of course it is possible that both A and B behave correctly in isolation
(measured against expectations defined by unit tests), but they don't
integrate well.

In other words, the expectations and assumptions verified by both unit test
suites do not cover everything that's actually required for the components to form a
functional system - there is a "gap".

This is however not a flaw of Dependency Injection, but a natural shortcoming
of unit tests as such. Obviously they are not a replacement for integration
tests - but that works both ways. Different beasts.

The advantage that unit tests have over integration tests is that they make it
much easier to pinpoint sources of failures.

------
BobTheCoder
- DI doesn't add a lot of overhead. You don't really explain where it is
EVIL, just that it is NOT STRICTLY NECESSARY ALL THE TIME.

- It handles complex cases such as dependency X should be a singleton, or
inserting a layer of caching in front of dependency Y.

- It handles cases where dependency Z requires runtime logic to determine
which implementation to use. You can do things like in
PerformanceCriticalPackage use the HighPerformanceLogger and use some other
logger elsewhere. Want to switch these around, you only need to touch the DI
wiring logic (see the sketch after this list).

- I find it keeps modules that use other modules "cleaner" without having any
kind of dependency construction logic in them.
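
A rough sketch of what I mean by the wiring logic (Python, hand-rolled,
names made up):

    class DefaultLogger:
        def log(self, msg): print(msg)

    class HighPerformanceLogger:
        def log(self, msg): pass            # e.g. buffered/async in real life

    class SqlRepository:
        def get(self, key): return f"row-{key}"

    class CachingRepository:
        # a caching layer inserted in front of another dependency
        def __init__(self, inner):
            self.inner, self.cache = inner, {}
        def get(self, key):
            if key not in self.cache:
                self.cache[key] = self.inner.get(key)
            return self.cache[key]

    class App:
        def __init__(self, logger, repository):
            self.logger, self.repository = logger, repository

    def build_app(performance_critical=False):
        # all the construction decisions live here; swapping implementations
        # only ever touches this wiring, not the modules that use them
        logger = HighPerformanceLogger() if performance_critical else DefaultLogger()
        return App(logger=logger, repository=CachingRepository(SqlRepository()))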

> Design patterns are an option, not a requirement

Using DI is a design pattern and I agree all usage of design patterns needs
to be justified. However, not using DI and manually creating dependencies is
ALSO a design pattern! You need to justify using either, or something
different.

Of the available options, using DI is generally the safest bet, especially if
you want consistency throughout an application, since it's likely you will
want this power somewhere.

------
shadowmint
> I do not believe in the idea that it automatically provides benefits in all
> circumstances

Where 'it' is dependency injection, or, hey, anything else.

How about we have an interesting conversation instead of rehashing a stupid
one (here is my straw man: 'you use DI for everything without thinking.' Watch
as I bash it...)?

Why do dynamic languages like python, ruby and javascript typically not use
dependency injection?

You could argue that some of them (python) don't have a great unit testing
background, but modern projects like node do, and you still don't see the DI
pattern thrown around a lot.

In fact, you tend to only see it in languages that implement a native
interface-based polymorphism, like java, c++ and c#. Even folks using go do it
([http://openmymind.net/Dependency-Injection-In-
Go/](http://openmymind.net/Dependency-Injection-In-Go/)). Maybe rust too, but
since the single ownership makes singletons an anti-pattern in rust, maybe not
(at least I haven't seen it).

Could it be you can get by just fine for all your testing needs without DI?

~~~
rjbwork
Dynamic languages provide their own DI facilities simply by making objects.
Since the definition of a class, or object, or prototype (whatever your
language happens to call it) can be completely changed at runtime before ever
getting to your code, there's no need for injection.

You can simply configure all your services and helpers and config objects and
managers etc. at startup, so there's no need for an explicit DI/IoC type
pattern.
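
For instance (Python, names made up), the binding is just a module attribute
that startup code or a test can reassign:

    # services-module style: everything else just does `from services import mailer`
    class SmtpMailer:
        def send(self, to, body):
            ...   # real SMTP in production

    class FakeMailer:
        def __init__(self): self.sent = []
        def send(self, to, body): self.sent.append((to, body))

    mailer = SmtpMailer()       # default wiring

    # at startup or in a test, simply reassign the binding:
    mailer = FakeMailer()
    mailer.send("user@example.com", "hi")
    assert mailer.sent == [("user@example.com", "hi")]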

------
dicroce
DI is a good and useful thing, but like a lot of fads in software development
it is being overused in some places...

Actually what DI brings to mind for me is functional programming. By injecting
your dependencies (as opposed to simply constructing them) you are moving your
code just slightly in the functional direction. This brings many well known
benefits, but also its own costs.

------
kazinator
Let's look at Dependency Injection using a functional analogy.

Imagine you wrote a function "add1" which adds 1 to every element of a list
and returns a new list. This is inferior because it has a hard-coded
dependency on the specific function object (lambda (arg) (+ 1 arg)).

The dependency inversion way is to write a function "map" which is
"configured" by taking a functional argument. Then you call (map (lambda (arg)
(+ 1 arg)) ...) if you want to add 1 to every element, but of course you can
use other functions.

Now if we actually read the source code of this map, it is confusing compared
to the code of add1. "What does this do? It calls the function that is passed
in, but that could be anything! Why can't this damn thing just call a specific
function? All this program ever needs is to add 1 to a list; why build this
whole useless pattern?"

------
twa927
I think the popularity of things like dependency injection can be explained by
looking at the communities created around programming languages. In particular,
the level of abstraction a given community encourages is largely constant.

Java is probably the leader in the amount and level of abstraction its
community uses. J2EE's EJBs are a thing of the past; nowadays it's IoC. And PHP
has for a long time been battling its inferiority complex by imitating "serious"
Java.

I'm not saying that using lots of deep abstractions is necessarily bad. You
can write programs with the same functionality using different levels of
abstraction. What I'm saying is that discussions like this are more "culture
wars", and there's no rational proof of what's better.

------
oldmanjay
this article is awful. insecurity theater, lots of wiki copy-paste, and a
fundamental misunderstanding of testing object oriented systems.

maybe this was meant to be humorous?

------
keredson
Yes it is. One of the many reasons I've migrated away from the Java world.
(There are no non-spring/non-hibernate jobs left!)

Years ago I wrote a "get me the hell out of spring" tool:
[https://code.google.com/p/unsprung/](https://code.google.com/p/unsprung/)

~~~
eropple
I've literally never worked at a Java shop that used Spring or Hibernate.
There's plenty of such places (I mean, just look at "anybody using
Dropwizard").

------
jdpanderson
A colleague and I had a discussion about this particular author. The
conclusion was that he's an older programmer that doesn't like when the
technology changes/evolves. Anything new is EVIL, and sometimes the people
evolving the technology get slandered in the process. This is visible in this
and many of his other articles.

Edit: shouldn't have used the word "older". It has nothing to do with it. He's
a programmer that doesn't like when technology changes/evolves.

~~~
aikah
> he's an older programmer

I don't think that has anything to do with age. Adaptation or learning might
be harder as you get older, yet you can find just as many young developers who stick
with things they know and refuse to learn or use new stuff. The whole "no
framework" movement for instance in front-end development, "because frameworks
are too complex". I doubt older developers prefer writing everything from
scratch because framework X or Y requires reading the docs. Everybody has
opinions on things, even younger developers write "rants" like that.

