
Dependency Injection is dangerous for your career - angryjim
http://stackoverflow.com/questions/2407540/what-are-the-downsides-to-using-dependency-injection/2407614#2407614
======
jpadvo
Hilarious. :) Some of the other answers are worth reading, too. In particular,
there is a really good one a little lower down:

[http://stackoverflow.com/questions/2407540/what-are-the-
down...](http://stackoverflow.com/questions/2407540/what-are-the-downsides-to-
using-dependency-injection/2408237#2408237)

> The same basic problem you often get with object oriented programming, style
> rules and just about everything else. It's possible - very common, in fact -
> to do too much abstraction, and to add too much indirection, and to
> generally apply good techniques excessively and in the wrong
> places...[answer continues]

------
barnaby
True. But what's even worse for your career is learning programming languages
that don't need DI frameworks because that kind of clean separation is baked
in (like Python), only to find that most jobs out there are Java :-(

~~~
thristian
I'll admit I've never done Java programming, but (so far as I understand the
term) dependency injection is something I use a lot when writing Python code.
For example, if I'm writing a method to transfer a file over a particular kind
of connection, I'll make the connection object a parameter rather than have
the method set up the connection itself. If I'm writing a class, I might make
the connection object a parameter to the constructor, or just construct it in
a helper method that I can override for testing purposes.

Am I missing some magic goo that makes Python not require DI, or do I just not
understand what Java programmers mean by DI?
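A minimal sketch of exactly what the parent describes, with made-up names for illustration: the connection is a constructor parameter, so a test can hand in a fake instead of a real connection.

```python
# Sketch of the pattern described above: the connection object is passed
# in (constructor injection) rather than built inside the class, so a
# test can substitute a fake. All names here are hypothetical.
class FileSender:
    def __init__(self, connection):
        self.connection = connection  # injected dependency

    def send(self, path, data):
        self.connection.write(path, data)

class FakeConnection:
    """Test double that records writes instead of touching a network."""
    def __init__(self):
        self.sent = {}

    def write(self, path, data):
        self.sent[path] = data

conn = FakeConnection()
FileSender(conn).send("report.txt", b"hello")
```

That's the whole trick; no framework involved.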

~~~
hammerdr
> I'll make the connection object a parameter rather than have the method set
> up the connection itself

Parameter (Dependency) Injection.

> I might make the connection object a parameter to the constructor

Constructor (Dependency) Injection.

> construct it in a helper method that I can override for testing purposes.

Not DI.

<http://martinfowler.com/articles/injection.html>
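The three cases above can be put side by side in a small Python sketch (all names invented): the first two are injection, the third builds its own dependency but exposes an overridable hook, which is a testing seam rather than DI.

```python
class RealConnection:
    def read(self, path):
        return "real:" + path

class StubConnection:
    def read(self, path):
        return "stub:" + path

class Transfer:
    # Constructor injection: dependency supplied when the object is built.
    def __init__(self, connection):
        self.connection = connection

    # Parameter injection: dependency supplied per call.
    def fetch_via(self, connection, path):
        return connection.read(path)

    # Not DI: the class constructs its own dependency, but through a hook
    # that a test subclass can override ("subclass and override" seam).
    def _make_connection(self):
        return RealConnection()

    def fetch(self, path):
        return self._make_connection().read(path)

class TestableTransfer(Transfer):
    def _make_connection(self):  # override the seam for testing
        return StubConnection()
```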

Edit:

For languages such as Java and C#, wiring up these objects is quite a pain.

For duck typed languages such as Python and Ruby, it isn't that big of a pain.
So we just do it. Just because you don't have an IoC framework doesn't mean
you aren't practicing DI :)

~~~
rimantas
Relevant, I guess: <http://onestepback.org/articles/depinj/index.html>

------
primodemus
The following is a comment from Frank Atanassow on a LtU discussion about the
Curry Howard correspondence:

blah blah blah dependency injection blah blah blah (This is a bit off-topic,
but it came to mind, so what the hell.)

You are correct in your observations that given most programming languages
(even those such as Haskell), it is difficult to see exactly how Curry-Howard
is useful.

I recently stumbled across someone mentioning something called "dependency
injection". I didn't know what it was, so I googled (I guess this is lowercase
nowadays!) it and read Martin Fowler's article on it. It is a bit on the long
side, and I kept waiting for the punch-line; you know, the point at which the
author hits you with the insight which justifies the preceding verbosity and
the hi-tech-sounding name ("dependency injection" — I can't help but think of
"fuel injection", and gleaming motor engine showcases), but it seemed
indefinitely postponed.

And in the end, it turned out that "dependency injection" just means
"abstraction", specifically by parametrization, by update and by what I think
amounts to type abstraction plus update. (Apparently these are called — I kid
you not — type 3 IoC, type 2 IoC and type 1 IoC...!)

To me this all seemed rather obvious and it got me thinking about why it isn't
obvious to the author or his readership. In Haskell, if I am given some type B
which I need to produce somehow, and I realize that the B-values I need depend
on some other values of type A, the first thing I do is write down "f :: A ->
B". Then I write down "f a =", and then I start writing stuff after the equals
sign until I have what I need. I do that because I know once I have the type
that if there is an inhabitant of the type "A -> B" it can be expressed as
"\a -> b" for some b, so the "f a =" part is always part of my solution and I
will never have to change that unless I want to. So once I've written that
down I feel one step closer to my solution.

I know that for three reasons. First, because of my experience as a functional
programmer. Second, because it is part of the universal property of
exponentials ("factors uniquely"), that is, of function types. And third,
because by the Curry-Howard correspondence with natural deduction, I can start
any proof of B which depends on A by assuming A, that is, adding it as a
hypothesis.

So, why is it so obscure in Java? I think part of the reason is that in Java
you have update, so there are complications and additional solutions. But part
of the reason is also that it largely lacks structural typing, and that makes
it hard to see that a class('s interface) is a product of exponentials. (With
nominal typing, you tend to think of a class by its name, rather than its
structure.) You could also blame the syntax of method signatures, which
obscures the relationship with exponentials and implication. But is the syntax
the cause or just a symptom? (You know what I think about syntax...) If CH
could be readily applied to Java, perhaps Java's designers would have chosen a
more suggestive syntax. But even if they had decided to stick anyway with
C-style syntax, the idea of using abstraction to handle dependencies would
have been more obvious.

More: <http://lambda-the-ultimate.org/node/1532>
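For concreteness, the three IoC flavours the quote mentions can be sketched in a few lines of Python (class names invented): type 3 is abstraction by parametrization, type 2 is abstraction by update, and type 1 is type abstraction plus update via an agreed-upon injection interface.

```python
from abc import ABC, abstractmethod

class Store:
    def get(self):
        return 42

# Type 3 IoC: abstraction by parametrization (constructor injection).
class ReportA:
    def __init__(self, store):
        self.store = store

# Type 2 IoC: abstraction by update (setter injection).
class ReportB:
    def __init__(self):
        self.store = None
    def set_store(self, store):
        self.store = store

# Type 1 IoC: type abstraction plus update (interface injection) --
# the component implements a dedicated injection interface.
class StoreAware(ABC):
    @abstractmethod
    def inject_store(self, store): ...

class ReportC(StoreAware):
    def inject_store(self, store):
        self.store = store

store = Store()
a = ReportA(store)
b = ReportB(); b.set_store(store)
c = ReportC(); c.inject_store(store)
```

Seen this way, all three really are just different routes to "f depends on a, so take a as an argument".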

------
bebop
That's probably one of the funniest Stack Overflow threads I have read.

------
donpark
Dependency injection separates discovery from dependency. It's neither good
nor bad, but it can be useful as well as abused ad nauseam. End of story.

------
brown9-2
How can you tell how closely coupled a codebase is when you're in the
interview process and not yet an employee?

This is the real challenge. I don't think many companies, outside of perhaps
small startups, will let candidates view their code - and browse it at their
own perusal, so you know they aren't just showing off the good stuff.

------
fleitz
DI is generally used to solve one of three problems that are pretty much
unique to Java:

Passing a function

Decoupling initialization from memory allocation (eg. making constructors
work) (This problem is also shared by C# before 3.0 but you can kind of get
around it by passing a function that does the initialization, or using an
anonymous constructor in 3.0 and up)

Avoiding the FactoryFactoryFactory pattern, where it's all factories all the
way down - a pattern designed to get around the constructor anti-pattern,
because constructors are treated as somehow special and not just functions
that return a specific data type. So in C# you'll wrap a constructor in a
function so it can be passed, and in Java you'll use DI. (eg. Func<String> s =
() => new string())
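In Python this distinction largely evaporates, since a class is already a first-class callable; the C#-style wrapper reduces to passing the constructor itself (a toy sketch):

```python
# A class is a first-class callable in Python, so a constructor needs no
# Func<> wrapper: just pass the class (or any zero-argument callable).
def build(factory):
    return factory()

fresh_list = build(list)           # the constructor itself is the factory
greeting = build(lambda: "hello")  # an ad-hoc factory works too
```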

DI is primarily a euphemism for programming in XML or another language that
sucks less than Java. Above all, it's a euphemism designed to assuage the egos
of Java programmers who don't want to admit that you can't solve problems
elegantly in Java, so they move their code into other languages that interact
with Java to pretend Java solves more problems than it creates.

~~~
trezor
_DI is generally used to solve one of two problems that are pretty much unique
to Java:_

 _Passing a function_

If you will pardon my English: What a load of bullshit.

I'm sure it can be used for that as well (i.e. in implementing the delegation
pattern, due to the lack of delegates and first-class functions), but saying
that is all it is good for simply makes you look inexperienced.

For any system of reasonable complexity you will find yourself wanting to
separate your code into modules. It might be that you want to thoroughly
implement SoC (separation of concerns), it might be that you want your code to
be more flexible (i.e. be able to replace a file-store with a db-store later),
or simply that you realize that your system is so big that you need to be able
to work with components separately in order to properly test your modules.

 _Decoupling initialization from memory allocation_

First you criticise Java, then you bring up a point which (usually) is not a
very big concern in garbage-collected languages.

Maybe you mean "controlling initialization" which is crucial for testable
code, but given how your first point is completely off base I'm not really
sure I would give you the benefit of the doubt.

 _DI is primarily a euphemism for programming in XML_

You can implement DI without any XML. And no, for reference, I don't do Java.

~~~
fleitz
I know you can implement it without XML; I do DI all day long with functions.
I just don't call it DI, I call it passing functions. When people talk about
DI they're generally talking about using a DI framework. My point is that the
concept is trivial enough to implement in the language itself and doesn't
require a framework to help 'manage' it.

What problems can be solved with DI that can't be solved by passing a
function, or by 'controlling initialization'?

Allocation issues are present in GC languages because of the overhead of GC
when you could just reinitialize an existing piece of memory. (You also get to
keep your L1/L2 caches hot by not initializing new memory when old will do.)
Allocation has non-trivial costs. This is why methods that let you pass in a
byte[] buffer are often more efficient than those that allocate their own
buffer. And we're not even getting into the fragmentation that can occur when
you're rapidly allocating memory and some of that memory is locked by the OS
so it can't be collected.
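The buffer-reuse point shows up in Python's own I/O API, where `readinto` fills a caller-supplied buffer instead of allocating a fresh one each time (a small sketch using an in-memory stream):

```python
import io

buf = bytearray(8)                 # preallocated, reusable buffer
stream = io.BytesIO(b"abcdefghij")
n = stream.readinto(buf)           # fills existing memory, no new allocation
# buf now holds the first 8 bytes; reuse the same buffer for the next read
m = stream.readinto(buf)           # overwrites the front of buf with "ij"
```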

How is it that DI frameworks are necessary for large code bases yet most
operating systems don't have DI frameworks?

When most people writing OO code have a problem, they think 'I know I'll use a
DI Framework', now they have two problems.

~~~
trezor
_What problems can be solved with DI that can't be solved by passing a
function?_

Any non-trivial interface implementation will have more than one function.

Let's say I have a core library which defines lots of possible actions for my
application. This application will need to work with other applications,
although only one at a time (in my case, through COM); however, the operations
that need to be done in these applications remain the same. Applications, not
one. Operations, not one.

Obviously this calls for a common interface (in a statically typed language)
so that you can make Application-proxy classes which handle the application
interaction in a similar way.

Once you have those implemented as classes you typically pass those around,
for instance via the constructor when setting up your instance. Even without a
DI-framework this _is_ DI.
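A minimal sketch of that setup (all names invented): one common interface, per-application proxy classes, and constructor injection of whichever proxy is in use.

```python
from abc import ABC, abstractmethod

class AppProxy(ABC):
    """Common interface every application proxy must implement."""
    @abstractmethod
    def open_doc(self, name): ...
    @abstractmethod
    def save_doc(self, name): ...

class AlphaProxy(AppProxy):
    def open_doc(self, name): return "alpha opened " + name
    def save_doc(self, name): return "alpha saved " + name

class BetaProxy(AppProxy):
    def open_doc(self, name): return "beta opened " + name
    def save_doc(self, name): return "beta saved " + name

class Core:
    def __init__(self, app: AppProxy):  # constructor injection, no framework
        self.app = app

    def roundtrip(self, name):
        self.app.open_doc(name)
        return self.app.save_doc(name)
```

Swapping applications is one constructor argument; passing the equivalent bundle of loose functions would mean keeping several closures in sync by hand.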

For this case "passing a function around" would be very impractical as you
would have to pass several functions around. You would have to make sure all
functions were created at the same time, with closures referencing the same
application COM instances etc.

In fact, passing functions around here would make your solution more complex
and error-prone with little benefit whatsoever.

Don't get me wrong: I love what function-passing enables. I've written tons of
code where function-passing is used to simplify the solution and make it more
reusable, more elegant and generally better to work with. Being able to pass
functions is awesome.

That said, function passing can be considered a primitive, and it's not one
which solves all problems. DI is for when your problem domain is bigger and
more complex; relying on function-passing alone would be like only being
allowed to pass primitive objects (int, string, etc.) around as opposed to
passing class instances around.

They are very, very different and if you can't see that difference, you really
should step back and take another look.

 _Allocation issues are present in GC languages because of the overhead of GC
when you could just reinitialize an existing piece of memory_

Sure. But your argument that this was one of "two" reasons to use DI is most
certainly false. I've never seen anyone advocate DI because of memory issues.
This is simply not an argument.

 _How is it that DI frameworks are necessary for large code bases yet most
operating systems don't have DI frameworks?_

Because you don't need a DI-framework to implement DI in your code. If you
think DI needs "frameworks", you are obviously missing the core point about
it: It's a way to structure your code.

~~~
fleitz
Totally agree with you re: DI as a way to structure your code.

I tend to start with function passing and yeah, when it gets to 5 functions
you start putting them in a class or other structure that allows you to pass
them as a unit. I like using tuples to pass gobs of functions because it works
better with type inference in F#.

I've just never heard it referred to as DI except in OO (Java) circles.

------
shareme
Well known fact: game developers use DI like it was second nature to them, no
matter what computer language they use..

..Gaming is Life

