
Dependency injection is dynamic scoping in disguise - r4um
http://gustavlundin.com/di-frameworks-are-dynamic-binding/
======
denisw
I really enjoyed the comparison of dependency injection with dynamic scoping,
and the explanation of how the latter can take over the use cases of the
former with less boilerplate.

But one benefit of dependency injection that goes unacknowledged in this
article is the explicitness of dependencies: the need to pass them in forces
the caller to be aware of which dependencies exist, and changes in
dependencies cannot be ignored (they lead to compilation errors, at least in
statically typed languages).

Managing dependencies with dynamic variables, on the other hand, is implicit.
It's impossible to know which parts of the dynamic environment are used by a
module without inspecting its source code. And changes to the module's
dependencies are not noticed by callers, which may lead to cases where tests
fail to stub out particular side effects without anyone noticing.

Given this drawback, dependency injection still seems like the better trade-
off to me, despite its higher amount of required boilerplate. Perhaps it is
possible to bring some of the explicitness to the dynamic scoping approach,
though.
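To make the implicitness concrete, here is a rough sketch using Python's
contextvars as the dynamic environment (all names are made up): nothing in
send_order_confirmation's signature reveals that it reads a mailer from the
dynamic environment, so a caller has to read the source to know what to rebind
in a test.

```python
from contextvars import ContextVar, copy_context

# Hypothetical dependency: a "send mail" function held in a dynamic variable.
# Callers see only send_order_confirmation()'s signature; nothing in it
# reveals that the function reads `mailer` from the dynamic environment.
mailer: ContextVar = ContextVar("mailer", default=print)

def send_order_confirmation(order_id: str) -> None:
    mailer.get()(f"Order {order_id} confirmed")

# Production: uses the default binding (print).
send_order_confirmation("A-1")

# Test: rebind the dynamic variable inside a copied context.
sent = []
ctx = copy_context()
ctx.run(lambda: (mailer.set(sent.append), send_order_confirmation("A-2")))
# `sent` now holds the confirmation; the outer binding is untouched.
```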

~~~
roywiggins
Now I'm flashing back to a system that passed important configuration via
globals, so to call functions that relied on these globals, you had to
carefully make sure that the global environment was in the right state before
you called certain functions.

It was _awful_.

~~~
jetcata
I’ve worked on similar systems; never again. God objects should be avoided so
that code stays readable and maintainable.

~~~
roywiggins
One common pattern in this system was that you'd first save the current
environment, run the special "environment preparation" function, then the real
function, and then write back the saved environment over the modified one, so
that you didn't leave the environment changed after you returned. Unless of
course you _meant_ to change it. This was sometimes documented, you'd write in
which globals a function expected and which it modified into a docstring.

This was probably only in the top ten of the problems that this thing had, but
I do remember it vividly. Making any change was like pulling out a Jenga block
and replacing it without toppling the tower.
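For readers who haven't suffered through this, the pattern can be sketched in
a few lines of Python (all names hypothetical); the save/prepare/call/restore
dance is exactly what a dynamically scoped binding would do automatically:

```python
# Hypothetical globals-based "environment", as described above.
ENV = {"currency": "USD", "verbose": False}

def prepare_environment():
    ENV["currency"] = "EUR"    # the "environment preparation" step

def real_function():
    return f"charging in {ENV['currency']}"

def call_with_saved_env():
    saved = dict(ENV)          # save the current environment
    try:
        prepare_environment()  # mutate it for the callee
        return real_function()
    finally:
        ENV.clear()            # write the saved environment back
        ENV.update(saved)

result = call_with_saved_env()
# result is "charging in EUR"; ENV is restored to its saved state afterwards
```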

~~~
eru
> One common pattern in this system was that you'd first save the current
> environment, run the special "environment preparation" function, then the
> real function, and then write back the saved environment over the modified
> one, so that you didn't leave the environment changed after you returned.

That sounds exactly like dynamic scoping.

------
glun
Author here. After posting this to reddit I realized that the original title
is wrong, and poorly reflects the actual point I'm trying to make. Dependency
injection is not dynamic scoping, but the latter can be used to achieve the
former. I'm drafting an update to better reflect this. I'm also going to pull
out reader monads and env passing into separate sections and give reader
monads a better treatment in general.

~~~
eternalban
The 'Env' is typically called 'Context'.

These mechanisms are addressing functional requirements in component oriented
systems, but in the industry have been misunderstood and misused to satisfy
testing requirements.

And if one is not doing pervasive component reuse across multiple systems and
projects, the one-off usage of DI is of course completely over-engineered and
likely a poor design decision.

~~~
mrec
And one obvious refinement of the `Env`/`Context` God-dependency pattern is to
have it implement a bunch of fine-grained interfaces for subsets of the
dependencies it aggregates, so that you can both reduce plumbing (only one
thing to inject) and still make it clear which specific dependencies a given
call might use.
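A minimal Python sketch of that refinement, using structural Protocols as the
fine-grained interfaces (all names are illustrative):

```python
from typing import Protocol

# Hypothetical role interfaces carved out of one aggregated Env.
class HasBank(Protocol):
    def charge(self, amount: int) -> None: ...

class HasMailer(Protocol):
    def send(self, msg: str) -> None: ...

class Env:
    """Aggregates every dependency; callees declare only the slice they use."""
    def charge(self, amount: int) -> None:
        self.charged = amount
    def send(self, msg: str) -> None:
        self.sent = msg

def accept_order(env: HasBank) -> None:
    env.charge(100)     # the signature documents that only the bank is used

def notify(env: HasMailer) -> None:
    env.send("order accepted")

env = Env()
accept_order(env)   # one object to plumb, but a narrow interface per call site
notify(env)
```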

~~~
DaiPlusPlus
How would you manage scoped lifetimes and transient objects using the
Env/Context pattern?

~~~
eternalban
I created a minimal framework in C++ in the mid '90s around the concept of
Contextual Objects. Child contexts can be used to affect life-cycle scopes. In
this approach, the virtual construct of a 'containment context' allows for
managed life-cycles at an aggregate level. Delete the context and all child
objects (recursively) are deleted as well.
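The idea can be sketched in a few lines of Python (hypothetical names; not the
original C++ framework): deleting a context recursively disposes everything
created inside it.

```python
# A minimal sketch of "containment contexts": deleting a context
# recursively disposes its child contexts and owned objects.
class Context:
    def __init__(self, parent=None):
        self.children = []    # child contexts and owned objects
        if parent is not None:
            parent.children.append(self)

    def own(self, obj):
        self.children.append(obj)
        return obj

    def delete(self):
        for child in self.children:
            if isinstance(child, Context):
                child.delete()
            elif hasattr(child, "close"):
                child.close()
        self.children.clear()

class Resource:
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

root = Context()
request = Context(parent=root)   # a shorter-lived child scope
r = request.own(Resource())
request.delete()                 # disposes only the request-scoped objects
# r.closed is now True
```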

------
cryptica
I agree that dependency injection and dynamic scoping are similar and I think
they are both anti-patterns.

It doesn't make sense to stub out dependencies in unit tests (unless you
absolutely have to). Stubbing out dependencies is like stubbing out native
functions, operators, or loops. They are called dependencies for a good
reason: your class depends on them and assumes that they work. Trying to make
dependencies substitutable is overengineering; it leads to poor design and
gives you more work when implementing unit tests.

Dependency injection is particularly bad because it makes it difficult to find
the path to the dependency's source code (which is critical for debugging).
Hiding the source path of a dependency is a terrible anti-pattern. When it
comes to programming, there are few things more horrible than not being able
to determine where some buggy piece of logic is located. I cannot imagine any
use case where that would be a fair tradeoff.

Any kind of injection of dependencies should be done via an explicit
method/constructor parameter. Sometimes it means that the dependency instance
has to traverse a few classes in the hierarchy, but that's way better because
at least you can unambiguously track where the dependency came from. Also, if
the dependency has to traverse A LOT of classes, then you know there is
probably something wrong with your architecture (e.g. a dependency imported
too high in the hierarchy and used in a deeply nested class can indicate poor
separation of concerns and leaky abstraction between components).

~~~
mateuszf
> Any kind of injection of dependencies should be done via an explicit
> method/constructor parameter.

I would go further and always recommend manual DI - that is passing the
objects and creating them in code rather than depending on some magic
framework to do the job. That makes understanding / analyzing / fixing bugs a
lot easier.

It's actually not that hard / boilerplate-y to do even in Java (which is what
we do in our team).
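A sketch of what manual wiring looks like, in Python for brevity (the names
are made up): all construction happens in one composition root, and a test can
construct OrderService with fakes directly instead.

```python
# A hand-wired "composition root": dependencies are plain constructor
# arguments, assembled once at startup. All names here are hypothetical.
class BankService:
    def charge(self, amount):
        return f"charged {amount}"

class MailService:
    def send(self, msg):
        return f"sent: {msg}"

class OrderService:
    def __init__(self, bank, mailer):   # explicit manual injection
        self.bank = bank
        self.mailer = mailer

    def accept(self, amount):
        self.bank.charge(amount)
        return self.mailer.send(f"order for {amount} accepted")

def build_app():
    # The one place where the object graph is constructed; a test can
    # call OrderService(FakeBank(), FakeMailer()) directly instead.
    return OrderService(BankService(), MailService())

app = build_app()
result = app.accept(100)
```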

~~~
jcelerier
> I would go further and always recommend manual DI - that is passing the
> objects and creating them in code rather than depending on some magic
> framework to do the job.

That is so weird. The reason why we have DI and DI frameworks is _because_ we
are asked in requirements to make things configurable out-of-code (and at
runtime), because someone wants to use a pretty UI or a config file of sorts
to set up which behaviour is used to do $thing.

~~~
HereBeBeasties
> The reason why we have DI and DI frameworks is because we are asked in
> requirements to make things configurable out-of-code (and at runtime)

That's _a_ reason, but it's far from the only one. The way I've seen DI used
over the years is far more about separating out concerns in the code via
interfaces, or making things testable without side effects, than it is making
those things dynamically pluggable / controllable via config, not code. The
majority of interfaces passed as dependencies into constructors have a single
real implementation (outside of unit tests).

I lately err on the side of explicit manual wiring as the cost saving of doing
it all automatically does not usually offset the costs of extra magic and lack
of compile-time safety that it incurs. Exceptions are where there is a lot of
AoP interceptor stuff to wire in, or if I want something to help with
lifecycle scoping (e.g. per-request lifetime for a set of related things), at
which point a DI framework will likely start paying for itself.

~~~
mrec
I generally share your view, and have never really understood the appeal of
the magic woo frameworks. I asked an interview candidate about this once (he'd
spoken positively about them) and he very honestly said that yep, it's all
about bypassing change control restrictions.

Which is a pragmatic reason, but not a good one at the organizational level.
It's every bit as easy to break your system via a config change as via a code
change.

~~~
UK-Al05
I don't see how an IoC container would allow you to bypass change control.

Because config is not covered by change control in your case?

~~~
mrec
Their case, not mine, but yes, I gathered that change control could be skipped
for "config-only" changes.

Don't ask me why, it sounded crazy to me too.

------
matheusmoreira
These patterns are almost always working around language shortcomings.

For example, factories work around the new keyword. The new keyword in Java
emits the constructed type into the bytecode, making it a hard ABI dependency.
So people invented factories: they hide the new keyword behind methods that
return interfaces. In better languages such as Smalltalk, new is just a method
that can be overridden.

Singletons work around the fact that only objects can implement interfaces.
Classes are natural singletons, and yet they are second-class citizens in most
languages. It is not possible to pass the class itself to code expecting some
interface, so people are forced to create an object and add complicated
boilerplate code to prevent more than one instance from ever being created. In
better languages, classes are objects themselves; they can conform to
interfaces and be passed around normally.

~~~
LgWoodenBadger
In Smalltalk, how would the consumer be given differing implementations of
this "new" method? And wouldn't that just make the "new" method a factory?

~~~
matheusmoreira
> In Smalltalk, how would the consumer be given differing implementations of
> this "new" method?

Classes can simply override that method. The default implementation of new is:

      ^ self basicNew initialize

And basicNew is a method that allocates and returns an instance of the
receiver.

So a custom implementation can easily change self to some other class, chain
more messages or replace initialize with something else, add more logic before
an object is returned and so on.

> And wouldn't that just make the "new" method a factory?

Smalltalk actually predates the discovery of object-oriented design patterns
by a couple of decades so it's the factory methods that are like the new
method. For some reason language designers turned it into a magic keyword and
people rediscovered the fact that methods are better.

~~~
LgWoodenBadger
If my BubbleMachine currently makes SoapBubbles, but I want it to be able to
make GumBubbles as well, who, out of those three, is responsible for
overriding the “new” method to create GumBubbles instead of SoapBubbles?

~~~
matheusmoreira
Are these all subclasses of a Bubbles class? I think that'd be the natural
place for a custom new method that figures out which subclass to construct
based on the parameters.

In Java, an interface could have a static method that returns concrete
implementations of itself.
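In Python terms, that suggestion might look like this (hypothetical classes):
the base class carries a constructor-like classmethod that picks the concrete
subclass, playing the role of an overridable Smalltalk-style `new`.

```python
class Bubbles:
    @classmethod
    def new(cls, flavor="soap"):
        # The base class decides which subclass to construct,
        # based on the parameters it receives.
        return {"soap": SoapBubbles, "gum": GumBubbles}[flavor]()

class SoapBubbles(Bubbles):
    pass

class GumBubbles(Bubbles):
    pass

bubble = Bubbles.new("gum")   # a GumBubbles instance
```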

~~~
scroot
It's even better. If it's just making stuff, you don't even need
`BubbleMachine` if you have the `Bubbles` base class. You can add creation
methods on the class side of `Bubbles` like so:

    Bubbles class >> #newGum
      ^ GumBubbles new

    Bubbles class >> #newSoap
      ^ SoapBubbles new

The difference here is that the base class still serves as a true base: it
will have all the common functionality for various kinds of bubbles

------
barrkel
DI is a module system built out of classes; dynamic scoping is another way to
build a module system.

If you had a good composable (parameterized) module system, you'd have much
less need of DI. A composable module system would scope the lookup of type
names and static methods to the actual module arguments the accessing module
is constructed with.

The problem with `new X()` vs `@Inject X x` is that the construction of X in
the former has no indirection; type names are global constants. A module
system provides an indirection. Dynamic scoping could also provide an
indirection, because dynamic scoping lets you redefine / redirect those
otherwise constant things.

(DI in practice does a bunch more, like proxies to let you put data with
different lifetimes (request, session) in a mixed object graph; and the fact
that proxies now exist means aspect-oriented programming sticks its head in
and encourages its use for things like auth and transactions. Once you go over
the edge of the DI barrier to acceptance, "best practices" shift dramatically
- you end up quite far from where you started.)

------
adrianmonk
> _Second, we can now pass in different implementations of our dependencies
> when executing in test. This is very good, but let me rephrase that in more
> general terms: the values associated with certain names are now dependent on
> the environment in which we are executing._

Earlier, they established the idea that this was a bad thing by having the
printGreeting() function print a message you probably don't want. The change
in the value caused bad behavior.

However, with dependency injection, you should be following the Liskov
substitution principle. You might get different values, but they should all be
following the same contract.

The acceptOrder() function might get different implementations of BankService,
but the difference should be opaque to it. Calling bank.chargeMoney() should
work the same regardless of which one you got.

The reason I bring this up is that the crux of the argument presented here
against dynamic scoping is, "Dynamic scoping makes it hard to figure what our
program actually does, without executing it, and that’s not a quality we want
our programs to exhibit."

To the extent that you successfully pull off having different subclasses
follow the same contract, this weakness doesn't really apply to DI (or
inversion of control).

------
zubspace
> The problem with this style of programming is of course that we have to pass
> the Env around everywhere

So why not make it a singleton? Or even better, make it a static class with
some static properties?

Yes I know, I said something evil! But before you take out the pitchforks,
bear with me:

        public static class Master
        {
            public static ISupplyService Supply;
            public static IBankService Bank;
            public static IMailService Mailer;
        }

This is C# and I actually LOVE this pattern. I have seen it referred to as
Master-Pattern, but I don't know the correct term. It solves a lot of
problems:

* You don't have to pass essential modules around anymore. You can initialize the Master once and access them everywhere without caring about their implementation.

* Code completion does wonders on this one. You simply type "Mast..." and it will show you ALL available modules. You don't have to remember anything. It's awesome for new team members.

* It is fast. In release builds you can even try to replace the interfaces with the actual classes implementing it for a straight method call.

It introduces a few problems:

* You increase coupling. Once you add a module, removing it or significantly changing the interface is tricky.

* You need to be careful to NOT couple modules to each other, and if you do, do it rarely. Otherwise you will have to instantiate the modules in a specific order which gets cumbersome.

In my opinion: If you are a small team and have sane teammates, give this one
a try. It reduces boilerplate a lot and reduces the amount of arguments you
need to pass around everywhere.

~~~
bluejekyll
How do you test? One of the things you get with dependency injection is a
method to replace dependencies with other objects, specifically to do things
like unit testing with mocks or integration testing with objects that provide
specific failure modes.

This is much harder to do when everything is static.

Edit: some typos.

~~~
zubspace
All fields in the Master class are interfaces. You simply assign mock
implementations in the testing scenario, once in a startup routine. Done.

~~~
tmk1108
How do you deal with parallel tests execution? Given that they are all static,
you can't have multiple tests running at the same time setting up the mocks or
they might overlap

~~~
zubspace
You can add a static "InitTest" method somewhere which initializes the
modules. Use a double-checked locking pattern [1] in there to make sure that
you instantiate and assign the modules used for testing exactly once.

[1] [https://en.wikipedia.org/wiki/Double-checked_locking](https://en.wikipedia.org/wiki/Double-checked_locking)
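A sketch of that InitTest routine in Python (Master, FakeBank, and FakeMailer
are stand-ins for the C# classes above; note that in C# or Java the flag would
additionally need to be volatile for the pattern to be correct):

```python
import threading

# Minimal stand-ins for the static C# Master class and its test doubles.
class Master:
    Bank = None
    Mailer = None

class FakeBank: ...
class FakeMailer: ...

_lock = threading.Lock()
_initialized = False

def init_test_modules():
    """Double-checked locking: assign the test doubles exactly once,
    even if parallel test workers race to call this."""
    global _initialized
    if _initialized:             # fast path, no lock taken
        return
    with _lock:
        if not _initialized:     # re-check under the lock
            Master.Bank = FakeBank()
            Master.Mailer = FakeMailer()
            _initialized = True

# Every test calls this in setup; only the first call does any work.
init_test_modules()
first_bank = Master.Bank
init_test_modules()
# Master.Bank is still the same instance after repeated calls
```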

------
msluyter
In the java OrderService example, the author writes:

"Second, we can now pass in different implementations of our dependencies when
executing in test. This is very good, but let me rephrase that in more
general terms: the values associated with certain names are now dependent on
the environment in which we are executing. This should sound very familiar,
dependency injection is just a more controlled form of dynamic scoping."

This seems slightly off to me, unless I'm misremembering my java. In the
OrderService example, the _value_ of the variable `bank` cannot change because
it's a reference: it'll always refer to the same object. However, if the
instantiated `BankService` is mutable, then the internals of that object could
change in various ways. Hence, in practice, the dangers of this pattern only
seem problematic if dependencies have mutable state.

Back when I was doing java, we used spring beans everywhere for this sort of
thing and iirc, they had no mutable state. In Python, I use a similar pattern
a lot, where I have classes that are in practice 'immutable once initialized'
-- though of course, in Python you could always mess around with the
internals at runtime -- which are segregated from classes or objects that have
mutable state. (Similar to the structure described here:
[https://medium.com/unbabel/refactoring-a-python-codebase-using-the-single-responsibility-principle-ed1367baefd6](https://medium.com/unbabel/refactoring-a-python-codebase-using-the-single-responsibility-principle-ed1367baefd6))

Of course, I get that you can't in practice know how stateful everything in
your dependency graph is. But I think the real problem here (if there is one)
isn't explicit DI in the form of dependency passing, but (unexpected/hidden)
object mutability.

------
jakub_g
[Offtopic info for the author]

I see [1] you're using <frameset> to wrap GitHub Pages with your own domain.
You could do it in a less hacky way by creating a CNAME file in GitHub repo,
and updating GitHub repo settings + DNS settings of the domain:

[https://help.github.com/en/github/working-with-github-pages/managing-a-custom-domain-for-your-github-pages-site](https://help.github.com/en/github/working-with-github-pages/managing-a-custom-domain-for-your-github-pages-site)

(Unless there's some particular reason why you don't want that? I'd be
curious. Ability to gather server-side logs?)

[1] I learnt this because the page fails to load with the uMatrix extension,
so I checked the source.

~~~
glun
No, I only did it that way because I couldn't get it to work with a CNAME
file. However, the guide you linked is much more detailed than what I
originally read, so I'll give it another go. Thank you.

------
cousin_it
With dynamic scoping, when you need a DatabaseConnection, someone up the stack
from you still has to construct it manually. With dependency injection, the
framework can construct it for you.

I think maybe a better analogy for dependency injection is imports. When
library A imports library B which imports library C, they can just declare
that, nobody needs to assemble "new A(new B(new C()))". Dependency injection
is the same thing, but instead of libraries you have stateful objects, like "a
RequestHandler needs a RequestContext which needs a DatabaseConnection". Maybe
these tasks could even be handled by the same tool, but I haven't seen such a
tool yet.
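A toy version of such a tool, resolving constructor dependencies from type
hints, might look like this in Python (the classes are illustrative, not a
real framework):

```python
import inspect

# Hypothetical object graph: a RequestHandler needs a RequestContext,
# which needs a DatabaseConnection.
class DatabaseConnection:
    pass

class RequestContext:
    def __init__(self, db: DatabaseConnection):
        self.db = db

class RequestHandler:
    def __init__(self, ctx: RequestContext):
        self.ctx = ctx

def resolve(cls, cache=None):
    """Recursively construct cls from its constructor's type hints,
    sharing one instance of each class (like "new A(new B(new C()))")."""
    cache = {} if cache is None else cache
    if cls in cache:
        return cache[cls]
    params = inspect.signature(cls.__init__).parameters
    args = [resolve(p.annotation, cache)
            for name, p in params.items()
            if name != "self" and p.annotation is not inspect.Parameter.empty]
    cache[cls] = cls(*args)
    return cache[cls]

handler = resolve(RequestHandler)
# handler.ctx.db is a DatabaseConnection, assembled without manual wiring
```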

~~~
barrkel
Imports, yes; more precisely, modules with parameterized dependencies.

------
thunderbong
This is my favourite explanation of dependency injection:

[https://www.jamesshore.com/Blog/Dependency-Injection-Demystified.html](https://www.jamesshore.com/Blog/Dependency-Injection-Demystified.html)

From the article -

 _The Really Short Version_

 _Dependency injection means giving an object its instance variables. Really.
That's it._

------
drblast
I think dependency injection can have its uses, but the way I see it used in
practice, it looks like someone should file a bug on whatever missing
programming language features made dependency injection necessary in the
first place.

I think _most_ code should be purely functional and unit tested that way,
which means that the only dependencies are the input parameters. "Mock"
dependencies used in unit tests are usually a unit-test circle jerk; most of
the time you're essentially testing that your programming language can indeed
make method calls through an interface and those tests are only there because
you added DI in the first place. It's common to see all kinds of testing like
this but nothing testing the actual functionality of the code because that's
so obscured by DI or mocked out. It _feels_ like you're implementing
comprehensive testing but it's mostly just additional complexity obscuring the
fact that you're not actually testing anything real.

The code that can't be functional? Sure, go ahead and knock yourself out with
dependency injection and IOC. It's great for not having to pass configuration
and logging instances around. But it's being abused when it's all over the
place and you can't look at code and figure out what it does without also
looking at configuration files and startup classes and knowing how the flavor-
of-the-month DI framework works.

------
larzang
I don't really see this comparison as being particularly useful. There isn't
really a choice between DI and dynamic scoping; there's just a choice between
application architectures, which is largely dictated by language.

The Clojure example works because all your foreign symbols are coming from
namespaced imports, which is to say you have a Module architecture, and your
only polymorphism lever is altering the namespace mapping to point to an
equivalent module.

With Java and the like you have a Service architecture, so rather than
importing symbols you're injecting services. Things that would otherwise be
exported as bare functions tend to be written as classes so that they can be
used as services.

While some languages can support either (e.g. JS is typically modular but
there are things like Bottle if you want to write JS in a service-based way),
it's not typical that a given application is going to support both
simultaneously. Most languages have a clear idiomatic choice that you're
likely not going to want to stray from, to avoid headaches integrating 3rd
party dependencies and developers other than yourself.

------
CGamesPlay
In JavaScript, Jest provides this using module mocks. The code under test
imports a module, but in the test environment that module has been swapped out
with a mocked implementation. [https://jestjs.io/docs/en/jest-object#jestmockmodulename-factory-options](https://jestjs.io/docs/en/jest-object#jestmockmodulename-factory-options)

------
moutansos
One language that actually utilizes true dynamic scoping is PowerShell. It's
true that this is an extremely powerful idea that can even override imported
functions from a parent scope for things like testing, but it can very much be
a nightmare. It leaves the programmer completely unaware of where a function
or variable is declared, or whether it is declared at all. Imagine that while
testing you have overridden a piece of code that is an interface into a real
data layer in a real system, but you forget to declare the override in the
parent scope of the test, or worse yet, declare it and misspell the name of
the function you meant to override. You accidentally call the real function
and start manipulating data in a real system. It becomes a nightmare. DI and
IoC don't have these issues because they rely on explicit passing of the
dependency. So, like many things: with great power comes great responsibility.

------
catern
>This should sound very familiar, dependency injection is just a more
controlled form of dynamic scoping.

This is not really true. For one, it puts too much focus on "dependency
injection", which is just one usage of a more general feature: passing
arguments to functions.

Is dynamic scoping the same thing as passing arguments to functions? Well,
they are certainly closely related; for example, see the "implicit parameters"
paper[0]. But I think it is incorrect to say that converting a function to
take more arguments is "emulating dynamic scoping", any more than function
arguments in general are "emulating dynamic scoping".

[0]
[https://www.researchgate.net/publication/2808232_Implicit_Pa...](https://www.researchgate.net/publication/2808232_Implicit_Parameters_Dynamic_Scoping_with_Static_Types)

------
FpUser
Satisfying a dependency can mean setting up a pointer to a function, passing
an instance of an already created interface, asking some factory to supply
it, and so on. As with everything in life, there is no single universal
answer as to the best way to accomplish the task. What one uses in practice
depends very much on problem complexity, one's experience and personal
preferences, implementation language and environment features, etc.

I think there is no real need to dwell on the subject without knowing the
particular context.

------
ceronman
In Python's standard library there is the unittest.mock module [1] which
allows you to patch functions and methods. For example (here `fake.get` is a
test double and `find_definition` is the code under test):

    from unittest.mock import patch

    with patch('requests.get', fake.get):
        definition = find_definition('testword')

[1]
[https://docs.python.org/3/library/unittest.mock.html](https://docs.python.org/3/library/unittest.mock.html)

------
anaphor
In Racket (and probably other Schemes) you can do

    (define foo (make-parameter "some value you want by default"))

    (define (do-foo) (do-something (foo)))

    (do-foo) ; uses the regular foo

    (parameterize ([foo "my injected foo"]) (do-foo))

And it all just works as you would expect. `foo` gets automatically reset to
the prior value when it leaves the scope of the current parameterize block.
You can also nest them safely, or use them in macros, and IIRC even inside
threads.

------
rb808
I prefer Service Locator to DI. Martin Fowler talked about it a long time ago;
not sure if he's changed his mind since.
[https://martinfowler.com/articles/injection.html](https://martinfowler.com/articles/injection.html)

------
mollusk
I don’t quite see the benefits of Clojure’s dynamic binding over with-redefs
for tests as demonstrated here. Personally I have only used Clojure’s dynamic
binding very rarely, mostly in production scenarios to override some default
value from a library and not for test cases.

------
catern
>Some examples of languages that use dynamic scoping by default are APL, Bash,
Latex and Emacs Lisp.

bash isn't dynamically scoped by default; it has only global scope by default.
You have to use "local" to dynamically scope a variable binding.

~~~
kazinator
That's like saying classic Lisp isn't dynamically scoped by default; you have
to use _let_ instead of just _setq_.

~~~
catern
No. When I set the value of a symbol in a dynamically scoped by default
language, it only affects the closest binding for that symbol.

When I assign in bash without "local", it affects global scope. There is no
way to do the corresponding behavior of "setq" in bash.

~~~
kazinator
It so happens that I actually know what I'm talking about in this area, with
regard to both languages/families:

    
    
      #!/bin/bash
    
      v=xyz
    
      f1()
      {
         printf "v = %s\n" $v
         v=clobber
      }
    
      f2()
      {
         local v="abc"
         f1
         f1
      }
    
      f2
      f1
      f1
    

Output:

    
    
      v = abc
      v = clobber
      v = xyz
      v = clobber
    

Almost line-for-line analogous Common Lisp program. Defining the variable is
mapped to _defparameter_ , _local_ mapped to _let_ and _setq_ to assignment:

    
    
      (defparameter v 'xyz)
    
      (defun f1 ()
        (format t "v = ~a~%" v)
        (setq v 'clobber))
    
    
      (defun f2 ()
        (let ((v 'abc))
          (f1)
          (f1)))
    
      (f2)
      (f1)
      (f1)
    

Output (from clisp):

    
    
      v = ABC
      v = CLOBBER
      v = XYZ
      v = CLOBBER

~~~
catern
That seems to be the case. My original statement was wrong then. My mistake -
for some reason I thought the bash behavior was different.

------
klysm
Scala implicits handle this really nicely

