Dependency injection is dynamic scoping in disguise (gustavlundin.com)
199 points by r4um on Oct 31, 2019 | 164 comments



I really enjoyed the comparison of dependency injection with dynamic scoping, and the explanation of how the latter can take over the use cases of the former with less boilerplate.

But one benefit of dependency injection that goes unacknowledged in this article is the explicitness of dependencies: the need to pass them in forces the caller to be aware of which dependencies exist, and changes in dependencies cannot be ignored (they lead to compilation errors in statically typed languages, at least).

Managing dependencies with dynamic variables, on the other hand, is implicit. It's impossible to know which parts of the dynamic environment are used by a module without inspecting its source code. And changes to the module's dependencies are not noticed by callers, which may lead to cases where tests fail to stub out particular side effects without anyone noticing.

Given this drawback, dependency injection still seems like the better trade-off to me, despite the higher amount of boilerplate it requires. Perhaps it is possible to bring some of the explicitness to the dynamic scoping approach, though.
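To make the contrast concrete, here is a rough Python sketch (the names are made up; contextvars stands in for dynamic scoping): with constructor injection the dependency is visible in the signature, while with a dynamic variable nothing tells the caller that it must be bound.

    import contextvars

    # Explicit: the dependency is part of the constructor signature,
    # so callers can't miss it, and changes to it break call sites.
    class OrderService:
        def __init__(self, bank):
            self.bank = bank

        def accept_order(self, amount):
            self.bank.charge(amount)

    # Implicit: the dependency lives in a dynamic variable; nothing in
    # the function's signature says that bank_var must be set first.
    bank_var = contextvars.ContextVar("bank")

    def accept_order(amount):
        bank_var.get().charge(amount)  # LookupError if nobody bound it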


Now I'm flashing back to a system that passed important configuration via globals, so to call functions that relied on these globals, you had to carefully make sure that the global environment was in the right state before you called certain functions.

It was awful.


Locales in C are an example of this.

https://github.com/mpv-player/mpv/commit/1e70e82baa9193f6f02...

> The locale (via setlocale()) is global state, and global state is not a reasonable way to do anything.

> It will break libraries, or well modularized code.

> (The latter would be forced to strictly guard all entrypoints to set/restore locales, assuming a single threaded world.)

> On top of that, setting a locale randomly changes the semantics of a bunch of standard functions.

> If a function respects locale, you suddenly can't rely on it to behave the same on all systems.


I've worked on similar systems; never again. God objects are something that should be avoided so that code stays readable and maintainable.


One common pattern in this system was that you'd first save the current environment, run the special "environment preparation" function, then the real function, and then write back the saved environment over the modified one, so that you didn't leave the environment changed after you returned. Unless, of course, you meant to change it. This was sometimes documented: you'd write into a docstring which globals a function expected and which it modified.
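In pseudo-Python, the dance looked something like this (a reconstruction, not the actual code):

    GLOBALS = {"verbose": False, "locale": "C"}

    def prepare_environment():
        GLOBALS["verbose"] = True        # mutate the shared state

    def frobnicate():
        if GLOBALS["verbose"]:           # reads globals, not arguments
            print("frobnicating loudly")

    saved = dict(GLOBALS)                # save the current environment
    prepare_environment()                # run the "preparation" function
    frobnicate()                         # run the real function
    GLOBALS.clear()
    GLOBALS.update(saved)                # write the saved environment back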

This was probably only in the top ten of the problems that this thing had, but I do remember it vividly. Making any change was like pulling out a Jenga block and replacing it without toppling the tower.


> One common pattern in this system was that you'd first save the current environment, run the special "environment preparation" function, then the real function, and then write back the saved environment over the modified one, so that you didn't leave the environment changed after you returned.

That sounds exactly like dynamic scoping.


Sounds like OpenGL or HTML canvas to me, if I had to make a contemporary comparison with what I know.

Fun times, having bugs with OpenGL because of this. Oh, fun times :-D


Oh man. It's not as bad now with shaders, but I remember how horrible learning fixed pipeline OpenGL was. You'd try to write some simple code, the result would be a black screen, and you'd just keep adding glEnable/glDisable calls to your code over and over until you figured out what invisible piece of global state was ruining your day.


Oh man, I'm not sure if I'm imagining it, but that sounds like me in 2012.


This sounds like a system we use at work. Go take a look at Tera Term.

In the realm of programming languages it... works. It gets the job done for simple tasks. If you want to do anything complex, watch out.


Aha, I see you're familiar with my first programs in PHP 4.


This reminds me of a lot of Matlab code I had to read in grad school :/.


> But one benefit of dependency injection unacknowledged in this article is that dependency injection is the explicitness of dependencies: the need to pass them in forces the caller to be aware of which dependencies exist, and changes in dependencies cannot be ignored (they lead to compilation errors in statically typed languages, at least).

This no longer appears to be true, as most projects nowadays do dependency injection via frameworks like Guice or Spring. Instead of the caller injecting all dependencies, the caller simply tells the framework that it wants an instance of FooBar, and relies on the framework to magically retrieve all dependencies and use them to construct the instance of FooBar.
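Roughly like this toy Python sketch of a container (made-up resolve function, just to show the shape; real frameworks use annotations/reflection and do far more):

    import inspect

    class MailService:
        pass

    class BankService:
        pass

    class FooBar:
        def __init__(self, mail: MailService, bank: BankService):
            self.mail, self.bank = mail, bank

    def resolve(cls):
        # Read the constructor's type annotations and recursively
        # construct whatever the class asks for.
        params = inspect.signature(cls.__init__).parameters
        deps = {name: resolve(p.annotation)
                for name, p in params.items()
                if p.annotation is not inspect.Parameter.empty}
        return cls(**deps)

    foobar = resolve(FooBar)   # the caller never names the dependencies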


In production, yes - but in most unit and integration test code you still provide explicit constructor parameters.

Many frameworks also provide for dependency validation that can be performed when the program first starts up - you can then extend your build process to execute this validation code as part of a CI/CD process - so while it's not strictly speaking compiler-enforced correctness, it's still better than getting a nasty surprise in production.

Aside: I'd love to see a T4-based static DI object factory for my .NET projects, which would be the best of both worlds: full static compiler-enforced dependency correctness without needing to code it by hand. It should be possible to make one using EnvDTE or Roslyn. Would anyone be interested in that?


The framework will fail at startup if it can't, though - thankfully, Spring defaults to eager initialization of all beans. But yes, you're in deep water when you make beans lazy, or if you have no choice but to make most things lazy for performance reasons, like in PHP.


Plus, passing it via function parameters allows you to do things like currying (outside of the constructor case, I suppose)

  const foo = (env) => (arg1, arg2) => {
    ...
  }

  const fooWithEnv = foo(myEnv)


That's what the Reader Monad does under the hood. See eg https://eli.thegreenplace.net/2018/haskell-functions-as-func...


I think this is a very good point, but I also think good code organization can come a long way in addressing it.

In my opinion, a good test suite should contain mostly module level tests[1]. You stub out interactions with other modules (if you both read and write to another module you should use a handwritten test double rather than a mock) but leave your own module mostly unchanged. Perhaps you replace some configuration (like changing the DB driver to run against an in-memory database), perhaps you replace the entire persistence layer, but that should be about it.
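For example, a handwritten test double in Python might look like this (hypothetical names; the point is that it has real, if trivial, behavior instead of per-test mock setup):

    class FakeMailGateway:
        # Handwritten test double: records writes, no mocking framework.
        def __init__(self):
            self.sent = []

        def send(self, to, body):
            self.sent.append((to, body))

    def signup_user(mail, address):        # the unit under test
        mail.send(address, "Welcome!")

    def test_welcome_mail():
        gateway = FakeMailGateway()
        signup_user(gateway, "ada@example.com")
        assert gateway.sent == [("ada@example.com", "Welcome!")]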

The points where your module interacts with other modules and external systems can be isolated to a single file or package. Sometimes this means just aliasing a function or class, sometimes it means writing a proper facade. This makes it easy to figure out what needs to change when testing.

I also don't think that this is a problem that dependency injection frameworks are helpful in addressing. Looking at constructors is not much better than reading the method implementations -- you still have to look at every file in the codebase. Manual dependency injection with handwritten or generated (à la Dagger) factories does solve the problem completely, though.

[1] I think proper unit tests should be reserved for functions that are computational or algorithmic in nature and complicated calculations/algorithms are rare in the domain I'm currently working in, though this would be different in other domains. You'll also always want some real system level tests, but not too many since they're darn slow.


> Given this drawback, dependency injection still seems like the better trade-off to me, despite of its higher amount of required boilerplate. Perhaps it is possible to bring some of the explicitness to the dynamic scoping approach, though.

There's a difference between a program that's industrial-strength versus a toy project.

Any program that's industrial strength will require effort in strengthening, regardless of the pattern chosen.

(Honestly, I really wish dependency injection was a language feature.)


Effort in strengthening being required at all is independent of pattern chosen.

Amount of effort very much depends.


It seems like this could be a language feature. The key would be an explicit declaration of the identifiers that the function expects to be bound in the calling context. So when declaring the function you optionally specify the dynamically scoped arguments (separately from regular arguments) that callers must have in scope. Doing so allows you to use those dynamic variables within the body of the function.
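You can sketch the idea in Python today with contextvars; the requires decorator below is made up, standing in for the hypothetical language feature:

    import contextvars
    import functools

    db = contextvars.ContextVar("db")

    def requires(*cvars):
        # Made-up decorator standing in for the language feature:
        # fail fast if a declared dynamic variable isn't bound.
        def wrap(fn):
            @functools.wraps(fn)
            def inner(*args, **kwargs):
                for cv in cvars:
                    cv.get()  # raises LookupError if unbound
                return fn(*args, **kwargs)
            return inner
        return wrap

    @requires(db)
    def save_order(order):
        db.get().append(order)  # a list standing in for a real store

    db.set([])             # bind the dynamic variable in this context
    save_order("order-1")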


Scala implicits. It's a curried arg list you can omit, but the compiler checks that you have implicit values of all the needed types in scope. Or you can pass whatever you want explicitly.

https://docs.scala-lang.org/tour/implicit-parameters.html


Scala's implicit parameters and Haskell's typeclass instances are this, correct?


I'm not sure how you'd easily override the typeclass instance you're going to get. (Haskell typically only allows a single instance per type.)

OCaml lets you swap out the equivalent of the typeclass instance. But it's a pain, because their system also means that you always have to specify which instance you want.


Dependency injection is typically used with a system to automate the wiring and take care of the boilerplate.


"passing them in" isn't really dependency injection, it's just taking arguments.


I agree that dependency injection and dynamic scoping are similar and I think they are both anti-patterns.

It doesn't make sense to stub out dependencies in unit tests (unless you absolutely have to). Stubbing out dependencies is like stubbing out native functions, operators or loops. They are called dependencies for a good reason: your class depends on them and assumes that they work. Trying to make dependencies substitutable is overengineering and leads to poor design and gives you more work when implementing unit tests.

Dependency injection is particularly bad because it makes it difficult to find the path to the dependency's source code (which is critical for debugging). Hiding the source path of a dependency is a terrible anti-pattern. When it comes to programming, there are few things more horrible than not being able to determine where some buggy piece of logic is located. I cannot imagine any use case where that would be a fair tradeoff.

Any kind of injection of dependencies should be done via an explicit method/constructor parameter. Sometimes it means that the dependency instance has to traverse a few classes in the hierarchy, but that's way better because at least you can unambiguously track where the dependency came from. Also, if the dependency has to traverse A LOT of classes, then you know there is probably something wrong with your architecture (e.g. a dependency imported too high in the hierarchy and used in a deeply nested class can indicate poor separation of concerns and leaky abstraction between components).


> Any kind of injection of dependencies should be done via an explicit method/constructor parameter.

I would go further and always recommend manual DI - that is passing the objects and creating them in code rather than depending on some magic framework to do the job. That makes understanding / analyzing / fixing bugs a lot easier.

It's actually not that hard / boilerplate-y to do even in Java (which is what we do in our team).


Yeah, I've been working with a lot of AWS Lambdas the past year, and one of the design decisions was to limit our jar sizes because of warm-up time concerns. So libs like Spring or Guice are not allowed, and I can say we really haven't missed them, as we just construct new objects by passing in their dependencies when needed.


> I would go further and always recommend manual DI - that is passing the objects and creating them in code rather than depending on some magic framework to do the job.

That is so weird. The reason why we have DI and DI frameworks is because we are asked in requirements to make things configurable out-of-code (and at runtime), because someone wants to use a pretty UI or a config file of sorts to set up which behaviour is used to do $thing.


> The reason why we have DI and DI frameworks is because we are asked in requirements to make things configurable out-of-code (and at runtime)

That's a reason, but it's far from the only one. The way I've seen DI used over the years is far more about separating out concerns in the code via interfaces, or making things testable without side effects, than it is making those things dynamically pluggable / controllable via config, not code. The majority of interfaces passed as dependencies into constructors have a single real implementation (outside of unit tests).

I lately err on the side of explicit manual wiring as the cost saving of doing it all automatically does not usually offset the costs of extra magic and lack of compile-time safety that it incurs. Exceptions are where there is a lot of AoP interceptor stuff to wire in, or if I want something to help with lifecycle scoping (e.g. per-request lifetime for a set of related things), at which point a DI framework will likely start paying for itself.


I generally share your view, and have never really understood the appeal of the magic woo frameworks. I asked an interview candidate about this once (he'd spoken positively about them) and he very honestly said that yep, it's all about bypassing change control restrictions.

Which is a pragmatic reason, but not a good one at the organizational level. It's every bit as easy to break your system via a config change as via a code change.


I don't see how an IoC container would allow you to bypass change control.

Because config is not covered by change control in your case?


Their case, not mine, but yes, I gathered that change control could be skipped for "config-only" changes.

Don't ask me why, it sounded crazy to me too.


I've always found passing objects explicitly to be superior to DI when it comes to unit testing as well. Build the object you want, then test any expected side-effects of the method. And if you can't construct an object exactly as needed, then mock objects work almost as well.


It's not superior, because it's still DI - just manual rather than automatic.


What's weird about it? In practice it just means writing classes which instead of newing things inside the constructor, allows them to be passed in by the caller.

This is a pretty fundamental aspect of polymorphism and composition.


> What's weird about it?

What I meant about weird was that OP's proposed "solution" to DI is to "do it manually", while the exact reason why we want DI in the first place is not to do it manually (or, in the way that I heard it most in my life, "to be able to change it without rebuilding")


> What I meant about weird was that OP's proposed "solution" to DI is to "do it manually", while the exact reason why we want DI in the first place is not to do it manually (or, in the way that I heard it most in my life, "to be able to change it without rebuilding")

No, the main reason we want DI is to change the behaviors based on what's passed. That's dependency injection. A framework passing the dependencies is automatic dependency injection, which is what I'm advocating against.


> No, the main reason we want DI is to change the behaviors based on what's passed.

but which behaviours are passed depend on something external to the system - config files, etc. - so you need something that gets the information from somewhere and instantiates the correct class - and trust me, you don't want to write

    Protocol proto = null;
    if (config.equals("protocolA"))
      proto = new ProtocolA();
    else if (config.equals("protocolB"))
      proto = new ProtocolB();
    else if (config.equals("protocolC"))
      proto = new ProtocolC();
    else if (config.equals("protocolD"))
      proto = new ProtocolD();
    else
      throw new IllegalArgumentException(config);

    return new MyObjectWithDependencies(proto);
especially when your system supports >50 protocols, and your object also needs a logger, which can itself be of 12 different kinds, a file accessor, which can mmap or not according to configuration & OS, etc., etc.
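And even if you collapse the chain into a lookup table by hand, you've only built the innermost piece of what the container gives you (a Python sketch with hypothetical classes):

    class ProtocolA: pass
    class ProtocolB: pass

    # The if/else chain as data; a DI container is roughly this table,
    # plus recursive construction of every entry's own dependencies,
    # lifetime management, configuration binding, and so on.
    PROTOCOLS = {
        "protocolA": ProtocolA,
        "protocolB": ProtocolB,
        # ...dozens more
    }

    def make_protocol(name):
        try:
            return PROTOCOLS[name]()
        except KeyError:
            raise ValueError("unknown protocol: " + name)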


Imo, dependency injection frameworks are distinct from the concept of DI. They do not represent DI itself, which is just a design pattern.


Agree, it reduces boilerplate and prevents you from shooting yourself in the foot when it comes to writing testable code.


> That is so weird. The reason why we have DI and DI frameworks is because we are asked in requirements to make things configurable out-of-code (and at runtime), because someone wants to use a pretty UI or a config file of sorts to set up which behaviour is used to do $thing.

Which is ironic in the case of DI frameworks that rely on in-code annotations to wire dependencies, especially in compiled languages.

Let's not even get started with languages that do not allow ad-hoc object/value creation from external manifests directly, like Go.


Yes, this is a good way to ensure that you aren't doing horrible, horrible things inadvertently. When you can magic up a dependency from a service locator or DI container, you don't have to think as much about what you are actually doing. Having to look at a constructor with seventeen parameters passed in highlights that you've made a mess of your design.


Do you have code examples of this?


Well, you just create an object and set it as a field / pass it via constructor to the object that uses it. There are dependency injection frameworks, but the term dependency injection doesn't mean injecting automatically; it's more general. It just means that the dependency is provided from outside, not how it's done.

https://en.wikipedia.org/wiki/Dependency_injection


> It doesn't make sense to stub out dependencies in unit tests (unless you absolutely have to).

The long string of assertions that follows this remark is quite wrong and misses the whole point of unit tests. You are expected to write tests that test a specific unit of software, and that unit alone. You are expected to write tests that provide well defined input parameters to your unit of software, free from side effects, and you are expected to put together combinations of those input parameters that enable you to cover an adequate amount of code paths within your unit of software. By far the best and easiest way to pull this off is by mocking interfaces.

When you start testing multiple units of software interacting, or writing tests that rely on specific behaviors of external units of software, then you're already testing one level above what you're supposed to be testing in the test pyramid: integration tests.


Exactly. I don't want to assume gp doesn't understand the difference between unit and integration tests, but I think the only alternative conclusion is they don't think there is value in unit tests. I disagree, and happily inject mock objects for testing.


The problem with unit tests + heavy mocking is that it essentially makes any sort of class reorganisation a massive pain.

I try for sociable unit tests (it's a thing), in which classes that are expected to work together 100% of the time are tested together. Mock out the external dependencies (HTTP etc.) or things that are expected to change.

I tend to only mock out/simulate primary ports (inputs, such as HTTP requests) and secondary ports (database calls) for simple web apps.

I may start isolating stuff for testing purposes when it can be used well independently, such as a piece of middleware, or things that can be used by multiple parts of a project.

The main difference between sociable unit tests and integration tests is that traditional integration tests still talk to the DB.


Part of the problem with unit tests that mock everything is that they are very time consuming to write. Minor changes in how classes interact result in 5x the time needed to update unit tests.

Furthermore, unit tests don't really test product use cases.

Personally, I find testing a single class with mocking best for edge cases, error conditions, or when a class is something like a custom collection that has few outside dependencies.


> Part of the problem with unit tests that mock everything is that they are very time consuming to write. Minor changes in how classes interact result in 5x the time needed to update unit tests.

Unit tests are supposed to verify a software unit's design contract. If a minor change breaks your design contract then either the change is not minor at all or you have more serious problems to deal with than writing unit tests.


While not GP, I'll chime in and say I find very limited value in having unit tests and integration tests covering the same ground - and that I'd almost always rather have the integration tests if I'm just going to keep one of them.

There are occasions when I'll mock objects, but it's rare. Working in gamedev, maybe I'm just in an environment where the possible perf/isolation benefits of mocking for the purposes of isolating a unit just aren't as pronounced?

There are still situations where dependency injection is a wonderful decoupling tool, but I can't even remember the last time I used it exclusively for the purposes of mocking.


There is a lot of disagreement about what a unit test is outside the realm of academia. The way I'm suggesting may lead to overlapping tests with subcomponents, but I would argue that this is OK.

If you have a car and you want to write a unit test for the wheel, does it make sense to stub out the tire? Or if you have a door, should you stub out the door handle in your unit test? I would say no because a good test is still about the functionality of the unit/component.

Testing higher level components doesn't stop you from also writing tests for the tire or for the door handle. If the higher level components have a well abstracted interface, then you should be able to substitute any child component with any other working component later and the higher level component tests should still work without any changes required. You can even change the interface of children/sub-components; doesn't matter, the parent component tests keep working. Note that this is not true if you stub out the children, you need to keep updating the stubs.


The value of the unit test is only as good as the (mock) data you feed it. Your mocked object is great and all your units pass; a user asks for the bathroom and the bar burns down because the bathroom was never mocked. What value was added by that passing unit test?


I wouldn't claim that unit tests are completely sufficient; you clearly also need integration tests. But you don't need all of your tests to be integration tests; in fact that would often be needlessly expensive and annoying.


I think you've inadvertently backed up the GP's point. DI makes it impossible to have single units of work, as you are quite literally injecting other units into that unit. Mocking is all fine and good, but in the case of DI, you're not mocking because you have an external interface; you're mocking because you don't have single units.


I'm confused by what you mean. DI doesn't introduce any more dependencies compared to a non-DI approach (like hardcoding dependencies via module import or static methods), so how does it change the unit of work? DI doesn't add additional things to mock. DI simply means that dependencies are explicitly passed in. In a non-DI approach, they are implicit but still exist.


DI with sane defaults can reduce the minimum number of dependencies. The other thing DI does is make it easier to create acyclic, layered dependencies, which promotes true modularity and reduces churn.


> you are quite literally injecting other units into that unit

In tests, you will usually inject a mock of any dependencies that you don't want to test, in order to isolate the unit you do want to test.


One of the best ways I've found to illuminate poor designs is to not allow the use of mocking frameworks and then see how much pain follows in the test code.


Let's say I have a newsletter generation service for a website. Users can opt out of emails. I use AWS SES for sending emails. I would like to test that emails are sent unless you've opted out.

First, I don't know how to accomplish this without mocking / replacing the email dependency before testing, even in an integration-style test. Emails cost money, and how would I even verify that I did send the email? Second, I really am failing to see how this is poor design, and am genuinely curious.

I can think of so many scenarios where some “orchestrating” service uses multiple single unit services to pull off some task. To me, mocking is incredibly useful for ensuring that you test a lot of failure cases in these scenarios by mocking to force an exception.

A lot can be accomplished via integration tests rather than mocking (actually using a database, for example). But to somehow suggest mocking is across-the-board bad seems like an illogical position.


The email service I use has a library that allows me to set the Host as a constructor parameter, so I have an EmailHost environment configuration value that I change. It's localhost for dev/testing and set to the correct host for production. My test code then listens for the API calls from that library, uses a different library to parse the email, and then I can verify the content.

No code changes between dev, test, and production, but I added an environment variable to represent the e-mail host so that I can change where e-mails are sent as needed.

I dislike mocking, so I took the time to figure that out and set it up. If mocking works for you, I likely wouldn't go out of my way to change things to get rid of it.


Wait a minute, your test code spins up a SERVER? Dear God, that’s just horrible… Sorry, man, but that’s just… horrible.


That's actually really common for distributed systems stuff as a way to run something in between full integration and unit tests. I think there's some official support for, e.g. local file stuff mimicking the interface of DynamoDB.


I think it works well to separate code into 'algorithms' that make a decision based on pure data and return it as pure data, 'side effects' which represent actions that can occur (and are also very mockable), and 'coordinators' which rig these things together. In a dynamically typed language this pattern can be more challenging because there is a lot more burden to prove that the integration of various components is rigged up properly, but in a language like Scala this can work incredibly well.

The bulk of the testing in this design would be easy-to-write unit tests around the 'algorithm' functions. The coordinators have almost zero logic in them, so they are mostly just there to tie components together and generally demand less unit testing. Since the coordinators coordinate actions and algorithms, they'd likely have dependencies (the actions in this case, generally) passed in using constructor injection or parameters to a function.
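A tiny Python sketch of that shape (the domain is made up):

    # 'Algorithm': a pure decision on plain data, trivially unit-testable.
    def decide_discount(order_total, is_member):
        if is_member and order_total > 100:
            return 0.10
        return 0.0

    # 'Side effect': an action, easy to swap out in tests.
    def charge_card(amount):
        print(f"charging {amount:.2f}")

    # 'Coordinator': rigs the two together, almost no logic of its own.
    def checkout(order_total, is_member, charge=charge_card):
        discount = decide_discount(order_total, is_member)
        charge(order_total * (1 - discount))

    checkout(120.0, True)   # charges 108.00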


Instead of relying on a mocking framework to mock the injected service, can you inject a service which implements an interface and then supply whatever type of implementation you like depending on the context you'd like to test?

This is one of the neat things about Java 8 Functions. I'm not sure about what their design decisions were but effectively they backed into coding to interface vs. implementation, which used to be a solid design principle.


You could, but it’s tedious to write a new class for every scenario you want to test. Or to program for every scenario within that test implementation. And now you kind of should test your test implementation, and who wants to do that?

Mocking makes it quick, easy, straightforward, and cuts out a ton of code noise from the test.


You say "expected/supposed to" a number of times here without ever giving a reason or mentioning what tradeoffs are involved. Who cares about the testing pyramid? Theories of software development are as ubiquitous as blog posts. Which is right and why and when?


'Trying to make dependencies substitutable is overengineering and leads to poor design...'

I absolutely agree with this[1]. However, DI does not require this. You can do DI without substitution by default, you can still easily do unit testing, and you can make dependencies explicit. To be fair, I had to build my own framework to make this work exactly the way I wanted, but I also wanted a built in actor framework. I don't have examples of how I do testing, or other features I use, but you can get a feel for explicit injection by default from the readme I made: https://github.com/caseymarquis/KC.Actin

I'm sure there's some established DI framework which uses concrete types by default, and substitution when required. If not, it's not rocket science to roll your own.

1. Substitution by default is useful when you're building a library for mass consumption, but libraries typically don't need DI.


> injection of dependencies should be done via an explicit method/constructor parameter.

Yes! This functional approach is so simple yet so powerful and magicless.

I've recently refactored a small 20k LOC codebase and the first thing I did was to ditch some magical dependency injector in favor of passing dependencies as parameters to functions. Often dependencies are functions themselves making it straightforward to compose and test.


does that mean you went from

    def send_alerts():
      for person in everyone:
        Mail.send(person, type="alert")
where Mail.send calls an object that was created with Mail.create(env=os.environ["ENV"])

to

    def send_alerts(mail: Mail):
      for person in everyone:
        mail.send(person, type="alert")
where send_alerts is called as

    send_alerts(Mail.create(env=os.environ["ENV"]))

?


Not the OP, but yes I imagine that's what they did. But Mail would probably be created further up (probably at the top level in the main function), and passed to send_alerts, rather than created inline.

I use this approach in all my Haskell/Scala code, and it works really nicely. I have never once felt the need for some magicy DI framework. I just pass the dependencies via function arguments. If it becomes too bloated, I group dependencies, or rethink my design.


Doesn't that mean you need to bubble up every dependency up the stack? Like if function a calls b which calls c which calls d, and d needs a new dependency, wouldn't that involve modifying a, b, and c's function signatures and all call sites?


Yep, it does require a little more work when bubbling dependencies. However, it's less of a hassle than it might seem, since changing dependencies is less common than I initially anticipated.


group dependencies as in

    def send_alerts(services):
      for person in everyone:
        services.mail.send(person, type="alert")


?


That is one option, yeah. I wouldn't do it in this example though. I'd keep it as Mail.


Many third party APIs that your code may depend on are rate limited, cost per use, brittle, etc. If you run tests before check-in or once per integration build, you can easily end up incurring costs or having unreliable tests and false positives.

Slow tests are tests that aren't run. Unit tests that depend on databases and APIs are slower, increase the deployment cycle, and usually end up being skipped.

> Trying to make dependencies substitutable is overengineering and leads to poor design and gives you more work when implementing unit tests.

How is it a unit test if it can fail due to other dependencies?

> Hiding the source path of a dependency is a terrible anti-pattern. When it comes to programming, there are few things more horrible than not being able to determine where some buggy piece of logic is located. I cannot imagine any use case where that would be a fair tradeoff.

If you are only using dependency injection to enable unit testing, finding the real implementation is trivial - it's the only concrete implementation of the interface passed to the constructor.

> Any kind of injection of dependencies should be done via an explicit method/constructor parameter. Sometimes it means that the dependency instance has to traverse a few classes in the hierarchy, but that's way better because at least you can unambiguously track where the dependency came from.

Again if you have only one concrete implementation and you’re using a DI framework, finding the implementation of an interface is trivial with any halfway decent IDE.


A "unit test" that relies on a database connection, isn't a unit test, rows from a table can be mocked. Data from all external APIs can be mocked.

If you want to run a full suite of integration tests, be my guest, but none of those will ever be unit tests.


Who are you arguing with? Does anything in my reply imply that I think it's good idea not to mock dependencies?


> Dependency injection is particularly bad because it makes it difficult to find the path to the dependency's source code

I agree.

To me it seems that the problem is globals. In the Java example, it's convenient to think of "the bank" as global, right up to the point where it isn't (need to substitute a test service, or different banks for different regions etc.) DI replaces global variables. "Magic" DI where this is handled invisibly by a framework lets you pretend it's still a global variable, with all the advantages and disadvantages of that.

I too prefer "explicit" DI where the dependencies are passed in. This does tend to lead to constructing a "world" object which carries around all the pseudo-global variables.

Perhaps this would benefit from language support in some way, just as Perl has $_ to allow non-explicit parameter passing.


I keep seeing posts about wanting “explicit” DI instead of “Magic” DI. The article’s author only showed examples of using constructor injection. No one is arguing that it should be done any other way.


> Perhaps this would benefit from language support in some way,

Scala has implicit parameters, which are declared explicitly in your method signature, but aren't passed explicitly into the method call (so long as they exist in scope as an implicit where the call occurs). They can be confusing, but they can also help.


With most DI frameworks you explicitly pass things in via the constructor. I don't get it.


There can be some magic with passing, though.


> Any kind of injection of dependencies should be done via an explicit method/constructor parameter.

In this article, that's the only kind of injection mentioned. I realize a lot of the times people talk about DI they're referring to magic injectors like Spring's, but in this article, and in the Wikipedia article on DI, DI and magic are not synonyms. When the injector is just the constructor that's still DI, but it isn't an anti-pattern.


I like dependency injection as an assembly mechanism in OOP languages like Java, so that I just don't have to go around with "new"s everywhere constructing everything I need.

I don't like having to use mocks and stubs, and I think that using those everywhere is indeed an anti-pattern.

Having interfaces for stuff that won't really have polymorphism, that are just there for DI or because some guy likes interfaces, is just wrong IMO. An interface is a layer of misdirection; if you don't have a use for it, don't use it. If eventually a use comes, then you put an interface there; with modern IDEs there are even shortcuts to extract an interface from a class and do the renamings if you want. So please don't do "interfaces by default" like some DI apologists seem to love.


I would also like to add that I do love Spring's new functional bean definition for Kotlin; it keeps my Kotlin stuff cleaner, and it's easier to replace if I get fed up.


> It doesn't make sense to stub out dependencies in unit tests (unless you absolutely have to). Stubbing out dependencies is like stubbing out native functions, operators or loops. They are called dependencies for a good reason; because your class depends on them and assumes that they work. Trying to make dependencies substitutable is overengineering and leads to poor design and gives you more work when implementing unit tests.

This doesn't really make sense when it comes to web services and distributed systems. Often you can mirror a service dependency with a code dependency--whether that code dependency is a proper client library or just an ad-hoc client module that encapsulates the particular service calls you have to make--and then inject that dependency. It's really valuable to be able to stub those out if you don't want your unit tests making actual service calls.

> Dependency injection is particularly bad because it makes it difficult to find the path to the dependency's source code (which is critical for debugging).

...

> Any kind of injection of dependencies should be done via an explicit method/constructor parameter.

In my relatively limited experience with Spring, explicit constructor parameters are the idiomatic mechanism, combined with annotations. And when you follow this idiom, it's actually pretty easy to trace back to the dependency in IntelliJ.


> Dependency injection is particularly bad because it makes it difficult to find the path to the dependency's source code

Can you expand? Not sure I understand. Is it more difficult because the dependency is separated by an abstraction layer? Imo if DI is done right, the dependencies are always explicitly injected, so there shouldn't really be any confusion about what the 'source' is.


They're explicitly injected somewhere. That somewhere is often distant both in time and space from where the dependency is used - for example, if the injection happened when that object was constructed, as in the example the original article used.

If many injections are possible, then when I'm at a debugger breakpoint in the code where the dependency is used, I don't have access to the call stack of how, where and why the object was injected with it.

If it's usually just "the one" dependency except when testing, then when I'm looking at the code of a particular function, I don't have an easy way to see the code that will actually get executed there, because it "could" be (but isn't) anything.


> dependency injection [is an] anti-pattern.

> injection of dependencies should be done via an explicit method/constructor parameter

The second sentence seems to invalidate the first (and I agree with the second but not the first). If you're injecting dependencies using language features, you're still doing dependency injection - and doing it a lot better than the useless Spring "framework".


My experience is different. Inversion of control is a pattern that works just about anywhere: JavaScript, C, Kotlin, Ruby, you name it. I've dealt with a lot of code bases where writing tests was hard because the developers did not understand this simple design pattern. E.g. a lot of JavaScript frontend code ends up being hard to test for this reason. Anytime you mix object creation and logic, you make it harder to unit test.

In Java, using frameworks for this is common mainly because it has reflection and annotations, which means you don't actually have to manually call a lot of things to get your objects. E.g. in Spring, all you need to do is slap the annotation @Component on your component classes and it sort of self-assembles the object graph from just that information. It's kind of neat if you do this properly. There are tons of ways to customise that or do it differently, but it can be pretty minimalistic.

You can do this without frameworks as well (look up DIY dependency injection). It basically just means that you write the code that constructs your objects yourself and do the right thing of not putting that code in the wrong place (which results in hard-to-test code).

There are only two simple rules you need to remember: constructors must not do work (like constructing other objects or initializing some middleware) and all dependencies come in as constructor arguments. If you do that consistently, you are doing DIY dependency injection. Any time that looks tedious, you are likely violating some of the SOLID principles (e.g. because your constructor has 10 dependencies and the whole class suffers from poor cohesiveness).
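A minimal Python sketch of those two rules (DIY, no framework; names made up):

    class Database:
        def query(self, sql):
            return []

    class UserRepository:
        # Rule 2: all dependencies come in as constructor arguments...
        def __init__(self, db):
            # Rule 1: ...and the constructor does no work beyond assignment.
            self.db = db

        def all_users(self):
            return self.db.query("SELECT * FROM users")

    # The 'wrong place' would be UserRepository calling Database() itself.
    # DIY injection: one composition root constructs the object graph.
    def main():
        db = Database()
        repo = UserRepository(db)
        print(repo.all_users())

    main()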

In Kotlin, the trend is to move away from using reflection or annotations towards using more explicit DSLs with helper functions that figure out how to construct stuff at compile time (by using the type system and some language features). For example, KOIN https://github.com/InsertKoinIO/koin is a framework for Kotlin that uses neither reflection nor annotations. The same principles are used in Kofu, which is a Kotlin-centric way of doing similar things for Spring. Aside from being easier to debug (both of these use simple function calls), it also has the side effect of enabling better startup performance as well as native compilation using e.g. Graal.

Whichever way you do this, IoC-ready components are easy to unit test because nothing gets constructed as a side effect of constructing them. Side-effect-free construction might be a better term. I was recently working on some Python code that was the opposite, where DB initialization happened as a side effect of importing some module that exposed a global variable. I would call that broken by design. Python has all the tools you need to do better than that; there's no technical need to write broken code like that. I've seen similar problems in JavaScript code bases, for frontend as well as Node.js. Usually if you find a system that is hard to unit test, this is the root cause.


> Inversion of control is a pattern that works just about anywhere.

That was what confused me about IoC/DI when I first started hearing the term: it seems so fundamental that it had never occurred to me that there was any _other_ way to write software. I was surprised that anybody was surprised by this pattern.


> Any kind of injection of dependencies should be done via an explicit method/constructor parameter.

These kinds of statements make me question whether you have ever worked with any framework that is not under your control.


Yes, and sensible frameworks use constructor injection -- see ASP.NET Core.


Author here. After posting this to reddit I realized that the original title is wrong, and poorly reflects the actual point I'm trying to make. Dependency injection is not dynamic scoping, but the latter can be used to achieve the former. I'm drafting an update to better reflect this. I'm also going to pull out reader monads and env passing into separate sections and give reader monads a better treatment in general.


The 'Env' is typically called 'Context'.

These mechanisms address functional requirements in component-oriented systems, but in the industry they have been misunderstood and misused to satisfy testing requirements.

And if one is not doing pervasive component reuse across multiple systems and projects, the one-off usage of DI is of course completely over-engineered and likely a poor design decision.


And one obvious refinement of the `Env`/`Context` God-dependency pattern is to have it implement a bunch of fine-grained interfaces for subsets of the dependencies it aggregates, so that you can both reduce plumbing (only one thing to inject) and still make it clear which specific dependencies a given call might use.
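A sketch of that refinement in Python, using structural Protocols for the fine-grained interfaces (names made up):

    from typing import Protocol

    class Mailer(Protocol):
        def send_mail(self, to: str, body: str) -> None: ...

    class Bank(Protocol):
        def charge(self, amount: float) -> None: ...

    class Env:
        # The one aggregate object that gets plumbed around.
        def send_mail(self, to: str, body: str) -> None:
            print(f"mail to {to}")

        def charge(self, amount: float) -> None:
            print(f"charged {amount}")

    # Call sites declare only the slice they need, so the narrow
    # dependency stays visible even though one object is injected.
    def accept_order(bank: Bank, amount: float) -> None:
        bank.charge(amount)

    env = Env()
    accept_order(env, 42.0)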


How would you manage scoped lifetimes and transient objects using the Env/Context pattern?


I created a minimal framework in C++ in the mid-'90s around the concept of Contextual Objects. Child contexts can be used to affect life-cycle scopes. In this approach, the virtual construct of a 'containment context' allows for managed life-cycles at an aggregate level. Delete the context and all child objects (recursively) are deleted as well.


I really liked this essay -- I found it very clearly explained and it pointed out something I ought to have known but somehow had never previously realized.

My only complaint is that when I went to read through the other articles on your blog and perhaps add it to my RSS feed, there weren't any other articles! This can't have been the first thing you've written... is there a place I can read some of the others?


Thank you. This warms my heart to hear. However, this is the first thing I've written outside of isolated reddit comments. But I do have a backlog with a few articles that I have mapped out but haven't gotten around to writing yet.


Please do. IMO, this was very well written, well-paced and interesting.


Seconding the sibling comment that I'd love to read more of your posts, and the most likely way I'll notice a new post is if the RSS feed works :).


If the title was less click-baity I probably wouldn't have read your article. The article itself was perfect. It was short enough, and it was informative enough. If you want to expand on a subject it's probably better to make a new URL and link to it, or if good articles already exist, link to those.


There's an example of dynamically scoped variables that will be familiar to almost everybody: environment variables, which are passed dynamically across program invocations.
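For instance (a Python sketch with a made-up LOG_LEVEL variable): the child process below reads LOG_LEVEL from the environment it inherited, not from any parameter - the same shape as a dynamically scoped variable.

    import os
    import subprocess
    import sys

    # The variable is bound in the caller's environment, not passed
    # explicitly; every transitive child sees it, like a dynamic variable.
    child_env = dict(os.environ, LOG_LEVEL="debug")
    subprocess.run(
        [sys.executable, "-c", "import os; print(os.environ['LOG_LEVEL'])"],
        env=child_env,
    )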


Can you share a link to the reddit discussion? I tried searching but I couldn't find the post.


I like that the examples were in a range of different languages.

Not sure how well that works for other people, but I think the way you wrote it makes them understandable even for, e.g., people who don't really know Java.


Your essay reminded me of the local function extension in gcc. A local function has access to the variables in scope when it was defined.


> Author here.

Can you make it so your site uses HTTPS?

Can you make it so it doesn't just load a frame pointing to github.io?

Both combined seem very sketchy to me.


Good on you for coming here to clarify! :)


These patterns are almost always working around language shortcomings.

For example, factories work around the new keyword. The new keyword in Java emits the constructed type into the bytecode, making it a hard ABI dependency. So people invented factories: they hide the new keyword behind methods that return interfaces. In better languages such as Smalltalk, new is just a method that can be overridden.

Singletons work around the fact that only objects can implement interfaces. Classes are natural singletons and yet they are second class citizens of most languages. It is not possible to pass the class itself to code expecting some interface. So people are forced to create an object and add complicated boilerplate code to prevent more than one instance from ever being created. In better languages, classes are the same as objects; they can conform to interfaces and be passed around normally.
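Python is on that side of the fence too: a class is an ordinary object, so it can be handed to anything expecting a factory, with no singleton or factory boilerplate. A toy illustration:

    class RealConnection:
        def fetch(self):
            return "live data"

    class FakeConnection:
        def fetch(self):
            return "canned data"

    # The class itself is passed around like any value; calling it
    # constructs an instance, so it already *is* the factory.
    def run_report(connection_factory):
        conn = connection_factory()
        return conn.fetch()

    print(run_report(RealConnection))   # "live data"
    print(run_report(FakeConnection))   # "canned data"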


See the classic "design patterns are missing language features" presentation by Peter Norvig, aka "design patterns in dynamic languages"[0], which is from 1998.

[0] https://norvig.com/design-patterns/



I think people tend to re-discover this over and over :) I myself thought I had a great insight on this topic while learning CLOS[0], only to realize I was a few decades late.

[0] https://en.wikipedia.org/wiki/Common_Lisp_Object_System


I don't know about Smalltalk, but in other languages, a constructor is just a function, and you can "override" it by passing in a different function.

The flaw in Java is that constructors aren't functions, so you need to wrap them. Wrapping is easy in simple cases but gets annoying fast when you have lots of arguments.

To get around this, you can group arguments using a struct, and sometimes there's a natural way to do it, but that gets annoying in languages without lightweight, easy-to-use structs.


In Smalltalk, how would the consumer be given differing implementations of this "new" method? And wouldn't that just make the "new" method a factory?


> In Smalltalk, how would the consumer be given differing implementations of this "new" method?

Classes can simply override that method. The default implementation of new is:

  ^ self basicNew initialize
And basicNew is a method that allocates and returns an instance of the receiver.

So a custom implementation can easily change self to some other class, chain more messages or replace initialize with something else, add more logic before an object is returned and so on.

> And wouldn't that just make the "new" method a factory?

Smalltalk actually predates the discovery of object-oriented design patterns by a couple of decades, so it's the factory methods that are like the new method. For some reason language designers turned it into a magic keyword, and people rediscovered the fact that methods are better.


If my BubbleMachine currently makes SoapBubbles, but I want it to be able to make GumBubbles as well, who, out of those three, is responsible for overriding the “new” method to create GumBubbles instead of SoapBubbles?


Are these all subclasses of a Bubbles class? I think that'd be the natural place for a custom new method that figures out which subclass to construct based on the parameters.

In Java, an interface could have a static method that returns concrete implementations of itself.


It's even better. If it's just making stuff, you don't even need `BubbleMachine` if you have the `Bubbles` base class. You can add creation methods on the class side of `Bubbles` like so:

  Bubbles class >> newGum
    ^ GumBubbles new

  Bubbles class >> newSoap
    ^ SoapBubbles new

The difference here is that the base class still serves as a true base: it will have all the common functionality for various kinds of bubbles


A Bubbles class, which is basically the same as a factory. I don't see a huge difference in practice myself.


Yes, it would be a factory, the difference being that you don't have to use a different syntax (`Klass.build` / `new Klass` would both just be `Klass new`).


In this case DI seems to be compensating for the lack of a proper module system, as in SML.


I wouldn't call hyper-object-oriented languages like Smalltalk, Io, and Ruby "better", just different. The ABI is only a shortcoming if you value aesthetics over other considerations.


It's not just aesthetics. I mentioned the ABI issue because it is the technical reason why factories exist. Writing "new X" makes X part of the compiled code. This makes it impossible to swap X for Y and delete X later without breaking binary and even source compatibility. This is important for libraries and code reuse.

I don't think it is controversial to say those languages are better. A language with limitations that must be constantly worked around must be worse than a language without those problems.


Languages are only good or bad in the context of performing a task. Ranking them in the abstract is a silly waste of time that engineers love to emotionally engage in.

The ABI is also important for performance! You aren’t making this comparison in good faith. You can easily acknowledge the entire trade off rather than just calling one better.


You're right. I acknowledge that languages do pay a price for their flexibility and dynamism: efficient implementation becomes much harder.


I also acknowledge that smalltalk has literally the dream ide and tooling. Ruby will take decades to catch up.


DI is a module system built out of classes; dynamic scoping is another way to build a module system.

If you had a good composable (parameterized) module system, you'd have much less need of DI. A composable module system would scope the lookup of type names and static methods to the actual module arguments the accessing module is constructed with.

The problem with `new X()` vs `@Inject X x` is that the construction of X in the former has no indirection; type names are global constants. A module system provides an indirection. Dynamic scoping could also provide an indirection, because dynamic scoping lets you redefine / redirect those otherwise constant things.

(DI in practice does a bunch more, like proxies to let you put data with different lifetimes (request, session) in a mixed object graph; and the fact that proxies now exist means aspect-oriented programming sticks its head in and encourages its use for things like auth and transactions. Once you go over the edge of the DI barrier to acceptance, "best practices" shift dramatically - you end up quite far from where you started.)


> Second, we can now pass in different implementations of our dependencies when executing in test. This is very good, but let me rephrase that in more general terms: the values associated with certain names are now dependent on the environment in which we are executing.

Earlier, they established the idea that this was a bad thing by having the printGreeting() function print a message you probably don't want. The change in the value caused bad behavior.

However, with dependency injection, you should be following the Liskov substitution principle. You might get different values, but they should all be following the same contract.

The acceptOrder() function might get different implementations of BankService, but the difference should be opaque to it. Calling bank.chargeMoney() should work the same regardless of which one you got.

The reason I bring this up is the crux of the argument presented here against dynamic scoping is, "Dynamic scoping makes it hard to figure what our program actually does, without executing it, and that’s not a quality we want our programs to exhibit."

To the extent that you successfully pull off having different subclasses follow the same contract, this weakness doesn't really apply to DI (or inversion of control).


> The problem with this style of programming is of course that we have to pass the Env around everywhere

So why not make it a singleton? Or even better, make it a static class with some static properties?

Yes I know, I said something evil! But before you take out the pitchforks, bear with me:

    public static class Master
    {
        public static ISupplyService Supply;
        public static IBankService Bank;
        public static IMailService Mailer;
    }

This is C# and I actually LOVE this pattern. I have seen it referred to as Master-Pattern, but I don't know the correct term. It solves a lot of problems:

* You don't have to pass essential modules around anymore. You can initialize the Master once and access them everywhere without caring about their implementation.

* Code completion does wonders on this one. You simply type "Mast..." and it will show you ALL available modules. You don't have to remember anything. It's awesome for new team members.

* It is fast. In release builds you can even try to replace the interfaces with the actual classes implementing it for a straight method call.

It introduces a few problems:

* You increase coupling. Once you add a module, removing it or significantly changing the interface is tricky.

* You need to be careful to NOT couple modules to each other, and if you do, do it rarely. Otherwise you will have to instantiate the modules in a specific order which gets cumbersome.

In my opinion: If you are a small team and have sane teammates, give this one a try. It reduces boilerplate a lot and reduces the amount of arguments you need to pass around everywhere.


We have that style of "Master-Pattern" and no matter how small a team you have it introduces problems. It's still a hidden dependency, so you don't know if you've forgotten to initialize it before you get an exception. At least for us we ran into the problem that we wanted to make some changes that meant it was no longer a singleton - but now we had to perform a major refactoring.

So you could use a good DI framework such as SimpleInjector and get all the benefits without all the problems. You only have to initialize once, and then it's injected into constructors when needed, and you can inject different implementations depending on context.


This is the service locator pattern. The biggest complaints are implicit dependencies, and that it's hard to control lifetimes.

So this has the advantages and disadvantages of that.
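
For reference, a minimal service locator in Java (a sketch with hypothetical names): callers pull dependencies out of a shared registry instead of receiving them, which is exactly why the dependencies become implicit.

    import java.util.HashMap;
    import java.util.Map;

    final class ServiceLocator {
        private static final Map<Class<?>, Object> services = new HashMap<>();

        // Registration happens once at startup (or in test setup).
        static <T> void register(Class<T> type, T impl) {
            services.put(type, impl);
        }

        // Any code, anywhere, can resolve a service; nothing in its
        // signature reveals that it does so.
        static <T> T resolve(Class<T> type) {
            return type.cast(services.get(type));
        }
    }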


How do you test? One of the things you get with dependency injection is a way to replace dependencies with other objects, specifically to do things like unit testing with mocks or integration testing with objects that provide specific failure modes.

This is much harder to do when everything is static.

Edit: some typos.


All fields in the Master class are interfaces. You simply assign mock implementations in the testing scenario, once in a startup routine. Done.


How do you deal with parallel test execution? Given that they are all static, you can't have multiple tests running at the same time setting up the mocks, or they might overlap.


You can add a static "InitTest" method somewhere which initializes the modules. Use a double-checked locking pattern [1] in there (sketched below) to make sure that you instantiate and assign the modules used for testing exactly once.

[1] https://en.wikipedia.org/wiki/Double-checked_locking
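
A sketch of what that could look like in Java, assuming a static Master holder like the one sketched earlier and hypothetical fake implementations; the volatile flag plus double-checked locking ensures the test doubles are assigned exactly once, even if several tests race to initialize:

    final class TestBootstrap {
        private static volatile boolean initialized = false;

        static void initTest() {
            if (!initialized) {                      // first check, lock-free
                synchronized (TestBootstrap.class) {
                    if (!initialized) {              // second check, under lock
                        Master.bank = new FakeBankService();    // hypothetical test doubles
                        Master.mailer = new FakeMailService();
                        initialized = true;          // volatile write publishes the assignments
                    }
                }
            }
        }
    }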


This also isn't thread safe, but that's not a problem if you don't run parallel tests in the same process. Using the same stateless test doubles in all tests also solves the problem. I often find myself wanting stateful test doubles though.


> It is fast. In release builds you can even try to replace the interfaces with the actual classes implementing them for a straight method call.

Isn't the interface mainly for the compiler, and at runtime it's just a straight method call anyway?


Depends on the language/compiler/platform. In C# there's a minuscule difference [1].

[1] https://stackoverflow.com/q/7225205/998987


In the Java OrderService example, the author writes:

"Second, we can now pass in different implementations of our dependencies when executing in test3. This is very good, but let me rephrase that in more general terms: the values associated with certain names are now dependent on the environment in which we are executing. This should sound very familiar, dependency injection is just a more controlled form of dynamic scoping."

This seems slightly off to me, unless I'm misremembering my Java. In the OrderService example, the reference `bank` never changes after construction: it always refers to the same object. However, if the instantiated `BankService` is mutable, then the internals of that object could change in various ways. Hence, in practice, the dangers of this pattern only seem problematic if dependencies have mutable state.

Back when I was doing Java, we used Spring beans everywhere for this sort of thing and, IIRC, they had no mutable state. In Python, I use a similar pattern a lot, where I have classes that are in practice 'immutable once initialized' -- though of course, in Python you can always mess around with the internals at runtime -- which are segregated from classes or objects that have mutable state. (Similar to the structure described here: https://medium.com/unbabel/refactoring-a-python-codebase-usi...)

Of course, I get that you can't in practice know how stateful everything in your dependency graph is. But I think the real problem here (if there is one) isn't explicit DI in the form of dependency passing, but (unexpected/hidden) object mutability.
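
A small Java sketch of that "immutable once initialized" style (hypothetical names): every field is final, so an injected instance can never change underneath its callers.

    final class BankClient {
        private final String endpoint;   // fixed at construction, never reassigned

        BankClient(String endpoint) {
            this.endpoint = endpoint;
        }

        String endpoint() { return endpoint; }

        void chargeMoney(String account, long cents) {
            // uses only final state and its own arguments
        }
    }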


[Offtopic info for the author]

I see [1] you're using <frameset> to wrap GitHub Pages with your own domain. You could do it in a less hacky way by creating a CNAME file in the GitHub repo, and updating the GitHub repo settings + the DNS settings of the domain:

https://help.github.com/en/github/working-with-github-pages/...

(Unless there's some particular reason why you don't want to? I'd be curious. The ability to gather server-side logs?)

[1] Learnt this because the page fails to load with the uMatrix extension, so I checked the source.


No, I only did it that way because I couldn't get it to work with a CNAME file. However, the guide you linked is much more detailed than what I originally read, so I'll give it another go. Thank you.


With dynamic scoping, when you need a DatabaseConnection, someone up the stack from you still has to construct it manually. With dependency injection, the framework can construct it for you.

I think maybe a better analogy for dependency injection is imports. When library A imports library B which imports library C, they can just declare that, nobody needs to assemble "new A(new B(new C()))". Dependency injection is the same thing, but instead of libraries you have stateful objects, like "a RequestHandler needs a RequestContext which needs a DatabaseConnection". Maybe these tasks could even be handled by the same tool, but I haven't seen such a tool yet.
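
As one concrete example of "the framework can construct it for you" (a sketch using Guice; the class names come from the comment above, and the wiring is an assumption): each class declares its dependencies in its constructor, and the container assembles the chain, so nobody writes new RequestHandler(new RequestContext(new DatabaseConnection())).

    import com.google.inject.Guice;
    import com.google.inject.Inject;

    class DatabaseConnection {
        @Inject DatabaseConnection() {}
    }

    class RequestContext {
        final DatabaseConnection db;
        @Inject RequestContext(DatabaseConnection db) { this.db = db; }
    }

    class RequestHandler {
        final RequestContext ctx;
        @Inject RequestHandler(RequestContext ctx) { this.ctx = ctx; }
    }

    public class Wiring {
        public static void main(String[] args) {
            // Guice follows the constructor declarations and builds the
            // whole chain, much like a module system resolving imports.
            RequestHandler handler =
                Guice.createInjector().getInstance(RequestHandler.class);
        }
    }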


Imports, yes; more precisely, modules with parameterized dependencies.


This is my favourite explanation of dependency injection -

https://www.jamesshore.com/Blog/Dependency-Injection-Demysti...

From the article -

The Really Short Version

Dependency injection means giving an object its instance variables. Really. That's it.
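
In code, that one-liner looks something like this (a Java sketch with names borrowed from the thread; the "without DI" comment shows what the injection replaces):

    interface BankService { void chargeMoney(String account, long cents); }

    class OrderService {
        private final BankService bank;   // an instance variable...

        // ...that the object is *given*, instead of creating it itself
        // (without DI it would do: this.bank = new RealBankService();)
        OrderService(BankService bank) {
            this.bank = bank;
        }
    }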


I think dependency injection can have its uses, but the way I see it used in practice, it looks like someone should file a bug against the programming language for whatever missing features make dependency injection necessary.

I think most code should be purely functional and unit tested that way, which means that the only dependencies are the input parameters. "Mock" dependencies used in unit tests are usually a unit-test circle jerk: most of the time you're essentially testing that your programming language can indeed make method calls through an interface, and those tests are only there because you added DI in the first place. It's common to see all kinds of testing like this but nothing testing the actual functionality of the code, because that's so obscured by DI or mocked out. It feels like you're implementing comprehensive testing, but it's mostly just additional complexity obscuring the fact that you're not actually testing anything real.
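
A tiny sketch of the contrast (hypothetical names): a pure function needs no mocks at all, just inputs and an assertion.

    import java.util.List;

    final class Pricing {
        // Pure: the only dependency is the input parameter.
        static long totalCents(List<Long> lineItemCents) {
            return lineItemCents.stream().mapToLong(Long::longValue).sum();
        }
    }

    // In a test, no DI or mocks needed:
    //   assert Pricing.totalCents(List.of(100L, 250L)) == 350L;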

The code that can't be functional? Sure, go ahead and knock yourself out with dependency injection and IOC. It's great for not having to pass configuration and logging instances around. But it's being abused when it's all over the place and you can't look at code and figure out what it does without also looking at configuration files and startup classes and knowing how the flavor-of-the-month DI framework works.


I don't really see this comparison as being particularly useful. There isn't really a choice between DI or dynamic scoping, there's just a choice between application architectures, which is largely dictated by language.

The Clojure example works because all your foreign symbols are coming from namespaced imports, which is to say you have a Module architecture, and your only polymorphism lever is altering the namespace mapping to point at an equivalent module.

With Java and the like you have a Service architecture, so rather than importing symbols you're injecting services. Things that would otherwise be exported as bare functions tend to be written as classes so that they can be used as services.

While some languages can support either (e.g. JS is typically modular, but there are things like Bottle if you want to write JS in a service-based way), it's not typical that a given application is going to support both simultaneously. Most languages have a clear idiomatic choice, and you're likely not going to want to stray from it, to avoid headaches integrating third-party dependencies and developers other than yourself.


In JavaScript, Jest provides this using module mocks. The code under test imports a module, but in the test environment that module has been swapped out with a mocked implementation. https://jestjs.io/docs/en/jest-object#jestmockmodulename-fac...


One language that actually uses true dynamic scoping is PowerShell. It's true that this is an extremely powerful idea that can even override imported functions from a parent scope for things like testing, but it can very much be a nightmare. It leaves the programmer completely unaware of where a function or variable is declared, or whether it is declared at all. Imagine that while testing you have overridden a piece of code that is an interface into a real data layer in a real system, and you forget to declare the override in the parent scope of the test - or, worse yet, declare it and misspell the name of the function you should be overriding. You accidentally call the real function and start manipulating data in a real system. It becomes a nightmare. DI and IoC don't have these issues because they rely on explicit passing of the dependency. So, like many things: with great power comes great responsibility.


>This should sound very familiar, dependency injection is just a more controlled form of dynamic scoping.

This is not really true. First off, this puts too much focus on "dependency injection", which is just one use of a more general feature: passing arguments to functions.

Is dynamic scoping the same thing as passing arguments to functions? Well, they are certainly closely related; for example, see the "implicit parameters" paper[0]. But I think it is incorrect to say that converting a function to take more arguments is "emulating dynamic scoping", any more than function arguments in general are "emulating dynamic scoping".

[0] https://www.researchgate.net/publication/2808232_Implicit_Pa...


Satisfying a dependency can mean setting up a pointer to a function, passing an instance of an already created interface, asking some factory to supply it, etc. As with everything in life, there is no single universal answer for the best way to accomplish the task. What one uses in practice depends very much on the problem's complexity, one's experience and personal preferences, the implementation language and environment features, etc.

I think there is no real need to dwell on the subject without knowing the particular context.


In Python's standard library there is the unittest.mock module [1] which allows you to patch functions and methods. For example:

    from unittest.mock import patch

    with patch('requests.get', fake.get):  # fake.get is a test double defined elsewhere
        definition = find_definition('testword')
[1] https://docs.python.org/3/library/unittest.mock.html


In Racket (and probably other Schemes) you can do

  (define foo (make-parameter "some value you want by default"))

  (define (do-foo) (do-something (foo)))

  (do-foo) ; uses the regular foo

  (parameterize ([foo "my injected foo"]) (do-foo))

And it all just works as you would expect. `foo` gets automatically reset to the prior value when it leaves the scope of the current parameterize block. You can also nest them safely, or use them in macros, and IIRC even inside threads.


I prefer Service Locator to DI; Martin Fowler talked about it a long time ago - not sure if he's changed his view. https://martinfowler.com/articles/injection.html


I don’t quite see the benefits of Clojure’s dynamic binding over with-redefs for tests, as demonstrated here. Personally, I have only used Clojure’s dynamic binding very rarely - mostly in production scenarios to override some default value from a library, and not for test cases.


>Some examples of languages that use dynamic scoping by default are APL, Bash, Latex and Emacs Lisp.

bash isn't dynamically scoped by default; it has only global scope by default. You have to use "local" to dynamically scope a variable binding.


That's like saying classic Lisp isn't dynamically scoped by default; you have to use let instead of just setq.


No. When I set the value of a symbol in a dynamically scoped by default language, it only affects the closest binding for that symbol.

When I assign in bash without "local", it affects global scope. There is no way to do the corresponding behavior of "setq" in bash.


It so happens that I actually know what I'm talking about in this area, with regard to both languages/families:

  #!/bin/bash

  v=xyz

  f1()
  {
     printf "v = %s\n" $v
     v=clobber
  }

  f2()
  {
     local v="abc"
     f1
     f1
  }

  f2
  f1
  f1
Output:

  v = abc
  v = clobber
  v = xyz
  v = clobber

Almost line-for-line analogous Common Lisp program, mapping the variable definition to defparameter, local to let, and plain assignment to setq:

  (defparameter v 'xyz)

  (defun f1 ()
    (format t "v = ~a~%" v)
    (setq v 'clobber))


  (defun f2 ()
    (let ((v 'abc))
      (f1)
      (f1)))

  (f2)
  (f1)
  (f1)

Output (from clisp):

  v = ABC
  v = CLOBBER
  v = XYZ
  v = CLOBBER


That seems to be the case. My original statement was wrong then. My mistake - for some reason I thought the bash behavior was different.


Scala implicits handle this really nicely



