I've never been convinced by DI frameworks. I've always enjoyed the fact that the Go ecosystem leans away from them and I would hope things stay that way. In my experience they just obfuscate what should be a straightforward, explicit process of setting up your app in main. The less magic happening there the better.
I have never voluntarily used a DI framework in any language, and probably never will. There's zero downside to doing it manually. It's explicit, incredibly easy, and you can easily trace backward to see how the program is constructed.
On the same note, I could never really get on with Aspect-Oriented Programming. Coming from the Java world, I got tired of seeing annotations everywhere and trying to understand what magic was going on. I like simple and straightforward languages.
You say this now, but I've had to work on 5000-line (literally, 5k lines) classes in legacy code with a huge number of dependencies injected manually, because the class was doing initialization for a huge number of components. You could argue that this was bad design, but I'm sure it didn't start out like that; it just accumulated cruft over time and was too dangerous to refactor (if prod went down, it was at least a few hundred $k down the drain). DI makes it much simpler to wire things like this up without making mistakes like accidentally initializing the wrong class implementation. In polymorphism-heavy situations it reduces cognitive complexity a lot.
Dependency Injection frameworks definitely seem like one of those technologies that primarily exists to allow people to get away with bad designs, rather than making it easier to create good designs.
This is pretty much what I don't get. DI by construction is trivial and has all of the benefits of a DI framework. DI frameworks just let you move some things around, which is mostly confusing.
As someone who's liked using DI frameworks, I'm curious: if you do it manually, as your codebase gets large, doesn't it get harder to "plumb" a new object that needs to get used somewhere at a deep level? This seems like it'd get more unwieldy when refactoring.
I've never worked in a large codebase that did this manually so I'm not sure what it looks like. The large codebases I've seen that don't use DI have used something like a service locator, singletons, or constructed everything where it was needed (and used extensive mocking framework functionality for testing).
Not just that, but I don't understand why users would be so eager to kill type safety in their applications. If I'm understanding correctly, these injection methods are totally type-unsafe, and construction errors are deferred until runtime?
I currently have to deal with DI at work. It's a pain. When everything works, it's nice to be able to just ask for something and have it with no extra work, but good luck when things go wrong. If the DI library throws an exception, good luck figuring out what you actually need to do to resolve it. Often it's because there's a cycle in the dependency graph, but the exceptions never give you any information about what the graph looks like. It also makes it stupidly hard to track down the sources of bugs. Figuring out what classes are being used to satisfy the dozen interfaces your class asks for requires either inspecting variables in the debugger or an annoying amount of digging.
I've used dependency injection heavily in Java in previous jobs, and, later, spent a few years doing Golang at another job. I never missed it.
What does dependency injection give you that a simple combination of Singletons, Constructors, and Factories doesn't? I feel like the only thing you get is the ability to combine multiple independent dependency trees without having to make a sane structure. Kind of like, what redux does for state in JavaScript. It make things easier because it takes away some responsibility, but, in my eyes, that responsibility is an important one, and ignoring it makes code very hard to reason about.
This is actually something I've been trying to talk about with a new hire at my job. We are running a nodejs stack, and he comes from a strict OOP C# background.
I've never worked in a strict OOP paradigm, coming from Python and JavaScript mostly myself. I've been puzzled by his insistence that DI is important. He wants us to implement DI containers and take a "code to interfaces, not implementations" approach. To me there's not really any value in this approach in JavaScript.
I don't know Go, but I get the impression this stuff is also not as valuable there.
My question is always "what does this get me and what does it cost me?". Outside of strict OOP paradigms like C# and Java, it's pretty unclear what the benefits are to me.
It's not. It saves a few lines of code, with the tradeoff that you are handing some critical functionality--the wiring of dependencies--over to a black box. When things don't behave as you expect (which you might not initially notice), there is no code path to follow, you have to figure out what kind of magic is happening in the black box. It's just not worth the one-time cost of writing a few extra lines of code. This is also congruent with the aesthetic of Go in general.
I've found DI pretty useful when building frameworks, it gives you a sort of generic plugin system. Plus you can hide it from consumers if that makes sense. If you're just building applications I'm not sure it's worthwhile.
The argument I always hear is DI makes your code more testable.
Which is probably true in some cases, but I find most of my code is pretty testable without it. Haven't run into any tests I've wanted to build where I couldn't.
I think that's a side-effect of how code must be written to work with dependency injection, to make the injection actually possible, IMO it's not the DI itself that makes it testable.
How do you convince the when-to-fire logic to act on a mock if the torpedo implementation isn't provided from outside?
I'm wondering if I have the terminology understood differently to other people, because to me, DI == IoC. DI frameworks are built on top of the concept that dependencies will be injected, but doing it explicitly in your start-up code is the same thing to me.
This is the same conversation I had with the new hire I mentioned. And as the other comment here mentioned, it's JavaScript. You simply overwrite the object method with the mock at runtime of the test, then restore it after the test finishes running.
Most JS testing frameworks do this under the hood I believe, without any change in how you write your code. It's a fundamental difference in how JavaScript handles things versus something stricter like C#.
It's a dangerous footgun if you use it carelessly, of course. But a useful language feature when applied carefully.
Right. Facepalm. I kind of missed the bit where we shifted paradigms.
My personal preference would still be to be explicit, to eschew secret mutable state even in the tests, but that's just me. My personal baggage makes phrases like "then restore it after the test finishes" bring me out in a cold sweat ;)
I guess I sympathise with your colleague, but I agree it's not idiomatic, and not being idiomatic is not helpful in a team.
In languages like JS/Python, you can mock imports. Actually, sometimes you can mock things defined in the same file if you set things up right.
Basically, the logic not being inlined into the function is enough to provide this behavior in certain languages.
This is not true for other languages, which is why DI frameworks tend to be more useful there.
In go specifically, I just always throw the dependencies behind an interface and shove them into a struct. Then you just create a struct of mocks for your testing.
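A minimal sketch of that pattern (all names here are hypothetical, just for illustration): the service takes a struct of interface-typed dependencies, and tests construct it with fakes.

```go
package main

import "fmt"

// Mailer is the dependency, hidden behind an interface.
type Mailer interface {
	Send(to, body string) error
}

// Deps bundles everything the service needs; tests swap in fakes.
type Deps struct {
	Mail Mailer
}

// Service receives its dependencies explicitly at construction.
type Service struct {
	deps Deps
}

func NewService(d Deps) *Service { return &Service{deps: d} }

func (s *Service) Welcome(user string) error {
	return s.deps.Mail.Send(user, "welcome!")
}

// mockMailer records recipients instead of sending anything.
type mockMailer struct{ sent []string }

func (m *mockMailer) Send(to, body string) error {
	m.sent = append(m.sent, to)
	return nil
}

func main() {
	mock := &mockMailer{}
	svc := NewService(Deps{Mail: mock})
	_ = svc.Welcome("alice")
	fmt.Println(mock.sent) // prints "[alice]"
}
```

No framework involved: the wiring is a couple of struct literals in main, and the test can inspect exactly what the mock saw.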
JS allows you to override global objects, pretty much nothing is constant or immutable in JS. You can (more or less) replace arbitrary functions in any place, at any time.
> What does dependency injection give you that a simple combination of Singletons, Constructors, and Factories doesn't?
Easy refactoring of an interface/constructor? I've been using Go for a while, and I decided to follow the advice of not using a DI framework for my project. Every time I refactor a constructor, it is a fun hunt to make the same changes everywhere I'm using that constructor.
That's exactly what a Factory is for; you defer construction and pass around an object that knows how to create an instance. Then you're hunting at the root of your call tree to swap out a Factory instead of throughout your codebase to change a constructor call.
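In Go terms, the factory can just be a function value (names below are made up for the sketch). Deep code depends on the factory type, so changing the concrete constructor only touches the root where the factory is built:

```go
package main

import "fmt"

type Store interface {
	Get(key string) string
}

type memStore struct{ data map[string]string }

func (m *memStore) Get(k string) string { return m.data[k] }

// StoreFactory defers construction: callers receive the factory,
// not a concrete constructor call scattered through the codebase.
type StoreFactory func() Store

func NewMemStoreFactory(seed map[string]string) StoreFactory {
	return func() Store { return &memStore{data: seed} }
}

// Deep in the call tree, code only knows about the factory type.
func handler(newStore StoreFactory) string {
	return newStore().Get("greeting")
}

func main() {
	factory := NewMemStoreFactory(map[string]string{"greeting": "hello"})
	fmt.Println(handler(factory)) // prints "hello"
}
```

Swapping `memStore` for some other implementation means changing one line in main, not every call site.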
This is also what DI amounts to in practice. Frameworks abstract over it in the name of DRY, but at the same time introduce all the downsides of frameworks.
That's true. However, the decoupling that occurs when refactoring one and not the other creates two different code structures, often with different levels of abstraction. I saw the same thing happening with a combination of React and Redux, and it made debugging weird behaviors really, really hard.
I came here to ask a similar question. I’ve spent most of my career writing C# where DI is prevalent. Over the Christmas holiday I picked up golang a bit. When I went to learn about DI for golang it seemed very counter to the principles of the language so I simply moved on. What am I missing out on?
Do you distinguish between dependency injection (frameworks) and general inversion of control?
Because what I think of as dependency injection is extremely common in golang - interfaces are satisfied structurally rather than nominally, so consumers can specify interfaces that anyone is free to satisfy. That the consumer owns the interface (packages shouldn't export interfaces for the concretes they implement) is a pretty core tenet, and goes hand in hand with good dependency injection (IoC).
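A tiny sketch of what "the consumer owns the interface" looks like (hypothetical names): the consuming code declares only what it needs, and any provider satisfies it without ever mentioning the interface.

```go
package main

import "fmt"

// The consuming code declares the interface it needs...
type Logger interface {
	Log(msg string)
}

func Process(l Logger) {
	l.Log("processing")
}

// ...and any type satisfies it structurally: no "implements"
// clause, and the provider never imports the consumer.
type memLogger struct{ msgs []string }

func (m *memLogger) Log(msg string) { m.msgs = append(m.msgs, msg) }

func main() {
	m := &memLogger{}
	Process(m) // memLogger satisfies Logger purely by method set
	fmt.Println(m.msgs)
}
```

The provider package can be written years before or after the consumer; they only have to agree on the method set.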
DI frameworks are extremely rare (and IME even more painful to use), because injecting your dependencies explicitly is straightforward. But injected they should be.
Because golang doesn't have constructors, you're forced to implement functions that mimic them yourself. And the language still doesn't prevent you from directly instantiating the struct yourself, meaning it is always possible to bypass the "constructor functions". This is quite terrible and opens up your code to errors.
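The point about bypassing "constructor functions" can be shown in a few lines (hypothetical `Server` type, just a sketch):

```go
package main

import (
	"errors"
	"fmt"
)

type Server struct {
	addr string
}

// NewServer mimics a constructor: it validates its inputs
// and establishes the struct's invariants.
func NewServer(addr string) (*Server, error) {
	if addr == "" {
		return nil, errors.New("addr must not be empty")
	}
	return &Server{addr: addr}, nil
}

func main() {
	s, err := NewServer("")
	fmt.Println(s, err) // nil plus an error: the "constructor" caught it

	// But nothing stops code in the same package (or, via the zero
	// value, any package) from bypassing NewServer entirely:
	bad := &Server{} // zero-value addr, invariant silently broken
	fmt.Println(bad.addr == "")
}
```

Unexported fields limit the damage across package boundaries, but the zero value is always constructible, so invariants can never be fully enforced.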
Furthermore, DI frameworks also usually have lifecycle management, which is quite handy in many cases.
Fine, but that applies to even data structures and other domain types, which even in a more classically-OO language you wouldn’t pipe through a DI framework. I think DI as a code organization concept is great, but if I have to write a factory for every struct anyway then it’s not any harder to simply use those as a rule and define module boundaries sensibly as a way to determine which code should be using new vs the factory methods.
You have a good point wrt lifecycle management, but I feel like that’s actually a separate class of problem.
Agree. I really can only imagine this being useful for an organization that publishes dozens of different libraries that a "product team" needs to use such as logging, database access, secret managers, etc. At that point, speaking the common language of a DI framework might be easier.
For example, rather than having to read the GoDoc for every single constructor I'm supposed to use, I can just see how to use my DI framework to set up this client library and move on with my life.
That being said, DI always strikes me as a layer of abstraction that I don't need in my day to day. But I don't work somewhere at Google/Uber scale so YMMV.
There's nothing special about DI frameworks there. If a client library has some sane defaults preferred by the devs, then call a constructor that sets the defaults for you. You don't need DI for that.
IME at tech megacorps, DI just obfuscates and distracts, shifting the focus onto understanding the accidental complexity it introduces instead of dealing with the intrinsic complexity of the dependency relationships.
I mostly agree. I think there are two cases where using a DI framework makes sense:
1. Complex lifecycle management, maybe. For example say you need to restart a set of go routines after loading a new configuration, without killing the process. I’m on the fence about this one.
2. When there is a combinatorially large number of components that need to be combined arbitrarily at runtime. The only examples I’ve seen for this in the wild are games and simulations using an ECS (entity-component-system) and queries to discover components that fit a certain criteria.
Fx author (in part) here - replied in a separate comment. I mostly agree with you, but DI offers some ecosystem-wide benefits that may be useful in some environments.
I want to say thanks to your team for open sourcing this library so the community can discuss it. It's ambitious and, to me, feels like a right-sized approach to the problem. For all the heartache around DI, it seems like it's different strokes for different folks, but that doesn't mean we should all be taking a collective shit on the engineers who are trying their best to provide a way to organize service complexity strategically. Well done.
Just come out and say it: DI is an anti-pattern. No implementation of it that I'd ever use was doable without a good config file schema, and at that point I have to ask: why not just have a declarative build system in the first place as the modern generation of Devops tools does? DI just burdens and litters code unnecessarily with awareness of build files.
You, as many people here, are again mistaking dependency injection with dependency injection containers. You don't need containers for dependency injection, most codebases can just wire everything by hand without any problem.
What's an example of what you're talking about? Because the delegator pattern is not DI, and that's the only thing coming to mind that you might mean.
Sorta. The pattern is having a standard method of injecting dependencies. Having a framework just saves you from doing the actual injecting manually.
This is easy to see in frameworks like dagger, which compile time generate the boilerplate you could manually do.
And if everything is a singleton with no lifetime management, the framework doesn't buy you too much. The pattern, though, is kind of nice. I rarely have to question how a dependent section of code is linked to the one I'm at. (Contrast to python, where I don't know what is going to happen if I add that import...)
"Dependency Injection" should be just shorthand jargon for "pass a pointer/reference to the dependency into the constructor".
Unfortunately, thanks to enterprisey-OO zealotry, it's become a terrible monstrosity of frameworks, obfuscation-by-configurable-injection, and other terrible practices.
Every single application I've worked on where DI (in practice, not theory) is in use has been fragile, hard to debug, hard to test, and hard to maintain.
And this particular framework looks like it has all the markings of making any go application worse.
> Every single application I've worked on where DI (in practice, not theory) is in use has been fragile, hard to debug,..
From what I have learned working at various jobs it seems to be the feature and not bug in system design. When things break it is considered a problem worthy of attention of those numerous architecture astronauts. It would have just been college project if things work without any fuss.
We at Khan Academy are planning a blog post soon about how we've been approaching dependency injection in Go. I touched upon this in my GopherCon talk last year[1]. I like the system because it's pretty straightforward and builds upon Go's context with a bit of reflection:
Not a fan of that... Feels like an abuse of Context. I admit, I thought about this a long time ago, Context is passed all the way down the stack, it's just so easy to abuse. But it's just one of those things that completely hides this away from anyone who glances at your code.
It makes it incredibly easy to create "God structs" that can do everything. They have access to every service in your application when really you should be limiting the scope of them.
Unsatisfied dependencies are now a runtime issue, not a build-time issue, which was one of my biggest gripes when working with IoC containers in .NET.
Not to mention that Context, originally built as a cancellation token, is now doubling as a service locator? It feels like something completely adjacent to what a Context is.
This is the exact sort of reflection magic code that a lot of Go developers dislike. And there's nothing that this gives you that passing dependencies as parameters can't. If you've got too many to pass then you're doing too much and you're not separating concerns.
Passing context everywhere is quite ironic in a language that is supposedly "non-colored" when it comes to being able to call any function concurrently. In practice however, passing context becomes that function's color.
Project Loom in Java will solve this problem in a far superior manner.
I'd like someone who knows more about software engineering than I do to tell me why I'd use this over some big globals-but-not-really App struct that holds all of the various dependencies of my app that I can just pass into the different parts of it so they can all access what they need.
If those struct members are typed to interfaces and not types, then it's just as testable, too, because you can drop in mocks as required.
What am I missing? This just seems like a way of obfuscating the fact that, in any modern program, we're going to need 3-20 "global variables" that aren't actually global global (but in practice are singletons in the process).
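The "big App struct" approach the question describes might look like this sketch (all names hypothetical): interface-typed fields, built once in main, passed to whatever needs it, and trivially mockable.

```go
package main

import "fmt"

// Hypothetical dependencies, typed as interfaces so tests can swap mocks.
type DB interface{ Query(q string) string }
type Cache interface {
	Get(k string) (string, bool)
}

// App is the "globals-but-not-really" bag: built once in main,
// then handed to the parts of the program that need it.
type App struct {
	DB    DB
	Cache Cache
}

type fakeDB struct{}

func (fakeDB) Query(q string) string { return "row for " + q }

type fakeCache struct{}

func (fakeCache) Get(k string) (string, bool) { return "", false }

func lookup(app *App, key string) string {
	if v, ok := app.Cache.Get(key); ok {
		return v
	}
	return app.DB.Query(key)
}

func main() {
	app := &App{DB: fakeDB{}, Cache: fakeCache{}}
	fmt.Println(lookup(app, "users/1")) // prints "row for users/1"
}
```

The replies below raise the main caveat: a function taking `*App` over-declares its dependencies, since you can't tell which fields it actually touches.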
Took me a while to buy into it, because as you said, it doesn't seem any better than hand-written code.
But after a few years I noticed fx was being adopted by more and more teams, and it reduces cross-project contribution friction, some teams started building conventions around it, and more importantly, iterating on these conventions, etc. It's become just really handy for us.
This is many many small teams (5 people per team, more than 4k devs around many offices and countries).
Same. It’s nice to publish an fx module that implements a middleware that plugs into all of the 12000+ microservices we run. One example is adding a debug/flame graph endpoint to your service- it’s a one line import into your app.
If the complaint about dependency injection is that it’s magic, brittle, and difficult to debug - globals mutation via init() is much worse. In FX codebases this is heavily discouraged. The concepts of constructors and lifecycle hooks in an FX graph are not that hard to grok; you can pretty readily figure out what’s going on if you want to.
The “import _” approach you recommend is idiomatic Go but it is inherently a global mutable state approach and not a DI approach.
I suppose elegance is in the eye of the beholder. FX is very elegant in my opinion: just specify your constructors and your needs, let the computer do the topo sort. But you’re right it is not idiomatic Go. Idiomatic Go is always to maximize the volume of rote, low-information-density code to accomplish any given task. Any time you are being clever and automating grunt work you are certainly violating the spirit of Go.
It's not terrible to have one big struct, but sometimes it can inhibit the reuse of components in more than one system.
When you call a function that takes this struct as an argument, how do you know which members of the struct really need to be populated? Its dependencies aren't clearly documented. (They are over-specified. It's like a function with many unused arguments.)
If a subsystem only uses some members of the struct and you want to enforce that, maybe you pass in a smaller struct, or pass the struct members as separate arguments.
This is something you'll likely have to do if you want to extract a library for other people to use. Libraries are used in multiple systems, which might or might not have their own big struct.
An interface with many methods can have the same problem. If the function doesn't actually use every method in the interface, do you really need to implement them all?
So then it might be better for the function to declare its own interface with just the methods it uses? But then, all the callers need to be changed if you decide to call another method.
There's no principled solution to predicting what dependencies code might need someday. It's a matter of taste.
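The "function declares its own interface" option from the comment above can be sketched like this (hypothetical names): the function asks only for the one method it calls, so fakes stay one method big.

```go
package main

import "fmt"

// A broad interface that exists elsewhere in the codebase.
type FullRepo interface {
	Load(id int) string
	Save(id int, v string)
	Delete(id int)
}

// The function declares its own narrow interface with only
// the method it actually uses...
type loader interface {
	Load(id int) string
}

func render(r loader, id int) string {
	return "<p>" + r.Load(id) + "</p>"
}

// ...so a test fake implements one method, not three.
// (Anything satisfying FullRepo also satisfies loader.)
type fakeLoader struct{}

func (fakeLoader) Load(id int) string { return fmt.Sprintf("item %d", id) }

func main() {
	fmt.Println(render(fakeLoader{}, 7)) // prints "<p>item 7</p>"
}
```

The tradeoff is exactly as stated: if `render` later needs `Save`, the local interface grows and every caller's type must still satisfy it.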
An interface with many methods is already a bad design. Limiting it to a handful of methods is way easier to maintain. It's fine to return an object that implements many interfaces, but you really don't need to use them all on the input side.
It's a scale thing. A bag of globals works, until the scale of what you have gets too big, and the big bag of globals becomes a 500 variable super object, and then your build graph gets bottlenecked on the bag of globals and a bunch of other pain.
If your project scale is small, you don't need DI frameworks, but you should still use IoC so you don't have implicit singleton access making your tests flaky and stupid.
That's a very fair perspective, and build-time verification of your dependency graph is great. That said, fx's approach allows for a more dynamic dependency graph. (For example, fx.Replace[0].)
The documentation for Fx, along with nearly all the applications I saw internally, use the dependency injection container only in main - once the application starts successfully, there's no more interaction with the container. For Uber at the time, this struck a useful balance between safety and the difficulty of distributing yet another versioned code gen tool to thousands of repositories.
Full disclosure: I'm an Uber employee using fx daily, as well as in hobby projects. This post reflects my personal opinion and is not related to Uber.
It really does work well in practice:
- It's pretty simple and lightweight, so it's blazingly fast even though it happens at runtime (in contrast to, e.g., the Java DI frameworks I've seen)
- The module concept is extremely powerful to make modules that plug in with zero effort
- Modules have a lot of autonomy (unlike wire, as I understand it - I have no personal experience with it), like being able to do things at various stages of the application lifecycle (startup, shutdown hooks), collaboratively populate dependency groups (e.g. implementing handlers or middlewares independently and injecting them separately using the grouping mechanism), optional dependencies
Once you have a nice standard library of common modules (this is really crucial for it to work well IMO), it’s a huge speedup to make a high-quality service. My biggest issue with it is that it doesn’t match structs with interfaces, so you effectively end up depending on structs/pointers or returning interfaces.
TL;DR: I played a fairly large role in writing this package, and I wouldn’t use it outside Uber’s environment at that time (thousands of engineers, thousands of microservices, tens of thousands of Go repositories).
At the time we wrote Fx, Uber had ~1500 engineers writing Go. The company had 25 million lines of Go, spread across more than a thousand microservices and an unknown number of shared libraries (likely hundreds, perhaps as many as a thousand). Nearly every project was in a separate git repository, with effectively no tools to make large cross-repository refactorings. As an engineering organization, we struggled to make relatively simple cross-cutting changes.
For example, we spent years rolling out distributed tracing. The actual change required was simple: upgrade all your dependencies to something recent-ish, add the tracing library, construct a tracer in main (or something main-adjacent), use the tracer to construct an RPC interceptor, and add the interceptor to your API server. All in, we're talking about ~20 lines of code and a dependency upgrade. It took multiple TPMs, spreadsheets, quarterly planning, and several high-level edicts to get this mostly done.
Why were changes so painful? At root, because nobody cared much about most of the Go repositories. From the perspective of the teams who nominally owned the code (often after several reorganizations over the years), the code worked fine and solved the business purpose - why invest time in changing anything? On the ground, changes were painful. Go's simplicity makes semantic versioning _very_ restrictive, so trying to pull in a year's worth of dependency updates often produced a variety of breakages. (Keep in mind that many of these libraries were used only in a handful of projects and weren't particularly carefully designed or maintained.) Taking on all this pain to change 20 lines of code in main was a difficult sell.
Fx codified some basic back-compat best practices (if you're paranoid) - mostly param and result structs, so constructors have more flexibility to add inputs and outputs. Fx also made most of these problems a negotiation directly between library authors, leaving the microservice team out of the picture: the "standard Uber stuff" package provides a distributed tracer, and the "RPC stuff" package takes an _optional_ tracer and installs the appropriate interceptor. No changes to main or application logic necessary, just a dependency update (which is hopefully safer, since more libraries are forced to follow better semver practices). The reflection-based wiring came with lots of magic and downsides, but the tradeoff was worth it across the engineering organization - it made us _overall_ more able to change our own systems.
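The param-struct convention described here can be sketched without the framework itself (hypothetical names; fx's real versions embed `fx.In`/`fx.Out`): the constructor takes a single struct, so new optional inputs can be added later without breaking existing call sites.

```go
package main

import "fmt"

type Tracer interface{ Start(op string) }

type noopTracer struct{}

func (noopTracer) Start(op string) {}

// Params is a "parameter struct": because the constructor takes one
// struct, fields can be added later without breaking callers.
type Params struct {
	Name   string
	Tracer Tracer // optional; nil means tracing is off
}

type Client struct {
	name   string
	tracer Tracer
}

func NewClient(p Params) *Client {
	t := p.Tracer
	if t == nil {
		t = noopTracer{} // default for callers unaware of tracing
	}
	return &Client{name: p.Name, tracer: t}
}

func main() {
	// Old call sites keep compiling even after Tracer was added:
	c := NewClient(Params{Name: "rides"})
	c.tracer.Start("request")
	fmt.Println(c.name) // prints "rides"
}
```

This is the back-compat property being described: library authors add fields, and downstream main functions don't have to change.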
Bluntly, IMO Fx made individual codebases less understandable (especially codebases carefully maintained by engineers who like Go). It made the whole company's code more maintainable. The bulk of the Go engineers at the company agreed (the developer experience org tracked NPS, which went from double-digit negative to +40ish).
In the years since Fx, I left Uber and the company has moved most of their Go to a monorepo. I'm not sure what the current cost/benefit tradeoff of this approach is.
Dependency injection really took root in Java. The reason is actually a design flaw: the lack of duck typing combined with the ability to specify concrete types in method signatures.
Go doesn’t really have this problem so I’m not convinced it needs DI at all.
C++ doesn’t really have much DI mindshare because it has templates (and macros).
I started off decades ago with stuff like Delphi, PHP, and Python, all without DI. Around 2001 I started with C#, and I found properly implemented DI in .NET to be a killer feature.
However I loved it so much that I then wrote my own Node DI package (property and constructor injection). It didn't take long before I abandoned the idea - Node doesn't really fit with DI.
And I feel the same about Go. Some languages (eg C#, Java) work amazingly well with DI. For some others (eg Node, Go) it simply feels wrong. I can't put my finger on why, but reading the sample Go code in the original article makes me feel how I do when (in film) I see a human body with a limb bent at an unnatural angle.
Just to clarify I'm all up for well designed IOC with Node and Go, I'm just unconvinced it should be done with DI.
Maybe it's just my bad luck, but in the projects I've seen from the inside where DI was used, it all turned into a god-awful blob of spaghetti code.
It was OK -- quite good, really -- in small projects... but of course pretty much any organizing principle is workable in small projects. Good organization needs to scale up.
I'm not quite ready to write it off.
For one, those large projects used a framework. Perhaps the lack of friction helped lead to thoughtless injection.
And, of course, no organization scheme can prevent the spaghetti when the project and its leadership are disorganized.
So I'm not quite ready to write it off, but I'm awfully skeptical. And it certainly doesn't seem necessary.
Given that Go modules can depend on private interfaces that specify only the methods they need, dependency injection of the pass-dependencies-to-the-constructor style seems almost built in. Not automated as such, but easy.
This is the way. Make it explicit, obvious, and debuggable. I much prefer to pass my dependencies around, and have a bit of duplicated code to programming in a dynamically typed configuration language and trying to figure out why the eff I’m getting runtime errors.
I think we should also comment on the quality of the framework. I use a home grown approach too. It doesn’t mean I’m not hoping for something to come along and disrupt (for the better) my approach for the “right” abstraction.
That’s what is missing from the conversation: do the abstractions seem justified based on the problems it claims to solve.
Who knows, it could be as influential as jquery was for web development in 2006 (I’m exaggerating of course).
I just find the tone generally dismissive, and that robs viewers of the chance to evaluate the tool within its problem space: DI for Go.
This feels unnecessarily complex. I fell in love with Go after years of dealing with Spring's DI, and I haven't found a situation since where dependency injection would have made my life any easier.
I dunno, maybe it'd be more useful for a more generic framework but I wouldn't use this in any of my code.