Compile-Time DI vs. Run-Time DI (dimes.github.io)
47 points by dimes on April 30, 2019 | hide | past | favorite | 36 comments



I don't understand why you need a framework for dependency injection in Go. I've written Go for years and I just use the `main` package to instantiate and wire up everything. It's really straightforward, easy to debug, and there's nothing to generate. It's all type checked at compile time.

I'm not trying to rain on the author's parade. I genuinely don't understand the benefit—especially in relation to the complexity of adding an extra dependency and layer to my application.


The benefit of a dependency injection framework is that you don't have to figure out the order of the dependency tree. If you do this work manually, then you have to make sure to instantiate things in a particular order so that dependencies are initialized before the classes that depend on them. This can be a big pain if you have a large number of classes and start refactoring things.

I can understand some hesitation about using a dependency injection framework. Some of them can be very hard to reason about (especially the run time frameworks). However, they don't have to be that hard to reason about. My personal favorite is a compile time dependency injection framework for Scala called MacWire[1]. In Scala, you might do manual dependency injection like this:

  val instantiated = new Class(dependency1, dependency2)
With Macwire you could write the same thing as:

  val instantiated = wire[Class]
The wire macro above automatically looks up dependencies by type and creates an instance of Class with those dependencies as arguments. If you combine that macro with Scala's built-in support for laziness, then you don't have to worry about initialization order either. This gives you something that is fairly easy to reason about (it is basically what you were doing by hand before) and will sort out the necessary dependencies for you (and error out if they are missing).

[1] https://github.com/adamw/macwire


> The benefit of a dependency injection framework is that you don't have to figure out the order of the dependency tree.

What! This is essential knowledge. Without it, how can you possibly build a coherent mental model of a program?! I’ve never heard this justification before, it’s bonkers.


Sorry, perhaps I should have been a little clearer. What I meant is that you don't need to figure out the order in which you have to construct the dependency tree. The order of the tree itself (as in what depends on what) is something you still need to specify and should know.

EDIT - You have a tree of dependencies. This is something you care about. In order to create that tree in code, you have to convert it to a linear series of statements. This conversion is something you don't really care about as long as you end up with the tree you want. Dependency injection frameworks should make it so you can declaratively specify that tree without having to worry about how it is actually constructed.


This doesn't really make sense to me. The dep graph is a second-order thing, a conceptual model that's emergent from the parameters to the imperative constructors. It's not a first-order thing that we manipulate directly. There's no way to even represent a graph in a text file without flattening it to some order, anyway.

The point being made is that just writing the construction statements in the main method is exactly "declaratively specify[ing] that tree", and that "having to worry about how it is actually constructed" is incredibly valuable knowledge to maintainers, because it allows them to see explicitly how components are constructed, and how they interact with each other. It's not something you want to hide with a framework, it's something you want to lift up, front-and-center, so everyone can see it.


Sounds pretty reasonable to me. Maybe that exposes how relatively junior I am (I'm not OP), but the codebase I work on is sufficiently large that if I had tried to hand-write the dep tree before starting work, I wouldn't have gotten any work done at all.


Wild. I can’t do any meaningful work on a codebase until I’ve mapped out the component graph.


How many active contributors and lines of code do your projects have?

I’ve found the advantages of DI frameworks (besides encouraging and enabling more unit tests) to be more valuable for bigger and messier projects.


I find precisely the opposite, that as a project gains (and loses, and gains again) maintainers, the indirection and mental overhead that DI frameworks necessarily bring to the table vastly outweigh whatever perceived benefit there was in introducing them.

Conversely, a declarative, step-wise func main, with components manually constructed and passed to each other as dependencies where necessary, is sometimes tedious, but never confusing, and always refreshingly easy to maintain, no matter how much time you've spent with the project.


I've literally no idea what the second paragraph means but I suspect it's quite important. Could you point to an example of this being done? Thanks.


This sounds like the main methods I write (in Java, also not using a DI framework). Like this:

  var database = new DatabaseFacade(config.get("db.host"), config.get("db.port", Integer::parseInt));
  var users = new UserRepository(database);
  var catalogue = new ProductRepository(database);
  var cart = new ShoppingCartController(users, catalogue);
It's imperative code, of course, but it's sort of declarative in that all the code is doing is wiring things up. It's step-wise in that it does one simple thing after another; it might be quite long, as there are a lot of components to create, but it can be understood locally. Components are manually constructed, by calling constructors, rather than via reflection done by the framework. Components are passed to each other as parameters to establish dependencies, rather than there being some sort of rule-driven lookup, as a framework would do. It is indeed pretty tedious to read, as it's just lots of constructor calls, with no thrilling action. But it's so simple it's never confusing (and you get to use the full power of the IDE to navigate it, jumping to definitions and uses etc). And, as such, easy to maintain. Definitely refreshingly so if you've come from Spring or Guice.


Would you consider this viable in a monolith, though?

The primary application I work on has hundreds of classes that get dependencies injected through DI. Many can be and are singletons, but many - including some very commonly used ones - cannot.

If I manually wired dependencies it would add thousands of lines of code, and there are some where if I changed the constructor parameters I would have to modify 1000+ call sites.

And this isn't even a particularly massive or complex application.


The largest application where we use this pattern has 131737 lines of code (according to cloc), and four 'main' classes which do dependency injection, which create roughly 155, 145, 111, and 95 components. Those files total 8949 lines of code. They're doing more than just injection, though, there's quite a bit of code for other kinds of setup and configuration.

Remember that this setup code is just code, so if you find that adding a constructor parameter means modifying a thousand call sites, you should probably refactor it a bit first, so that it doesn't.

That being said, this approach to DI does take some tedious manual work to maintain. But by paying that cost, you get to take a magical DI framework out of the picture entirely.


Oh, sorry. twic basically got what I meant. I'd expand the example a bit to demonstrate how to use different implementations of things:

    IDatabase db;
    if (devMode) {
        db = new InMemDatabase();
    } else {
        db = new PostgresDatabase(dsn);
    }
    var users = new UserRepository(db);
    var catalogue = new ProductRepository(db);
    // ...
    var cart = new ShoppingCartController(users, catalogue, logger, metrics, ...);


Not OP, but if you're looking for something to search for, I suspect they're describing "Pure DI" or using a "Composition Root".

https://blog.ploeh.dk/2014/06/10/pure-di/

https://stackoverflow.com/questions/6277771/what-is-a-compos...


I've found containers are helpful when you want to replace one or two specific pieces in the middle of the dependency tree for testing - let's say a mock database, for example.

On the other hand, I've also found myself avoiding writing code which requires deep mocks like that in order to test. Functional core, imperative shell helps with that.


What this blog post does not address is the joy of configuring DI with XML files /s

Isn't it bizarre how software developers have taken runtime DI to such extremes that it was considered a good idea to configure services in XML or YAML -- basically another language, that has to be parsed, can be malformed, etc?


I never understood this. DI is for deciding program behavior. That should be a user choice. I don't understand how end users are supposed to (1) write XML files instead of providing simple human-readable CLI flags, or (2) know the program internals and be able to type in fully qualified class names.

$ ./my-server -reporting=email

vs

setting com.mycompany.package.subpackage.di.providers.abstract.ReportingProvider to com.mycompany.package.subpackage.di.providers.impl.EmailReportingProviderImpl.

The whole concept of DI is wrong. Dependencies should be instantiated at the top level (the main function) and explicitly passed down the call graph as interface instances.

DI breaks the call graph with (configurable) side effects. If you pass down dependencies explicitly you don't even need mocking frameworks for testing.


You are just refuting your own straw man.

Passing dependencies explicitly also qualifies as DI, and it is for the benefit of the developer. Maybe you are only talking about DI frameworks or containers, in which case I share some skepticism, but in general the concept of DI is pretty sound.


The creators of Guice mention that the ability to configure the dependency graph in Java was a big advantage it offered over older XML-based frameworks like Spring.


Spring has allowed for Java configuration for some time.


I've never configured Spring DI via XML and I've been a Spring dev for at least 2 years. It's always been via Java annotations and config classes.


Early Java took a stupid decision to deliberately restrict expressiveness in the name of being "blue collar" (ironically enough Go largely retreads the same mistake). The result was a language so cumbersome that people would do anything to avoid having to write actual Java. Stupid policies at large companies that subject "configuration" changes to much less scrutiny than "code" changes compounded the problem.

The decision to use XML was actually quite reasonable for many developers given the constraints at the time. Thankfully Java has mostly seen the light these days, adding generics, first-class functions, and good-enough multiple inheritance (though it still has very restricted and cumbersome metaprogramming - classes are not first-class, and the lack of HKT makes it very difficult to work with functions generically since you can't abstract over arity), and people are gradually realizing that this different environment warrants different choices. (Companies still have terrible policies about code versus "config" though).


> Isn't it bizarre how software developers have taken runtime DI to such extremes that it was considered a good idea to configure services in XML or YAML -- basically another language, that has to be parsed, can be malformed, etc?

Well, then how do you do it? What do you do if your client / boss tells you "I want to configure the plug-ins & classes available in the software through some file editing, without having to recompile anything" and your codebase is in C++ / Java / Go / D / Rust ...


> What do you do if your client / boss tells you "I want to configure the plug-ins & classes available in the software through some file editing, without having to recompile anything" and your codebase is in C++ / Java / Go / D / Rust ...

You compile all the (tested) combinations as static executables and shove them all into the release package.


Yes, in that case you probably need to parse a config file.

But you might also encounter DI config files for specific environments, like development where you inject MockMailer, TrivialLogger and InMemoryDatabase instead of RealLifeMailer, CloudLogger and ActualDatabase.


Your point about breaking things as early in the process as possible really is the winner.

Catch things early and often. Tooling that shifts things further left in the pipeline is my go-to default.


Do you think it is rust that popularized this?


It's been standard programming dogma for decades, e.g. https://developers.slashdot.org/story/03/10/21/0141215/softw...


Why Rust? Strongly typed languages had this capability for a long time.


I believe what he means is that Rust takes this a step further by ensuring memory safety at compile time.


Rust is merely the latest in a long line of efforts in that direction. It may have located a particularly nice point in the cost/benefit space, and the borrow semantics are, I believe, novel, but it's well in line with many efforts to turn as many things as possible into compile-time errors: a lot of C++ stuff, Ada, Eiffel, Haskell, a variety of proof methods, and so on. And even the things I list are themselves exemplars of entire streams of thought on the idea rather than a complete list, of course.

(This is not a criticism of Rust. It is not a criticism of a work to place it correctly in its historical context. Very few things consist entirely of novel ideas, and very few of those are any good, if any.)


Agreed. Rust is another step in the right direction but I don't think it invented a new paradigm.


DI is Dependency Injection.


Java has a fairly nice compile-time DI framework in Dagger. However, the ecosystem feels like it's missing compile-time versions of the rest of the web stack: compile-time routing and compile-time template compilation, for example. Does anyone have any suggestions to fill out this stack?


It’s not perfect by any means, but there is Ktor as an example. Use kotlinx.html for your templates and everything is defined at compile time.



