A FactoryFactoryFactory in Production (2017) (stevenheidel.medium.com)
106 points by fragmede on July 7, 2023 | 101 comments


I overlapped with the author at LinkedIn. While there, I wrote my first (and to date only) FactoryFactory.

LinkedIn replaced its uses of Spring with a thing called Offspring. Offspring explicitly disavowed being a dependency injection framework, but it did a similar job for us. I rather liked it. Notably, you just wrote Java with it. Invariably, in Offspring, you'd have to write a FooFactory to construct your Foo object and inject it into some other class. By convention, all of the factories ended in Factory.

Well, I had a use case for a runtime class that needed to make a per-request factory to make little objects. So to make my Bars, I needed a BarFactory; and to construct the BarFactory, I needed an Offspring factory, thus BarFactoryFactory. There it was. I felt a little weird after that.

I suspect the EventFactoryFactoryFactory code here was such an Offspring factory being used for dependency injection, but I can't explain why it produced a FactoryFactory.


> LinkedIn replaced its uses of Spring with a thing called Offspring.

I think dependency injection itself is reasonable, but I increasingly wonder why you need a framework for it at all.

Yeah, for certain extremely complex scenarios, such as dynamically loaded plugins which are only known at runtime, you'd need some "host" code which manages the plugins - though even in that case, it will probably be helpful if that management code is part of your application's codebase, so you can easily debug it.

However, that's already the big exception. I'd claim that in the vast, vast majority of projects, DI is only used during development - and at runtime, there is exactly one way the object graph is supposed to be assembled. So if that's the case, why not just make a big "application" class in your codebase and instantiate the objects/assemble the graph yourself? What exactly do you need a framework for?

Another exception are callback situations, most prominently web endpoints. A framework can be a big help here to get the headers right, manage auth, caching, CORS etc through filter chains, avoid a good part of typecasting, redundant declarations, etc.

But even there, Java has now introduced lambdas and records, which make working with callbacks and "dumb structs" far easier. So the future of "web apps" here might as well be some library which lets you register web endpoints with lambdas - instead of some overarching framework which expects that you structure your whole codebase around the fact that your app will serve HTTP on port 8080.
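For what it's worth, the JDK already ships the ingredients for that style. A minimal sketch using the built-in com.sun.net.httpserver package (purely illustrative - not a claim that this is the library the comment has in mind):

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class LambdaEndpoints {
        public static void main(String[] args) throws Exception {
            // Plain JDK HTTP server; each endpoint is registered as a lambda,
            // no framework structuring the rest of the codebase.
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/hello", exchange -> {
                byte[] body = "hello".getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();
        }
    }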


DI at LinkedIn was used for development, for unit test isolation, and to reduce tight coupling.

One big problem with a giant "Application" class is that it means all of the dependencies are laid out there, and their dependencies, and their dependencies, all named and instantiated. But some dependencies are in libraries, and basic detail encapsulation means ... a factory.

Offspring wasn't much of a framework, more of a set of conventions and utilities for building that stuff in plain Java code (with key annotations). In particular, I think the Offspring setup wasn't opinionated about the framework of the rest of the application, although other parts of LinkedIn were (and presumably still are).


> One big problem with a giant "Application" class is that it means all of the dependencies are laid out there, and their dependencies, and their dependencies, all named and instantiated. But some dependencies are in libraries, and basic detail encapsulation means ... a factory.

Indeed they are - and I strongly believe that is a good thing. Because at which other place are all your dependencies laid out and instantiated? In your deployed application code at runtime.

So to understand your application in its entirety (and not just individual components), knowing the whole object graph is essential. And the easiest way to know it is if it's already written down somewhere.

This doesn't have to keep you from enforcing loose coupling and encapsulation, at least for certain definitions of those:

If you want to keep the components encapsulated - i.e. ensure that FooService works with any implementation of BarService - you can still do that: Just ensure the individual components only refer to each other through interfaces and forbid direct dependencies between them. Then in the end, the application class should be the only location in the entire codebase where multiple concrete components interact in the same class (excluding tests). You can even enforce this by putting each components in a separate artefact and only allowing dependencies to an "API" artefact that is distinct from the implementation.
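A rough sketch of that layout, with hypothetical component names - each component sees only interfaces, and the application class is the single place where concrete classes meet:

    // Components depend only on interfaces...
    interface BarService { String bar(); }

    class DefaultBarService implements BarService {
        public String bar() { return "bar"; }
    }

    class FooService {
        private final BarService barService;
        FooService(BarService barService) { this.barService = barService; }
        String foo() { return "foo + " + barService.bar(); }
    }

    // ...and the application class writes the whole object graph down in one place.
    class Application {
        public static void main(String[] args) {
            BarService barService = new DefaultBarService();
            FooService fooService = new FooService(barService);
            System.out.println(fooService.foo());
        }
    }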

However, if you understand loose coupling such that no part of the application should know about the other, and the entire application assembles its components through some magical algorithm, I'd call into question why this is even a desirable property: You still have an object graph, you just go out of your way to hide it. And it's still very important that the object graph is assembled in some very specific way, which is why the algorithm must be coaxed into doing the right thing with qualifier annotations or magic priority numbers.


A FactoryFactory is just a poor man’s currying!


Currying is just poor man's obfuscation :-)

Seriously, I think anytime I have met currying (admittedly only in JS projects) the code could be rewritten to be better without it.


It's a lot more common in e.g. OCaml and Haskell where the syntax makes the partial application a lot easier.


I don't have any inside knowledge, but I'd guess an EventFactory is tied to a specific Event type, so the FactoryFactory is about producing Factories for any such type? I'd never have guessed the third though.


The irony here is that the root of the "FactoryFactoryFactory problem" actually finds itself in composability, which itself is a feature of the -drumroll-

> functional programming paradigm

It's very common to see/use F(F(F(...))) paradigms in lambda calculus homework exercises and its more modern/useful derivatives (yes, like Scala) where the real "meat" of the logic is the center of this Tootsie Pop. We need to change the shape of a thing to make it fit some function that takes the thing (where every step of the way, we're greeted with: no, not like that). Oftentimes, the function expects a functional type, but we just have an object. Better Factory it up!

So at some point down that rabbit hole, you're like "fine, what about a FactoryFactory?" ok cool, that works (because the engineer 15 years ago wanted it to be more "general"). Just add a ".build()" and commit. Merge. Done.


I don’t think that’s such an irony. In FP, minding the conceptual overlap, you could conceivably name every function with a Factory suffix. After all that’s what a function does, return a thing it created. Taken to its logical extreme, one would expect far more Factory suffixes for a real program of any significant complexity. But that’s kind of a silly naming convention when every function produces a new value and there aren’t non-Factory things lurking around producing unknowable realities.


It’s even better in Haskell: Haskell only has single-argument functions, so multiple arguments are really function factories, and sometimes they produce other function factories. So, the type signature of a FactoryFactoryFactory is something like foldr:

    (a -> b -> b) -> b -> [a] -> b


Heh I was thinking about implicit currying too and I just figured that much factorying would break too many brains.


Lest we forget lisp, where literally every thing in a program is a list factory, even the last item in a list. Even a list with no items.


> It's very common to see/use F(F(F(...))) paradigms in lambda calculus homework exercises and its more modern/useful derivatives (yes, like Scala) where the real "meat" of the logic is the center of this Tootsie Pop.

Agreed. That would be compose in Scala[0]. Or F |> F |> F if using the thrush combinator[1].

> Often times, the function expects a functional type, but we just have an object. Better Factory it up!

Why?

> So at some point down that rabbit hole, you're like "fine, what about a FactoryFactory?"

This is where you jumped the shark. Doing composition by encoding it as a one-off unique type ("<prefix>FactoryFactory") appears to be a "poor man's monad transformer."

Unless, of course, the intent is to just get the change req "down the road."

0 - https://www.scala-lang.org/api/2.13.11/scala/Function1.html#...

1 - https://cinish.medium.com/scala-thrush-combinator-77c8c9d9fc...


I can understand how `InboundRequestContextFinderProviderFactory` comes about, hopefully due to a very mature, thorough testing environment where your `InboundRequestContextFinder` is injected via a provider and that provider needs to be constructed for different environments.

Though it does seem like perhaps the Finder itself is possibly an unnecessary abstraction.

Frankly, the longer I work, the more comfortable I am seeing these kinds of things. They can be misused, but they are also necessary for some really good things too. The absurd names are funny, but also somewhat standardized enough to be a signal on their own.


Yeah, out of context it may seem weird. And if you just call it FactoryFactoryFactory that's bad design with bad naming. But the fact it exists is not weird as long as you don't use it all the time.

I mean, any time you use an ORM which can use multiple backends and inject runtime/test implementations, you have 3 or 4 levels of factories to create the actual model instance. They're just named much better.


> The absurd names are funny, but also somewhat standardized enough to be a signal on their own.

They are a form of Hungarian notation[0], made popular in MS Windows C code:

  In its original form, Hungarian notation gives
  semantic information about a variable, telling
  you the intended use.
In this iteration, the Hungarian notation is encoded as "canonical suffix words for a class" instead of "canonical prefix characters for a variable."

Both uses are, in my not-so-humble opinion, anti-patterns to be avoided.

0 - https://en.wikipedia.org/wiki/Hungarian_notation

1 - https://learn.microsoft.com/en-us/windows/win32/learnwin32/w...


Stockholm Syndrome?


SyndromeSyndrome


StockholmSyndromeContextRequestBeanInstantiationStrategyDefinitionGeneratorFactoryFactoryFactory


I never got the factory thing. I just put the object creation logic in the object itself as static methods.

For example here: https://www.tutorialspoint.com/design_pattern/factory_patter...

I'd just have a static Shape.getOfType()

Or maybe ShapeType.toShape() for type safety instead of using strings for the type.
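Roughly this, combining both suggestions (names are illustrative, not taken from the linked tutorial):

    enum ShapeType { CIRCLE, SQUARE, RECTANGLE }

    interface Shape {
        void draw();

        // Creation logic lives on the type itself instead of a separate ShapeFactory.
        static Shape ofType(ShapeType type) {
            return switch (type) {
                case CIRCLE -> new Circle();
                case SQUARE -> new Square();
                case RECTANGLE -> new Rectangle();
            };
        }
    }

    class Circle implements Shape { public void draw() { System.out.println("circle"); } }
    class Square implements Shape { public void draw() { System.out.println("square"); } }
    class Rectangle implements Shape { public void draw() { System.out.println("rectangle"); } }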

It's okay, I'll brace for the down votes. :)


For things where you don't have to know which type of shape you're actually trying to create - sure. But now consider you're making something that displays shapes in one of two GUI toolkits, depending on where you're running it. You have basically 3 options:

- if/else everywhere to make ShapeA/ShapeB depending on the current toolkit

- generic Shape DTO and if/else + transforming all around the dispatcher to each toolkit

- pass in ToolkitContext instance which has a .createShape() which requires a single condition at app initialisation stage

Well... that ToolkitContext is a ShapeFactory. (Among other things) Then if you want to use a nicer system of handling this, you use DI, which is then effectively a FactoryFactory. Whether it looks obnoxious each time or you simply say AppContext.Get<Shape>() depends largely on your language though.
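A minimal sketch of that third option (all names hypothetical):

    interface Shape {}
    class SwingShape implements Shape {}
    class FltkShape implements Shape {}

    // The ToolkitContext is, among other things, a ShapeFactory.
    interface ToolkitContext {
        Shape createShape();
    }

    class SwingContext implements ToolkitContext {
        public Shape createShape() { return new SwingShape(); }
    }

    class FltkContext implements ToolkitContext {
        public Shape createShape() { return new FltkShape(); }
    }

    class App {
        // Stand-in for whatever detects the environment.
        static boolean swingAvailable() { return true; }

        public static void main(String[] args) {
            // The single condition at app initialisation; everything downstream
            // just asks the context for shapes without caring which toolkit it is.
            ToolkitContext ctx = swingAvailable() ? new SwingContext() : new FltkContext();
            Shape shape = ctx.createShape();
        }
    }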


I don't quite know at what point this feature came to be, but in Java it's now compiler-correct for subclasses to declare their overrides as returning a more precise type than the superclass, as long as it doesn't disagree with the superclass.

The place I've bumped into this is the "self"/"builder" pattern of setter. So for example if you have the superclass method:

    abstract public Shape setArea(Number area);

it's absolutely correct for other interfaces to override it with more precise definitions that they expect their members to obey:

    @Override abstract public Quadrilateral setArea(Number area);

    @Override public Square setArea(Number area) { this.area = area; return this; }

The downside here is that interfaces and classes have to override all such methods with their own implementation returning the proper type, even if it's just { super.setArea(area); return this; }, otherwise the superclass's looser definition will be used. For example, if you don't override the area method then this doesn't work:

    //compiler error: method setSides is unknown for class Shape
    new Square().setArea(area).setSides(sides)

But if subclasses override all their methods, for example if Square overrides the setArea method with one returning Square, the above example works.

Obviously, again, only works if you know the instance you're being passed is a Quadrilateral or a Square, but if you don't know that you are reduced to casting based on some kind of list or handler.
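Put together as a compact, compilable sketch (abstract classes instead of the interfaces above, purely to keep it short):

    abstract class Shape {
        protected Number area;
        public Shape setArea(Number area) { this.area = area; return this; }
    }

    abstract class Quadrilateral extends Shape {
        // Covariant override: narrower return type than the superclass.
        @Override public Quadrilateral setArea(Number area) { super.setArea(area); return this; }
    }

    class Square extends Quadrilateral {
        private Number sides;
        @Override public Square setArea(Number area) { super.setArea(area); return this; }
        public Square setSides(Number sides) { this.sides = sides; return this; }
    }

    class Demo {
        public static void main(String[] args) {
            // Compiles because Square's setArea returns Square, not Shape.
            Square s = new Square().setArea(4).setSides(4);
        }
    }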


Let's say you have SwingButton and FLTKButton. I'd have a common Button that uses the fancy new pattern matching features.

I like me some if statements. Debugger friendly.
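A sketch of that common Button, assuming a recent Java with sealed types and pattern matching for switch (class names hypothetical):

    sealed interface Button permits SwingButton, FltkButton {}
    record SwingButton(String label) implements Button {}
    record FltkButton(String label) implements Button {}

    class Renderer {
        static void draw(Button button) {
            // Exhaustive over the sealed hierarchy; the compiler complains if a
            // new Button type is added and not handled here.
            switch (button) {
                case SwingButton b -> System.out.println("Swing draws " + b.label());
                case FltkButton b -> System.out.println("FLTK draws " + b.label());
            }
        }
    }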


Some engineers see a chain of if statements as the problem, others see it as the solution. Who’s right is very subjective.


I strongly disagree with this. If statements have caused me more headaches than any other single programming concept. This is not subjective. When coders use if statements to replace polymorphism, it scatters logic throughout many different files. This has two extremely negative effects: first, to understand the reach of a single component, you need to understand the entire codebase. Second, when multiple developers are working in tandem, you are almost guaranteed to go through an incredibly difficult rebase process since the structure requires everyone to work on the same files.

If statements are useful for validation and bootstrapping decisions*, but if you embed them in your model as part of the operational structure, you’re making a rookie mistake.

*High performance software and scripts are the exception to most rules.


I'd go with: who's right depends on the code in question, with a large area of "it's a wash" in between. The extremes are not subjective.


Use switch!


Then you have an if statement per every place where you're operating on one or the other type. It may be very few places and it may be fine like that. It may be 4 options times a hundred places and then it's not fine anymore.

I'm not sure what you mean by debugger friendly. Your debugger knows what types things are.


Solving the issue of having two different ways of drawing shapes by having two different classes of shapes and special logic to construct the right one seems like a bad idea.

Seems to me that what you actually want is an algebraic data type and 2 different plot functions. Visitor patterns are a neater way to achieve that.
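A minimal visitor sketch of that idea (all names assumed): the shape hierarchy is the closed "ADT", and each way of plotting is just another visitor.

    interface ShapeVisitor<R> {
        R visitCircle(Circle c);
        R visitSquare(Square s);
    }

    interface Shape {
        <R> R accept(ShapeVisitor<R> visitor);
    }

    record Circle(double radius) implements Shape {
        public <R> R accept(ShapeVisitor<R> visitor) { return visitor.visitCircle(this); }
    }

    record Square(double side) implements Shape {
        public <R> R accept(ShapeVisitor<R> visitor) { return visitor.visitSquare(this); }
    }

    // One of the two "plot functions"; a second toolkit is just another visitor.
    class SwingPlotter implements ShapeVisitor<Void> {
        public Void visitCircle(Circle c) { System.out.println("circle r=" + c.radius()); return null; }
        public Void visitSquare(Square s) { System.out.println("square s=" + s.side()); return null; }
    }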


> seems like a bad idea.

"Seems like a bad idea" seems like a bad way to make software engineering decisions. Define what you're losing / gaining, how it impacts readability, does it simplify future development, etc.


Wouldn't one normally require that kind of analysis before embracing some design pattern, rather than dismissing it?

I'm just saying what my prior is, I'm not saying I won't change my mind, but you seem to suggesting I should require less evidence to accept the option I'm least sure about.

Which, you know, seems like a bad idea.


Shape.getOfType(type) would also be a factory. A factory does not need to be a class, it can also just be a function.

I see two reasons for using a factory:

1. Reducing the scope of what a piece of code is responsible for (a.k.a. "Single-Responsibility-Principle").

Assume you are working on a graphics app that can draw 50 different shapes. The currently selected tool is stored in a variable, and there is a long switch statement that returns a new Shape depending on that variable. You wouldn't want the switch statement to dominate the rest of the code:

    handleClick(x, y) {
        selectedTool = getSelectedTool();

        shape = switch (selectedTool) {
            case "CIRCLE" -> new Circle();
            // 49 more lines here...
        }

        drawShape(shape, x, y);
    }
Not only would it get more difficult to read the code, but also the test for handleClick() would have to test for all 50 shapes. If the instantiation of the shape is separated out into its own function, handleClick() can be shorter. If the factory function is injectable, the tests for handleClick() can focus on the coordination work that the function does: it asks the factory for the shape and draws it in the right place.

2. Allowing to reconfigure what kind of objects are created through Dependency Injection. For instance:

    class CommentRepository {
        constructor(dataSource, queryFactory) { ... }

        getComments(articleId) {
            commentsQuery = queryFactory.getCommentsQuery(articleId);
            return dataSource.query(commentsQuery).map(toResponseType);
        }
    }

    class PostgresQueryFactory { ... }

    class RedisQueryFactory { ... }


Something has to map your click events to each of the button classes, and you have to test that code anyway. So I would just put that delegation in the Button class, I don't see the point in the extra abstraction to the factory yet. But to each their own I guess :)


Also the fact that you use the repository pattern, which I would absolutely never do even though I know it's fairly common, shows that we have different ideals (no offense). :)


I’ve ended up almost never using static things. You almost always end up wanting one more layer up in my experience, especially for testing. When you make something static, you are fully committing to having that be your highest level. It’s convenient and sometimes makes sense, but the drawbacks are significant


The story kinda ends at its climax: it stops abruptly upon reaching the `class EventFactoryFactoryFactory` and fails to explain why it was needed or what it did, concretely


That's the punchline, nobody knows what it does or why it was needed.


Nobody knows if the author checked the commit graph, which is really where you are expected to go if you want to know why it was needed. Maybe it turned up nothing, we will never know.


Commit message was probably ‘get eventfactoryfactoryfactory working’


The commit graph alone? That’s gonna be missing a ton of context you’d find by reference, while doing a bisect to compare the previous reference graph, and taking notes which become so self referential they never terminate. On the bright side, now you’ll solve the halting problem!


It’s okay, I don’t think I’m very funny either


If you want to avoid Java just for those naming styles, while the ecosystem is actually great (docs, tooling, and lib maturity), you're doing it wrong.

Java works. Its effectiveness is high, and it's quite uncommon to hit rough edges since it's so well tested. A smooth ride for backend when compared to languages like Python or Javascript.


I feel like Java was designed in a way that runs contrary to how my mind works. I tend to navigate code starting at the entry-point and going deep. With lots of layers of abstraction, it's hard to figure out what is actually being run, and I tend to get lost in a Java codebase.


Is that a problem of the language though, or of the codebase you were navigating?


If it's the latter, it's happened far too frequently to just be chance. A language encourages the style of development that's most ergonomic for it.


I kindly disagree. Python tooling, documentation and lib maturity is nowadays comparable to Java IMHO. Which one is better suited for your problem is a matter of perception.

Javascript (on the backend) is obviously a different story.


Tooling for a dynamically typed language can never be on the same level as for a statically typed language. And among statically typed ones, from my experience, Java/JVM has the best and most mature tooling by far.


Having just recently moved on from a Python shop, I have to disagree. Vehemently. Python tooling is in no way as good as Java's.


> Python tooling, documentation and lib maturity is nowadays comparable to Java IMHO.

Having moved to a Python shop, I disagree. I really would like it to be true, but it's not.


I’ve spent most of my 25+ year career working in other platforms/frameworks, and only in the past 5 years or so started working in Java. Not by choice. As such, I’ve got lots of bones to pick about the language and the JVM (the former of which I’ve worked around by using Kotlin, which I quite like).

The meme about factories, however, has perplexed and kind of annoyed me. Isn’t that more a symptom of once-popular design paradigms that overstayed their welcome or perhaps never were, which can happen in any long lived ecosystem (in this case, one I’ve seen pop up here and there in other OOP platforms, such as .NET)? Maybe I’m spoiled by mostly having worked in recent Spring Boot where I rarely see these tedious and awkward patterns, and perhaps the meme is more impactful for people stuck in legacy land, but there is a bit of hubris with these blanket associations that rubs me the wrong way. I’m no Java apologist, either.


I always see all this terrible Java stuff, but the world hasn't ended and people who use Java all seem to say it's actually great for large enterprisy stuff.

Are these FactoryFactoryFactories secretly some kind of busywork conspiracy to keep all the "Coding for the sake of coding" engineers busy doing mostly harmless nonsense rather than deciding to make their own assembly language based on JSON that gets interpreted with buggy regex?

Like, I could work with a factoryfactoryfactory. It's just random nonsense, but there's not a lot of actual content.

If someone writes their own PEG parser in Forth, and then embeds a handwritten forth interpreter, and then writes the application in their own language, I'll be completely lost.

I'd rather deal with stupid boring enterprise code that does nothing than clever hacks.


> I would never take a job where I got paid to write Java code for a living. I had seen the light with the functional

I've never understood this kind of mindset. It seems to be really limiting. I've never once worked somewhere that only used one language or which didn't use Java for anything.


When I got my IT degree (after being full-time in the industry for over a decade), I had to do some projects in Java, and… it was fine. It was pretty much the same stuff I had learned to do in Obj-C and Swift and PHP with just different syntax to varying degrees. In the end I was like, "this is it? What's there to hate here?" I assume it has to do with the cruft that comes from its age (but from the little I've seen of C++, it had the same sort of cruft) and just its association with enterprisey corporate environments.

That being said, for those that have drunk deeply of the functional Kool-Aid, I guess they're going to have an aversion to anything as strongly OOP-oriented as Java is. And I certainly have my own choices of tech I'd rather avoid wherever I end up going in my career, like JavaScript outside the browser. So in broad strokes I understand the "I feel disgusted coding in X" mindset.


That's precisely it. Hating Java is cliquey virtue signaling and has nothing to do with its engineering features - which are pretty good.


Agreed. Along those same lines, I've ripped out just as much incomprehensible functional Scala as I have "enterprisey" Java. For every great idea, someone will internalize it as dogma. Dogma relieves one of the duty of reflecting on what they are actually doing and whether it is the best approach.

Java has its place. Many things have a place.


I personally thought it was ironic to praise Scala when the problem with the code was the opposite of incomprehensible. The complaint was about unnecessary boilerplate but his solution introduces a different kind of problem.


I've worked at a lot of places with no Java. I do tend to avoid the larger enterprisey environments tho


I've never worked anywhere where java was used for anything. But I did work at a Clojure shop, and that was great.

You could pick a niche language and only ever work with that, if you really wanted to.


Groovy is essentially a single framework language. If you want the Rails experience on the JVM with good performance it is your best bet but you aren't going to be writing anything except grails apps in it.


At my first project in my first job, there was a MockWorkerInfoServiceStubFactory. I think it was a factory for mocks of the stub for the worker info service, but it's never easy to know when to pop tokens from the start or from the end of such a name… (It was in C++.)




> class InboundRequestContextFinderProviderFactory

> What does this even mean? Why does the context need to be found, provided, and factory-ed? The implementation of the class itself is 1 line of actual logic surrounded by 22 lines of boilerplate.

How are "Context", "Finder", "Provider", and "Factory" Java specific problems? Whoever decided to use that convention certainly wasn't required to do so by Java.


I feel like the article cuts off short. It ends with an "I saw one in real life!" but says nothing about why it came to be, whether it performed well despite its clunky name, or whether it actually was a good idea given its circumstances


If you ever see a factory that makes single objects of one kind, that's basically abuse of the factory pattern.

The purpose of a factory is to make groups of related objects that are from parallel, but distinct, inheritance hierarchies.

For instance, to do AES encryption, you might need an AESKey (a kind of Key) and an AESCryptContext (a kind of CryptoContext).

Your code doesn't want to know about the existence of these classes at all, let alone their relationship.

You need some interface where you pass, say, the character string "AES" (or some enumeration, or Lisp symbol or whatever) and you get some object that makes keys and contexts of that kind.
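As a sketch (class and method names hypothetical, not from any real crypto library):

    interface Key {}
    interface CryptoContext {}

    class AesKey implements Key {}
    class AesCryptoContext implements CryptoContext {}

    // The abstract factory: one implementation per algorithm family, producing
    // matching objects from the parallel Key and CryptoContext hierarchies.
    interface CryptoFactory {
        Key newKey();
        CryptoContext newContext();
    }

    class AesFactory implements CryptoFactory {
        public Key newKey() { return new AesKey(); }
        public CryptoContext newContext() { return new AesCryptoContext(); }
    }

    class Crypto {
        // Calling code passes "AES" and never names the concrete classes.
        static CryptoFactory forAlgorithm(String name) {
            return switch (name) {
                case "AES" -> new AesFactory();
                default -> throw new IllegalArgumentException(name);
            };
        }
    }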


I guess you can do something similar in C++ using templates, just that the result isn't usually called “factory”. The terrible things OOP lets you do aren't limited to Java.


Oddly enough - this was on page 1 earlier today.

https://factoryfactoryfactory.net/


Follow-up posts are a regular occurrence. To judge by the comments, this one was pretty good!


My problem with Java is exactly this. I've avoided it my entire professional life, no matter its promises or success stories. I'm not saying Java doesn't work, but makes my daily life as a programmer miserable.

But there was a period when Java paid the bills, so I got proficient at it. Jumped ship as soon as I was able to.

I still don't know if there's any other language with that amount of "ceremonial syntax" out there.


Used Java 15 years before switching to Scala. Never wrote a FactoryFactoryFactory. Not sure if I've ever written a FactoryFactory, maybe once for some obscure reason.

As annoying as Java is, please don't mistake pattern over-engineering with the language. One could choose to do the same things in any OO language, like C# or C++. And as someone pointed out here, there are similar obscenities in the FP world, they are just obscured under higher-kinded types and algebras.


I agree. Yeah, as most people here know, C# is Microsoft's version of Java (they used to say if you know one, you pretty much know the other, although they have diverged more as time progressed). So to some degree, most of the problems of Java can be found in C#...

... although you alluded to one problem that Java has. Its frameworks were developed during the height of design patterns and many (like Spring) are bloated or overly complex or not worth the trouble (and actually, I think design patterns, when used very carefully and in truly useful situations, can be fantastic).


Spring is not Java. I haven’t worked on a Spring app in the last 10 years. There are other frameworks like Quarkus, Helidon, or Micronaut which all have a modern design.


Agreed. And actually, realized I screwed up what I was trying to communicate in my last comment. What I was trying to say is that Java and C# are at their core basically the same language, and no one is calling C# bloated and syntax heavy.

Yeah, actually, I like Java (it gives application devs just the right amount of control) and think that, to save it, Java needs to purge itself of Spring.


I don't disagree. Java as an object-oriented strongly-typed JVM-based language may be beautiful as theory.

But in practice "FactoryFactoryFactory" (*) is commonplace across codebases, frameworks, libraries, etc.

Java in the real world doesn't happen in vacuum.

(*) I know it's an extreme example, just kind of trying to make a point succinctly


I've used golang at an employer who invented their own "application framework", including dependency injection and hooked up to the rest of their libraries, including an RPC sidecar and whatnot. Let me tell you, that was not a pleasant experience at all. And given golang's inherent verbosity, it made it even more ceremonial than what you were probably exposed to.

Using a modern Java framework would have been such a superior approach. And it's even better now with records, pattern matching, virtual threads, and string templates (in preview).


There are no FactoryFactories in Java or its standard libraries.


Once you're dealing with a lot of code that has both a lot of external library/datasource dependencies AND you want to write good, thorough tests around it... what's the alternative to doing something like DI/Factory stuff? At some point you hit the point where you need to interact with the stateful, non-functional outside world. I completely understand why people invent things to avoid having to pass every single thing needed at every single layer as arguments into the topmost method call to get sent all the way down.

I couldn't tell you off the top of my head exactly why any Java code here would need a Provider here vs a Factory there vs a FactoryFactory in some other place, but... without further information on what these classes were actually doing, I'm gonna withhold judgement. Once you've got words like that three deep on the end there's probably room for some refactoring, sure... but is it the most pressing thing that causes headaches when modifying the code? I doubt it.


> what's the alternative to doing something like DI/Factory stuff? At some point you hit the point where you need to interact with the stateful, non-functional outside world.

More DI and more Factory, I kid you not. If I’ve learned anything from FP, there’s nothing you can’t solve by passing contained references to things that make more things. If it ever becomes cumbersome it’s also a great forcing function to reconsider everything you think you know and whether you want to keep knowing.


The solution is integration tests. Unit test with dozens of mocks are the work of architecture astronauts.


It's some cruft from the pre lambda days. I don't know why anyone would care about this.


Discussed at the time:

A FactoryFactoryFactory in Production - https://news.ycombinator.com/item?id=15952502 - Dec 2017 (46 comments)

Related ongoing thread:

Why I Hate Frameworks (2005) - https://news.ycombinator.com/item?id=36637655 - July 2023 (206 comments)


I recently started working in a Kotlin project coming from a Java background.

I wrote a Builder for a data class I needed to add stuff in a loop.

Reviewer suggested I use `copy()` instead. Once I used that I was happy, I didn't need that Builder after all. I removed bunch of useless lines of code, made things simpler. Loved it!


Has anybody actually been in a situation where dependency injection was better than just modules?


I worked on a pretty large java codebase (Stubhub) and actually found most of the code was well organized and very consistent in its architecture. The one thing I didn't like was the usage of Spring pretty much throughout. For the most part, I never had to use dependency injection to solve a problem, and realized that all the extra time and energy used to learn the Spring code and debug its problems (like adding in a new component using autowiring) was clearly not worth it.

But, there was one situation where I had to change the name of a component/constant (can't remember what it was) that was used throughout a very large microservice (like 24 different places or something). This component was autowired using Spring. If I had to rename it by hand, it would have required doing a search and replace in all those places, but instead I had to change the name in one file. It saved me like an hour of time.

Guess what I'm saying is it saved me like an hour in this one particular situation, but cost me days' worth of time in learning its overly complex code and debugging it. Yeah, I feel as though it is a solution in search of a problem and completely not worth it (and overall, I like Java. Yeah, to save Java, Spring must find its way to the ash heap :).


Yeah, that has been my experience. Dependency injection makes code more difficult to deal with, read and debug without any obvious upsides. I am sure it can improve testing but, for example, in JS world Jest seems to do that just fine and aligning your whole codebase with DI does not make sense.


>I never had to use dependency injection to solve a problem,

Here is some life advice. Start passing your dependencies through the constructor instead of using a "new X()" call in the constructor. That is what dependency injection is all about.
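In other words (hypothetical names, just to make the before/after concrete):

    class EmailSender {
        void send(String to, String body) { /* ... */ }
    }

    class SignupService {
        private final EmailSender emailSender;

        // Before: this.emailSender = new EmailSender();  - a hidden dependency.
        // After: the caller decides which EmailSender to pass in, so a test can
        // hand over a fake one. That is the whole of "dependency injection".
        SignupService(EmailSender emailSender) {
            this.emailSender = emailSender;
        }

        void signUp(String email) {
            emailSender.send(email, "Welcome!");
        }
    }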


... actually, to add to this, there is one other HUGE situation where it is beneficial, Spring Boot. Spring Boot is pretty awesome and think it uses the dynamic component wiring of Spring to put together its components.

Was thinking a while ago they need to re-create Spring Boot but without Spring (if this is possible)


Maybe you can go into more detail about what kind of modules you mean, but generally Dependency Injection allows using different implementations of a thing (module?) in the same place.

The most common use case for me is test doubles (mocks). People who are serious about tests usually use some kind of Dependency Injection.


As a C# guy I am also amazed by this. Somehow the C# frameworks and community skipped this degree of pattern madness and took a more pragmatic approach.

I think this is because of the more industrial and less academic influence on the language and frameworks


I think the problem with this terminology is that it's not direct enough. Like why do you require people to understand a metaphor (factory -> make things) while coding? Just name the thing makeX or makeXMaker.


Oh, that'd just be a NewThing() constructor. The ThingFactory() can probably take an argument specifying which implementation you need. It's almost always just a small function holding a conditional chain that you don't want to copy/paste all over the place. As a bonus, it's a named design pattern that you can find in basically any book about design patterns.

Whether it's actually worth using is another question, but at least it's a point of reference.


That's because it's standard terminology that means more than just 'making things'. There is also the Builder pattern which also makes things but which is not the same as a Factory. Besides, the word maker sounds weird.


Factory methods literally just "make things", idk why polymorphism has to be a "pattern" when it's already a feature of the language.


Does anybody still have that blog post from like 15 or 20 years ago with the factory design pattern that had something like a Knight Factory to help defend the kingdom?

Anybody know what I’m talking about?



"FFF" is a result of an implementation consisting of a poorly(?) though out mix between compositional OOP and dependency injection.


Considering implementing AbstractPatternConfigFactoryBuilderFactoryConfigPatternAbstract. Just to see if anyone notices.


Obligatory help to generate these names:

https://projects.haykranen.nl/java/


FactoryFactoryFactoryFactorsFactorsFactoryFactoryFactoryFactors


Just remove the name and make it a Function<String, Function<Context, Function<String, Context>>> and then use lambdas throughout and your PL fans will love you for not making it something.
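For what it's worth, that nesting is directly expressible (Context here is just a hypothetical stand-in for whatever the factories ultimately build):

    import java.util.function.Function;

    class Curried {
        record Context(String name, String value) {}

        // The triple-nested Function from the comment, written with lambdas:
        // effectively a FactoryFactoryFactory that never gets a class name.
        static final Function<String, Function<Context, Function<String, Context>>> make =
                name -> ctx -> value -> new Context(name + "/" + ctx.name(), value);

        public static void main(String[] args) {
            Context result = make.apply("config").apply(new Context("base", "x")).apply("y");
            System.out.println(result);
        }
    }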



