I always default to redundancy until I can't stand it anymore, or it has actually caused an issue.
For me, the real devil is the clarity of requirements and intended product roadmap. If the thing you are building is crystalline & eternal with 100% obvious requirements on day #1, then sure. Spend all afternoon in the architecture rabbit hole and build an implementation to match.
I've never worked on a project where you can plan this far ahead. Think about what happens when you allow 2 different developers to work on 2 distinct features that share a (vague) common concept, but then encourage them to independently implement that common concept. Some developers look at this like an inconsistent horror show. I look at it as additional options to choose from.
At the end of the day, you can always come back and de-dupe these common types and re-align implementations accordingly.
A big part of my refactoring is distilling base classes and protocol defaults (I was just doing exactly that a few minutes ago, in fact).
I have watched so many dependapocalypses that I have a healthy fear. I no longer have to touch that particular stove lid to know that it's hot.
There are a number of serious issues with dependencies.
In many cases, they are great. They can give a fairly inexperienced programmer, massive power. In many other cases, they are not great. They can give a fairly inexperienced programmer, massive power.
But I use them frequently, in my own work. It's called "Modular Programming," and the concept is decades old. I write almost all my modules, and include them as packages, with static linking. Each package tends to be a fairly standalone, atomic software project, with its own lifecycle.
> They can give a fairly inexperienced programmer, massive power. (x2)
This is a big reason why we use one of the broadest frameworks available in modern tooling today - .NET 6+. It has so many batteries included that pulling in a 3rd party dependency is something rare enough to call a big meeting for.
Additionally, that "dependency" is the same one everyone else uses - not just in our organization - so all that code could be shared with minimal friction.
There are certainly downsides to this - what if Microsoft becomes even more evil? But, we can hypothesize about the end of the world while our competition uses the same trick and steals all of our lunch money...
Seeing things in absolutes sets you up for failure, or at least limited success.
---
As a side note, if we speak about dependencies instead of code reuse in general: dependency pinning and not updating is thoroughly underutilized. Sure, you probably don't want to do it for, say, openssl or your web framework. But reviewing, pinning, and forgetting about small, self-contained dependencies, as long as you don't run into bugs, can be a pretty good idea (though it depends a lot on the language, immutability of dependencies, etc.).
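As a concrete sketch of pin-and-forget, here is roughly what it can look like with Go modules (module paths and versions are invented for illustration); go.mod records an exact version and go.sum locks the content hashes, so the dependency cannot drift until someone bumps it deliberately:

```
// go.mod of a hypothetical application, pinning one small, self-contained helper.
module example.com/myapp

go 1.21

// Reviewed once, pinned, and left alone as long as no bugs show up.
require github.com/example/tinyslug v1.2.3
```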
Edit: Also, making it easy to change code when requirements change doesn't mean adding all kinds of generic abstractions to your code. It means keeping code clean and simple, and avoiding implicit logic dependencies, enough to make code changes easy. Adding a lot of generic abstractions can make it harder to change things instead of easier.
Pinning and forgetting is also a fantastic way to be riddled with security vulnerabilities in a not so short span of time! And a great way to make it a pain for people to upgrade to newer versions of dependencies since they'll need to jump several major versions in some cases.
That's why I said it's for small, self-contained libraries, and that it depends on the language and tooling.
Obviously you should never pin-and-forget openssl, frameworks, or similar.
But you should have tooling in place anyway to report security vulnerabilities in your supply chain, so as long as you don't use a language where simple bugs tend to become security vulnerabilities, this isn't a problem for security.
Similarly, because they are small, self-contained dependencies, upgrading across multiple major versions doesn't matter much.
And if your language has a proper module system, you might never need to upgrade at all (i.e., one that sanely allows multiple versions of transitive dependencies; so not C or Python).
Think about dependencies so simple that you'd consider copy-pasting them into your code (with attribution). Maybe not as absurdly simple as leftpad, but still very simple. In other words, the kind of single-source-file libraries that rarely ever get updates, because shortly after being written they are "complete", or at least complete with respect to the features you use.
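For that "so simple you'd consider copy-pasting it" category, the whole dependency can literally be one vendored source file with an attribution header. A hypothetical example (invented library, shown in Go):

```go
// Vendored by hand from the hypothetical github.com/example/slugify (v1.0.2),
// MIT licensed; copyright remains with the original author. Small, "complete"
// for the features we use, and never expected to need an update.
package slugify

import "strings"

// Slug lowercases s and collapses runs of non-alphanumeric characters into '-'.
func Slug(s string) string {
	var b strings.Builder
	lastDash := false
	for _, r := range strings.ToLower(s) {
		switch {
		case r >= 'a' && r <= 'z', r >= '0' && r <= '9':
			b.WriteRune(r)
			lastDash = false
		default:
			if !lastDash && b.Len() > 0 {
				b.WriteByte('-')
				lastDash = true
			}
		}
	}
	return strings.TrimSuffix(b.String(), "-")
}
```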
> Edit: Also, making it easy to change code when requirements change doesn't mean adding all kinds of generic abstractions to your code. It means keeping code clean and simple, and avoiding implicit logic dependencies, enough to make code changes easy. Adding a lot of generic abstractions can make it harder to change things instead of easier.
Exactly. When you have two parts of your system that do the same thing but for different reasons, and one of them gets a new requirement, you'll be in a better place if the code was previously redundant than if both places called into a single shared implementation.
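A contrived Go sketch of that situation (all names invented): two validators that started out identical and were kept separate because they exist for different reasons, which is exactly what makes the shipping-only rule a local, one-line change later.

```go
package checkout

import "errors"

type Address struct {
	Street, City string
	POBox        bool
}

// validateShippingAddress and validateBillingAddress began life identical.
// They were never merged, because they answer different questions.
func validateShippingAddress(a Address) error {
	if a.Street == "" || a.City == "" {
		return errors.New("incomplete address")
	}
	// New requirement that only applies to shipping: no PO boxes.
	if a.POBox {
		return errors.New("cannot ship to a PO box")
	}
	return nil
}

func validateBillingAddress(a Address) error {
	if a.Street == "" || a.City == "" {
		return errors.New("incomplete address")
	}
	return nil
}
```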
Conspicuously absent from this entire article is the word "abstraction".
Oh, sorry, I just said a bad word, didn't I? We're not allowed to use the a-word around these parts, or people will think we're being enterprisey and chase us away with pitchforks. What am I, some kinda enterprise programmer?
I can see why doing an "extract to interface" IDE command in the messy situation TFA describes would (rightly!) get this reaction. Of course that's just adding more needless complexity to an already needlessly complex situation. But that's not what good abstraction looks like. For some reason, everyone seems to get it in their head that abstraction is supposed to be some sort of list of features you support. So you take each public class Foo and extract it to an interface IFoo, and you have your list of features. Abstraction!
No. Good abstraction describes your needs. That's it! If your module needs to do something, but doesn't want to be responsible for the exact implementation of that thing, then you define an abstraction which describes that need. It will be implemented by whoever needs your module.
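In Go terms, for instance (names invented for illustration), that usually means the consuming module declares a tiny interface for the one thing it needs and stays ignorant of who provides it:

```go
package report

import "time"

// Clock is the abstraction this module needs: "tell me what time it is".
// report doesn't care whether that's the system clock, a database timestamp,
// or a fixed value in a test; whoever uses report supplies an implementation.
type Clock interface {
	Now() time.Time
}

type Generator struct {
	Clock Clock
}

func (g Generator) Header() string {
	return "Generated at " + g.Clock.Now().Format(time.RFC3339)
}
```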
> You can choose between having modules A and B using a module C doing something, or have them do it themselves. What's your call?
A defines abstractions describing what it needs to do its job, and so does B. Since A and B have different responsibilities, these are two separate pieces of code, even if they look basically the same right now.
Now you have lots of options, and you can vary between them easily depending on what makes sense:
- A and B can implement their own abstractions in terms of C, and then use C. So A -> C and B -> C ("->" meaning "depends on"). The code exists in isolated corners of A and B, so it can be moved easily later.
- C can implement A's and B's abstractions, so C -> A and C -> B. The code exists in isolated corners of C, so it can be moved easily later.
- Leave A, B, and C completely independent of one another. Write a small module ac, which provides implementations for A in terms of C, and a small module bc, which provides implementations for B in terms of C. Now you have no dependencies whatsoever between your major modules, and you've introduced small adapter modules that you can throw away later (sketched just after this list).
- Leave A, B, and C completely independent of one another. Your end application handles stitching them together (like above, but ac and bc are just sitting inside your end application).
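A rough Go sketch of the adapter-module option (module and type names are made up): A owns its abstraction, C knows nothing about it, and the tiny ac package is the only place the two meet.

```go
// Package ac adapts module C's concrete client to the Notifier abstraction
// that module A defined for its own needs. A and C stay independent of each
// other; ac is the only code that knows about both, and it is cheap to replace.
package ac

import (
	"example.com/app/a" // assumed to define a.Notifier: Notify(msg string) error
	"example.com/app/c" // assumed to define c.Client with a Send(string) error method
)

// Notifier wraps a c.Client so that it satisfies a.Notifier.
type Notifier struct {
	Client *c.Client
}

func (n Notifier) Notify(msg string) error {
	return n.Client.Send(msg)
}

// Compile-time check that the adapter really implements A's abstraction.
var _ a.Notifier = Notifier{}
```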
> Rumors tell that [XParam's] original host project uses <5% of its features.
Let the marketing people think in terms of "features". You should be thinking in terms of needs. You can spend months implementing features that you don't need. It's much harder to implement a need you don't need. And that's what abstractions are: a description of what you need to do your job.
> Oh, sorry, I just said a bad word, didn't I? We're not allowed to use the a-word around these parts, or people will think we're being enterprisey and chase us away with pitchforks. What am I, some kinda enterprise programmer?
What HN discussions have you been reading? I nearly stopped at this preemptive snark.
Perhaps it's just a vocal minority, but I see a lot of backlash against anything that smells like large-scale, top-down, enterprise design. People see a concept implemented badly, so they get it in their heads that the original concept was worthless. This is one major underlying force for things like the NoSQL movement and other cyclic fads.
What HN discussions have I been reading? Let's look back:
> I wish it was a meme [to hate abstraction]. The things people complained about decades ago are still happening today, and the practitioners are all too eager to turn a 100 line program into a 1000 line one, while taking weeks to test if it does what it should. The average loud dev is a GoF fanatic in love with inaccessible reflection.
> I’ve found this “come correct” mindset is used to justify unnecessarily flexible solutions to allow for easier changes in the future … changes that 90% of the time, never actually materialize.
> [a response:] In other words, when the basic assumptions of that fancy abstraction are just not workable with the future requirements, you're hosed. Worse, now you might need to refactor a lot of code building on this abstraction.
> This view follows quite naturally from another aspect of modern programming thinking (and education), which claims many problems are gross and complex, and thus we need abstraction to make them appear simpler.
From an article that hit the front-page of HN, although this is a little unfair, because this guy is a raging asshole.
> The Gang of Four on the other hand was an unfortunate turn for the industry. As you say we took it too seriously. In practice very few projects can benefit from the "Factory pattern"
Someone claiming that passing a function which takes zero arguments and returns one value is too enterprisey, and that "very few projects" can benefit from it. Passing functions around is an absolutely essential feature for a programming language, and I can't imagine how you could develop good abstractions (or good code at all) without it. And yes, I've worked in a lot of languages that don't support first-class functions; that's how I came to the conclusion that it's absolutely essential.
This also illustrates my point about how something can be done badly (factory pattern) and it poisons the well for the concept (passing a function).
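For the record, here is everything the "Factory pattern" amounts to once functions are first-class (a Go sketch with invented names): a parameter whose type is a zero-argument function returning one value.

```go
package pool

// Conn stands in for whatever the factory produces.
type Conn interface {
	Close() error
}

// New fills a pool using a factory: a function that takes no arguments and
// returns one value. That is the whole pattern.
func New(factory func() Conn, size int) []Conn {
	conns := make([]Conn, 0, size)
	for i := 0; i < size; i++ {
		conns = append(conns, factory())
	}
	return conns
}
```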
> ignoring the architectural realities of the hardware is ignoring one of your responsibilities as an engineer to deliver performant software. Now, it's possible to argue that writing performant software is not important ...
The argument in question here is whether every programmer should always know which processor cache every variable they use lives in at all times. I think it's pretty clear this is completely incompatible with even the barest form of abstraction.
That's basically right, but not really for the right reasons. What exactly does "smaller" mean? Rob Pike is saying they have few methods. We're kinda getting to the right ideas without understanding why. It sounds like you (and Rob Pike, from the bit I listened to) are still thinking of abstractions as a feature list, and you're saying: instead of having one list of lots of features, have lots of lists of one feature.
Okay, but ... we're still missing the point. It's not about features. It's about needs.
How do you know whether an abstraction can be "smaller"? What does "smaller" mean? Smaller means: more generic and/or easier to implement. That is not the same as having fewer methods, even if that is often the result. An abstraction can be smaller if you can make it more generic, and/or easier to implement, and have it still do everything you need it to do.
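A Go-flavored illustration of that distinction (the first interface and its report type are invented): both interfaces below have exactly one method, but the second is "smaller" in the sense that matters, because it demands less of the implementor and is therefore more generic.

```go
package reporting

// One method, but a big ask: an implementor has to know about MonthlyReport,
// our formatting rules, and probably our persistence layer.
type MonthlyReport struct{ /* fields omitted */ }

type ReportExporter interface {
	ExportMonthlyReportAsPDF(r MonthlyReport) ([]byte, error)
}

// Also one method, but a much smaller need: "write these bytes somewhere".
// Files, sockets, buffers, and test fakes all satisfy it already (it is just
// io.Writer), which makes the reporting code more generic as a side effect.
type Sink interface {
	Write(p []byte) (n int, err error)
}
```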
An interface must be as small as possible to fulfill its purpose, no smaller and no larger. It's not about implementation, it's about the mental model of what is expected and why.
Another way to say this is that an interface must be perfectly sized, but perhaps that is too abstract to be useful.
Rob Pike mentioned few methods because that is the only thing interfaces mean in Go - he might have used different terminology in another context, or when speaking about abstraction of systems rather than interfaces alone as abstractions.
Being alerted is good. Not necessarily sufficient though. Dependency A depends on C.v2. Dependency B used to depend on C.v2 but just upgraded to C.v3.
Either your language (or _maybe_ tooling) allows those to coexist or you are now going to have a really bad time and give serious thought to jettisoning one of A or B.
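Go is one example of tooling where that particular diamond is survivable: each major version of a module gets a distinct import path, so A's C.v2 and B's C.v3 can link into the same binary. A minimal sketch (the paths and the New function are hypothetical):

```go
package main

import (
	// Two major versions of the same hypothetical dependency "c". Go treats
	// them as different modules because the major version is part of the path,
	// so they coexist without either A or B having to move.
	cv2 "example.com/c/v2"
	cv3 "example.com/c/v3"
)

func main() {
	_ = cv2.New() // the code path that came from dependency A
	_ = cv3.New() // the code path that came from dependency B
}
```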