CUPID – the back story (dannorth.net)
322 points by blowski on March 21, 2021 | 257 comments



I read the article and didn't really feel that they disagreed with, or debunked any of the principles. It reads like they formed their own understanding of each principle and maybe disagreed with how they were taught, or how the principles are sometimes presented.

This change in outlook of existing thoughts/ideas is how many crafts grow, such as martial arts, painting, philosophy, etc (instead of stagnating). Sometimes we need to frame things in a more modern manner, and sometimes we need to discard them completely. In this case, I think re-framing the concepts is helpful, and I found it to be an interesting point of view. I agreed with a good amount of it, but I don't think we need to discard SOLID principles just yet.


This alludes to one of my main beefs with SOLID - the lack of a clear definition.

What counts as a "responsibility", for instance. Where I see one responsibility some people see two and vice versa.


I agree. In a way that's one of the strengths of SOLID in my opinion. All places I've worked at had slightly different versions of what e.g. SRP meant. And that's ok. It's a way to make sure your team writes code in a way their teammates would expect. Whether that's objectively good code or not doesn't matter as much to me.


> What counts as a "responsibility", for instance. Where I see one responsibility some people see two and vice versa.

The Single Responsibility Principle is not a rigorously defined Scientific law, where the "responsibility count" can be exactly measured and minimised.

It is a subjective design guideline, with elements of experience, of context and of "I know it when I see it".

This does _not_ make it useless. It is still very useful indeed. But you do have to understand that disagreements over it are not always about "who is objectively right" but "which style makes the most sense in context".


*Shrug.* I think every time I've ever seen a disagreement about code quality, it's boiled down to both developers thinking "I know it when I see it" about their separate approaches.

If a set of principles lets them both think they're both correct and the other one is wrong, what exactly is the point of those principles?

This isn't just a coding thing. It's also, say, why society follows a body of laws rather than something like the 10 commandments.


There are so many instances in code where two options are just as good as each other by some negligible margin. The actual problem that needs solving is figuring out how to compromise and collaborate.

That's why, if I am on a team project, I much prefer working in opinionated frameworks with strong idioms. It actually doesn't matter if I think I could do it better, the framework has chosen a different way and that is fine. We all have to do it that way, and we can all compromise and collaborate. No one bickers about best, and we can get real work done.

Different when it is a project of my own, but if it requires teamwork you need a framework of collaboration as much as a framework of code.


>There are so many instances in code where two options are just as good as each other by some negligible margin.

This is my other beef with SOLID: no trade offs.

I'd even extend your point to say that there are many times when code is not ideal but fixing it isn't worthwhile.

It would be nice to have a set of principles which recognized costs rather than promoting a vague, idealized standard for developers to fight over.


> lets them both think they're both correct and the other one is wrong, what exactly is the point of those principles

Guidelines can be useful even when subjective. Vitriolic disputes over whose (subjective) view is "correct" are an example of toxic behavior, not a problem with the guidelines being used to justify such behavior.

What would you think of someone who loudly insisted (without humor) that putting pineapple on pizza was wrong as a matter of principle? Is such expression in any way useful?


Here's the problem:

Some developers will say a (data structure) Controller is a class obeying the SRP

Some others will say the class can manage itself and not need a controller, so M and C can be one thing.

Some others will argue that it's better to make 2 controllers, one for complicated thing 1 and another for complicated thing 2, all based on the same Model.


That's not a problem, it just means the set of design tradeoffs to consider when solving the problem is nontrivial.

The principle draws focus to certain (arguably important) design aspects. Multiple possible approaches are identified. Various concerns are raised in response. A good solution needs to balance these concerns against one another in context.

The principle is just one of many cognitive tools to employ when thinking about the problem.


I think that is a feature, not a bug. I think it makes sense in different contexts for "responsibility" to be abstracted differently. The two extremes are lots of files and functions versus fewer files and functions, and the optimal balance to strike is probably based on whether it is important for people to focus on the modules, or the arrangement of those modules. For high-performance C++ libraries with a good design/architect, it could make sense to split up into a lot of files/functions, so that each function can be iterated upon and improved. For a less performance-sensitive Java library where understanding and usage are most important, you would want fewer files/functions, so that the development focus is more on the high-level ideas and the arrangement of the parts (or refactoring).

With any paradigm, there is often ambiguity with certain elements, because those elements should be dynamic. What SOLID aims to do is say that these main points are not something you should dedicate brain cycles towards, as they are best spent elsewhere in the design.


>With any paradigm, there is often ambiguity with certain elements, because those elements should be dynamic.

It's because, unless you're careful, human language is insufficiently precise by its nature for many domains.

This is why mathematicians communicate with mathematical notation, for instance. It's why lawyers use special lawyer-only terms (or why interpretation of the law is an explicitly separated function).

With SOLID the lack of precision renders the whole exercise rather pointless.

You're supposed to be able to use these principles as something you can agree upon with other developers so that you have a shared understanding of what constitutes "good" code.

However, it doesn't happen. Instead we argue over what constitutes a single responsibility.

SOLID isn't the only example of this. "Agile" is actually far worse.


> With SOLID the lack of precision renders the whole exercise rather pointless.

If your assumption is that design guidelines can have mathematical precision, then you're going to be perpetually disappointed.

> Instead we argue over what constitutes a single responsibility.

I'm sorry that you can't reach consensus, but IMHO the false idea that there is a single, mathematically correct answer (and that if people differ, someone must therefore be Wrong, capital W) is often part of the problem, part of what stops an agreement on how to move forward pragmatically from being reached.


Being vague isn't the only thing wrong with SOLID. The idea that there is "one true way" is partly what bugs me about it (and uncle bob in general).

I don't think that there is a set of design guidelines that can be applied with mathematical precision, but there are a set of code "costs" (like cyclomatic complexity) which can be measured with mathematical precision.

That is to say, all other things being equal, if a pull request non-negligibly reduced cyclomatic complexity I'd be happy to claim that it increased code quality. I don't believe that this idea is false.
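
As a toy illustration (not from the thread, names invented): the two functions below behave identically, but the second has fewer decision points, so its cyclomatic complexity (roughly, decision points plus one) is measurably lower.

    def shipping_cost_v1(express, international):
        # nested branches: three decision points, cyclomatic complexity 4
        if international:
            if express:
                return 40
            return 25
        if express:
            return 15
        return 5

    def shipping_cost_v2(express, international):
        # same behaviour, a single lookup: cyclomatic complexity 1
        table = {(False, False): 5, (True, False): 15,
                 (False, True): 25, (True, True): 40}
        return table[(express, international)]

    assert all(shipping_cost_v1(e, i) == shipping_cost_v2(e, i)
               for e in (False, True) for i in (False, True))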


> there are a set of code "costs" (like cyclomatic complexity) which can be measured with mathematical precision.

"Responsibility" is IMHO about intent and meaning, which as a human concept is not reducible to measurements of this kind. Cyclomatic complexity is all fine and well, but it cannot tell you if an intent is being met or not. It is silent as to meaning. It's relevant only if you can compare two pieces of code that do the same thing; it's not talking about what that thing should be.


> The idea that there is "one true way" is partly what bugs me about it (and uncle bob in general).

I tend to agree with that, and the original article in general. Mr Martin's approach helped originally, but the dogma now can and should be moved on from.


Yes, that is definitely true, human language is very limited. Although, a paradigm is only as strong as its adoption, and by reducing the barrier to entry (using common language as opposed to overly precise notation), the principles can spread faster and with less friction, say inside of an organization. Nuance is always more optimal, for sure. How many people can rattle off the Fundamental Theorem of Calculus? I would think that most people remember the idea behind it, but not the precise mathematical definition. In fact, the definition learned in undergraduate calculus is not 100% rigorous, as early undergraduates are not expected to have the rigor and experience to handle real analysis.


A responsibility is an obligation to perform a task or to know information (authoritatively). If an object performs two tasks, it has two responsibilities, and so on.

For example, an Entity that is part of the persistence layer of the app has the responsibility to persist its state to the database, but the responsibility to know the information lies with an object that is part of the business logic layer. The information that is stored in the Entity so that it can be persisted to the DB is just a cache of the information in the Business Object; the Entity is not responsible for it, it just holds a cache. If the same object were responsible for both holding the information and persisting it, it would have 2 responsibilities.

This might sound somewhat confusing and useless, but it isn't so. Imagine a future where computers have a form of RAM that is not volatile. There is no need for a database, in the classical sense – whatever is in RAM when the computer is powered off/rebooted will still be there when the program resumes running.
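
A minimal sketch of that split, with invented names (not from the thread): the business object knows the information and owns the rules; the entity only persists a snapshot of it.

    import sqlite3

    class Account:                         # business layer: knows the information
        def __init__(self, owner, balance):
            self.owner = owner
            self.balance = balance

        def deposit(self, amount):         # business rule lives here
            self.balance += amount

    class AccountEntity:                   # persistence layer: only stores a snapshot
        def __init__(self, conn):
            self.conn = conn

        def save(self, account):
            # what gets written is just a cache of the business object's state
            self.conn.execute(
                "INSERT OR REPLACE INTO accounts(owner, balance) VALUES (?, ?)",
                (account.owner, account.balance))
            self.conn.commit()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts(owner TEXT PRIMARY KEY, balance REAL)")
    acct = Account("alice", 100.0)
    acct.deposit(25.0)
    AccountEntity(conn).save(acct)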


I don't disagree, and that lack of common interpretation can be both good and bad. Good because it lets you apply your own understanding and experiences to it, and bad because it introduces the potential for conflict when two people have different understandings.


Pauli, in one of the sickest ever physics burns, said of his colleague’s work that it was “not even wrong.”


The author presents a series of strawman arguments to debunk SOLID, and then suggests that instead we should make software as complex as possible but no more complex than the coder's own comprehension. In my experience this is commonly how incomprehensible codebases evolve; the code is comprehensible right up to the point that it isn't, and by that point it's too late to change anything.


> he suggests we should make code as complex as possible

I read no such thing. (Speaking of straw men...) Instead, I read him saying that the alternative to SOLID was “Write simple code.”

I think there’s room to criticize this essay, but that’s a bizarre one to lead with.


> instead, I read him saying that the alternative to SOLID was “Write simple code.”

The part you missed is in the same sentence as the three words you quoted.

"Instead I suggested to write simple code using the heuristic that it 'Fits In My Head'."

The obvious criticism here is that now any developer who wants to defend their code will simply claim a given spaghetti incident easily fit in their head. The author even seems to acknowledge this line of attack in the next paragraph:

> You might ask “Whose head?” For the purpose of the heuristic I assume the owner of the head can read and write idiomatic code in whichever languages are in play, and that they are familiar with the problem domain. If they need more esoteric knowledge than that, for instance knowing which of the many undocumented internal systems we need to integrate with to get any work done, then that should be made explicit in the code so that it will fit in their head.

But that just protects against spaghetti made from esoteric knowledge of internals. For example, in C a "big head" would still be able to justify using global variables (after all, that's idiomatic C), rolling their own buggy threading approach, and deeply nested side-effect based programming.

I'd much prefer pointing big heads to the rule of "do one thing" than to ever read a response of the category, "Well, it fits in my head."


> The obvious criticism here is that now any developer who wants to defend their code will simply claim a given spaghetti incident easily fit in their head.

The thing is: that's just a bad defense. If the people who are going to be maintaining it and working with it are saying it's bad code, it's bad. Even if this is the only developer who is going to maintain it for now (which itself is always a terrible idea), eventually someone else will take over, and the closest thing you have to that "someone else" is the rest of the current team.

I'm having a hard time imagining someone seriously making this argument who is not also:

* Trying to establish themselves as the sole possible maintainer, so either the company can't fire them or is completely screwed if they ever leave, or:

* Saying the rest of their team is just too stupid to work with

In either case, this person is a massive liability, and the longer they are employed the more damage they'll do.


You're making a huge assumption that the discussion is one person disagreeing with the rest of the team.

What if everyone but one says it fits in their head?

What if you're just talking one on one, and have no idea how many team members it fits in the head of?

What if the person who is criticizing the code will begrudgingly admit that it fits in their head, but still thinks it's too messy?


> What if everyone but one says it fits in their head?

"It depends". If the one person is a junior it's maybe not a code issue (unless junior-level people are expected to maintain it). On the other hand if it's a skilled, experienced, senior person saying they can't follow it, there's likely an issue with the code.

> What if you're just talking one on one, and have no idea how many team members it fits in the head of?

> What if the person who is criticizing the code will begrudgingly admit that it fits in their head, but still thinks it's too messy?

The greater point here is that it's about code maintainability. "Fits in your head" is merely one proxy for that, not the actual goal. There are also hundreds of other things that lower code maintainability -- including being "messy", having bad names, poor or no inline documentation, no unit tests, and being "clever" -- and all of those are things that should be discussed and fixed.

Presumably everyone involved is a professional working in good faith. If they really can't come to consensus: sleep on it. Involve more people. Suggest alternatives. Pair up.

Someone consistently on the losing side of these discussions -- whether they feel they're constantly dumbing down their code, or that the rest of the team is writing unmaintainable spaghetti nonsense -- should probably reflect on if this is the right team for them.


Not defending the article at all, but "whose head" should be at least a code reviewer's head, ideally two.


I believe that there are people that can handle more abstractions in their working memory, maybe up to 6 or 7 layers of abstraction, and there are people that can't go further than 3, before feeling lost.

The "bigger heads" types will use SOLID or any other argument to justify a higher number of abstractions. While the other type will do the same to justify a lesser.

At the end of the day there is no single right solution, only so-called "cultures" that group people with similar taste and cognitive fingerprint and provide best practices.


Yup. As a PR reviewer, my gold standard is "I understand 100% of what's going on in this change, and I'm confident it does what it says it does". All the rules and standards and stuff we put in place are basically in support of that goal.

If I can't understand it, neither can the next guy.


Fits in my head and works on my computer.

slaps on hood


>The author presents a series of strawman arguments to debunk SOLID

Care to elaborate as to why those are strawmen?

Without that, this is a no-argument, which is probably worse than a strawman.

>and then suggests that instead we should make software as complex as possible but no more complex than the coders own comprehension

The combo (go and make "software as complex as possible but no more complex than the coders own comprehension") is suggested nowhere in the post.

The second part of the combo (make software "no more complex than the coders own comprehension") is indeed said, and is very sensible advice.

It is, however, combined not with the inverse advice of "make it as complex as possible" that you claim the author combined it with, but with the advice to keep it simple.


> The Single Responsibility Principle says that code should only do one thing. Another framing is that it should have “one reason to change”. I called this the “Pointlessly Vague Principle”. What is one thing anyway? Is ETL – Extract-Transform-Load – one thing (a DataProcessor) or three things? Any non-trivial code can have any number of reasons to change, which may or may not include the one you had in mind, so again this doesn’t make much sense to me.

The strawman here is reading SOLID (SRtC) as saying that software, composed of "code", should only do one thing, which it is clearly not saying. By that reasoning, SOLID would rule out any software that provides multiple capabilities. Your editor, for example, has multiple capabilities. It can save. It can highlight. It can cut. It can paste.

So, a reasonable reading of SOLID naturally is not ruling out composing complex software using "simple single purpose code". However, OP is assuming a ridiculous reading (the strawman) that basically can only be valid for software that has only one irreducible capability.

One can read SOLID SRtC in terms of capability as "compose complex software using single purpose code".

One can read SOLID SRtC in terms of change as "changes to code should consist of one, or a sequential set of, single purpose changes".
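
Using the ETL example from the quoted paragraph, a rough sketch of that composed reading (function names are illustrative): each piece does one thing, and the composition itself stays thin.

    import csv, io, json

    def extract(csv_text):                  # one thing: parse the raw input
        return list(csv.DictReader(io.StringIO(csv_text)))

    def transform(rows):                    # one thing: apply the business rule
        return [{"name": r["name"].title(), "age": int(r["age"])} for r in rows]

    def load(records, sink):                # one thing: write the output
        sink.write(json.dumps(records))

    def run_etl(csv_text, sink):            # thin composition of the three
        load(transform(extract(csv_text)), sink)

    out = io.StringIO()
    run_etl("name,age\nada,36\n", out)
    print(out.getvalue())                   # [{"name": "Ada", "age": 36}]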


>The strawman here is reading SOLID (SRtC) as saying that software, composed of "code", should only do one thing, which it is clearly not saying. By that reasoning, SOLID would rule out any software that provides multiple capabilities. Your editor, for example, has multiple capabilities. It can save. It can highlight. It can cut. It can paste.

That's not what TFA says here. It doesn't claim that SOLID says that software as in 'a full program' should do one thing.

It just says that SOLID says that a piece of code (as in a function or a class) should do one thing, which SOLID does say, and which the author of TFA disagrees with.

Perhaps the use of the word "code" (or "non-trivial code") is confusing, but the author doesn't imply the whole program with that, but the same as SOLID does (a unit of code):

"Code should fit in your head at any level of granularity, whether it is at method/function level, class/module level, components made up of classes, or entire distributed applications".


Did we read the same thing? How can an "ETL" that he uses as an example be just a "function or class"? If he meant simply a function, then you'd have 4 classes (ETL, E, T, L) each doing "one thing" and his entire argument would be moot.


That's the point where you need refactoring. The basic problem is that you cannot look into the future and clearly see what exactly you are going to need (if you are lucky enough to implement something along a specification, you should use that knowledge though!).

As long as you manage to keep artificial complicatedness out of your code, you will always have the complexity of the problem mirrored in your code. A common problem of ideas about object-oriented programming like SOLID or Clean Code is that they have a focus on classes. If you keep your classes very simple, you will instead end up with a complex architecture, where you might have zero-responsibility layers or functions that just pass the call on to the layer below.


In my opinion, that's in fact exactly how you should be developing your software.

Code is cheap to write, and mostly debt. A software product - a working system that meets some particular need - is not. The distinction is that building a software product is much more than banging out code; it's the experience of figuring out what exactly that need is (gathering requirements, getting a system out for users to test, getting real-world feedback). Sometimes you capture that in documentation, test cases, ADRs, comments, commit messages, etc. Sometimes it's in your head, which is okay as long as you're still there. (Of course, if it's in your head and you leave, then the next set of programmers will be scared to change the system until they redo the work of figuring out what the software is, despite the code being in front of them.)

If you have that understanding about what the software does, ideally in the form of automated test cases, you can rewrite the code. So you may as well bang out the code in a way that gets you a working system and remains comprehensible. Once you're making enough changes to it that you're worried about it getting incomprehensible, proceed to rewrite it. Probably the world has changed in many ways since you wrote it - maybe you can run the system a lot faster and cheaper with containers in the cloud talking to a SaaS database than with your expensive IBM mainframe talking to DB2. Or perhaps the requirements are changing significantly, and starting with the old code doesn't give you much of an advantage.

Trying to keep a codebase comprehensible over the long term makes it, if you'll excuse the pun, solidify - it becomes increasingly hard to make changes that weren't anticipated by the original design, and it also requires more and more effort to just do anything. You might be able to swap out the relational database for another one, but you probably can't switch to a design where you're using a key-value store with totally different performance characteristics. And even if you do just want to change the database, you have to figure out how ten classes get dependency-injected into each other before you can start coding.

All the time you spent making the code "future-proof" so it remains forever comprehensible could have been spent simply not doing that, delivering business value, and writing new code as the need arises.


Writing complex and convoluted code is a strategy used by some developers who bill by the day or hour, so that they then have to spend extra time to "understand" or "refactor" it. I've seen this so many times. One even got visibly angry when I politely pointed out his solution could be vastly simplified or even skipped, as the client was considering different options.


That's an interesting take. What would you consider strawman arguments in the article?


For example, the open-close principle. The author blames this advice on tooling of the 90s and proposes instead “Change the code to make it do something else”.

This has nothing to do with tooling, but with the fact that pulling the rug out from under an established code base could have very unintended effects, compared to simply adding and extending functionality without touching what is already there.

By doing as the author suggests you’ll end up with either 500 broken tests or 5000 compiler errors in the best case, or in the worst case an effectively instantly legacied code base where you can’t trust anything to do what it says.

I once had to change an entire codebase’s usage of ints to uuids, which took roughly 2 whole days of fixing types and tests, even though logically it was almost equivalent. Imagine changing anything and everything to “make it do something else”.


What's the alternative here? If you had to change a codebase's usage of ints to uuids, should the original author have used dependency inversion and required an IdentifierFactory that was ints at the time so you could just swap out the implementation? And if they did - why wouldn't they have just used UUIDs in the first place? You're betting on the fact that the original author anticipated a particular avenue of further change, but also made the wrong decision for their initial implementation, which seems like the wrong bet. If they made the wrong decision, they almost certainly didn't anticipate the need for another one, either.

And how long would it have taken for the original author to use an IdentifierFactory instead of ints and write meaningful tests for it? Less than two days?


In the uuid case, the person had no choice. Remember, these are principles, not laws, and at some point your system is making concrete choices. Choosing UUIDs isn’t necessarily an OO design problem. He was just highlighting how expensive it can be if you require changes to your fundamental classes to extend or change behavior. In the identifier type case, it’s rare that folks abstract this stuff away. Though: I do know a LOT of systems that use synthetic identifiers for this exact purpose, as larger enterprises tend to deal with many more different identifier types from different integrations, from a DB type that can’t hold new identifiers, or because IDs need to be sharded/distributed, etc. So yeah, it’s a principle, and one should choose if it’s worth the upfront cost for its benefits.

OCP, though, more commonly refers to: 1. building small, universally useful abstractions first, and 2. extending the behavior of that abstraction or system by writing new code rather than changing published code directly.

This is trivial when you have a few patterns under your belt: template factories, builders, strategies, commands. I mean, while it’s not the best idea in most cases, even just inheriting a parent class and giving a new concrete class new behavior is still better than changing something fundamental to the system.
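
A generic sketch of that "extend by adding new code" idea, using the strategy pattern mentioned above (names invented): the published Checkout class stays closed, and new behaviour arrives as a new strategy class.

    from abc import ABC, abstractmethod

    class DiscountStrategy(ABC):
        @abstractmethod
        def apply(self, total): ...

    class NoDiscount(DiscountStrategy):
        def apply(self, total):
            return total

    class Checkout:                          # "published" code, closed for modification
        def __init__(self, strategy: DiscountStrategy):
            self.strategy = strategy

        def total(self, prices):
            return self.strategy.apply(sum(prices))

    class BlackFridayDiscount(DiscountStrategy):   # extension: new code only
        def apply(self, total):
            return total * 0.8

    print(Checkout(BlackFridayDiscount()).total([10, 20]))   # 24.0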

Like has been said 999 times in this thread, software isn’t black and white. You have to make choices about where you go concrete and where you abstract, and gauge that against risk factors. A somewhat complex class you expect to go away in a couple months? Make it a god class that anyone who wants to can scan through. A fundamental class that will be used by hundreds of developers and underpin most operations in a production system where 5 minutes of downtime costs tens of thousands of dollars? It’s worth the upfront cost to build with these standards.


Changing ints to UUIDs is a classic example of the Primitive Obsession smell, and the solution is to wrap primitives in a type representing their semantic meaning, such as “ID,” or “PrimaryKey,” not to use a factory. That way, when you need to change the underlying type from int to UUID, you only need to do it in one place.
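
A minimal sketch of that wrapping (names are hypothetical): when the underlying type changes from int to UUID, only `UserId` changes; callers that already depend on `UserId` are untouched.

    import uuid
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class UserId:
        value: uuid.UUID                    # was: value: int

        @classmethod
        def new(cls):
            return cls(uuid.uuid4())        # was: cls(some_sequence_counter())

    def load_user(user_id: UserId):         # callers only ever see UserId
        print(f"loading {user_id.value}")

    load_user(UserId.new())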


Indeed. Unfortunately, in some languages – like Java or C# – it is harder to do without incurring a significant cost (boxing/unboxing) than in languages that allow type aliases/typedefs.


In theory, yes, but in practice performance is dominated by network and (less often) algorithms. The cost of boxing/unboxing doesn’t even register except in rare cases, which can be specifically coded for.


It has a fair bit to do with tooling. For example, C++ suffers from the fragile base class problem and some changes can cause long compile times. Nowadays, we have tests and deployment pipelines that are explicitly designed to let us make and deploy changes safely.

Honestly, if you cannot change your code, you have a problem.


The OCP is imo poorly named, but it has far bigger implications than the post acknowledges. For one, it implies the concept of abstraction layers. In particular, base libraries should provide abstractions at the level of the library. In this way it's able to achieve being "closed for modification but open for extension".

https://drive.google.com/file/d/0BwhCYaYDn8EgN2M5MTkwM2EtNWF...

Flipping it around, if base libraries were to be always open for modification instead of extension, then instead of writing your feature logic in your own codebase, you might be tempted to submit a pull request to React.js to add your feature logic. That sounds ridiculous but that's the equivalent of what I see a lot of new engineers do when they try to fit the feature logic they need to implement anywhere it makes sense, often in some shared library in the same codebase.


>This has nothing to do with tooling, but the fact that pulling the rug under an established code base could have very unintended effects, compared to simply adding and extending functionality without touching what is already there.

That's still a matter of tooling. With type-checking, static analysis, and test suites, changing code doesn't have "very unintended effects".

Back in the day, without those, things were much more opaque.


Not the person you asked, but I would say the strawman is the expectation that SOLID provides, or should provide, unambiguous guidance.


His entire screed on single responsibility principle spins around on semantics.


Loved it! I agree with most of it as well. I'm so fed up with preparing abstractions for things that will probably never occur (changing the underlying DB, for example) that I changed my coding style a decade ago to focus on producing code that is as simple as possible to read/maintain, with as few abstraction layers as possible.


That’s one of the traps, every programmer will fall into at least once: copying some abstraction because someone else did it. I have not seen support of changing underlying DB as a business requirement yet (well, outside of frameworks and platforms, of course). I have seen many times when developers designed architectures, created APIs or added buttons to user interfaces, because they felt it may be useful in the future, not because someone told them it will happen. That code was very different in quality, sometimes too abstract, sometimes a big multi-screen function doing plenty of magic. All of it had nothing to do with SOLID — it was always a violation of another principle: KISS. Keeping it simple, an engineering variant of Ockham’s razor, does not contradict or replace SOLID, it complements it, defining the scope of application of other principles. If your requirements are complex enough, you may need an abstraction — if you really need it, here are some guidelines. That’s it, now keep it simple.


> I have not seen support of changing underlying DB as a business requirement yet (well, outside of frameworks and platforms, of course).

... and you haven't ever seen a business requirement for what used to be a "software" to become a framework/platform instead? It happens all the time.


I've seen bits of this, and I've seen it happen in code bases that were built as "abstract" so their components could be re-used.

But I've never seen it happen in any way close to resembling what the original architect thought would happen, and as a result, all those abstractions and generic implementations not only added time to the mainline development, but in the end actually got in the way of the abstraction that was needed.


Well, that - "any" software becoming a framework - does happen, but not all the time. Why does this matter?


Yes, "let's abstract the database" is probably one of the most mindlessly applied rituals in enterprise software. And if running on different databases is not a feature of your application, it is a mostly harmful practice.

If you can decide on a language, framework, critical libraries etc. then you should be able to decide on a database. It's probably more important than your application anyway.


Back before devops, containers, Postgres etc. were mainstream (so not more than ~6 years ago in my industry), so many were running Oracle DBs. And everyone shared the same instance, and it wasn't exactly trivial to get a new personal instance up and running (licensing, and it probably required a DBA's help). So then using hsqldb or something else lightweight were golden for local development or integration tests. So abstracting the DB was the default, and absolutely needed.


What platform are you talking about? In the Java world, JDBC existed from the early days and was enough if you stuck to standard SQL; in tests you may have needed only to switch the driver classpath and connection string. ORMs have existed at least since 2003-2004 (early versions of Hibernate).

At the same time, substituting Oracle with a lightweight DB in an environment where full-time DBA was developing a database with loads of stored procedures and DB-specific stuff wasn't something really feasible - no abstraction layer would solve that.


Java. But to be fair to my point, both JDBC and ORMs are also variants of these kind of abstractions. :)

But as you said, some of the problems were when it got really DB-specific. A simple layer that could be swapped locally for a simpler (and less performant, but that didn't matter with little data) variant was nice.


>>using hsqldb or something else lightweight were golden for local development or integration tests. So abstracting the DB was the default, and absolutely needed.

This also improves efficiency in operations, not necessarily development. If you used a library/framework for database access anyway, it's not an extra expense. There's ultimately a portability concern even if "vendoring"; it only imposes a cut-out to permit control of necessary change.

After a few unpleasant experiences I endlessly advocated we should always use an interface to access populated data objects and not interact with the database directly, not even running queries directly but always using at least lightweight IOC. I also advocated for testing where known result sets were fed through mock objects. After all, saved result sets could also be used to test/diagnose the database independently after schema/data changes. My experience predates a lot of ORM and framework responses.

Unfortunately later frameworks (intended to abstract these concerns) became ends in themselves, rather than a means to an end. These were used to satiate "enterprise-y" concerns (sacrifices to the enterprise gods). If you could afford to deploy operational Oracle, you wouldn't necessarily flinch at the cost of the extra (often pointless) layers of abstraction.


Exactly.

DRY and YAGNI are two basic concepts I've always tried to work with. Do I need this piece of code more than once? Should be extracted/created in a separate place. Would I need this in other projects? Library.

YAGNI is most likely directly conflicting with most of what's mentioned, if the developer simply asks themselves: do I really need this? Will this project benefit from having separate layers of abstraction for things that are most likely never going to change?

I always think twice before writing a single line of code, with the main point of focus being if future me (or anyone who reads that piece of code) will be able to understand and change things, if needed.


DRY seems to be one of those principles too many people take literally. Especially among junior devs from my observation (and experience).

Just because there is a block of code that’s being used in two different places doesn’t mean it should always be abstracted out. There’s a subtle yet mindful consideration whether these two consumers are under the same domain. Or if it should exist as a utility function across all domains. And if that’s the case, changing the code to make it generic, simple and without side effects is ideal.

I’ve seen too many of these mindless DRY patterns over time, and they eventually end up with Boolean flags to check which domain it’s being used in as the codebase becomes larger.


DRY should really be DRYUTETNRYDTP - don't repeat yourself unless the effort to not repeat yourself defeats the purpose

I also propose LAWYD - look at what you've done, a mandatory moment of honest reflection after drying out some code where you compare it to the original and decide if you've really improved the situation


I honestly thought that was one of the principles of DRY. Maybe I've been wrong this whole time. Will it be faster, safer, more maintainable if I just leave this code in here twice? At what point does that equation change?


My approximate guideline is "is this copy and that copy of the code the same because they HAVE to be, or are they accidentally the same?"

If they absolutely have to be the same, wrap'em in a function. If they just happen to be the same, leave them as separate copies. If I don't know, make an educated guess.
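
A toy example of that distinction (made-up domain): the VAT rule has to be the same everywhere, so it becomes one function; the two `+ 1`s below are only accidentally alike and stay separate.

    VAT_RATE = 0.20

    def with_vat(net):                      # HAS to be the same everywhere
        return net * (1 + VAT_RATE)

    def invoice_total(lines):
        return with_vat(sum(lines))

    def quote_total(lines):
        return with_vat(sum(lines))

    def next_page(page):                    # accidentally similar to next_attempt;
        return page + 1                     # they change for different reasons,

    def next_attempt(attempt):              # so the duplication stays
        return attempt + 1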


Sure it is. However, teachers/mentors may be just a tad too dogmatic and pupils/juniors may be just a tad too inexperienced to properly grok when to do what.

And it sure is easier to just not argue about it and abstract everything. The real pain only comes later, after all.


Just this week I came across a set of 20+ controls in a form. Every control downloaded some version of a file from "report". Not once was there any shared code behind any of these controls. Because different people over time touched this code, not all of the functions were in the same place. And those that were each had slightly different nuances.

Without DRY, this would be a perfectly acceptable practice. DRY gives me something I have in my head when I see this and refactor into something that is manageable. DRY gives me something I can point to and say "please for the love of god don't perpetuate this folly".
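
A rough sketch of the kind of refactor DRY points at here (the report and format details are invented, with stand-in fetch/render functions): one shared download path that each of the 20 controls calls with its own parameters.

    def fetch_report(report_id, version):       # stand-in for the real fetch call
        return f"{report_id}-{version}".encode()

    def render(data, fmt):                      # stand-in for the real formatting
        return data if fmt == "pdf" else data.upper()

    def download_report(report_id, fmt="pdf", version="latest"):
        # the single place that knows how a report download works
        return render(fetch_report(report_id, version), fmt)

    # each control handler becomes a one-liner:
    summary_pdf = download_report("summary")
    billing_csv = download_report("billing", fmt="csv")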


I think a good rule of thumb is to never prematurely apply DRY in source code BUT always try to aim for it when it comes to data and configuration unless there's a special need.


pylint irritates me a bit for this. I already created an abstraction, and I'm using the abstraction in a few places, but pylint doesn't like that and says it's duplicate code.


Yea, 20+ years on PostgreSQL. Why would I change my DB away from the perfect one?


IME, because the client asked for a different one for whatever reason and is willing to pay the higher development cost.

You can argue about that just the same as when client needs a feature X.

In my case, I was very often getting away with YAGNI or 'how about we implement nightly sync from pg to oracle' but not always.


Self-hosted, low volume or demo installations can benefit from a lightweight database (that is also in-process, so the user does not have to install and maintain it) like SQLite, H2 or Derby.


One never knows what the future may bring.

Long-lived OSS software is a relatively stable bet, but the point still stands.


Maintaining database abstraction layers and the like is a real option used to hedge the risk of the holdup problem. But like any real option, it has a carrying cost. That means it's an economic question, there is no purely technical rule that can give you a robust answer to whether or not to abstract something away.

I feel the industry has a long hangover from the 80s and 90s in terms of the holdup problem. Oracle, basically, created a massive externality of anxiety about vendor lockin that continues to impose drag to this day.


There are complexity limits past which we can't really build or extend applications short of big rewrites. I guess we can describe this in terms of economics but I feel it's not doing the issue justice. And it's often better to spend the complexity budget on domain problems.


Agreed. “Our software currently runs on Oracle but can be switched to MS SQL Server or DB2 with about a day’s work” used to be an admirable and economically valuable statement.

In the majority of cases, I think a simple “runs on Postgres” is even more valuable today (unless your product is a database abstraction layer).


>One never knows what the future may bring.

Exactly, so make the changes when you need them, otherwise you are relying on hitting the abstraction lottery


Decoupling your data model from Postgres because you "might" need to swap databases is a bet I've seen taken many times, but it's never one I've seen pay off.

This is a clear example of where YAGNI applies, I think.

Extra work plus extra boilerplate to maintain. No payoff.

That said, extra code + boilerplate is a great way to treat riskier software.


Also love Pg and using flavor-specific features where they deliver tangible value. That said, if a team often maintains multiple flavors, then an abstraction can improve quality of life.


Reading these comments makes me feel like a huge outlier. Practically every project I ever worked on included AT LEAST one database swap. That includes startups and big tech, for all sorts of different reasons.


I've worked on projects with database swaps, too, but I find it hard to believe that use of abstraction in advance would have helped them. There's a couple of cases.

One is that you're using two SQL databases and you're not using any advanced features. You started dev on MySQL, and then the company says "Thou shalt use Postgres" (or whatever). You don't need anything fancy in your own code to handle this. You're still making SQL queries, you're just swapping out the database engine. Technically, this is an example of dependency inversion (depend on the SQL abstraction), but you also didn't set out to do it - basically any programming language you're likely to use has the common database libraries use a common abstraction for sending queries. And you didn't specifically make sure you were writing generic SQL, you just happened not to need anything.

More commonly, you're switching databases for a specific feature. Maybe you realize PostGIS (or whatever) is going to solve a problem for you very well. But then you're changing how you model data, what your schemata are, and even how your code is architected and accesses things in the database. You're deciding to move certain logic from your code to the database engine - might even decide to move certain logic from the frontend into the backend, or change how request routing works, or something. This is a fantastic reason to move databases, but no amount of abstraction can prepare you for it, because you're fundamentally changing what the abstraction is. And you're deliberately abandoning SOLID because you're picking up a dependency on a concrete database.

But the real case I've seen is where you're switching databases (or data storage layers, more generically) to a different model - MongoDB to not-MongoDB, a C/P database to an A/P one, a relational one to a key-value store, etc. This is the above case but even larger. There is no abstraction you could possibly write that could encompass the old and new cases. It requires rearchitecting how your code works.

And then there are the most boring of cases - the ones where a database swap sounds doable in theory and the code is supposedly using an abstraction layer, but no one has ever verified that the code doesn't make assumptions about what database it's on and the code has gotten too big, so we just get an architectural exemption from "Thou shalt" and we run our own instance of the wrong database, because the overhead of running our own DB costs the business less than getting the swap wrong.

(Some public examples of these sorts of database migrations that come to mind: https://slack.engineering/scaling-datastores-at-slack-with-v... is about how Slack couldn't move away from MySQL and the architectural assumptions they made about it and had to rule out migrations to non-relational databases out of hand, and https://about.gitlab.com/blog/2018/09/12/the-road-to-gitaly-... talks about how GitLab moved from NFS storage to an RPC service, requiring a lot of refactoring of callers.)


I’ve been through plenty of cases where companies successfully swap out databases exactly as you described - a document storage for an Rdbms or vice-versa. The bird social network is an example where the db was so well abstracted, they managed to swap these out with no need to rewrite any application code. So does Facebook, for instance. Slack is a clear example of what the complete lack of forethought on this leads to. (Disclaimer: I’m familiar with all 3, but obviously can’t talk details - there’s plenty of public posts on these cases, though)

FWIW, every single db abstraction I’ve ever witnessed was worth it - if only so that one could run tests in a sqlite and run prod in something else, or as a way to contain vendor lockin in the code (I’ve seen projects successfully migrate from a plsql-heavy system to mysql because the code was well segregated, and I worked at a startup that literally imploded bc the database was metastasized all over the place)

Anyway, as I put it, abstracting data storage is a no-brainer for me, and it saved my skin every single time. I don’t expect to convince anyone here to go do it. :-)


I think that's changed my mind a little bit on this, thanks! I'm mostly surprised they were able to get the abstraction right in advance - did they have an idea of what they might move to? Or did they change the abstraction as they went, but the abstraction layer made it easier to find what parts of the code needed changing, or something?

How do you avoid the issue where SQLite doesn't support all the things your real DB supports? (i.e., why is it worth running tests on SQLite as opposed to a local instance of the same DB software?)

I'll read some of these public posts....


The only case I witnessed where the interface remained untouched, even as we went from an RDBMS backend to a KV store and back again, was Twitter. You can argue the reason is the data model being simple, which makes sense. Everywhere else, it was different degrees of “pain”. I don’t think there’s a silver bullet, and I don’t think avoiding a potential future rewrite at all costs is a worthwhile goal.

WRT sqlite, the obvious advantage is test setup doesn’t require any local infra, and it forces your system to NOT use vendor-specific features. That school of design is essential for places that want to avoid lockin for whatever reason. This isn’t possible in every case - some companies have a great DBA team, for instance, or strong specialization around a specific db (cough Oracle shops cough), so obviously that flexibility wouldn’t be advantageous. As everything else in our industry, it depends :)


>>the db was so well abstracted, they managed to swap these out with no need to rewrite any application code

What were the reasons for the swap and what were the outcomes of it besides that it actually happened? Costs, performance, scalability etc?

I think hardly anyone would argue that those transitions are impossible, because they are indeed possible. The main question is usually if your abstraction can leverage the benefits of underlying implementation or it is just a common denominator. I can imagine a clever technical strategy in a startup, foreseeing future growth and starting with a simple DB before moving to a more complex in maintenance but scalable solution. This may work. Often it doesn't.


Varies from project to project of course, but it’s never “just because”. If you’re swapping dbs without reasonable expectations to improve SOMETHING, it is indeed a waste of time (one that I also fought against in my career many times - been through many, many “let’s adopt MongoDB bc it’s hot” cases)


I'm curious what other people think when they take over your code? We all have a bias to think our way is simple and obvious, yet it rarely works that way in practice.

Abstractions are a way of trying to find a common ground conceptually. Agreed, abstractions should not be multiplied unnecessarily, but in the same way it's easier to learn Newton's Laws than read Kepler's data, a few well chosen abstractions help others organize and understand what's happening in a code base.


> Abstractions are a way of trying to find a common ground conceptually.

What's always amused me, in a tortured sort of way, is that SQL is already the abstraction. Then people go and layer an abstraction on top of it, such as some ORM flavor-of-the-day, which is obviously much less universal than SQL. In "SQL" I'm including vendor extensions too (Oracle, MySQL, etc.). It's easier to read up on vendor extensions than learn yet-another ORM.

Which do you think has had the longer shelf life: MySQL or some random Ruby ORM? If you've been doing MySQL since the '90s then it's largely the same as MySQL of 2021. That Ruby ORM? Probably hasn't seen an update since 2008. I can't even remember the names of all the ORMs I've had to use over the years.


I guess I don't think of an ORM as an abstraction; it's more of a complication. The abstraction would be something like:

  class User {
    // The abstraction: callers ask for a User; the SQL lives in one place.
    static async get(id) {
      // embed your SQL here, e.g. (assuming a node-postgres style client `db`):
      const { rows } = await db.query('SELECT * FROM users WHERE id = $1', [id]);
      return rows[0];
    }
  }


I know of at least one company that's slowly dying because their product is irreparably tied to Oracle.

It is literally impossible to get any new clients to agree to run on Oracle.


The most important take-away from Agile is YAGNI - "you ain't gonna need it."

That's what I mean when I say simple. Don't obscure your code with unfalsifiable assumptions reified, as related in the typical obscure vernacular...

I have had to deal with RDBMS-swapping for fairly large and transactionally intensive applications. I appreciated code that embodied SOLID to the extent that it reduced the amount of code to inspect and improved the quality of sizing up what needed to be done. However, it was much easier to "lift" systems up to the new objective than to "unravel" previous efforts to ensure portability without a defined and testable objective of portability.


Providing you’re correct about the DB choice outlasting the lifetime of the software, there is nothing about that approach that’s incompatible with SOLID.

Ultimately the goal of SOLID is to be able to change any important aspect of the software independently from any other. If the DB is going to outlast the business logic you’re writing, there’s no problem having it depend on the DB concretely.

Separating the what and the how does have the effect of changing how you approach reasoning about a program though, so SOLID is not without penalty.

Edited for clarification.


The most useful part of changing the underlying DB, for me, seems like a way to speed up unit tests. Then again, I'm used to writing Django apps where the domain model is heavily coupled to the database, making it hard to just test simple objects without touching a database.

I wonder if others here have applied some sort of domain driven design (domain models agnostic of database) without going down the whole repository route.


I once removed a graph from an application. The application had all these nice abstractions. The change touched 40+ files. Single Responsibility? Bonkers.


> I'm so fed up with preparing abstractions for things that will probably never occur (changing the underlying DB, for example) [...]

How can you be fed up with something that will never occur?


To me this blog doesn’t really formally/finally refute anything, but is simply saying “don’t over-engineer your solution”.

> “dependency inversion has single-handedly caused billions of dollars in irretrievable sunk cost and waste over the last couple of decades”

Oh please. Is there anything in programming that hasn’t had the “Irreparable harm to humanity” sticker attached to it by now?


I like to imagine a final analysis of all code ever written by humans, after some ai hypermind from the future has digested it, turning out to be 99.999% dependency injection boilerplate


Dependency injection has nothing to do with dependency inversion.


They're different but not completely orthogonal. Dependency injection is often used to achieve dependency inversion.
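
A small sketch of the difference (generic names, not from the thread): the abstract `Notifier` is the inversion (high-level code depends on the abstraction rather than on SMTP), and passing an implementation into the constructor is the injection.

    from abc import ABC, abstractmethod

    class Notifier(ABC):                        # the inverted dependency (abstraction)
        @abstractmethod
        def send(self, message): ...

    class SmtpNotifier(Notifier):               # low-level detail implements it
        def send(self, message):
            print(f"smtp: {message}")           # stand-in for a real SMTP call

    class SignupService:                        # high-level policy
        def __init__(self, notifier: Notifier): # <- dependency injection
            self.notifier = notifier

        def register(self, email):
            self.notifier.send(f"welcome {email}")

    SignupService(SmtpNotifier()).register("a@example.com")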


nothing? okay


So because our industry is so unprofessional that you can literally point to loads of it and say "that has cost humanity billions", this entire argument is stupid and actually there is no problem at all?

Not sure that's the right attitude.


It’s an industry which ha s probably generated trillions in value so it maybe a matter of perspective.


So what would you have done to end up in the alternate universe where it hadn’t cost humanity billions? Rational agents and perfect knowledge does not exist outside of economic theories.


Cloud computing, but only because we haven’t hit the trough of disillusionment yet.


Honest question: if you don't do dependency inversion, or if you don't depend on interfaces/abstractions that can be mocked - how do you unit test your code?

Unit testing is the only reason pretty much all of my code depends on interfaces. Some people seem to consider this a bad thing/over-engineering, but it's how I've seen it done in every place I've worked at.

How do you do it?


Firstly if the dependency isn't doing any io, you can test your code as a whole along with its dependency. No need to mock.

More interesting is if your code relies on the outside world: then, instead of abstracting out the connection with the outside world, abstract out your business logic and test it separately.

So instead of a database repository being injected into your domain services, make your services rely on pure domain objects, which could come from anywhere, be it tests or the database.

Make a thin outer shell which feeds data into your domain logic from the outside world and test that via integration tests if necessary.

I'll admit I don't have the full picture here, but I have used this technique to good effect. The core idea is: don't embed your infrastructure code deep inside your architecture; instead, move it to the very top.
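
A sketch of that shape (invoice domain invented): the pure rule is unit-tested directly with plain data, and the thin shell at the edge only fetches rows and delegates.

    # Pure domain logic: no IO, testable without mocks.
    def overdue(invoices, today):
        return [i for i in invoices if i["due"] < today and not i["paid"]]

    def test_overdue():
        invoices = [{"id": 1, "due": 5, "paid": False},
                    {"id": 2, "due": 9, "paid": False}]
        assert [i["id"] for i in overdue(invoices, today=7)] == [1]

    # Thin shell at the edge: fetch and delegate; cover with a few
    # integration tests instead of mock-heavy unit tests.
    def overdue_from_db(conn, today):           # conn: e.g. a sqlite3 connection
        rows = conn.execute("SELECT id, due, paid FROM invoices").fetchall()
        return overdue([{"id": r[0], "due": r[1], "paid": bool(r[2])} for r in rows],
                       today)

    test_overdue()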


> Firstly if the dependency isn't doing any io, you can test your code as a whole along with its dependency. No need to mock.

This the thing. People have a tendency to overuse mocks. The point of automated testing (whether it's unit tests or something else), is to enable refactoring. That's really the only reason. Code that you don't touch doesn't suddenly change its behaviour one day.

In a previous job the developers started to go crazy with mocking and it reached a kind of singularity where essentially if a function called anything, that thing was mocked. It definitely tests each function in complete isolation, but what's the point? It makes refactoring impossible, which is the entire point of the tests in the first place!

This excellent talk completely changed the way I approached testing. Every developer who writes tests needs to watch this now! https://www.youtube.com/watch?v=EZ05e7EMOLM


I generally agree with you, though I want to comment on this line:

> Code that you don't touch doesn't suddenly change its behaviour one day.

This can and does happen all the time, when the platforms and abstractions your code builds on change underneath you. This is why a compelling environment / dependency management story is so important. "Code rot" is real. =P


I thought about this, but technically that is still the code changing, it just happens to be in your dependencies rather than your codebase. The only reason you really have to change dependency versions is security fixes, and they should be infrequent enough that you could do manual testing. So I don't think it's a compelling reason to write unit tests, although it is certainly an added value.


Some of this is covered in "Functional Core, Imperative Shell", https://www.destroyallsoftware.com/screencasts/catalog/funct...

Haskell programs tend to have this structure because pure functions aren't allowed to call impure functions.


You are doing interface/function level testing and calling it unit testing.

That's what the industry converged on, that a function/method = a unit, but apparently it used to be that a module was meant to be the unit.

It can be that both interfaces and modules can be considered a "bag of functions".

Seems like a lot of confusion about unit testing, mocking and DI stems from this historical shift.

I believe that interface/method level testing is too granular and mostly results in overfitted tests. Testing implementation, not behaviour. Which can be useful for some algorithm package for example, but probably not so much applicable to typical business logic code.

TDD: where did it all go wrong. https://youtu.be/EZ05e7EMOLM


An option in some languages is to simply create an alternate mock/fake version of the actual class or function that is depended upon and monkey-patch the code under test to use it for the duration of the test. This is commonly done in Python ( see `unittest.mock.patch`) and JavaScript (eg. with Jest mocks) for example.

The end result is the same as if you'd created an interface or abstraction with two implementations (real and fake/mock), but you skip the part where a separate interface is defined.

The upside to this is that the code is more straightforward to follow. The downside is that the code is exposed/potentially coupled to the full API of the real implementation, as opposed to some interface exposing only what is needed.
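
For instance, in Python it might look something like this (a throwaway sketch, every name is made up):

    from unittest import mock

    # hypothetical "real" dependency - imagine this hits a payment provider
    def charge_card(amount):
        raise RuntimeError("don't call the real thing in tests")

    # code under test calls the dependency directly - no interface in sight
    def place_order(amount):
        return "paid" if charge_card(amount) else "failed"

    # the test patches the module attribute, only for the duration of the block
    def test_place_order():
        with mock.patch(f"{__name__}.charge_card", return_value=True) as fake:
            assert place_order(10) == "paid"
            fake.assert_called_once_with(10)

    test_place_order()
Same effect as an injected interface with a fake implementation, minus the interface.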


Move your logic to pure functions. Use classes for plumbing. Now you can unit test the logic, and integration test the plumbing. You can easily test with a real database using docker for example. Note that you'll still use DI in this case, but you'll have far fewer classes and therefore fewer dependencies as well.
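
A tiny sketch of that split in Python, with an in-memory sqlite standing in for the "real database in docker" part (names invented):

    import sqlite3

    # logic: a pure function, unit-tested with plain data
    def overdue(invoices, today):
        return [inv for inv in invoices if inv["due"] < today]

    # plumbing: a thin class that only moves data in and out of the database
    class InvoiceRepo:
        def __init__(self, conn):
            self.conn = conn

        def all(self):
            rows = self.conn.execute("SELECT id, due FROM invoices")
            return [{"id": r[0], "due": r[1]} for r in rows]

    # unit test the logic with no database at all
    assert overdue([{"id": 1, "due": "2021-01-01"}], "2021-03-21") == [{"id": 1, "due": "2021-01-01"}]

    # integration test the plumbing against a real (if in-memory) database
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE invoices (id INTEGER, due TEXT)")
    conn.execute("INSERT INTO invoices VALUES (1, '2021-01-01')")
    assert InvoiceRepo(conn).all() == [{"id": 1, "due": "2021-01-01"}]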


'How do you unit test your code without mocks?'

I strongly recommend reading "Unit Testing: Principles, Practices, and Patterns" by Vladimir Khorikov.

That's going to answer your question and will unveil a whole world of testing without mocking (not that it's better or worse than with mocks). It might be a single best book about unit testing in general.


The good idea of unit testing is repeatable, reliable tests, ideally those that can be run as part of CI before every merge and also on your end-user machine. "Unit" testing imposes an architectural demand on how you do this.

What you actually want is fast and self-contained integration tests. Have an API endpoint that returns some filtered data from the DB? Spawn a DB server locally, load some fixtures into it (e.g., phrased in the form of a migration), start up your application server, and curl it and see if the data is filtered. Then shut it down. In most stacks you can do all of this in a fraction of a second.

The reason most shops love unit tests is that it saves them from having to figure out how to run those integration tests. It's easier from a Conway's Law perspective to use your language's built-in testing facilities to test the function you're writing, let the ops team figure out how one starts up a database server, and let the frontend team figure out how one makes HTTP requests to your application. But you will deliver better software if you spend a bit of time figuring out all those things and stick them in a tiny shell script.
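
To make that concrete, here is roughly the whole loop using nothing but the Python standard library - sqlite and http.server standing in for the real database and app server, and every name invented for illustration:

    import json, sqlite3, threading, urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # a toy "application": one endpoint that returns filtered rows from the DB
    db = sqlite3.connect(":memory:", check_same_thread=False)
    db.execute("CREATE TABLE users (name TEXT, active INTEGER)")
    db.executemany("INSERT INTO users VALUES (?, ?)", [("ann", 1), ("bob", 0)])

    class App(BaseHTTPRequestHandler):
        def do_GET(self):
            rows = [r[0] for r in db.execute("SELECT name FROM users WHERE active = 1")]
            body = json.dumps(rows).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    # the "integration test": start the server, hit the endpoint, check the filtering
    server = HTTPServer(("127.0.0.1", 0), App)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]
    assert json.loads(urllib.request.urlopen(f"http://127.0.0.1:{port}/users").read()) == ["ann"]
    server.shutdown()
A real project would swap sqlite for a throwaway Postgres and a shell script for the wiring, but the shape of the test is the same.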


You can separate your code into “logic” which has no external dependencies (just logic, eg a domain model) and “infrastructure” which does. Infrastructure still requires DI but Logic doesn’t.

Some programs are nothing but Infrastructure (DB backed microservices) but some have complex Logic (complex problem domains). In the latter case the most important parts of the system can be tested without DI.

You still need DI but not pervasively.


Most of the time I've seen dependency inversion "so we can test", the code is littered with hundreds of single-implementation interfaces and no mocks, or stub mocks that are written just to pass the tests. It doesn't actually test anything.

In the case where there are mocks, the unit tests are almost always literally testing "can my programming language call a function and return a result" or "can we store data in a database if the database works?" They are functional tests disguised as unit tests, and are testing only non-production fake functionality.

Write functional code instead. Liberally use the compiler and the type system to make mistakes impossible instead of unit testing.

Unit tests are for when you can't express something in a way where the compiler and type system will save you from errors.

Everything else is a functional or integration test and should be testing the production system, not a mockup of the production system that works differently.

Basically, write Haskell programs in whatever language you're working in. Use unit tests as a backstop for when the type system isn't as good as Haskell's.
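
A small illustration of "lean on the types, unit test the rest" in Python, assuming a type checker like mypy is in the loop (the Money example is made up):

    from dataclasses import dataclass
    from enum import Enum

    class Currency(Enum):
        USD = "USD"
        EUR = "EUR"

    @dataclass(frozen=True)
    class Money:
        amount: int          # minor units, so no float rounding to worry about
        currency: Currency

    def add(a: Money, b: Money) -> Money:
        # the type checker already rules out adding Money to a bare int or str;
        # mixing currencies is the one mistake left for a runtime check and a unit test
        if a.currency is not b.currency:
            raise ValueError("currency mismatch")
        return Money(a.amount + b.amount, a.currency)

    assert add(Money(100, Currency.USD), Money(250, Currency.USD)) == Money(350, Currency.USD)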


Mark Seemann's blog I've been reading for the past couple years answers this question. In particular, his posts on Hexagonal Architecture[0] and Dependency Rejection[1], but there are many more articles that cover this and I highly endorse reading.

To summarize, you want to set up your code into two distinct categories, data and pure functions. Your data is at the top level of your code, so if you need external data that needs to be at the top level of your module. You then feed that data through a pipeline of pure functions that return with some sort of answer back to your top level module that can then pass into some other function(or through a quick lambda) that does the actual transformation at the top level. The key is that a function can either be a pure function or it can be an impure function. Never both.

As an example, let's say you've got code that has to read from a couple of text files. Currently your code does this through a couple of nested loops, with the loops passing in what folder each file is in and what its file name is, and your code is filtering out some files that you don't want to read.

To re-envision this code, you would create a pure function that reports back to the top of your module which files to read. Then that top module would read those files and pass them to some other function that does transformations on the result.

You now don't have to create mocks; just create data, pass that into your pipeline, and then check if it gets the expected result. One of the interesting side effects of doing this is that at a certain point you stop unit testing for correctness (as the code is so easy to reason about written in this style), and start unit testing primarily for documentation.
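
Something like this, to sketch the file example in Python (the filtering rule and the names are invented):

    import os

    # pure: decide which files to read, given plain data about what exists
    def files_to_read(paths):
        return [p for p in paths if p.endswith(".txt") and "skip" not in p]

    # pure: transform the contents once they have been read
    def word_count(contents):
        return sum(len(text.split()) for text in contents)

    # impure top level: the only place that touches the file system
    def run(folder):
        paths = [os.path.join(folder, name) for name in os.listdir(folder)]
        contents = [open(p).read() for p in files_to_read(paths)]
        return word_count(contents)

    # the tests feed in plain data - no mocks, no temp directories
    assert files_to_read(["a.txt", "notes_skip.txt", "b.csv"]) == ["a.txt"]
    assert word_count(["one two", "three"]) == 3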

[0]: https://blog.ploeh.dk/2013/12/03/layers-onions-ports-adapter...

[1]: https://blog.ploeh.dk/2017/02/02/dependency-rejection/


You pick a language that doesn’t require you to bend your architecture around testing.

This is a classic example of software-pattern-being-language-defect.

It is trivial to mock out upstream or downstream dependencies in Python, without any changes to the code under test.


People have already pointed out using pure functions for the heavy lifting and then making the IO as simple as possible, but another issue is tooling. If you have something like Standard ML's module system or COM style components, you define a functor/interface/contract that your code depends on, and then implement a test form and a production form.

This is basically what dependency injection is doing, but working around the fact that modular programming didn't really catch on.


I’ve generally understood the criticism of DI to be levied at codebases where it was applied to literally every class. I’m not the author, but I don’t imagine that relying on interfaces to abstract between layers of an application, or to avoid a concrete dependency on something like an external API, is what he was arguing against.


There are two ways of abstracting out an external API. One is via an interface with a set of methods; this is well understood. But there is another option as well: focus on the core data types your application needs. Create those types and then revolve your whole application logic around them.

Finally, create decoders that instantiate those types from different sources.
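
A minimal Python sketch of what I mean (the Customer type and the decoders are made up):

    from dataclasses import dataclass
    import json

    # the core type the rest of the application revolves around
    @dataclass
    class Customer:
        id: int
        email: str

    # decoders: each source of data gets its own way of producing the core type
    def customer_from_json(raw: str) -> Customer:
        data = json.loads(raw)
        return Customer(id=data["id"], email=data["email"])

    def customer_from_db_row(row) -> Customer:
        return Customer(id=row[0], email=row[1])

    # application logic only ever sees Customer, never the external API or the DB
    def is_internal(customer: Customer) -> bool:
        return customer.email.endswith("@example.com")

    assert is_internal(customer_from_json('{"id": 1, "email": "a@example.com"}'))
    assert not is_internal(customer_from_db_row((2, "b@gmail.com")))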


This sounds interesting. Can you link to any examples?


Go through

https://fsharpforfunandprofit.com/fppatterns/

And

https://fsharpforfunandprofit.com/posts/13-ways-of-looking-a...

I'm still learning how to apply these ideas in my own work (which doesn't use F# or functional programming), but it's a rich source of ideas


That was one of the eyebrow-raising things the author touched on, “automated testing theatre”. Really? Unit testing is pointless...is that now a thing? Is there another industry that is constantly changing its mind or debating every aspect of itself quite like the software development industry?


The author is the creator of BDD. I suspect you’re misinterpreting his words.

You can complain about “automated testing theatre” without being against automated testing.


Thanks. I’ll read some of his stuff on the subject.


I wish to know this as well. And what about functional programming languages?!


> Honest question: if you don't do dependency inversion, or if you don't depend on interfaces/abstractions that can't be mocked - how do you unit test your code?

The author isn't actually demanding you change your tests (unless they are unnecessarily complicated).

> Unit testing is the only reason pretty much all of my code depends on interfaces. Some people seem to consider this a bad thing/over-engineering, but it's how I've seen it done in every place I've worked at.

Unit tests require 'seams' to divide the tested code into units, but those seams don't have to be interfaces. Some languages don't have interfaces at all - Ruby, say - and they unit test just fine. The pattern of every Foo having a corresponding IFoo is a bad thing - you should replace IFoo with smaller role interfaces. Having an IFoo when it isn't necessary to test Foo and there's only one implementation is indeed over-engineering.
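
To make "role interfaces" concrete, in Python they can even stay structural, e.g. (hypothetical names, with Protocol used purely as documentation):

    from typing import Protocol

    # a small role interface: only what this consumer needs, not everything Mailer can do
    class SendsEmail(Protocol):
        def send(self, to: str, body: str) -> None: ...

    class Notifier:
        def __init__(self, mailer: SendsEmail):
            self.mailer = mailer

        def welcome(self, address: str) -> None:
            self.mailer.send(address, "Welcome!")

    # the test double only has to fill that one role - no inheritance required
    class FakeMailer:
        def __init__(self):
            self.sent = []

        def send(self, to, body):
            self.sent.append((to, body))

    mailer = FakeMailer()
    Notifier(mailer).welcome("a@example.com")
    assert mailer.sent == [("a@example.com", "Welcome!")]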

> I wish to know this as well. And what about functional programming languages?!

Unit testing's much the same in functional languages, though it tends to be easier because pure functions are easier to test. But how code gets its dependencies tends to differ a bit. It may help to think of dependency injection as parameterisation. Or, perhaps, parameterisation where you don't use the parameter immediately.

Suppose I want to test a function foo that depends on another function bar:

    fun foo(): Int {
        return bar() + 1
    }
Well with objects and interfaces, I can do this:

    interface IBar {
        fun bar(): Int
    }

    class Bar : IBar {
        override fun bar(): Int { return 2 }
    }

    class Foo(val bar: IBar) {
        fun foo(): Int {
            return bar.bar() + 1
        }
    }
And then I can make a test double for Bar in my test, and when I invoke foo, then it will call the bar of my test double.

Now, of course, in this example, I could test with the real Bar and I don't need a test double. That's partly because the example is simple but also because foo and bar are pure functions. But let's ignore that and see how we can do the same in a functional language.

    fun foo(n: Int): Int {
        return bar() + n
    }
The question is, how can we control the value of bar in our tests?

The simplest answer is that, since bar is varying (between test and prod), then bar is a parameter:

    fun foo(bar: () -> Int, n: Int): Int {
        return bar() + n
    }
Now the reason we don't do this in OOP languages is that we won't have bar at the call site. That is the production code calls foo like this:

    foo(n)
and not like this:

    foo(myTestBar, n)
That's why our test code looks like this:

    val myFoo = Foo(myTestBar)
    assert(myFoo.foo(3)).isEqualTo(something)
though, if foo takes its dependency as a parameter, the test could just look like this:

    assert(foo(myTestBar, 3)).isEqualTo(something)
But the prod code won't look like that, it only passes n. So we need to pass bar before that. That's why we pass bar to the constructor in the OOP version.

OOP languages generally expect you to pass all the parameters to a function when you call it but some functional languages don't require this. That is, you can pass bar on its own - a mechanism known as 'partial application'.

You'd use it like this:

    // Do this when 'wiring up' the application
    val myFoo = foo(myTestBar)

    // now I only need to pass n
    myFoo(n)
If your language doesn't provide partial application, then you can do the same with a function that captures the dependency:

    // Do this when 'wiring up' the application
    val myFoo = {n-> foo(myTestBar, n)}
    
    // now I only need to pass n
    myFoo(n)
Now, notice that, since bar is just a function (of type ()->Int), I didn't need to create a separate interface IBar. For this purpose, functions "just work". They are equivalent to interfaces containing a single function (Java's SAM recognises this, if that's familiar to you).

Notice also that an interface of a single function is as small as it can be. Often, we'll see a class Bar with, say, ten methods, and a corresponding IBar also with ten methods.

SOLID's Interface Segregation Principle says to prefer small 'role' interfaces over obese interfaces such as the ten-method IBar. And, of course, if you're using functions, that happens naturally.

Hope this helps.


It does, thank you. I learnt a bit about partial application when I dabbled with Haskell, though I never really got close to the Monad stuff (IO, I guess?!). Now I'm trying to learn more about Rust, and I've also identified references to "Working Effectively with Legacy Code".

My point being: do you have more references for studying testing?! As a Node.js and Python developer today (with almost no tests), it's really hard to work with this legacy code :|


Honestly, I think "Working Effectively with Legacy Code" is the main work in this field.

If you can pair with someone who habitually works TDD, do so. Sounds like getting experience in a team with effective modern practices (see "Accelerate" by Forsgren et al) could change your life for the better.


This was one of the most important lessons in my career.

If you want to test your code thoroughly you need these techniques.


Whereas one of the most important lessons of my career is that you don’t have to unit test everything.


The natural segue for this sort of contrarian “I don’t need your patterns” stuff is logically “I don’t need unit tests either”, soooo pretty sure the answer there is “you don’t need tests”


Well, he invented BDD, which very much builds on TDD, so, no, that's not the direction he's going.


Wow that makes it even worse. I had no idea, apologies.


1. Watching the tech community over a really long time, please guys, don't swing violently from "A is all good!" to "A is all bad!" It was good for something, else so many people wouldn't have successfully used it for so long. Work more on discrimination functions and less on hyperbole please. Future generations will thank you for it.

2. "...coupled with the way most developers conflate subtypes with subclasses..." Speaking as somebody who both likes SOLID and could write this essay/present the deck, I think there's a lot of confusion to go around. There are a lot of guys coding classes using rules better suited for types. There are a lot of guys applying OO paradigms to functional code and vice-versa. In general, whenever we swing from one side to the other on topics, it's a matter of definitions. There is no such thing as "code". There's "thinking/reasoning about code" and there's coding. You can't take the human element out and reason abstractly about the end product. Whatever the end product is, it's a result of one/many humans pounding away on keyboards to get there.

3. My opinion, for what it's worth: as OO took off, we had to come up with heuristics as to how to think about grouping code into classes, and do it in such a way that others might reasonably get to the same place ... or at least be able to look at your code and reason about how or why you did it. That's SOLID and the rest of it. Now we're seeing the revenge of the FP guys, and stuff like SOLID looks completely whack to them, as it should. It's a different way of thinking about and solving problems.

ADD: Trying to figure out who's right and who's wrong is a (philosophical) nonsense question. It's like asking "which smell is plaid?" Whatever answer you get is neither going to provide you with any information nor help you do anything useful in the future. (Shameless plug: Just submitted an essay I wrote last week that directly addresses reasoning about types in a large app)


> don't swing violently from "A is all good!" to "A is all bad!"

Indeed. This clickbaity style of laying out arguments is not terribly constructive. Software is not black/white. It's entirely grey. And there's a lot of room for contextual nuance everywhere.

Principles like SOLID (and DRY, and YAGNI, etc) are principles. They are not laws. Principles are guidelines which can help you make solid (heh heh) decisions. They are subject to context and judgement.

If good software design were as easy as memorizing a couple of acronyms, we'd all be out of a job. But it's not. It takes practice and experience. Writers and academics can make things easier by presenting accumulated experience in principles and guidelines, but there are no silver bullets. It's unfair and pointless to expect SOLID (or anything else) to apply in any and all cases.


> It's unfair and pointless to expect SOLID (or anything else) to apply in any and all cases.

I think that's a big part of the problem, and how we end up with articles like this. A lot of developers do expect SOLID to apply to every case, and I've seen fine code get rejected in reviews because it wasn't SOLID enough.


>>A lot of developers do expect SOLID to apply to every case, and I've seen fine code get rejected in reviews because it wasn't SOLID enough.

Prescriptive principles without informed discretion. That is the problem, and it's not just software.

"A foolish consistency is the hobgoblin of little minds."

It's terrifying how easily a reasoned and seemingly sensible polemic may be twisted into a foolish oppression, even by otherwise rational folk...


> Software is not black/white. It's entirely grey

Entirely? There aren't parts that are black/white? That seems a bit black/white.


Ah, but that comment isn't software, is it?

[0]: https://esolangs.org/wiki/English


Let's settle on it being a gradient from black to white.


Well, the 0s are black and the 1s are white, except when it is the other way around.


Touché. "Only the Sith deal in absolutes" and all that.


Which, in using the word "only", is itself an absolute. I don't know if it's a winnable game.


A thousand times this!

Principles, heuristics, "best practices," and just generally good ideas are not absolute truth. SOLID is like hand washing. Please do it! Unless you have a good reason not to.

The root of the "disprove a heuristic by a single counterexample" problem is a misunderstanding of logic. A heuristic is not a statement that universally all hands must always be washed. It is a milder claim that generally handwashing has proved useful via inductive means, so you should probably wash your hands if you want to minimize your risk.

Any expert in a given field should know times when not washing hands has been justified. But by the same token, those people know that they should still recommend hand washing to the general public because they won't know when it's not justified.

Wash your hands, please.


"It was good for something"

The post does seek out where the SOLID principles came from. And it's not really debunking them; just saying they're not absolutes. Which, yes, the title is click-baity, but I've certainly found people who treated them as absolutes, or tried to talk about code from a SOLID perspective, and I've certainly never found that useful.

In fact, I've not found -any- "best practice" to ever be absolute in a generalizable sense, and it's never been useful rhetoric to bring one up in a design discussion because of that. Indeed, they sometimes run counter to each other. "DRY would say we should combine these together" - "Yeah, but Single Responsibility; while the code is mostly the same, they fundamentally are dealing with different things in different contexts".

Learn the heuristics, then deal with the subjective realities each project brings; anyone who tries to treat a given codebase as having an objectively correct approach, rather than a nuanced and subjective series of tradeoffs, is not someone worth talking or listening to.


> "DRY would say we should combine these together" - "Yeah, but Single Responsibility; while the code is mostly the same, they fundamentally are dealing with different things in different contexts"

As originally coined, DRY speaks of ensuring every piece of knowledge is encoded once. If pieces of code are "dealing with different things" then those are two pieces of knowledge, and DRY (per that formulation) does not recommend combining them.

I agree that there is a prevalent notion of DRY that is more syntactic, but I find that version substantially less useful and so I try (as here) to push back on it. Rather than improving code, it's compressing it; I've joked that we should call it "Huffman coding" when someone tries to collapse unrelated things in ways that will be unmaintainable.

Note that it's not just that syntactic DRY sometimes goes too far - it also misses opportunities where the original DRY would recommend finding ways to collapse things: if I'm saying "there's a button here" in my HTML and in my JS and in my CSS, then I'm saying the same thing in three places (even though they look nothing alike) and maybe I should find a way to consolidate.

There are, of course, still tradeoffs - maybe the technology to unify the description isn't available, maybe deepening my tech stack makes things less inspectable, &c.


I posted to a sibling comment to yours, but wanted to say here too - at that point it ceases to be a useful statement to ever bring up, because I've never seen a discussion where everyone agreed things were the same 'piece of knowledge', and one side was saying it should be repeated. When I've heard "DRY" trotted out, it's -always- been in a situation where the other side was trying to claim/explain that they were different pieces of knowledge. Hence my statement - it's worth understanding the meaning of the principle, internalizing the lesson, as it were, but then the formulation ceases to be useful.


Interesting. Our experiences differ dramatically.

In my experience, misguided "could this be DRYer" is usually motivated by superficial similarity, and a focus on knowledge is clarifying. (Although as I mentioned, "yes, it's repeated, but given tradeoffs that's the best in this case at least for the moment" is still possible.)


Oh, no, I'm agreeing with - throwing out the statement, and instead focusing on whether or not it's a shared context, is everything. Once you understand it's the same context, you -know- what to do; you don't need to be reminded that DRY is a virtue


>>I've joked that we should call it "Huffman coding" when someone tries to collapse unrelated things in ways that will be unmaintainable.

heheh I've made that joke too, also "let's un-complicate this into a pointless complexity," when it goes over their head.


I understand that this is almost nitpicking (because the DRY example is not the point of your comment), but your DRY example is a really bad example of this; rather, it's a very good example of the lack of knowledge within the software community. According to Wikipedia, DRY means "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system" [1], NOT to deduplicate all code that looks the same. It's actually more or less exactly the same as the Single Responsibility Principle.

PS. Interviewing senior devs, team leads and tech leads atm. And so far none (!) have been able to properly formulate this (even after I'm hinting that it's not only about code deduplication) and 75-90% believe it's all about code deduplication. Imo quite scary, and tells you a fair bit about the shape of the software dev industry...

[1] https://en.m.wikipedia.org/wiki/Don%27t_repeat_yourself


>>none (!) have been able to properly formulate this (even after I'm hinting that it's not only about code deduplication) and 75-90% believe it's all about code deduplication

Well, duplicate code is the typical manifestation, as the sibling comment [1] relates:

>>> if I'm saying "there's a button here" in my HTML and in my JS and in my CSS, then I'm saying the same thing in three places (even though they look nothing alike) and maybe I should find a way to consolidate.

"and maybe" it's just otherwise hard to get the point across, as you've discovered. Isomorphisms are easier to distinguish (and validate!) versus homeomorphisms, but unnecessary pedantry usually results in MEGO. We shouldn't expect senior developers to automatically embody the virtues of mathematicians or philosopher-kings...

yeah, I am nitpicking worse, but after 40+ years in the industry and suffering through tens of thousands of marginally relevant distinctions in "gotcha" interview questions, I am without shame even though my head is nodding in acknowledgment of your points...

[1] oops, I meant https://news.ycombinator.com/item?id=26532424


I'm not really following you, but I'm intrigued. A few questions: your quote of the parent doesn't seem correct? Where did you find that quote? MEGO?

This is imo not a gotcha interview question. It's A) intended as a way for me to check whether the person understands when to deduplicate code or not (I don't care if they know what DRY is, and I try to help them without giving away the core part of the answer, give examples, etc) - I believe this is one of the fundamental parts of writing maintainable code - and B) a check on whether they can formulate this so that others can learn from them. Both are critical skills for the role.

In my experience, by applying DRY wrongly you're creating a really dangerous code base (maintainability wise). It's probably even worse to deduplicate code wrongly than not deduplicating at all.


>this is one of the fundamental parts of writing maintainable code

>It's probably even worse to deduplicate code wrongly than not deduplicating at all.

I strongly agree.

>A few questions: your quote of the parent doesn't seem correct? Where did you find that quote? MEGO?

pardon me, I meant a sibling [1], not the parent comment. Otherwise I'm just relating my experience. MEGO is an acronym for "my eyes glaze over", I see searching on it leads to a LEGO clone, thanks Google. Trying to explain why a homeomorphism is potentially a DRY violation is the kind of explanation that leads to MEGO.

I don't like those "hinting" questions any more, in retrospect it seems more about exploiting assumptions of superiority at being obscure rather than a valid assessment of suitability. Also seems to waste precious time (interviews cost time, more valuable than money). Better to be direct and practical, principally because misunderstanding is more likely the default state of human transactions. It is probably better to first settle on a mutual understanding of some expected conceptual requirement (e.g. understanding of DRY), then produce some specific examples for the interviewee to judge and interpret. Since these are mostly heuristics, judgement with respect to application of heuristics is the key facility I would attempt to elicit, InMyHumbleOpinion (no more TLA, CamelCase from now on!).

[1] https://news.ycombinator.com/item?id=26532424


Yes, but at that point it stops being a heuristic and instead is a tautology. "Don't repeat the things that shouldn't be repeated". Or even what you said, which, again, is a nice statement, but who decides what 'a piece of knowledge' is? I.e., when is it the same bit of knowledge, vs when is it different? That's often the heart of it; I've had those debates (and in fact, was referencing them in my parent comment), where someone feels these ~things~ are basically the same, and so should be treated mostly the same, with shared code, and where someone else feels that, no, the differences are sufficient that they should be treated differently. And that's a reasonable discussion to have. But it's one that trotting out "Don't Repeat Yourself!" or SOLID or etc adds -nothing- to; the principles themselves clash, and ignore the core difference you're trying to work out.

In short, the reasons for the principle matter, but if you know to look for and understand the reasons, the principles themselves are obvious and do not serve as useful heuristics.


What is a piece of knowledge, and who decides: a piece of knowledge in the domain. Of course there can be discussions around that as well, but it should clear up the most obvious mistakes?

Principles and concepts: the reason for naming things is to be able to communicate efficiently, make them memorable, etc? For example when communicating in an interview? Why do we name things generally? Empire State Building? Or just the building on Xth street/avenue in NYC. Then of course, if everyone is mistaking the Empire State Building for the Statue of Liberty, then either the naming was off, or the teaching of the name....


Here's a very specific example I had -

A senior dev was creating an admin tool that took 7 or so different things, and treated them the same. They were very similar in how they operated, but there was some complexity.

Cue two months of MTTF never changing; squash a bug, introduce a new one.

I, as a junior, opted to rewrite everything. Got manager buy in. Did so, largely just separating these areas, treating them as unique things. Voila. MTTF started dropping.

Everyone agrees if it's the same thing, don't repeat yourself; no one thinks repeating themselves is a virtue. So once you understand that it's worth trying to find the things that are the same bits of context, knowledge, etc., as part of a basic understanding of the domain, you understand the reasons for DRY, and the phrase ceases to ever need to be uttered (and in fact becomes counterproductive, because it tries to say "best practice would be to not repeat this!" without actually addressing the real issue at stake: are these the same thing?)


Well code is data, so it's understandable that things get squishy inside people's heads.


+1 on the hyperboles. You see so many articles with titles like “Never use a singleton unless you want to lose your marriage like me”


In politics and in tech consulting there is a lot of money and fame to be made by going to the extremes and not allowing the middle ground. I just wish people wouldn't constantly fall for this, be it in politics or in tech.

Wait another 10-20 years and FP will suffer the same fate.


Agree. Programming is just too vast a field to have universal principles. It is also changing too fast for most principles to remain relevant.

We just like mental shortcuts. We ask "should we use SOLID/whatever?", instead of asking why SOLID/whatever was used and whether that "why" applies to us.


> It was good for something, else so many people wouldn't have successfully used it for so long.

I don't know - some ideas just turn out to be completely mistaken and any success while using them to be actually down to something else entirely.


Uncle Bob has actually written a blog post about Dan's presentation: http://blog.cleancoder.com/uncle-bob/2020/10/18/Solid-Releva...


The article mentions Bob's post, and criticizes him for reacting to the slides out of context, that is, without seeing the pub talk that the slides were from, or contacting Dan to understand what the slides were about.


There's a big push across all of Dev Twitter right now to do away with SOLID. To say I think it's misguided is an understatement. There's a similar push underway to do away with unit tests. The direction of our industry right now is very concerning to me. And honestly, this may be an unpopular opinion, but I think a lot of it is driven by people who disagree so vehemently with Bob Martin's politics that they overcorrect and start throwing out his good work too.


I think the push against SOLID is fine. I've never really seen the 'single-responsibility' part ever really followed or used in a way that made sense.

I haven't really seen a strong push against unit tests?


Booking.com famously doesn’t write many tests at all. Something something move fäst. Well, other than the A/B kind. But you know what I mean. I also recall a recent Stack Overflow blog post mentioning that they don’t have many either.

Regarding a push against unit tests, the Frontend world, for what it's worth, has a rising school of thought that favours integration tests based around what the user sees and interacts with.


Booking.com is one of the most horrifically unreliable, buggy pieces of garbage on the internet, so this doesn't surprise me.


You’ve never seen SRP in practice? This is honestly concerning.

You’ve never seen like a User class that only encapsulates fields and methods relating to a User abstraction?


I meant more that, for non-trivial classes, when people are deciding when to break up a class, the "single responsibility" part is too loosely defined, to the point where I've never seen people actually use it as a metric. I agree classes can grow too large; the hard part is what rubric is actually used for delineating that, and just saying "single responsibility" by itself really hasn't been useful.


There has been plenty said about the topic. Have you read Clean Code? The Wikipedia page? The Pragmatic Programmer? The depth on the subject has been expanded far more than just “single responsibility”.

Here is Bob responding to OP's link: http://blog.cleancoder.com/uncle-bob/2020/10/18/Solid-Releva...

Stating:

“SRP) The Single Responsibility Principle.

Gather together the things that change for the same reasons. Separate things that change for different reasons.”

That’s a pretty clear delineation IMO.


@drooby yes, we've all had to learn about it in class. "Single responsibility" has a definition, but I've never seen a person have an easy time defining the "single responsibility" part that well IRL when it comes to making/changing a class. Ironically, I think Bob's later clarification that the single responsibility refers to where a person/department would want change is much more usable: https://blog.cleancoder.com/uncle-bob/2014/05/08/SingleRepon....

The "want for change" part is a hard thing to define. Personally, I much prefer "separation of concerns" when deciding what to place where. I guess you could argue it's around the same thing in some sense shrug.


>There's a similar push underway to do away with unit tests.

I agree with that push, as long as there's some other way of validating requirements. In my team, that's done with integration tests and our core principle is: code that is integration tested does not require unit testing. That principle surprisingly covers a good 70-75% of the codebase, leaving us with a few core unit tests for the secret sauce of our product.


He says the principles are wrong and says instead to just write simple code, but doesn’t offer suggestions for how to keep it simple. As if simple code is easy to write. There’s nothing more simple than a giant list of global variables. SOLID is an attempt to keep the code as simple as possible as it grows in size and complexity. It’s not perfect but anything is lousy if you misuse or overdo it


He writes at the end that there's a follow up post coming with exactly what you ask for.


Many years ago I read an article about Boeing's chief engineer - he apparently had the "whole aircraft in his head" - not as a schematic but as a series of deeply understood components that fitted together (from Bernoulli's principle to hydraulic pressure equations and the weight of fuel distribution as tanks empty. The whole enchilada).

There is nothing particularly magical - one assumes that the top 10% of any MIT graduating class could be taught to do the same over a decade or less.

But I am guessing that it needed an organisational effort to ensure the bits added never got beyond what one person could hold, and that he was comfortable saying "stop".

Every new or refurbished railway bridge needs to be signed off by a railway engineer - who can and should refuse if it is sub-standard. Keeping our software under the same degree of control will need similar levels of handing the keys of the kingdom over to a small number of dev leads. And ensuring they do not encounter unmanageable conflicts of interest.


This is a ranty and poor article, and I will continue to use SOLID. As with everything, just mindlessly applying the principles without considering whether the resulting code is simpler/better or more complicated/worse is the wrong way to do it.


The author inverted the way we rationally understand reality. Instead of observing and concluding, he concluded and then retro-fitted observations - he actually admitted this by stating his motivation was to make an anti-SOLID presentation for the sake of it, I'm not making this up.

Thus it is unfair for us to try and contradict his arguments; it's like talking to a conspiracy theorist: they will glue random facts together trying to support a pre-decided conclusion.

Having said that, this post is very shallow. Even if there is merit to criticising the SOLID principles (and I would be very interested in such a criticism), you can't just do it in a couple of sentences.


Your reply makes it sound like you missed the tone and context of the post. Light-hearted and intended to poke a few sacred cows...

But also you seem to be denying two important possibilities: 1. Something can be both flippant and insightful. 2. A "couple of sentences" is sometimes all that is needed to trigger a light-bulb moment in someone else's head.

We're not writing legal treatises here. We're trying to transfer insight from one conscious brain to another using an imperfect medium.


> Your reply makes it sound like you missed the tone and context of the post. Light-hearted and intended to poke a few sacred cows...

If that is the context and tone, then I must admit I missed it.

> But also you seem to be denying two important possibilities

I don't disagree this can happen. Let's see what the future will bring. Still I feel the writer could have done a better job going into more detail.


What? The post isn't a scientific article, it's a persuasive piece. The hypothesis -> testing -> results model of understanding is great for problems it can be easily applied to. Design doesn't seem to be one of those.


I spent some time in Web Apps between video game jobs. I got into some tense conversations where I, as manager, recommended against the abstraction layers the Senior Developer was trying to explain to the team on a whiteboard.

He said something like "Why do all the game developers write code like that?".

He handed me Uncle Bob's Clean Architecture. Uncle Bob is something of a meme to me. I read the book and couldn't figure out how to explain how I thought this style was inferior to what I had done in games before.

I've been thinking about that ever since it happened. How could I explain my programming philosophy to this guy?

Last night, back in games, after refactoring the same piece of layout code for several hours, I finally arrived at the irreducible, simple, clean result that I wanted. I slotted it into our code base, and mentally noted how I would onboard the team to it on Monday, and how they would instantly understand because the resultant code is so simple. Then it came to me:

"Unit Tests are like pouring concrete on your code base."

Same goes for using "the wrong abstraction". Sometimes you want concrete to encase your precious technology. The problem is when people are writing sub par code, then they pour all this concrete on it via Unit Tests, Inheritance and Abstraction Patterns. It can become harder to improve the core logic of the system if it's distributed across many files, and enshrined in Unit Tests.

I've dedicated over 10 years to coding, and only now do I think I can write clean code that is worth encasing in concrete. I think there might be a general flaw in the idea that everyone should be writing tests. I'm not certain about this, it's just my general intuition based on my experience.

There are pros and cons to every design decision. I acknowledge that there are many pros to these kinds of designs; I'm just enumerating some cons that have arisen from my experience.


> "Unit Tests are like pouring concrete on your code base."

My experience is the exact opposite. Tests make me feel safe to perform code changes a bit more aggressively than simple refactors. If behavior coverage is low, I spend more time double-checking I'm not breaking stuff along the way.

Sometimes changing implementation will mean changing tests too which might seem to be defeating the point, but this is because the test isn't good yet so it's an opportunity to improve the test and stop leaking implementation details. To me, it pays off handsomely in the long run.


If you have real unit tests then refactoring code also means refactoring unit tests, as they just test small units like functions in isolation. When refactoring you will need to replace units or delete them to be able to change units that are higher up in the abstraction. You may end up testing a lot of implementation details and keep them from changing.

If you only write tests of the higher abstracted units while also using the lower abstraction ones, you are not really doing unit testing anymore but what some people call integration testing.

In my opinion they both have their place. Most of the code we write is relatively throw-away code that will be used in one project/occasion and is prone to be changed because of changing requirements. You should test your requirements and functionality with integration tests, unit tests will cost too much time and will stand in your way. In rare cases we write code that is really reusable in a bigger scope and acts like a foundation set in stone that rarely changes and is widely reused. In general you should put way more effort in those and also use unit tests there.


I guess what you consider unit testing I consider useless testing, what you call integration testing I call unit testing, and maybe what I call integration testing you would consider end-to-end testing?


> "Unit Tests are like pouring concrete on your code base."

If they're very closely coupled to your code, yes.

I'll just say there's a reason why they're called "unit tests" not "class tests": because a "unit" is not necessarily a "class". You get to choose how big a unit is, and if it impedes change - like pouring concrete - choose differently.


Ya. I agree. If you can write good tests, they are very useful. I guess I'm thinking that for a lot of shops, for a lot of features, the tests are doing more harm than good. Perhaps by merely sucking up so much developer time that the devs take longer to develop the intuitive feel for good code vs bad code.

I've always felt that "good code" is far superior to mediocre code with tests. I'm sure a lot of this is because I've never been impressed with the "testing culture" at any place where I have worked, it's always just a tacked on bullet point on some sprint board check list.

I guess for me, I feel like I have developed an intuitive sense for good code that has mostly come through real, lived, expensive mistakes. I know it makes no sense as a programming philosophy, but I truly think I care way more about every line of code because I have no unit tests. There is no safety net. I've been writing code like this for a very long time, and I think I've developed a certain skill that has been empowered by these high stakes. Pushing bad code and having GitHub tell you about a failed test before it ever hits prod is not nearly as powerful a feedback loop as watching a crash rate skyrocket, having the Google Play store ranking tank, and ultimately costing an unknown, large amount of revenue. I'll never forget that merge.

I guess I'm not against "unit tests", I'm just speaking about the kinds of lessons I've had and how they influence literally every line of code I write or read. Ultimately, yes, I've made big mistakes, but at this point I think I write better code, at a faster pace, than my peers who haven't had these "trial by fire" experiences. I know it's not for everyone, but I feel as though it has made me stronger.


> I guess I'm thinking that for a lot of shops, for a lot of features, the tests are doing more harm than good.

Yes. Let me say that the conclusion I reached above about "unit != class" is one that I was forced to reach recently, by team members assuming the opposite.

In a lot of shops, the tests are done by rote, doing the opposite of what I said above: closely-coupled class tests on each public method. "pouring concrete" is a good description of the outcome.


The author rejects the "Single Responsibility Principle" as “Pointlessly Vague” and offers instead the “Fits In My Head” principle. That is apparently better due to some hand-wavy discussion, which leaves no room to explain why the same hand-wavy justification would not apply to the original.

stopped reading after that.


I think it's saying simple, dogmatic rules aren't a good idea and don't reduce complexity. Trusting a dev to not write functions that are unreasonably complex IS a good rule if you spend the time to check and review code.

Fitting your code around a load of "best practices" will basically always increase complexity because you've expanded the list of requirements the code has to conform to.


I don't think these principles are dogmatic; rather, they give you a direction. Can you see that the rule "your code should have single responsibility" is more specific than "write simple code"?


How is “fits in my head” not dogmatic, though?

And how does fitting around practices necessarily increase complexity? If you're a jr engineer used to writing a mess of code to solve a problem, and you have to decompose it following a set of principles, you may end up with something SIMPLER very easily.

The whole point about principles is they lead to better outcomes. Problems arise when you confuse them with “rules that can’t be broken” or (as in this post’s case) just refuse to understand them and come up with your own half-assed heuristics - either to justify your code, for lack of experience, or just... to talk at a conference :-)


It's a good point though. Who decides what "one reason to change" is? A 10-line method could easily have hundreds of reasons to change.


I think the point is that you wouldn't apply those hundreds of reasons all at once. But if your code is doing two things, you could end up having two reasons to change it at the same time.


You the programmer do. Do you expect everything to be automated by successive application of precise rules?


"This article isn’t about those principles, that will be my next post."

The article does not present an alternative to SOLID, that's the next article.


...and the linked slides


> Code is not an “asset” to be carefully shrink-wrapped and preserved, but a cost, a debt. All code is cost.

The analogy to accounting is still useful and I believe correct. Assets depreciate (lose their value) and have carrying costs (storage, insurance, interest etc) that are booked as expenses.

Put another way, a phrase like "all existing code incurs costs" is an accurate analogy. So too is "all code is an asset", because it implies the former.


Sure, but replacing code with new (cheaper?) code has cost in itself. All the author has to offer here is a recommendation to "Write simple code", presumably one that follows his "fits in my head principle".


> Sure, but replacing code with new (cheaper?) code has cost in itself.

Of course. That's true of many assets, usually this is called the "replacement value" of the asset. You use replacement value and depreciation to calculate whether replacement is the best option, or whether to extend the life of the asset you already have.

Tockey's Return on Software is a fairly readable guide to some of these topics: https://www.amazon.com/Return-Software-Maximizing-Your-Inves...


The problem with SOLID is not SOLID, it's the fetishism that sometimes surrounds it. I get it, rules are attractive as they make everything easier. You put on blinders and follow the track. But the fact that there's so much disagreement about how to best build software, with valid arguments on all sides of the multiple spectra, should hint at its truer nature as a craft (some would say a philosophy, but I haven't yet reached that level of enlightenment) rather than an exact discipline. Isn't it time to perhaps get comfortable with the idea that there are no perfect solutions, only trade-offs, and that the right answer is most likely "it depends"? I understand that some people feel protective of their investment in internalizing the various "principles", but learning them doesn't give anyone a license not to think and to mindlessly apply them. And yes, SOLID is mostly useful in its essence, but over time parts of it have caused confusion and have not always aged well (https://codeblog.jonskeet.uk/2013/03/15/the-open-closed-prin...).

At a more personal level, the DIP section of this article hit close to home for me, as I (again) lived the described scenarios a mere week ago. Sometimes `main()` is all you need and all you'll ever need. The extra tooling is also debt. If you choose to incur it, it must palliate an actual problem that you actually have, else YAGNI.


The issue with SOLID is not that it's wrong or outdated, it's that some people think it's something you learn in five minutes by reading its Wikipedia page, which it isn't. It takes a lot of effort to develop skills to adopt SOLID effectively.

> When I look at SOLID, I see a mix of things that were once good advice, patterns that apply in a context, and advice that is easy to misapply. I wouldn’t offer any of it as context-free advice to new programmers.

Of course! I wouldn't offer any OOP architectural/programming principle as context-free advice to new programmers. It's easy to misinterpret it if you don't contextualize it, show how it's used in real projects, discuss advantages and disadvantages, and so forth.

> The Single Responsibility Principle says that code should only do one thing. Another framing is that it should have “one reason to change”. I called this the “Pointlessly Vague Principle”. What is one thing anyway?

This gives me the impression that the author indeed misinterpreted this principle (just to name one), oversimplifying it. If you, like the author, have trouble understanding the Single Responsibility Principle, take a look at "On the Criteria To Be Used in Decomposing Systems into Modules" by David Parnas [1].

It seems to me that the author oversimplified the other principles as well, maybe on purpose, using controversy to attract attention to his post.

[1] https://www.win.tue.nl/~wstomv/edu/2ip30/references/criteria...


As nearly every poster in this thread mentioned, this headline is click-bait. I don't blame the author actually, this is very targeted click-bait (OO programmers) and I took the bait and actually enjoyed the read.

The criticism of SRP hits home for me - I try and write SOLID code when I can and SRP is challenging to follow. I struggle jumping between "Does this really fit SRP?" and "I can probably justify SRP here." I usually settle on the author's "Fits my head" and justify that when doing code reviews with my co-workers.

The author's comments on OCP don't accurately reflect my experience with the field of modern software development. I've worked on many projects (even green-field projects in the last 3 years) that have been very expensive and risky to change, where work absolutely needed to be additive because you couldn't trust refactoring tools to handle everything that needed to be changed.


Let me guess, they were Python or Ruby projects?


The argument against the Liskov Substitution principle was the most interesting to me. It uncovers the principle that makes the world go round: composition. While it’s easy to decompose code into components of single responsibility, I think the rules of composition are not clearly understood. Category Theory has already told us that composition depends on context (what the thing is used for). Yet some problems are intrinsically coupled, having relationships across layers of abstraction. In those cases the developer should broaden the definition of what “one thing” is. A process emerges for writing such code: gather deep understanding of the domain > develop an accurate mental model for what is to be accomplished > make wise choices on how to define categories of code. Overall not wrong, just insufficient.


The main problem I see is that, while the overall SOLID principles are still correct, the original definitions used are outdated and heavily imply inheritance everywhere.

Single Responsibility: honestly, I like 'separation of concerns' much more. People tend to think single responsibility means using tiny classes and single-line functions. The Open-Closed Principle, stating code "should be open for extension, but closed for modification", suggests to many that it's about inheritance only; "open for customizability" would be much better. Liskov Substitution seems like it's talking about inheritance when it really also applies to interface usage in general. The Dependency Inversion Principle is interpreted by many as "use dependency injection everywhere", etc...

I wish someone would go and update these definitions for the modern world.


I always took SOLID with a grain of salt, because they clearly started with the acronym and worked backwards to the constituent parts.

Is Liskov's substitution principle really one of the 5 most important pillars of software design, or is it just the best one that starts with an L?


One of Bob’s anecdotes is that it took him several years after he started teaching the SOLID principles to realize that they could be rearranged to form the word “solid.” So, no, I don’t think he started with the acronym.


The author mentions in the post, and I think I agree, that Liskov substitution is the most "principle-like" of the principles in that it's pretty much always applicable. If the child object can't be substituted for the parent object, then you haven't really extended it, you've just made another unrelated class that happens to share some code. Inheritance is probably the wrong pattern here, and if you call what you've done inheritance you're going to confuse the person who next touches this code.
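
The textbook illustration is the rectangle/square pair; a rough Python version:

    class Rectangle:
        def __init__(self, w, h):
            self.w, self.h = w, h

        def set_width(self, w):
            self.w = w

        def area(self):
            return self.w * self.h

    class Square(Rectangle):
        # "is-a" in everyday speech, but not substitutable:
        # changing the width silently changes the height too
        def set_width(self, w):
            self.w = self.h = w

    def stretch(shape):
        shape.set_width(10)
        return shape.area()

    assert stretch(Rectangle(2, 3)) == 30
    assert stretch(Square(2, 2)) == 100  # code written against Rectangle gets surprised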


I'm not arguing that it's not a real thing, but how often does the LSP come up in your day to day life as a programmer? I have not thought about this once, and I do not think there is a problem this has solved for me in 10 years.


It's not always easy to discover interface commonality. But when you do and it's a good abstraction, there are definitely benefits over scattering ifs and switches throughout your code.


To me Single Responsibility means that the code should do one thing and one thing only, on the abstraction level which it exists.

The example given by OP is about ETL. Is that one thing or more than one thing? It depends on the abstraction level. If you are talking about a data access object, sure, let it do all those things, because considered at that abstraction layer it does one thing: access data.

But if you are looking at individual functions or methods, one abstraction layer further down, then the S in SOLID says you should not mix extraction and transformation, for example.

It all depends on context, and the SOLID principles are principles, not laws.
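
Roughly, in a made-up Python sketch of the ETL example: at the higher level the whole pipeline is "one thing", while one level down extraction and transformation stay separate:

    # one thing at the higher abstraction level: "produce the report data"
    def load_report(rows):
        return transform(extract(rows))

    # one thing each, one level down: extraction and transformation stay separate
    def extract(rows):
        return [r for r in rows if r.get("valid")]

    def transform(rows):
        return [{"name": r["name"].upper()} for r in rows]

    data = [{"name": "ann", "valid": True}, {"name": "bob", "valid": False}]
    assert load_report(data) == [{"name": "ANN"}]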


"single responsibility" is not reducible/replaceable with "code that 'Fits In My Head'".

Our head may very well fit two pieces of code which are disparate and which it would be useful to separate into distinct functions.

----

"Open-Closed Principle... was sage advice in an age where ... we hadn’t figured out refactoring yet ..."

Well, as far as I'm concerned, we haven't "figured out refactoring" yet. I'm currently working with a group of people 75% of whose work is just gradually and slowly refactoring an existing codebase.

"Nowadays, the equivalent advice if you need code to do something else is: Change the code to make it do something else! It sounds trite, but we think of code as malleable now like clay"

That may be true when you're writing code for your own use, or the use of a very small group of people (the large "user base" doesn't work with the code, they interact with what the code does). When you're writing a library, or code with which near-strangers need to work, you can't just willy-nilly change things. Moreover, you must strive to make your code good enough so that it doesn't need to be changed all the time; an ever-changing codebase is difficult to rely on (unless somebody takes a snapshot, and then we're back to the case of never-changing code).

---

... but I do appreciate some of the points made. Defining what "one thing" means is definitely not trivial and obvious, for example.


>When you're writing a library

99% of developers are working in boring business software with 2-3 other developers.

99% of codebases will be touched by maybe a dozen developers in its entire lifetime.


Did you make up those numbers out of your head? When you don't refer to a specific (meta-)analysis you can still give estimated numbers from guessing, but then you should give intervals, e.g. replace "99%" with "most".


“The Single Responsibility Principle says that code should only do one thing”

Except it doesn’t really say that. Without understanding the S, the OLID is bound to seem like mindless pedantry.


The Single Responsibility Principle (and the related Separation of Concerns Principle) have been particularly damaging in the context of UI development.

They are reasonable principles and often do apply, but there are times when the concept of Locality of Behaviour is better for overall system simplicity and stability:

https://htmx.org/essays/locality-of-behaviour/


The only way someone could arrive at the conclusion that “write simple code” is superior to any clean code principles is when they have already “absorbed” and perhaps transcended these principles. What does this tell us? It takes a mature developer.

You cannot tell a junior developer to just “write simple code” and then expect any kind of sensible result. But then, you also cannot tell them to write SOLID code. Indeed, you need to teach the basics first: How to understand the requirements. How to write any code that fulfills them. How to write legible code that fulfills them. How to write good code that fulfills them.

If you dump these principles onto developers that have not yet learned the proper developer mindset, the outcome will most likely be disaster. It is very hard to repair the damage.

Only after a developer can actually write good code can you teach them how to take it to the next level. How some libraries are just so awesome to use without having to check the docs. How to conceive code you’ll look at in 6 months and think “damn, that’s some shit”.

Why every single clean code principle is a recipe for disaster – if told to juniors.


At first, I thought this was going to be about the Solid Project (which is sometimes stylized as SOLID) by Tim Berners-Lee. It's not. That would have been fun.

Regarding SOLID principles being outdated, I disagree. The high-level concepts of SOLID are good and just as relevant as ever. While the author proposes that code is now less risky to change, I would argue that the asynchronous, distributed systems we often work with today are significantly more complex and error prone. The world also got hooked on dynamic languages for the past couple of decades, although it feels like that shift is slowly reversing.

Software bugs have become a routine part of daily life, partly because there's more software but also because we build that software more haphazardly to meet deadlines for ever-expanding sets of features. Today, a large number of developers lack proper education or training and have relatively little experience. We work in teams where there's a lot going on besides writing code, and there's more distraction around us than ever before.

To me, there's no shortage of reasons to worry about changing a line of code. I spend a lot of my time trying to understand and reduce that risk so that I and everyone around me can be more productive in the long run. The SOLID principles are a useful guide in that effort.

That said, you may want to update your interpretation of SOLID within the context of your latest coding practices. For me, that meant applying SOLID to functional programming. At first this seems odd, since much of the language around SOLID implies the use of classes and object-oriented programming. But it turns out there is a lot of overlap between the philosophies of SOLID and functional programming. [1]

1: https://dev.to/patferraggi/do-the-solid-principles-apply-to-...


As the author described it in the preface, the topic was rather meant to be engaging and entertaining for the occasion. So let's not confuse this with a true debate over SOLID.

> "... So what would I do instead? I thought there might be a one-to-one correspondence for each of the SOLID principles and patterns, since there is nothing inherently bad or wrong with any of them, but as the saying goes, “If I were going to Dublin, I wouldn’t start from here.” "

As with anything, SOLID is a guideline, not a dogma. The purpose is to encourage developers to approach the design of their code mindfully.

Personally, in my memory the advent of SOLID principles coincided with widespread promotion of better approaches to testing. These principles indeed helped design for better testing. And better testing provided more freedom for changes going forward.

If another set of principles helps one's projects better attain their goals, well, sure, embrace them by all means.


Why everything you know about software engineering is WRONG, and this one WEIRD trick you can do to get ahead of the pack.


SOLID isn’t very good, but most of the alternatives are worse. If the author thinks the SOLID principles are vague, then going on to say “it should fit in my head” doesn't seem like a good replacement; that is also very vague and arbitrary. Perhaps the author should combine them so it gets somewhat less vague.


The SOLID principles are associated with model-driven development and that paradigm. These principles have helped me in many ways: improving code, and explaining why it needs to be that way. They make it easy for me to bring consistency to the code base and to bring new people into the project.

There is definitely room for new models and new paradigms when writing code, and I welcome that. The click-bait article is just that... click-bait. It's silly and reminds me of the "climb that tree" quote.

Consistency has always been a big deal for me. Different code-bases need different principles, standards and models. If you are consistently wrong you can consistently fix your issues.

My experience is that if you are inconsistent, you usually will end up with the dreadful spaghetti of things.


There's much here I agree with -- and especially with the general lesson of "don't get too hung up on principles" -- but at one point the article speaks of "1980s entity modelling" in a denigrating way, as if methods of formalisation age. It's like saying that doing logic-style type-level programming or SQL is "1970s logic modelling", and using higher-order function combinators is "1950s functional modelling." Many of the interesting ideas in programming languages from the 1980s haven't yet come of age, let alone grown outdated. Also, it is true that there's much we learn over the years, but also much we forget and reinvent.


One of the most useful pieces of programming advice I have been given is to make an abstraction on the third time you solve a given problem. It encourages making abstractions for common tasks, but also acknowledges that in a large codebase the wrong abstraction is an order of magnitude more expensive than not having an abstraction at all, and it's unlikely that you have enough information to make the right abstraction the very first time you are looking at a problem.

As with all things, there is a cost (slightly more code duplication for rarely-solved problems) but also huge benefits in having really solid abstractions built from lots of experience.
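
A sketch of what that looks like in practice, assuming some retry logic had already been copy-pasted twice (the fetch functions below are hypothetical):

    // Extracted on the third occurrence, once the shape of the duplication is clear.
    async function withRetry<T>(attempts: number, op: () => Promise<T>): Promise<T> {
      let lastError: unknown;
      for (let i = 0; i < attempts; i++) {
        try {
          return await op();
        } catch (err) {
          lastError = err;
        }
      }
      throw lastError;
    }

    // Call sites that each used to carry their own copy of the loop:
    //   const user = await withRetry(3, () => fetchUser(id));
    //   const order = await withRetry(3, () => fetchOrder(id));
    //   const invoice = await withRetry(3, () => fetchInvoice(id));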


SRP is vague because it really depends on how your team prefers to draw boundaries around the responsibilities. Some places choose to have meatier responsibilities (an entire ETL pipeline as one class), or prefer smaller responsibilities (same ETL pipeline, but each stage of it is a separate class). Both of these could use roughly the same amount of code, whether it's implemented in 1 class or across 3 classes. Some of this is driven by how user stories and work is organized.

SRP never really recommended how large those responsibilities should be, but that's almost impossible to do given how widely programming is applied (especially for a quick acronym parroted in interviews). Had it tried to specify this size, we would likely be dealing with "OLID" instead of "SOLID", since it would be much easier to criticize.

It comes down to how many tabs you want opened in your IDE and how good you are at moving between them and keeping the object relationships in your head. If I have a god object to handle one responsibility, I only have to look at one file, but it might be thousands of lines long. Or I can reduce that down to tens of files open with hundreds of lines each. Or it could be hundreds of files with only around ten lines each. The god object allows me to mostly forget about any object relationships or abstractions. But I pay some price by having to learn the entire god object before being able to sensibly work on it.

If I have some object inheritance like:

IPipeline -> AbstractPipeline -> SqlPipeline -> PostgresPipeline, SQLServerPipeline, MySqlPipeline

IPipeline -> AbstractPipeline -> NoSqlPipeline -> MongoPipeline, CosmosDBPipeline

(every architect dreams of this)

It's easier to change things per database without affecting the others. Since each of these products changes independently of one another (Mongo doesn't consult Microsoft about SQL Server), it makes more sense to do it this way as opposed to having a god object when you know you must support these other databases.
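
In TypeScript that hierarchy might be sketched roughly like this (all names illustrative, following the shape above):

    interface IPipeline {
      run(batch: string[]): Promise<void>;
    }

    abstract class AbstractPipeline implements IPipeline {
      async run(batch: string[]): Promise<void> {
        for (const record of batch) await this.write(record);
      }
      protected abstract write(record: string): Promise<void>;
    }

    abstract class SqlPipeline extends AbstractPipeline {
      // Each vendor differs only in details like parameter placeholders.
      protected abstract placeholder(index: number): string;
    }

    class PostgresPipeline extends SqlPipeline {
      protected placeholder(i: number) { return `$${i}`; }
      protected async write(record: string) { /* INSERT ... VALUES ($1) using `record` */ }
    }

    class SqlServerPipeline extends SqlPipeline {
      protected placeholder(i: number) { return `@p${i}`; }
      protected async write(record: string) { /* INSERT ... VALUES (@p1) using `record` */ }
    }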

On the other hand, if you know you will only ever use Postgres, IPipeline, AbstractPipeline, SqlPipeline/NoSqlPipeline become mostly useless abstractions.

This also varies greatly depending on whether you're trying to build building blocks for other developers or just trying to ship a product. Principles aren't always going to apply to both of those, and SRP might be one of them. It wouldn't hurt to add "depending on how your team defines responsibilities" at the end of SRP.


Some of the points the author makes are good, but some are severely misguided. An example:

> The Single Responsibility Principle says that code should only do one thing. Another framing is that it should have “one reason to change”. I called this the “Pointlessly Vague Principle”. What is one thing anyway? [...]

If a change request comes in to change the format of the logs to include milliseconds and you're running a search and replace over the code base, you've broken the SRP. We can argue about whether that is a good thing or a bad thing, but that's what it means. I think it's a bit of both, a trade-off, like most things.
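
A hedged sketch of the alternative, with invented names: the timestamp format lives in exactly one place, so adding milliseconds is a one-line change instead of a code-base-wide search and replace.

    class LogFormatter {
      format(level: string, message: string): string {
        // To add milliseconds to every log line, only this line changes.
        const ts = new Date().toISOString().slice(0, 19);
        return `${ts} [${level}] ${message}`;
      }
    }

    class Logger {
      constructor(private formatter: LogFormatter) {}
      info(message: string) { console.log(this.formatter.format("INFO", message)); }
      error(message: string) { console.error(this.formatter.format("ERROR", message)); }
    }

    new Logger(new LogFormatter()).info("pipeline started");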


I believe that would be violating DRY, not SRP.


I think a couple of the criticisms are valid - mostly the criticism of dependency inversion.

But overall I think the article (if we were to take it seriously) basically represents a nihilistic point of view that says "there's no point trying to have principles - code is either subjectively good or it isn't".

I'm as sympathetic to heterodox criticism as anyone, but this desire to throw away any received wisdom at all definitely seems misguided to me.

(For what it's worth, I think the single responsibility principle and the open closed principle are particularly valuable. I'm open to persuasion on the others)



Using the word “wrong” here feels click-baitey when he admits many of the principles are right in certain cases or certain ways. The opposite of every great truth is a great truth. If a principle were true across all space and time with zero exceptions or nuance then software development would be easy.

I very much like the “fits in your head metric” though and I wrote about it at length here:

https://tobeva.com/articles/brain-oriented-programming/


I think that the problem with the SOLID principles was the use of all capital letters and the word "principles" instead of "guidance".

Now that that original sin has been committed, it is super easy to take shots at it.


To me, there is one fundamental principle:

Better code is the code that takes the lowest total time for a mediocre developer to get from point 1 to point N, while at the same time resulting in the lowest number of help desk tickets.

It's the essence of the MIT vs. New Jersey design styles (i.e. "worse is better").

The problem is, it's impossible to measure that (unless we can tap into parallel universes), which is why design is a form of art in the real world.



The tech community constantly presents its opinions as truth (the article is an example of this). I wonder if YC could be in a position to do some quantitative research about what works, especially as companies evolve their codebases, and whether that even matters as much as we software engineers feel it should.


SOLID acronym:

https://en.wikipedia.org/wiki/SOLID

As a pure mathematician, whenever I find myself reading an acronym-heavy paper I know I'm in the wrong part of town.


The thing that strikes me the most about SOLID and the associated suite of practices and doctrines ("clean code," "craftsmanship," etc.) is the complete lack of evidence.

What you have is a series of anecdotes from people pushing particular strategies saying that they think the thing they're saying is a good idea and other things are a bad idea. That doesn't tell you very much about whether the code worked in context, especially on teams of large numbers of developers. It doesn't tell you very much about whether the sample code on a blog that is (allegedly) more readable or understandable is actually more readable or understandable to others.

And many of the folks pushing strategies, giving talks, etc., seem to be better known for being consultants about how to program than for programming per se. I have, I think, access to more empirical evidence about good and bad programming practice than any blog or talk that a SOLID advocate has given - but it's all internal to my employer's source control repo and you can't see it. If I could, I could easily point to code that has tried to follow "proper" design and has turned into Enterprise FizzBuzz and I could also point to short, readable code that doesn't follow any of the SOLID principles but it doesn't matter because you can read the entire thing in 10 minutes.

Now I admit that I'm not offering evidence either, and as much as I agree with this article, it's also a first-principles argument. It says, for instance, that in our modern era recompiling a system from scratch is cheap and you don't need the open-closed principle, which I strongly agree with, but I can't immediately point at an example of a system that followed the open-closed principle and accreted into cruft, nor at a system that intentionally keeps its internals easy to edit and has worked well.

I think we ought to do something about it as an industry. I don't have great ideas about how. I suppose we could look at open-source code (does Git follow SOLID? Chrome? TensorFlow? OpenStack? what about systems that do the same thing but with very different architectures, like GCC vs. LLVM?). But open-source code is generally subject to different development pressures and expectations than internal code, and the question many of us are trying to answer is, how do we write code at our day jobs that works well?

All that said, I will say this article matches my (anecdotal) experience - and I would go further and agree with the comments that you want to test your system in its entirety, including running the actual database in tests instead of swapping it out. Just like it's much easier to recompile something now than it was in the '90s, it's much easier to spin up a database in a cloud VM/container/etc. than it was in the '90s. Dependency inversion and injection is a workaround for environments where unit tests are easy but integration tests are hard, and making integration tests easy both solves this and has a range of other benefits for software quality (and development velocity) too.
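
As a rough illustration of "test against the real thing" (assuming Docker and the `pg` client library; the connection details are placeholders), something like this replaces an injected fake repository:

    // Start a throwaway database first, e.g.:
    //   docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=test postgres
    import { Client } from "pg";

    async function integrationCheck() {
      const db = new Client({
        host: "localhost",
        port: 5432,
        user: "postgres",
        password: "test",
        database: "postgres",
      });
      await db.connect();
      await db.query("CREATE TABLE IF NOT EXISTS users (id serial PRIMARY KEY, name text)");
      await db.query("INSERT INTO users (name) VALUES ($1)", ["alice"]);
      const { rows } = await db.query("SELECT name FROM users");
      console.log(rows); // exercise the behaviour end to end, no mocks involved
      await db.end();
    }

    integrationCheck().catch((err) => { console.error(err); process.exit(1); });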

(And actually there's a potential piece of empirical evidence here: software that doesn't have per-unit licensing costs, either open-source software with no costs or cloud SaaS with continuous billing based on usage, has been beating traditional enterprise licensing. I think you could argue that a big reason for that is that you can spin up that software in a CI environment or on a developer's machine. I can run a local MySQL much more easily than I can run a local Oracle DB, so I'm much more likely to do it, and we're much more likely to go to prod with MySQL, in turn.)


It's difficult to come up with quantifiable evidence when your method doesn't have a precise measurable definition.

But good practices in a craft don't necessarily need to be quantifiable to be useful. If you learn a group of concepts on how to improve your work, you can ask yourself: "Is any of the SOLID principles applicable here?" and "Is the resulting code better if I apply them?"

If you answer "yes" to both questions, you're using the principles in practice to improve the code without having empirical proof that they worked, just gut instinct.


Although I’m sympathetic to your complaint, I think it’s misguided. I can’t think of any design technique or principle that has the kind of objective evidence you’re requesting. And yet, I’m still capable of subjectively evaluating a design.

The underlying problem, I think, is that design is a human problem, not a technical one. Design is about making code easy to understand and modify. How can its quality be measured?

Without objective criteria for design quality (and metrics like cyclomatic complexity are far too simplistic), no study of design principles is possible.


The Accelerate book has empirical evidence about what matters in software development. Conformance to SOLID wasn't one of them. Being able to deploy your code was.


In other words, "I present here a particular interpretation of SOLID, and then demonstrate that this interpretation is problematic".

"Join us next week when I will introduce a new acronym, whose interpretation will be superior to this interpretation of SOLID".


SOLID (or any set of design principles) is important in the sense that you should think twice when you don't obey it. So it is not necessary that the principles are strictly well defined: they are stopping points for thinking, not items to tick on a YES/NO list.


I do not waste my time trying to prove or figure out what the "proper" way of doing things is. I use whatever approach I feel suits a particular case best, and I do not give a flying hoot about the latest fashion.


Hmm, I came here for a discussion of the history of the security identifier... https://en.wikipedia.org/wiki/CUSIP


Yawn. Exceedingly tired of these “X is bad” kind of posts. People can critique things all day - it's a hell of a lot harder to create something new, and those are the kinds of pieces I'm interested in.


SOLID is folk wisdom to begin with. Most of development (as opposed to computer science) is unresearched old wives' tales and cargo cult.

Is SOLID any good? Who knows? Who proved it? Heck, who even did tests?


I was hopeful, because I tend to think most "universal" principles are 80% working around problems/limitations/design choices of what were, at the time, the mainstream programming languages (C, C++, Java), and which don't apply to (some) other languages (which have different limitations of their own).

But I don't think this author gets it: neither what the SOLID elements actually are, nor how they become moot when using languages that solve the same issues they are solving.


I love this, especially the comment on DIP. We went way overboard with dependency injection implementations.


On this note, can anyone recommend a good tutorial that introduces SOLID with TypeScript?


TL;DR: SOLID is basically a set of principles saying "keep things small and modular and encapsulated so you can keep them in your head at each layer", and so the specifics of SOLID are silly but the overarching goal is sound.


FYI this is not referring to TBL’s SOLID project


To me the whole saga of “clean code” and “SOLID” is just pseudo-science promoted by people who have made a fortune selling books and consulting on the subject. Unfortunately software engineering is not an exact science, and it's for that specific reason that I'd rather consume these concepts with a pinch of salt and not take them as axioms.


The article, imho, reads like the main goal of the author is to prove he has a bigger dick than Uncle Bob’s...

sadly, he fails miserably at it.


My god what awful clickbait


I'm not a huge fan of SOLID, but when writing OOP in a conventional style, I have found that it works pretty well. All the principles play nicely with each other. You don't have to write SOLID, but you probably shouldn't write Java code like you're a Haskell developer because that's not a natural or common way to write Java.

Writing simple code is great. For when you can write simple code. For everything else, people invented things like SOLID. We need and want to do complicated things and maintain complicated codebases. If you want to do simple things simply, I'm not even sure OOP is a great model, let alone SOLID, which is an attempt at simplifying people's OO constructions. For all intents and purposes, SOLID is how, in fact, you write simple code in complicated projects.

"Single responsibility" is a much better heuristic than "fits in my head" because code expands and gets bigger. It fits in your head now, but no decently sized, actively developed project stays simple. When do you break it up? When it starts taking on multiple responsibilities. Yes, that isn't objective. But the alternative proposed is even less objective. Additionally, "fits in your head" is inferior because it assumes you already understand the codebase. If you are new to the code base, trying to weave through the functions and objects called to figure out what it does, looking at code that does multiple things is confusing and hard to synthesize without first understanding the whole. And you can't understand the whole until you understand the pieces. Single responsibility is much more specific and identifiable than the author's alternative.

The author mis-attributes the Open-Closed principle to a performance optimization. The open-closed principle allows me to make assumptions about what a class is doing and how it behaves. This is because I know it hasn't been modified. Wikipedia notes that the original formulation of the term, a closed class or module "has been given a well-defined, stable description." The point here is that if you don't let something be fundamentally changed during runtime, then you can reason about it. It is definitely not a performance hack.

The author states that the Liskov Substitution Principle is the principle of least surprise, which is surprising as you cannot substitute one principle for the other. This suggests to me that the author does not actually understand the meaning of this principle, especially as he fails to give a substantive critique and unhelpfully declares that you should write simple code. The Liskov Substitution principle is, again, how we reduce the apparent complexity of code so it allows us to reason about it more effectively.

This next one is baffling. The author of this article recounts the anecdote showing an instance where the interface segregation principle would have been helpful in designing the code. However, the author appears to be unable to glean from that anecdote the underlying principle being advocated and is fixated on the fact that in the story the code was refactored. Since the anecdote describes an implementation, the author is unable to see it as anything other than a design pattern.

The author claims this principle has cost billions of dollars in damage. He does not feel the need to explain this or provide a citation. Just, you know, billions of dollars in damage. Like we just all know that this powerful decoupling principle just set a large fortune on fire and this is plainly obvious to everyone. Next he says "the real principle here is option inversion." No, it isn't. I don't know where the author got this from. But I suspect given what he's said previously that it just sort of made sense to him and then decided that must be what this principle is about without any further discussion. But the author is none the less actually correct that you can over-use this principle to over-complicate code before it really needs to be that complicated. These are, of course, ultimately rules of thumb that work well together. Common sense and experience are still required.

Robert C. Martin has an unusual habit of giving advice that sounds kind of stupid at first glance, until you actually try it out for yourself. His "rules" often have hidden reasons behind them that only become clear when you write code with them. I suspect that the author did not give SOLID a serious shake as his criticisms all sound strongly like someone who read about SOLID, failed to internalize their meaning or give the technique a try, and is now attacking a strawman with advice that, quite unlike Robert C. Martin's style, sounds okay at first glance, but you quickly realize does not translate into actionable advice when you're coding. I've heard "fits in your head" before. It sounds like what you want, but it does not allow you to identify problems early and predict what sorts of changes you might need to make to the code to prevent those problems. Especially if you're a beginner, try writing according to SOLID principles. See if it helps. After you understand why those are the principles and why they all go together by seeing what sorts of decisions it causes you to make in your own code, you're free to ignore it having at least learned what the advice actually is.



