SOLID Principles: A Software Developer's Framework to Robust, Maintainable Code (khalilstemmler.com)
300 points by stemmlerjs 4 months ago | 167 comments



The longer I've worked in software development, the less I'm convinced that any of this matters, at all.

What has mattered more to me is choosing the right architecture, technologies, and dependencies; understanding the fundamentals, with at least a vague sense of what the computer is doing with the code I'm writing; and a basic grasp of algorithms and techniques like state machines.

I've grown almost disdainful of things like SOLID principles, after taking them more seriously in the past.


So glad to see this stated publicly, and actually getting some positive responses. 25+ years of software development and the best "pattern" I can identify after dozens of projects is just one: simplicity.

If you can't follow the logic by just reading the code, then maintenance will be slow and cumbersome. New development will eventually taper off as more and more time is spent fixing issues. I've written production systems, still in operation today decades later, that hold true to the simplicity principle. I wouldn't do it any other way. But, the battle I frequently face is with upstart devs who want to use some new pattern or principle because they read that it was "better" than straight-up code. And yes, they can implement some new feature quick, but 2 years down the road all hell breaks loose.

Some of my pet-peeve anti-patterns:

* "Plugin" systems only used by the primary development team. There's power in being able to follow the logic for any call stack by just reading the code; plugins make the flow much more challenging, and provide little value.

* Large (and complicated) run-time configuration IOC systems which make any debug session a dreadful process.

* Rules Engines. 'nuff said.

* Piles of abstractions, one atop the other, because teams had no time to find commonalities and/or sensible use-cases.

* 25+ new source code files to respond to a /heartbeat endpoint.

* Fixing a problem in 3 lines of code is not fixing anything if it complicates the debug process, forces a re-write of the unit test drivers, introduces mysterious build failures or taxes the DevOps team.

And some of my favorite simplifying patterns:

* AOP for logging; but NOT (dear god) for business logic.

* 3 levels of abstraction, and no more. If you can't figure out a way to do it in 3 levels of abstraction, then something else is wrong. (A sketch of what this can look like follows this list.)

* 3 systems of state: config (global), current request and persistence (database). If you need more than that, then something is wrong with the architecture. (Ok, compilers may not be able to get away with this, but most applications can, in my experience.)

* If you use a pattern (MVC, for example) apply it to the whole system. E.g. doing IOC only in the upstream API system when the rest of the code doesn't, just makes a mess.
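(To make the "3 levels of abstraction" rule concrete, here's a minimal Java sketch with hypothetical names; the only claim is that the call stack bottoms out after three hops: transport, business logic, persistence.)

    // Level 1: entry point; transport concerns only
    class OrderHandler {
        private final OrderService service;
        OrderHandler(OrderService service) { this.service = service; }
        String handle(String orderId) {
            return service.cancelOrder(orderId) ? "200 OK" : "409 Conflict";
        }
    }

    // Level 2: business logic, in plain readable code
    class OrderService {
        private final OrderStore store;
        OrderService(OrderStore store) { this.store = store; }
        boolean cancelOrder(String orderId) {
            return store.markCancelled(orderId);
        }
    }

    // Level 3: persistence; the stack stops here
    class OrderStore {
        boolean markCancelled(String orderId) {
            // e.g. UPDATE orders SET status = 'CANCELLED' WHERE id = ?
            return true; // stubbed for the sketch
        }
    }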


SOLID is useful as a rule-of-thumb, but there's always one over-arching principle that should trump all others: YAGNI.

It's easy to build something simple (and perhaps naive) and make it more complex over time as your needs become selectively more complicated and your grasp of the domain more nuanced, but it's impossible to start complex and pare down to a more simple implementation. Even worse, your bad design decisions due to your less-than-complete grasp of the domain become impossible to remove due to all the layers.

There is no One Right Answer (tm) to system architecture; it comes with experience. All of these principles should be considered guidelines to help in your design, not ironclad rules.


Except when you will surely need it, just not now, and will have to change your entire core everything to bolt it on.

People saying YAGNI usually make much worse atrocities than the ones aiming for SOLID. And on SOLID, the I alone may be responsible for the overwhelming majority of the problems.


Anyone writing a library should be forced to single step through it in a debugger in front of a group of people and listen to all the grumbling.

The thing is if you have twenty devs all making their code as absolutely simple as possible, the overall system is still gonna be pretty complicated. That’ll be challenging enough. Don’t add your own accidental complexity on. Even if you can fit it in your brain, you’ll be “misunderstood” and “have to” fix things for others all the time instead of working on what you want to work on.


Devil's advocate opinion.

Some might say that the library is poor if you have to debug it at all.

Any library you have to debug to make work is, by definition, a leaky abstraction.

Now, if you followed SOLID principles, you could simply swap the library with the leaky abstraction out with another under a new concrete implementation, verify in tests that you are calling the library with the correct arguments, and your problem should be fixed by only touching the wiring code to point to your new implementation. If it is, then you can deprecate the old implementation class, find all the uses via compiler warnings, and replace them with your new implementation. Then all your tests should pass, and the issue should be fixed.
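(As a hedged sketch of the swap described above, with all names hypothetical and assuming the leaky dependency is a mail library: the library never leaks past one concrete class, so the fix is confined to the wiring.)

    // The abstraction our own code depends on
    interface Mailer {
        void send(String to, String subject, String body);
    }

    // Old concrete implementation wrapping the leaky library
    class LeakyLibMailer implements Mailer {
        public void send(String to, String subject, String body) {
            // delegates to the misbehaving third-party library
        }
    }

    // New concrete implementation wrapping a replacement library
    class ReplacementMailer implements Mailer {
        public void send(String to, String subject, String body) {
            // delegates to the replacement library
        }
    }

    // The only code that changes is the wiring
    class Wiring {
        static Mailer mailer() {
            return new ReplacementMailer(); // was: new LeakyLibMailer()
        }
    }

Every caller keeps asking for a Mailer and never notices the change; the deprecated LeakyLibMailer can be deleted once the compiler confirms nothing references it.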

Accidental complexity arises when you can't do this; it comes from not properly abstracting away the use of others' code from your own, and from over-applying the NIH and YAGNI principles at both ends of the spectrum.

Simple means "of fewer parts." In maintenance, this means you change fewer parts of your system to make a change. In design, this means you use fewer abstractions. The balance is struck when you have just enough abstraction that maintenance and design are simple.


All software has bugs.

All abstractions leak.

Composition always leads to new problems.

And we are talking here specifically about people using “good practices” in a way that makes the code far more complicated than it needs to be.

The guy I know right now who writes the most tortured code is so stuck on code deduplication that he’s made a mess. Stepping through his code is a recursive nightmare of wiggle words (compounded by his raging overconfidence in his grasp of English). His code has no shape and he likes it that way.

Somebody hurt him and we are all paying for it.


> So glad to see this stated publicly, and actually getting some positive responses. 25+ years of software development and the best "pattern" I can identify after dozens of projects is just one: simplicity.

Yup. So much so that if it takes me several reads to understand code, it's a code smell for me.


“Just because it works, doesn’t mean it isn’t broken.”


Sickle cell trait works to protect from malaria.

It works but is broken.


> I can identify after dozens of projects is just one: simplicity.

Isn't that the point of SOLID, though? To try and have some guiding principles for achieving simplicity. Certainly I have noticed that my own code hasn't been easy to understand a few months after writing it; one of the reasons used to be that I was trying to do too many things in one function/method.

Of course, having said that, everything is usually a trade-off, and overemphasizing things will probably be counterproductive.

The other problem with ideas like SOLID is that they tend to be so abstract that it's difficult for them to be meaningful until you have enough experience to not really need them as guidelines.
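(A tiny illustration of the "too many things in one method" point, with hypothetical code: the first method does three jobs inline; the second names each job so it still reads easily months later.)

    import java.util.HashMap;
    import java.util.Map;

    class UserImporter {
        private final Map<String, String> database = new HashMap<>();

        // Before: parse, validate and persist all in one place
        void importUserAllInOne(String csvLine) {
            String[] parts = csvLine.split(",");
            if (parts.length != 2) throw new IllegalArgumentException("bad line");
            database.put(parts[0].trim(), parts[1].trim());
        }

        // After: one responsibility per method
        void importUser(String csvLine) {
            String[] user = parse(csvLine);
            validate(user);
            save(user);
        }

        private String[] parse(String csvLine) { return csvLine.split(","); }

        private void validate(String[] user) {
            if (user.length != 2) throw new IllegalArgumentException("bad line");
        }

        private void save(String[] user) {
            database.put(user[0].trim(), user[1].trim());
        }
    }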


> Isn't that the point of SOLID, though? To try and have some guiding principles for achieving simplicity.

Probably yes, but some things are overkill.

"Single responsibility principle" is great, and should be practised.

"Open–closed principle" is also good. No problem here.

"Liskov substitution principle" sometimes feels like it should be more a language feature than an implementation one.

"Interface segregation principle" can be ok. The problem lays on when one creates interfaces for things that don't need one, because they are one of a kind. The code becomes confusing without any real gain.

"Dependency inversion principle" depends a bit on the previous point, because the links are through interfaces.

I've seen (and been charged with maintaining) programs with too much of the I and D (of SOLID) applied, and it's chaotic.

It makes sense to apply them in an enterprise-size program, and it makes sense to apply them as a refactor when the program grows. But people applying the principles willy-nilly to programs that do not need them (yet) make it too confusing for the ones who come after.
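(A small, hypothetical illustration of the "interfaces for one-of-a-kind things" complaint: the first version adds a file and an indirection on every read and buys nothing until a second implementation actually exists; the second is all most programs need.)

    // Applied willy-nilly: an interface with exactly one implementation, forever
    interface InvoiceNumberGenerator {
        String next();
    }

    class InvoiceNumberGeneratorImpl implements InvoiceNumberGenerator {
        private int counter = 0;
        public String next() { return "INV-" + (++counter); }
    }

    // Often all that was needed; extract the interface when a second
    // implementation genuinely shows up
    class InvoiceNumbers {
        private int counter = 0;
        String next() { return "INV-" + (++counter); }
    }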


This so much!

Simplicity is underrated, and unfortunately young devs especially are proud when they're capable of building complex systems.

After working extensively with plugin architectures like OSGi and learning about functional programming, I am deeply skeptical when somebody mentions plugin systems and IoC containers.


I've just recently started using Java 8 on the job, and what's really jumped out at me is the sleight of hand of the Function interface. By simply creating a generic interface called Function and "tricking" developers into coding to an interface, things like IoC containers and mocking libraries are no longer needed.

I don't need Spring to auto-wire in dependencies in order to simplify testing. In my test class I can just whip up whatever dummy implementation I want and add a constructor to the Unit Under Test to inject that implementation manually.

The other thing that's been rolling around in my mind using Java 8 with the Function interface is: if you want to expose architecture issues and other warts in the code base, disallow the use of mocking test frameworks.
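(A minimal sketch of what the parent means, with hypothetical names: the unit under test takes a java.util.function.Function in its constructor, so a test can hand in a lambda instead of pulling in Spring or a mocking framework.)

    import java.util.function.Function;

    // Unit under test: "look up a user's email" is injected as a Function
    class WelcomeMailer {
        private final Function<String, String> emailLookup;

        WelcomeMailer(Function<String, String> emailLookup) {
            this.emailLookup = emailLookup;
        }

        String welcomeAddress(String userId) {
            return emailLookup.apply(userId);
        }
    }

    class WelcomeMailerTest {
        public static void main(String[] args) {
            // No container, no mock library: just a lambda as the dummy implementation
            WelcomeMailer mailer = new WelcomeMailer(id -> id + "@example.test");
            assert mailer.welcomeAddress("alice").equals("alice@example.test");
        }
    }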


I had a knee-jerk response to this: SOLID really is advocating for simplicity, under a somewhat rigorous definition.

Completely agree with your statements, a good set of guidelines.

A challenge we face is that there will always be those who either don't know what they don't know so will continue to reinvent the wheel, or those that don't feel at ease unless they are doing something clever and difficult to maintain.

I have seen many talented folks from both sides but they are ultimately a cancer to any product.

Forcing them to maintain their mess has helped but most times it is pawned off.

Transparency, traceability, ownership...that I believe is a path of least resistance that could improve things and lead us towards simplicity.


Rules engines come up frequently in my domain area. What are some good resources for managing either high volume or high complexity business rules?

Usually these start out as hard-coding, then evolve into a rules engine or framework. Sometimes after the original devs leave, the rules engine turns into a shiny black box that remaining devs are afraid to touch.

What are some anti-anti-patterns?


Be deliberate and skeptical about how and why your rule definition language is a better fit for the problem space than a general purpose programming language. When you find yourself recreating a general purpose programming language, stop. Just drop down to one. Or start with one. A very successful rules engine at my employer is Python minus features, as opposed to the typical "config plus features until it becomes a shitty Python."

Realize that what you are building is a programming language, and create as much of the infrastructure for programming as you can for rule authors (version control, code review, unit testing, incremental deploy, etc).


What about security and safety? A user-provided rule in a DSL has controlled access to the rest of the environment, while a user-provided script in a general-purpose language will have a lot of security and safety issues. Even if you trust the user, there is defense in depth and the safety of limiting accidental damage.


Not sure about Python, but it is very easy to embed Lua in an app in a way that executed scripts have access only to what is deliberately exposed to them.


Sandboxing the code is a solved problem, is it not? There are a number of websites that run code for you somehow.


It's a surprisingly tricky problem, btw, at least for some languages. Here's a nice 2014 talk by Jessica McKellar, "Building and breaking a Python sandbox", that gives insight into some of the pitfalls. Might be "solved" by now though, don't know.

https://www.youtube.com/watch?v=sL_syMmRkoU


Running standalone, throwaway code in a container is very different from securely running a user-provided script within your long-lived application. Think credentials, DB access, file system access, network access.

But you do want to access the DB and write to files and the network, just not anywhere; so you run the script in a different process and communicate via RPC.


I prefer to use SQL because it handles the set definition very cleanly. It is basically the "where" clause of a query. And it's a good middle ground where programmers and business analysts can speak the same language to describe a dataset.

Once the set is defined, the programming language of choice can be used for the action. It could also be SQL, or since we use Spark for a lot of our compute it could be Scala, Python, or Java.

I've been in recent discussions about building a new DSL for this, but I haven't been convinced why we need a new DSL when SQL is already a widely supported DSL.


Bertrand Meyer suggested you split the code for making a decision and the code for acting on it into separate methods. Happens to work pretty well for writing unit tests, too.

If I don’t edit your function, it’s harder for me to screw it up. You can segregate code without pulling it out into an interpreter.
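(A hedged sketch of that split, in a hypothetical Java domain: the decision is a pure function that unit tests can hit with plain values; the side effects live in a separate method that the tests never need to touch.)

    import java.time.LocalDate;

    class Invoices {
        // Decision: pure and trivially unit-testable
        static boolean isOverdue(LocalDate dueDate, LocalDate today) {
            return today.isAfter(dueDate);
        }

        // Action: side effects only, kept out of the decision
        void sendReminder(String customerEmail) {
            // e.g. enqueue an email
        }

        void process(String customerEmail, LocalDate dueDate) {
            if (isOverdue(dueDate, LocalDate.now())) {
                sendReminder(customerEmail);
            }
        }
    }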


That sounds very much like functional programming styles, where decisions are made by "pure functions" and actions are taken by "interpreters" or "effect handlers". The "ports and adapters" and "functional core, imperative shell" approaches are similar.


Thanks for the lead, will read up on this.

In my SQL biased mind I think of the decision as a dataset defined by some filters and joins. When new data meets the criteria, it triggers an action for that set.


Can I ask a different question?

Very often rule engines are introduced with the assumption that an "advanced user" who understands the business side of the issue will maintain (create/modify/remove) the set of rules for the system.

More often than not, it turns out that only devs are able to modify those rules.

So the question is: why is it easier for you to write business logic in some kind of DSL instead of in the actual programming language?


I have never seen this work out as intended - at best it results in developers having to code up things in a limited way through a leaky abstraction. I've never seen users be actually able to use such systems.

The best that can be said is that it sometimes gives the ability to do some customization or extension without having to do a full deployment, but it also locks you into an ever-expanding surface area of code that needs to be supported and increases the chances of bugs from unforeseen interactions.


Constraints. I prefer not having the freedom to do anything I want in some cases. Pretty much the same rationale as the one behind removing the "goto" instruction.


One time we put these rules in a graph database. It worked better than hard-coding and a rules engine.


Ha, funny you should ask... I started a project about a month ago that applies various "rules" to unique resource instances. (Sorry, I'm being deliberately vague as there are IP concerns here.)

Here are some approaches that I find myself gravitating towards with this project (still in design/early dev, with much more to address before this part):

1. Tag-based attributes for resources (as in "metadata"). Tag combinations can be leveraged to apply specific "rules", but the interpretation of those tags won't be a jump table or vector table. If I can't see the domain I'm debugging, neither will the devs who come after me!

2. Taxonomies suck for this kind of thing. Ontologies (what I see as taxonomies with links) may be required in extreme cases, but make sure the full path is always "visible", not just a pointer to a node in the ontology graph. The project where I worked on this for some time was usurped by another that grabbed my attention (I was re-assigned); too bad, I didn't get to follow it through. I don't think this is needed for my current project, but time will tell. (I hope not, as it's "elegant" but not at all "simple".)

3. State-tracking automata may be possible as well: each request to execute rules stores its path through the rule set directly within the request. Easy debugging and visibility. I've not implemented this type of "rule" system before, but I'd like to give it a shot some day. Of course, recording rule paths is time consuming (even if in-memory) and may be overkill.

4. One (simple) example of a rules engine is a command-line tool's argument parser. Does anyone store this data in a jump table? I don't think so (maybe Git or the AWS CLI, but I don't know, as I've not looked). Most solutions have a parse pre-step and then a current-state result, usually followed by a "switchboard"-type rule applier. But that state is usually always available; I think that's simplicity.

In short, for a given set of "attributes" which drive a "policy", I'd prefer to see the code rolled-out and explicit rather than deferring to some dance between a set of "rule tables" and a set of classes that get invoked in "special" ways.
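(A hedged sketch of "rolled-out and explicit", with hypothetical tags and policies: the dispatch is plain, greppable, debuggable code rather than a dance between rule tables and specially-invoked classes.)

    import java.util.Set;

    interface Resource {
        void makeReadOnly();
        void redact();
    }

    class PolicyApplier {
        // Explicit dispatch: the domain is visible right here in the code
        void apply(Set<String> tags, Resource r) {
            if (tags.contains("archived")) {
                r.makeReadOnly();
            }
            if (tags.contains("sensitive") && tags.contains("external")) {
                r.redact();
            }
        }
    }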

Of course, I know nothing of your domain, so please take this all with a grain of salt. I've not worked with more than several dozen "rule sets" before; you may be facing thousands (and it sounds like speed is critical for you as well). Your rules may be transient, time sensitive, and possibly even self-modifying (ugh!). If so, I'd make certain that the paths through the rule sets (if truly required) are well documented and verbose in their logging.


> * AOP for logging; ...

Can you expand on how this works in practice, or link to an explanation? (I assume AOP means "Aspect Oriented Programming".)


Yes, AOP == Aspect Oriented Programming

Logging systems can usually be enabled by package/namespace, and frequently to the class level. I've used AOP to log out the trace of the code (not just the call stack), but can turn it on or off as needed. Authentication acting strange? Enable those logging aspects and browse the log to see where things are behaving strangely, and it may not be in the stack trace of the exception that gets thrown somewhere else. Logging methods and some arguments is hugely helpful; it's like automatic debugging.

A quick googling found: https://www.yegor256.com/2014/06/01/aop-aspectj-java-method-... and https://nearsoft.com/blog/aspect-oriented-programming-aop-in...
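(For the curious, a minimal AspectJ-style sketch of this kind of trace logging; the package name is hypothetical and it assumes AspectJ or Spring AOP is wired up.)

    import java.util.Arrays;

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    @Aspect
    public class TraceAspect {
        // Trace entry/exit and arguments for everything under the auth package
        @Around("execution(* com.example.auth..*(..))")
        public Object trace(ProceedingJoinPoint jp) throws Throwable {
            System.out.println("-> " + jp.getSignature()
                    + " args=" + Arrays.toString(jp.getArgs()));
            try {
                return jp.proceed();
            } finally {
                System.out.println("<- " + jp.getSignature());
            }
        }
    }

Point the pointcut at the misbehaving package, rebuild, and the log becomes the trace; delete or disable the aspect and the production code is untouched.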


Ahh OK. So you mean fine-grained selection and routing of log messages.

I'd never thought of that as AOP, but I suppose it is. And I suppose AOP frameworks provide the machinery to do it, without the logging subsystem having to reinvent those wheels.


I've found IoC mostly useful in my projects. The main problem is that when the business says "we need to have this thing generic", the dev team goes ape-shit with something like public List<T> WhateverGoesInAnythingCanComeOut<T>(List<T> love), when all "generic" means to business people is "copy-shit-paste" and change 2 things.


I find myself somewhat reluctantly agreeing. While I think the SOLID principles and similar things are helpful tools for thinking, I think they should in general take a back seat to the simple principle of Less Code. I’ve seen so many over-complicated code nightmares that were created in the name of “separation of concerns”, or some such, that my spine tingles anytime someone starts talking about some abstract design principles.


> were created in the name of “separation of concerns”

when I hear "separation of concerns", I now immediately ask (internally or externally) "do we need to be concerned about this at all? right now or even years in the future?"

Recent project: untangling something that was written 2 years ago as "best enterprise engineering practices". That dude is gone, and no one else on the team understands it because it's insanely over-engineered. This is literally "send a notification email", and there are about 38 core files, another 25 'helper' files, ~20 templates and a custom template parser/nested template thing, and a few dozen node/npm dependencies. This is a 'microservice' running on lambda. Literally, someone needs to be able to send a notification that "X is ready", and it's this behemoth that no one understands. They do understand that all the templates and logic were taken away from each caller's context - so all the various teams that used to be in control of their own notification messaging now have to wait for someone to dig into this other codebase, which is basically unmaintained now.

It's got its own custom date parser, because... why not? And there have been random "wrong dates" being sent out for months because... no one can quite trace down the root problem. I finally did, and I'm not saying "no one ever can", but it's under no one's direct ownership, and the people that have tried to take it over have just been overwhelmed with trying to trace through it, make it testable, etc.

And last week I heard rumblings that another team is "rebuilding notifications".

It's as much a failure of management/oversight as it is "overengineering" in general, but at each review step (so I was told), things "looked good", in isolation. No one quite pieced together that this was an unwieldy behemoth until after this guy was gone. :/ (no doubt an extremely typical story, but one which, by now, I would have hoped would be 'less typical').


Beware Chesterton's fence. [1]

Is the network always reliable? Are the messages supposed to be at-most-once, at-least-once, or exactly-once? What happens if power is off when the service is supposed to send the email, and then it comes back: should the email be sent anyway? What if there are five thousand emails waiting to be sent? What if "X is not ready" at the moment, but there's a message waiting to be sent?

A lot of that complication might be because, like Spolsky says [2], it "fixes that bug that Nancy had when she tried to install the thing on a computer that didn’t have Internet Explorer".

[1] https://abovethelaw.com/2014/01/the-fallacy-of-chestertons-f... [2] https://www.joelonsoftware.com/2000/04/06/things-you-should-...


One way to keep this from happening is code reviews. Don't show me a presentation, don't show me blocks of architecture, let's open Github and start looking at the code. Nothing formal, just a 10 minute walkthrough and discussion. Unfortunately in many Eng cultures the EMs stop doing this, or do it too late. In my mind this simple idea is similar to the Toyota Production System's "Go and See for Yourself (Genchi Genbutsu)".


People feel bad about publishing a PR and being told "don't do it like this": they get defensive, argue about the sunk cost, throw out references justifying why their approach is Good Actually, accuse the reviewer of not being serious about software engineering, etc.

The real way to project success is egoless programming.


I started on this project a few months back and have had PRs blocked with stuff like:

"in this test file, just use one assertion, don't need to use two"

"there's too many comments here"

"i don't like the extra line breaks in this file"

"bookAdminId needs to be bookAdministratorId"

These are generally comments just on test files.

I've struggled to deal with these because... well... ego is part of it, but when there are bigger structural issues, and the focus is on - imo - minutiae (without much acknowledgement that these are, indeed, minor points), it's troubling. To then be told after a few weeks that you're "not delivering" is doubly so. It's hard to take 'ego' out of this altogether, but thanks for the reminder.


Yeah, this sucks. The egoless approach is to suck it up and make the changes, but also to raise with the team the difference between feedback for implementing the feature and expressing opinions. One thing I liked about Review Board (!) was marking comments as 'Issue' (to make it clear they had to be fixed before merging) vs. not (making them just opinion/non-critical feedback). Though in a team of pedants you would find every comment being marked as an issue.


what's "review board"?

Also, it's triply bothersome because... I've been doing this > 20 years. That certainly doesn't mean I don't make mistakes, and am never wrong, but it does get tiresome to have whitespace arguments take up time two decades in. On top of this, there's actually an automated linter that we all adhere to, but these are 'extra' style things that someone likes to focus on outside of the linter. It's just wasteful of the company's time, more than anything else.


Review Board is a Django-based code review tool, like if you had a separate service for merging PRs in Github? It does a lot of things wrong (e.g. doesn't keep track of commits in a change request, flattens all the changes) and 2 or 3 things right (fix-n-ship, multi-line comment)


Stuff like this is why I've become a huge advocate for automatic formatting tools. If you can set rules for naming conventions, line breaks, etc. then it significantly cuts down on the minutia arguments.

It doesn't fix everything, of course. Someone can still argue about using too many assertions or comments but it at least helps. If someone makes an argument about formatting then you can just say "well, Tool X formatted it this way so this is how it will be. If we think that should change in the future then let's open a different issue and discuss that."


Man, I feel for you. From all you've said, you could be working with an ex-colleague of mine. I inherited _his_ codebase when he was let go, and had to re-write most of it; there was no way to fix it in a reasonable amount of time, if at all.

But the same type of things on the PR comments, the over-architected, highly-coupled monstrosity that never quite worked.

In the end, we ended up with 6K fewer lines of code (out of a total of 16K or so), and the thing actually works now. Go figure...


Code formatting tools/linters and static analysis tools can deal with the bulk of this before it even gets to review.

For the small stuff that remains, ideally the reviewer can fix it - the reviewee is probably back in flow on another task, and it's irritating to have focus broken for inconsequential things.

As a side note, I like how github implemented this as "review suggestions", where the reviewer can make small changes inline and the reviewee approves them with a button push.


IIRC, code reviews were done, just... no one really saw the entire 'big picture' and how unwieldy this was until the very end.


I wonder how much of this is driven by the interview culture of forcing people to write complex solutions from scratch in isolation.


After thirty years of software development, the one principle that keeps recurring is Donald Knuth's "Premature optimization is the root of all evil".

It applies on so many levels, including whether to go all in on any software engineering methodology. For instance, applying SOLID principles to your personal side project is counter-productive; but if it grows into a multi-developer project you might well gain efficiencies by rationalising your code structure.


Yep. As another old-timer, pretty much the only principles I have seen that consistently pay off are KISS and YAGNI. And the problem is nearly all of this SOLID type stuff is directly contrary to that.


I've found it important to include the context of that quote:

> We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

That 3% can be significant.


Yes, I’ve had to find a new job many times because we waited too long to fix real problems that hampered our ability to make progress.


Sounds interesting, could you elaborate? Why did you have to find a new job? What were the problems that ruined your project?


If my competitor can add features faster than us then they will pull ahead and there’s very little we can do about it.

I’ve been on both sides of that. Sometimes on the same project (quick hacks up front leading to calcification and death).

If each feature takes longer than the last one it becomes hell, and you don’t really want to stick around for that.


I think I agree too. Maybe not disdainful, but certainly skeptical.

I put things like SOLID and design patterns in the same bucket as scalable agile, and similar prescriptive practices. To me, they seem more concerned with organisation as opposed to exposing real business problems so that you can solve them, and as a result you're wrestling with the abstraction and the glue code more than the business logic.

A finite state machine is my favourite example, as I've seen all kinds of 'well-organised' setups that could have been massively simplified as an FSM.
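(For what it's worth, a minimal enum-based FSM sketch in Java, with a hypothetical order workflow: the whole state space and every legal transition sit in one readable place, and anything illegal fails loudly.)

    enum OrderState {
        NEW, PAID, SHIPPED, CANCELLED;

        // The entire transition table, visible at a glance
        OrderState on(String event) {
            switch (this) {
                case NEW:
                    if (event.equals("pay"))    return PAID;
                    if (event.equals("cancel")) return CANCELLED;
                    break;
                case PAID:
                    if (event.equals("ship"))   return SHIPPED;
                    if (event.equals("refund")) return CANCELLED;
                    break;
                default:
                    break; // SHIPPED and CANCELLED are terminal
            }
            throw new IllegalStateException(this + " cannot handle " + event);
        }
    }

So OrderState.NEW.on("pay") yields PAID, and an unexpected event throws instead of silently threading through a "well-organised" class hierarchy.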

Whatever the right architecture and tech is, I've become more convinced over time that the easier it is to delete bits of your code, the better. And that doesn't always mean you go full-on OOP. More to the point, 'maintainable' is massively subjective. Many of the metrics we use to measure that are utterly shallow.


I agree, mostly.

The more experience I gain, the less I'm concerned about refactorable, local issues. Distributed systems are hard. Scaling is hard. Data consistency is hard. Security is hard. API design is hard. Refactoring a local code base to extract methods and implement interfaces, much less so.

These days, if the biggest problem we have is that a monolithic service is badly coded (but is correct and works), that's something I'll take.


I'm disdainful of SOLID too, and I appreciate that this has finally been pointed out. Principles are in fact moral high grounds, without telling you what to do.

For example, suppose you need to resolve the expression problem [0] in Java while staying compliant with the Open-Closed principle. Then the answer could be Java code like in this article [1] (the technique is Object Algebras).

However, the moral for me is: if I'm using Haskell or Ruby, where expressing the Open-Closed idea is simple, I would use it. If I'm using Java, just forget the principle. I would just express the business logic, and change the code when I need to add new types or operations, because code written around the principle is not understandable to other people, just like codebases that abuse design patterns.

Another thing is the Liskov Substitution Principle. I found that most of the books covering this idea don't even mention how to judge whether a type is LSP compliant. Nobody I know who talks about the principle has heard of covariance and contravariance, or has realized that almost all mutable data type inheritance is in fact not LSP compliant.

[0] https://en.wikipedia.org/wiki/Expression_problem [1] https://koerbitz.me/posts/Solving-the-Expression-Problem-in-...
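(The classic illustration of that last point, as a hedged Java sketch: a mutable Square inheriting from a mutable Rectangle breaks clients that rely on the base class's contract, so the subtype is not substitutable.)

    class Rectangle {
        protected int w, h;
        void setW(int w) { this.w = w; }
        void setH(int h) { this.h = h; }
        int area() { return w * h; }
    }

    // Looks like an "is-a", but mutating one side must mutate the other
    class Square extends Rectangle {
        @Override void setW(int w) { this.w = w; this.h = w; }
        @Override void setH(int h) { this.w = h; this.h = h; }
    }

    class Client {
        // Holds for any true Rectangle; returns 16, not 20, when handed a Square
        static int expectTwenty(Rectangle r) {
            r.setW(5);
            r.setH(4);
            return r.area();
        }
    }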


I was active in the XP mailing group when things like SOLID got popularized, mainly by consultants. Of course the principles had been around much longer in various forms. To me, there seemed to be a distinct difference between what some of the consultants focused on (and the books they ended up writing) and what people using XP techniques did while trying to work out the best design approaches for actual production software. Some of the book writers/consultants were much better at actually observing (or having direct experience of) real issues and how to deal with them.

But actual software design did not center around things like SOLID; those things just fell out of approaching design in a particular way. I find the language too strong ("laws", "principles"). Instead, as you find ways to construct modular and composable software, you'll notice that things tend to have a single responsibility, and that "modules" tend to become stable and extendable through other "modules" of software. They are side effects of good design, not things to focus on, except that when a design is not working well they might provide an insight into why it's suffering.

Which is all a polite way of saying it is mostly crap and the insight of people with real problems and who dealt with those problems is mostly ignored.


I agree to some extent. I think in general they're good principles, but I think people tend to take them as gospel and forget about code readability and complexity in order to adhere to them. It's important to understand the trade-offs involved, and to understand that different problems will require different solutions.


I wonder if this is because you used them, learnt from them, internalized whatever design intuitions you could from them, and have moved onto higher levels of understanding.

Expert bike riders disdain using training wheels, but only after they've learned how to ride using them.

---

(please hn don't derail with "well actually"s about training wheels)


Yeah

> write code that's easily understood
> write code where things are where they're expected to be
> write code that can be adjusted and extended quickly
> write code that can be adjusted and extended quickly without producing bugs
> write code that allows for implementations to be swapped out (think swapping out Email APIs, ORMs or web server frameworks)

The problem is to further these goals they introduce abstraction and complexity which make the code harder to change, harder to reason about, harder to test. And a lot of times you end up taking one step forward but two steps back.


Programming paradigms are like diet plans from weight-loss companies: most of the time they're just the original problem repackaged as a solution, with a snazzy name and aspirational slogans. The difficulty of the endeavor basically gets hidden in the contrapositive. "Eating meals that we provide" sounds easy; not eating what you want to eat, that's the hard part. Writing code that meets certain standards is easy; not writing code that doesn't is incredibly hard when you have hard requirements and delivery dates.


Agreed. Adherence to dogma like this is cancer.

I was rejected from a pretty big company, after clearing the other stages, because I wasn't able to recite SOLID. It's like asking people if they know "design patterns" and expecting them to come up with the textbook answer from the GoF book.

Telling people that you need to use OOP or DI or any fixed software methodology is just stupid.

None of this crap is used in real companies, the good ones anyway. You should be flexible, open, and strive for maintainable, consistent, and simple design.


The most important rule is understanding the context of any design guideline, rule-of-thumb, "pattern" or "best practice".

Something like the Open/Closed principle is a really great principle for the kinds of projects it was created for. It is just that for 90% of software development it is not appropriate, and therefore leads to needless complexity.

Discussing whether SOLID is good or bad in itself does not make any sense IMHO.


Totally disagree, and I am rather dismayed by people conflating their bad experiences of others over-engineering with the principles themselves.

Like anything else, they are good rules of thumb when scaffolding your projects.

Understanding IoC with DI has cleaned up my code so much, and I know it works because I can unit test it easily.


Seconding this. I used to really care about code maintainability, cleanliness, and other topics super important to us engineers.

4 years in The Valley have quickly disabused me of such foolish, naive notions. Your code (outside some common sense basics) is completely irrelevant. Whatever you can stick together with duct tape and chewing gum is almost always the best solution. All the business wants is speed. By the time "bugs" is the highest reason for churn ... well, you're likely going to be on your 5th employer by the time your first employer reaches that point.

Absolutely nobody in the real world cares about the quality of your code. They just want to do their job with as little interaction with your software as possible and move on.

It's kinda sad really how much time we waste talking about the best way to hold a hammer, the best hammer you can use, the best nails etc. when all our customers want is the picture to be up on the wall already.

Edit: One important addition -> If you're senior and you see juniors writing "bad code" on your team, your job isn't to teach them why it's bad. Your job is to create systems that make good code easier and more obvious to write than bad code. Be the force multiplier.


Not everyone works in "The Valley".

The vast majority of software is not written with that amount of churn.

The end user doesn't care about the quality of the code, but you can be damn sure they care about the quality of your software.

Also, talking about hammers is overly reductive. Software is far more complex than hitting something with a hammer. Builders might not talk about the best way to hold a hammer, but they constantly talk about the best way to build a house.


> The vast majority of software is not written with that amount of churn.

My current employer has code that is 6 years old. I had coffee with a former colleague and they are still running code from when I was there, and I left 5 years ago.

Code lives longer than you think.


6 years is still young for code. My current employer has code that's 13 years old. My last had code that was nearing 20.

And there's nothing wrong with that. Old code isn't "bad". In fact, it's almost certainly better than new code, because it's been battle hardened. It's got that fix in it for that weird edge case.


The last company I worked for was just trying to move off of a PowerBuilder + SQL Server 2008 system that was written by contractors in 1999 and maintained by two developers who had been there for 20 and 14 years.

What was wrong with it:

- neither the version of PowerBuilder nor the version of SQL Server was supported by their respective companies.

- they were dependent on the two developers not getting hit by the lottery bus. Good luck trying to hire someone who knew PowerBuilder and/or was willing to learn it.

- the only way for the remote offices to access the program was through Citrix terminals.

I was originally brought in to lead the effort to modernize the system [1] but they had a change of plans when I got there and I ended up leading a completely separate effort.

[1] the plan I proposed was to upgrade to a newer version of PowerBuilder that could expose the system as COM objects, write a C# Web API around it, write automated integration tests calling the web service, and slowly migrate the PowerBuilder code to C#, calling the underlying stored procs directly from C#.


I'm not saying that old code cannot be bad, merely that it's not bad because it's old.

Your approach seems like the right one.


Some code lives longer than you think it will, and some dies really quickly. The problem is that you never know which is which.


This seems related to the problem of how we have no idea how to hire people. I don't want to hire or work with people who are on to their fifth job by the time their crappy work starts causing major problems for the company that paid them to make it. But asking little whiteboard coding questions gives me zero signal to determine that.


It has null to do with "The Valley". A young startup should probably take velocity and "just ship" over any other consideration. But new startups are not a very big or important part of the IT industry.


Have you ever worked on the same project for more than a year?

That's the sort of timescale where you notice quick hacky solutions come back to haunt you later. Sure you will deliver things quickly at first, but when you do have to modify things, or fix your own bugs it becomes a nightmare.


Yeah, I've been maintaining the same software for 4 years now. Honestly, the elegantly designed, beautiful parts of the system have been the hardest to maintain. They end up too specialized to a single use-case and you have to keep rebuilding them from scratch.

Meanwhile the hacky code keeps chugging along happily and dealing easily-ish with any new requirement you throw at it.


What a terrible attitude! You're happy to write shit code because it'll be someone else's problem to deal with by that point?!

I can understand not being dogmatic about SOLID, design patterns and the like, but doing any old shit because you can get away with it is pretty disgraceful IMO. That kind of behaviour certainly wouldn't fly outside of "The Valley".


The customer does generally prefer it if the picture doesn't fall off the wall the moment you leave, though.


This is where the "some common sense basics" part comes into play.


Even using the "right" architecture or tech stack doesn't matter that much. All business problems can be solved with any language, with a monolith or a fleet of microservices, it doesn't matter.

What matters is to ship code that brings a solution to the client's problem.


These principles are the first thing thrown out the window for new apps. I've been developing for around 15 years and I've noticed the excitement of new devs on new projects who'll rush out and just throw something together quickly to solve the immediate problem. I've done this a lot too.

This gets you a long way very quickly, which is proof that none of these principles really matter. Development continues along this path at a cracking pace for 6 months until complexity spikes and your productivity is at a crossroads. Do you take a step back, refactor, restructure so the code can be built on with any degree of velocity?

Hell no! Everyone's expecting the same output as when you started, so now you have to start copy/pasting everywhere trying to keep up, but your bug tickets have started ticking in so you have to address those. But you don't have tests, so now each bug fix raises two more bug tickets. Uh oh.

Better get more devs. Now you have people who don't understand the system making changes. There's no real structure so they just squeeze code in where they can, but progress is slow because it takes them five times the amount of time to reason about what the system does and bugs are still going out. Must be bad developers. Better hire a QA team to stop all that.

QA is reporting defects with each release candidate, and puts it back in the dev queue. After a year each feature is taking months to go out. The business just wants some simple changes, why is that so hard?

Enter the hotshot dev. They whip up a quick MVP with the latest language/framework plus it's serverless! No SOLID principles but that doesn't matter because the latest technology will fix everything. Better fire the old team and start fresh.


Agreed 100%. I've worked with legacy code at different stages:

1 to 2 years: very little structure, but velocity still matters a lot; complexity is not very high.

2 to 5 years: first customers; copy/paste designs and complexity arise.

5 to 10 years: I've observed two different trends. Where refactoring happened, the product is still not in very good shape, but it is ok. The others are garbage products with millions of lines: global variables, if/then/else copy/paste design; they don't scale, break every time for no obvious reason, and will take years to stabilise without too much churn.

So every time I'm working on a project, I plead with managers and devs to allocate time to refactor, follow those patterns, implement unit tests, etc., and every time, at whatever stage, the team eventually realizes the benefit. So yes, without a doubt, good design patterns are key to making our dev lives as easy as possible. And the SOLID ones are definitely my favourites.


Thanks for saying this; it's especially strange to see so many people disparage SOLID as if they have outgrown the concept just because it was invented some years ago.


You have missed an important step: once productivity goes down because complexity spikes, the original developers flee the company, if their CVs allow them to.

They are then replaced by new hires who got fed lies in their job interviews. The managers had to lie to get new developers for their shitty codebase.

That's where development time and costs double while quality is bottom of the barrel.


Surely we can do without the SOLID principles this time, right? right?


I am surprised by the number of people here with a lot of relevant experience and opinions like "to hell with principles and patterns, just make your software simple". Most software just cannot be made simple. I'll go further and say: you cannot make a living with simple software, because there are a lot of players around who will take your simple software and make it into more profitable complex software.

SOLID is essentially a way to make complex software simple to understand. Yes, you have to write 5 classes to introduce a new type of persisted entity, but they are simple classes, mostly boilerplate. Complexity is not in the number of LOC or files; it's in business processes, their interdependencies, "when this then that, except those" and so on.

With SOLID (coupled with DDD) your hiring process becomes quite simple: you check basic knowledge of the language, performance tricks and caveats (like the n+1 problem), and understanding of the principles with examples, and voila, you have a new team member. On the first day you show him or her your domain dictionary and give access to git and the ticket system, and you will get good-enough code fixing bugs or introducing features in a matter of days. And it doesn't matter if your code base is 10 years old, as long as you have followed those principles yourself for all that time.


I really can't write code other than a) experimental (when testing a new library that encompasses a totally new concept) or b) absolutely readable and SOLID. a)-code never makes it to production before it becomes b), since once I understand the problem and my solution I can't leave dead or unreadable code.

I've seen people that write nice code by day check in horrible entangled vomit because they wanted to make a deadline. So the next time I pull it, "it has just a few issues" and it takes me two days to sort out their mess where just writing it cleanly myself might have cost me only one day.

My code just looks the same no matter what pressure. I just don't believe I go faster by making code harder to understand and maintain.


> I've seen people that write nice code by day check in horrible entangled vomit because they wanted to make a deadline. So the next time I pull it, "it has just a few issues" and it takes me two days to sort out their mess where just writing it cleanly myself might have cost me only one day.

I've checked in "bad" code before. Maybe I can shed some light on the other side of this.

The business needed a feature a week ago, because all their marketing material had a hard date on the go-live date for this feature. They could not miss that launch date.

Unfortunately the dev team was overloaded and they couldn't deliver all the features on time. So I, a consultant, was tasked with a few last-minute things.

Fortunately this new feature was very similar to an existing feature. It used a lot of the same code as something already in the system. So I copied that into a new class (C#), made some (very minor) adjustments, and checked it in.

We were able to launch the product on time. But now there was quite a lot of code duplication.

Yes, I could have just "written it cleanly myself". But this was a tremendously complex product. We had unit tests, integration tests, and more. But we still had to perform hours of manual testing on any code that changed prior to a release. Refactoring the old code would have introduced more testing requirements, which slows down the entire release cycle - the risk was too great for our timeline.

It's important to consider the total cost of your work, not just the amount of time a certain task takes you, individually.

It's also worth noting, we made a new Jira ticket about the code duplication right away, and we were able to fix the code duplication in the next sprint, after the product release. The next iteration, 1.0.1, had the "correct / ideal" code.


Yeah, duplicating code like that by itself isn't quite what I was getting at. I mean one big god class full of flags, shared by two quite distinct views that happen to share just a bit of functionality. In fact, duplicating code in that particular scenario would perhaps have been the saner thing to do, but that didn't happen.

It just doesn't help me ship something faster to slap together crap. "Create new file" takes 15 seconds; adding a function there and calling it from the original class I was working in, another 15. Debugging sessions to figure out where what flag in what condition was misplaced take 10 minutes. Every time. Where am I really winning time?


SOLID is... well, "solid" advice. But it absolutely should not be applied dogmatically - the same principle holds true for design patterns, agile processes, and the like.

It seems like most devs have some kind of epiphany when they learn about concepts such as SOLID, and apply them with zeal. It's almost like a stepping stone to learning what SOLID is really about - simple, clean, maintainable code.

At some point this realisation dawns, and the dev reaches a zen-like state, where the tenants of SOLID are obvious, without the need to consciously think about them - it's at this point that the dev realises the error of their ways.


tenets


Was on mobile, but thanks :)


This is how SOLID, or rather its most zealous proponents, go from reasonable but limited capabilities to claims that cannot be sustained.

While small classes and methods can be (though are not necessarily) at least superficially easy to understand, this does not guarantee that the system as a whole is understandable. This is because a lot of the complexity is in how the classes interact with one another. There are contracts and constraints that have to be observed, that are inherently about more than one thing, and so cannot be localized in one method.

This approach, applied to writing, would be to allow only kernel sentences using common words. This is not going to be enough to make a complex topic simple to understand, and, if applied zealously, is likely to have the opposite effect.

Unnecessarily complex code is not a consequence of not following simple rules; it is usually a consequence of trying to write simple code but not understanding all the issues. There is nothing about SOLID that guarantees that unnecessary complexity will be avoided. Your idea of turning programming into something like painting-by-the-numbers will not work.


In SOLID there is S for "single responsibility", which gives us simplicity at the component (class/object/file, depending on language) level but leads to complexity in the number of components: 100 tasks give us 100 components, and then the software grows to 1000 and even 10000 tasks.

But then comes the letter D, for dependency inversion. This principle gives us back simplicity by abstracting away any set of those components as meta-components like repositories, queues, services, factories and so on. This is about interfaces, e.g. contracts, as you call them.

My zealotry comes from about 30 years of writing and SUPPORTING code. SOLID code is harder to write, but much easier to maintain. All those war stories about "the business needed that feature a week ago" come from unmaintainable code, where you just cannot add new features by any means except spaghetti with epoxy glue and nuts and bolts all around.

Business wants to move fast and break things. Our mission as software engineers is to create and maintain codebases solid enough (pun intended) to sustain years of development, pivots, M&As and whatnot.


"But then comes letter D for dependency inversion. This principle gives us back simplicity due to abstracting away any set of those components as meta-components"

I honestly don't understand how this produces simplicity. Could you possibly elaborate?


I'll try; it's actually just what we did at my job this week.

For a long time we had a single Cassandra database for write-performant storage in our project. Let's say it was storage for Foo objects, which we have to write at thousands per second and read back as aggregated stats: total numbers, numbers per parent object, writes per second/minute/hour, etc.

Over time its read performance degraded. Also, from time to time the database would dive into some internal service state (like index compaction or something like that), which further affected performance. In the end the Cassandra cluster consisted of 30 hosts, which cost us about US$1000/day. So we decided to cut it off and swap it for MongoDB, where we have much more in-house expertise.

In our code we had a service class, let's call it FooService. All operations on Foo objects (write, read, stats, delete) were made via that service object, like FooService::addFoo(Foo, Bar, Baz). Any business logic class wanting to deal with Foo objects just declares FooService as its dependency, which is injected at runtime when the business logic class's constructor is called. So when a certain code path doesn't need Foo objects, the system doesn't have to initialise FooService at all.

FooService, in turn, depends on FooRepository, which is a descendant of the abstract CassandraRepository, which has code to connect to the cluster, send requests, handle errors and so on. CassandraRepository is a descendant of AbstractRepository, with methods like getItemById, getItemsByIds and so on.

The overall LOC count in our project is about 400k. It is not very complex, but complex enough to get stuck on a task like this if your architecture is not clean enough. But in our case the task was super easy:

1. Spin up a MongoDB cluster with just 7 hosts (master, 3 sharded replicas and 3 shadow replicas).

2. Inherit FooNewRepository from MongoRepository, and point it at the new cluster.

3. Inject FooNewRepository into FooService and start performing mutating operations (create, update, delete) against both the Mongo- and Cassandra-bound repos.

4. Migrate historical data with a script using the same common codebase (this took about 3 weeks).

5. Switch reads to FooNewRepository.

6. Check that everything is okay.

7. Drop all Cassandra-related code from the codebase and rename FooNewRepository to just FooRepository.

Because FooService declares its repository dependencies as AbstractRepository, ALL the new code except FooRepository looks EXACTLY the same as before. The team noticed the changes only because of our weekly all-hands, where we discuss all such matters. All the coding took about 2 days net, most of it the migration script; so with about US$750/day of cost cut, the work paid for itself in a matter of days.
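(A skeletal Java rendering of the hierarchy described above, with class names taken from the parent comment and all bodies stubbed: because FooService only ever sees AbstractRepository, the Cassandra-to-Mongo swap is contained in one subclass and the wiring.)

    abstract class AbstractRepository<T> {
        abstract T getItemById(String id);
        // getItemsByIds and friends would live here too
    }

    abstract class CassandraRepository<T> extends AbstractRepository<T> {
        // cluster connection, request sending, error handling
    }

    abstract class MongoRepository<T> extends AbstractRepository<T> {
        // same contract, different backing store
    }

    class Foo { }

    // was: extends CassandraRepository<Foo>
    class FooRepository extends MongoRepository<Foo> {
        Foo getItemById(String id) { return new Foo(); } // stubbed
    }

    class FooService {
        private final AbstractRepository<Foo> repo;

        FooService(AbstractRepository<Foo> repo) { this.repo = repo; }

        Foo findFoo(String id) { return repo.getItemById(id); }
    }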

And now the cherry on the cake. When we tried to shut down the Cassandra cluster, we got errors in another instance of our service, which used the same database but had been forked about 6 months ago to temporarily serve another country/language. We turned Cassandra back on, I checked out that fork, cherry-picked only the Cassandra->Mongo commits from the master branch, ran the migration script (there was much less data for that country), and voila: all tests passed like a charm, and the new code is working now (we will migrate it to the full master branch later).

edit: formatting and grammar


Thank you, the design you described looks natural to me, but I don't understand how this reduces the complexity coming from having lots of objects.

If I understood you correctly, the repository abstraction hides only one object: a Cassandra or MongoDB repository.

Don't get me wrong, I'm not attacking your point (in fact, I think we most likely would agree on almost everything), I just don't understand what you mean.


I agree; it seems that SOLID does a good job of marshalling the complexity out of individual elements into the system as a whole. This can be a good thing, if the system design itself was created with SOLID in mind, and is well implemented. However if the system design didn't account for this complexity shift, it will likely cause a lot of mayhem.


"if the system design itself was created with SOLID in mind"

Could you possibly elaborate what you mean by that?


Yes! In part I kept that ambiguous, because I've observed many edge cases and was trying to capture the overarching theme. The core idea is that often a project is designed without respect to the "design guidelines" that developers are asked to adhere to during implementation. I see SOLID as a set of design AND implementation guidelines; it contains rules for both the global and local systems. I've observed that often, the design of a system framework ignores the design portion, but requires the implementation portion (I've done it myself, and wondered why my project turned into a hot mess even though I followed all the rules!).

With SOLID, complexity is pushed out of individual components and into the global scope. Instead of having a component manage multiple responsibilities, it must have only a single responsibility; now you have multiple components each managing single responsibilities. Some problems that can arise from this are multiple classes with shared or overlapping responsibilities, classes that only handle edge cases, data inaccessibility, and data and code duplication... This is different from (but related to) abstraction, in that you aren't hiding implementation; instead you are dividing it. With that division comes some non-trivial overhead, which the global design of the system needs to have planned for. This is not necessarily a bad thing; it reduces the burden of designing each individual component, while increasing the burden of designing the global system to manage how the components must interact.

If, in designing a system using the SOLID principles, you do not plan for how much of the complexity is being pushed towards the global scope, you can have a huge codebase that isn't cohesive enough to function as a whole.

That might've been too much elaboration, but it is kind of difficult to explain a portion without being thorough.


I don't know that it's so much that the essential complexity is being pushed into the global scope - a global constraint, for example, is global regardless of how you implement the system - but factoring into tiny modules does put more layers below that level. That is not the wrong way to do things, but it has not gotten rid of the complexity.

And using only small classes and methods does not guarantee the elimination of accidental complexity either; I have seen enough cases where the code has become a sort of shell game, where you go from class to class looking for where something actually gets done. As a rule of thumb, code with classes having 'helper' in their names probably has this problem, though it would usually be easy enough to rename them to conform to the single-responsibility principle.


I'm a DDD convert, but boy do I still have trouble selling it. "why do we need a ubiquitous language?" "why can't I jam everything into a single domain?" "why can't everything be an aggregate root?" "It's too complicated, why can't I just PATCH my data directly into the database from API?"

Honestly for me it's these abstract principles and practices that outweigh any flavour of the month technology. DDD encourages code to model the business domain, meaning the complexity should equal that of the business. A lot of problems I see with apps are due to reducing complex domains into CRUD ops, then trying to build the complexity on top as an afterthought.

Also, a DDD helper package for node - https://github.com/node-ts/ddd


> you cannot make life with simple software, because there are a lot of players around who will take your simple software and make it into more profitable complex software.

I believe you are conflating simplicity and value. Someone may take one's solution and add value, and with it complexity inherent to that value, thereby increasing revenue. Adding (incidental) complexity, without adding value, will make your solution less profitable.

edit: formatting


Maybe you are right, but I honestly don't see how one can add value without adding complexity. Take any real-life example: pizza vs plain bread, car vs bike, piano vs harp. There is still demand and there are markets for simple things, but they carry much lower profit margins than complex goods.


I can't tell if you're being sarcastic or not.

There is irony in that if you're not being sarcastic.


Sorry if my text sent a mixed message; alas, English is not my native language. No, I wasn't being sarcastic at all: I truly believe in what I wrote, and I practice it daily in my code and during code review.


Some constructive criticism:

    abstract class Employee {
        // This needs to be implemented
        abstract calculatePay (): number;
        // This needs to be implemented
        abstract reportHours (): number;
        // let's assume THIS is going to be the
        // same algorithm for each employee - it can
        // be shared here.
        protected save (): Promise<any> {
            // common save algorithm
        }
    }

    class HR extends Employee {
        calculatePay (): number {
            // implement own algorithm
        }
        reportHours (): number {
            // implement own algorithm
        }
    }

    class Accounting extends Employee {
        calculatePay (): number {
            // implement own algorithm
        }
        reportHours (): number {
            // implement own algorithm
        }
    }

    class IT extends Employee {
    ...
    }
This is still a violation of SRP. IT does not care about the employee's pay and shouldn't be forced to implement it.

Modern OO adds an additional rule to SOLID: favour composition over inheritance.

HR and IT should not extend from Employee. HR should be more like

    class HR {
        calculatePay (e: Employee): number {
            // implement own algorithm
        }
        reportHours (e: Employee): number {
            // implement own algorithm
        }
    }
The first question you should ask when deciding to use inheritance is: do I really need inheritance here? Then, ask yourself again: Is <childClass> a <parentClass>?

In most businesses, HR is a Function or maybe a Department. HR might have Employees but HR itself is not an Employee.


Every employee needs to be paid, even if the pay is 0. The original class was correct (a class is a namespace; beyond that, it's functionally dependent on the language implementation).

The implementation of calculatePay is extensible as it stands:

    public Number calculatePay (Department department)
You delegate to the class (even if the interface is optional):

    Number val = defaultPayscale();
    if (department != null) {
        val = department.getPay();
    }
    return val;
Because EVERYONE expects Employee to have a calculatePay of some sort, for accounting software. The idea that you want to smear a concept across domains for a misguided utopian design is over-engineering. Programmers have average intelligence, so aim for supporting a simpler mentality, regardless of your beliefs.


My point is that IT doesn't (and shouldn't) need to know about employee pay. Extending the Employee class to HR, Accounting and IT is a bad design.

You're violating ISP here. The Employee class can implement IThingThatGetsPaid, but IThingThatHasNetworkAccess should know nothing about pay.

The Employee itself should have the minimal amount of information it needs. Prefer interacting with an object through a small API surface. It's not "smearing a concept across domains", it's the exact opposite. It's confining a concept to the appropriate domain.

The original design pollutes the Employee interface with multiple concerns.
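Roughly what that segregation looks like, as a hedged TypeScript sketch (everything beyond the two interface names above is invented for illustration):

    interface IThingThatGetsPaid {
        calculatePay(): number;
    }

    interface IThingThatHasNetworkAccess {
        grantAccess(resource: string): void;
    }

    // Employee can implement both, but each consumer sees only
    // the narrow interface it actually needs.
    class Employee implements IThingThatGetsPaid, IThingThatHasNetworkAccess {
        constructor(private hourlyRate: number, private hours: number) {}

        calculatePay(): number {
            return this.hourlyRate * this.hours;
        }

        grantAccess(resource: string): void {
            console.log(`granting access to ${resource}`);
        }
    }

    // Payroll code never learns about network access, and vice versa.
    function runPayroll(payee: IThingThatGetsPaid): number {
        return payee.calculatePay();
    }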


> IT doesn't (and shouldn't) need to know about employee pay

Project budgeting is one concern. You may not need to know the exact cost, but having an hourly rate is useful.

"A,B & C 80 hrs a piece, how much for the budget ?" Don't know have to ask HR for ....


I don't know where you work, but I generally find that employees do not calculate their own pay; apart from maybe reporting hours, they tend not to be involved in the transaction at all. HR does the work and deposits the pay into the employee's bank account.

If we subscribe to the idea that OO should model the real world (I don't) then the HR class is definitely where the work should be done.


> employees do not calculate their own pay

My paychecks list the hours and rate paid, more often than not. But dispensing with that (because it's not relevant), a class isn't an actor-model. The tried-and-true Class Car->Drive() example (the car is doing something) is a contrived way to illustrate one of the ways a class can be used, as a learning tool. You don't want to create systems with that mentality, as it will always disappoint and spawn countless new webs of inheritance as time goes on.

This is all hand-waving, but that's a rather important issue to realize. There's no science behind design yet, and that hasn't changed since the late 90s (when I started). My experience teaches me one thing, just as yours teaches you another. I am afraid we'll just have to agree to disagree.


I would augment that to "ALWAYS favor composition over inheritance"

IMHO SOLID is nice, but I always struggle to remember what the acronym stands for. I mentally translate it to just optimizing two easy-to-remember metrics: classes should have low coupling and high cohesion. This also applies to functions, not just classes. E.g. functions with lots of parameters are a problem.


This is a big part of Liskov substitutability.


The author uses the more complicated definitions of the SOLID principles.

These are much simpler to remember:

* Single responsibility principle - A class should have only a single responsibility, that is, only changes to one part of the software's specification should be able to affect the specification of the class.

* Open–closed principle - Software entities should be open for extension, but closed for modification.

* Liskov substitution principle - Objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program.

* Interface segregation principle - Many client-specific interfaces are better than one general-purpose interface.

* Dependency inversion principle - One should depend upon abstractions, not concretions.

A lot of these principles also tie into other principles, such as DRY (Don't repeat yourself).


That is the cleanest, clearest collection of SOLID definitions I have seen.


They come directly from Robert Martin's 2000 paper, 'Design Principles and Design Patterns'[1]

[1] - https://web.archive.org/web/20150906155800/http://www.object...


I stole them from Code Project :D


There is a fantastic talk by Kevlin Henney where he deconstructs the SOLID principles as each being either misunderstood when they were written - or frequently misunderstood after. You can watch it here:

https://vimeo.com/157708450

I won't spoil it by writing all the points here - just some pearls.

* The Open/Closed principle is redundant because it is already covered by the Liskov substitution principle.

* Bob Martin's view of the Dependency Inversion principle is in fact the Single Responsibility principle.

He suggests 5 more principles - but I won't spoil it by listing them here. Go see it!


My 2c to teams attempting to write SOLID code: Please make sure that the entire team that touches the code base understands what makes it SOLID-compliant (Don't like the term compliant, but it'll have to do). SOLID is more a framework for communication between engineers than anything else.

For example, if one engineer thinks class X has a single responsibility, but the name can potentially be interpreted differently, then unless she's on the job reviewing every commit, the principle will soon get violated. However, if the team is open to having a healthy debate and iterating until they mostly agree, then SOLID raises the team's understanding of their code and THUS the quality of the codebase.


Here are two concrete problems that these general design articles never answer:

1. How to deal with global dependencies like logging, internationalization, application "constants" (not always constants in the programming language), preferences, ...? If you inject these as dependencies into every module or package, then you end up with a convoluted mess of initialization parameters and dependencies. If you make them globally accessible, re-usability is lost almost totally. In both cases, the code depends on the preference system and the particular values of preferences. (In a real-world application almost every "constant" has to be parametrized as a preference.)

2. Closely related to this, how do you deal with miscellaneous data that any real-world application needs, but that pollute and "impurify" your objects and classes. Typical examples of this second category are binary data from rich text and other potentially volatile and framework-dependent data---often, but not necessarily only GUI-related. If your application is database-driven, you have to store this data, which usually means you also have to store it in a class or struct in your programming language. For example, you might not just need to store an employee name, you might also need to store employee name richtext data, employee name display flags and filter category information, and so on. Trying to strictly separate model data from view-related data basically just gets you twice as many classes and tables for a little bit of gain in "purity." That is true even if you follow a strict MVC pattern (which I usually do), and in the end you store all of the data in the same database file anyway.

I'm currently using Go, but the same problems also came up in Racket in the past. I've never read any software design article that provided concrete solutions to these problems. I wish those articles would stop discussing toy examples about cars and employees and instead explain how real-world problems are solved.


1. Decorators (there's a sketch after this comment). See https://blog.ploeh.dk/2010/04/07/DependencyInjectionisLooseC...

2. Yes, you need a lot of model types (groups, layers). You need the view models, the database models, the business logic models, and possibly others. Sure, that's annoying at first because they're pretty much identical. That's fine though, because in that case you can just use AutoMapper or something similar.

I have never seen a project where the various models didn't change independently in a short timeframe (weeks, months at the most).
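For point 1, a minimal sketch of the decorator idea from the linked post; the repository and its methods here are invented for illustration, not from the post itself:

    interface UserRepository {
        findById(id: string): Promise<string | null>;
    }

    class SqlUserRepository implements UserRepository {
        async findById(id: string): Promise<string | null> {
            return null; // real query elided
        }
    }

    // Logging is layered on as a decorator, so neither the business
    // code nor SqlUserRepository ever needs a logger injected.
    class LoggingUserRepository implements UserRepository {
        constructor(private inner: UserRepository) {}

        async findById(id: string): Promise<string | null> {
            console.log(`findById(${id})`);
            const result = await this.inner.findById(id);
            console.log(`findById(${id}) -> ${result}`);
            return result;
        }
    }

    // Composed once, at the application root:
    const users: UserRepository = new LoggingUserRepository(new SqlUserRepository());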


I have to confess, I have never been able to figure out what the Open Closed principle means. I've never seen an explanation that made sense. Extension vs modification? What's the difference? Is it just a very confusing way of trying to express the idea of encapsulation?

I guess I am plagued by the knowledge that I have virtually never been able to predict accurately what "extensions" will want to be done to my code in advance. So much so that I consider attempting to do it now a fairly pernicious form of over-engineering.


About the Extension:

Think about the concept of '+': you can '+' integers, rational numbers, matrices, etc. There are possibly thousands of kinds of things that could be '+'-ed. (Actually, this concept is called a monoid.)

Your job is to implement the abstract idea of '+' (the operator itself, without a specific meaning) and the integer implementation of '+'.

Then you can compile and publish your package as a jar, dll or any other form of package other people can download. Everyone - possibly thousands of people - can download this package and extend and use the exact same '+' from you, for their own datatype, without modifying your code or recompiling your package.

Lastly, 99 times out of 100 this would be over-engineering in most languages, like Java. But a language like Haskell has a feature named type classes that makes this almost a no-brainer; there it might be over-engineering only 5 times out of 10, because the cost and mental overhead are reduced.
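For the curious, a hedged TypeScript rendering of that '+' idea (Haskell would use a type class; all names here are mine):

    interface Monoid<T> {
        empty: T;
        concat(a: T, b: T): T;
    }

    // You ship the abstraction plus one instance...
    const sum: Monoid<number> = { empty: 0, concat: (a, b) => a + b };

    // ...and anyone can extend it to their own type
    // without modifying or recompiling your package:
    const join: Monoid<string> = { empty: "", concat: (a, b) => a + b };

    function fold<T>(m: Monoid<T>, xs: T[]): T {
        return xs.reduce((acc, x) => m.concat(acc, x), m.empty);
    }

    fold(sum, [1, 2, 3]);      // 6
    fold(join, ["a", "b"]);    // "ab"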


Essentially, it means that you can extend something by writing _new_ code, in a _new_ file, rather than modifying existing code.

For example, a case statement cannot be extended without modifying it to add a new case. But something that finds all objects which implement IFoo and uses them can be extended by adding a new implementer of IFoo, as in the sketch below. (That's not the only, and probably not the best, example, but it is an example.)
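A hedged sketch of that IFoo example (every name other than IFoo is invented):

    interface IFoo {
        handle(event: string): void;
    }

    class EmailFoo implements IFoo {
        handle(event: string): void { console.log(`email: ${event}`); }
    }

    // Adding SmsFoo later is pure extension: a new class in a new
    // file, while notifyAll below stays closed for modification.
    class SmsFoo implements IFoo {
        handle(event: string): void { console.log(`sms: ${event}`); }
    }

    function notifyAll(foos: IFoo[], event: string): void {
        for (const foo of foos) foo.handle(event);
    }

    notifyAll([new EmailFoo(), new SmsFoo()], "order shipped");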

A lot of people don't realize that most (maybe all) of these principles originated in C++ code bases way back, when changing one file in one tiny way could cause a massive number of files to need recompiling - which at the time was very painful due to slow compilers.

They have some good applicability, but I think they're massively over-sold.


Thanks. That makes "open for extension" clear, but it still leaves me wondering about "closed for modification". It is certainly a nice aspiration that code doesn't need to be modified to customize it if one can achieve that. But failing that, code should be as modifiable as we can possibly make it. Making it "closed for modification" sounds like a bad thing, like it will be very hard to maintain.


This is my interpretation. Usually bad devs I've worked with will mistakenly keep things DRY by reusing code in a bad way, for example modifying existing classes to handle cases they were not meant to handle. The most primitive example is reusing a function because it does 80% of some logic and then adding some bool flags to make it do something a bit different. Instead one should always abstract things and extract reusable bits, but only if it makes sense. But more often than not, lazy or misguided devs will see a similar piece of code and try to reuse it by modifying it to add new behavior, because "copy/pasting is bad".
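A tiny invented illustration of that flag smell, and the extraction that avoids it:

    // Before: one function grows a flag to serve a second caller.
    function renderReport(rows: string[], skipHeader: boolean): string {
        const body = rows.join("\n");
        return skipHeader ? body : "HEADER\n" + body;
    }

    // After: the genuinely shared bit is extracted, and each caller
    // gets a small, honest function instead of a flag.
    function renderBody(rows: string[]): string {
        return rows.join("\n");
    }

    function renderReportWithHeader(rows: string[]): string {
        return "HEADER\n" + renderBody(rows);
    }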


Changing an existing class (modification) is possibly bad, because it can introduce bugs. The code was working, you made a change, oops!

Adding a new class (when the code was written to be extensible) should prevent that.

See for example [1] where Sandi Metz shows how to rewrite some incredibly complex code to make that possible.

[1] https://www.youtube.com/watch?v=8bZh5LMaSmE


Some of these code examples are not great.

Using inheritance when it's really not needed.


This looks to me like the usual textbook examples of inheritance that usually fall apart quickly when they get in touch with reality.


Code is not what matters. It's the database design - the understanding of the domain that is reflected in your database design.

Put another way: given a well-designed SQL database, I could produce working software in one day, even with bad code.

Most broken code is broken because of a wrongly designed database, not the code itself.


I second this. I seventy this. The bottom layer, designed inconsistently, will percolate through until some wrapper code renormalizes any inconsistencies. Layers and layers of junk built on junk.


For green field projects, specifically for commercial products, I've always felt you should design the application interface first (wireframes, use cases, dashboards, etc.), then design the ERD of a database that will support that design, then do everything in between. Everything else is just moving data around.


Haha, good luck, usually the project is "agile" and experimenting and pivoting and "we'll know it when we see it" and only 7% of people are actually clicking a button so let's redesign those screens and so on.


You still need somewhere to start, though.


My experience is the absolute opposite.

Often, coders receive a database design from their leads, along with SQL stored procedures.

Only then do they follow the mock-up and start to code.


So much goodness comes from a well-designed database, as you point out, but I agree with barbecue_sauce too. Do the interface mock-up first (and get the customer's agreement), then do the database design, and then build everything in between. The mock-up gives you a functional insight (an understanding of the problem domain) that's crucial to designing the database. Technical debt is heavily weighted toward the database: get the database design wrong and you'll be paying for years. As whycombinater says:

> The bottom layer, designed inconsistently, will percolate through

That's why it's so useful to get a front-end mock-up done. It's a learning exercise that's key to getting the database design right.


I know it. But you have to pay the cost: do the mock-up and build an MVP just to get the insight from a UX viewpoint.

My point of view is that it's not worth the trade-off here (especially when you have only a limited time budget or dev budget).


Absolutely not!

The point of Dependency Inversion is that your business code doesn't know it's talking to a database.


Even with Rails, I query by writing SQL directly instead of using ActiveRecord.

The code is clean: no more layered abstraction for no purpose.


> no more layered abstraction for no purpose

I agree with this.

> I query by writing SQL directly instead of using ActiveRecord

I haven't used ActiveRecord, but I'm sure I'd hate it and prefer direct SQL - if I were forced to choose between the two.

So let me offer a third choice: an abstraction which serves an actual purpose, and is neither raw SQL nor ActiveRecord.

Pretend your domain is about farming fruit. A naive design might be:

FruitService -> FruitDatabase.

The DI way is to change that to:

FruitService -> Fruit <- FruitDatabase

You effectively divide-and-conquer the problem, so that you can think about your fruity business rules in the Service, and think about fruity storage in the Database.

When working in the Service, you might think about business thoughts like "We should throw away the 25% smallest green apples".

When working in the Database, you might think "Do I like the amount of SQL normalisation? Did I miss any indexes? How do I model an apple at the SQL table level?"

The purpose of this is to prevent you from thinking "Do I have an SQL column for colour?" when you're working on the business rules.
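A hedged sketch of that FruitService -> Fruit <- FruitDatabase shape (all member names are invented for illustration):

    interface Apple {
        colour: string;
        weightInGrams: number;
    }

    // The abstraction both sides depend on: storage in domain terms.
    interface FruitStore {
        greenApplesBySize(): Promise<Apple[]>; // assumed sorted ascending by size
        discard(apples: Apple[]): Promise<void>;
    }

    // Business rules live here and never mention SQL.
    class FruitService {
        constructor(private store: FruitStore) {}

        async cullSmallGreenApples(): Promise<void> {
            const apples = await this.store.greenApplesBySize();
            const smallest = apples.slice(0, Math.floor(apples.length * 0.25));
            await this.store.discard(smallest);
        }
    }

    // Columns, indexes and normalisation questions stay in here.
    class FruitDatabase implements FruitStore {
        async greenApplesBySize(): Promise<Apple[]> {
            return []; // real SQL elided
        }
        async discard(apples: Apple[]): Promise<void> {}
    }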


In your case, the logic of manipulating params to pass to the SQL query doesn't change my PoV. There's no ORM layer when I query that "we should throw away the 25% smallest green apples".

http request => controller => params manipulation => sql => database.


Not to take away from your comment, but you do realize a huge amount of software doesn't have any SQL database?


To summarize Linus (sigh)... programming is about data structures. Regardless of where they or the data come from, data transforms are what we PRIMARILY do.


SQL is just one example of a database.

My comment is generic about storage implementations, though. The same story applies to NoSQL or any other kind of database as well.


You should say data model, because the majority of software in existence does not use any kind of database. No DB in games, in Photoshop, etc...


It's telling how dominant the (R)DBMS has become, when people can't imagine "database" meaning anything else.


“Bad programmers worry about the code. Good programmers worry about data structures and their relationships.”

Apparently a quote from Linus, but I think it holds true for almost every level of the stack.


I believe you got the D wrong. Dependency Inversion is not just about slapping an interface in between two components. It's about inverting the dependency. Let me explain:

Picture that we have two components: a main component "Master" and a subcomponent "Slave". What you seem to be saying is that we should have an interface between the two, so that Master uses an "ISlave" interface that Slave (the developers of the subcomponent) maintains, instead of using Slave directly. Dependency Inversion, however, dictates that you should invert this dependency so that Master dictates which functionality Slave should implement. That is, Master should have an ISlave interface (or an "IDelegation" or similar new name) that Slave (!) is required to fulfil. There is a small but very important difference between the two directions of the dependency arrow. One of the main points is that with Dependency Inversion the main component is in charge of the functionality, instead of the main component having to work with whatever functionality the subcomponent exposes.
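To make the direction of the arrow concrete, a hedged sketch (the method names are invented):

    // Owned and published by the Master component: Master dictates
    // the functionality it needs.
    interface ISlave {
        performWork(input: string): string;
    }

    class Master {
        constructor(private worker: ISlave) {}

        run(): string {
            return this.worker.performWork("job");
        }
    }

    // The subcomponent now depends on Master's interface - the
    // dependency arrow points from Slave to Master, not the reverse.
    class Slave implements ISlave {
        performWork(input: string): string {
            return `done: ${input}`;
        }
    }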


Nice. That's a really useful way of looking at it. I'll definitely commit that to memory.


For OOP, I aim for:

- Separation of concerns between classes/modules. The responsibilities of a class should be easy to explain to a non-developer. Ideally, every class/module abstraction should be an entity discovered in the specification (or easily inferred from it), not invented from the developer's own mind.

- Be very careful about how you name things. Try to stick to the spec/domain terminology.


Does SOLID align with functional programming?

I've come to think that IoC/DI should almost be an anti-pattern, as it depends on interfaces, and I've never seen anything use it that wasn't abstraction hell.

Functional programming seems like a breath of fresh air, which is ironic since it predates all of this modern buzzword-centric world and was around with LISP.


I suspect the majority of people who claim to disdain SOLID principles are in fact using them every day without realizing it.

Seems pretty disturbing that so many engineers are piling on to recommend abandoning adherence to some really basic (and easy to understand and apply) ideas about design. Glad I don't work with you! :-P


I think a lot of people have seen that SOLID principles sometimes actually have the opposite effect of their intention, i.e. they make the code hard to reason about and thus less maintainable and harder to extend.

There are quite a lot of articles out there with valid criticism of SOLID, you might find that you don't disagree that much after all (or not). At the end of the day, blind adherence to dogma will likely result in negative effects.

If strict adherence to SOLID principles works for you, then great. Not been the case for me.


> I think a lot of people have seen that SOLID principles sometimes actually have the opposite effect of their intention, i.e. they make the code hard to reason about and thus less maintainable and harder to extend.

For me, it's half that and half that the people most likely to promote it - like in this blog post - are doing it badly.

For example, their very first one, the Single Responsibility Principle, isn't. It's blind adherence to a code smell at the syntactic level, resulting in subclasses that are closer to god objects than single-responsibility.

IMO, extracting "Payments" into its own class/module is the "right" way to do it, whether or not the switch statement is kept or a method is created for each Employee type.


I am not a big fan of 'SOLID'. Some of the principles are true but fall into the 'water is wet' category - the SRP, for example. Others don't seem useful - the open-closed principle, for instance. All software should be adaptable, and we generally don't know what requests will come from the field, so we cannot write parts of software that are never to be changed. Perhaps you think that

   bool is_prime(int n)
is never going to change, but someday somebody is going to request that it report progress during its calculation. So really, you can never know.

Also, these things with dependency injection and interfaces: please, only introduce these complications when you can prove that you need them. The software will be complicated enough when it is satisfying all its requirements; it is unnecessary to invent complications for no reason.


If SRP is akin to "water is wet", I'm working with newborns.

What is so trivial about what to do in a class? Deciding where to put what is one of the hardest parts of the job...


What to do in a class may not be entirely trivial, but it's also not that complicated. What is trivial is noticing that a class is doing too many things. The problem is actually doing something about it. This seems in many cases to be more of a discipline problem than it actually being hard.

I have also seen people trying to do too much. When one sees a problem, many have the inclination to refactor everything at once and redesign the entire program. A more productive way is to do one small refactoring that solves just one code-quality issue at a time. Before you know it you have done 5 or 10 small refactorings and things are starting to look much better. The big refactoring is very likely to make as many things worse as it makes better; the small refactoring has a much better chance of having only or mainly positive effects.

Sometimes one also has to take a step back and think about what direction to take an application. That can be a bit hard, indeed. Even then, it is much better to take small steps in the envisioned direction than to attempt some big bang rewrite.


I must confess I am really bad at sticking to this. Sometimes I find myself writing a clunky class or function that is doing too many things, and I say to myself, "You dumbass, you are going to have to rewrite this." I just need to practice holding my feet to the fire and doing it. There are some things that I do regularly out of habit, but occasionally I find myself in golden-hour mode, ignoring all the good practices. But I suppose as long as the refactor/rewrite follows good practice, it's probably not too much of an issue. Although my one complaint is that the acronym is very unhelpful at times.


In the first example, isn't it kind of a problem to have a bunch of different functions for calculating pay? It seems like it would be easy for the accounting department to end up with a different version than the HR department that actually disburses the money, which would cause all kinds of hell when trying to audit your accounts. If one team or department owns the function, you end up with more interdepartmental communication when another team needs to change it, but you won't end up with Accounting and HR disagreeing on how much employees should be paid.


A program is a theory.

A theory can be evaluated: firstly, does it describe/predict the data (i.e. does it work)? Secondly, is it parsimonious?

But the key issue with a program-as-theory is that you must understand the theory to understand the program. Because of this, it may be much more effective to shoehorn a problem into an ill-fitting theory, if that theory is already well-known and well-understood.

A new, unfamiliar and strange theory must be REAL GOOD - and that improvement must also be valuable in the applied context - for it to be worth using. If so, it becomes a new standard theory.


There's just no way to win this game; whatever good ideas are thrown in will be twisted, turned and corrupted into something worse than what came before.

I once worked on the software side of a 500-employee electronics company that had managed to ISO-certify their so called Agile development process.

Everything, and I do mean everything, was performed according to some ISO script. No deviation, no matter how meaningless the task. And we had more useless meetings than anything I've seen before; it's just that they were called stand-ups and reviews.

Thanks, but no thanks.


One alternative to learning these principles - which exist to solve the problems that arise when you wrap all your logic in objects - is to simply not wrap all your logic in objects.


Amen


It seems like 90% of this can be achieved for free using a language with duck typing and named parameters. But of course we know code written in those languages is often neither robust nor maintainable. What is missing? Is it the static typing? Or that all of these goals must be implemented mindfully? Maybe the key is in how you document the principles.


This article completely changed my mind about SOLID and I realized how underrated this simple mnemonic still is: https://www.gamedev.net/blogs/entry/2265481-oop-is-dead-long...



