[dupe] Why I Don't Teach SOLID (qualityisspeed.blogspot.com)
66 points by StylifyYourBlog on Jan 22, 2015 | 60 comments




When I started coding, I just hacked stuff out any which way. This code was hard to extend, because it had no structure.

When I arrived at university, I tried to use all the "appropriate" design patterns and software engineering techniques. This code was hard to extend, because it had 60 separate extension points, all of which were in the wrong place (and none of which had ever been used).

When I arrived at my first internship, I killed a project or two through grotesque over-engineering.

Today, I'm just happy if the code is simple, readable, and does one thing well, and if it has enough unit tests to prevent bit rot. If I add an extra layer of abstraction, I do it because it makes the code simpler, or because it eliminates duplication.


"This code was hard to extend, because it had 60 separate extension points, all of which were in the wrong place (and none of which had ever been used)"

I've been working with "enterprise" systems for ~20 years and I swear that most of the extension points I've seen in custom-built systems have never been used to extend anything and simply complicate the original system.

Of course, a lot of products (particularly ERP and CRM) are extensible - sometimes quite elegantly, sometimes with Lovecraftian horrors, but at least the extension points actually get used.


Yes, I think that this is a rather typical 'programmer growth curve'.

You start with spaghetti code which just works (hopefully); after a while you are taught how to do things 'properly', resist it a bit, and suddenly you have an epiphany - design patterns should be everywhere, etc. After over-engineering some projects, with loads of UML diagrams and such, you step back and stick to simplicity.

And I believe it's a good curve. You learn why certain things (like over-engineering) don't work, hopefully early on, and learning from trial and error is much more valuable than having someone teach you how to do things 'properly'.


I've been the elephant in the application architecture room for years. I was opposed to Dependency Injection for a long time, primarily on the grounds of complexity, readability, and teachability.

As a consultant, I have the opportunity to traverse many different development environments managed by diversely capable development teams. This experience has led me to the conclusion that there are many more entry-level, mid-level, and worker-bee developers than senior programmers and architects.

So when architects design new systems, you'll see a lot of highly complex, loosely-coupled code that's simply unreadable, and no amount of "knowledge-transfer" will bridge the gap with the wider audience of developers who are not as skilled.

You end up with those mid-level developers altering code in ways that break the original intent with the primary concern of being productive and completing tasks. I can't tell you how many times I've had to unravel shoddy code baked on top of or into an otherwise "normal" architecture.

This is why I eventually concluded that managing complexity trumps loosely-coupled architectures. Our "customers" as architects are those mid-level developers. We need to build frameworks and code bases that _anyone_ can maintain and enhance.

So let's change it to SOLID-C: SOLID principles minus the Complexity. If we can achieve that, everyone will succeed.


It's funny to me from a mathematical POV. Dependency injection is clearly "just" function abstraction. It could not be simpler.

Yet here we are belabored by whole frameworks to achieve it.


There are ways to talk about it that help developers understand. Instead of focusing on the pattern overall, talk about the core dependency manager as often as possible. If a junior or mid-level developer gets it pounded into their heads that there's a magical piece of code doing some stuff they can't see, then they might just figure things out.

But that's only a band-aid. It's my experience that they still won't truly understand why they can't just call A from B, and they certainly don't understand how to get B into A through injection.


It always just feels like unnecessary terminology and technology. Here's the core pattern.

    // A concrete dependency; "app" never names it, it only uses what gets passed in.
    function Injectable() {}
    Injectable.prototype.doSomething = function () {
      return "did something";
    };

    function app(injected) {
      return injected.doSomething();
    }

    function main() {
      // The composition root: the one place that picks the concrete object.
      var injectable = new Injectable();
      var result     = app(injectable);
      console.log(result);
    }

    main();
Everything else is about obscuring that core pattern or making it "convenient" by providing some kind of naming abstraction. That at least makes a little bit of sense in JavaScript, as above, since you need the stringy names to simulate types... but in any language with nominal typing you're just killing yourself with unnecessary complexity.


Apparently LSP is just a contravariant functor: https://apocalisp.wordpress.com/2010/10/06/liskov-substituti...

But I don't see how the relationship of contravariant functors to OO-style subclasses is as clear as the DI <-> function abstraction example (functions are just functions in OO or FP)


The trick of that article is noting that Rúnar translates "S is a subtype of T" to "S => T", i.e. subtyping implies coercion. This is the first part of subtyping. The second part is that, for any Q, the definition of Q[S] must follow naturally from the definition of Q[T].

So let's say that Apple is a subtype of Fruit. That means that there exists a natural coercion function f : Apple => Fruit. It also means that if we have a predicate Delicious that we define for both Fruit and Apple then the answer Delicious[Apple] must be exactly Delicious[Fruit].comap(f). If they don't align then your coercion function is somehow wrong and not implementing subtyping properly.
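Here's a minimal TypeScript-flavoured sketch of that idea (my own names and types, not from Rúnar's post): a Predicate plays the role of Q, and comap pre-composes the coercion.

    // Hypothetical types for illustration: Apple is a subtype of Fruit.
    interface Fruit { ripe: boolean; }
    interface Apple extends Fruit { variety: string; }

    // A predicate is contravariant in T: comap pre-composes a coercion S => T.
    class Predicate<T> {
      constructor(readonly test: (t: T) => boolean) {}
      comap<S>(f: (s: S) => T): Predicate<S> {
        return new Predicate<S>((s) => this.test(f(s)));
      }
    }

    // The coercion that "Apple is a subtype of Fruit" gives you.
    const coerce = (a: Apple): Fruit => a;

    const deliciousFruit = new Predicate<Fruit>((f) => f.ripe);

    // If Apple really is a subtype of Fruit, any hand-written Delicious for
    // Apple has to agree with deliciousFruit.comap(coerce) on every Apple.
    const deliciousApple: Predicate<Apple> = deliciousFruit.comap(coerce);
    console.log(deliciousApple.test({ ripe: true, variety: "Fuji" })); // true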


I think what you describe only happens when there is not a good unit-testing and code-review culture. When mid-level developers are required to write unit tests, they start to appreciate loosely coupled architectures and Dependency Injection. This culture has to come from all the way at the top to avoid the "just get it done" pressure to cut corners, or to give up and hire consultants.


Culture doesn't always help. It's a step in the right direction, but even in the best development environments, some developers just don't improve their understanding of complex code.

So then the company solves the problem by having a higher standard for hiring new developers. Eventually this becomes unsupportable, because finding developers who can pick up and use an in-house framework is costly and often impossible.

"Keep it simple" overrides "make it SOLID".


If you learn only one thing from SOLID, it should be the Single Responsibility Principle (SRP). Honestly, this is over 80% of the value of SOLID.

The Interface Segregation Principle is a special case of SRP, and Open-Closed Principle and Liskov Substitution Principle are most applicable to deep inheritance hierarchies, which are rarer than they were. SRP pushes you towards "composition over inheritance" which is also good.

Yes, using a lot of interfaces and an IoC container does push you towards a particular style, but it's not that hard to read once you know it.


> Liskov Substitution Principle are most applicable to deep inheritance hierarchies, which are rarer than they were.

The Liskov Substitution Principle is applicable wherever inheritance is used, and failing to follow it anywhere when using inheritance pretty much guarantees bugs will eventually emerge. If it's "most applicable to deep inheritance hierarchies", it's because the cost of finding and fixing the source of the bugs resulting from violating the LSP is greatest in such hierarchies.


This is key!

Most people try, at least ad hoc, to treat "inherits from" as "is subtype of". Many typed languages even encode this directly. This is a total lie unless LSP is followed, however. LSP doesn't do much more than define the necessary and sufficient properties of the "is subtype of" relation.


ISP is underrated. I think the problem is that the logical endpoint of ISP is that every interface has one method, which is unworkable in a classic OO language. At this point you realize that what you really want is functions as parameters and move on with your life. :)
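A rough sketch of that endpoint (hypothetical names, TypeScript for the types): the one-method interface and the plain function parameter say exactly the same thing.

    // Fully segregated interface: exactly one method.
    interface Notifier {
      notify(message: string): void;
    }

    function reportFailure(n: Notifier): void {
      n.notify("job failed");
    }

    // The same contract as a plain function parameter: no interface needed.
    function reportFailureFn(notify: (message: string) => void): void {
      notify("job failed");
    }

    reportFailure({ notify: (m) => console.log(m) });
    reportFailureFn((m) => console.log(m));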

Not true if there are laws specifying the relationship between the methods. But way more often true than you'd expect from reading a (good) Java codebase.


The ISP is good, but you can derive it from SRP: The SRP says that things should have one responsibility, and the ISP says that Interfaces are things too.

I saw some Java 8 recently. The automatic conversion from a lambda to an interface with one (compatible) method was interesting, but yeah, it points to the problem that what you sometimes really want is just a function.


> it points to the problem that what you sometimes really want is just a function

Interestingly, this fact has been known to procedural as well as functional programming proponents for decades (one of the few aspects on which they agree).


The problem with SRP is it's hard to define what that "one reason" is exactly. http://sklivvz.com/posts/i-dont-love-the-single-responsibili...

> sometimes really want is just a function

summed up nicely here: http://blog.ploeh.dk/2014/03/10/solid-the-next-step-is-funct...


I find Uncle Bob's definition of SRP makes far more sense if you change it from "responsibility for something" to "responsibility to someone." That is, there should be one and only one person, or business role, who would want a given class to change. The "one reason" then becomes "That guy over there gave me a business justification for it," and nothing else really matters. It's just as easily read then as "a class cannot serve two masters."


That matches well with the original guiding principles for software modularity: modules should encapsulate deferred or changing decisions, nothing more or less.


I think the point is you cannot predict what decisions are going to change or be deferred. That's why I like the interpretations of this that try to make change in any direction more reasonable. Otherwise you get people building things like abstract factories "just in case".


I think that's a little silly, or rather worded imprecisely. Freedom in all directions prohibits all structure, all implementation.


> The problem with SRP is it's hard to define what that "one reason" is exactly

Sure, it's hard to automate. It's one of those subjective, experience-based factors that keep humans like me in a job. I never thought it was a rule that could be mechanically applied.


Sure, but there are other measures, and the coupling-vs-cohesion take at least provides a semblance of objectivity, versus something completely subjective.


It's hard not to empathize with the author. However, I think Sandi Metz put it well in Practical Object-Oriented Design In Ruby when she said:

"Concrete code is easy to understand but costly to extend. Abstract code may initially seem more obscure but, once understood, is far easier to change."

So, as with most things in life, it's all about balance: Readability vs Maintainability.


Sandi Metz seems to offer the most sane advice on the topic imho https://www.youtube.com/watch?v=x1wnI0AxpEU&list=PLE7tQUdRKc...


I believe a lot of these principles came out of the observation that some good codebases followed them, and this was taken as a sign that all good codebases should; it's a sort of "if X is good, then not doing X is bad" type of fallacy. If you try to analyse them in detail they all have an element of subjectivity and vagueness (e.g. "what is a 'responsibility'?") that tends to encourage overabstraction and unnecessary, misguided extensibility. Blind, dogmatic application of a set of principles with little reasoning behind them is basically cargo-cult programming in disguise. A codebase where SOLID has been applied liberally is not any easier to maintain or extend when you have to trace through all the indirection, which is particularly troublesome when debugging.

There is no replacement for careful thought (including foresight) and pragmatic design.


I think a big issue is that a lot of these rules need to be revised as language features evolve. A lot of productive paradigms are no longer "pure" code.

I see a lot of comments kind of pooh-poohing annotations, but one of the better Java devs I know is convinced that annotations are the solution to code readability/over-engineering in Java -- in essence, custom annotation processors can replace the need for both interfaces and abstract classes.

One of the biggest problems I see is simply that the schism-ing of modern design paradigms means that debugging tools have to play catchup and therefore make code seem a bit less linear. But the reality is, through IoC/AOP/annotations, developers are often reducing the number of interfaces to traverse and making the code more readable, while at the same time actually making it more generic (your class doesn't have to conform to so many standards if you can tack the annotations on whatever fields/methods you want). Should someone be introducing a proxy layer for every class just in case they need to fit it into a more advanced design in the future? Or would it be easier to just code more literally while language/container designers work on a more seamless replacement method?

In a way, it does just seem like some of these new techniques are a hacky way of forcing FP into OOP. Lots of different design paradigms playing nicely within the same VMs and ecosystems is a nice problem to have, though. :)


I came to the conclusion a while ago that IOC containers are a real "two problems" solution. I still heavily use constructor injection in my code, but I wire the constructors by hand. Keeps you honest and actually forces you to think through abstractions more clearly.
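For what it's worth, here's a tiny sketch of that hand wiring (hypothetical classes, TypeScript): constructor injection everywhere, with one composition root doing all the `new`s.

    // Hypothetical dependencies, wired by hand instead of by a container.
    class SmtpMailer {
      send(to: string, body: string): void { console.log(`mail to ${to}: ${body}`); }
    }

    class UserRepository {
      findEmail(userId: string): string { return `${userId}@example.com`; }
    }

    class WelcomeService {
      // Constructor injection: the class states exactly what it needs.
      constructor(private mailer: SmtpMailer, private users: UserRepository) {}
      welcome(userId: string): void {
        this.mailer.send(this.users.findEmail(userId), "Welcome!");
      }
    }

    // The composition root: the only place that knows how the graph fits together.
    const service = new WelcomeService(new SmtpMailer(), new UserRepository());
    service.welcome("alice");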


Exactly. You can follow SRP, use clean constructors and do it by hand. An IoC container is a nice /mechanical/ helper to do the wire up for you. I really dislike that the majority (all?) of the javaland IoC containers work with annotations; I believe that your object graphs should have no idea if they were built "by hand" with manual calls to new, or resolved from a container.


Didn't realize that about javaland. Almost all the projects I've worked on in .NET use conventions to wire interfaces to concrete types. The IoC container is mentioned in one file that defines the conventions and the 2-3 odd-ball one-offs.

I've been on one project that manually wired everything; it was crazy. Such a waste of time.


I haven't used it yet, but I'm excited by the possibility of Dagger 2, which generates the code that constitutes the dependency graph at compile time.

http://google.github.io/dagger/


Uncle Bob imho has a pretty good strategy for this problem: https://twitter.com/unclebobmartin/status/308983161143058432


He basically just described what a container does. You put objects in the container in main, then it resolves everything else.


That's exactly what it's not. A container is a dependency in itself. Secondly, the wiring of the dependencies happens in one place in the code, not littered through the codebase or an opaque, proprietary XML file. And lastly, a loose container tends to be abused: when a dependency can be included with a simple annotation, everything depends on everything.


Err, you set up the container in one place, normally main or somewhere near the start. Very few containers are XML-only these days.

There's normally a section at the start like:

    container.RegisterType<IMessageQueue, MSMQMessageQueue>();
    container.RegisterType<IGeocoder, GoogleGeocoder>();

Modern containers don't even use annotations, they just scan the constructor parameters.


Yes, but they still magically figure out what goes into what. In the case that you've got a relatively flat structure (services injecting into lots of handlers) this is convenient. When you're building a complex tree, it reduces your understanding of the structure of your code.


In general I think that a lot of these design "principles" are just mislabeled. A better label is generally "rough-hewn guideline to use in a first pass at design". Many of them contradict each other, or even themselves, especially when applied by rote beyond the point of good sense.

Take for instance DRY - if you follow it too far, you end up with InterfaceFactoryFactoryFoo. And of course all those FactoryFactories start to look like violations of DRY anyway.

Or the over-application of SRP ends up with 40 classes that are tiny slices of something that could easily be 1 class.

Amusingly, both are the result of myopic application of the principle (going fractal, if you will) rather than setting a decent "scope" for applying the ideas.

Further as you go through design and implementation, you find places where the design abstractions were wrong, and the "single thing" or "unrepeated task" is violated, in the large (rather than in the tiny) and you have to accept it or do some refactoring. Such is life.

None of these things takes away the value of DRY or SOLID or any of the other design principles - it's just that there is a very hard orthogonal problem of "proper scoping" for these principles.


> Take for instance DRY - if you follow it too far, you end up with InterfaceFactoryFactoryFoo. And of course all those FactoryFactories start to look like violations of DRY anyway.

Right. DRY runs into limits when you use languages with limited expressiveness -- Java-like class-oriented languages, where classes are not themselves first-class, are particularly problematic here.

The problem isn't with the DRY principle, it's with a language that doesn't really let you follow it, because certain things cannot be effectively abstracted out into reusable library code and require boilerplate. Most of these things are not problems with, e.g., Lisps.


> Most of these things are not problems with, e.g., Lisps.

This is where Lisp macros start to shine - you can DRY up the code structure itself. The need for that doesn't come up that often (I'm personally in the camp of avoiding macros until you're really sure they're the best tool for the job), but when you are really starting to get sick of repetitiveness that obscures the intention behind your code, macros are a real godsend.


I always struggled to understand the appeal of the open/closed principle.

"The idea was that once completed, the implementation of a class could only be modified to correct errors; new or changed features would require that a different class be created."[1]

This sounds a lot like bolt-on coding, always adding code rather than assimilating new features into a codebase. This doesn't seem like a sustainable strategy at all. Yes you don't risk breaking any existing functionality but then why not just use a test suite? The major problem, though, is that instead of grouping associated functionality into concepts (OO) that are easy to reason about, you are arbitrarily packaging up functionality based upon the time of its implementation... (subclassing to extend).

[1] http://en.wikipedia.org/wiki/Open/closed_principle
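For reference, a bare-bones sketch (my own example, not from the Wikipedia article) of the mechanics being described: the shipped class stays untouched and every change arrives as a new subclass.

    // The shipped class is "closed": its implementation is never edited again.
    class ReportFormatter {
      format(lines: string[]): string {
        return this.header() + lines.join("\n");
      }
      protected header(): string {
        return "REPORT\n";
      }
    }

    // New or changed features arrive as new classes that extend the old one,
    // which is exactly the bolt-on accretion described above: behaviour gets
    // grouped by when it was added rather than by the concept it belongs to.
    class BrandedReportFormatter extends ReportFormatter {
      protected header(): string {
        return "ACME REPORT\n";
      }
    }

    console.log(new BrandedReportFormatter().format(["row 1", "row 2"]));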


  > ...you don't risk breaking any existing functionality but 
  > then why not just use a test suite?
The examples fail to make explicit that there are two programmers: One is building and distributing a library, the second is building an application using that library.

The library programmer can easily distribute the test suite so that the application programmer can run the tests, but that doesn't change the fact that if the library programmer changes an object's interface, it breaks the application programmer's code. By committing to keep the old object's interface intact, the library programmer is giving the application programmer time to migrate their code to the new objects.


I like your library/application distinction.

For applications I'm not sure open/closed makes as much sense - http://codeofrob.com/entries/my-relationship-with-solid---th...


  > For applications I'm not sure open/closed makes as 
  > much sense.
I agree with you. If the same programmer (or team) is maintaining "both sides" of an object's interface, i.e. both implementing the behavior of the object AND consuming the object in their application code, I think we can assume that they'll know that if they change one side they'll need to immediately change the other.

I'm not sure I agree with Rob Ashton's points, though. In his blog post, Rob trivializes the utility of third-party libraries:

    > * These [libraries] are either replicable in a few 
    >   hours work, or easily extended via a pull request.
This is simply not the case with any truly useful library. Useful libraries often represent years of careful design work and debugging. (Think networking libraries, UI frameworks, etc.)

He also underestimates the amount of time it takes to continuously change application code to keep up with breaking changes from third-party libraries:

    > * These [libraries] can be forked for your 
    >   project and changes merged from upstream with 
    >   little effort.
Again, if the library distributes a breaking change, it may require many, many hours of code changes and re-testing to make sure everything's still working properly. Hours that could be spent building new features. For that kind of tradeoff there'd better be a damn good reason for the change: Improved security or performance or ease of use.


I think the distinction is a straw man really. Changing the semantics of interfaces is generally a bad idea once you have code depending on them, whether it be within your application or a library. Where open/closed falls down is that it requires that you subclass just for _adding_ new methods to an interface. You're not affecting any existing code, but you're still introducing entropy into your codebase.


I agree it sounds like bolt-on coding. I've been told the way to avoid bolt-on coding (great name, by the way) is to refactor the software regularly. When you are in minor versions, you might do just-in-time coding, but to really qualify as a major version requires a careful view of the code as it exists now (not as originally designed), a documentation of a new design that meets the needs of the current customers, and a refactor to make sure the code reflects that design. Wash, Rinse, and Repeat.


Continuous refactoring is really the only way to maintain a good time-to-market for new features. Unfortunately by the time your time-to-market suffers you've probably already accrued a substantial level of technical debt and it requires a more serious investment to get the codebase back into shape. As developers the only way to really combat this is to include refactoring time into all development.


> It came from really important people in our field.

I must say that's my least favorite argument as to why something is important...


Those people became important because of good ideas; the ideas are not good because the people are important.


That goodness is also relative, and good ideas often expire given enough time. Conflating importance with goodness makes it hard to move into the future (though it also means you'll have to make your case much more clearly).


I personally thought the follow-up to this article was way better than this one. It's available at:

http://qualityisspeed.blogspot.com/2014/09/beyond-solid-depe...


I would add "don't repeat yourself" as another not-so-good rule of thumb. In one of my past projects one of the most serious problems we faced was too much generalisation on early phase of development when requirements at this time were quite straight forward, but when we meet with real need of using "advantages" of our generalisation and avoidance of repeating things... well, it didn't work smoothly and even maintanance of over-engineered libraries were harder.

Of course I am not saying that we should copy-paste everything, but adding a lot of layers of abstraction doesn't seem neat either.


Personally, I find that taking a very aggressive approach to the Liskov substitution principle is the most helpful rule of thumb. I follow it to the point of practically never using inheritance. If you are using inheritance to modify behavior, then you are probably ignoring LSP and making your code "unintelligible".
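The classic rectangle/square example (a TypeScript sketch of my own, not from the thread) shows how inheriting to modify behaviour quietly breaks the promises callers rely on:

    // Rectangle promises that setWidth only changes the width.
    class Rectangle {
      constructor(protected width: number, protected height: number) {}
      setWidth(w: number): void { this.width = w; }
      setHeight(h: number): void { this.height = h; }
      area(): number { return this.width * this.height; }
    }

    // Square "inherits to modify behaviour" and silently breaks that promise.
    class Square extends Rectangle {
      setWidth(w: number): void { this.width = w; this.height = w; }
      setHeight(h: number): void { this.width = h; this.height = h; }
    }

    function stretch(r: Rectangle): number {
      r.setWidth(4);
      r.setHeight(5);
      return r.area(); // callers reason about Rectangle and expect 20
    }

    console.log(stretch(new Rectangle(1, 1))); // 20
    console.log(stretch(new Square(1, 1)));    // 25: substitution broke the caller's reasoning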


I think this is about finding a balance between pragmatism and perfectionism. We should strive to find the simplest, cleanest solution to problems and continually refactor to keep complexity at bay. The SOLID guidelines, practically applied, may help in achieving that goal, so they are worth having in your toolbox.


The whole point of SOLID was "only apply when necessary." Not everything needs an interface.


I think this just highlights the ridiculousness of software development.

You can get two highly skilled, well renowned software developers and they can completely disagree.

How are you meant to objectively evaluate code quality of developers then?


Because software development is more art than science in a number of ways, and because it isn't really comparable to much of anything else.

Suggested reading: http://thecodelesscode.com/case/154


There is very little in this field that can be described as objective, beyond the rawest of performance metrics, or lines of code. Most statements about code quality are just completely contrived and based upon essentially nothing.

My worst experiences encountering other people's code have been the code that followed the best practices, and defensively shielded itself from criticism. Layers of interfaces and injections and abstractions and separation of concerns yielding hundreds to hundreds of thousands of artifacts for the simplest task, always sold on the notion that it was ready for the future, but in reality would never be adapted to the future.


Reading this article and its sequel, I feel like the author is handwaving one key bit. I absolutely agree with his position on dependency elimination as a primary goal, but by saying "Oh, a class that operates on a dependency is hard, so let's not write those", and "We're not going to deal with interfaces" he's punting on the entire problem -- handwaving the hard bits and then ignoring the fact that they exist.

At some point, your code does have dependencies. The entire purpose of an interface is to be able to specify what your dependencies are -- to be able to say "This is the smallest thing I need in order to be able to work". When untangling dependencies, adding that bit in there makes it very clear what the seams are -- where you can say "I depend on something that does Foo -- Feel free to replace it", rather than "I depend on this thing that comes entangled in its own network of dependencies".
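A small sketch of that "smallest thing I need" idea (hypothetical names, TypeScript): the consumer declares the seam, and anything that satisfies it can be swapped in.

    // The consumer declares only the seam it needs, not the whole concrete dependency.
    interface DoesFoo {
      foo(id: string): Promise<string>;
    }

    async function handleRequest(dep: DoesFoo, id: string): Promise<string> {
      return "result: " + (await dep.foo(id));
    }

    // Anything that "does Foo" can be slotted in, free of its own dependency tangle.
    const fakeFoo: DoesFoo = { foo: async (id) => `fake-${id}` };
    handleRequest(fakeFoo, "42").then(console.log);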





