
Why I Don't Teach SOLID
http://qualityisspeed.blogspot.com/2014/08/why-i-dont-teach-solid.html
======
dang
[https://hn.algolia.com/?query=why%20i%20don%27t%20teach%20so...](https://hn.algolia.com/?query=why%20i%20don%27t%20teach%20solid&sort=byPopularity&prefix&page=0&dateRange=all&type=story)

------
ekidd
When I started coding, I just hacked stuff out any which way. This code was
hard to extend, because it had no structure.

When I arrived at university, I tried to use all the "appropriate" design
patterns and software engineering techniques. This code was hard to extend,
because it had 60 separate extension points, all of which were in the wrong
place (and none of which had ever been used).

When I arrived at my first internship, I killed a project or two through
grotesque over-engineering.

Today, I'm just happy if the code is simple, readable, and does one thing
well, and if it has enough unit tests to prevent bit rot. If I add an extra
layer of abstraction, I do it because it makes the code simpler, or because it
eliminates duplication.

~~~
arethuza
"This code was hard to extend, because it had 60 separate extension points,
all of which were in the wrong place (and none of which had ever been used)"

I've been working with "enterprise" systems for ~20 years and I swear that
most of the extension points I've seen in custom-built systems have never been
used to extend anything and simply complicate the original system.

Of course, a lot of products (particularly ERP and CRM) are extensible -
sometimes quite elegantly, sometimes with Lovecraftian horrors, but at least
they actually get used.

------
ChicagoDave
I've been the elephant in the application architecture room for years. I was
opposed to Dependency Injection for years for the primary reasons of
complexity, readability, and teachability.

As a consultant, I have the opportunity to traverse many different development
environments managed by diversely capable development teams. This experience
has led me to the conclusion that there are many more entry-level, mid-level,
and worker-bee developers than senior programmers and architects.

So when architects design new systems, you'll see a lot of highly complex,
loosely-coupled code that's simply unreadable, and no amount of "knowledge-
transfer" will bridge the gap with the wider audience of developers who are
not as skilled.

You end up with those mid-level developers altering code in ways that break
the original intent with the primary concern of being productive and
completing tasks. I can't tell you how many times I've had to unravel shoddy
code baked on top of or into an otherwise "normal" architecture.

This is why I eventually postulated that complexity trumps loosely-coupled
architectures. Our "customers" as architects are those mid-level developers. We
need to build frameworks and code bases that _anyone_ can maintain and
enhance.

So let's change it to SOLID-C: SOLID principles minus the Complexity. If we
can achieve that, everyone will succeed.

~~~
tel
It's funny to me from a mathematical POV. Dependency injection is clearly
"just" function abstraction. It could not be simpler.

Yet here we are belabored by whole frameworks to achieve it.

~~~
ChicagoDave
There are ways to talk about it that help developers understand. Instead of
focusing on the pattern overall, talk about the core dependency manager as
often as possible. If a junior or mid-level developer gets it pounded into
their heads that there's a magical piece of code doing some stuff they can't
see, then they might just figure things out.

But that's only a band-aid. It's my experience that they still won't truly
understand why they can't just call A from B and they certainly don't
understand how to get B in A through an injection.

~~~
tel
It always just feels like unnecessary terminology and technology. Here's the
core pattern.

    
    
        // A stand-in dependency so the sketch runs as-is.
        function Injectable() {}
        Injectable.prototype.doSomething = function () {
          return "did something";
        };
    
        function app(injected) {
          return injected.doSomething();
        }
    
        function main() {
          var injectable = new Injectable();
          var result     = app(injectable);
          console.log(result);
        }
    

Everything else is around obscuring that core pattern or making it
"convenient" by providing some kind of naming abstraction. That at least makes
a little bit of sense in Javascript, like above, since you need the stringy
names to simulate types... but in any language with nominal typing you're just
killing yourself with unnecessary complexity.

------
SideburnsOfDoom
If you learn only one thing from SOLID, it should be the Single Responsibility
Principle (SRP). Honestly, this is over 80% of the value of SOLID.

The Interface Segregation Principle is a special case of SRP, and Open-Closed
Principle and Liskov Substitution Principle are most applicable to deep
inheritance hierarchies, which are rarer than they were. SRP pushes you
towards "composition over inheritance" which is also good.

Yes, using a lot of interfaces and an IoC container does push you towards a
particular style, but it's not that hard to read once you know it.
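
A minimal sketch of the "composition over inheritance" style SRP nudges you
towards (all names invented for illustration): each collaborator has one
responsibility, and the consumer is composed from them rather than inheriting
behavior.

```typescript
// Formatter and Sink each carry exactly one responsibility.
interface Formatter {
  format(lines: string[]): string;
}

interface Sink {
  write(text: string): void;
}

class PlainTextFormatter implements Formatter {
  format(lines: string[]): string {
    return lines.join("\n");
  }
}

class ConsoleSink implements Sink {
  write(text: string): void {
    console.log(text);
  }
}

// Report has one job of its own (orchestrating) and is composed
// from, rather than inheriting, the other behaviors.
class Report {
  constructor(private formatter: Formatter, private sink: Sink) {}

  publish(lines: string[]): void {
    this.sink.write(this.formatter.format(lines));
  }
}
```

Swapping ConsoleSink for a file or network sink never touches Report.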

~~~
dragonwriter
> Liskov Substitution Principle are most applicable to deep inheritance
> hierarchies, which are rarer than they were.

The Liskov Substitution Principle is applicable wherever inheritance is used,
and failing to follow it anywhere when using inheritance pretty much
guarantees bugs will eventually emerge. If it's "most applicable to deep
inheritance hierarchies", it's because the cost of finding and fixing the
source of the bugs resulting from violating the LSP is greatest in such
hierarchies.

~~~
tel
This is key!

Most people try to at least ad hoc'edly treat "inherits from" as "is subtype
of". Many typed languages even encode this directly. This is a _total lie_
unless LSP is followed, however. LSP does little more than define the
necessary and sufficient properties of the "is subtype of" relation.
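
The classic counterexample (not from this thread) makes the point concrete:
Square "is-a" Rectangle mathematically, but subclassing it violates LSP
because it breaks an invariant that callers of Rectangle rely on.

```typescript
class Rectangle {
  constructor(protected width: number, protected height: number) {}
  setWidth(w: number): void { this.width = w; }
  setHeight(h: number): void { this.height = h; }
  area(): number { return this.width * this.height; }
}

class Square extends Rectangle {
  constructor(side: number) { super(side, side); }
  // Preserving the square invariant changes inherited behavior.
  setWidth(w: number): void { this.width = w; this.height = w; }
  setHeight(h: number): void { this.width = h; this.height = h; }
}

// This holds for any honest Rectangle subtype: after the two
// setters, a caller expects an area of 4 * 5 = 20. A Square
// returns 25, so it is not a behavioral subtype.
function stretch(r: Rectangle): number {
  r.setWidth(4);
  r.setHeight(5);
  return r.area();
}
```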

------
alphabiosx
It's hard not to empathize with the author. However, I think Sandi Metz put it
well in Practical Object-Oriented Design In Ruby when she said:

"Concrete code is easy to understand but costly to extend. Abstract code may
initially seem more obscure but, once understood, is far easier to change."

So, as with most things in life, it's all about balance: Readability vs
Maintainability.

~~~
platz
Sandi Metz seems to offer the most sane advice on the topic imho
[https://www.youtube.com/watch?v=x1wnI0AxpEU&list=PLE7tQUdRKc...](https://www.youtube.com/watch?v=x1wnI0AxpEU&list=PLE7tQUdRKcyYYHU2QwS8gQzkQkaOn_lHr)

------
userbinator
I believe a lot of these principles came out of the observation that some good
codebases followed them, and this was taken as a sign that _all_ good
codebases should; it's a sort of "if X is good, then not doing X is bad" type
of fallacy. If you try to analyse them in detail they all have an element of
subjectivity and vagueness (e.g. "what is a 'responsibility'?") that tends to
encourage _over_ abstraction and unnecessary, misguided extensibility. Blind,
dogmatic application of a set of principles with little reasoning behind them
is basically cargo-cult-programming in disguise. Trying to follow the
indirections in such a codebase where SOLID has been applied liberally, which
is particularly troublesome when debugging, does _not_ make it any easier to
maintain or extend.

There is no replacement for careful thought (including foresight) and
pragmatic design.

------
fat0wl
I think a big issue is that a lot of these rules need to be revised as
language features evolve. A lot of productive paradigms are no longer "pure"
code.

I see a lot of comments kind of pooh-poohing annotations, but one of the better
Java devs I know is convinced that annotations are the solution to code
readability/overengineering in Java -- in essence, custom annotation
processors can replace both the need for interfaces and abstract classes.

One of the biggest problems I see is simply that the schism-ing of modern
design paradigms means that debugging tools have to play catchup and therefore
make code seem a bit less linear. But the reality is, through
IoC/AOP/annotations, developers are often reducing the number of interfaces to
traverse and making the code more readable, while at the same time actually
making it _more_ generic (your class doesn't have to conform to so many
standards if you can tack the annotations on whatever fields/methods you
want). Should someone be introducing a proxy layer for every class just in
case they need to fit it into a more advanced design in the future? Or would
it be easier to just code more literally while language/container designers
work on a more seamless replacement method?

In a way, it does just seem like some of these new techniques are a hacky way
of forcing FP into OOP. Lots of different design paradigms playing nicely
within the same VMs and ecosystems is a nice problem to have, though. :)

------
moomin
I came to the conclusion a while ago that IOC containers are a real "two
problems" solution. I still heavily use constructor injection in my code, but
I wire the constructors by hand. Keeps you honest and actually forces you to
think through abstractions more clearly.
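
A sketch of what hand-wiring looks like in practice (names invented for
illustration): every class declares its dependencies in its constructor, and
one composition root assembles the whole graph with plain `new`.

```typescript
interface Clock {
  now(): number;
}

class SystemClock implements Clock {
  now(): number { return Date.now(); }
}

class Logger {
  constructor(private clock: Clock) {}
  line(msg: string): string {
    return `${this.clock.now()} ${msg}`;
  }
}

class OrderService {
  constructor(private logger: Logger) {}
  place(id: string): string {
    return this.logger.line(`placed ${id}`);
  }
}

// The composition root: the only place that knows concrete
// types, and the whole dependency tree is visible at a glance.
function compositionRoot(): OrderService {
  const clock = new SystemClock();
  return new OrderService(new Logger(clock));
}
```

Tests wire the same constructors with fakes, e.g. `new Logger({ now: () => 1 })`,
with no container involved.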

~~~
ecoffey
Exactly. You can follow SRP, use clean constructors and do it by hand. An IoC
container is a nice /mechanical/ helper to do the wire up for you. I really
dislike that the majority (all?) of the javaland IoC containers work with
annotations; I believe that your object graphs should have no idea if they
were built "by hand" with manual calls to new, or resolved from a container.

~~~
emsy
Uncle Bob imho has a pretty good strategy for this problem:
[https://twitter.com/unclebobmartin/status/308983161143058432](https://twitter.com/unclebobmartin/status/308983161143058432)

~~~
UK-AL
He basically just described what a container does. You put objects in the
container in main, then it resolves everything else.

~~~
emsy
That's exactly what it's not. A container is a dependency in itself. Secondly,
the wiring of the dependencies happens in one place in the code, not littered
in the codebase or a proprietary, obtuse XML file. And lastly, a loose
container tends to be abused. When a dependency can be included with a simple
annotation, everything depends on everything.

~~~
UK-AL
Errr, you set up the container in one place, normally main or somewhere at
the start. Very few containers are xml only these days.

There's normally section at the start like

container.RegisterType<IMessageQueue, MSMQMessageQueue>();

container.RegisterType<IGeocoder, GoogleGeocoder>();

Modern containers don't even use annotations, they just scan the constructor
parameters.

~~~
moomin
Yes, but they still magically figure out what goes into what. In the case that
you've got a relatively flat structure (services injecting into lots of
handlers) this is convenient. When you're building a complex tree, it reduces
your understanding of the structure of your code.

------
ollysb
I always struggled to understand the appeal of the open/closed principle.

"The idea was that once completed, the implementation of a class could only be
modified to correct errors; new or changed features would require that a
different class be created."[1]

This sounds a lot like bolt-on coding, always adding code rather than
assimilating new features into a codebase. This doesn't seem like a
sustainable strategy at all. Yes you don't risk breaking any existing
functionality but then why not just use a test suite? The major problem though
is that instead of grouping associated functionality into concepts (OO) that
are easy to reason about, you are arbitrarily packaging up functionality based
upon the time of its implementation... (subclassing to extend).

[1]
[http://en.wikipedia.org/wiki/Open/closed_principle](http://en.wikipedia.org/wiki/Open/closed_principle)
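
A rough sketch of the principle as usually practiced today (names invented
for illustration): existing code is closed to modification, and new behavior
arrives as a new implementation of a stable interface rather than an edit.

```typescript
interface DiscountRule {
  apply(total: number): number;
}

class PercentOff implements DiscountRule {
  constructor(private percent: number) {}
  apply(total: number): number {
    return total * (1 - this.percent / 100);
  }
}

class FlatOff implements DiscountRule {
  constructor(private amount: number) {}
  apply(total: number): number {
    return Math.max(0, total - this.amount);
  }
}

// "Closed" code: adding a new kind of discount never touches
// this function, only adds another DiscountRule class.
function price(total: number, rules: DiscountRule[]): number {
  return rules.reduce((t, r) => r.apply(t), total);
}
```

Whether that is elegant extensibility or the bolt-on coding described above
depends largely on whether anyone ever writes a second rule.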

~~~
jt2190

      > ...you don't risk breaking any existing functionality but 
      > then why not just use a test suite?
    

The examples fail to make explicit that there are two programmers: One is
building and distributing a library, the second is building an application
using that library.

The library programmer can easily distribute the test suite so that the
application programmer can run the tests, but that doesn't change the fact
that if the library programmer changes an object's interface, it breaks the
application programmer's code. By committing to keep the old object's
interface intact, the library programmer is giving the application programmer
time to migrate their code to the new objects.

~~~
platz
I like your library/application distinction.

For applications I'm not sure open/closed makes as much sense -
[http://codeofrob.com/entries/my-relationship-with-solid---
th...](http://codeofrob.com/entries/my-relationship-with-solid---the-
big-o.html)

~~~
jt2190

      > For applications I'm not sure open/closed makes as 
      > much sense.
    

I agree with you. If the same programmer (or team) is maintaining "both sides"
of an object's interface, i.e. both implementing the behavior of the object
AND consuming the object in their application code, I think we can assume that
they'll know that if they change one side they'll need to immediately change
the other.

I'm not sure I agree with Rob Ashton's points, though. In his blog post, Rob
trivializes the utility of third-party libraries:

    
    
        > * These [libraries] are either replicable in a few 
        >   hours work, or easily extended via a pull request.
    

This is simply not the case with any truly useful library. Useful libraries
often represent years of careful design work and debugging. (Think networking
libraries, UI frameworks, etc.)

He also underestimates the amount of time it takes to continuously change
application code to keep up with breaking changes from third-party libraries:

    
    
        > * These [libraries] can be forked for your 
        >   project and changes merged from upstream with 
        >   little effort.
    

Again, if the library distributes a breaking change, it may require many, many
hours of code changes and re-testing to make sure everything's still working
properly. Hours that could be spent building new features. For that kind of
tradeoff there'd better be a damn good reason for the change: Improved
security or performance or ease of use.

------
sophacles
In general I think that a lot of these design "principles" are just
mislabeled. A better label is generally "rough hewn guideline to use in a
first pass at design". Many of them contradict each other, or even themselves,
especially when applied by rote beyond the point of sensibility.

Take for instance DRY - if you follow it too far, you end up with
InterfaceFactoryFactoryFoo. And of course all those FactoryFactories start to
look like violations of DRY anyway.

Or the over-application of SRP ends up with 40 classes that are tiny slices of
something that could easily be 1 class.

Amusingly both are the result of myopic application, going fractal if you
will, on the principle, rather than setting a decent "scope" for the
application of the ideas.

Further as you go through design and implementation, you find places where the
design abstractions were wrong, and the "single thing" or "unrepeated task" is
violated, in the large (rather than in the tiny) and you have to accept it or
do some refactoring. Such is life.

None of these things takes away the value of DRY or SOLID or any of the other
design principles - it's just that there is a very hard orthogonal problem of
"proper scoping" for these principles.

~~~
dragonwriter
> Take for instance DRY - if you follow it too far, you end up with
> InterfaceFactoryFactoryFoo. And of course all those FactoryFactories start
> to look like violations of DRY anyway.

Right. DRY runs into limits when you use languages with limited expressiveness
-- and, particularly, Java-like class-oriented languages where classes are
not, themselves, first-class are problematic here.

The problem isn't with the DRY principle, it's with a language that doesn't
really let you follow it, because certain things cannot be effectively
abstracted out into reusable library code and require boilerplate. Most of
these things are not problems with, e.g., Lisps.

~~~
TeMPOraL
> _Most of these things are not problems with, e.g., Lisps._

This is where the Lisp macros start to shine - you can DRY up the code
structure itself. The need for that doesn't come that often (I'm personally in
the camp of avoiding macros until you're really sure they're the best tool for
the job), but when you are really starting to get sick of the repetitiveness
that obscures the intention behind your code, macros are a real godsend.

------
jowiar
Reading this article and its sequel, I feel like the author is handwaving one
key bit. I absolutely agree with his position on dependency elimination as a
primary goal, but by saying "Oh, a class that operates on a dependency is
hard, so let's not write those", and "We're not going to deal with interfaces"
he's punting on the entire problem -- handwaving the hard bits and then
ignoring the fact that they exist.

At some point, your code does have dependencies. The entire purpose of an
interface is to be able to specify what your dependencies are -- to be able to
say "This is the smallest thing I need in order to be able to work". When
untangling dependencies, adding that bit in there makes it very clear what the
seams are -- where you can say "I depend on something that does Foo -- Feel
free to replace it", rather than "I depend on this thing that comes entangled
in its own network of dependencies".
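
That "smallest thing I need" idea is easy to make concrete (names invented
for illustration): the consumer declares a narrow interface as its seam, and
anything satisfying it can be swapped in.

```typescript
// The seam: the smallest capability Signup needs, not the
// whole mail/SMS subsystem that might sit behind it.
interface Notifier {
  notify(user: string, message: string): string;
}

class Signup {
  constructor(private notifier: Notifier) {}
  register(user: string): string {
    // ...create the account, then announce it via the seam.
    return this.notifier.notify(user, "welcome");
  }
}

// Any implementation will do; this fake exists so the seam
// can be exercised without a real delivery channel.
class FakeNotifier implements Notifier {
  notify(user: string, message: string): string {
    return `${user}: ${message}`;
  }
}
```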

------
bjornsing
> It came from really important people in our field.

I must say that's my _least_ favorite argument as to why something is
important...

~~~
pixeloution
Those people became important because of good ideas; the ideas are not good
because they are important.

~~~
Retra
That goodness is also relative, and good ideas will often expire if you give
them time. Conflating importance with goodness makes it hard to move into the
future (but it also means you'll have to make your case much more clearly).

------
kasey_junk
I personally thought the follow-up to this article, available at:

[http://qualityisspeed.blogspot.com/2014/09/beyond-solid-
depe...](http://qualityisspeed.blogspot.com/2014/09/beyond-solid-dependency-
elimination.html)

was much better than this one.

------
kornakiewicz
I would add "don't repeat yourself" as another not-so-good rule of thumb. In
one of my past projects, one of the most serious problems we faced was too
much generalisation in the early phase of development, when the requirements
were still quite straightforward. When we later met a real need to use the
"advantages" of our generalisation and avoidance of repetition... well, it
didn't work smoothly, and even maintenance of the over-engineered libraries
was harder.

Of course I am not saying that we should copy-paste everything, but adding a
lot of layers of abstraction doesn't seem neat either.

------
phamilton
Personally, I find that taking a very aggressive approach to the Liskov
substitution principle is the most helpful rule of thumb. I follow it to the
point of practically never using inheritance. If you are using inheritance to
modify behavior, then you are probably ignoring LSP and making your code
"unintelligible".

------
rdez6173
I think this is about finding a balance between pragmatism and perfectionism.
We should strive to find the simplest, cleanest solution to problems and
continually refactor to keep complexity at bay. The SOLID guidelines,
practically applied, may help in achieving that goal, so they are worth having
in your toolbox.

------
exabrial
The whole point of SOLID was "Only Apply When Necessary." Not everything needs
an interface.

------
UK-AL
I think this just highlights the ridiculousness of software development.

You can get two highly skilled, well renowned software developers and they can
completely disagree.

How are you meant to objectively evaluate code quality of developers then?

~~~
Karunamon
Because software development is more art than science in a number of ways, and
because it's not equal to much of anything else.

Suggested reading:
[http://thecodelesscode.com/case/154](http://thecodelesscode.com/case/154)

