
I don't love the single responsibility principle - sklivvz1971
http://sklivvz.com/posts/i-dont-love-the-single-responsibility-principle
======
thibauts
This obviously is mostly smoke and mirrors. Uncle Bob recycles Separation of
concerns [1] and overshadows it in the process. You won't define what good or
bad boundaries in code are any more than you will define "5 easy rules to
author a great piece of fiction". The computer doesn't give a damn about separated
concerns, it's all about communication between humans. As such it is a matter
of feeling, dare I say emotion, and not a matter of laws and rules. Ultimately
our ability to break problems into sub-problems tends to be bounded by the words
we have at our disposal.

The TCP/IP stack example in the Wikipedia article linked below hints at SoC
pretty well in my opinion. One layer allows the layer above to disregard
details about the physical link. The next layer makes cross-network addressing
transparent. The next one allows apps to disregard packet ordering and loss,
and so on. Each layer solves a problem for the one above. Yet a single layer
can have multiple responsibilities. This is all about cognitive load.

A system described in code is nothing more than a proposed consensus around
tools, names, boundaries, paradigms to be used to talk and think about a
solution to a problem. The ultimate measure of success is this: our ability to
tame the binary beast. If "change" plays a role (it sure does) it's probably
not the main or only metric. Well ... when you're not selling "ability to
change" consulting and books to software companies, this goes without saying.

[1]
[http://en.wikipedia.org/wiki/Separation_of_concerns](http://en.wikipedia.org/wiki/Separation_of_concerns)

~~~
zenbowman
Great post. I think this is the fundamental thesis proposed by Abelson and
Sussman in SICP. You build a system in layers, each relying on the one below.

The reason you do this is to manage cognitive load in humans, not "coupling"
in computers.

~~~
thibauts
It is.

 _First, we want to establish the idea that a computer language is not just a
way of getting a computer to perform operations but rather that it is a novel
formal medium for expressing ideas about methodology. Thus, programs must be
written for people to read, and only incidentally for machines to execute.
Second, we believe that the essential material to be addressed by a subject at
this level is not the syntax of particular programming-language constructs,
nor clever algorithms for computing particular functions efficiently, nor even
the mathematical analysis of algorithms and the foundations of computing, but
rather the techniques used to control the intellectual complexity of large
software systems._ [1]

[1] [http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-7.html](http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-7.html)

------
romaniv
This article speaks about one of the many, many reasons I don't like Robert
Martin's approach to programming. If some principle results in a handful of
single-method classes that don't really do anything on their own, the
principle is not a good basis for design.

I find the author's alternative much more useful in practice.

~~~
jdminhbg
> If some principle results in a handful of single-method classes that don't
> really do anything on their own, the principle is not a good basis for
> design.

Absolutely agree. Proponents of approaches like this tend to only worry about
intra-object complexity, and ignore the fact that a vast, complicated object
graph is also hard to reason about.

~~~
userbinator
_only worry about intra-object complexity_

That reminds me of this, not sure if it's satirical or not:
[http://www.antiifcampaign.com/](http://www.antiifcampaign.com/)

Basically it's advocating removing if/switch statements and replacing them
with polymorphic method calls. I understand that polymorphism has its value,
but think that it's only valuable when, for lack of a better phrase, the thing
that's being polymorphosed is "big and varied" enough that it makes sense to
impose this extra level of indirection. The fact that it's hard to explain
when something is worth it is, to me, justification enough that principles
shouldn't be decided arbitrarily on that basis.

~~~
sunir
I don't understand that page in the slightest, but a lot of if-cascades can be
factored more cleanly using polymorphism.

[http://c2.com/cgi/wiki/wiki?PolymorphismVsSelectionIdiom](http://c2.com/cgi/wiki/wiki?PolymorphismVsSelectionIdiom)

(Disclaimer. I wrote the top of that wiki page in a former life.)

~~~
userbinator
_However, if you already have a class type for each key_

I think that's precisely what I was trying to say - polymorphism makes sense
when you already have a bunch of classes to do it with, and classes which
already contain lots of other fields and methods; it doesn't make sense to
create a bunch of classes just to use polymorphism.

~~~
nardi
Often, though, a series of complicated if statements is hinting at a type
system for your objects that hasn't yet materialized in your code. I find it's
a good idea to always look at cascading if statements and switch statements
and ask, "Would this be cleaner if I reified these concepts as types?"

[http://en.wikipedia.org/wiki/Reification_(computer_science)](http://en.wikipedia.org/wiki/Reification_\(computer_science\))
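
A hedged sketch of that reification (the Notification/Email/Sms names are
invented for illustration, not taken from the linked pages): each branch of
the cascade becomes a type, and adding a case means adding a class rather
than editing every cascade.

```python
# Before: every call site repeats the same cascade over a "kind" string.
def send_before(kind: str, message: str) -> str:
    if kind == "email":
        return f"email: {message}"
    elif kind == "sms":
        return f"sms: {message}"
    raise ValueError(kind)

# After: the concept "kind of notification" is reified as a type.
class Notification:
    def send(self, message: str) -> str:
        raise NotImplementedError

class Email(Notification):
    def send(self, message: str) -> str:
        return f"email: {message}"

class Sms(Notification):
    def send(self, message: str) -> str:
        return f"sms: {message}"

def send_after(channel: Notification, message: str) -> str:
    # Dispatch is polymorphic; no cascade to keep in sync anywhere.
    return channel.send(message)
```

Adding a "push" channel now means one new subclass instead of a new `elif`
in every cascade.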

~~~
alttab
This is the single most important trick for factoring out shitty code. I
cannot believe how many times reification collapsed complexity in our code
base, or how not using it was the source of bugs.

If you have cascading ifs, there is a good chance there is a huge set of ifs
for _every place_ this type system is missing. Meaning, if you wanted to add
another "case" to a feature, you are modifying cascading ifs in 5-10 places,
not even just one.

Wrapping up all of this code into an implementation of an interface that is
"hooked in" at each contact point allows you to add a cohesive new use case by
writing a new implementation of that interface, instead of "forgetting 2
places in the code" and causing massive bugs because of it.

It also, of course, makes it easier to test!

~~~
platz
So to reify a type involves making an abstraction, which is odd because reify
seems the inverse of making an abstraction...

~~~
derefr
The abstraction already existed in your mental model. You're reifying it by
making it something the code can refer to/reflect on/pass around.

~~~
Arkadir
The only difficulty is naming the abstraction :)

------
usea
I like many of the author's points. Pragmatism, thinking instead of blindly
following principles, pushing back against size as a metric for measuring
responsibility. I think Robert Martin's work absolutely deserves examination
and critique. However, I don't share the author's definitions of simple and
complex.

 _Stating that "binding business rules to persistence is asking for trouble"
is flatly wrong. Au contraire, It's the simplest thing to do, and in most
cases any other solution is just adding complexity without justification._

I don't feel that increasing the class count necessarily increases complexity,
nor do I feel that putting several things into one class reduces it. A dozen
components with simple interactions is a simpler system than a single
component with which clients have a complex relationship. My views align more
closely with those expressed [1] by Rich Hickey in Simple Made Easy.

Classes as namespaces for pure functions can be structured in any way; they
don't have any tangible effect on complexity. "Coupling" is irrelevant if the
classes are all just namespaces for pure functions. I also find that most data
can be plain old data objects with no hidden state and no attached behavior.
If most of your code base is pure functions and plain data, the amount of
complexity will be fairly small. As for the rest, I think the author's
example of maximizing cohesion and the SRP are functionally identical. They
both recommend splitting up classes based on responsibility, spatial or
temporal coupling, or whatever other metric you want to use. Personally I
prefer reducing the mingling of state, but I think there are many roads to the
same place. Gary Bernhardt's talk Boundaries [2] covers this pretty well.

[1]: [http://www.infoq.com/presentations/Simple-Made-Easy](http://www.infoq.com/presentations/Simple-Made-Easy)

[2]:
[https://www.destroyallsoftware.com/talks/boundaries](https://www.destroyallsoftware.com/talks/boundaries)
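
One possible sketch of the "pure functions plus plain data" style described
above (the Employee/gross_pay names are invented for illustration, not from
the article):

```python
from dataclasses import dataclass

# Plain data: no hidden state, no attached behavior.
@dataclass(frozen=True)
class Employee:
    name: str
    hourly_rate: float
    hours: float

def gross_pay(e: Employee) -> float:
    # Pure: the output depends only on the input; nothing is mutated.
    return e.hourly_rate * e.hours

def with_raise(e: Employee, pct: float) -> Employee:
    # "Mutation" returns a new value instead of changing state in place.
    return Employee(e.name, e.hourly_rate * (1 + pct), e.hours)
```

Because `Employee` is frozen and both functions are pure, there is no state
mingling to reason about; "coupling" reduces to plain function signatures.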

~~~
joevandyk
Unfortunately here, Rails encourages putting each class into a separate file,
so you have 10 classes spread over 10 files, which does increase complexity.

I dislike having a class/module per file.

~~~
kyllo
In Django (the closest thing Python has to Rails) the convention is to put all
your models in one models.py file. I also prefer it this way.

~~~
tragic
Having worked with both, there's a trade-off. Given that in Django you're
(mostly) explicitly importing classes and modules rather than autoloading,
it's handy to have them all in one place. OTOH, when your project grows, you
end up with enormous model files (especially if you follow the fat models/thin
views pattern). So you then have to split them into different apps, so
fragmentation slips in eventually anyway. (In a Rails project, unless you're
bolting on engines and such, all your models are at least in one _folder_.)

Where I definitely do prefer Django in this regard is that models declare
their data fields, rather than them being in a completely different part of
the source as in AR (not Mongoid, I now realise). Do I remember the exact
spelling I gave to every column when I migrated them months ago? No. It's good
to be able to see it all in one place rather than having an extra tab to cycle
through. I don't see any practical benefit from decoupling here.

~~~
kyllo
Especially since the Rails way is not "decoupling" in any real sense.
Splitting tightly coupled code into multiple files != decoupling.

I also like that in Django, you declare the fields on the models first and
then create the db migrations from them, rather than writing a db migration
first to determine what fields the models have.
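
The model-first idea can be sketched without Django at all. This is a toy,
not Django's actual API; the Field/Model/Employee names are invented. Fields
are declared on the class, and the schema is derived from them by
introspection rather than written by hand first:

```python
class Field:
    def __init__(self, sql_type: str):
        self.sql_type = sql_type

class Model:
    @classmethod
    def create_table_sql(cls) -> str:
        # Derive the schema from the declared fields, in declaration order.
        cols = ", ".join(
            f"{name} {f.sql_type}"
            for name, f in vars(cls).items()
            if isinstance(f, Field)
        )
        return f"CREATE TABLE {cls.__name__.lower()} ({cols});"

# The model is the single source of truth for its fields.
class Employee(Model):
    name = Field("TEXT")
    salary = Field("INTEGER")
```

A migration tool built this way diffs the declared fields against the
current schema, which is roughly what Django's makemigrations does.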

~~~
tragic
Indeed, decoupling is probably the wrong word here: I haven't seen an ORM
implementation that was not tightly coupled to the database layer, which in
the end is surely the point of an ORM - to represent stuff from the database
in application code. (I know some people consider this a bad abstraction, but
whatever.)

South/1.7 migrations is definitely the best way of the two to manage that
coupling. Rails's charms lie elsewhere.

~~~
kyllo
Right, and the debate raging in the Rails community now is whether your
business logic should be in your models at all, or whether it should be
extracted into plain old ruby objects, separating your domain model from your
data model. The reason: the OOP purists see it as a violation of the Single
Responsibility Principle -- an object should have only one reason to change,
and the models are tightly coupled to the database schema, so they have to
change whenever the schema changes. Plus, if you put business logic in them,
you need to start up a database just to test it.

Meanwhile a lot of the practically minded developers like DHH just accept that
their objects will be tightly coupled to the database and just deal with it,
claiming that anything else would be adding unnecessary layers of indirection.

I am pretty new to Django, but I get the impression that it's not so hard to
just not put your business logic in models.py, and put it in separate classes
of plain old python objects instead. Maybe that's why I haven't heard about
this debate playing out in the Django community the way it is in the RoR
community...

------
wellpast
>> The future is pretty irrelevant, so asking to design based on future
requirements is uncanny.

The future is irrelevant? That's exactly what design is for -- to protect you
from the future. If you didn't care about the future there would be no need to
design at all.

A good way to evaluate various candidate designs is to imagine future use
cases and ask which design holds up better. A good design will respond really
well (i.e., require fewer modifications) to novel use cases. Many people think
this means designing around future use cases from the start, but you really
just need to design with good principles. Future use cases are, however,
useful for objectively evaluating designs and resolving arguments, such as:
which design is better for the futures we care about?

>> Stating that "binding business rules to persistence is asking for trouble"
is flatly wrong. Au contraire, It's the simplest thing to do, and in most
cases any other solution is just adding complexity without justification.

Everything is a tradeoff. In many cases (hacking, etc.) it doesn't make sense
to avoid the simplest thing. But if you are building a project that's
long-lived, I can give you countless examples of how coupling business rules
to model objects resulted in pain and copious paper cuts.

For me, I'll always decouple them. Why? Because it costs me nothing extra to
do it and I have internalized the value in this principle from experience.

>>> Not all applications are big enterprise-y behemoths that benefit from
Perfect 100% Decoupling™

You're right. It depends on how much you care about your future. Do you want
your codebase to respond to the future with agility, or are you okay with
increasing your technical debt?

>>> Therefore, classes should be: small enough to lower coupling, but large
enough to maximize cohesion.

Not sure I understand. The author seems to suggest these two are in
contention with each other.

>>> Coding is hard and principles should not take the place of thinking.

Absolutely. There are many cases where the rules are to be broken. Systems
with too much flexibility are just as bad as monolithic ones. It's an art what
we do.

~~~
hderms
I think that putting cohesion and coupling on a one-dimensional axis is kind
of disingenuous. There are an infinite number of design changes that could be
made; some of them increase coupling, some of them increase cohesion, and
there is every variation in between the two variables. The best designs will
find cohesion and coupling at local maxima.

~~~
kasey_junk
I'm not saying you are wrong, but most of the literature indicates that high
cohesion is correlated with loose coupling. The entire industry treats them as
an axis so I don't think it is disingenuous for the author to do so.

~~~
hderms
I was a bit unclear: my general point is just that you can decouple code in
ways that don't necessarily increase cohesion, and you can decrease cohesion
in ways that don't necessarily increase coupling. If you are designing a
system with rational interfaces and layers of abstraction, then most of
the time you will find these two variables have a very direct relationship.

------
d0m
I rarely ever use classes anymore. My life is complicated enough; I like my
code to be simpler. I used to be proud of my complex class hierarchies and
clever designs... now it's simple objects and mostly functions. I don't and
won't argue with anyone who prefers class hierarchies, but it really annoys
me when I need to spend 5 minutes wrapping my mind around all those class
relationships where simple imperative (or functional) code would have done
the job perfectly.

Most of the big design philosophies are about "building for the future", but
the truth is that we're almost always wrong about it. And thus, most of the
time, the code needs to be rewritten. And simpler, more straightforward code
is much easier to rewrite or refactor.

~~~
CmonDev
I rarely ever use functions anymore. My life is complicated enough; I like my
code to be simpler. I used to be proud of my numerous functions and clever
designs... now it's simple objects and mostly classes. I don't and won't
argue with anyone who prefers plain functions, but it really annoys me when I
need to spend 5 minutes wrapping my mind around all those functional
relationships where simple OOP (or imperative) code would have done the job
perfectly.

~~~
kyllo
It sounds like you're trying too hard to reason about functions as if they
were, well, objects. You don't need to wrap your mind around "functional
relationships" because functions don't have relationships. They just take
arguments and return values (which can be other functions). Functional
programming is not more complicated, if anything it's less complicated. It's
just different, so it requires un-learning a lot of the things you learned
about procedural and object-oriented programming. And if you have a good
compiler/typechecker it will do the job with much less potential for bugs.

------
jacquesm
Typically I write something 3 times. The first is a proof of concept, I don't
care much about how it works as long as it works. The second time I put a lot
more care into the 'how' but I usually still get a few things wrong enough
that they are out of place and feel awkward to maintain or extend. The third
time I have a really good handle on the problem, where the tricky parts are
and how to tackle them. That third version will then live for many years,
sometimes decades. Some of the stuff I wrote like that in the '80s is still
alive and well today (or some descendant thereof).

I try very hard not to get too attached to code, refactor aggressively, and
will throw it out when I feel it needs redoing. For larger codebases I tend to
rip out a chunk between a pair of interfaces and tackle that section
independently. I'll change either the code _or_ the interfaces but never both
in the same sitting.

------
Cymen
So Uncle Bob talks about SOLID. There is one S there and then we have O, L, I
and D. Applying SOLID is a balance of all of these things. We are craftsmen
applying knowledge and skill to code. Trying to figure out SRP in isolation is
not practical. When you attempt to apply SRP in balance with the other
principles, you get more of the give and take that matches reality. There are
trade offs and deciding what those trade offs are is your responsibility as a
developer.

What the blog post amounts to is more of an "I do things this way" stance.
Now I'm on a team and my teammate wants to do things a different way. Instead
of realizing I'm on a team and that it's a matter of balancing abilities to
make a cohesive software product, I'm going to bicker about why my teammate
is wrong. And I'm going to write a big blog post about it, looking at SRP in
isolation instead of applying the same cohesive viewpoint (that I claim my
teammate is missing) to understanding SRP.

~~~
superdude264
I'd love to see a pragmatic take on the open/closed principle. I've seen a
colleague extend a class, override the method in question by copying in the
code from the super-class, then modify it in the sub-class. Afterward, he
replaced all instantiations of the old class with the new class. 'Open
for extension / closed for modification' seems like it would lead to a
nightmare in a few years.

~~~
kasey_junk
So that is a really dogmatic approach to open/closed that is from the original
formulation like 25 years ago. I've never encountered someone who would follow
that practice when every instance needed to be changed (I assume Meyer would
have construed that as a bug and allowed modification in that case). Even the
less dogmatic Meyer definition of open/closed most people have left behind,
as it relies on implementation inheritance, which is decidedly out of favor
(and rightly so, I believe).

In the more modern reading of the open/closed principle, if you have multiple
different variants on the behavior of a thing, you compose those variants in
via an abstract interface. Then, as you need to add more variants, you
needn't change the original code any more; you only introduce more concrete
implementations of the abstract interface, composed in at instantiation
time as necessary. This approach is especially useful as your variant behavior
grows. That is, a single boolean switch is probably easier to reason about
than 2 implementations on an arbitrary interface. But once you get to 3 it
becomes less obvious which is better. Any more than that and I usually reach
for an interface without too much thinking about it.
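
A minimal sketch of that composition, with invented names (PricingPolicy,
Order); the point is only that adding a variant never edits the original
class:

```python
from abc import ABC, abstractmethod

class PricingPolicy(ABC):
    @abstractmethod
    def price(self, base: float) -> float: ...

class Standard(PricingPolicy):
    def price(self, base: float) -> float:
        return base

class Discounted(PricingPolicy):
    def price(self, base: float) -> float:
        return base * 0.9

class Order:
    # Open for extension: any PricingPolicy can be composed in.
    # Closed for modification: Order never changes when variants are added.
    def __init__(self, base: float, policy: PricingPolicy):
        self.base = base
        self.policy = policy

    def total(self) -> float:
        return self.policy.price(self.base)
```

A third variant (say, a loyalty discount) is one new subclass passed at
instantiation time, which is roughly the point at which this beats a
boolean switch.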

------
unclebobmartin
SRP is very simple. If two different people want to change a class for two
different reasons, then pull those reasons into two different classes.

That is the SRP.

Example: A class that analyzes a data stream and prints a report. The data
analysis will interest one group of people. The format of the report will
interest another, different, group. The first group will ask for changes to
the algorithms. The second will ask for changes to the format. The principle
says to separate those two concerns.
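
A minimal sketch of that example (the class and method names here are my own
invention, not Uncle Bob's):

```python
class StreamAnalyzer:
    # Changes when the analysis group asks for different algorithms.
    def analyze(self, data: list[float]) -> dict:
        return {"count": len(data), "mean": sum(data) / len(data)}

class ReportFormatter:
    # Changes when the reporting group asks for a different format.
    def format(self, results: dict) -> str:
        return "\n".join(f"{k}: {v}" for k, v in results.items())

def report(data: list[float]) -> str:
    # The two concerns meet only here, at the seam.
    return ReportFormatter().format(StreamAnalyzer().analyze(data))
```

Either group's requests now touch exactly one class.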

This goes all the way back to David Parnas and the separation of concerns. His
papers that describe it are freely available on the web. I suggest that they
be studied, because they are full of wisdom and insight.

~~~
pjschwarz
Hi Bob,

>SRP is very simple...different people want to change a class for ...
different reasons...

Back in October 2009, in
[https://sites.google.com/site/unclebobconsultingllc/getting-...](https://sites.google.com/site/unclebobconsultingllc/getting-a-solid-start)
I asked you a question on the SRP and you replied as follows:

"SRP says to keep together things that change for the same reason, and
separate things that change for different reasons. Divergent change occurs
when you group together things that change for different reasons. Shotgun
surgery happens when you keep apart those things that change for the same
reason. So, SRP is about both Divergent Change and Shotgun Surgery. Failure to
follow SRP leads to both symptoms."

Has the avoidance of Shotgun Surgery taken a back seat?

Philip

~~~
unclebobmartin
Not at all. SRP is about enhancing the cohesion of things that change for the
same reasons, and decreasing the coupling between things that change for
different reasons.

------
jdminhbg
In an OOP design where objects already encapsulate both state and behavior,
describing the ideal state of affairs as "single responsibility" seems kind of
optimistic.

~~~
dragonwriter
I think "state and behavior" is a bad way of thinking about it; in OOP design
objects provide behavior. Its true that they encapsulate state, but the state
(ideally) is simply that state necessary to provide the intended behavior. The
behavior covers the area of responsibility, the state is part of the
implementation of the behavior.

~~~
tel
That's still a lot of state. How do you know an object is in the right state
to perform its behavior? Well, acting as some other object, you have to either
read the first object's state or model it internally. And now we're coupled on
state and behavior.

So what if the first object ensures that it's always in the right state to
perform its behavior? Well, now it's observationally equivalent to a
state-free actor. Why did you involve state in the first place?

Because languages make it hard to not do so?

------
dev360
This doesn't feel like a genuine analysis of SRP.

The OP asks "but why should a class have one single Reason To Change™?" Answer
is in the text - "If a class has more than one responsibility, then the
responsibilities become coupled. Changes to one responsibility may impair or
inhibit the class’ ability to meet the others. This kind of coupling leads to
fragile designs that break in unexpected ways when changed."

The case the OP makes for mixing business logic and persistence is really
ironic too. In thinking he's arguing _against_ SRP, the poor example he gives
actually argues _for_ it. Yes, validation and persistence _usually_
conceptually go together as one responsibility -- storing your data.
Calculating payroll (see PDF) is a whole different responsibility, better
suited to a separate abstraction. So the strawman argument is not really in
the PDF.

The book mentions the "Unbalanced" counterpoint already. See: "If, on the
other hand, the application is not changing in ways that cause the two
responsibilities to change at different times, then there is no need to
separate them. Indeed, separating them would smell of Needless Complexity.
There is a corollary here. An axis of change is only an axis of change if the
changes actually occur. It is not wise to apply the SRP, or any other
principle for that matter, if there is no symptom."

Now for the amazing TL;DR of the entire piece, it goes something like this:
SRP is not a clear and fundamentally objective design principle, so let me
offer another principle -- _huff_, _huff_ -- but don't worry, it's not
clear-cut (so don't follow it?!).

This doesn't lead anywhere. Talking to the co-worker more might have been
better than adopting a defensive attitude.

The big lesson here is, learn concepts like Cohesion vs Coupling, SRP, design
patterns, to give you a common vocabulary, but keep your mind open and try to
write code that is well organized and free of code smells. If anybody has
suggestions on how things can be improved, measure the suggestions on their
merits. Software development is an exercise in balancing all kinds of
different concerns -- pun intended :). Sit down, list the pros and cons of
the alternative approaches, and let the design that best addresses the
problem/constraints at hand win.

And even better, next time, if you want to save yourself some trouble, ask
others for their opinions and share your concerns with the code before you
even write a line of code. Then nobody will get 'religious' in a code review
and it will allow a team to focus on results.

------
hderms
To me it makes sense to think about how much state is being mutated in a given
class. If you have a class with 200 different properties that are changing
somewhat orthogonally to each other, you probably need to think things through
and separate concerns a bit. Still kind of an 'arbitrary principle' but
another way of looking at things. You want to encapsulate the moving parts but
you don't want to throw every moving part in a giant bag.

Designating a class as having a single responsibility is an easy way to ensure
that there is a minimal amount of state per class, but if the abstractions you
are creating with this principle aren't logical, then you are still left with
a lot of moving parts someone has to think about when making modifications.

All these principles mostly only work if the system is also being designed
rationally. Applying principles arbitrarily to a messy codebase will not
necessarily get you a good design (and probably not) but thinking about these
principles when making design decisions can sometimes be useful.

~~~
AnimalMuppet
More cynically: Trying to apply a set of principles (any set) instead of
thinking will likely lead to trouble.

~~~
hderms
Haha yeah, I think you have a point there. Perhaps programming is so
challenging it awes otherwise intelligent people and makes them reach out for
simple rules to simplify their lives.

------
userbinator
I've found that if you approach OOP from a more pragmatic point of view,
metrics like the sizes of classes mean very little. The point of classes is
to encapsulate code+data so they can be reused to avoid duplication, so it
feels obvious to me that if you see (non-trivial) functionality in a class
that is likely to be reused in the future, you extract it into another one;
but if you don't, then there's no point in doing so, because it will only
increase the overall complexity. Same for OOP in general - it's a tool; use
it when it makes sense and simplifies the design. Sometimes you don't need a
class, and sometimes a function doesn't belong in one because it does things
across several, so all the "which class should this function go in" questions
have a straightforward answer to me: if it's not immediately obvious which
class it goes in, it probably doesn't belong in one. I mainly use Asm/C/C++,
so I have the luxury of doing this, but I can see how some of the
"more-constraining" languages make this more difficult.

 _All the examples I see are one-way towards simply creating a million single
method classes._

That's a phenomenon I've definitely seen a lot, often with the accompanying
_obfuscation_ that the method bodies contain only one or two statements that
just call into some other methods. It may look like it's made the code
simpler locally, but all it's done is spread the complexity out over a wider
area and increased it. This isn't "well-designed" or
"straightforward", it's almost intentional obfuscation. I've seen this effect
most with programmers who were taught OO design principles very very early,
possibly before they even had a good grasp of concepts like procedures, loops,
and conditionals. "When all you have are classes, everything turns into an
object."

------
samrift
I agree that the SRP is certainly a subjective rather than an objective
principle, and possibly general guidance that can and should be broken in
specific circumstances. This article points that out, but rather than trying
to apply prescriptive guidance to make it more objective in specific
scenarios, the author seems to believe that its subjective nature is too
flawed to fix.

What's the issue?

> A good, valid principle must be clear and fundamentally objective. It should
> be a way of sorting solutions, or to compare solutions.

Okay, I'm listening. What is your alternative?

> It's not a clear-cut principle: it does not tell you how to code. It is
> purposefully not prescriptive. Coding is hard and principles should not take
> the place of thinking.

And... we're right back to subjective and general again. A straw man set up
only to be knocked down with an identical straw man.

Of course, reducing coupling and raising cohesion makes the class responsible
for less and less... So are we back at the author's interpretation of the
SRP? Seems like it to me.

------
kasey_junk
First let me say, I completely agree that the definition of the SRP as it
shows up in the Uncle Bob book is a little hard to understand. The wording is
off.

That said this entire article is essentially arguing FOR the SRP. The whole
point of the SRP (in conjunction with the rest of the SOLID principles) is to
decrease coupling and increase cohesion. In his refactor of class C we have an
example in the first case of a class that violates the SRP and an example in
the second of 2 classes that follow it. Excellent, the author and Uncle Bob
are both happy.

But what really bothers me about this article is the following: "Furthermore,
there is no reason to separate "business" logic from "persistence" logic, in
the general case. The large majority of Employee classes that people need to
write are likely to only contain fields and maybe some validation -- in
addition to persistence."

A) Please don't tell me what the large majority of things people need to write
are. You cannot possibly prove that assertion. In my experience, a 1-to-1
relationship between a class and a database table is a sign of either a very
simple problem space or a very poor design.

B) If you do happen to work in a problem space and are finding yourself
writing something that is a collection of fields, some validation and some
persistence, that is not an example of a violation of the SRP, it is an
example of you needing an EmployeeRecord and not an Employee. The difference
is simple, an Employee has complicated business logic in it and a record does
not.

This seems to be the central debate currently around Uncle Bob's tactics. Lots
of people seem to be writing PVC (Persistence-View-Controller) applications
and thinking they are writing MVC (Model-View-Controller) applications. It
then seems to be overkill to split out the persistence layer from the "model"
layer because there isn't much difference. If there isn't much difference you
didn't violate the SRP! Your Single Responsibility is persisting some data!
Your design is fine. Move on with your life. But on the other hand, if you
find yourself writing complicated business rules that are hard to test
because of the wiring required for your persistence, maybe you've violated the
SRP and should split them up a bit.

------
tunesmith
In the absence of finding the time to really tear it apart, it seems that
there are a few false choices in the article. I don't think a class that
fulfills SRP necessarily means a tiny single-method class. It could mean a
collection of methods that each do one conceptual thing, at one abstract level
of understanding, but where the class and collection of methods is still
cohesive.

The example of the client needing to instantiate B to pass it to A's
constructor seems like a poor example of tight coupling. The only reason to
instantiate B is because A needs it. The client doesn't literally need to know
about B, it simply needs to know how to construct/build A. This could be done
through a factory or service locator, or it could be done by A having B
autowired into it, which would free the client from having to instantiate B
directly.
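To make that concrete, here's a toy Python sketch of the factory option (classes A and B are placeholders, as in the article):

```python
class B:
    """The dependency the client should not have to know about."""

    def work(self) -> str:
        return "B's work"


class A:
    """Needs a B, but receives it rather than constructing it."""

    def __init__(self, b: B):
        self._b = b

    def run(self) -> str:
        return f"A using {self._b.work()}"


def make_a() -> A:
    # The factory, not the client, knows that A needs a B.
    return A(B())


# Client code only touches A and the factory:
a = make_a()
```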

I do, however, agree that SRP is poorly defined in most of the resources you
find, so if anyone has links of case studies of applying it properly, they'd
be interesting to review.

------
donatj
I'm not an expert, but I think he is misinterpreting what is meant by 'change'
entirely. My reading of it is that the class has only one reason to change
(state). Correct me if I'm wrong.

~~~
Fargren
The SRP, as I was taught it, means that for each class there must be only one
kind of change in the requirements of the program that requires a change in
the class. "Kinds of change" are quite vaguely defined, though.

------
patrickmay
From the linked article: "Fundamentally, the SRP principle is therefore a
class sizing principle."

This is incorrect. The SRP is a dependency management principle. It has
nothing to do with the size of classes (however that might be measured). The
goal of this principle, like the rest of the SOLID principles, is to minimize
the impact of changes.

------
palakchokshi
I think you are focusing on the wrong phrase in the definition of the SRP. You
are focusing on "reason for change" without the context of responsibility. The
way I read that definition is: if a class has more than one responsibility,
then making a change affecting any one of those responsibilities will require
the class to be recompiled, whereas if those responsibilities were broken up
into different classes, a change affecting one responsibility would not affect
the other classes.

Hence a bug fix does not constitute a "change" since it does not change the
responsibility of the class, neither does a refactor for code clarity or
performance improvements. The purpose/responsibility of "what" the class does,
does not change when you do any of those things.

"sometimes a class with more reasons to change is the simplest thing" Could
you give an example of this? I can't think of one instance.

"Stating that "binding business rules to persistence is asking for trouble" is
flatly wrong. Au contraire, It's the simplest thing to do, and in most cases
any other solution is just adding complexity without justification" Now this
statement is itself flatly wrong, precisely for the reason given in Uncle
Bob's example. Let's do a thought experiment. Suppose you have an Employee
class that is a black box to you. Someone else wrote that code and didn't put
any exception-handling logic in it. The Employee class does both business
logic and persistence control. When you tried to use the class there was an
error. Now which part of the class was faulty: the business logic or the
persistence control? Now suppose you needed to use the same Employee class in
another project, with a different set of business rules but the same
persistence control. You would not be able to reuse this Employee class; you
would have to create a new one and duplicate your persistence control code.
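Here's a minimal Python sketch of what the split buys you (the rule sets and the in-memory store are made up for illustration; a real store would be a database):

```python
class OvertimeRules:
    """Business rules for one project: time-and-a-half over 40 hours."""

    def weekly_pay(self, hours: int, rate: float) -> float:
        base = min(hours, 40) * rate
        extra = max(hours - 40, 0) * rate * 1.5
        return base + extra


class FlatRules:
    """Different business rules for another project: flat hourly pay."""

    def weekly_pay(self, hours: int, rate: float) -> float:
        return hours * rate


class EmployeeStore:
    """Persistence control, reusable unchanged under either rule set."""

    def __init__(self):
        self._rows = {}  # stand-in for a database table

    def save(self, name: str, hours: int, rate: float) -> None:
        self._rows[name] = (hours, rate)

    def load(self, name: str) -> tuple:
        return self._rows[name]
```

Swap the rules, keep the store: no duplicated persistence code.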

Design principles have been developed to make sure that the code we write is
maintainable, extensible, clear and comprehensible. Sure, as a single
developer working on a simple project, who knows what each of my classes does
and has a code base small enough that I don't need to separate the
application into layers (e.g. I call my database directly from my view because
I just need this one value and don't want to write a couple of classes for
that), I can indeed write classes that have more than one responsibility.
There's nothing to stop you, but it does violate the SRP, which will make it
tougher for someone else to come in and maintain or extend your code.

------
johnlbevan2
I didn't read the whole post, since it starts by stating that bug fixes,
performance tweaks and refactoring may be changes. Whilst true, that is
deliberately pedantic, and if an article fusses over such pedantry for more
than a paragraph without owning up to it being a deliberate extreme case for
illustrative purposes, I devalue the whole article and don't invest more time
reading it.

A change, in the context of the SRP, is a change of responsibility /
functional purpose.

Regarding the granularity to which you should take the SRP, the idea is that
it's not fixed; you change it as feels best for your code base. If you write
code without much logic (e.g. a basic CRUD application with minimal
validation), it's fine for the classes representing your tables' records to
also hold their logic, since at this stage the single responsibility is, say,
Employee. Once you start to get something more complex, the separation of
concerns becomes more important, so you need to break this apart into
EmployeeRecord and EmployeeValidation; perhaps more layers. That may seem
contradictory, as there are now two classes with two responsibilities where
before there was only one class with one and a bit responsibilities, but the
key is to be pragmatic. Don't write thousands of lines of code and multiple
classes if that introduces complexity with no payoff. However, if you have
one class that does too much, it'll become hard to maintain; and being aware
of the SRP will help you work out a sensible way to break that class down
into more comfortably manageable chunks.
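For instance, a Python sketch of that second stage (the class names follow my example above; the specific rules are invented):

```python
class EmployeeRecord:
    """Just the fields: fine on its own while the app is simple CRUD."""

    def __init__(self, name: str, email: str):
        self.name = name
        self.email = email


class EmployeeValidation:
    """Split out once the rules grow beyond 'minimal validation'."""

    def errors(self, record: EmployeeRecord) -> list:
        errs = []
        if not record.name:
            errs.append("name is required")
        if "@" not in record.email:
            errs.append("email looks invalid")
        return errs
```

While validation is one `if`, keeping it on the record is fine; once it's a page of rules, this split pays for itself.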

------
Arkadir
The Single Responsibility Principle is like Body Mass Index for code: it's
easy to measure (humans have an innate sense of what "responsibility" means),
but outside of extreme situations it is not precise enough to base serious
decisions on: there is a huge grey area where people do not agree on what
responsibilities are.

It's the same as EDD: it's fairly subjective, but it weeds out the extreme
cases.

The benefits of SRP are usually a side-effect of applying more objective rules
to the code.

DRY is the primary driving force. It pulls shared responsibilities out of
modules and leaves no doubt that those responsibilities were shared. It
detects recurring concepts and gives them a representation, thereby increasing
coupling.

The second force is to reduce access to private code: out of 100 lines of code
in your module, how many actually need to access that private concept on line
42? If the answer is "30", then what are those other 70 lines doing in this
module?

Apply both forces to a code base, and the SRP will appear out of nowhere.

------
strictfp
I always interpreted the SRP in terms of how well one can describe the
responsibility of the class.

If one has to write "This class does x and y and sometimes z if w", it's got
several responsibilities.

If one can write "This class does x", it's OK.

X can then be arbitrarily complex, so code size has nothing to do with it in
my world.

------
nercury
"The purpose of class is to minimize complexity" "The purpose of class is to
organize code" "A class should have one, and only one, reason to change"

What?

What about: "The purpose of a class is to encapsulate state"?

If a class has no inner fields, there is no reason for the class. There is no
state.

Having one or more fields in a class means they make sense when looked at all
together, as a "state". If some method changes something, the whole state
encapsulated by the class means something else.

It is possible to extrapolate all other guidelines from this basic starting
point.

~~~
DiscoBeat
Do you see "having a state" as "being able to change state"? Classes may be
used to represent concepts or objects from the real world. Some of them may
not have a changeable state, but they certainly have inner fields.

------
jbangerter
Likewise with many here, I agree with most of the author's points, especially
that one should think for one's self, which sklivvz did very well.

However, the fear that many little objects are a smell strikes me as
backwards. Unix systems are built with, and used through, many little
utilities. As a result, a few simple commands can be strung together to make
quick work of interacting with the system. Often, I find that the hardest
utilities to master are the ones with the broadest scope. My experience with
large objects is similar.

~~~
DiscoBeat
You've got a point. Let me add that little objects allow you to swap
inheritance for composition more easily.
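A quick Python sketch of what I mean (toy names, same behavior both ways):

```python
class Logger:
    """A little object with exactly one job."""

    def log(self, msg: str) -> str:
        return f"LOG: {msg}"


# Inheritance ties the service to its parent class:
class ServiceViaInheritance(Logger):
    def handle(self, req: str) -> str:
        return self.log(f"handled {req}")


# Composition: the same little object is injected instead,
# so it can be replaced without touching the hierarchy:
class ServiceViaComposition:
    def __init__(self, logger: Logger):
        self._logger = logger

    def handle(self, req: str) -> str:
        return self._logger.log(f"handled {req}")
```

The smaller the object, the cheaper that switch is.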

------
vinceguidry
I think the "don't mix persistence with biz logic" idea came largely from the
Rails world, where the ease of doing so led to a great deal of bloated code.
It's only slightly harder in Ruby to separate them, and cutting
ActiveRecord::Base out of your biz logic gives so many ancillary benefits
that you probably should. Other languages/platforms don't always make this
kind of separation easy or idiomatic.

~~~
dragonwriter
> I think the "don't mix persistence with bizlogic" idea came largely from the
> Rails world

I'm pretty sure I encountered that particular example of things that shouldn't
be coupled in OO design before Rails _existed_.

~~~
vinceguidry
I guess I wasn't clear enough. Certainly the _idea_ predated Rails, but I
think the heavy insistence you see pretty much everywhere on segregating them
probably came from Rails.

~~~
altcognito
"Multi-tier architectures are characterized by the separation of the user
interface, business logic and data access logic. Many organizations are
implementing multi-tier architectures for enterprise applications to realize
the two key benefits."

1996, and I am pretty sure this marketing-speak page reflects what had already
been talked about in the industry for quite some time.

[http://edn.embarcadero.com/article/10134](http://edn.embarcadero.com/article/10134)

------
th3iedkid
In most OO languages, like Java, classes are the only large-scale structuring
mechanism. A counter-example is OCaml, which provides both classes and a
sophisticated module system. So classes in most of these languages (like
Java) tend to get tied to more and more features (from a language
perspective), to which implementers are forced to adapt! Hence most of these
principles fail to see daylight in practice!

------
andystannard
I think of the SRP in terms of using classes with dependency injection. If you
are injecting an instance of a class, you only want it to do one thing, and
when changing the class being injected, you only want to change one piece of
behavior. If a class has more than one responsibility, injecting dependencies
becomes unmanageable.
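Sketching that in Python (the mailer example and names are invented; the point is how easy the swap is when the dependency does one thing):

```python
class SmtpMailer:
    """Production dependency: does one thing, sends mail."""

    def send(self, to: str, body: str) -> None:
        raise RuntimeError("no SMTP server in this sketch")


class FakeMailer:
    """Drop-in test double, possible because the interface is tiny."""

    def __init__(self):
        self.sent = []

    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))


class Greeter:
    # Because the mailer does exactly one thing, injecting and
    # swapping it is trivial.
    def __init__(self, mailer):
        self._mailer = mailer

    def greet(self, to: str) -> None:
        self._mailer.send(to, "hello")
```

If the mailer also owned templating and logging, every swap would drag all of that along with it.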

------
CmonDev
"If client code need to know class B in order to use class A, then A and B are
said to be coupled." - still better than the common 'Service Locator' anti-
pattern, where you aren't even aware of the dependency.

------
nraynaud
I generally go by file size now: after 200 lines, we need to find an OOP
excuse to split (interestingly, I almost never merge, but there might be some
thermodynamic reason for that).

~~~
rectangletangle
I do it this way simply because I don't like having to scroll too much.

------
enterx
How else would you add N new persistence options to your class that implements
persistence & the data model?

~~~
fleitz
I prefer to work on features rather than changing databases for the fun of it.

------
sillysaurus3
It seems like the best way to design a codebase is to redesign (rewrite) it
several times and then go with the most succinct design. For any given
problem, it's unlikely you'll design the codebase properly on the first try.
Also:

 _when one writes code, there are only real, present requirements. The future
is pretty irrelevant_

I'd disagree with this. The future of your codebase matters unless you're
writing a throwaway prototype. And when you rewrite a codebase N times, you'll
discover that there are ways of structuring it so that future tasks will
become far easier. E.g. for Lisp you'd refactor common patterns into utility
functions, and for C you'd carefully craft the interfaces between modules so
that they're unlikely to be used improperly / unlikely to be surprising.
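As a tiny Python stand-in for the Lisp point (the utility and its name are mine, just to show the shape of the refactor):

```python
def collect(items, keep, transform):
    """A recurring filter-then-map pattern, named once as a utility
    instead of being re-typed inline at every call site."""
    return [transform(x) for x in items if keep(x)]


# After the rewrite, call sites shrink to one expressive line:
evens_squared = collect(range(10), lambda n: n % 2 == 0, lambda n: n * n)
```

You only spot that this pattern deserves a name after writing it out a few times, which is the argument for the rewrites.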

------
fleitz
Fuck SRP, just read the resulting code.

Whichever code is more concise & readable is generally the winner.

(Note: a lack of understanding of algebra / calculus does not make the code
'unreadable', it just means the developer is innumerate)

~~~
kasey_junk
Unless it fails to meet the requirements or is written in such a way that will
require more maintenance...

~~~
fleitz
Well, I'm assuming we're comparing code that works.

Maintenance is generally a red herring; it's best to use statistical methods
to determine the likelihood of maintenance.

Usually if you're doing lots of maintenance you have other problems in your
code / workflow, such as mistaking your codebase for your database.

~~~
kasey_junk
"Well, I'm assuming we're comparing code that works."

That is an interesting assumption given that in many problem domains proving
that the code works as specified is the hardest problem.

"Maintenance is generally a red herring, it's best to use statistical methods
to determine the likelihood of maintenance."

I really dislike anyone who makes general claims about the entirety of
software development. I for one spend way more time changing existing code
than writing new code. I know that there are problem domains where that is not
the case so I don't recommend my methods for those spaces.

What are these statistical methods you are referring to?

"Usually if you're doing lots of maintenance you have other problems in your
code / workflow, such as mistaking your codebase for your database."

Or you are working in a problem domain that shifts a lot, or where
requirements specification is more expensive than deployment opportunity
costs, or in a legacy system.

~~~
fleitz
Proving your hand won't go through a wall is a difficult problem in quantum
physics, for most people they just push against the wall. I don't care whether
the code is provably correct, I care that it does what the user expects.

Actually 'change requests' are even easier when the requirements change
frequently, just provide an estimate in excess of when you think the next
change will be, then put your feet up and wait for the requirements to change
again.

PS. Changing requirements isn't 'maintenance', it's a change request.

PPS. Having to add code to add fields to a form means you spec'd your solution
around your forms instead of spec'ing your solution around solving the problem
of changing forms. (aka. you baked your problem domain into your code and now
you're fucked) (eg. you mistook your codebase for your database)

------
lugg
Size is, and has always been, at least in my opinion, a symptom of failing the
single responsibility principle, not the cause. It's only useful as an
indicator; you still have to look at the case and figure out whether the
function / class is doing more than one thing. And even then, a function /
class can do more than one thing, to increase cohesion and limit bloat /
complexity.

The general rule I use: does this block / section of code increase the
complexity of the overall purpose of the function or class? If it does, it
should probably move; likewise, if it isn't relevant to the overall
functionality / behaviour, it should also probably be moved.

Everything is a trade off in the tech world. You can't argue you are right or
they are wrong, only that you are right in this instance or that.

------
rimantas
Reminds me of this: [http://bendyworks.com/geekville/articles/2014/2/single-responsibility-principle-ios](http://bendyworks.com/geekville/articles/2014/2/single-responsibility-principle-ios)

Comments are the most interesting part :)

