I don't love the single responsibility principle (sklivvz.com)
123 points by sklivvz1971 1055 days ago | 120 comments

This obviously is mostly smoke and mirrors. Uncle Bob recycles Separation of Concerns [1] and overshadows it in the process. You won't define what good or bad boundaries in code are any more than you will define "5 easy rules to author a great piece of fiction". The computer doesn't give a damn about separated concerns; it's all about communication between humans. As such it is a matter of feeling, dare I say emotion, and not a matter of laws and rules. Ultimately our ability to break problems into sub-problems tends to be bounded by the words we have at our disposal.

The TCP/IP stack example in the Wikipedia article linked below [1] hints at SoC pretty well in my opinion. One layer allows the layer above to disregard details about the physical link. The next layer makes cross-network addressing transparent. The next one allows apps to disregard packet ordering and loss, and so on. Each layer solves a problem for the one above, yet a single layer can have multiple responsibilities. This is all about cognitive load.
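A toy sketch of that layering idea (in Python, with made-up names; this is nothing like real TCP/IP, just the shape of "each layer solves a problem for the one above"):

```python
# Toy sketch of layering (made-up names; nothing like real TCP/IP).
# Each layer exposes a simpler contract and hides one concern
# from the layer above it.

class PhysicalLink:
    """Bottom layer: moves raw frames; callers don't know how."""
    def __init__(self):
        self.wire = []
    def send_bytes(self, frame: bytes):
        self.wire.append(frame)

class Network:
    """Adds addressing, hiding the physical link's details."""
    def __init__(self, link: PhysicalLink):
        self.link = link
    def send_to(self, addr: str, payload: bytes):
        self.link.send_bytes(addr.encode() + b"|" + payload)

class Transport:
    """Adds sequence numbers so apps can disregard ordering."""
    def __init__(self, net: Network, addr: str):
        self.net, self.addr, self.seq = net, addr, 0
    def send(self, payload: bytes):
        self.net.send_to(self.addr, self.seq.to_bytes(4, "big") + payload)
        self.seq += 1

link = PhysicalLink()
app = Transport(Network(link), "10.0.0.2")
app.send(b"hello")
app.send(b"world")
print(len(link.wire))  # 2
```

Each class here has several responsibilities internally (framing, counting, encoding), but presents exactly one solved problem upward.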

A system described in code is nothing more than a proposed consensus around tools, names, boundaries, paradigms to be used to talk and think about a solution to a problem. The ultimate measure of success is this: our ability to tame the binary beast. If "change" plays a role (it sure does) it's probably not the main or only metric. Well ... when you're not selling "ability to change" consulting and books to software companies, this goes without saying.

[1] http://en.wikipedia.org/wiki/Separation_of_concerns

Great post. I think this is the fundamental thesis proposed by Abelson and Sussman in SICP. You build a system in layers, each relying on the one below.

The reason you do this is to manage cognitive load in humans, not "coupling" in computers.

It is.

First, we want to establish the idea that a computer language is not just a way of getting a computer to perform operations but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute. Second, we believe that the essential material to be addressed by a subject at this level is not the syntax of particular programming-language constructs, nor clever algorithms for computing particular functions efficiently, nor even the mathematical analysis of algorithms and the foundations of computing, but rather the techniques used to control the intellectual complexity of large software systems. [1]

[1] http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-7.html

Superb. You obviously get it.

I have to wonder if part of the reason there's so much bickering about this concept is that a lot of apps don't actually have a complex problem domain; they just drown slowly in incidental complexity and feature creep.

IMO it stems from the fact that managers up the ladder don't understand, or have never been told, that software that produces the expected results isn't necessarily finished work. There is a reason most of us have this gut feeling, this striving for elegance. We can only be at rest when we feel we master the creature we have built, and can live up to our clients' expectations of maintaining it and making it evolve.

Managers see it in a different light. TDD was made for them, for they are the ones who buy it. In the end, cut the costs as much as possible. The system (still) gives the expected results? Alright, job done. There is no consideration for the hell of a ball of mud that often drives the whole thing behind the curtain. Yet in the end they are the ones who pay for hiring the wrong people or following the wrong cult. This is how you go over budget or fail a project.

TDD? Separation of concerns? Why not, but the same goal of working and flexible code can be met by developing and encouraging good craft in a team, not by luring people into adhering to practices that constrain them to do a bad job unbeknownst to them. Testing and structure stem from good craft, not the other way around.

I like many of the author's points. Pragmatism, thinking instead of blindly following principles, pushing back against size as a metric for measuring responsibility. I think Robert Martin's work absolutely deserves examination and critique. However, I don't share the author's definitions of simple and complex.

Stating that "binding business rules to persistence is asking for trouble" is flatly wrong. Au contraire, it's the simplest thing to do, and in most cases any other solution is just adding complexity without justification.

I don't feel that increasing the class count necessarily increases complexity, nor do I feel that putting several things into one class reduces it. A dozen components with simple interactions is a simpler system than a single component with which clients have a complex relationship. My views align more closely with those expressed [1] by Rich Hickey in Simple Made Easy.

Classes as namespaces for pure functions can be structured in any way; they don't have any tangible effect on complexity. "Coupling" is irrelevant if the classes are all just namespaces for pure functions. I also find that most data can be plain old data objects with no hidden state and no attached behavior. If most of your code base is pure functions and plain data, the amount of complexity will be fairly small. As for the rest, I think that the author's example of maximizing cohesion and the SRP are functionally identical: they both recommend splitting up classes based on responsibility, spatial or temporal coupling, or whatever other metric you want to use. Personally I prefer reducing the mingling of state, but I think there are many roads to the same place. Gary Bernhardt's talk Boundaries [2] covers this pretty well.

[1]: http://www.infoq.com/presentations/Simple-Made-Easy

[2]: https://www.destroyallsoftware.com/talks/boundaries
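To make the "pure functions plus plain data" style concrete, here's a minimal Python sketch (the domain and names are invented for illustration): the data carries no behavior, and the functions carry no hidden state.

```python
from dataclasses import dataclass

# Plain data: an immutable record with no attached behavior.
@dataclass(frozen=True)
class LineItem:
    name: str
    unit_price: float
    quantity: int

# Pure functions: output depends only on the inputs.
def line_total(item: LineItem) -> float:
    return item.unit_price * item.quantity

def order_total(items: list) -> float:
    return sum(line_total(i) for i in items)

items = [LineItem("widget", 2.5, 4), LineItem("gadget", 10.0, 1)]
print(order_total(items))  # 20.0
```

Whether `line_total` lives in a module, a class-as-namespace, or anywhere else changes nothing about the program's complexity, which is the point being made above.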

Unfortunately here, Rails encourages putting each class into a separate file, so you have 10 classes spread over 10 files, which does increase complexity.

I dislike having a class/module per file.

In Django (the closest thing Python has to Rails) the convention is to put all your models in one models.py file. I also prefer it this way.

Having worked with both, there's a trade-off. Given that in Django you're (mostly) explicitly importing classes and modules rather than autoloading, it's handy to have them all in one place. OTOH, when your project grows, you end up with enormous model files (especially if you follow the fat models/thin views pattern). You then have to split them into different apps, so fragmentation slips in eventually anyway. (In a Rails project, unless you're bolting on engines and such, all your models are at least in one folder.)

Where I definitely do prefer Django in this regard is that models declare their data fields, rather than them being in a completely different part of the source as in AR (not Mongoid, I now realise). Do I remember the exact spelling I gave to every column when I migrated them months ago? No. It's good to be able to see it all in one place rather than having an extra tab to cycle through. I don't see any practical benefit from decoupling here.
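For contrast, a minimal Django-style model (illustrative only; the field names are made up, and this assumes a Django project context, so it isn't runnable standalone):

```python
from django.db import models

# The columns live right on the model, visible at a glance:
class Article(models.Model):
    title = models.CharField(max_length=200)
    published_at = models.DateTimeField(null=True)
    view_count = models.IntegerField(default=0)
```

Migrations are then derived from this declaration (via `manage.py makemigrations`), rather than the model's columns being inferred from the migration history as in ActiveRecord.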

Especially since the Rails way is not "decoupling" in any real sense. Splitting tightly coupled code into multiple files != decoupling.

I also like that in Django, you declare the fields on the models first and then create the db migrations from them, rather than writing a db migration first to determine what fields the models have.

Indeed, decoupling is probably the wrong word here: I haven't seen an ORM implementation that was not tightly coupled to the database layer, which in the end is surely the point of an ORM - to represent stuff from the database in application code. (I know some people consider this a bad abstraction, but whatever.)

South-style (now built into Django 1.7) migrations are definitely the better of the two ways to manage that coupling. Rails's charms lie elsewhere.

Right, and the debate raging in the Rails community now is whether your business logic should be in your models at all, or whether it should be extracted into plain old ruby objects, separating your domain model from your data model. Reason being, the OOP purists see it as a violation of the Single Responsibility Principle--an object should only have one reason to change, and the models are tightly coupled to the database schema so they have to change if the schema changes, plus you need to start up a database just to test their business logic, if you put business logic in them.

Meanwhile a lot of the practically minded developers like DHH just accept that their objects will be tightly coupled to the database and just deal with it, claiming that anything else would be adding unnecessary layers of indirection.

I am pretty new to Django, but I get the impression that it's not so hard to just not put your business logic in models.py, and put it in separate classes of plain old python objects instead. Maybe that's why I haven't heard about this debate playing out in the Django community the way it is in the RoR community...
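For what it's worth, that separation might look something like this in Python (hypothetical names; no Django required, which is exactly why it tests without a database):

```python
# A plain object holding a business rule; it takes values, not a
# model instance, so it never touches persistence.
class DiscountPolicy:
    def __init__(self, loyalty_years: int):
        self.loyalty_years = loyalty_years

    def discount(self, subtotal: float) -> float:
        # made-up rule: 25% off for customers of 2+ years
        rate = 0.25 if self.loyalty_years >= 2 else 0.0
        return subtotal * rate

# A view or model method would only adapt stored data into the policy:
policy = DiscountPolicy(loyalty_years=3)
print(policy.discount(100.0))  # 25.0
```

The model stays a thin data mapper, and the rule's unit tests need no database at all.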

Interesting. In CommonJS modules, a file can only export one thing. You could namespace multiple things into one exported object, though I find that granular dependencies can lead to insights about how reusable your modules really are.

Why do you say it increases complexity?

If I'm in extreme mode I take the view that each file should be a single screen. That means a tangible reduction in the complexity of working on them (no more scrolling - each class is just in its own tab).

This can be solved with standard IDEs. Putting two modules or classes into a single file pretty much guarantees a level of coupling. This does not reduce complexity.

By that definition, I could just as easily argue that requiring different files for every class reduces cohesion. The idea that class definitions and file definitions are in any way related is a leaky abstraction.

I've never been a fan of the class-file coupling. It pulls me out of the mental model I'm trying to build in my head and forces me to think about file organization which is almost always inconsistent with the language semantics I'm dealing with.

I've used IDEs that make this more or less painful, but none that actually solved it. If anyone has any suggestions for one that does, I'd be interested to try it out. I don't really care what language; I can pick up enough to see what it feels like.

I also want to say that Rich Hickey talked about a file as a unit of code not being very good, but I don't recall where, or if he really said it. I want to say it was in a Datomic podcast right around when details about it were coming out.

I think it's this podcast, where Rich Hickey explains codeq:


That is standard practice in many languages.

I too identify strongly with Rich Hickey's view on this. That's not to say Uncle Bob is wrong, but I don't think he is as clear a communicator. I see Uncle Bob as having a lot of wisdom that he is able to apply based on his experience but which becomes very hand-wavy when he tries to explain it.

UB happens to be flatly wrong. UB says that doc comments re-stating what a simple function does are excessive and bad. This is totally wrong when one looks at the generated documentation, but UB doesn't seem to use generated documentation much. He seems to be one of those people who prefer digging through the code, even when presented with sensible API documentation.

I understand he's a polarizing figure and is overly prescriptive of things that are a matter of style, but his stance on documentation doesn't seem germane here.

I'll add this to your links: https://www.youtube.com/watch?v=cidchWg74Y4

He talks about his definition of "simple" (by digging into what the original English definition was) and what that means for code.

This article speaks about one of the many, many reasons I don't like Robert Martin's approach to programming. If some principle results in a handful of single-method classes that don't really do anything on their own, the principle is not a good basis for design.

I find the author's alternative much more useful in practice.

> If some principle results in a handful of single-method classes that don't really do anything on their own, the principle is not a good basis for design.

Absolutely agree. Proponents of approaches like this tend to only worry about intra-object complexity, and ignore the fact that a vast, complicated object graph is also hard to reason about.

only worry about intra-object complexity

That reminds me of this, not sure if it's satirical or not: http://www.antiifcampaign.com/

Basically it's advocating removing if/switch statements and replacing them with polymorphic method calls. I understand that polymorphism has its value, but I think it's only valuable when, for lack of a better phrase, the thing being polymorphosed is "big and varied" enough to justify the extra level of indirection. The fact that it's hard to explain when that's worth it is itself a good reason not to enshrine it as an arbitrary principle.

I don't understand that page in the slightest, but a lot of if-cascades can be factored more cleanly using polymorphism.


(Disclaimer. I wrote the top of that wiki page in a former life.)

However, if you already have a class type for each key

I think that's precisely what I was trying to say - polymorphism makes sense when you already have a bunch of classes to do it with, and classes which already contain lots of other fields and methods; it doesn't make sense to create a bunch of classes just to use polymorphism.

Often, though, a series of complicated if statements is hinting at a type system for your objects that hasn't yet materialized in your code. I find it's a good idea to always look at cascading if statements and switch statements and ask, "Would this be cleaner if I reified these concepts as types?"


This is the single most important trick for factoring out shitty code. I cannot believe how many times reification collapsed complexity in our code base, or how not using it was the source of bugs.

If you have cascading ifs, there is a good chance there is a huge set of ifs for every place this type system is missing. Meaning, if you wanted to add another "case" to a feature, you are modifying cascading ifs in 5-10 places, not even just one.

Wrapping up all of this code into an implementation of an interface, that is "hooked in" at each contact point allows you to add a cohesive new use case by generating a new implementation of an interface, instead of "forgetting 2 places in the code" and causing massive bugs or issues because of it.

It of course, also makes it easier to test!
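A hedged sketch of what that reification looks like in Python (the "shipping method" domain and all names here are invented):

```python
# Before: the concept exists only as strings, and this cascade gets
# repeated in every feature that cares about shipping.
def cost_before(method: str, weight: float) -> float:
    if method == "ground":
        return 1.0 * weight
    elif method == "air":
        return 4.0 * weight
    raise ValueError(method)

# After: the concept is reified as one class per case. Adding a
# "drone" case means writing one new class, not editing five
# scattered cascades.
class Ground:
    def cost(self, weight: float) -> float:
        return 1.0 * weight
    def eta_days(self) -> int:
        return 5

class Air:
    def cost(self, weight: float) -> float:
        return 4.0 * weight
    def eta_days(self) -> int:
        return 1

for method in (Ground(), Air()):
    print(method.cost(2.0), method.eta_days())
```

Each "contact point" in the code (cost, ETA, labeling, ...) becomes a method on the type instead of another copy of the cascade.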

So to reify a type involves making an abstraction, which is odd because reify seems the inverse of making an abstraction...

The abstraction already existed in your mental model. You're reifying it by making it something the code can refer to/reflect on/pass around.

The only difficulty is naming the abstraction :)

Great point. Squeezing unrelated objects into class hierarchies to avoid a couple of trivial "ifs" is the ultimate destroyer of readability.

If it is sensible to call the same function with them as the same parameter, the objects are not unrelated. There is a very real and relevant-to-the-application sense in which they share a common type. (Now, if they aren't sharing implementation -- which they clearly aren't in at least one relevant way if you are avoiding an "if" statement by putting them in a class hierarchy -- then it probably makes more sense for them to implement a common interface rather than being subclasses of the same class, in a language which distinguishes those rather than using multiple inheritance class hierarchies plus method overloading to serve the role of interfaces.)

I can't help you determine if that page is satire or not, but now you've got this song running through my head: https://www.youtube.com/watch?v=-Lx8c3-djc8

Agreed. I find the best advice comes from people who have built significant systems that are both complex and innovative for their time. Rob Pike and Brian Kernighan's The Practice of Programming is one of my favorites for this reason.

They tend to recommend solutions that are highly dependent on the problem to be solved.

Trying it the other way round, i.e. fitting the problem to an idealized solution, rarely works. This is what I see a lot from people who place an emphasis on being "object-oriented", as opposed to just solving the problem with minimum redundancy and without twisting their code to do it.

As an example, creating a function and calling it twice from two similar classes is much easier to read than inventing an intermediate class that is an ancestor of the two (something I see very often from the hardcore OO folks).
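The contrast might look like this in a Python sketch (all names hypothetical): a plain helper function shared by two similar classes, with no invented ancestor in sight.

```python
# A plain helper function both classes call; no base class needed.
def normalize_name(raw: str) -> str:
    return " ".join(raw.strip().split()).title()

class Customer:
    def __init__(self, name: str):
        self.name = normalize_name(name)

class Supplier:
    def __init__(self, name: str):
        self.name = normalize_name(name)

print(Customer("  alice   smith ").name)  # Alice Smith
```

The alternative -- a `NamedEntity` ancestor existing only to host `normalize_name` -- adds a hierarchy the reader must trace for no behavioral gain.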

"As an example, creating a function and calling it twice from two similar classes is much easier to read than inventing an intermediate class that is an ancestor of the two (something I see very often from the hardcore OO folks)."

You really see this a lot? As far as I can remember using inheritance over composition for de-duplicating code has been considered bad practice in OO circles for at least 15 years.

I definitely still see it. "Best practices", once enshrined, change slowly.

Wow, that's terrible, but I'd prefer if you referred to those people as "out of date OO people" as opposed to "hardcore OO people". Even though I don't consider myself a "hardcore OO" person now, when I did this would bother me.

I concur. Many programmers, when attempting to reuse code, still reach blindly for inheritance in 2014.

> If some principle results in a handful of single-method classes that don't really do anything on their own, the principle is not a good basis for design.

I call those "half responsibility classes": classes that don't actually have any responsibility other than carrying the little extra information the parent needs to perform the actual job.

That's bad, because classes are a way to encapsulate data and logic; and those... _things_ don't have any logic.

That's terrible OOP right there. I don't think the SRP encourages such thinking.

> I don't think the SRP encourages such thinking.

I think the SRP is a subtle and often poorly explained thing that is very easy to misunderstand in a way which encourages such thinking, even though understood properly, it does not. The basic problem is getting the right "level" for the responsibility in context, and that's not an easy thing to explain how to do.

It seems like the best way to design a codebase is to redesign (rewrite) it several times and then go with the most succinct design. For any given problem, it's unlikely you'll design the codebase properly on the first try. Also:

when one writes code, there are only real, present requirements. The future is pretty irrelevant

I'd disagree with this. The future of your codebase matters unless you're writing a throwaway prototype. And when you rewrite a codebase N times, you'll discover that there are ways of structuring it so that future tasks will become far easier. E.g. for Lisp you'd refactor common patterns into utility functions, and for C you'd carefully craft the interfaces between modules so that they're unlikely to be used improperly / unlikely to be surprising.

>> The future is pretty irrelevant, so asking to design based on future requirements is uncanny.

The future is irrelevant? That's exactly what design is for -- to protect you from the future. If you didn't care about the future there would be no need to design at all.

A good way to evaluate various candidate designs is to imagine future use cases and ask which design holds up better. A good design will respond really well (i.e., require fewer modifications) to novel use cases. Many people think this means designing starting from future use cases, but you really just need to design with good principles. Future use cases are useful, however, for objectively evaluating designs and resolving arguments, such as: which design is better for the futures we care about?

>> Stating that "binding business rules to persistence is asking for trouble" is flatly wrong. Au contraire, It's the simplest thing to do, and in most cases any other solution is just adding complexity without justification.

Everything is a tradeoff. For many cases (hacking, etc.) it doesn't make sense to avoid the simplest thing. But if you are building a project that's long-lived, I can give you countless examples of how coupling business rules to model objects resulted in pain and copious paper cuts.

For me, I'll always decouple them. Why? Because it costs me nothing extra to do it and I have internalized the value in this principle from experience.

>>> Not all applications are big enterprise-y behemoths that benefit from Perfect 100% Decoupling™

You're right. It depends on how much you care about your future. Do you want your codebase to respond with agility to the future. Or are you okay with increasing your technical debt?

>>> Therefore, classes should be: small enough to lower coupling, but large enough to maximize cohesion.

Not sure I understand. The author seems to suggest these two are in contention with each other.

>>> Coding is hard and principles should not take the place of thinking.

Absolutely. There are many cases where the rules are to be broken. Systems with too much flexibility are just as bad as monolithic ones. It's an art what we do.

I think that putting cohesion and coupling on a one-dimensional axis is kind of disingenuous. There are an infinite number of design changes that could be made; some of them increase coupling, some increase cohesion, and there is every combination of the two variables in between. The best designs will find cohesion and coupling at local maxima.

I'm not saying you are wrong, but most of the literature indicates that high cohesion is correlated with loose coupling. The entire industry treats them as an axis so I don't think it is disingenuous for the author to do so.

I was a bit unclear: my general point is just that you can decouple code in ways that don't necessarily increase cohesion, and you can decrease cohesion in ways that don't necessarily increase coupling. If you are designing a system with rational interfaces and layers of abstraction, then most of the time you will find these two variables have a very direct relationship.

The most important rule about the future is that You Aren't Gonna Need It. If the future is known then it is called "requirements". Design is for protecting you from the unknown future: allowing code to be changed in arbitrary, unforeseen ways.

As for coupling and cohesion: if modules are too small, they usually lack cohesion; if modules are too big, they usually have high internal coupling. The sweet spot is in the middle. And you can create low-cohesion, high-coupling modules if you want :-)

I rarely ever use classes anymore. My life is complicated enough; I like my code to be simpler. I used to be proud of my complex class hierarchies and clever designs... now it's simple objects and mostly functions. I don't and won't argue with anyone who prefers class hierarchies, but it really annoys me when I need to spend 5 minutes wrapping my mind around all those class relationships where simple imperative (or functional) code would have done the job perfectly.

Most of the big design philosophies are about "building for the future", but the truth is that we're almost always wrong about it. And thus, most of the time, the code needs to be rewritten. And simpler, straightforward code is much easier to rewrite or refactor.

I also went through the stages of using classes with inheritance, and then using plain functions on untyped Plain-old-Data objects.

There's a third stage, though: using classes without inheritance, but with interface/protocol declarations. This is made much easier in the latest generation of languages (Go, Rust, Elixir), where interface/protocol conformance doesn't require a class hierarchy, allowing you to use interfaces/protocols without classes.
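Python isn't in that list, but its `typing.Protocol` gives a similar flavor and makes for a compact illustration: classes that never inherit from the interface still satisfy it structurally.

```python
from typing import Protocol

# An interface declared as a structural protocol:
class Drawable(Protocol):
    def draw(self) -> str: ...

# Neither class declares or inherits Drawable; having a matching
# draw() method is enough, so no class hierarchy is needed.
class Circle:
    def draw(self) -> str:
        return "circle"

class Square:
    def draw(self) -> str:
        return "square"

def render(shape: Drawable) -> str:
    return shape.draw()

print(render(Circle()), render(Square()))  # circle square
```

The conformance check is done by the type checker rather than by an inheritance relationship, which is the "interfaces without classes" point above.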

This was just as easy in the previous generation :)

OCaml has had this feature since 1996.

Couldn't have said it better. Classes are useful, but only occasionally so. About the only time I find myself using classes is when there's several functions that have a very strong relationship to a particular portion of state (like when using an ORM). The rest of the time it's referentially transparent functions.

My beef with plain functions is that you can't mock them properly. But apart from this, yes. Referential transparency as much as possible.

This is highly dependent on your language. A language where functions can be passed as values allows functions to be injected as parameters (for polymorphic behavior). In Clojure, you can even replace a function definition just for unit tests without dependency injection at all.

For example, with-redefs. This allows you to replace a given function anywhere in the whole codebase with any lambda. The rebinding only lasts inside the scope of the s-expression. When I realized that, I realized everything I knew about DI and interfaces was just a hack of incredible complexity to get around a poor language that wasn't built to allow unit testing. I now think it's impossible to have both unit testing and a good design in Java/C#. Neither was made to be easy to unit test; it wasn't even a concept when Java was made.

In C#'s defense, there is a way to mock a static function by replacing it using Microsoft's Fakes library. To MS's shame, that library requires the 13k edition of Visual Studio. When I realized that, I realized I wanted to leave C# for good. I'm no longer interested in a language that resists testing so much that I have to choose between an easy-to-understand design and unit tests.
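A rough Python analogue of with-redefs, for the curious, is `unittest.mock.patch`, which also rebinds a name only within a scope. (The `service` namespace and `fetch_rate` function below are invented; a SimpleNamespace stands in for a module so the sketch is self-contained.)

```python
from unittest import mock
import types

# Hypothetical collaborator that would normally hit the network:
def fetch_rate_real() -> float:
    raise RuntimeError("would hit the network")

# SimpleNamespace stands in for a module here; in real code
# you'd patch the module itself.
service = types.SimpleNamespace(fetch_rate=fetch_rate_real)

def price_with_tax(price: float) -> float:
    return price * (1 + service.fetch_rate())

# The rebinding lasts only inside the with-block, like with-redefs:
with mock.patch.object(service, "fetch_rate", lambda: 0.2):
    print(price_with_tax(100.0))  # computed without any network call
```

Outside the block, `fetch_rate` is automatically restored to the original (raising) function.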

Hm, that may be true for Clojure (or Python, as long as you don't redefine anything important), but I wouldn't know how to do that in Haskell, for instance.

It's true that you can't replace arbitrary pieces with no source changes in Haskell. On the other hand, there are a few approaches to making this easier: implicit parameters seem made for this kind of thing; "foo = fooBy whatever" as a pattern makes it easy to swap in an alternative; and of course, keeping functions pure, it becomes less necessary to swap out internal bits because they don't matter.

What situation would require mocking a function?

Maybe I'm not interested in testing what the function does? But yes, for pure functions, it's kind of a weak argument. More generally, you can't really swap a different implementation in place.

I answered this in my other answer to you: https://news.ycombinator.com/item?id=7709395

It's a common misconception that interfaces are the only way to do polymorphism, because that's usually the only way in Java.

When you require unit tests to finish quickly for functions doing IO or other time-consuming processes. Whether that's a good idea or not I'll leave to the threads about DHH's TDD article.

I would argue that functions have no business performing global side-effects in the first place --- these things are better expressed as state encapsulators (such as objects) anyway.

I rarely ever use functions anymore. My life is complicated enough; I like my code to be simpler. I used to be proud of my numerous functions and clever designs... now it's simple objects and mostly classes. I don't and won't argue with anyone who prefers plain functions, but it really annoys me when I need to spend 5 minutes wrapping my mind around all those functional relationships where simple OOP (or imperative) code would have done the job perfectly.

It sounds like you're trying too hard to reason about functions as if they were, well, objects. You don't need to wrap your mind around "functional relationships" because functions don't have relationships. They just take arguments and return values (which can be other functions). Functional programming is not more complicated, if anything it's less complicated. It's just different, so it requires un-learning a lot of the things you learned about procedural and object-oriented programming. And if you have a good compiler/typechecker it will do the job with much less potential for bugs.

I have a hard time thinking of a situation where proper objects-and-functions code becomes easier to understand by removing the functions. Do you have any examples?

So Uncle Bob talks about SOLID. There is one S there and then we have O, L, I and D. Applying SOLID is a balance of all of these things. We are craftsmen applying knowledge and skill to code. Trying to figure out SRP in isolation is not practical. When you attempt to apply SRP in balance with the other principles, you get more of the give and take that matches reality. There are trade offs and deciding what those trade offs are is your responsibility as a developer.

What the blog post is about is more of an "I do things this way". Now I'm on a team and my team mate wants to do things a different way. Instead of realizing I'm on a team and that it's a matter of balancing abilities to make a cohesive software product, I'm going to bicker about why my team mate is wrong. And I'm going to write a big blog post about it and look at SRP in isolation instead of applying the same cohesive viewpoint (that I claim my team mate is missing) to understanding SRP.

I'd love to see a pragmatic take on the open/closed principle. I've seen a colleague extend a class, override the method in question by copying in the code from the superclass, then modifying it in the subclass. Afterward he replaced all instantiations of the old class with the new class. "Open for extension / closed for modification" seems like it would lead to a nightmare in a few years.

So that is a really dogmatic approach to open/closed that is from the original formulation like 25 years ago. I've never encountered someone who would follow that practice when every instance needed to be changed (I assume Meyer would have construed that as a bug and allowed modification in that case). Even the less dogmatic Meyer definition of open/closed most people have left behind as it relies on implementation inheritance which is decidedly out of favor (and I believe rightly so).

In the more modern reading of the open/closed principle if you have multiple different variants on the behavior of a thing, you compose those variants in via an abstract interface. Then as you need to add even more variants you needn't change the original code any more only introduce more concrete implementations of the abstract interface that you compose at instantiation time as necessary. This approach is especially useful as your variant behavior grows. That is, a single boolean switch is probably easier to reason about than 2 implementations on an arbitrary interface. But once you get to 3 it becomes less obvious which is better. Any more than that and I usually reach for an interface without too much thinking about it.
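A minimal Python sketch of that "modern" reading (all names here are hypothetical): variants live behind an abstract interface and are composed in at construction time, so adding a variant means adding a class, not editing existing code.

```python
from abc import ABC, abstractmethod

# The abstract interface the variants implement:
class PricingRule(ABC):
    @abstractmethod
    def apply(self, amount: float) -> float: ...

class NoDiscount(PricingRule):
    def apply(self, amount: float) -> float:
        return amount

class PercentOff(PricingRule):
    def __init__(self, fraction: float):
        self.fraction = fraction
    def apply(self, amount: float) -> float:
        return amount * (1 - self.fraction)

class Checkout:
    def __init__(self, rule: PricingRule):
        # The variant is injected; Checkout itself never changes
        # when new rules appear (closed for modification).
        self.rule = rule
    def total(self, amount: float) -> float:
        return self.rule.apply(amount)

print(Checkout(PercentOff(0.25)).total(200.0))  # 150.0
```

With two variants, a boolean flag inside `Checkout` might be simpler; as the comment above says, past three or so, the interface usually wins.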

Apply the Template Method pattern. The parent class doesn't seem to be extensible.

"Instead of thinking out of my box for once, I'll biker about OP that's probably not a craftsman (tm) and assume pretty much anything about him and his bad intentions provided it helps me rationalize my fear in a belief-preserving way".

So no thanks, I don't want Uncle Bobby to save me, I don't need an encyclopedia either. Have a good day my friend. (Go easy on the blue pill ...)

I may have been a bit harsh, sorry for that.

I too was too harsh and I too apologize. It can be a struggle to try to convey one's thoughts about a subject while still being reasonable.

Typically I write something 3 times. The first is a proof of concept, I don't care much about how it works as long as it works. The second time I put a lot more care into the 'how' but I usually still get a few things wrong enough that they are out of place and feel awkward to maintain or extend. The third time I have a really good handle on the problem, where the tricky parts are and how to tackle them. That third version will then live for many years, sometimes decades. Some of the stuff I wrote like that in the 80's is still alive and well today (or some descendant thereof).

I try very hard not to get attached to code too much, refactor aggressively and will throw it out when I feel it needs redoing. For larger codebases I tend to rip out a chunk between a pair of interfaces and tackle that section independently. I'll change either the code or the interfaces but never both in the same sitting.

In an OOP design where objects already encapsulate both state and behavior, describing the ideal state of affairs as "single responsibility" seems kind of optimistic.

I think "state and behavior" is a bad way of thinking about it; in OOP design objects provide behavior. Its true that they encapsulate state, but the state (ideally) is simply that state necessary to provide the intended behavior. The behavior covers the area of responsibility, the state is part of the implementation of the behavior.

That's still a lot of state. How do you know an object is in the right state to perform its behavior? Well, acting as some other object, you have to either read the first object's state or model it internally. And now we're coupled on state and behavior.

So what if the first object ensures that it's always in the right state to perform its behavior? Well, now it's observationally equivalent to a state-free actor. Why did you involve state in the first place?
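One way to picture that equivalence: an object that always keeps itself in a valid state behaves no differently, from the outside, than explicit state threaded through a pure function. A toy sketch (the counter example is mine, not from the thread):

```python
# Stateful version: callers may wonder whether the object is "ready".
class Counter:
    def __init__(self):
        self.n = 0  # invariant: n is always a valid count

    def increment(self) -> int:
        self.n += 1
        return self.n

# State-free version: the same observable behavior, with the state
# threaded explicitly through a pure function.
def increment(n: int) -> int:
    return n + 1
```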

Because languages make it hard to not do so?

SRP is very simple. If two different people want to change a class for two different reasons, then pull those reasons into two different classes.

That is the SRP.

Example: A class that analyzes a data stream and prints a report. The data analysis will interest one group of people. The format of the report will interest another, different, group. The first group will ask for changes to the algorithms. The second will ask for changes to the format. The principle says to separate those two concerns.
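A toy sketch of that separation (the class and method names are mine, not from the principle's formulation):

```python
class StreamAnalyzer:
    """Changed at the request of the group that cares about the algorithms."""
    def analyze(self, data):
        return {"count": len(data), "total": sum(data)}

class ReportFormatter:
    """Changed at the request of the group that cares about the format."""
    def format(self, results):
        return "\n".join(f"{key}: {value}" for key, value in sorted(results.items()))

# Wiring the two together; a format change never touches StreamAnalyzer.
report = ReportFormatter().format(StreamAnalyzer().analyze([3, 1, 2]))
```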

This goes all the way back to David Parnas and the separation of concerns. His papers that describe it are freely available on the web. I suggest that they be studied, because they are full of wisdom and insight.

Hi Bob,

>SRP is very simple...different people want to change a class for ... different reasons...

Back in October 2009, in https://sites.google.com/site/unclebobconsultingllc/getting-... I asked you a question on the SRP and you replied as follows:

"SRP says to keep together things that change for the same reason, and separate things that change for different reasons. Divergent change occurs when you group together things that change for different reasons. Shotgun surgery happens when you keep apart those things that change for the same reason. So, SRP is about both Divergent Change and Shotgun Surgery. Failure to follow SRP leads to both symptoms."

Has the avoidance of Shotgun Surgery taken a back seat?


Not at all. SRP is about enhancing the cohesion of things that change for the same reasons, and decreasing the coupling between things that change for different reasons.

This doesn't feel like a genuine analysis of SRP.

The OP asks "but why should a class have one single Reason To Change™?" Answer is in the text - "If a class has more than one responsibility, then the responsibilities become coupled. Changes to one responsibility may impair or inhibit the class’ ability to meet the others. This kind of coupling leads to fragile designs that break in unexpected ways when changed."

The case the OP makes for mixing business logic and persistence is really ironic too. In thinking he's arguing *against* SRP, the poor example he gives actually argues *for* it. Yes, validation and persistence usually conceptually go together as one responsibility: storing your data. Calculating payroll (see PDF) is a whole different responsibility, better suited to a separate abstraction. So the strawman argument is not really in the PDF.

The book mentions the "Unbalanced" counterpoint already. See: "If, on the other hand, the application is not changing in ways that cause the two responsibilities to change at different times, then there is no need to separate them. Indeed, separating them would smell of Needless Complexity. There is a corollary here. An axis of change is only an axis of change if the changes actually occur. It is not wise to apply the SRP, or any other principle for that matter, if there is no symptom."

Now for the amazing TLDR of the entire piece, it goes something like this: SRP is not a clear and fundamentally objective design principle, so let me offer another principle huff huff but don't worry it's not clear cut (so don't follow it?!)!

This doesn't lead anywhere. Talking to the co-worker more might have been better than adopting a defensive attitude.

The big lesson here is, learn concepts like Cohesion vs Coupling, SRP, design patterns, to give you a common vocabulary, but keep your mind open and try to write code that is well organized and is free of code smells. If anybody has suggestions on how things can be improved, measure their suggestion on its merits. Software development is an exercise in balancing all kinds of different types of concerns - pun intended :). Sit down and list pros and cons between alternative approaches, and let the design that best addresses the problem/constraints at hand win.

And even better, next time, if you want to save yourself some trouble, ask others for their opinions and share your concerns with the code before you even write a line of code. Then nobody will get 'religious' in a code review and it will allow a team to focus on results.

To me it makes sense to think about how much state is being mutated in a given class. If you have a class with 200 different properties that are changing somewhat orthogonally to each other, you probably need to think things through and separate concerns a bit. Still kind of an 'arbitrary principle' but another way of looking at things. You want to encapsulate the moving parts but you don't want to throw every moving part in a giant bag.

Designating a class as having a single responsibility is an easy way to ensure that there is a minimal amount of state per class, but if the abstractions you are creating with this principle aren't logical, then you are still left with a lot of moving parts someone has to think about when making modifications.

All these principles mostly only work if the system is also being designed rationally. Applying principles arbitrarily to a messy codebase will not necessarily get you a good design (and probably not) but thinking about these principles when making design decisions can sometimes be useful.

More cynically: Trying to apply a set of principles (any set) instead of thinking will likely lead to trouble.

Haha yeah, I think you have a point there. Perhaps programming is so challenging it awes otherwise intelligent people and makes them reach out for simple rules to simplify their lives.

I've found that if you approach OOP from a more pragmatic point of view, metrics like the sizes of classes mean very little: the point of classes is to encapsulate code+data so they can be reused to avoid duplication, so it feels obvious to me that if you see (non-trivial) functionality in a class that is likely to be reused in the future, then extract it into another one, but if you don't then there's no point in doing so, because it will only increase the overall complexity. Same for OOP in general - it's a tool, use it when it makes sense and simplifies the design. Sometimes you don't need a class, and sometimes a function doesn't belong in one because it does things across several, so all the "which class should this function go in" questions have a straightforward answer to me: if it's not immediately obvious which class it goes in, it probably doesn't belong in one. I mainly use Asm/C/C++ so I have the luxury of doing this, but I can see how some of the "more-constraining" languages make this more difficult.

All the examples I see are one-way towards simply creating a million single method classes.

That's a phenomenon that I've definitely seen a lot; often with the accompanying obfuscation that the method bodies have only one or two statements in them, that just calls into some other methods. It may look like it's made the code simpler locally, but all it's done is spread the complexity out over a wider area and increased it. This isn't "well-designed" or "straightforward", it's almost intentional obfuscation. I've seen this effect most with programmers who were taught OO design principles very very early, possibly before they even had a good grasp of concepts like procedures, loops, and conditionals. "When all you have are classes, everything turns into an object."

I agree that the SRP is certainly a subjective rather than objective principle, and possibly general guidance that can and should be broken in specific circumstance. This article points that out but rather than try to apply prescriptive guidance around making it more objective in specific scenarios, the author seems to believe that its subjective nature is too flawed to fix.

What's the issue?

> A good, valid principle must be clear and fundamentally objective. It should be a way of sorting solutions, or to compare solutions.

Okay, I'm listening. What is your alternative?

> It's not a clear-cut principle: it does not tell you how to code. It is purposefully not prescriptive. Coding is hard and principles should not take the place of thinking.

And.... we're right back to subjective and general again. Set up a straw man only to knock it down with an identical straw man.

Of course, reducing coupling and raising cohesion makes the class responsible for less and less... So are we back at the authors interpretation of the SRP? Seems like it to me.

First let me say, I completely agree that the definition of the SRP as it shows up in the Uncle Bob book is a little hard to understand. The wording is off.

That said this entire article is essentially arguing FOR the SRP. The whole point of the SRP (in conjunction with the rest of the SOLID principles) is to decrease coupling and increase cohesion. In his refactor of class C we have an example in the first case of a class that violates the SRP and an example in the second of 2 classes that follow it. Excellent, the author and Uncle Bob are both happy.

But what really bothers me about this article is the following: "Furthermore, there is no reason to separate "business" logic from "persistence" logic, in the general case. The large majority of Employee classes that people need to write are likely to only contain fields and maybe some validation -- in addition to persistence."

A) Please don't tell me what the large majority of things people need to write are. You cannot possibly actually prove that assertion. In my experience having a 1 to 1 relationship with a class and a database table is a sign of either a very simple problem space, or a very poor design.

B) If you do happen to work in a problem space where you find yourself writing something that is a collection of fields, some validation and some persistence, that is not a violation of the SRP; it is an example of you needing an EmployeeRecord and not an Employee. The difference is simple: an Employee has complicated business logic in it and a record does not.

This seems to be the central debate currently around Uncle Bob's tactics. Lots of people seem to be writing PVC (Persistence-View-Controller) applications and thinking they are writing MVC (Model-View-Controller) applications. It then seems to be overkill to split out the persistence layer from the "model" layer because there isn't much difference. If there isn't much difference you didn't violate the SRP! Your Single Responsibility is persisting some data! Your design is fine. Move on with your life. But on the other hand, if you find yourself writing complicated business rules that are hard to test because of the wiring required for your persistence, maybe you've violated SRP and should split them up a bit.
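A rough sketch of the EmployeeRecord vs Employee distinction described above (the names, the fields, and the 10% bonus rule are all invented for illustration):

```python
class EmployeeRecord:
    """Fields, validation, persistence mapping: one responsibility,
    persisting the data."""
    def __init__(self, name: str, salary: float):
        if salary < 0:
            raise ValueError("salary must be non-negative")
        self.name = name
        self.salary = salary

    def to_row(self) -> dict:
        # What a persistence layer would write out.
        return {"name": self.name, "salary": self.salary}

class PayrollPolicy:
    """Complicated business rules, kept apart from the record so they
    can be tested without any persistence wiring."""
    BONUS_RATE = 0.10  # invented rule, for illustration only

    def annual_bonus(self, record: EmployeeRecord) -> float:
        return record.salary * self.BONUS_RATE
```

If `PayrollPolicy` never materializes because the problem is that simple, the record alone is fine and no principle was violated.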

In the absence of finding the time to really tear it apart, it seems that there are a few false choices in the article. I don't think a class that fulfills SRP necessarily means a tiny single-method class. It could mean a collection of methods that each do one conceptual thing, at one abstract level of understanding, but where the class and collection of methods is still cohesive.

The example of the client needing to instantiate B to pass it to A's constructor seems like a poor example of tight coupling. The only reason to instantiate B is because A needs it. The client doesn't literally need to know about B, it simply needs to know how to construct/build A. This could be done through a factory or service locator, or it could be done by A having B autowired into it, which would free the client from having to instantiate B directly.
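The factory alternative mentioned above might look like this (a sketch; `make_a` and the method bodies are hypothetical):

```python
class B:
    def greet(self) -> str:
        return "hello from B"

class A:
    """A needs a B, but that is A's business, not the client's."""
    def __init__(self, b: B):
        self._b = b

    def run(self) -> str:
        return self._b.greet()

def make_a() -> A:
    """Factory: the one place that knows A is built from a B."""
    return A(B())

# Client code: asks for an A, never mentions B.
a = make_a()
```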

I do, however, agree that SRP is poorly defined in most of the resources you find, so if anyone has links of case studies of applying it properly, they'd be interesting to review.

I'm not an expert but I think he is misinterpreting what is meant by 'change' entirely. My reading of it is as in the class only has one reason to change (state). Correct me if I'm wrong.

The SRP, as I was taught it, means that there must for each class only one kind of change in the requirements of the program that requires a change in the class. "Kinds of change" are quite vaguely defined, though.

I think you are focusing on the wrong phrase in the definition of SRP: reason for change, without the context of responsibility. The way I read that definition is: if a class has more than one responsibility, then making a change affecting any one of those responsibilities will require the class to be recompiled, whereas if those responsibilities were broken up into different classes, a change affecting one responsibility would not affect the other classes.

Hence a bug fix does not constitute a "change" since it does not change the responsibility of the class, neither does a refactor for code clarity or performance improvements. The purpose/responsibility of "what" the class does, does not change when you do any of those things.

"sometimes a class with more reasons to change is the simplest thing" Could you give an example of this? I can't think of one instance.

"Stating that "binding business rules to persistence is asking for trouble" is flatly wrong. Au contraire, It's the simplest thing to do, and in most cases any other solution is just adding complexity without justification" Now this statement is flatly wrong, precisely for the reason given in Uncle Bob's example. Let's do a thought experiment. Consider an Employee class that is a black box to you: someone else wrote the code and didn't include any exception handling. The class does business logic and persistence control. When you tried to use it there was an error. Which part of the class was faulty, the business logic or the persistence control? Now consider that you need the same Employee class in another project, with a different set of business rules but the same persistence control: you could not reuse this Employee class, you would have to create a new one and duplicate your persistence-control code.

Design principles have been developed to make sure that code is maintainable, extensible, clear and comprehensible. Sure, as a single developer working on a simple project, who knows what each class does and has a code base small enough that I don't need to separate the application into layers (e.g. calling my database directly from my view because I just need this one value and don't want to write a couple of classes for that), I can indeed write classes that have more than one responsibility. There's nothing to stop you, but it does violate the SRP, which will make it tougher for someone else to come in and maintain or extend your code.

Fuck SRP, just read the resulting code.

Whichever code is more concise & readable is generally the winner.

(Note: a lack of understanding of algebra / calculus does not make the code 'unreadable', it just means the developer is innumerate)

Unless it fails to meet the requirements or is written in such a way that will require more maintenance...

Well, I'm assuming we're comparing code that works.

Maintenance is generally a red herring; it's best to use statistical methods to determine the likelihood of maintenance.

Usually if you're doing lots of maintenance you have other problems in your code / workflow, such as mistaking your codebase for your database.

"Well, I'm assuming we're comparing code that works."

That is an interesting assumption given that in many problem domains proving that the code works as specified is the hardest problem.

"Maintenance is generally a red herring, it's best to use statistical methods to determine the likelihood of maintenance."

I really dislike anyone who makes general claims about the entirety of software development. I for one spend way more time changing existing code than writing new code. I know that there are problem domains where that is not the case so I don't recommend my methods for those spaces.

What are these statistical methods you are referring to?

"Usually if you're doing lots of maintenance you have other problems in your code / workflow, such as mistaking your codebase for your database."

Or you are working in a problem domain that shifts a lot, or where requirements specification is more expensive than deployment opportunity costs, or in a legacy system.

Proving your hand won't go through a wall is a difficult problem in quantum physics, for most people they just push against the wall. I don't care whether the code is provably correct, I care that it does what the user expects.

Actually 'change requests' are even easier when the requirements change frequently, just provide an estimate in excess of when you think the next change will be, then put your feet up and wait for the requirements to change again.

PS. Changing requirements isn't 'maintenance', it's a change request.

PPS. Having to add code to add fields to a form means you spec'd your solution around your forms instead of specing your solution around solving the problem of changing forms. (aka. you baked your problem domain into your code and now you're fucked) (eg. you mistook your codebase for your database)

I didn't read the whole post, since it starts by stating that bug fixes, performance tweaks and refactoring may be changes, which, whilst true, is deliberately pedantic. If an article starts out fussing about such pedantry for more than a paragraph without owning up to it being a deliberately extreme example for illustrative purposes, I devalue the whole article and don't invest more time reading it.

A change, in the context of SRP is a change of responsibility / functional purpose.

Regarding the granularity to which you should go to with SRP, the idea is that it's not fixed; you change this as feels best for your code base. If you write code without much logic (e.g. a basic CRUD application with minimal validation) it's fine for the classes representing your tables' records to also hold their logic, since at this stage the single responsibility is, say, Employee. Once you start to get something more complex the separation of concerns becomes more important, so you need to break this apart into Employee Record and Employee Validation; perhaps more layers. That may seem contradictory as there are now two classes with two responsibilities where before there was only one class with one and a bit responsibilities - but the key is to be pragmatic. Don't write thousands of lines of code and multiple classes if that introduces complexity with no pay off. However, if you have one class that does too much it'll become hard to maintain; and being aware of the SRP principle will help you work out a sensible way to break that class down into more comfortably manageable chunks.

The Single Responsibility Principle is like Body Mass Index for code: it's easy to measure (humans have an innate sense of what "responsibility" means) but outside of extreme situations it is not precise enough to base serious decisions on: there is a huge grey area where people do not agree on what responsibilities are.

It's the same as EDD: it's fairly subjective, but it weeds out the extreme cases.

The benefits of SRP are usually a side-effect of applying more objective rules to the code.

DRY is the primary driving force. It pulls shared responsibilities out of modules and leaves no doubt that those responsibilities were shared. It detects recurring concepts and gives them a representation, thereby increasing coupling.

The second force is to reduce access to private code: out of 100 lines of code in your module, how many actually need to access that private concept on line 42? If the answer is "30", then what are those other 70 lines doing in this module?

Apply both forces to a code base, and the SRP will appear out of nowhere.

From the linked article: "Fundamentally, the SRP principle is therefore a class sizing principle."

This is incorrect. The SRP is a dependency management principle. It has nothing to do with the size of classes (however that might be measured). The goal of this principle, like the rest of the SOLID principles, is to minimize the impact of changes.

I always interpreted the SRP in terms of how well one can describe the responsibility of the class.

If one has to write "This class does x and y and sometimes z if w", it's got several responsibilities.

If one can write "This class does x", it's OK.

X can then be arbitrarily complex, so code size has nothing to do with it in my world.

Size is, and has always been, at least in my opinion, a symptom of failing the single responsibility principle. It's not the cause. It's only useful as an indicator; you still have to look at the case and figure out if the function/class is doing more than one thing. And even then, a function/class can do more than one thing, to increase cohesion and limit bloat/complexity.

General rule I use: does this block/section of code increase the complexity of the overall purpose of the function or class? If it does, it should probably move; likewise, if it isn't relevant to the overall functionality/behaviour, it should probably be moved.

Everything is a trade off in the tech world. You can't argue you are right or they are wrong, only that you are right in this instance or that.


Even the author says "Without any reference to the SRP it's obvious that this class needs fixing." He just makes an allowance that it's pretty much not worth fixing if your app is small enough and basic enough, i.e. it might be good enough design if the app in question does not do anything besides draw rectangles and calculate their area.

I deleted my comment after realizing I missed that sentence.

"The purpose of class is to minimize complexity" "The purpose of class is to organize code" "A class should have one, and only one, reason to change"


What about: "The purpose of a class is to encapsulate state".

If there are no inner fields in a class, there is no reason for the class. There is no state.

One or more fields in a class means that they make sense when looked at all together, as a "state". If some method changes something, the whole state encapsulated by the class means something else.

It is possible to extrapolate all other guidelines from this basic starting point.

Do you see "having a state" as "being able to change state"? Because classes may be used to represent concepts or objects from the real world. Some of them may not have a changeable state, but they certainly have inner fields.

Likewise with many here, I agree with most of the author's points, especially that one should think for one's self, which sklivvz did very well.

However, the fear of the "many little objects" smell strikes me as backwards. Unix systems are built with and used by many little utilities. As a result, a few simple commands can be strung together to make quick work of interacting with the system. Often, I find that the hardest utilities to master are the ones with the broadest scope. My experiences with large objects are similar.

You've got a point. Let me add that little objects let you swap inheritance for composition more easily.

I think the "don't mix persistence with biz logic" idea came largely from the Rails world, where the ease of doing so led to a great deal of bloated code. Well, it's only slightly harder in Ruby to separate them, and it gives so many ancillary benefits to cutting ActiveRecord::Base out of your biz logic that you probably should. Other languages/platforms, they don't always make this kind of separation easy or idiomatic.

> I think the "don't mix persistence with bizlogic" idea came largely from the Rails world

I'm pretty sure I encountered that particular example of things that shouldn't be coupled in OO design before Rails existed.

I guess I wasn't clear enough. Certainly the idea predated Rails, but I think the heavy insistence you see pretty much everywhere on segregating them probably came from Rails.

"Multi-tier architectures are characterized by the separation of the user interface, business logic and data access logic. Many organizations are implementing multi-tier architectures for enterprise applications to realize the two key benefits."

1996, and I am pretty sure this marketing speak page reflects what had already been being talked about in the industry for quite some time.


I think the more frequent discussions of separating domain logic from persistence in the context of Rails is a simple result of the fact that Rails is an opinionated framework whose basic structure promotes the exact opposite of that good practice, to wit, implementing the domain model not merely tightly coupled to the persistence layer, but indistinguishable from it -- in that Rails models provide both the domain model and the persistence functionality. So you see a lot of talk about basic separation of concerns in the context of Rails, because Rails so strongly leads people away from it.

This has a certain pragmatic value in getting something simple functioning quickly; it's not simply bad design of the framework. OTOH, it has the potential to impose a legacy cost as the complexity of the application increases.

Yes, dynamic languages make it easier to make mistakes, so strict design guidelines and extensive unit testing are always recommended.

In most OO languages like Java, classes are the only large-scale structuring mechanism. An opposite example is OCaml, which provides both classes and a sophisticated module system. So classes in most of these languages (like Java) tend to get tied to more and more features (from the language's perspective), to which implementers are forced to adapt. Hence most of these principles fail to see daylight in practice!

I think of SRP in terms of using classes with dependency injection. If you are injecting an instance of a class, you want it to do only one thing, and when changing the class being injected you want it to change only one piece of behavior. If the class has more than one responsibility, injecting dependencies becomes unmanageable.

Reminds me of this: http://bendyworks.com/geekville/articles/2014/2/single-respo...

Comments are the most interesting part :)

"If client code need to know class B in order to use class A, then A and B are said to be coupled." Still better than the common 'Service Locator' anti-pattern, where you aren't even aware of the dependency.

I generally go by file size now: after 200 lines, we need to find an OOP excuse to split (interestingly, I almost never merge, but there might be some thermodynamic reason for that).

I do it this way simply because I don't like having to scroll too much.

How else would you add N new persistence options to your class that implements persistence & the data model?

I prefer to work on features rather than changing databases for the fun of it.
