Understanding SOLID Principles: Interface Segregation Principle (codeburst.io)
57 points by thdespou on Nov 13, 2017 | 34 comments



I think a lot of what makes SOLID valuable is that it articulates patterns that make sense to people because they're obviously good ideas.

S -> Don't write spaghetti code.

O -> If you want your code to be extendable, explicitly choose the parts that can be extended so you can predictably deal with new functionality.

L -> If you extend code, don't screw up its existing behavior. Extensions shouldn't break stuff.

I -> Make your interfaces tiny.

D -> Dependency inversion to make things testable; because testing is good right?

Well, DI is always a bit controversial, but at least it's easy to say why it's good; it's just a bit of a pain to set up.

The interface segregation has always been the odd one out for me.

One method interfaces? I know the golang best practice folk love that stuff too, but I just find it irritating when I have to work on code that uses it extensively.

Can anyone explain tangibly why it's a good idea?

If you're using it to defend against workmates who might abuse an interface with too many functions on it... well, that's kind of lame imo.

Sure, mega-interfaces are bad... I guess... but so are objects with massive sets of methods on them. Why the special focus on interfaces?


I don't think the push is for 1 method interfaces.

In C#, a good example is that many, many APIs use the IList<T> interface to pass lists around. But that interface contains methods like .Add, .Remove, .RemoveAt, .Insert... So while in most cases you'd be just fine passing an IReadOnlyList<T>, using the IList<T> interface means that as a user of the API, you have to implement these methods even when they make no sense for what you're doing.

And so more often than not, you see things like:

void Add(T element) { throw new Exception("Shouldn't go in there"); }

And actually, I've just checked: the ReadOnlyCollection object itself implements IList (because IList is so widely used), and so has these exceptions all over the place: https://goo.gl/G6L9Ko


Wow, the ReadOnlyCollection really does implement all the methods of the IList interface. That's just ridiculous: 50% of the methods on the class throw an exception!

Why would the class implement these methods if they aren't even public?

Also, aren't statically typed languages like C# meant to reduce run-time errors by catching them at compile time? Wouldn't frequently throwing exceptions from unimplemented interface methods break this idea?


Ideally, yes, you would catch this problem at compile time. However, apparently the situations where you need to provide an IList for something that really shouldn't be changed are common enough that they decided to provide this functionality in ReadOnlyCollection.

Now, if you read carefully, you actually do need to jump through some hoops in order to get an exception.

First of all, the methods aren't public; you need to explicitly convert the ReadOnlyCollection to an IList or ICollection to access them.

Secondly, the dependency inversion principle requires that any ReadOnlyCollection be stored as an IReadOnlyList or some other appropriate interface, so there should be no way to convert it to an IList accidentally.

And finally, ReadOnlyCollection seems to be designed for the very specific scenario where you can't solve things using interfaces and need to design a class that supports IList but throws an exception when it is modified. In all other cases you should use something else.


> Sure, mega-interfaces are bad... I guess... but so are objects with massive sets of methods on them. Why the special focus on interfaces?

It's not a special focus; the SRP is a bigger deal and focuses on objects.

OTOH, violations of the ISP force violations of the SRP, because an interface with unnecessary methods means that objects which must implement the interface will be forced to have unnecessary methods.

Screw up the SRP in a leaf class and you've created a problem for that class; screw it up in a branch class or an interface and you've screwed up a wider swath of the code base. As the LSP encourages composition over inheritance, if you are doing the rest of SOLID, interfaces are a bigger opportunity to screw up big chunks of the code base at once than classes are.


Concrete example:

Many languages' standard libraries have stuck methods for mutation into the base interface for collections of objects. This makes it awkward (at best) to try and use immutable collections. Either you create your own base interface, which makes you incompatible with the rest of the standard library, or you inherit the standard interface and then do something like throw an exception when someone attempts to mutate your collection, and just hope that never happens in production.
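Java's standard library is a concrete instance of this (a minimal sketch; `addDefault` is a hypothetical API, not anything from the JDK): `List.of()` returns an immutable list, but since it still implements the mutable `List` interface, a mutation attempt compiles cleanly and only fails at runtime.

```java
import java.util.List;

public class MutableInterfaceTrap {
    // This method's signature says nothing about mutation,
    // yet the shared List interface lets it compile for ANY List.
    static void addDefault(List<String> names) {
        names.add("default");
    }

    public static void main(String[] args) {
        List<String> names = List.of("alice", "bob"); // immutable list
        try {
            addDefault(names);
        } catch (UnsupportedOperationException e) {
            // The ISP violation surfaces only at runtime.
            System.out.println("caught UnsupportedOperationException");
        }
    }
}
```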


Exactly the example I've written in my other comment... I guess many people have been bitten by this!


It seems like a pretty ubiquitous problem.

I feel like it's actually just about the least painful in C#, since pretty much every collection interface derives from IEnumerable<T>. It doesn't have methods for random access or for getting the size of the collection, but it is at least immutable, so you can get away with using it quite a bit, maybe even most of the time.

In Java, on the other hand. . . woof.


>Dependency inversion to make things testable

Dependency inversion makes things unit testable.

>testing is good right?

All other things being equal, wouldn't a form of testing that doesn't require rearchitecting your code be better than one that does?


Rearchitecting sure, but when designing a code base from scratch, designing for DI is not a bad approach.


It can be. I think it's a trade off that needs to be applied on a case by case basis. Too much dependency inversion leads to writing way more code than necessary, all of which needs maintenance. Too little leads to brittle, tightly coupled code.

I think you have to weigh up the relative merits of having less code vs. having a lower cost of code change. Furthermore, following YAGNI dictates that you shouldn't really do DI until you actually do need it.

I think doing it to facilitate unit testing is a universally bad approach, and a code smell that indicates that what you really wanted was an integration test.


Another reason for small (not tiny) interfaces is to help with unit testing, when different smaller interfaces could be mocked in different ways.
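A minimal sketch of that point (`Clock` and `SessionChecker` are made-up names): when the dependency is a one-method interface, the test double is a single lambda rather than a stub class full of unused methods.

```java
// Hypothetical one-method interface: trivially faked in a test.
interface Clock {
    long nowMillis();
}

class SessionChecker {
    private final Clock clock;
    SessionChecker(Clock clock) { this.clock = clock; }

    boolean isExpired(long startMillis, long ttlMillis) {
        return clock.nowMillis() - startMillis > ttlMillis;
    }
}

public class SmallInterfaceMocking {
    public static void main(String[] args) {
        // The "mock" is one lambda: no framework, nothing to stub out.
        SessionChecker checker = new SessionChecker(() -> 10_000L);
        System.out.println(checker.isExpired(0L, 5_000L));     // true
        System.out.println(checker.isExpired(8_000L, 5_000L)); // false
    }
}
```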


I have to disagree, or at least say "it depends". If your domain or stack tends to use a bunch of similar and related operations over and over, then packaging them together in a "UtilitiesForX" class or module is often much less code than managing them independently. Plus, they often use or reference each other so you don't have to reinvent the wheel.

Standardizing tools and conventions in a shop is how you save time. Independent bolts may not fit independent wrenches, so you end up reinventing both. Bundling tools and parts into "kits" helps ensure they fit together well. There's a balance between part independence and kit standardization. Experience and analysis are needed to weigh that trade-off well.


This sounds a little like an intro to writing "Enterprise" code. Essentially each function has to be its own interface. And you probably need some factory functions in addition.

Why not provide a set of static functions for ByteUtils that behave in a specified way and be done with it?


There's more than one language that will quite happily convert back and forth between a function and an equivalent single-method interface, e.g. "a function that returns an int" and "an interface with one method: a function that returns an int".

Though "one method per interface" isn't the aim of the Interface Segregation Principle.
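Java is one such language: a lambda converts implicitly to any single-abstract-method interface (including a home-grown one like the hypothetical `DiceRoll` below), and a method reference converts back to a plain function type.

```java
import java.util.function.IntSupplier;

public class SamConversion {
    // A home-grown single-method interface...
    @FunctionalInterface
    interface DiceRoll {
        int roll();
    }

    public static void main(String[] args) {
        // ...and the JDK's own "function that returns an int"
        // both accept the same shape: the conversion is structural.
        DiceRoll fixed = () -> 4;                // lambda -> interface
        IntSupplier supplier = fixed::roll;      // interface -> function reference
        System.out.println(fixed.roll());        // 4
        System.out.println(supplier.getAsInt()); // 4
    }
}
```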


Because a function is not enough of a contract - it doesn't have a strong name at the point of injection. But yeah, pure functions should be used way more often.


Does everything need to be dependency injected? Sometimes it seems this is more important than code that works.


You can inject e.g. a Func<int, int, int> or an IAdder interface. Although they are equivalent for the sake of example, I prefer the latter for several related reasons: it gives the DI container a specific type name to work with and avoids complex setup there, it gives the person reading the code a lot more to work with when trying to understand what the injected thing is used for, and Func<int, int, int> is very general - not all functions with this signature will be a useful adder.
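The same trade-off in Java terms (`Adder` here is a made-up name, the analogue of the hypothetical IAdder): the generic `IntBinaryOperator` and a named single-method interface accept the same lambda, but only the named type tells the reader (and a DI container) what it's for.

```java
import java.util.function.IntBinaryOperator;

public class NamedVsGeneric {
    // A named contract: the type itself says what it's FOR.
    @FunctionalInterface
    interface Adder {
        int add(int a, int b);
    }

    public static void main(String[] args) {
        // Same lambda, two very different signals to the reader:
        IntBinaryOperator someOp = (a, b) -> a + b; // could be anything
        Adder adder = (a, b) -> a + b;              // intent is in the type

        System.out.println(someOp.applyAsInt(2, 3)); // 5
        System.out.println(adder.add(2, 3));         // 5
    }
}
```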


Again, why does everything need to be dependency injected? Why not a simple function that adds two numbers? I see these codebases where everything is abstracted two and three times and I don't understand what this achieves other than adding a ton of complexity for even simple things.


I haven't seen your codebase, so I can't answer whether it injects too many things or too few, or whether it injects the right things. It also depends on how big the codebase is and how long people are going to continue working on it - i.e. "enterprisy" concerns.

But injection is done for testability and flexibility.


No, but you will have an extremely hard time finding devs who can mix OOP and functional code effectively. Mostly it's factory-factory and but-muh-Haskell types.


Oh, good example. It shows everything that's wrong with OO thinking.

I could split the file interface like this, thinking that some modules will want to read only and some will want to write only (in rare cases you want trim only, but that is obvious except to "enterprise architects", apparently).

Except it's splitting hairs. And my readers can ignore the write interface and my writers can ignore the read interface and that works in most languages.

> This gives the flexibility for the clients to combine the abstractions as they may see fit and to provide implementations without unnecessary cargo.

Your compiler won't let you call an abstract method. It's that simple.

It's not increasing flexibility, it's increasing red tape and maintenance costs.

One of the reasons C# feels more fluid is that they reduced the level of lunacy created by this way of thinking, where you create two or three classes to solve a problem that's solvable in two lines in other languages.


As I understand this, the client here is not the caller of the methods defined in the interface, but the one implementing them.

The caller has no problem just ignoring the methods that it doesn't need. But the implementer doesn't have that option. If you have a read-write interface and a read-only source, you need to resort to subpar solutions like throwing an exception.
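A Java sketch of the implementer-side fix (all names here are hypothetical): split the fat interface so a read-only source simply never declares write(), and the misuse becomes a compile error instead of a runtime throw.

```java
// Hypothetical interfaces sketching the read/write split.
interface ReadSource {
    String read();
}

interface WriteSink {
    void write(String data);
}

// A read-only source implements ONLY ReadSource: there is
// no write() to stub out with a throw.
class ConstantSource implements ReadSource {
    public String read() { return "hello"; }
}

// A full read-write object composes both interfaces.
class Buffer implements ReadSource, WriteSink {
    private String data = "";
    public String read() { return data; }
    public void write(String d) { data = d; }
}

public class SplitInterfaces {
    static void copy(ReadSource src, WriteSink dst) {
        dst.write(src.read());
    }

    public static void main(String[] args) {
        Buffer buf = new Buffer();
        copy(new ConstantSource(), buf);
        System.out.println(buf.read()); // hello
        // copy(buf, new ConstantSource()); // would not compile:
        // ConstantSource is not a WriteSink.
    }
}
```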


SOLID principles also make porting between languages so much more feasible. I ported Apache Shiro from Java to Python. It was a major undertaking. However, it was made possible by the authors following SOLID design principles. There was no one point where I was completely overwhelmed.

If you are writing software that others will use in a variety of ways, consider following SOLID principles as empathetic programming.


Interface Segregation Principle: The Single Responsibility Principle applies to interfaces too.


Yep. I view ISP as simply a reminder that an object's public interface should be focused and cohesive.


I like to work together with open-minded smart people. Stuff like SOLID only matters if you don't.


Oh, come off it.

Not every team gets to be made of 100% 10x supergenius ninja rock gods. And no one gets to be at their absolute best for every line of code they're ever going to write. Everyone, no matter how gifted, does a better job on average if the boring, obvious thing also tends to be the correct thing. Having some structure makes doing real work with real humans better, not worse.

SOLID isn't an iron law, it's a heuristic. And a damn good one. "Open-minded smart people" can and will find reasons to bend and break the rules sometimes, and the more experienced one becomes, the more readily they'll be able to see when the rules don't apply. That doesn't mean they're bad rules.

Maybe I'm not open minded or smart enough, but I've seen plenty of codebases that adhered to some set of coherent architectural guidelines and codebases that didn't, and I know which subset I prefer to work on.


No need to come off it. I don't mind working with mediocre people, but I do mind when they pester me with principles like SOLID, just because they don't understand good design when they see it. And, if you are working with great people, codifying good design into guidelines like SOLID is not necessary.


I would say that this is egotistical rubbish. As the saying goes "Learn the rules like a pro, so you can break them like an artist."

In other words, good experienced people (if they are also open-minded and smart) aren't ignorant of these rules - they have practised the rules, internalised them and occasionally decide to deliberately bend or break them.


I’d argue if you started with these rules from the beginning, you are not much of an artist.


You are welcome to try and make that argument. But in the context of computer programming, I don't think that idea is very coherent.


I won't make this argument for you, as you already seem very convinced that you are right. I know that I never cared much for these principles; they are fads that come and go. Get a math degree and a computer science degree, and you won't need some made-up principles that you believe in the way you believe in religion.


> Get a math degree, and a computer science degree

Been there, done that. Also, I am open to positive cases being put forward, and dismissive of angry rants and straw-man arguments.



