Hacker News

I always struggled to understand the appeal of the open/closed principle.

"The idea was that once completed, the implementation of a class could only be modified to correct errors; new or changed features would require that a different class be created."[1]

This sounds a lot like bolt-on coding: always adding code rather than assimilating new features into the codebase. That doesn't seem like a sustainable strategy at all. Yes, you don't risk breaking any existing functionality, but then why not just use a test suite? The bigger problem, though, is that instead of grouping associated functionality into concepts (OO) that are easy to reason about, you are arbitrarily packaging up functionality based upon the time of its implementation (subclassing to extend).

[1] http://en.wikipedia.org/wiki/Open/closed_principle




  > ...you don't risk breaking any existing functionality but 
  > then why not just use a test suite?
The examples fail to make explicit that there are two programmers: One is building and distributing a library, the second is building an application using that library.

The library programmer can easily distribute the test suite so that the application programmer can run the tests, but that doesn't change the fact that if the library programmer changes an object's interface, it breaks the application programmer's code. By committing to keep the old object's interface intact, the library programmer is giving the application programmer time to migrate their code to the new objects.
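One common way a library programmer honors that commitment is a deprecation shim: the old entry point keeps working as a thin wrapper over the new one while application code migrates. A minimal sketch (the function names `get` and `fetch` are hypothetical, not from any real library):

```python
import warnings


def fetch(url, timeout=10):
    """New interface: explicit timeout parameter."""
    # Stand-in body for illustration; a real library would do I/O here.
    return ("GET", url, timeout)


def get(url):
    """Old interface: kept intact but marked deprecated."""
    warnings.warn("get() is deprecated; use fetch() instead",
                  DeprecationWarning, stacklevel=2)
    # Delegate to the new interface so behavior stays consistent.
    return fetch(url)
```

Application code calling `get()` keeps working and gets a migration warning instead of a hard break.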


I like your library/application distinction.

For applications I'm not sure open/closed makes as much sense - http://codeofrob.com/entries/my-relationship-with-solid---th...


  > For applications I'm not sure open/closed makes as 
  > much sense.
I agree with you. If the same programmer (or team) is maintaining "both sides" of an object's interface, i.e. both implementing the behavior of the object AND consuming the object in their application code, I think we can assume that they'll know that if they change one side they'll need to immediately change the other.

I'm not sure I agree with Rob Ashton's points, though. In his blog post, Rob trivializes the utility of third-party libraries:

    > * These [libraries] are either replicable in a few 
    >   hours work, or easily extended via a pull request.
This is simply not the case with any truly useful library. Useful libraries often represent years of careful design work and debugging. (Think networking libraries, UI frameworks, etc.)

He also underestimates the amount of time it takes to continuously change application code to keep up with breaking changes from third-party libraries:

    > * These [libraries] can be forked for your 
    >   project and changes merged from upstream with 
    >   little effort.
Again, if the library distributes a breaking change, it may require many, many hours of code changes and re-testing to make sure everything's still working properly. Hours that could be spent building new features. For that kind of tradeoff there'd better be a damn good reason for the change: Improved security or performance or ease of use.


I think the distinction is a straw man, really. Changing the semantics of interfaces is generally a bad idea once you have code depending on them, whether that code is within your application or a library. Where open/closed falls down is that it requires you to subclass just for _adding_ new methods to an interface. You're not affecting any existing code, but you're still introducing entropy into your codebase.
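A small sketch of the entropy being described, with hypothetical class names: under a strict open/closed reading, adding a `total()` method to an existing class means introducing a subclass, so callers must now know which of the two classes they hold even though no existing code changed.

```python
class Invoice:
    """Existing, 'closed' class; its interface may not grow."""

    def __init__(self, items):
        self.items = items  # list of (description, price) pairs


class InvoiceWithTotal(Invoice):
    """Exists only because Invoice is closed to new methods."""

    def total(self):
        return sum(price for _, price in self.items)
```

Adding `total()` directly to `Invoice` would have been purely additive and broken nothing, yet the principle still forces the second class into the codebase.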


I agree it sounds like bolt-on coding. I've been told the way to avoid bolt-on coding (great name, by the way) is to refactor the software regularly. When you are in minor versions, you might do just-in-time coding, but to really qualify as a major version requires a careful view of the code as it exists now (not as originally designed), a documentation of a new design that meets the needs of the current customers, and a refactor to make sure the code reflects that design. Wash, Rinse, and Repeat.


Continuous refactoring is really the only way to maintain a good time-to-market for new features. Unfortunately, by the time your time-to-market suffers, you've probably already accrued a substantial level of technical debt, and it takes a more serious investment to get the codebase back into shape. As developers, the only way to really combat this is to include refactoring time in all development.



