"if you use a hardcoded value more than one time consider making it public final constant"
"if you have block of code in more than two place consider making it a separate method."
Also, half of these principles imply the other ones.
edit: This is meant seriously, btw: if you're giving advice to novice programmers, the instructions should be clear. If you say 'consider making a separate method', what should the programmer consider? A programmer who is new to this won't know what considerations to make either.
This goes for all the principles, by the way: they're all fine and dandy, but they're also trade-offs. Every time you abstract something, you risk obfuscating the code for the uninitiated.
As arethuza suggests, Keep It Simple Stupid is also a solid design principle, and it contradicts a whole lot of the other principles.
Adding e.g. a symbolic constant is a bet. You win if during the lifetime of the system, you discover that the constant must be changed, and the gain is the time saved by changing it quickly and with few errors. If you never have to change the constant, you lose the time spent on introducing the constant and the time spent on looking up the constant while reading the code.
Most of the rules here are long shots (or whatever the financial people call betting on an unlikely payoff), except for KISS.
I don't think these are rules of thumb that you should take for granted without criticism, especially when paraphrased from their original (well-researched) source.
And I especially don't think you should take from them only what you like. They are principles you should know because they are known to lead to well-designed software; you shouldn't leave any of them out.
The person you responded to mentioned novice programmers. Teaching novice programmers with the attitude of "use as much of this as you like/want/understand" is a pretty poor technique that will not give good results.
If you separate the blocks out into their own methods, the reader will have a much easier time verifying that they are decoupled.
I believe that inclusion solves the issue you've described. Obviously you don't need every 5 lines of code encapsulated into a method, but preventing code duplication should be paramount.
A lot of people seem to think that code duplication is the only reason to refactor code out into its own function. While that's certainly a good reason, there are many others. For me, being able to test the code easily is a major one. The smaller the function (specifically, the less branching it has and the less it does), the easier it is to test.
Obviously, it's a balancing act.
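The testability point can be sketched like this (hypothetical names): a small, single-branch function pulled out of a larger checkout routine becomes trivial to exercise on its own.

```java
class PriceCalc {
    // Hypothetical example: one branch, one responsibility, pulled out
    // of a larger routine purely so it can be tested in isolation.
    static double applyDiscount(double price, double rate) {
        if (rate < 0 || rate > 1) {
            throw new IllegalArgumentException("rate must be in [0, 1]");
        }
        return price * (1 - rate);
    }
}
```

A couple of asserts cover the happy path and the guard clause; no fixtures, no setup.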
The worst effect of this way of doing things is the image of programming it presents to beginners. Programming well has nothing to do with that, really.
I mean, of course you shouldn't deliberately flout good guidelines, but the measure of your code is ultimately not these.
The guidelines set out here allow you to deliver that value without having to rip up your product for every minor change.
Just as an example, I really hate principles like 'program to an interface, not an implementation'. More often than not, you are not writing 'extensible frameworks' or 'reusable classes', so hiding each.and.every.class behind an interface serves no purpose. I get really depressed browsing the typical Java or C# code written by all the architecture astronauts around me who are obeying some variant of the '10 object oriented design principles' and thinking they need to make an ISomething for every Something, constructed by an AbstractSomethingFactory they can dream of, even though it's completely obvious only one kind of Something will ever exist.
TL;DR: there's almost as much risk creating crappy code blindly following what is commonly sold as 'OO design principles' as there is in hacking up everything with complete disregard of sound design and architecture.
I think this should come pretty close to what your parent poster was trying to say. I also think many developers working in languages that aren't strictly OO would agree.
When it comes to testing, how are you going to test your ClassX if it's tightly coupled to ClassY rather than IClassY (which can be mocked/stubbed)? Oh, you have to test both.
Also, an interface is a great way of implementing "worry about that shit later - just let me talk to it now". You can define and evolve the contracts before you have to refactor expensive implementations.
As for AbstractSomethingFactory, I've not written a bit of code like that for nigh on 10 years. The container gives me what shit I need, ready configured with all dependencies.
Blindly following rules is dangerous, but burning entire books for what appears to be lack of experience is much more dangerous.
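The testing argument above, sketched with invented names (Notifier is the seam, AuditLog the class under test): a class that depends on a small interface can be exercised with a one-line hand-rolled stub, no heavyweight mocking framework required.

```java
// Invented names, purely illustrative.
interface Notifier {
    void send(String message);
}

class AuditLog {
    private final Notifier notifier;

    AuditLog(Notifier notifier) { this.notifier = notifier; }

    // Builds the entry and hands it to the Notifier; in tests the
    // Notifier can be a lambda that just records what was sent.
    String record(String event) {
        String entry = "AUDIT: " + event;
        notifier.send(entry);
        return entry;
    }
}
```

In a test, `new AuditLog(sentMessages::add)` replaces the real notifier, so AuditLog is verified without testing the notifier implementation along with it.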
Why would I need to introduce a separate interface class to define this contract, instead of making it explicit through the public class interface? What problem do I solve by adding another layer of abstraction between Something (the class) and Something (the interface), if Something is just some internally used entity that isn't part of a framework or public API? I've seen millions of lines of code to date, and the number of times hiding public class interfaces behind interface classes (note how those sound almost identical; wonder why that would be?) paid off can be counted on the fingers of one hand. Interface classes only make sense if you expect classes to be used like components in a component-based architecture, e.g. for plugins, or for polymorphic, adaptable structures that may be implemented by combinations of classes, etc.
>> When it comes to testing, how are you going to test your ClassX if it's tightly coupled to ClassY rather than IClassY (which can be mocked/stubbed)? Oh, you have to test both.
You don't always need interface classes for loose coupling, abstract base classes cover many cases, and in many other cases loose coupling is simply not required. There is nothing wrong in testing multiple classes that are tightly coupled together, without stubbing. In those cases where loose coupling is absolutely essential (frameworks for example) and you want to compose classes to provide some kind of interface, use interface classes. My point is not that interfaces classes are wrong, but they are overused, and more often than not, they add complexity without benefits.
>> Also, an interface is a great way of implementing "worry about that shit later - just let me talk to it now". You can define and evolve the contracts before you have to refactor expensive implementations.
Again, I don't see why you would need interface classes for any of this. I can write a stub class that lets me 'worry about shit later' simply by defining its public interface and leaving its implementation empty. If I have a lot of classes adhering to the same (semi-)abstract interface and I don't want to stub them all with empty methods, I simply create a temporary base class I can remove later. Eventually I'll have to implement everything anyway, and I'm not writing interfaces just for the fun of it, or for the remote possibility that some interface may be required later even though no-one asked for it.
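The stub-class approach described here might look like this (hypothetical names): the public methods define the contract now, and the real body arrives later, with no separate interface type introduced.

```java
// Hypothetical sketch: the public method *is* the contract; callers
// can be written and compiled against it before the body exists.
class ReportExporter {
    // Contract: writes the report and returns the number of rows written.
    int export(String destination) {
        return 0; // TODO: real implementation later
    }
}
```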
>> As for AbstractSomethingFactory, I've not written a bit of code like that for nigh on 10 years. The container gives me what shit I need, ready configured with all dependencies.
Interesting that you should say that, because I used AbstractSomethingFactory simply as a caricature of an often overused design pattern, without implying that you should never write an abstract factory. In fact, abstract factories are a useful solution in many common designs, even though they are so overused.
>> Blindly following rules is dangerous, but burning entire books for what appears to be lack of experience is much more dangerous.
Which means we have to aim somewhere in between, which is exactly what I was trying to say in the first place. Unfortunately many developers somehow seem to drift to either one of the extremes, the Java/C#-like language programmers to the 'create moar classes!' side, everyone else to the 'just make it work' side. My point is that both lead to unmaintainable, bug-ridden code.
Secondly, the abstract base class concept is actually a deadly form of coupling. As many people have suggested since the dawn of the problem, composition is better than inheritance, and what sits at the composition boundaries? Interfaces! Inheritance usually turns into an LSP-violating clusterfuck. I know: I have spent 2 weeks refactoring one into something which doesn't stick a fork in your eye every time you change something. And this was a non-trivial one, with over 100 classes in the inheritance graph (a roughly vomited-out version of the Party archetype from the MDA book).
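The 'interfaces at composition boundaries' idea, sketched with invented names: the composing class talks to its collaborator through an interface rather than inheriting the behavior from a base class, so either side can change independently.

```java
// Invented example: Greeter composes a MessageSource across an
// interface boundary instead of extending a base class.
interface MessageSource {
    String message();
}

class Greeter {
    private final MessageSource source;

    Greeter(MessageSource source) { this.source = source; }

    // The only coupling is the one-method interface above.
    String greet(String name) {
        return source.message() + ", " + name + "!";
    }
}
```

Because the boundary is a single-method interface, a lambda is enough to plug in any behavior: `new Greeter(() -> "Hello")`.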
So your tests are chock full of empty stub classes rather than mocks? Yuck.
If you look at ASP.Net's native API (before System.Web.Abstractions hid all the crimes away), you will see the sort of shit you'll get yourself into if you follow your own advice.
Agree with your last point though.
For reference, yes I am an architecture astronaut with one foot nailed to the ground :)
That's just semantics; let's concentrate on why I would always want to have an interface class instead of using the public class interface as the contract between a class and the rest of the system using it. Remember that many classes in any software design don't have (or need to have) base classes or subclasses, composed classes, abstract interfaces, whatever. They are often nothing more than pieces of data with related methods, and nothing else. Writing lots of fluff around them to make such classes extensible, flexible, replaceable, loosely coupled, easy to stub for testing, whatever, does not add any value for you (as the programmer), your colleagues (when reading the code) or the customer (when running it). You'll know when you need all of this anyway, and the moment you know is early enough to think about it; there's no need to have contingency plans for things your classes are never going to be used for.
>> Secondly, the abstract base concept is actually a deadly form of coupling. As many people have suggested since the dawn of the problem, composition is better than inheritance and what sits at the composition boundaries? Interfaces!
Interfaces, not interface classes. We have many classes in our codebase that compose other classes, and none of them have interfaces at all, because the whole codebase is C++, which doesn't even have the concept of interface classes. Yet our codebase is very clean and robust, fully covered by unit and integration tests, and rarely (if ever) contains bugs that are caused by bad or inflexible architecture. It does happen, on occasion, that some part of the code or its design doesn't satisfy new requirements anymore, but when that happens we simply refactor it, and if needed (which is extremely rare) write a facade around it to support its old public interface.
Note that I'm not saying I'm happy with the fact C++ doesn't have interface classes, because there definitely are situations I would want to use it, for example where a component-based design would best fit the problem at hand. In those cases we usually introduce a pure-abstract class that mimics an interface class, which isn't ideal but does the job.
I'm not sold on the 'abstract base class is a deadly form of coupling' at all. This may hold in some situations where you have many classes interacting in an architecture or framework that has a high probability of changing/evolving over time, but as I've said before: most code isn't like that. Most code is actually pretty plain and boring, without much need for architecture at all. Two or three levels in your class hierarchy and not more than a handful of class dependencies can solve a surprising amount of programming problems. Indiscriminately discarding inheritance or abstract base classes as 'evil' is naive at best, ignorant at worst. I can come up with many programming problems that are a perfect fit for concrete and/or polymorphic classes deriving from (semi-) abstract base classes, that would most definitely not be better implemented using composition.
>> I have spent 2 weeks refactoring one into something which doesn't stick a fork in your eye every time you change something
Similarly, I've spent a whole lot more than 2 weeks refactoring big balls of mud created by developers who threw as many classes against the wall as they could come up with, introducing interfaces, delegates, composites, strategy classes, and whatever other design pattern they found in the GoF book, only to arrive at the conclusion that 80% of the code was boilerplate, and the solution that was spread out over all those classes could be implemented by 1/10th the lines of code, concentrated in one or two classes. YMMV depending on the application domain you work in, but having worked on large business logic applications, web applications, and (now) scientific simulation software, my experience is that most complexity in software systems is only there because someone introduced it, not because it is necessary to solve the problem it was supposed to solve.
>> So your tests are chock full of empty stub classes rather than mocks? Yuck.
Our test cases aren't full of stub classes, and they aren't full of interface classes either. Most classes can be perfectly tested in isolation by simply instantiating them and feeding them static inputs. Some others are tightly coupled and covered by integration tests. None of this is rocket science; we're at over 1M SLOC and we have no problems covering them in our tests.
However to turn things a bit more constructive, let me know what languages you see as less of a mess if you have the time. It doesn't matter whether we agree and I'm not looking for a fight, just your opinion and reasoning. I might well agree.
Or at least avoiding "general-purpose tool-building factory factory" factories:
There's little harm in duplicated code. When something changes you don't have to refactor your class structures; you just change the code for the thing it affects. And if you have to make multiple edits, it's often pretty obvious what you need to do.
(Data and configuration are different, it's more important for these to have a canonical home.)
I invoke Layne's Law and go back to work.
"Classes, methods or functions should be Open for extension (new functionality) and Closed for modification. This is another beautiful object oriented design principle which prevents some-one from changing already tried and tested code."
No, no, no. Let someone modify whatever the hell they want. If they screw it up, that's on them. Quit it with this childish belief that you have to control people.
My girlfriend came up with a nice metaphor while trying to follow along: the open/closed principle applies to natural language as an interface. It's important that language can be extended to convey new meaning, but also important that it's closed for modification, so we retain the ability to understand each other.
If changing a common component or library breaks my code, they may be at fault, but it's still my feature that breaks. The open-closed principle is about sandboxing modifications to the scenarios that need those modifications. It's not about babysitting.
Without it, when you make a change you need to consider every consumer or risk breaking their case. Or, you get these nightmarish components that have "modes" (if I'm included in this context, act like this, if I'm included in that context, act differently).
It's trickier to use, because blindly applying it can lead to immense classes, but I find it a useful thing to keep in mind.
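One common reading of open/closed, sketched with invented names: new behavior arrives by extension (a new class implementing an existing abstraction), while the already-tested code is never edited.

```java
// Invented example: totalArea is closed for modification; adding a
// new shape means adding a class, not touching the tested loop.
interface Shape {
    double area();
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Geometry {
    // Tried and tested once; never revisited when shapes are added.
    static double totalArea(Shape... shapes) {
        double sum = 0;
        for (Shape s : shapes) sum += s.area();
        return sum;
    }
}
```

Adding a Triangle later is a pure extension: a new class, zero edits to Geometry.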
class Foo < ActiveRecord::Base
  delegate :hello, :to => :greeter
end
The book "Effective Java" (from 2001) still has to one the "must read" for any Java programmer that didn't read it.
In it, Joshua Bloch has always been very clear about all the deeply ingrained, ugly warts Java has had from the start (and still has). Like, for example, the complete and utter impossibility of respecting the equals() and hashCode() contracts in the face of inheritance.
It took a long time to sink in, but by now quite a few Java programmers are beginning to accept that having equals() and hashCode() at the top of an inheritable OO hierarchy may not have been the greatest thing since sliced bread.
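The equals() problem is easy to demonstrate; this is essentially the Point/ColorPoint example from Effective Java, where adding a value component in a subclass breaks the symmetry requirement of the equals() contract.

```java
class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return p.x == x && p.y == y;
    }
    @Override public int hashCode() { return 31 * x + y; }
}

class ColorPoint extends Point {
    final String color;
    ColorPoint(int x, int y, String color) { super(x, y); this.color = color; }

    // Adds a value component to the comparison, and symmetry dies:
    // a Point equals a ColorPoint, but not the other way around.
    @Override public boolean equals(Object o) {
        if (!(o instanceof ColorPoint)) return false;
        return super.equals(o) && ((ColorPoint) o).color.equals(color);
    }
}
```

`new Point(1, 2).equals(new ColorPoint(1, 2, "red"))` is true while the reverse comparison is false, violating the contract that equals() must be symmetric; no equals() override in the subclass fixes this without breaking some other clause.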
The second book to read is Goetz et al.'s "Java Concurrency in Practice" (one of the best-looking programming book covers ever, which never hurts incredibly high-quality content).
But the real lesson is that: Java is too complicated and has way too many warts to ever produce beautiful code. The JVM is not all bad but "Java the language" is really fugly.
Switch to a language that has a saner way to deal with concurrency (say, Clojure's STM or Go) and you can throw most of the pages in these books in the trash.
But is that a problem with Java or is it a problem with ANY OO language?