Please don't. Interfaces are one of the few redeeming aspects of the language. They give you a way to fake mixins, they're the only hook for dynamic proxies (for quick, simple, non-bloated aspect-oriented programming), and they let you swap in different implementations. Interfaces are good; use more of them.
What's the perceived downside of using interfaces? That you end up with more files? That you dead-end into bare method declarations when browsing your code? Any others?
How about trading off code readability for code complexity that has some potential future payoff? Library developers might want to trade away more of their readability than app developers because of the nature of 3rd party code. It's ultimately a trade-off that you have to make, but there are some general rules of thumb that can help you keep uber-flexibility from taking over.
Please read the article. Its title is very misleading. Basically it only says "With this great new mocking framework, you no longer have to create interfaces for everything just to be able to write proper unit tests".
I recognize the point of the blog post, but I think that it's more dismissive of interfaces than is warranted - the title outright says to mostly stop using them, and the 3rd paragraph suggests their use is split between over-engineering and mocking. It's not until the end that you start talking about what else interfaces are good for, and even then there's a sense that you think they should be used with care.
I just disagree with that; I think it's far better to have an interface you don't really need lying around than to be caught without one when you do need it.
There are good reasons to use interfaces and to just accept the extra minor bit of work that goes with them. When using injection frameworks (like Spring), it makes life easier to reference an iface and have your concrete class plugged in automagically (using annotations). The iface also provides a clean look at the intentions of the application. Writing impl classes is trivial with an IDE - you can create your impl and fill in stub methods quite easily. The tradeoff of separation can be worthwhile in a complex system.
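To make the injection point concrete, here's a minimal sketch of coding to an interface, with the wiring done by hand rather than by Spring. All the names (GreetingService and friends) are illustrative; in a real Spring app you'd annotate the dependency (e.g. with @Autowired) and let the container plug in the concrete class.

```java
// Hypothetical names for illustration. The client codes to the interface
// only, so either implementation can be plugged in without changing it.
interface GreetingService {
    String greet(String name);
}

class DefaultGreetingService implements GreetingService {
    public String greet(String name) { return "Hello, " + name; }
}

class ShoutingGreetingService implements GreetingService {
    public String greet(String name) { return "HEY " + name + "!"; }
}

public class Demo {
    // Accepts any GreetingService - this is where a container like Spring
    // would inject the chosen implementation.
    static String run(GreetingService svc) {
        return svc.greet("world");
    }

    public static void main(String[] args) {
        System.out.println(run(new DefaultGreetingService()));  // Hello, world
        System.out.println(run(new ShoutingGreetingService())); // HEY world!
    }
}
```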
One reply to address the others. Yes, Spring does not require iface proliferation - it just makes life easier when you decide you need different implementations of the same idea if you've committed to ifaces up front. YAGNI still holds true since it's usually easy enough to refactor in an interface when it does become necessary.
As far as comments about the ugliness of Java in this regard, I couldn't agree more. However, Java is Java and Python is Python, etc. A lot of people working in Java don't have the option of switching to something else (think corporate, where what is common and familiar is good), and if you are committed to Java, then you must embrace its paradigms. Interfaces and implementation classes are one of those. Enjoy it without fuss and focus on the problem at hand. You'll lose less sleep that way.
Ideally, you'd just get it right the first time, but if you're going to screw up, it's usually better to screw up by having an interface you don't really need than by having a class that should really be an interface. Generalizing that makes it hard for me to agree that FooIFace and FooImpl is, on its own, evidence that something's wrong.
This does condense what I dislike about java quite nicely. All proper java code is full of speculative architecture and genericity -- in order to have your code easily extensible, you do have to have setters, getters, interfaces, factories and everything else from version one.
As an example of a language that gets it right, look at python. You can write your original classes in the simplest possible way they could be written, and yet expand from there to just as much architecture as your problem needs without ever touching the users of the class.
It's easier to evolve Python programs along that particular axis because of duck typing and dynamic dispatch, but otoh, it's harder to rename or remove methods or change method signatures without thorough unit tests. I really don't think one is better than the other in all cases, and neither really seems to get it 'right' in the sense that there's no room for improvement.
Well, point taken about the getters/setters; those should be automatic, or at least managed via a keyword or annotation (private gettable String fname, or something like that).
As for the rest of it.. it's just a question of proper design. If you're just making a utility, you don't need any interfaces or factories. If you're making a module for something that's intended to be run with a half-dozen service dependencies, it's probably a good idea to design it in a way that said dependencies can be injected -- whether you're using spring or doing something more lightweight. That applies across languages, same deal in python.
I honestly don't think "private gettable String fname" is in any material way better than getters and setters, as your IDE can already insert the boilerplate for you. The problem isn't the boilerplate per se, it's forcing you to think about things before their time. Your attention is a scarce resource. The only satisfactory way for the first version is "public String fname", with the ability to later turn that into proper getters and setters when it is needed without the consumers of the class even knowing.
My big point about python is that you can always inject the dependencies, and you don't have to spend any thought beforehand to achieve this. Java could go a long way here with just a few minor changes -- for example, remove "new", so you get a new object by calling its constructor, without any special operators. Allow shadowing of functions. This alone would end the factory madness -- in fact, factories are just an awful boilerplatey way to implement this. For example, if I have a class Foo that needs to instantiate Bar to function, and I want to inject a different Bar for some tests (or something), in python I can just:
def make_test_foo():
    Bar = DummyObjectConstructor
    return Foo()
(this will of course occur only in local scope, so that other potential users in other threads are not going to be stuck with dummy objects against their will)
To achieve the same in java, the way I was taught was to design in a BarFactory for use when needed. In most cases, it's not going to be needed. But if you don't make it, boy are you going to be in a world of pain when you do need it, and half the world already depends on your class.
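For comparison, here's a sketch of that factory indirection in Java, reusing the comment's hypothetical Foo/Bar names (RealBar/DummyBar are made up for the example). Foo never calls new Bar() directly; it asks a BarFactory, so a test can hand it a factory that builds dummies instead:

```java
// Hypothetical types mirroring the Python example above.
interface Bar { String work(); }

interface BarFactory { Bar create(); }

class RealBar implements Bar {
    public String work() { return "real"; }
}

class DummyBar implements Bar {
    public String work() { return "dummy"; }  // canned answer for tests
}

class Foo {
    private final Bar bar;
    // Foo depends on the factory, not on a concrete Bar class.
    Foo(BarFactory factory) { this.bar = factory.create(); }
    String use() { return bar.work(); }
}

public class FactoryDemo {
    public static void main(String[] args) {
        Foo production = new Foo(RealBar::new);
        Foo underTest  = new Foo(DummyBar::new);
        System.out.println(production.use()); // real
        System.out.println(underTest.use());  // dummy
    }
}
```

This is exactly the boilerplate the comment is complaining about: the factory interface exists only to make the constructor call swappable.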
What really irks me about this is that HotSpot is perfectly capable of dealing with these kinds of things -- it is a really nice platform for this kind of dynamicity. Sun just used to think that poor programmers are going to be confused when you hit them with too many high-level concepts, and refuses to add constructs like this. Which would be fine, except that the programmers invariably go on to develop their own ugly workarounds, like factories, because they need the power.
This I believe to be the big reason why python is so much faster to program in than java. I can program for the now, completely ignoring superfluous architecture while my program consists of 3 files, with the knowledge that I can safely add it later when needed. The syntax and dynamic typing are just a nice extra.
I don't do Java, but I always prefer to code to interfaces rather than implementations. It is more sane and allows you to "late bind" more. It is also easier to test, even in languages that aren't as strict as Java. Sure, not everything should be an interface/implementation combination, but it's not a bad start.
I guess mock objects and interfaces are only tangentially related to each other. I stopped using interfaces in this manner. My rule: if you only have one implementation of an interface, you don't need the interface. Wait to create the interface until you get your second implementation, and we have great tools that make that easy.
Now for people who do lots of mock objects this rule doesn't help them out much, but my other rule is I don't overly separate my system for testing purposes. Sure, there are times when you need a mock - JavaMail, say, or other external services you have to stub out. But doing it for every service you have is really a lot of work for questionable gain. This is predicated on practical experience rather than architecture theory. The reason you separate your system for testing is to find more bugs. I found that I wasn't finding any more bugs by separating things than I was by testing the system integrated. And, in fact, I found more bugs in the integration of services than I would have by testing them only in isolation. Therefore, I stopped doing the extra work to separate them because the payoff was really too small to bother. It's been much more productive to think like this.
So, the argument is that you can stop the pattern of writing IFoo and FooImpl when you're pretty sure that the only other impl of IFoo will be MockFoo.
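That's the shape being pushed back on: an interface whose only second implementation is a hand-rolled mock. A minimal sketch, with all names (IFoo, FooImpl, MockFoo) illustrative:

```java
// The IFoo/FooImpl pair, plus the mock that was its only other implementor.
interface IFoo {
    int count();
}

class FooImpl implements IFoo {
    public int count() { return 42; }  // imagine a real DB call here
}

class MockFoo implements IFoo {
    public int count() { return 0; }   // canned answer for unit tests
}
```

Modern mocking frameworks can mock the concrete FooImpl directly, which is the article's argument for dropping IFoo until a genuine second implementation shows up.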
The first { opens the body of the anonymous inner class.
The second { starts an instance initialization block, which runs on each object when it is constructed - not sure if it's before or after the constructor - gonna guess before!
It's useful in this case as anonymous inner classes can't define constructors.
No, it isn't. The first { encloses the class and the second encloses the "anonymous constructor" or whatever it's called. It's code executed at object creation, even before the "named" constructor.
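The idiom being described is usually called "double brace initialization". A minimal example: the first { opens an anonymous subclass of ArrayList, and the second { is an instance initializer block that runs as part of construction, before any named constructor body would.

```java
import java.util.ArrayList;
import java.util.List;

public class DoubleBrace {
    // Anonymous subclass of ArrayList (first brace) containing an
    // instance initializer block (second brace) that populates it.
    static List<String> names = new ArrayList<String>() {{
        add("alice");
        add("bob");
    }};

    public static void main(String[] args) {
        System.out.println(names); // [alice, bob]
    }
}
```

Handy for the reason mentioned above: anonymous inner classes can't declare constructors, so the initializer block is the only per-instance setup hook they have.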