Hilarious! Doing x=C.getX() is bad, but if it is hidden behind numerous layers of libraries and monstrous config files, somehow it becomes acceptable. Out of sight, out of mind. The fact that a global-scope bean is essentially a singleton doesn't seem to bother the architecturally inclined crowd; they are too busy admiring the sound of their own voices pronouncing the words "dependency injection", "mutability" and "coupling".
The top answer is a perfect example of what is wrong with IT today. It takes a working solution, declares it wrong, and starts piling up classes and interfaces to solve a problem that was never a problem in the first place (the OP never said that their singleton-based cache didn't work; they merely asked whether there are "better" ways of doing it). So in the end we have the same singleton cache, but hidden behind interfaces ("It makes the code easier to read" - yeah, right, easier, my ass! Ctrl+click on an interface method and try to read the code), thousand-line XML Spring configs, and other crap that is completely irrelevant and hard to follow and debug, but glamorous enough for the SOA boys to spend endless hours talking about it.
Doing x=C.getX() is bad because there's no way of mocking C, which makes it impossible to unit test a component that uses C (the unit test will have to use the concrete implementation of C). Using dependency injection as described in the accepted answer, and particularly separating the concerns of C into several different interfaces, allows you not only to mock C, but to create mock classes each of which mocks one particular logical subset of C's functionality.
It's easy to dismiss all this as "bloated" and "enterprisey", and the people who use it as "architecture astronauts", but in reality this pattern really does help. Well, at least if you want to be able to unit test your code properly. Otherwise, you might as well just use a singleton object.
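A minimal sketch of the idea in Java; the names here (Cache, CommentService) are illustrative, not from the original question:

```java
// Hypothetical sketch: Cache and CommentService are made-up names.
interface Cache {
    String get(String key);
}

// The class under test depends on the interface, not on a concrete singleton.
class CommentService {
    private final Cache cache;

    CommentService(Cache cache) {
        this.cache = cache;
    }

    boolean isCached(String key) {
        return cache.get(key) != null;
    }
}

public class Demo {
    public static void main(String[] args) {
        // In a unit test, hand in a trivial fake instead of the real cache.
        Cache fake = key -> "spam".equals(key) ? "flagged" : null;
        CommentService service = new CommentService(fake);
        System.out.println(service.isCached("spam"));   // true
        System.out.println(service.isCached("other"));  // false
    }
}
```

Because the dependency arrives through the constructor, the test never touches the real cache at all.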
I realized recently that the reason there is no way of mocking C is a weakness in the language implementation/design. In Python, it's easy to mock and unit test globals, because you can just overwrite them in the test's setup and restore them in the test's teardown.
You could argue that it's a static vs dynamic typing thing, and I don't know enough to really dispute that, but it seems to me that there's no inherent reason a static type system can't handle swapping in a different concrete implementation. I think you can do this in Java/C# with some class-loading trickery, but it'd be inconvenient.
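For what it's worth, the same setup/teardown trick works in Java whenever the global is a non-final static field; a hypothetical sketch (Config and Client are invented names):

```java
// Hypothetical sketch: swapping a non-final static field around a test,
// the way a Python test would overwrite a module-level global.
class Config {
    static String endpoint = "https://real.example.com";
}

class Client {
    static String describe() {
        return "talking to " + Config.endpoint;
    }
}

public class SwapDemo {
    public static void main(String[] args) {
        String saved = Config.endpoint;        // setup: remember the real value
        Config.endpoint = "http://localhost";  // swap in a test double
        try {
            System.out.println(Client.describe()); // talking to http://localhost
        } finally {
            Config.endpoint = saved;           // teardown: restore it
        }
        System.out.println(Client.describe());     // talking to https://real.example.com
    }
}
```

The catch is that this only works for non-final fields; compile-time constant fields get inlined at the call site, and the technique falls apart as soon as the "global" is a private singleton instance you can't reach.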
And that's the whole point of all this, isn't it? Automated testing is a convenience that saves you from doing manual testing. Unit testing is a convenience that catches test failures before commit rather than during the automated testing phase. Creating the unit test, well: is it more convenient to create 8 interfaces and completely change the way the code is structured, or is it more convenient to mock out the singleton? It depends on the circumstance; I've done both in various languages. It's good to have the option.
It's a scoping thing that has no direct relation to static/dynamic typing, though it probably relates indirectly through the underlying philosophies of many languages, which motivate them to choose particular typing and scoping rules.
I might be speaking blasphemy here, but are the advantages of unit testing worth the increased code complexity?
In quite a few codebases I've seen, the amount of tests was entirely unrelated to how stable and maintainable the system was. Sometimes developers just fixed the tests rather than the bugs. And in many cases there was a slew of failing tests even though the system ran fine.
Who says that you have to be able to mock C? Not every class should be mockable. In the case described by the OP, why would we want to mock a cache anyway? In most cases a singleton is used as global state that can be easily accessed from all parts of the app without explicitly passing a state object around. We are dismissing a useful technique (the singleton) based on a requirement that is not applicable to it.
> Who says that you have to be able to mock C? Not every class should be mockable. In the case described by the OP, why would we want to mock a cache anyway?
Well, it isn't really a unit test if you don't, is it? If you don't mock it then you're testing not only the class under test, but also C's methods. And you're doing that not just in one place but in several different places in your test base. The same methods. Over and over again.
C.getX() is going to be called by many different methods in many classes. This means that each time you unit test any of these methods, you're also testing C.getX() when you should only be testing what the method does.
The + and - operators are also called by unit tests many times, so you're also repeatedly testing arithmetic operations "each time you unit test any of these methods, ... when you should only be testing what the method does".
Edit: I see what you're saying, but I think the effort we spend on decomposing an app into perfectly isolated unit tests is far greater than the effort of identifying and troubleshooting where the less perfect test failed.
In my case a cache could be something remote; having calls go directly to it will slow down unit tests a lot. Having one or two calls to it when unit testing the cache implementation itself is OK, but having every method that makes use of it make those calls isn't going to work.
In reply to your original question: it's not extremely bad if you can't unit test that class alone, but in this case you wouldn't be able to unit test anything that uses it either.
Fill up the cache with mocks of the objects it needs to contain. 10 objects = 10 lines of code.
If you really want a different cache implementation for testing purposes, use C as just an instance store and return a reference not to itself but to a cache interface. Put a switch inside the static C.getX() that returns an IX. Inside the switch, check whether System.getProperty("isTest") is "true" and return an XSimple (which implements IX); otherwise return an XReal (which also implements IX). A few lines of code, transparent and debuggable.
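A sketch of that scheme in Java; IX, XSimple, XReal and the "isTest" property name are from the description above, while the method bodies are made up for illustration:

```java
// Sketch of the property-switch scheme; the value() bodies are invented.
interface IX {
    int value();
}

class XReal implements IX {
    public int value() { return 42; }  // stand-in for the real implementation
}

class XSimple implements IX {
    public int value() { return 0; }   // cheap stand-in used under test
}

class C {
    static IX getX() {
        // Switch on a system property to decide which implementation to hand out.
        if ("true".equals(System.getProperty("isTest"))) {
            return new XSimple();
        }
        return new XReal();
    }
}

public class SwitchDemo {
    public static void main(String[] args) {
        System.setProperty("isTest", "true");
        System.out.println(C.getX().value()); // 0, the test implementation
        System.clearProperty("isTest");
        System.out.println(C.getX().value()); // 42, the "real" one
    }
}
```

Callers still write C.getX() everywhere; only the factory body knows about the test switch.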
"Fill up the cache with mocks of the objects it needs to contain. 10 objects = 10 lines of code."
But then I'm not testing the code that uses the cache, I'm testing the cache plus the code that uses the cache. Which is a valuable test but a separate one.
Regarding your proposed implementation, that sounds like basically what I proposed here:
Funny, I saw your code but didn't put your name and the code together :) There are many ways to mock a singleton. I am not against mocking it, but against over-complicating. Also, I think we are taking unit testing a bit too far in trying to decompose the app into the smallest possible pieces. Unit tests have specific goals, like verifying the correctness of calculations or performance benchmarks. If a test raises a red flag, it takes a few minutes to isolate the piece of code that is at fault.
a) Now your singleton is also responsible for providing a mock implementation of itself (which shouldn't really be one of its concerns).
b) You didn't really do anything here: the class under test is still going to use the concrete implementation of C.
You need a way of providing a mock implementation to your class while testing. The obvious answer to how to do it is dependency injection. But if you're going down that route, you can just ditch the whole singleton thing, instantiate the class with "new" somewhere else, and pass it to your class. And in your unit test, you simply pass the mock implementation. This is what DI containers like Spring do. We'd be doing the exact same thing here, only manually.
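As a sketch of "what the container does, only manually": one composition root builds the object graph with new and passes dependencies down (Repository, DatabaseRepository and ReportService are invented names):

```java
// Hypothetical sketch of manual dependency injection with a composition root.
interface Repository {
    String load(int id);
}

class DatabaseRepository implements Repository {
    public String load(int id) { return "row-" + id; }  // stand-in for real DB access
}

class ReportService {
    private final Repository repo;

    ReportService(Repository repo) { this.repo = repo; }

    String report(int id) { return "report for " + repo.load(id); }
}

public class CompositionRoot {
    public static void main(String[] args) {
        // Production wiring: done once, here, instead of inside a container.
        ReportService service = new ReportService(new DatabaseRepository());
        System.out.println(service.report(7)); // report for row-7

        // A unit test would instead wire in a fake:
        ReportService testService = new ReportService(id -> "fake-row");
        System.out.println(testService.report(1)); // report for fake-row
    }
}
```

The trade-off is exactly the one debated in this thread: the wiring is explicit and visible, at the cost of writing it yourself.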
a) True, but it can probably be pushed into a parameterized Singleton base class. I didn't try, since that might be more language-dependent, and I wanted to keep things explicit-but-short for demonstration purposes anyway.
b) You're touching a tiny part of the implementation of the real C, yes. If you're able to push the Instance and Mock functions into Singleton, you might be touching literally no code of C other than relying on the fact that it in fact inherits from Singleton.
I nonetheless agree dependency injection is a better solution. The biggest objection I have to this implementation is that there's nothing that tells me I should be mocking C, or catches it when I forget to mock C.
Not only can you mock it for testing, but it encapsulates the system better: there is no reason for direct access to C.get. With this pattern you can easily drop in a brand new caching system and not break your original code. Some might find that a bit too much like coding for the future, but in my case that is quite a likely situation, especially if you ever run into scaling issues.
I've never really understood the point of most of the Spring usage I've seen; seems like most people want to replace Java code that is type-checked and can throw exceptions at reasonable places when something goes wrong with XML 'code' that has neither of these properties and thus turns debugging into a black art. Apparently, this is the right way to do things.
The only reasonable use case I've encountered in practice is the ability to replace some major components of your program with stubs for use in automated testing.
In the former, you make the fact that the method uses the comment cache explicit. If you were to have another method that deleted a video (and all its associated comments), it would have to ask for a comment cache explicitly in its signature. This would make the fact that these two methods interact with each other explicit and make reasoning easier.
This is perhaps less important than the ease of testing, etc. that you get as well, but I find that I work better with this style.
I am not sure how the first case is more explicit. Try Ctrl+click (say we are in the Eclipse world) on getCommentById(...) in the first case and you will see the interface signature - bummer. And while you are pressing Ctrl+Shift+G and trying to guess (just guess, because only Spring knows for sure) which implementation is most likely to be used on this particular line, I will jump straight to the getCommentCache(...) implementation (2nd example, non-interface) and move miles ahead.
EDIT: Not to mention that passing references to the cache object along with each method call bloats the code. Now you have references to the cache not in the 50 places where the cache is used (example #2), but in 5000 places, because each method signature needs to include a reference to the cache (each method along the call hierarchy, until you reach the point where you use the cache object).
That's not what I mean. When you see an instance of explicit `isCommentSpam` used, you know immediately that it is using the CommentCache. Compare:
if (isCommentSpam(423)) {
    doSomething();
}

if (isCommentSpam(423, commentCache)) {
    doSomething();
}
See, the fact that the first kind uses the CommentCache is completely opaque to you when you're reading the code. You don't know it does until you actually look inside `isCommentSpam`. The second way, you can trace the flow of information through the program very easily.
It's perfectly feasible to instrument your app at the top level to pass the needed components down, and then use them through interfaces. Thus you pay the cost of the interfaces, plus the cognitive cost of using those interfaces instead of implementation classes. Same effect as DI, but no container.
Aside: there are costs and benefits to a good software architecture, but you won't learn it from reading HN.
It does add complexity, but it's a trade-off. Sometimes you need something to be easily testable, and this pattern can be worth the complexity it brings, if done minimally and properly (not blindly). Obviously we all prefer simplicity, but sometimes complexity is a necessary evil. (But, I would argue, not nearly as often as some people think.)
I always get cranky about people fixating on the difference between a static class and a singleton. A static class is just your language's default implementation of the singleton pattern; it just fails to provide any polymorphism to swap the object out for a new one. Mechanically, they do the same thing: a single global point of access to some effectively global state.
Also, the singleton thing (and interfaces everywhere) stinks of YAGNI. Your toolkit can find all references to X.Y, which means you can replace every X.Y with X.Instance.Y. When you need to change a static class to a singleton, do so.
Or just use a dynamic language where there isn't any difference. Seriously, I wish C# offered a "root" static class or global variables or something equivalent so that the singleton pattern was less verbose to implement, because you will need them.
It really depends on the context of what you're trying to do. If you're whipping together something as a test, or if it's code that you _know_ will be thrown away in the future, then cowboy it up and get it done quickly. Good design and quality work takes more time than is needed for many first iterations so YAGNI is right.
The flip side is that if quality does matter for a project, then unit tests are important, and the whole point of Aaronaught's answer is that things like unit tests and refactoring are usually impossible with a traditional singleton pattern.
It's definitely a fine line though. As a wise programmer once said: "Broken gets fixed, but shitty lasts forever." Be careful when you deliberately choose the shitty route - it usually comes back to bite you.
Other languages can sport a class full of static methods and a private stub constructor, all C# offers is the means to force a class to only contain static methods and block the constructor.
You can make an all-static class in any OOP language that supports class-level methods and members.
That's my point. This argument is all semantics. There's a million little ways to create something with the functional utility of a Singleton, and there's no sense in pretending that they're vastly different.
It was just idle curiosity - I wasn't familiar with the term "static class", looked it up and wondered if that particular set of features (and the term) were used in any other languages.
Nothing special about it, it was just a cheap feature to add (static was already reserved and not previously added before class) while helping to exemplify a common protocol.
The one nice thing it does provide is access to defining extension methods, which aren't allowed outside static classes.
Singletons are always a lazy solution to the problem. For me the rule of thumb is: if there's some class requiring lots of work to test, I've been lazy. So I go back and review my sloppiness.
The questioner was in a way implementing a poor man's cache using singletons (I wonder how he was planning to test it); some use them for connection pooling instead of a specific library.
I've heard even the GoF regretted including Singleton in the software patterns club.
I'm a big fan of Misko Hevery's (now working on Angular) work on this.
>"For me the rule of thumb is if there's some class requiring lots of work to test, I've been lazy."
Unit tests are great, but when they get in the way I always look at it as a chicken-and-egg paradox. Is my simple and easy-to-maintain code inferior because it is not test-friendly? Or is the test-friendly alternative inferior because it is more complex and harder to maintain?
And this hasn't happened to me just with singletons; sometimes unit tests just like to get in your way.
I find the biggest barrier to unit tests is not noticing when it's more appropriate to create a public static function (probably on another class) instead of a private one. If a private method is so complex that you cannot appropriately test it using the public interface, then it really needs to be its own entity.
I could be a bit off base, and to be honest I am only now fully coming to grips with unit testing, but I find that if something is painful to test, my implementation wasn't good to begin with. I'm guessing this is where writing tests first really comes in handy, as you have to implement things the right way from the start.
The only difference is whether the state of the CreditCardProcessor is stored statically on the class, or is stored in one magical instance. They will both have the exact same pathology in the code and during testing. The former will be slightly easier to refactor, since at least it's already based on instances.
But the whole point of the singleton pattern is to solve these exact issues. To enforce a single entry point that is:

1. Lazily evaluated, to ensure initialization.
2. Preferably locked, to solve threaded access.
3. Instance-based, to ensure you are manipulating an object vs. truly static variables.
I honestly think there's a huge potential difference between
Solve what issues? I did not mention any issues. I see no difference between the two, other than one basically enforces initialization by gating it through a "getInstance" call. (And thereby also enforcing an additional function-call penalty on every piece of functionality presented. Nothing is free.) I could do the same, with the same trade-offs, by simply invoking init() at the start of every method call.
To address your points specifically:
1. Whether the item is lazily initialized or not is an implementation detail. In fact, the call to "init" in the first format is lazy initialization; eager initialization would have completed at class load.
2. Either method can implement exclusionary locks. There is no difference here.
3. This is the big difference between the two, but you write it as if it's a benefit unto itself. From an API perspective, why do I care whether I'm manipulating variables that live on an object instance or statically on the class?
Here's an exercise. How would you even know whether the code to a Singleton class looks like this?
class SomeNumber {
    private static Integer value;

    public static SomeNumber getInstance() {
        if (value == null) value = 5;
        return new SomeNumber();
    }

    public Integer getValue() {
        return value;
    }
}
If you cannot tell whether the Singleton class was implemented this way from the external API, then there is no difference between the two implementations.
Invoking init at the start of every method call won't be caught by the compiler if you forget to do it in one spot. Static safety is usually a win - on a big project, someone will do something wrong and it's best if it's caught right away when they try to compile.
I've actually been wanting ways of enforcing arbitrary rules on the structure of my code at compile time...
Honest question as I'm wondering if I could have done something differently.
I have a third-party library that does expensive initialization inside its class's parameterless constructor. Once created, the library's instance is fully thread-safe. Because I only wish to incur the initialization cost once, I have a straightforward singleton wrapping the instance:
    private static ExpensiveLibrary instance;

    // GetInstance() in Java form; the lock is held while checking and constructing
    static synchronized ExpensiveLibrary GetInstance() {
        if (instance == null) {
            // ExpensiveLibraryConstructor(): the expensive parameterless constructor
            instance = new ExpensiveLibrary();
        }
        return instance;
    }
Now the code I'm writing plugs into a framework I don't control. I create a class that conforms to the framework's client interface; the framework creates instances of my class and calls a Run() method. My Run() method needs to be able to make use of the third-party library, so I'm calling my singleton's GetInstance() method there.
Not that it would make much practical difference, but ExpensiveLibraryConstructor() needs some environment initialization done before it will succeed, so I cannot invoke the constructor in global scope.
I really loved Misko's blog back in the day, and these are the exact articles I always quote to people when they ask "why is the singleton pattern a bad thing?".
As the author would say: singletons (single instances), yes. Singletons (as in the pattern), no. In his articles, it appears that he uses the word 'singleton' to mean 'Singleton' though, which is mildly annoying.
>Singletons are always a lazy solution to the problem
The fact that you said x is always y in programming is troubling. Sure, singletons are usually a lazy solution, but saying there isn't a single case EVER where a singleton is the proper solution is ridiculous.
> saying there isn't a single case EVER where a singleton is the proper solution is ridiculous.
Is it? Some things are simply mistakes through and through. Consider nailing your penis to a table: is it ever the right solution to anything, aside from "how could I get my penis nailed to a table"?
Religious wars against particular design patterns just seem so, well, religious!
To me, a singleton is a particular implementation of a global variable. Nothing more. You use one when you need one. It isn't magic or evil. It's a global variable.
The problem comes in when you are using one but don't know it. It's the beginner Spring problem. Developers don't realize that some libraries use them by default and start storing state there. Needless to say, not good.
The singleton pattern feels a bit like someone once heard that global variables were bad. People still needed some sort of global state for some problems, but they couldn't use global variables, because those were bad. So they invented an alternative way to do exactly the same thing.
And not surprisingly, singletons are being abused just like good ol' global variables by bad programmers, and thus eventually they will be considered bad. And then a new way of handling global state will be invented, because singletons are bad and we still need some way of accessing global state in some cases.
It's not quite exactly the same thing. It provides some control over initialization and (depending on the language) possibly eases synchronization, compared to a scattering of globals or a static class. It has the other problems of global variables, though.
Whereupon you can put tracing in to find out when the state was screwed up.
Singletons aren't bad, they're just misused a lot by bad programmers. It's like saying "Food is bad," and sure, people misuse food and hurt themselves, but that doesn't mean we should ban food.
It's more like saying "poison is bad", and losing the benefits of bleaching the bathtub, just because people keep killing their kids with it by accident.
Except it's nothing like saying "food is bad". If you don't ever eat food, you die. If you don't ever use singletons, some of your code has an additional parameter uglily threaded through it.
Tracing can be very lightweight. Not utterly non-intrusive, but pretty close. As a kernel / games dev, I'm quite aware of timing issues.
You get into serious trouble with "lock free" stuff, where pretty much any additional activity (even register operations) will perturb things and make bugs go away. Hopefully at this point you have help from the hardware. Then again, if you're doing LF, you're in a special place and probably have the chops to deal with it...
That is a feature of unconstrained, shared, mutable state. While singletons are often used that way, there are two problems with the resulting code. On the other hand, imagine a hypothetical situation where you have a bunch of static data that you would prefer is only generated once, and only ever generated if needed. A singleton gets this done, without any sort of "where on earth is this set to X?" but retains issues with injection for testing and the like (depending on precisely how it's implemented, in most languages).
> To me, a singleton is a particular implementation of a global variable.
Except that it's not. Global variables aren't initialised lazily, and their type can be an interface just fine. I can think of a few valid use cases for singletons, but it's not a great replacement for global variables IMO. Service locators are closer, and I think they have far more valid use cases than singletons.
> Except that it's not. Global variables aren't initialised lazily
Depends on the language, and the global.
In Java, a static attribute is initialised when the class is first used. Until the class itself is accessed and used, the attribute remains uninitialized.
In clojure, delay[0] will invoke (and cache) its body the first time it's deref'd.
A getInstance() call can return a pre(statically)-initialized instance if it wants to. Laziness is optional. The pattern gives you the freedom to do it either way.
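To illustrate, a sketch of both variants in Java (invented class names); to a caller the API looks identical either way:

```java
// Eager: the instance exists as soon as the class is initialized.
class EagerSingleton {
    private static final EagerSingleton INSTANCE = new EagerSingleton();
    static EagerSingleton getInstance() { return INSTANCE; }
}

// Lazy-holder idiom: Holder is only initialized on the first getInstance()
// call, so construction is deferred until someone actually asks for it.
class LazySingleton {
    private static class Holder {
        static final LazySingleton INSTANCE = new LazySingleton();
    }
    static LazySingleton getInstance() { return Holder.INSTANCE; }
}

public class LazinessDemo {
    public static void main(String[] args) {
        // Both hand back the same instance on every call.
        System.out.println(EagerSingleton.getInstance() == EagerSingleton.getInstance()); // true
        System.out.println(LazySingleton.getInstance() == LazySingleton.getInstance());   // true
    }
}
```

Both are thread-safe via class initialization, which is exactly the point: laziness is an implementation detail hidden behind getInstance().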
Singletons are fine. It's just that some idiots plug everything into them and then realise they've blown their toes off with a monolithic ball-of-mud architecture.
I use a singleton on every project which is OO based but it only ever holds the service locator/container.
ASP.NET MVC is a fine real-world example of how to fuck up singletons (routing dictionary, global action filters, etc). Totally stinks and is coupled like glue.
> I use a singleton on every project which is OO based but it only ever holds the service locator/container.
iOS devs love singletons; it seems like almost everything is a sharedSomething. Definitely a design smell.
Infrastructure can sometimes call for a singleton. It's very convenient to have something that is only created when it is needed. That way, users only pay for what they use, and they don't have to bear the conceptual burden of toting an extra context object around.
But it is a sledgehammer, and the people most inclined to abuse it lack the ability to tell the damage it is doing to the design.
It's mildly interesting to note that in practice essentially every GUI application on OS X is a singleton because the OS makes a strong distinction between application instances and windows (compared to e.g. MS Windows where the distinction is muddy).
Launch TextEdit.app twice and there is still only one running instance of the application. If you want a second window, you open a second window.
I now realise I was providing reading comprehension training†
† NOT, I REPEAT NOT, a literal course of training. I want to make it doubly clear that I did not certify anyone and can not be held responsible for any of their future actions vis-a-vis reading or comprehending.
My favorite sight when I review iOS code for clients is seeing a method on the App delegate that 'preloads' a bunch of singletons in one go. It sets the tone for the rest of the trip.
In my experience, a large bunch of iOS developers tend to use singletons out of imitation. The "sharedWhatever" is the usual way for the framework to provide access to anything that is purposefully single-instance[1]. Same thing with delegation, though that one is usually less cringe-worthy in practice.
[1] The "the accelerometer on your phone" kind of single-instance, which is a much more defensible use-case than what the typical iOS coder uses them for.
IMHO, and to avoid abuse, singletons in iOS/macOS should not be considered a pattern that can replace class methods or simple C utility functions, and this cuts both ways. I have seen code doing everything with class methods where a set of plain C functions would have been more adequate (those were really just utilities).
Singletons aren't always bad. I use a few in my Objective-C Cocoa apps to share common behavior. For example in my music player Bahamut[1], I have a singleton for actually playing music. This way, you can have several Bahamut windows open at a time which all control the same music player.
Sure, you could store this variable as a property on AppDelegate or some other parent in the object graph, and then either pass it down to each window controller which needs it, or give them access to it, either directly or via delegates or something. But these are all very convoluted compared to having a single class-method in the class itself which returns a shared (static) object.
The only caveat is to make sure that you don't introduce race conditions by poor design. For example, don't do stuff in -init on your singleton, and especially don't do stuff that might call other stuff that might access the singleton via the public accessor. Doing so could turn into an infinite loop if -init is called when creating the singleton. (I've done this before.) Instead, just add a new -setup method and call that somewhere in -applicationDidFinishLaunching: or whatever. It's also a good idea to use dispatch_once to create the singleton[2].
EDIT: Of course, these suggestions don't only apply to Objective-C apps, although your entry points and race-condition-avoiding functions will certainly be different in other environments.
> But these are all very convoluted compared to having a single class-method in the class itself which returns a shared (static) object.
I used to think the same thing. But lately, I've come to think it's better for code to explicitly reveal its dependencies via its public API. If class Foo uses the singleton class Bar, you can't see that dependency just by looking at Foo's API.
My preferred solution in OOP is either: a) have Foo's constructor accept a reference to an instance of Bar, or b) for any method on Foo that requires a Bar, have that method accept a reference to an instance of Bar. I find this disciplined approach helps me reason about my code and understand how units of code relate to each other.
I only developed this insight after diving into functional programming. Prior to that, I was all about convenience. Singletons were convenient, and I didn't want to pass a references around all the time. It seemed like a lot of pointless ceremony. Now I've changed my tune. Rather than pointless ceremony, it's a form of self-documentation.
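A sketch of option (b) above in Java, keeping the commenter's placeholder names Foo and Bar (the bodies are invented); the method signature itself documents the dependency:

```java
// Hypothetical sketch: the dependency is passed per-method, option (b) above.
class Bar {
    int count = 0;
    void touch() { count++; }
}

class Foo {
    // The signature makes it explicit that this operation needs a Bar;
    // nothing is reached for through a hidden singleton.
    int useTwice(Bar bar) {
        bar.touch();
        bar.touch();
        return bar.count;
    }
}

public class ExplicitDeps {
    public static void main(String[] args) {
        Bar bar = new Bar();
        System.out.println(new Foo().useTwice(bar)); // 2
    }
}
```

Anyone reading a call site like foo.useTwice(bar) can see the interaction with Bar without opening Foo's implementation.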
What's the benefit of doing it that way? The only one you mentioned is self-documentation, but that seems like a weak argument against these kind of singletons. It sounds like you're giving up the convenience of singletons solely for the sake of not having singletons, which seems pointless in itself.
I read that as not having singletons for the sake of self documenting code dependencies. In a way, a singleton is a "magic" dependency that can reach into the middle of your code without a trace. By always passing it in, you're making it explicit. This is the same as the pure functional discipline of having all values passed in, never coded into a function.
I'm giving up singletons because hidden dependencies are a source of bugs. Let me make it more concrete:
Suppose classes A, B, and C use singleton class S, and S has state. Via the hidden dependency on S, it's possible for A, B, and C to affect each other without explicitly calling each other's methods.
As a programmer, you can take those hidden interactions into account and still have a relatively bug-free program. But it's one more thing you have to hold in your head. When you return to the project six months later, you have to read through the internal code of A, B, and C to see these interactions; the public APIs don't reveal them. You spend more time figuring out how the objects interact, and there's a greater chance you'll fail to notice an interaction.
In the general case, I agree. But for a very few things, singletons make fine sense. Usually in small programs where you can draw the entire object graph on a single piece of paper, and where all the features it will have are known up front.
Certainly! Tiny programs are the exception to most best practices. (Not security best practices, but I digress.) I write a lot of little programs, and they're certainly not all works of art. Sometimes you need to bang out a tiny program quickly, and it's never going to become a maintenance burden. In which case you can use any pattern or anti-pattern you like.
That sounds like a very black/white way to say it. I don't agree that using a singleton in small programs is an anti-pattern. The world is much more gray than that.
I said you can use anti-patterns in small programs, not that singleton is an example of an anti-pattern. I don't believe singleton is so terrible as to be an anti-pattern, even for big programs. I just think it has some downsides that are really worth considering before you use it.
This article echoes the same idea (instead of a big singleton, view controllers should accept the data they require in their initializer/constructor) and gives some reasons why to do it this way:
This is mostly an argument against making AppDelegate into a God object, which I totally agree with. It's not saying singletons themselves are inherently bad. It does generalize saying that they usually make it difficult to evolve an app's internals, but I disagree with that. As long as they're chosen when appropriate, and used properly, there's no problem.
Bingo. If you want to test class Foo in isolation, it is essential to know that it depends on Bar. Furthermore, when taking testing seriously, you might want to test Foo's behaviour in combination with differently configured instances of Bar - which is only possible if Bar is not a singleton.
>But lately, I've come to think it's better for code to explicitly reveal its dependencies via its public API.
The downside is that you're forcing a new dependency on the user of your API, and that dependency may be an implementation detail that the user should not be concerned with.
You could argue that the user was already depending on that singleton because singletons are effectively globals. But that's not necessarily true as the singleton could have package/module visibility.
I'm not convinced that all encapsulation should be sacrificed on the altar of TDD.
To quote either pragmatic programmer or the gang of four (I can't remember which), singletons make sense when, if the singleton doesn't exist, it doesn't change behavior through the application. So for example, setting up a logging singleton probably makes a lot more sense than adding a logging dependency through every class in your code.
But conversely, if the behavior you are putting in your singleton actually has effects, and can be deemed a dependency on the code that uses it, then it makes sense to make that blatant by passing the reference into each of the dependent modules, if only for unit testing, clarity and mockability.
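A small sketch of that distinction (hypothetical names): a dependency that actually affects behavior is passed in explicitly, which is precisely what makes it mockable in a test.

```java
import java.util.function.Supplier;

public class ExplicitDependency {
    static class Greeter {
        private final Supplier<Integer> hourOfDay;   // injected, so tests can substitute it
        Greeter(Supplier<Integer> hourOfDay) { this.hourOfDay = hourOfDay; }
        String greet() { return hourOfDay.get() < 12 ? "Good morning" : "Good afternoon"; }
    }

    public static void main(String[] args) {
        // In a test, the "clock" is just a constant supplier:
        Greeter morning = new Greeter(() -> 9);
        Greeter afternoon = new Greeter(() -> 15);
        if (!morning.greet().equals("Good morning")) throw new AssertionError();
        if (!afternoon.greet().equals("Good afternoon")) throw new AssertionError();
        System.out.println("behavioral dependency injected and tested");
    }
}
```

A logger, by contrast, could arguably stay globally reachable, since removing it wouldn't change the program's observable behavior.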
I'm aware that the example provided makes it slightly harder to test, since it's more tightly coupled than other options. It's a trade-off, simplicity over testability. In this case, I manually test my music player object when I make changes to it. So using a simple singleton here makes sense. I find the same true for most GUI controllers. But I'm not willing to get into a debate about whether or how you should test your GUI. I'm just stating how I do things, and it's worked well for me.
You have highlighted the good parts about using singletons. Now say your application wanted to play two different music streams (let's pretend you added a mix feature); you are now stumped.
I first ran into the pain of singletons when using them in a framework. This framework formed the basis for small modules which, when loaded together into the same application domain, blew up.
My solution was to use IoC/DI. I could still have singletons, but they are just instances managed by the container. This means I am not relying on static values for state, which in my view are as bad as global variables.
I guess this is really the problem most people associate with the singleton model, not that there is a single instance in your application, but that typically people use static variables to store the instance.
No, I just refactor my code to not use a singleton anymore. This only involves deleting the static accessor, adding a few properties, and passing a value downward. It would take all of 5 minutes. I don't see any value in designing my architecture around a feature I might never add.
It depends on the complexity of your application; sure, it could be easy, but it could also be a nightmare.
People make the same arguments about using interfaces (think Java not Objective-C ones), they are only useful when you need them, but almost all of the time it's easier to do it in advance than at a later date.
At the end of the day it's your call as the architect of your application. If you can convince yourself using a singleton is a better fit than another method, go for it; chances are you know your application better than one of the GoF.
If it then trips you up down the line, you learn from it and revise your thinking the next time you face this problem.
If it's a simple 5 minute change, why not implement it and use DI for a single instance? I imagine you would save a good amount of time by being able to run automated tests now rather than the manual tests you described in your other reply.
Apple makes it notoriously hard to do any kind of testing for ObjC/Cocoa apps, especially testing the GUI [1]. My solution is to only write really tiny apps, and manually test them. I still use TDD, I just make sure the app is small enough that the regression suite can be done manually in a minute or two.
[1]: I'm aware of third party solutions like Kiwi and Cedar, but I haven't found them to make this process much more pleasant than the built-in tools. Apple is really just an overbearing mom in this case, forcing you to use the sweater she chooses, and making life difficult if you don't.
In Objective-C land there are specific APIs that really should be wrapped in a single instance if you're going to be accessing them a lot, because they are costly to create, e.g. EventKit.
A great talk on the topic is this one. The basic argument is that global singletons (and global state generally) are detrimental to maintainability, flexibility, and testability. The talk discusses several strategies for moving away from global state to a dependency-injected and testable codebase.
Ah, I really enjoyed that talk. Thanks for the link!
I've been trying to move into more of a DI style with my code so that it's easier to test. One thing that I'm fuzzy on is where you instantiate everything.
In the past, if I had something like a UserSettings kind of structure, I would usually turn it into a singleton. It made it conceptually simple to instantiate it at the start of the program, and then have any other class that needed it simply do a `user_settings.get_instance()` type of call.
If going a DI route, would the UserSettings object be instantiated at the program's start, and then simply passed as a parameter to every object that needs it? I tried it out, but it felt a little "off" passing this one object to almost every class that had anything to do with user settings. Is that just what you do when you're doing DI?
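One way the "instantiate at start, pass as a parameter" approach can look, sketched with hypothetical names: everything is wired up once in main (the so-called composition root), and nothing ever calls a static accessor.

```java
import java.util.HashMap;
import java.util.Map;

public class CompositionRoot {
    static class UserSettings {
        private final Map<String, String> values = new HashMap<>();
        String get(String key, String fallback) { return values.getOrDefault(key, fallback); }
        void set(String key, String value) { values.put(key, value); }
    }

    static class Editor {
        private final UserSettings settings;
        Editor(UserSettings settings) { this.settings = settings; }  // injected
        int tabWidth() { return Integer.parseInt(settings.get("tabWidth", "4")); }
    }

    public static void main(String[] args) {
        UserSettings settings = new UserSettings();   // single instance by convention, not enforcement
        settings.set("tabWidth", "2");
        Editor editor = new Editor(settings);         // no getInstance() anywhere
        if (editor.tabWidth() != 2) throw new AssertionError();
        System.out.println("settings flowed through the constructor");
    }
}
```

If passing it to "almost every class" feels off, that itch is often a sign that fewer classes should depend on the whole settings object, and could instead take just the values they need.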
I've been using manual DI a lot (there aren't many reasonable alternatives in C++), and occasionally ran into situations where doing that was insanely cumbersome. Like passing a resource manager down a widget hierarchy. Service locators implemented as singletons have served me well in those cases. They're fairly simple, and share almost all the benefits of DI (as opposed to singletons).
Some people have been calling service locators an "anti-pattern" as well, but I'm not buying it. Singletons have very obvious drawbacks and have been incredibly overused, so they sort of deserve that term - even if they have a few valid uses. Service locators on the other hand have very few drawbacks, the only real one I can think of is that dependencies are no longer obvious. Is that really that much worse than throwing complex IoC containers into the mix? (Note that I just recommend service locators as a fallback when manual DI doesn't make sense. I much prefer manual DI generally.)
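A minimal service-locator sketch along those lines (hypothetical names): the registry itself is global, but the services are plain objects that a test can register and swap freely.

```java
import java.util.HashMap;
import java.util.Map;

public class Locator {
    private static final Map<Class<?>, Object> services = new HashMap<>();

    static <T> void register(Class<T> type, T impl) { services.put(type, impl); }

    static <T> T resolve(Class<T> type) { return type.cast(services.get(type)); }

    interface AudioService { String play(String track); }

    public static void main(String[] args) {
        // Production code registers a real implementation; a test
        // registers a stub here before exercising the code under test.
        register(AudioService.class, track -> "playing " + track);
        AudioService audio = resolve(AudioService.class);
        if (!audio.play("intro").equals("playing intro")) throw new AssertionError();
        System.out.println("service resolved through the locator");
    }
}
```

The drawback mentioned above is visible in the sketch: nothing in a consumer's signature reveals that it will call `resolve()`.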
I've spent the last few months (casually) testing the differences between the service container and DI container and I've come to the conclusion that it's a stylistic decision. I found that using a DIC and passing references to services to the constructor of every class was just far too cumbersome. Is it clear which services the class uses? Absolutely. But it's also clear if you document the services used before the class declaration. The cited advantage of DI is that it's easier to pass service mocks during testing. This isn't difficult to do with a service locator; as long as you instantiate your services at the beginning of the test you can have full control over mocking certain services on a test-by-test, or group-by-group level.
Personally, the added cruft of passing services to every constructor made me want to rip my eyes out.
I am of the school of thought that making relationships explicit leads to better coding practices and use DI via constructor to do so.
I think the really important factor though is how mature the ecosystem is in supporting DI. The richness of C# and Java DI container implementations make using service locator patterns dubious. Well and how many ice cold stares you would get by using service locators. I'm trying my hand at iOS development after years of C# and the ecosystem is radically different.
On the other hand mocking libraries are a dream in a message passing runtime like Objective-C.
Yes, I agree that using singletons to store global state is a bit dumb. You may as well use static methods and variables.
Where a singleton does become useful is as an immutable carrier of methods. No state.
It means that an object somewhere can have a reference to a WeirdOperator object, which contains a method called doWeirdOperation(inputs). WeirdOperator would then be an interface or abstract class, and various different operators are implementations, thus effectively allowing methods to be passed around. Consider Comparator (in Java).
The alternative is to access methods by introspection, but doing Method.invoke(parameters) is nasty and evil and slightly slower than doing it properly.
When classes have no state, there's also really no need to have a singleton instance. If someone wants to instantiate a new object, who cares? It's a few extra cycles and a few more bytes in memory. Sure, you can have an easily accessible instance for convenience and performance, but nothing's going to hurt if somebody uses a different instance. Your program isn't compromised.
Singleton exists as a pattern to enforce a single instance because of state and side effects. What you're talking about is just a cached instance.
Certain languages don't have methods as first class objects which can be passed around. In Java, you can pass around a Method object, but it isn't really the full thing, as calling it is not type safe, and is slower than a real method call. Likewise, passing around a Class object in order to identify a static method to use is equally unwieldy.
Therefore, a singleton object can be passed around, and it carries with it the implementation of the method. That allows you to have a variable that effectively references a type-safe fast method, allowing the nearest thing to having methods as first class objects.
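One way to sketch this idea in Java (hypothetical names): enum singletons implementing a shared interface give you a type-safe, passable "method", much like passing Comparator instances around, without resorting to Method.invoke.

```java
public class Operators {
    interface WeirdOperator { int apply(int a, int b); }

    enum Add implements WeirdOperator {
        INSTANCE;                                  // enforced single instance, no state
        public int apply(int a, int b) { return a + b; }
    }

    enum Multiply implements WeirdOperator {
        INSTANCE;
        public int apply(int a, int b) { return a * b; }
    }

    static int reduce(int[] xs, int seed, WeirdOperator op) {
        int acc = seed;
        for (int x : xs) acc = op.apply(acc, x);   // op is passed around like a value
        return acc;
    }

    public static void main(String[] args) {
        int[] xs = {1, 2, 3, 4};
        if (reduce(xs, 0, Add.INSTANCE) != 10) throw new AssertionError();
        if (reduce(xs, 1, Multiply.INSTANCE) != 24) throw new AssertionError();
        System.out.println("operators passed as type-safe values");
    }
}
```

Since Java 8, lambdas and method references cover most of this use case, which supports the counterpoint above: for stateless classes, enforcing a single instance buys little.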
1. so you're not only using singletons but you're using them in horrible ways, that's just great
2. to pass your singleton around, you need to know its type. If you know its type, you can just bolt the methods on the type itself and get rid of the useless singleton
No, you don't need to know the singleton's type. You can have several implementations of an interface, each of which is a singleton. Then you have a reference to the interface type, which allows different methods to be selected.
Ah yes, the fabled multisingleton, because once you've dug your hole there's nothing better than filling it with raw sewage, and once you've got a singleton you can fill it up with half a dozen interfaces because reasons.
Static classes and singletons just tell you that the people who shoehorned everything into an OOP paradigm in your language of choice were mistaken, because there are plenty of things that are simpler and clearer as top level functions and data.
I think you're right, but wouldn't the alternative then just be namespacing all of your top level functions into separate files? It seems that a static class of related functions is just another way to organize top level functions.
It really depends. I have used singletons when I wanted to manage a resource using RAII that had to be easily available across the whole application. My canonical example would be writing to a log, but YMMV.
The big issue with singleton is that every time you have one db connection, one cache store, one file store, one of anything, you wind up needing a second one.
Make it easy to use the default one, but also make it easy to swap it out and use a different one.
And by that moment it is not a singleton any more. The other way around is also true: you design your system to be scalable in db, cache store, etc., and you end up using only one.
> And by that moment it is not a singleton any more.
That's the point. If it's originally a singleton and you need a second instance, you're going to have a hard time. If it's a regular factory/object and you only ever use a single instance, chances are nothing will care much (though it may let some concurrency bugs or global state creep in).
>Something which I've come to believe does not exist, the pattern itself is broken.
>Paraphrasing Mencken, for every complex problem there is an answer that is clear, simple, and wrong. Singleton is ever that answer.
Just because you haven't found a good use doesn't mean one doesn't exist. And about Mencken... all he said is that you can't have singletons without having global state (obvious), and apparently he has a strong allergy to global state.
Global state is fine. Sure, it will make your tests harder, but there is a lot of software that doesn't use unit tests for legitimate reasons and hence doesn't have that problem.
>"One of these things is a problem you need to fix the other is a solution to a problem you don't have."
That's true, but there are so many cases were a singleton is way more convenient than the alternatives, and not using it because it might "potentially" need to scale but never does sounds a bit silly to me.
I agree though that you must be clear on the specifications; if a shadow of doubt appears, it's best to play it safe.
Having used Spring for a year or so now, I'm curious what people think of singletons in Spring? Most of the problems noted in the OP (inability to use interfaces, unable to mock, tight coupling, etc...) aren't really problems that I've noticed. For example, if you're autowiring a typical spring bean, as in:
@Autowired
private Blah blah;
You can actually autowire an interface here. Similarly, it's pretty easy to mock these or swap them out in tests. Parallelization is usually handled by not storing private state in a singleton bean.
I'm no Spring expert, so I'm not sure what I may be missing, but most of these concerns appear to be less problematic than many suggest.[1]
[1] Nothing is perfect, of course. I have noticed problems with spring singletons where a test would basically screw up the application context, causing subsequent tests to fail. This caused some hair pulling until I learned about @DirtiesContext.
Spring makes the discussion confusing because it refers to "single instances of a class" as "singleton" scope.
What is being debated here is when a class intentionally restricts how many instances of itself can be instantiated. That is the true "Singleton pattern" - a class that combines two concerns, 1) the behavior of the class and 2) the instantiation pattern of the class.
How is that not a singleton? I can see the argument that dependency injection is better than a factory method. Especially considering the memory model, but I thought this was still a singleton.
The way it is normally used, a singleton is a self-managing, globally accessible, single-instanced object (lazy or not). Just having a single instance does not a singleton make.
So the complaint is against the implementation detail of a singleton? Because, just having a single instance is the crux of the benefit for that pattern.
Spring does it right. This is an IoC container, btw, and not unique to Spring.
This isn't a singleton, but a single instance of a object.
You can change the lifetime of an object by changing a setting in the IoC container, from ApplicationLifetime to SessionLifetime quite easily. That solves the biggest problem of the singleton, which is that if you ever DO need more than one, it requires a refactoring.
I hate singletons, and I'm an iOS developer, so I'm continually mad about misuse of singletons (working on legacy code).
Then what? Then you use dependency injection if you think you need a singleton, and inject a single instance of a cache object. Which means you can mock it when unit testing, or make different instances of it if needed, or have different implementations of it.
In theory there is absolutely nothing wrong with singletons.
In practice singletons become God objects where hack upon hack upon hack is placed.
Eventually to modify a program in a safe way you end up needing to understand the entire program because everything depends on the singleton and the singleton depends on everything else.
Singletons, like writing an entire program using only global vars, are a tradeoff: you trade maintainability and design for performance and ease of (initial) implementation.
In the long term maintainability is usually more important than performance and ease of initial implementation.
Without starting a flame war, I use a few singletons in my game and I don't understand why it is a problem.
The instantiation of the game, an object that manages layers, HUDs, preferences, and saving game data (the saving is done in a separate thread).
I have also worked places where literally every class is a singleton (practically, not really all), and to avoid the xxx->getInstance()->yyy->getInstance()... chains they would use some #defines to shorten typing it all.
The major problem they cause IMHO is coupling. Some examples of how this has bitten me:
- can't instantiate the singleton differently.
- program does task A, then task B, and both use the singleton; now you have to worry about the state that the singleton was left in after completing task A.
- can't create multiple, separate instances.
- can't easily mock for testing.
- can't easily swap out with a different implementation.
I've run into each of these problems, and I can never see any value provided by the singleton. All the pattern does is reduce the flexibility, adaptability, and usefulness of the code it's applied to (at least in my experience -- I'm not implying that it's always or necessarily so).
If getInstance returns an instance of object class, and object is an instance itself, then it doesn't make sense or it is not a singleton.
You can think of it like this: if the object's class has one unique instance, then object == object->getInstance(), so in your case either object != object->getInstance() and they are of the same class (hence it is not a singleton), or they are of different classes (in which case, again, it is not a singleton).
Reading and re-reading the StackExchange answer with the most votes, I'm still not sure I get it. It SEEMS like he's saying:
[1] Hand-rolling a class around the "Gang of Four" singleton pattern is bad (i.e. tightly coupled, etc).
[2] Grabbing a singleton reference from Spring, CDI, Guice, or some other dependency injection framework is awesome (i.e. easier to use inheritance, abstract out some interfaces, inject mock objects into your unit tests, etc).
Am I missing something here, or is this entire discussion really no more complex than that (once you scrape away the enterprise-babble)? It's been almost a decade since I last worked on a non-trivial project that wasn't using a dependency-injection framework anyway, so some of this may just be so obvious I haven't thought about it in a while.
1. The "Gang of Four" "Singleton" pattern, meaning stateful functionality that is statically available from anywhere, is broken. It hides dependencies and makes it impossible to uncouple the items for testing.
2. However, the idea of an application needing a single instance of an object, whether it be a configuration or renderer or what-have-you, is quite common.
3. Using "dependency injection" allows use of single instances, without the "Singleton" pattern. Which, to clarify, does not necessarily mean the more magical forms of dependency injection found in systems such as Spring, where you simply declare a variable and it is magically fulfilled later. It can also just be a parameter in the constructor of the dependent object.
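A sketch of point 3 using a cache, since that was the OP's case (all names hypothetical): the application still creates exactly one cache, but the class doesn't enforce it, so a test can build a fresh instance per test case.

```java
import java.util.HashMap;
import java.util.Map;

public class InjectedCache {
    interface Cache { void put(String k, String v); String get(String k); }

    static class MapCache implements Cache {
        private final Map<String, String> data = new HashMap<>();
        public void put(String k, String v) { data.put(k, v); }
        public String get(String k) { return data.get(k); }
    }

    static class UserRepository {
        private final Cache cache;                  // injected single instance
        UserRepository(Cache cache) { this.cache = cache; }
        String load(String id) {
            String hit = cache.get(id);
            return hit != null ? hit : "loaded:" + id;
        }
    }

    public static void main(String[] args) {
        Cache appCache = new MapCache();            // the one instance, created at the root
        UserRepository repo = new UserRepository(appCache);
        appCache.put("42", "cached:42");
        if (!repo.load("42").equals("cached:42")) throw new AssertionError();
        if (!repo.load("7").equals("loaded:7")) throw new AssertionError();
        System.out.println("single cache instance without the Singleton pattern");
    }
}
```

No framework required: the "dependency injection" here is just a constructor parameter, exactly as point 3 says.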
Here's a good use of something close to a singleton. Different things will initialize the class, and you want to keep track of the last one initialized, so that's what you assign to the static T Instance. I used this for a component that could be added to something, but the Update function of which should only run for the latest initialization.
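A rough Java rendering of the "latest initialization wins" idea described above (the original context sounds like Unity/C#; names are hypothetical): every new instance overwrites the static field, and only the current holder responds to update().

```java
public class LatestInstance {
    static class Tracker {
        static Tracker instance;                 // the last one initialized
        private int ticks = 0;
        Tracker() { instance = this; }           // registration happens on construction
        void update() { if (instance == this) ticks++; }  // only the latest instance runs
        int ticks() { return ticks; }
    }

    public static void main(String[] args) {
        Tracker first = new Tracker();
        first.update();                          // first is current: counts
        Tracker second = new Tracker();          // second takes over as the active instance
        first.update();                          // stale instance: ignored
        second.update();
        if (first.ticks() != 1) throw new AssertionError();
        if (second.ticks() != 1) throw new AssertionError();
        System.out.println("only the most recent instance is active");
    }
}
```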
There's nothing wrong with singletons when they're used when needed. They've been particularly useful in the Unity/C# development I've done on an MMO, both on the client and server. The problems someone listed in an answer haven't really occurred in our use as far as I know.