
I’m Adam, and I’m a recovering Singleton addict. - aschepis
http://adamschepis.com/blog/2011/05/02/im-adam-and-im-a-recovering-singleton-addict/
======
latch
The OP is down, but from reading the 1 comment, I'm sure this is a C#
(possibly Java) developer.

1- Take a programmer who's tied to an IoC container and DI as a way of life,
along with anti-singleton, and interface everything.

2- Introduce him or her to a dynamic language. Specifically focusing on
testability without DI, interfaces and fear of singletons and statics.

3- Watch as he or she either:

a - accepts the fundamental truth that all that crap in a static language
doesn't add value beyond working around the language itself

b - refuses to believe that what you are doing can even be classified as
programming

This is equally entertaining to do to either a very "experienced" (doing the
same thing for the last 10 years) programmer, or someone who's just discovered
mocking and mocks everything.

You can tell a lot about a Java or C# programmer by how readily he or she
accepts this shift (which isn't to say they magically switch over to a dynamic
language, but they should recognize that all that stuff a static language
demands of us is really a limitation of said languages).

~~~
klodolph
Hm, I think your statement is too broad: "...the stuff a static language
demands of us is really a limitation of said languages." You could apply it to
some languages, like Java or C#, but the Haskell and ML folks have an entirely
different view of static typing. As Haskell has proven, a good static type
system can make it easier to write short, clear, correct code. And
dynamic types are really just a kind of static types -- reduce a static type
system until it has only one type, voila, a dynamic type system. I jest.

I tend to think about it the other way around, as "These are things I can do
in a static type system but not in a dynamic one," but never let it be said
that arguing about type systems on the internet was a good use of my time.

------
demallien
Well, two points come to mind:

1) Singletons do make sense when they represent a true physical singleton -
for example, you only want one piece of code drawing directly to the screen,
the Window Manager, which should indeed be a singleton.

2- Most of your problems stem from the use of GetInstance(), not from the use of
the Singleton pattern in and of itself. For example, without GetInstance(),
the other way to get a reference to the Singleton object is to pass it in to
the object that is going to use it. This indicates the dependency in the
client class's interface, and also makes mocking out the Singleton for test
purposes a real possibility.
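To make that concrete, here is a minimal C++ sketch of the pass-it-in
alternative (the interface and class names are hypothetical, not from the
article): the client declares the dependency in its constructor, so a test
can hand it a fake.

```cpp
#include <string>

// Hypothetical interface standing in for the window manager singleton.
struct IWindowManager {
    virtual ~IWindowManager() = default;
    virtual void Draw(const std::string& what) = 0;
};

// The client takes the dependency through its constructor instead of
// calling a global GetInstance() inside its methods, so the dependency
// is visible in the interface and swappable in tests.
class StatusBar {
public:
    explicit StatusBar(IWindowManager& wm) : _wm(wm) {}
    void Render() { _wm.Draw("status"); }
private:
    IWindowManager& _wm;
};

// A test double that just records calls instead of touching the screen.
struct FakeWindowManager : IWindowManager {
    int drawCalls = 0;
    void Draw(const std::string&) override { ++drawCalls; }
};
```

In production the one real window manager is still constructed exactly once;
only the way clients reach it changes.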

~~~
jasonlotito
Question regarding the injection. I've found that while injecting dependencies
is good, I still prefer the brevity of not having to declare the common case.
Basically, I allow an injection to take place, but make it optional, and if
one isn't passed, I use the common case, which in effect makes a call to a
getInstance.
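A C++ sketch of that optional-injection pattern (class names hypothetical):
the parameter defaults to null, and the constructor falls back to the shared
instance when nothing is passed.

```cpp
// Hypothetical service with a lazily created shared instance.
class Mailer {
public:
    static Mailer& GetInstance() {
        static Mailer instance;  // created on first use
        return instance;
    }
    virtual ~Mailer() = default;
    virtual bool Send() { return true; }
};

// Injection is allowed but optional: the common case passes nothing
// and quietly falls back to Mailer::GetInstance().
class Signup {
public:
    explicit Signup(Mailer* mailer = nullptr)
        : _mailer(mailer ? mailer : &Mailer::GetInstance()) {}
    bool Confirm() { return _mailer->Send(); }
private:
    Mailer* _mailer;
};
```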

Any thoughts on that?

~~~
demallien
Well, that is a workable solution, but then, I'm sure you're already aware of
that, seeing as you use it already ;) Personally I prefer having the object
appear in the published interface, it helps highlight the existence of a
dependency, but your way doesn't seem horrible either. I suspect that it
depends a lot on the language you're using. I'm a C programmer, and optional
parameters are quite verbose in C, so I would never choose your solution, but
in Ruby or Javascript your solution could be quite clean.

~~~
jasonlotito
My big concern is always implementing a solution and not seeing an otherwise
obvious problem. Hence the question. =)

As a PHP guy, I find setting up optional params easy, so I like the solution.

------
chris_j
I don't normally do this but Google Cache is being a pain for me so it might
be being a pain for others. Here is the original article from
[http://webcache.googleusercontent.com/search?q=cache:http://...](http://webcache.googleusercontent.com/search?q=cache:http://adamschepis.com/blog/2011/05/02/im-adam-and-im-a-recovering-singleton-addict/&hl=en&strip=1)

I’m Adam, and I’m a recovering Singleton addict.

Posted by aschepis on May 2, 2011

My name is Adam, and I’m a recovering Singleton addict. There, I’ve said it. I
used to use singletons all the time because they make doing some things, like
sharing state globally, incredibly convenient. My code was littered with
XYZManager classes. The defining trait of these classes was the static
GetInstance() method that magically enabled me to get access to that object
and its state wherever I wanted!! What a great idea, right?

What a mess! I learned over time that the cost of changing one of these
things, or of doing a major refactor, was really high in terms of code
change. And to make things worse, because my classes’ dependencies were hidden
in the implementation and weren’t transparent in the interfaces, it was
impossible to write real unit tests. This made doing a big refactor even less
attractive.
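The hidden-dependency shape described above looks roughly like this (the
names are hypothetical, not from the original code): nothing in
OrderProcessor's interface reveals that it reaches into a manager singleton,
so a test cannot substitute it.

```cpp
#include <string>

// Hypothetical XYZManager-style class with the static GetInstance().
class PricingManager {
public:
    static PricingManager& GetInstance() {
        static PricingManager instance;
        return instance;
    }
    double RateFor(const std::string& /*sku*/) { return 9.99; }
};

// The dependency on PricingManager exists only inside the method body;
// it is invisible in the class interface and impossible to inject.
class OrderProcessor {
public:
    double Total(const std::string& sku, int qty) {
        return PricingManager::GetInstance().RateFor(sku) * qty;
    }
};
```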

So over the years, through work in the industry and coding on my own, I’ve
come to the conclusion that the singleton sucks, and that there are very few
places where it is actually appropriate (logging comes to mind as one
acceptable place). The fact of the matter is that in most of the places I see
singletons used in software, they are actually just an enabler of developer
laziness.

So here is my off-the-cuff list of why singletons suck. Feel free to comment
and add your own reasons (or counterpoints).

  * Singletons hide your dependencies. This makes code harder to understand.

  * Singletons make unit testing difficult. It’s hard to mock out a global
    object that you can’t inject into a class.

  * Singletons reduce reusability. If I’m writing a class that utilizes a
    singleton because my application will only ever use one, then I’m limiting
    myself, because I can’t use that library to write test tools that may want
    to simulate how many of these objects (for instance, many users) interact
    with a system.

  * Singletons reduce scalability. A single, global object? Sounds like a
    source of contention to me.

  * Singletons are not good object-oriented design; they are lazy!

------
neilk
Steve Yegge wrote on this some years ago, calling it the Simpleton Pattern.
[http://sites.google.com/site/steveyegge2/singleton-considere...](http://sites.google.com/site/steveyegge2/singleton-considered-stupid)

------
aschepis
Apologies!!!! You guys killed my little ec2 instance! It's back up, but
running slowly. I'm going to look for some new hosting. Thanks for the
comments. I'll take some time to respond later.

------
palish
As a video game programmer, I use the Singleton pattern quite often. For
example, "TextureMgr", "MaterialMgr", "ModelMgr", "GrSubsys" (for Graphics
Subsystem), etc..

The most beautiful Singleton code I've seen in C++ is:

    
    
      class GrSubsys
      {
      public:
        GrSubsys();    // assigns gGrSubsys = this
        ~GrSubsys();   // clears gGrSubsys
      };
      extern GrSubsys* gGrSubsys;
    
    

... then the constructor and destructor are written such that you can
startup/shutdown the singleton as follows:

    
    
      //-------------------------------------------------------
      void
      AppStartup()
      {
        // initialize graphics subsystem.
        new GrSubsys;
      }
    
      //-------------------------------------------------------
      void
      AppShutdown()
      {
        // shutdown the graphics subsystem.
        delete gGrSubsys;
      }
    
      //-------------------------------------------------------
      int
      main()
      {
        // startup the engine.
        AppStartup();
    
        // enter the per-frame application loop.
        while ( AppFrame() )
        {
        }
    
        // shutdown the engine.
        AppShutdown();
    
        return 0;
      }
    

_Shrug_. A friend introduced it to me, and I liked it a lot. The same concept
can be easily applied to C, too.

~~~
latch
how much unit testing do you do?

~~~
palish
A better question might be, "Would this pattern impact our ability to write
unit tests?" I believe the answer is "No."

Let's say the module Foo depends on the Graphics subsystem. That is, Foo.cpp
has the code:

    
    
      #include "GrSubsys.h"
    
      //-------------------------------------------------------
      Foo::Foo( const string& name )
      {
        // fetch a handle to our model (loading it if necessary).
        _model = gGrSubsys->GetModel( "models/" + name );
      }
    
      //-------------------------------------------------------
      bool
      Foo::IsValid()
      {
        return ( _model != NULL );
      }
    

In order to write a unit test that takes into account the aforementioned
Singleton pattern, you might write:

    
    
      //-------------------------------------------------------
      void
      Test_EngineComponents()
      {
        // prepare for science.
        AppStartup();
       
        //==========================
        // Test #1 - Foo
        //==========================
        {
          // load a Foo entity.
          Foo* sunTzu = new Foo( "test/warlord" );
        
          // verify the entity loaded successfully.
          assert( sunTzu->IsValid() );
    
          // shutdown.
          delete sunTzu;
        }
    
        // conclude our science.
        AppShutdown();
      }
    
    

and AppStartup() is the function which initializes the subsystem singletons
(and those will initialize their manager singletons).

It's about discipline. Any fool can butcher with any tool.

~~~
hartror
Unless I have missed something about unit testing somewhere, I am pretty sure
that fails the "unit" part of unit testing. Unless that `#include "GrSubsys.h"`
is actually importing a mock graphics subsystem.

~~~
palish
A mock graphics subsystem isn't possible since it depends directly on the
video card.

~~~
latch
My initial point was that the code isn't really unit testable. From what I
know, this is common in video games..too much code tied to hardware.

I'm certainly not saying this is bad. I am saying that, from my experience,
video game programming is very different than other sorts of programming.

~~~
palish
Ah. I gave an example of running a unit test against the hardware. I'd like to
understand _why_ it's not really testable. Not to defend myself -- to better
myself.

~~~
latch
Again, video programming _is_ different..you might want to seek out the advice
of people from that field specifically.

We have an analogue in enterprise/web programming: code that interacts with
the database. Do you, or don't you, stub/mock out the data layer code?
Traditionally people did mock this out. Then Rails came along with
ActiveRecord...and now hitting the DB isn't only common, I think a lot of
people agree that, at a point, it's the right approach. Because, after all,
what are you really testing if you don't hit an actual DB?

But, that's specific to code that has dependencies on outside systems (like
the DB, or your graphic subsystem). When I look at your Foo class and the
IsValid method, I see a pretty specific unit that really can and should be
tested separately from an actual GrSubsys implementation. I mean, _you've_
given it a dependency on GrSubsys, but I'm not sure that's right.

Should the test read:

"it is invalid when the model is null"

or

"it is invalid when the graphic subsystem returned null"

?

The difference is subtle, but important. The first is decoupled from the
implementation: Foo is invalid when its model is null. The second is coupled
to the dependency: Foo is only invalid when the graphics subsystem returned
null.

What value does that extra coupling, within your test, get you? Why should
IsValid (either the test or the implementation) care why the model is null?
Why add that extra complexity, which makes your test more likely to break due
to changes in the implementation of the dependency?

If you test Foo independently, and separately test that gGrSubsys returns null
on an invalid model, the two unit tests work together while each stays
isolated from changes you might make to the other.
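One hedged sketch of the decoupling being suggested (Model and the
constructor signature are assumptions, not the engine's real API): let Foo
receive its model directly, so "invalid when the model is null" is testable
with no graphics subsystem running at all.

```cpp
#include <string>

// Hypothetical stand-in for the engine's model type.
struct Model {
    std::string name;
};

// Foo receives its model instead of fetching it from gGrSubsys, so
// validity can be tested in isolation from the graphics subsystem.
class Foo {
public:
    explicit Foo(Model* model) : _model(model) {}
    bool IsValid() const { return _model != nullptr; }
private:
    Model* _model;
};
```

Whether gGrSubsys returns null for a missing model then becomes a separate
test against the real subsystem.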

------
malkia
Singletons are fine, as long as you don't know they are such. For example,
calling a function that does lazy initialization (first-time init) is really
good: not requiring explicit initialization is sometimes really practical,
and it does not introduce messiness into your code. Most importantly, it does
not require putting that initialization code throughout every application
that you might use.

For example, at work we use the DEJA Insight profiler, and you can directly
place profile probes (C++) with DEJA_CONTEXT("SomeFunction") - it uses RAII to
mark the start/end of the probe. The point is that there is no explicit call
to DEJA_INIT, DEJA_CLOSE, etc. But this only works if the DEJA main
application is loaded.
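That "callers never know it's a singleton" shape can be sketched in C++ with
a function-local static (the Log function and its sink are hypothetical
examples, not DEJA's API): the first call performs the one-time
initialization, and no caller ever writes an explicit init/close pair.

```cpp
#include <string>
#include <vector>

// Hypothetical sink whose setup happens lazily, on first use.
struct LogSink {
    std::vector<std::string> lines;
    LogSink() { lines.push_back("initialized"); }  // one-time init
};

// Callers just call Log(); the function-local static is constructed
// on the first call, so no explicit initialization is ever required.
LogSink& Log(const std::string& line) {
    static LogSink sink;
    sink.lines.push_back(line);
    return sink;
}
```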

------
zoul
The post is down, so I can’t comment on that, but getting rid of singletons
was the single most effective thing I did to improve my software design. I
also summed up my objections to the singleton pattern in a blog post:

<http://zmotula.tumblr.com/post/1390385240>

There’s also a blog post called _Singletons Are Pathological Liars_ by Miško
Hevery, which is very well thought out and contains links to other related
topics:

[http://misko.hevery.com/2008/08/17/singletons-are-pathologic...](http://misko.hevery.com/2008/08/17/singletons-are-pathological-liars/)

Hope that helps somebody, reading Hevery’s articles was a huge eye-opener for
me.

------
cageface
Is it just me or have there been a lot of content-free short opinion pieces on
software here lately? I expect a lot more nuanced and detailed critique than
"x is bad".

------
MichaelGG
>"Singletons are not good object oriented design, they are lazy!"

Hmm, so perhaps that says something more about OO than "singleton style"?

