

Stop Using Mocks - latch
http://openmymind.net/2011/3/23/Stop-Using-Mocks

======
ericb
I'm a Rails dev, I do mostly test driven development, but my current bit of
heresy is that there are some tests with low or negative ROI that are not
worth writing.

Here are some examples: I almost never test at the view level, and barely at
the controller level, and instead skip to the integration level. View, and
even controller-level tests are brittle. If I'm going to spend time on brittle
tests, I'd rather it be at the highest level. My thinking is, if your app can
run a marathon, it can probably also walk. I'd also rather my test be married
to output than to implementation details, as that gives more ways the code can
change and still not break the test.

I feel like some folks view testing like a religion, or code coverage percent
as a "score" in the game of coding.

~~~
russellperry
"I almost never test at the view level, and barely at the controller level"

You sound like every Rails dev I know.

It depends on the controller action in question. If it's a boilerplate Rails
controller action, then the ROI might not be that high, but if the controller
is doing something more specific (assuming it's not doing something that ought
to be pushed to the model) then unit testing the action is important,
sometimes very important.

Code coverage is pretty much a meaningless metric because it can't say whether
the tests are actually good and useful. But it can help point experienced devs
toward areas of risk that may or may not warrant more attention.

~~~
ericb
Well, part of that goes to how I code. The controller is rarely doing anything
more important than boilerplate. I push that into the model as much as
possible.

------
PaulHoule
Like most TDD examples, I see the code and the first thing I do is want to
punch the wall.

Here we see an example of modularity for the sake of modularity, not business
requirements.

In all systems I've built, if encryption is pluggable, it is selectable on the
level of an individual user. The reason for this is that, in general, you
can't transform one kind of encrypted password into another kind of encrypted
password, not unless you have the plaintext form.

Practically, your encryption is pluggable on paper only, because you can't
decide you want to use a new algorithm for people who register after a certain
date... Examples like this add to the perception that TDD is rife with "Cargo
Cultism".

~~~
latch
Better than wanting to punch me! I'm curious: what language do you program in
most often? DI, and this artificial modularity for the sole purpose of
testability, is a common pattern and limitation of static languages. In fact,
most .NET and Java developers don't even realize that DI is a form of IoC;
most think the two are synonyms.

If you move this example to a dynamic language, suddenly artificial complexity
and the verbosity of the code shrink to something of (relative) pure beauty.

As a Ruby developer, I've made this argument many times before. As a C#
developer, while I feel the same frustration that you do with every line of
code I write, I don't see much that can be done.
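
To make the dynamic-language claim concrete, here is a hypothetical Python version of the repository (class names invented for illustration) that instantiates its collaborator directly, then gets stubbed in a test by patching the class itself. No interfaces or constructor injection are needed:

```python
from unittest.mock import patch

# Hypothetical classes: the repository instantiates its collaborator
# directly, with no constructor injection.
class Encryption:
    def check_password(self, username, password):
        raise NotImplementedError("would talk to a real hashing backend")

class UserRepository:
    def find_by_credentials(self, username, password):
        # Direct instantiation: a static language would demand an
        # injected IEncryption here just to make this testable.
        if Encryption().check_password(username, password):
            return {"username": username}
        return None

# In a test, patch the method on the class itself; no container needed.
with patch.object(Encryption, "check_password", return_value=False):
    assert UserRepository().find_by_credentials("Ghanima", "Duncan") is None
```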

~~~
PaulHoule
I program quite a bit in C#, which I like, as well as in dynamic languages
such as PHP, Perl, Python, etc.

I avoid the use of DI in C#; it's not a "limitation of a static language" but
it's a choice that people make. I find that wonderfully concise and
maintainable code can be written in C#. Some of the benefits of DI can be had
through careful use of the static scope in desktop applications and of the
request scope in ASP.NET applications. There are all sorts of tricks you can
do to improve testability in a system of this sort.
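
A rough sketch of the static-scope alternative, in Python with invented names: the collaborator lives in one static slot, production code reads it, and a test simply overwrites it:

```python
# Invented names throughout; a sketch of swapping a static-scoped
# collaborator instead of injecting it through constructors.
class SmtpMailer:
    def send(self, to, body):
        raise RuntimeError("would hit a real SMTP server")

class Services:
    mailer = SmtpMailer()  # the single production-wide instance

def notify(to):
    # Production code reads the static slot; nothing is passed in.
    Services.mailer.send(to, "Welcome!")

class RecordingMailer:
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))

# A test overwrites the slot, exercises the code, and inspects the fake.
Services.mailer = RecordingMailer()
notify("paul@example.com")
assert Services.mailer.sent == [("paul@example.com", "Welcome!")]
```

The trade-off versus DI is that the dependency is implicit, and tests must remember to restore the slot.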

Testability is a good thing, in and of itself, but you've got to weigh the
cost of bulking up your code. In a case like this it may make sense to make
the unit of testability larger.

I see everything in a program, including lines of code, methods, classes,
files, and tests, as a liability. Each artifact is like a puppy: you not only
have to create it, you're committed to maintaining it in the future. I've used
automated unit tests to create systems that I could never have built
otherwise, but there comes a point where the cost of tests exceeds the benefit,
and when the artifact count increases dramatically (as it does when you create
mock objects), it's time to consider the economics.

------
nvarsj
A better article, in my opinion, is Martin Fowler's "Mocks Aren't Stubs":
<http://martinfowler.com/articles/mocksArentStubs.html>.

However, I disagree with some of the basic premises. For example, Fowler says
that using mocks ties the test to the implementation. I don't see how stubs
are much different - you test the state of the stub, and this requires
knowledge of how the implementation will use the stub. Any internal
interaction testing will be tied to the implementation by its very nature.

I suppose the argument is that mocks can be even more coupled to the
implementation by using expectation matchers. I find this is a bit
disingenuous though - any decent mocking framework will make these matchers
optional. But when you really need to test that specific interaction, it can
be very handy.

So perhaps "mocks aren't stubs", but I would say a stub is a mock - just a
dumber one.
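
The distinction can be shown with a single test double. In Python's unittest.mock (used here purely for illustration, with a made-up function under test), the same object serves as a stub if you assert only on the outcome, and as a mock if you additionally assert on the interaction:

```python
from unittest.mock import Mock

def find_by_credentials(encryption, username, password):
    # Hypothetical code under test, mirroring the article's example.
    return username if encryption.check_password(username, password) else None

double = Mock()
double.check_password.return_value = True  # canned answer: stub behaviour

result = find_by_credentials(double, "Ghanima", "Duncan")

# Used as a stub: assert only on the outcome.
assert result == "Ghanima"

# Used as a mock: additionally assert on the interaction itself,
# which is the part that couples the test to the implementation.
double.check_password.assert_called_once_with("Ghanima", "Duncan")
```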

~~~
brown9-2
I agree with you on this, and I feel like I must be missing some feature of
"stubs", or working with a different definition of what a "stub" is. Testing
with a stub implementation of the collaborator, which you have to write
yourself anyway, seems to tie your test not just to the implementation of the
class-under-test but also to the stub you've written.

In the OP's article, I don't see the difference between what the author is
calling a "mock" versus a "stub", only that in the latter the expected values
of the arguments passed to the collaborator's method are not asserted. This
still seems like a mock to me.

------
Groxx
I see what they're trying to say, and I like the increased decoupling with
FakeItEasy (I'll probably switch to it), but _every single test_ in there
makes a call that depends on parameter type and order.

For instance, every test includes a line like this:

    
    
      var user = new UserRepository(store, encryption)...
    

Later, there are lines like this:

    
    
      A.CallTo(() => encryption.CheckPassword("Ghanima", "Duncan")).MustHaveHappened();
    

How is that not identical? The faking code is more flexible, as you can fake
any kind of call without regard to the parameters or types, but that's just as
tightly bound as the other examples, and they'll all fail if you re-order the
parameters.

~~~
latch
The test that you picked is specifically there to test that call. Did you not
see:

    
    
      Any.CallTo(store).WithReturnType<User>().Returns(user);
    

Also, two of the tests have since been updated to truly leverage FakeItEasy's
stubbing capabilities:

    
    
      [Test]
      public void ReturnsNullIfThePasswordsDontMatch()
      {
        var store = A.Fake<IDataStore>();
        var encryption = A.Fake<IEncryption>();
    
        A.CallTo(() => encryption.CheckPassword(null, null)).WithAnyArguments().Returns(false);
        var user = new UserRepository(store, encryption).FindByCredentials(null, null);
       
        Assert.IsNull(user);
      }
    

There's no setup/expectation on the store - it'll just return a canned
(default new user) response.

------
angelbob
His example code isn't really about mock versus stub. It's about only
depending on the parts (parameters passed, mostly) that you actually use, not
creating extra bits that mock other objects.

I agree that your test code should only specify the bits you care about. I
find the mock versus stub distinction weird and artificial.

Maybe .NET people use those terms differently? I'm a Rails guy.

~~~
latch
No, you are right. It's about being as unspecific as the test requires. I
updated a couple of examples to lean more heavily on more traditional stubs;
still generated by the framework, but expectation or set-response, it's all
canned.

------
ericmoritz
The biggest issue that ever comes up for me when using mocks is getting the
assumptions wrong. For instance, perhaps I mock my interactions with an
external web service and they change the JSON structure on me. My tests
continue to pass, but in production, bad things happen.

~~~
jdminhbg
I've been using ephemeral_response[1] in Ruby, there may be a similar library
in whatever language you're working with. The idea is that it makes one real
call to the service and caches it for a specified amount of time as a fixture.
That way you don't constantly make calls against e.g. the Twitter search api,
but when they change how it works in 6 months, your test will fail.

[1]: <https://github.com/sandro/ephemeral_response>
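
For languages without such a library, the record-and-replay idea is small enough to sketch by hand. This Python version is a loose approximation of what ephemeral_response does, with an invented cache path and TTL:

```python
import json
import os
import time
import urllib.request

def fetch_with_fixture(url, path="fixture.json", ttl=180 * 24 * 3600):
    """Return JSON from `url`, replaying a cached fixture while it is
    fresh. The cache path and roughly-six-month TTL are made up."""
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < ttl:
        with open(path) as f:
            return json.load(f)  # replay the recorded response
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)   # one real call against the service
    with open(path, "w") as f:
        json.dump(data, f)       # record it for subsequent runs
    return data
```

Once the fixture expires, the next test run hits the real service again, so an upstream format change eventually fails the test.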

~~~
jjrumi
I don't know if ephemeral_response solves this, but the first thing that comes
to mind is caching an error (the web service doesn't respond in time, or
whatever). This has happened to me a few times.

Caching the error is a big problem when you have a system that determines if
your code is stable or not and blocks deployments in production to prevent
errors.

------
jister
Looking at your code, I don't think you need the UserRepository class at all.
From the two sample methods you provided, notice that "Store" and "Encryption"
can stand alone, which leaves the UserRepository class as only a wrapper that
complicates your unit tests.

Of course I might be wrong since I haven't seen the rest of the code in
UserRepository.

Also, you could design your FindByCredentials method as:

    
    
      public bool FindByCredentials(string userName, string password, out User user)
      {}
    

~~~
latch
The entire example is contrived to talk about testing. Not much about
UserRepository is ideal.

I think out and ref are pretty horrible. I understand that the pattern is
convenient for the BCL's *.TryParse methods, but it should really stay there.
Your signature tells me that you'd likely return false rather than throw an
exception. I'd agree that returning null isn't the best, but I'd favor
returning a NullObject over true/false with an out parameter.

~~~
MartinCron
_I think out and ref are pretty horrible._

I agree wholeheartedly, though I'm not sure why. I often find myself creating
small "data bunch" types with names like ValidationResult that consist of a
bool & a set of messages to use as responses to the question "is this input
OK, and if not, why not?"
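
Such a "data bunch" is tiny in any language. A hypothetical Python version of the ValidationResult idea, with invented fields and rules:

```python
from dataclasses import dataclass, field

@dataclass
class ValidationResult:
    # A bool plus the reasons it is False.
    ok: bool
    messages: list = field(default_factory=list)

def validate_username(name):
    messages = []
    if not name:
        messages.append("username is required")
    elif len(name) < 3:
        messages.append("username must be at least 3 characters")
    # One return value answers both "is this OK?" and "why not?".
    return ValidationResult(not messages, messages)
```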

------
aneth
The idea that every test should cover only one code change is absurd and a
recipe for writing way more code than necessary. More code is more brittle and
more difficult to change agile-y. Write enough tests to ensure things work,
not to ensure every line of code is where you put it. That's folly and a waste
of resources.

------
jjrumi
In my case, most of the time mocks are used to avoid database interactions:
they are dangerous to write to and slow to read from.

The best thing I've run into lately has been "mocking" the DB by creating a
copy (temporary tables with only the needed data) of the tables that the code
interacts with.

This way you create a "mock" of the data you're going to work with, and in my
opinion you get a more reliable test.
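
One way to realize this in Python (an illustrative sketch, not jjrumi's actual setup) is an in-memory SQLite database seeded with only the rows the test needs:

```python
import sqlite3

def make_test_db(rows):
    # Copy only the needed table and rows into an in-memory database;
    # the schema here is invented for illustration.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)", rows)
    return conn

def find_user(conn, username):
    # Code under test runs real SQL against the fake, so query bugs
    # surface in the unit test rather than in production.
    cur = conn.execute("SELECT username FROM users WHERE username = ?",
                       (username,))
    row = cur.fetchone()
    return row[0] if row else None
```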

~~~
stuhacking
Testing a DB connection sounds more like integration than unit testing.

Mocks are used to fake things that add significant overhead but no value. If
you have a suite of 1000 unit tests, you really don't want half of them making
actual database requests.

Simply making 'safe copies' of the database data does not solve the problem
that Mocks are intended to solve.

~~~
jjrumi
It's not a "safe" copy, it's a set of known data in memory that fakes the DB
connection and reacts as it will with a real DB connection.

And yes, I have 500+ UTs.

~~~
stuhacking
I'm sorry, I misunderstood the situation you described.

