
Testing code is simple - th3james
http://th3james.github.io/blog/2013/09/25/testing-code-is-simple/
======
mercurial
What I wish people had told me before about this article:

\- this article only deals with unit testing

\- it does not help you write more maintainable tests

For the record, I don't think (unit) testing is simple. Tests are, in my
experience, the least maintainable and readable part of a codebase. You often
end up with many lines of setup for complex business objects, and usually the
only hint as to what the hell is being tested is the method name. And that's
just unit tests. Integration tests are a major pain, and often even harder to
understand and maintain.

~~~
arethuza
Not to mention the fact that unit tests can only help with _verification_ \-
i.e. that you have built the software to expectations and to catch
regressions.

I would say that 90% of the grief I have experienced building applications is
the validation part - _that you are actually building the right thing_.

~~~
tootie
I work in web consulting where we frequently take on a project from scratch
(greenfield or otherwise) and ramp it up quickly in 6-18 months then hand off
to the client who will likely maintain it for a year or two or three then
redesign (nature of the projects, not a result of quality). My company is
pushing for more and more sophisticated automated testing at the same time that
I'm starting to think that automated testing is costing us more effort than
actually saving. Our bugs are, as you say, predominantly misunderstood
requirements. The tests never catch them. Even with correctly understood
requirements, unit tests frequently miss edge cases. Something that gets
missed in so many of these process debates is that different kinds of work
require different processes. If your job is to maintain a massive software
product, then automated tests are probably your bread and butter. But for my
line of work, they just ain't.

~~~
janjongboom
Software is not just development; there is also a maintenance perspective. If
I have a bug report in a complex piece of code, it's a lot easier for me to
write a test that covers the use case and work from there than to build, fire
up a test device, etc.

Or when refactoring a piece of code that renders UI, write some tests to
verify a couple of common cases and then refactor. Run the tests and I'll know
whether the output is the same.

~~~
tootie
I'm saying there's a curve measuring the time spent writing tests against the
lifespan of the application. Think of it like the rent-vs-buy debate: tests
are the big down payment you make hoping it will pay off in the future. The
kinds of projects my company does are rentals. Same reason we rarely do major
refactors.

------
myoffe
Nope, there's no way around it. Tests are _not_ easy. But! Writing tests,
learning from that, then improving your tests, which in turn help you design
better software will make writing tests easier. And will help you write better
code, in general.

Also, 99% of the time, the first test you write on a job will be for an
existing piece of software, so it will definitely not be easy. But you will
learn far more about the software you are testing than you would by coding in
the edit-and-pray methodology.

I highly recommend reading [http://www.amazon.com/Growing-Object-Oriented-
Software-Guide...](http://www.amazon.com/Growing-Object-Oriented-Software-
Guided-Tests/dp/0321503627)

------
icefox
"Test driven development is now widely recognised as ‘a good thing’"

should probably be written

"Using testing in development is now widely recognised as ‘a good thing’"

There is controversy around specifically TDD and studies showing that it does
no better than writing unit tests during or after development. For citation
see the references at the end of the chapter on TDD in "Making Software: What
Really Works, and Why We Believe It"

~~~
NateDad
One reason why TDD is better than writing unit tests after development is
because then the tests actually get written. Whereas, if you wait to write
them after the code, they often get lost in the pressure to deliver the
feature and/or move on to developing new features.

The other reason TDD is good is that it forces you to make modular code with
limited functionality to make it easy to test. You can do that without TDD...
but TDD forces you to think about it ahead of time, rather than writing a huge
mess of code and then saying "well, this will be too hard to test, so we won't
bother".

TDD isn't magic, it's more of a mental hack to get you to do the right thing.
If you do the right thing anyway, then no, TDD won't help.

Note, I'm not a huge TDD guy. In fact, I usually write my code first. But I've
done some TDD and I can definitely see the benefit, and I try to do it when I
can.

------
pasbesoin
I've done my share of testing. When you walk into a meeting with the most
senior dev team in the (significantly sized) shop, ask a single question about
an underlying assumption, and have development go back for another couple of
weeks to take another whack at it, you realize -- as one example -- that
"testing" is not simple.

If you're lucky, you're doing real QA, where a smart, knowledgeable, and
independently minded QA analyst can discover and raise such concerns _before_
the work gets done.

"Testing" is a shithole because many (most?) shops don't give it much respect
-- nor resources.

P.S. One of the shorter meetings of my life. 5 minutes, including the
socializing. The technical part took maybe 2.

------
citricsquid
I recently decided to become a proper developer and started looking into
testing, I'm almost through with "Laravel Testing Decoded", a book by Jeffrey
Way, it focuses on Laravel but contains a wealth of information relating to
testing in general. The book has given me a lot more value than I got out of
any other resource that covered testing. Highly recommended and only $20:
[https://leanpub.com/laravel-testing-decoded](https://leanpub.com/laravel-
testing-decoded)

------
DanWaterworth
I worry that many developers write tests because it is "widely recognised as
‘a good thing’" without knowing why. It's clearly important to know why,
otherwise how do you know when you are getting better at it?

So, what is the purpose of unit tests?

~~~
NateDad
Unit tests make sure that what you _think_ the code does is what it is
actually doing. This is great for catching corner cases that you might not
always hit in a live test... or for testing code that isn't yet called in live
use.

It also makes sure that if code changes, it's still functioning correctly.
Aside from catching initial bugs, this is the biggest addition to
maintainability of code.

I find unit tests to be incredibly liberating. Have you ever made changes to
some parts of the code, but then worried you might have broken something? With
unit tests, you can be confident that your change didn't break any expected
behavior.

~~~
bluntly_said
I'm not sure I agree with you here. Tests are still only as clever as the
person writing them. This means it's rarely going to catch corner cases that
the developer wasn't already aware of.

I think the real value of tests is exposing expected behavior to teammates,
and then providing quick sanity checks against that expected behavior.

The biggest thing new developers miss about testing is that most times you're
not writing tests for yourself, you're writing them as a courtesy to your
teammates.

~~~
RHSeeger
> it's rarely going to catch corner cases that the developer wasn't already
> aware of

Learning to recognize potential edge cases, and to partition inputs so you
check as many input categories as possible, is a skill learned through testing
experience.

------
AlwaysBCoding
What I wish people had told me about testing:

* You write tests in accordance with a pattern:

1\. Arrange

2\. Act

3\. Assert

4\. Annihilate

If you annihilate your test data with configuration options, and you abstract
your "arranging" into factories and custom methods, then every test you write
should be exactly three lines. When I learned this it gave me a consistent
mental framework to think about how to write good tests.
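As a hypothetical sketch of the three-line shape this produces (the factory
name `makeUser` and the function under test `promote` are made up here, and
the "Annihilate" step is assumed to live in the framework's teardown hooks):

```javascript
// Arrange / Act / Assert, with the arranging hidden behind a factory.
function makeUser(overrides) {
  // Arrange: the factory supplies sensible defaults for the messy setup
  return Object.assign({ name: "Ada", admin: false }, overrides);
}

function promote(user) {
  // The (made-up) code under test: returns a copy with admin rights
  return Object.assign({}, user, { admin: true });
}

function testPromoteGrantsAdmin() {
  const user = makeUser({ name: "Grace" });  // Arrange
  const promoted = promote(user);            // Act
  if (!promoted.admin) throw new Error("expected promoted user to be admin"); // Assert
  return promoted;
}
```

Once the setup is abstracted away like this, the test body itself stays at the
arrange/act/assert three lines regardless of how complex the objects get.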

* Test the interface, not the implementation:

This was an eye opening realization about what to test. If you haven't seen it
yet, watch Sandi Metz' incredible RailsConf talk -
[http://www.youtube.com/watch?v=URSWYvyc42M](http://www.youtube.com/watch?v=URSWYvyc42M)
\- it's so good

* Stubs:

Stubs are exactly like the viruses that we learned about in biology class.
You load them up with a value, then they literally latch onto a method and
inject that value into it. When you're just starting out with testing,
effective use of stubs can get you really far, and only when you start hitting
the limits of practicality do you need to think about bringing in something
more complicated.
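A minimal hand-rolled stub makes the "latch on and inject" idea concrete. The
names below (`stub`, `api`, `fetchScore`) are illustrative, not from any
particular framework:

```javascript
// Replace a method so it returns a canned value; return a restore function.
function stub(obj, method, value) {
  const original = obj[method];
  obj[method] = () => value;                 // the stub injects the value
  return () => { obj[method] = original; };  // call to undo the stub
}

// Example: avoid a real network call while testing
const api = {
  fetchScore() { throw new Error("no network in tests"); }
};

const restore = stub(api, "fetchScore", 42);
const score = api.fetchScore(); // returns the injected 42, no network touched
restore();                      // api.fetchScore throws again after this
```

Real stubbing libraries add call counting and argument matching on top, but
this is the whole core of the mechanism.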

* Testing isn't easy per se, but it is definitely enjoyable if you focus on making it that way

* Cucumber is pretty stupid, don't waste your time with it

~~~
charlietran
That video was excellent, thank you! Sandi Metz always comes up with great
analogies that clear up the conceptual fog in my mind. Her book Practical
Object Oriented Design in Ruby is a great read too.

------
linuxlizard
May I also suggest an extended assert?

xassert(condition,value);

e.g., xassert(result==3,result);

When the assert fails, the value is also reported. Helps with the question,
"So it's not 3. What is it?"

I started using xassert when working in embedded systems where there is no
debugger, no stack trace. Knowing the value of the error is valuable.

retcode = some_os_function(blah,blah,blah);
xassert(retcode==OS_SUCCESS,retcode);  /* <-- why did my call fail? */
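
Transposed to JavaScript for illustration (the original idea is from embedded
C, where this would be a macro), a sketch of such an xassert might look like:

```javascript
// Extended assert: on failure, report the offending value, not just
// the fact that the assertion failed.
function xassert(condition, value) {
  if (!condition) {
    throw new Error("assertion failed; value was " + JSON.stringify(value));
  }
}

// Usage: answers "so it's not 3 -- what is it?"
const result = 2;
let message = "";
try {
  xassert(result === 3, result);
} catch (e) {
  message = e.message; // "assertion failed; value was 2"
}
```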

~~~
TorKlingberg
A common version is to have assertEquals(a, b), that will print out both a and
b if they are not equal. Then add similar asserts for "not equal", "less than"
and other relations.

------
tenpoundhammer
This is a good example for a small-scale, isolated project with a small amount
of complexity. If, however, you are writing an enterprise-level, scalable
application with many business objects and tons of complexity, things get way
harder.

For example: What do you do if your function has a call to a 3rd party
webservice? You do a mock.

What do you do if you have 37 levels of state that are dependent on the
objects you are receiving from the mock? Create 37 different business objects
and verify that the output for each is correct.

This problem builds and gets bigger, harder, and more conceptually complex. So
while it can be easy to get started, it is not by definition intrinsically
easy.

However, it is well worth every minute spent. It makes refactoring and
building up much, much easier. It also leaves a breadcrumb trail of clues to
figure out what the code is doing. It leads to better code stability, and far
less regression.

~~~
tonyarkles
I'm not criticizing your comment, just adding a little bit to it. You're
totally right, there's a lot of complexity when you're testing complex
features in a huge app.

Buuuuut, a huge complex app is probably not the right place to try learning
any new technology. Do a few small things first, prototype something out, test
it along the way. Then maybe something a little more complicated. Slowly
expand your horizon of uncertainty.

That way you only need to focus on learning one thing at a time. The assert()
call that OP suggested is exactly enough to start thinking about testing
simple code; once you're doing well at it, you can start noticing repetition
and opportunities for abstraction (and finding existing libraries that have
these abstractions already done for you); then, you start doing testing on
more complex code and start learning about mocks to facilitate testing there.

Some people do really well in a "big bang" kind of learning environment, but I
definitely don't. For me, learning works best given: a rough map of where I'm
trying to get (even just a mind map that gives me things to look up when I get
stuck), and an opportunity to master each piece before expanding onto new
topics.

~~~
tenpoundhammer
I agree.

------
bcoughlan
I develop software in a research environment, turning PhD students code into
production quality systems. I think there are a lot of developers out there
who don't write tests, or only write tests because you're "supposed" to (as
evidenced by the popularity of the Stack Overflow question "Is Unit Testing
worth the effort?" [http://stackoverflow.com/questions/67299/is-unit-testing-
wor...](http://stackoverflow.com/questions/67299/is-unit-testing-worth-the-
effort)).

They don't see the benefit for themselves, but like proper documentation, it's
mainly a courtesy to other developers.

What I wish I was told about testing is that if you modify your workflow, it
benefits YOU as well as your colleagues. Without testing, trying out a feature
means:

1\. Make changes.

2\. Restart the application or server (possibly refreshing the page).

3\. Go through a series of clicks or keystrokes.

4\. Didn't work? Go to step 1.

Steps 2 and 3 can take a lot of time, and you'll likely be doing them over and
over again (boring!). With testing this becomes a 2-second, cognition-free
process, and even that can be automated with Grunt/gnotify notifications. The
boring part of coding is now automated. With good coupling, you can quickly
isolate your change/compile/run cycle to the part of the code you're working
on.

Programmers already know the benefits that testing brings 6 months down the
line when something breaks, but it's wishful thinking to expect people to
think that far ahead (and smaller companies are often in a hurry to get
something out the door before the money dries up). If you want intrinsic
motivation to write tests, you learn how to work such that tests provide
immediate benefit.

~~~
dspeyer
But testing doesn't eliminate 2 and 3. Testing only checks for bugs you've
thought of. You still need to do manual tests of new features to catch bugs
you didn't think of.

~~~
bcoughlan
It doesn't eliminate them, but it minimises the effort of performing these
steps. You are of course correct in saying that you need to do manual testing
to discover issues that you never thought of, but this is a separate process
(which should not usually be done by developers who are too entrenched in the
inner workings of the system to see it with fresh eyes).

------
EGreg
I also always wondered why testing frameworks need all that stuff besides
assert()?

In fact, in browser JS you can even wire onerror to send errors to a server
(e.g. Google Analytics) in production.

What's the point of all the fancy .isEqualTo etc!

~~~
jdlshore
Error messages. If the assertion framework knows what you're checking, it can
provide a better message.

E.g., "assert(a == 3)": "Assertion failed."

vs.

"assertEquals(a, 3)": "Expected 3 but was X."

The fancy "spec" style ("a.should.be.equal.to(3)") is overkill, though, I'd
agree. It adds complexity for dubious benefit.

~~~
EGreg
Why can't it be

assert("a === 3", a, b)

And then it can say "a === 3 failed. (2, 4)"

~~~
dragonwriter
It _can_ be, but both the test code _and_ the error messages from that are
worse than with many existing testing frameworks to my eye, so I wouldn't
_choose_ that over the existing alternatives.

~~~
EGreg
how are they worse?

I don't want.to.sitThere.writingThis(kindOf, crap)

~~~
dragonwriter
> how are they worse?

The code reads worse for a number of reasons:

:: you've written an equality, rather than truth, assertion named "assert"
rather than "assertEquals".

:: you're writing what would be the code of a truth assertion as a string for
the message text.

For the same role, most unit testing frameworks have an equality assertion /
expectation assertion that serves essentially the same role but reads better.

> I don't want.to.sitThere.writingThis(kindOf, crap)

Then don't use spec-style frameworks. There's plenty of xUnit and similar
style frameworks that use simple assertions (and, yeah, you can just use bare
truth assertions all the time if you want, but you've usually got a handful of
nice tools for common cases like equality assertions to go with them.)

~~~
EGreg
Fine, how about this:

    
    
      function testCase() {
         // do some stuff
         assert(a>=b, 'less pictures than participants', a,  b);
         // do some stuff
         assert(cool.length, 'cool is empty');
      }
    

output:

    
    
      "testCase: less pictures than participants (3, 4)"
      "testCase: cool is empty"
    

easy, no?

~~~
dragonwriter
If you want to write your own one-function test microframework on that model,
feel free; I don't see any reason to prefer using it to existing frameworks,
whether xUnit-style or spec-style.

------
xsace
really not worth the read

------
zwieback
Testing is only simple if you work on simple code; it does not work nearly as
well for real-time, embedded, or low-level code.

However, the beneficial side effects of TDD are much greater than just the
verification aspects so I think more foundational work needs to be done on
test frameworks for hard problems.

~~~
trcollinson
I respectfully beg to differ with your first statement. I TDD in embedded and
low-level code, well, always. There is a great book on this by Grenning, worth
reading by anyone who wants to understand TDD better, whether it is for
embedded C or any other type of application.

Test-Driven Development for Embedded C [http://pragprog.com/book/jgade/test-
driven-development-for-e...](http://pragprog.com/book/jgade/test-driven-
development-for-embedded-c)

~~~
zwieback
I have that book and really like it. I have been trying to apply it to my
embedded work. It's a great start but we need a lot more development in that
area, including more active support from the big embedded tool vendors.

------
fennecfoxen
I object to calling this "test-driven development." The test is not driving
the development. This is merely "development, with some tests."

Let the tests actually shape your workflow, and then get back to me about how
your development is "test-driven".

------
lectrick
I really think all languages should have "assert" and "assert_equal" methods
right out of the box, built-in.

Not that they're hard to write yourself, of course.

------
Dirlewanger
So he doesn't like RSpec because it's in the spec style... which if anything
makes tests a bit easier on the eye and more human-readable? Ooook

------
waterlion
Hm. This is possibly the most simplistic conceivable form of unit testing.
Granted, lots of testing takes this form but it's only the start.

~~~
vpeters25
My experience is that a simple "it works" test covers most of the potential
bugs in the test subject, so I just write {method name}_itWorks and call the
method inside, asserting whatever the method was supposed to do.

I only write more tests if there are important business rules (new employees
should be created with "Pending" status), or to cover edge cases uncovered by
QA.

------
gcb1
this little article is pernicious. please do not start thinking this is proper
testing. what he shows is a lame replacement for some manual browser testing.

there's no talk about clean states between tests. no concern about emulating
data sources. etc.

this is what should be called half-assed functional tests. not even unit tests
as everyone here says.

man tests really are black magic around those parts...

------
addflip
TTD FTW

~~~
orng
You probably mean "Test-Driven-Development" or "TDD" for short.

