What I wish people had told me about this article beforehand:
- this article only deals with unit testing
- it does not help you write more maintainable tests
For the record, I don't think (unit) testing is simple. Tests are, in my experience, the least maintainable and readable part of a codebase. You often end up with many lines of setting up complex business objects. And usually, the only clue as to what the hell is being tested is the method name. And that's just unit tests. Integration tests are a major pain, and often even harder to understand and maintain.
Not to mention the fact that unit tests can only help with verification - i.e. that you have built the software to expectations and to catch regressions.
I would say that 90% of the grief I have experienced building applications is the validation part - that you are actually building the right thing.
I work in web consulting where we frequently take on a project from scratch (greenfield or otherwise) and ramp it up quickly in 6-18 months then hand off to the client who will likely maintain it for a year or two or three then redesign (nature of the projects, not a result of quality). My company is pushing for more and more sophisticated automated testing at the same time that I'm starting to think that automated testing is costing us more effort than it's actually saving. Our bugs are, as you say, predominantly misunderstood requirements. The tests never catch them. Even with correctly understood requirements, unit tests frequently miss edge cases. Something that gets missed in so many of these process debates is that different kinds of work require different processes. If your job is to maintain a massive software product, then automated tests are probably your bread and butter. But for my line of work, they just ain't.
You folks are in an odd situation: you always see the costs of testing, but you never experience much of the benefits. Your intuition of the right amount of coverage is guaranteed to be off.
For me the right amount of automated testing is all about exploring the cost vs benefit boundary, which shifts with tooling and circumstance. But my firm rule now is: code must either have a firm expiration date or full test coverage. Nothing in between is allowed.
So if you're going to cut back on testing, make sure your customers have agreed in writing on a date after which they should throw the code away.
When you say "misunderstood requirements" my read on that is a failure of functional testing. Unit testing is good at testing small chunks of code for things like valid inputs/outputs, but not for validating whether the software performs as specified. Functional testing is hard to automate initially, but can be automated for regression testing. Performance testing, on the other hand, is easy to automate. Sounds like your organization is pushing automation as a way to control costs, without perhaps fully understanding what can be automated and when.
Software is not just development; there is also a maintenance perspective. If I have a bug report in a complex piece of code, it's a lot easier for me to write a test that covers the use case and work from there than building, firing up a test device, etc.
Or when refactoring a piece of code that renders UI, write some tests to verify a couple of common cases and then refactor. Run the tests and I'll know whether the output is the same.
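A minimal sketch of that "pin down the current output, then refactor" idea; the render_badge function and its markup are invented just for illustration:

    require "minitest/autorun"

    # Hypothetical rendering function that is about to be refactored.
    def render_badge(user)
      "<span class=\"badge\">#{user[:name]} (#{user[:role]})</span>"
    end

    class RenderBadgeTest < Minitest::Test
      # Pin down the current output for a common case; after refactoring,
      # a failing test means the rendered output changed.
      def test_renders_name_and_role
        html = render_badge({ name: "Ada", role: "admin" })
        assert_equal '<span class="badge">Ada (admin)</span>', html
      end
    end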
I'm saying there's a curve measuring the time spent writing tests against the lifespan of the application. Think of it like the rent vs buy debate. Tests are the big down payment you make hoping that it will pay off in the future. The kinds of projects my company does are rentals. Same reason we rarely do major refactors.
As well as helping with verification, I find unit tests help with design. I write less code, and that code is simpler and more loosely coupled when I write tests first.
This has been my experience as well. And I actually like the rspec style of testing that the author dismisses, because it allows you to group tests by functionality covered. This goes a long way towards making your tests more approachable and maintainable. When I contribute to a Ruby gem with good rspec-style tests it's generally trivial for me to see how to format and where to put my tests covering the new functionality.
And then every time that stuff changes, all the tests are suddenly out of date, which means people get used to the idea that failing tests are fine and fixed by updating the tests.
This is a people problem as much as a test problem.
1. Write your code so that independent units of functionality (algorithms) can be tested.
2. Make sure people understand, when changing code, either a) change the tests first, or b) recognize what tests should be failing because of the change and which shouldn't.
For good, testable code combined with good tests, it should be rare that the tests start failing when you change the implementation. Admittedly, that's true in theory and can vary widely in practice.
Sure, that's not a problem unit tests are meant to solve.
The appropriate practices there are short iterations, frequent releases, acceptance tests, and having product people sitting next to developers.
For those wondering how unit testing relates to other practices, go read Kent Beck's Extreme Programming Explained. He's one of the people who led the current wave of unit testing adoption via JUnit. Years before the term "Agile" existed, he was on a team that found a highly productive way of working, one for which unit testing was an important foundational element. But it was just one of 12 practices they saw as necessary.
Once you have that working well and you're building what the businesspeople expect, the next step is to make the businesspeople start testing their assumptions. That's basically what the whole Lean Startup thing, minus the fad, is about: let's not just go build whatever the guy with the tie says. Let's all get evidence about what's really worth building.
I imagine the point is to overcome inertia --- to get people to take a first step. Often that's the hardest part when someone is facing something they expect to be painful, dreary, and complicated.
It's sort of like "the first dose is free", but applied for good.
Yup. "testing" is sometimes mythologized as this huge, complicated, complex thing. And it is. But it's also really, really easy. As the post says, fundamentally, you do something, then see if it did it right. That's it. Everything else builds from there.
I absolutely agreed that writing high quality, maintainable tests is hard (in fact, I say so in the article). My point is really that the way testing is discussed, particularly in the Ruby community, makes testing seem more complicated than it is, and I think puts some people off.
I'm not sure this point really gets through. The way the article read to me was "Testing is simple and easy (1)"
1: small print - writing good tests is hard
I could likewise argue that "writing code is simple and easy, but writing good code is hard", but this statement has little value in itself. It seems more an argument against using complex testing frameworks and going for simpler syntax. As a fan of Nose, I absolutely agree.
If you write your tests first (TDD), your "complex business object setup" (aka state setup) invariably gets simpler.
The more complex your code, the more coupling there is, the more sources of bugs there are potentially... and the harder it is to write tests for that code.
Break down your code into smaller methods and more focused objects and just watch your unit tests get simpler.
People tend to forget that TDD is a 3-step process, not two. Even this article makes that mistake. Write a test, pass that test, refactor. People generally forget the refactoring step.
Basically though, if you find your test difficult to write, there is a problem, and it's not with the test. And that should be your sign that you either don't understand the problem, or you are trying to do too much.
Now, this is hard. It's hard to accept that way of thinking. But the end result has always been cleaner, better code in my experience. It's not always the obvious way, but it's the best way.
I'm not convinced, for a couple of reasons. I haven't found unit testing to help in any way with the complexity of business objects, which are often more-or-less dumb beans. When you are handling complex problems, you are going to end up with complex data structures. Unit testing can help with modularizing services, though.
The second problem is that whenever you're faced with a large, older codebase, it's often difficult or impossible to refactor away complexity in core components.
TDD is really enjoyable in CoffeeScript.
Since CoffeeScript supports literate code, you can write your specs in markdown and embed little code snippets which verify that your claims are true.
Currently I am only toying with it in one of my pet projects, but it seems promising.
Nope, there's no way around it. Tests are not easy.
But! Writing tests, learning from that, then improving your tests, which in turn help you design better software will make writing tests easier. And will help you write better code, in general.
Also, 99% of the time, the first test you are going to write in a job is for an existing piece of software. So it will definitely not be easy. But you will learn so much more about the software you are writing the test against than you would with the edit-and-pray methodology.
"Test driven development is now widely recognised as ‘a good thing’"
should probably be written
"Using test testing in development is now widely recognised as ‘a good thing’"
There is controversy around specifically TDD and studies showing that it does no better than writing unit tests during or after development. For citation see the references at the end of the chapter on TDD in "Making Software: What Really Works, and Why We Believe It"
One reason why TDD is better than writing unit tests after development is because then the tests actually get written. Whereas, if you wait to write them after the code, they often get lost in the pressure to deliver the feature and/or move on to developing new features.
The other reason TDD is good is that it forces you to make modular code with limited functionality to make it easy to test. You can do that without TDD... but TDD forces you to think about it ahead of time, rather than writing a huge mess of code and then saying "well, this will be too hard to test, so we won't bother".
TDD isn't magic, it's more of a mental hack to get you to do the right thing. If you do the right thing anyway, then no, TDD won't help.
Note, I'm not a huge TDD guy. In fact, I usually write my code first. But I've done some TDD and I can definitely see the benefit, and I try to do it when I can.
Unless I'm thinking of something else, that report fails to account for the other benefits of TDD, benefits that cannot be gained by writing the tests later. Nor does the study even consider them in its final analysis.
I've done my share of testing. When you walk into a meeting with the most senior dev team in the (significantly sized) shop, ask a single question about an underlying assumption, and have development go back for another couple of weeks to take another whack at it, you realize -- as one example -- that "testing" is not simple.
If you're lucky, you're doing real QA, where a smart, knowledgeable, and independently minded QA analyst can discover and raise such concerns before the work gets done.
"Testing" is a shithole because many (most?) shops don't give it much respect -- nor resources.
P.S. One of the shorter meetings of my life. 5 minutes, including the socializing. The technical part took maybe 2.
I recently decided to become a proper developer and started looking into testing, I'm almost through with "Laravel Testing Decoded", a book by Jeffrey Way, it focuses on Laravel but contains a wealth of information relating to testing in general. The book has given me a lot more value than I got out of any other resource that covered testing. Highly recommended and only $20: https://leanpub.com/laravel-testing-decoded
I worry that many developers write tests because it is "widely recognised as ‘a good thing’" without knowing why. It's clearly important to know why, otherwise how do you know when you are getting better at it?
Unit tests serve a number of different purposes (for me, YMMV):
1. When writing tests before the code, the tests can help you wrap your head around what the code should and should not do.
2. Forces distinct functionality/algorithms to be broken out into their own functions for easier testing.
3. Forces algorithms to be broken away from other, less testable, code (i.e. read from database, algorithm, write to database... the piece in the middle there should be in its own function for testing)
4. When refactoring, gives (more) confidence that the changes being made aren't breaking anything. The corollary to this is it makes refactoring more common because you can feel braver about doing so.
5. Proves the code is doing what you say it's doing (with the tests). Note that this doesn't mean the code is right, just that it isn't wrong in the ways you're testing. Learning to write tests with good coverage and how to partition input data for that is a skill learned by experience.
6. Documents exactly what the code does for someone that comes along later. For much code, this can be determined by looking at the code... but in many cases the tests can say it cleaner.
7. Documents what the code DOESN'T do, and this is an important one that can't really be understood by just looking at the code. If the tests don't say the code does something, it could change later. If the tests only exercise negative number inputs but the code happens to do something useful with positive numbers... users shouldn't rely on the positive number behavior.
8. Confirm that, when a bug is found, the fix actually fixes it. When finding a bug... a) find a way to reproduce it, b) write a test that does so and fails, c) fix the code and see the test pass... d) the bug never returns without being caught by the test.
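A rough sketch of that last workflow, with an invented parse_price bug just for illustration (minitest-style):

    require "minitest/autorun"

    # Hypothetical bug report: parse_price("1,50") used to return 150.0
    # instead of 1.5. Step b: a test that reproduces the bug and fails
    # against the old code.
    class ParsePriceRegressionTest < Minitest::Test
      def test_comma_is_treated_as_decimal_separator
        assert_equal 1.5, parse_price("1,50")
      end
    end

    # Step c: fix the code until the test passes. From now on the bug
    # can't come back without this test catching it (step d).
    def parse_price(string)
      string.tr(",", ".").to_f
    end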
This is a great answer, but perhaps you could clarify some points for me:
* I'm of a functional persuasion, I like to think about types before code, and I already tend to break things up into testable components (also known as pure functions). What benefits might I find by adopting TDD?
* I understand that there are different benefits to be had from the act of writing tests and the artefact that is produced (i.e. the test suite). Do you expect the tests that you write to find bugs immediately? Or does the artefact only help to document code/detect regressions?
Unit tests make sure that what you think the code does is what it is actually doing. This is great for catching corner cases that you might not always hit in a live test... or testing code that isn't yet called in live use.
It also makes sure that if code changes, it's still functioning correctly. Aside from catching initial bugs, this is the biggest addition to maintainability of code.
I find unit tests to be incredibly liberating. Have you ever made changes to some parts of the code, but then worried you might have broken something? With unit tests, you can be confident that your change didn't break any expected behavior.
Excellent. You've brought up some points that come up commonly. I have some follow up questions:
* To quote Dijkstra, "Testing shows the presence, not the absence of bugs", so presumably, you write tests to build confidence rather than to ensure "that what you think the code does is what it is actually doing". Is this accurate?
* If so, suppose you're given a codebase that contains tests. How do you determine how much confidence to place in said software?
* It's very common for the space of possible inputs to be infinite, so you can't possibly test every case. How do you decide which ones are useful to write tests for?
* What is your response to Rich Hickey's depiction of, what he calls, guard-rail programming ("Simple Made Easy" from around 15:30 onwards)?
* Finally, how do you know when you've written a good test suite? It's obviously good if it finds bugs, but how do you know that it is good without running it?
Sorry to bombard you, I'm asking because I'm genuinely interested and I want to write more robust software.
> Have you ever made changes to some parts of the code, but then worried you might have broken something? With unit tests, you can be confident that your change didn't break any expected behavior.
I must say, I've done every permutation of changing code with/without tests where my changes have broken/have not broken things. Subjectively, I can't say that there has been noticeable correlation in any direction. Perhaps that's because I'm bad at writing tests, but then, how do I write good ones?
I'm not sure I agree with you here. Tests are still only as clever as the person writing them. This means it's rarely going to catch corner cases that the developer wasn't already aware of.
I think the real value of tests is exposing expected behavior to teammates, and then providing quick sanity checks against that expected behavior.
The biggest thing new developers miss about testing is that most times you're not writing tests for yourself, you're writing them as a courtesy to your teammates.
> it's rarely going to catch corner cases that the developer wasn't already aware of
Learning to recognize potential edge cases and partition inputs to check as many input categories as possible is a skill learned through testing experience.
Documentation. Tests show the reader what the code is intended to do, and how to work with said code – information that is not always conveyed with the canonical source. The benefit over other forms of documentation is that it comes with automatic verification that the code works exactly as the documentation describes.
If you annihilate your test data with configuration options, and you abstract your "arranging" into factories and custom methods, then every test you write should be exactly three lines. When I learned this it gave me a consistent mental framework to think about how to write good tests.
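For example, with rspec the three lines are arrange, act, assert. Everything below is invented for illustration (a tiny Order object and a build_order helper standing in for a real factory):

    require "rspec/autorun"

    # Hypothetical domain object, kept tiny so the example runs on its own.
    Order = Struct.new(:items, :unit_price) do
      def list_price
        items * unit_price
      end

      def total
        items >= 50 ? list_price * 0.9 : list_price
      end
    end

    # Imagined factory helper: defaults hide the setup noise, options override it.
    def build_order(items: 1, unit_price: 10.0)
      Order.new(items, unit_price)
    end

    RSpec.describe Order do
      it "applies the bulk discount to large orders" do
        order = build_order(items: 50)                 # arrange
        total = order.total                            # act
        expect(total).to eq(order.list_price * 0.9)    # assert
      end
    end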
* Test the interface not the implementation:
This was an eye opening realization about what to test. If you haven't seen it yet, watch Sandi Metz' incredible RailsConf talk - http://www.youtube.com/watch?v=URSWYvyc42M - it's so good
* Stubs:
Stubs are exactly like the viruses we learned about in biology class. You load them up with a value, then they literally latch onto a method and inject that value into it. When you're just starting out with testing, effective use of stubs can get you really far, and only when you start hitting the limits of practicality do you need to think about bringing in something more complicated. (There's a minimal sketch of a stub after this list.)
* Testing isn't easy per se, but it is definitely enjoyable if you focus on making it that way
* Cucumber is pretty stupid, don't waste your time with it
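Picking up the stubs point above, a minimal rspec sketch of a stub latching onto a method and injecting a value; the Thermostat class is invented for the example:

    require "rspec/autorun"

    # Hypothetical class with one expensive/external method we want to neutralise.
    class Thermostat
      def outside_temperature
        raise "talks to real hardware"   # pretend this is the expensive call
      end

      def heating_on?
        outside_temperature < 18
      end
    end

    RSpec.describe Thermostat do
      it "turns the heating on when it is cold outside" do
        thermostat = Thermostat.new
        # The stub latches onto outside_temperature and injects a canned value.
        allow(thermostat).to receive(:outside_temperature).and_return(5)
        expect(thermostat.heating_on?).to be(true)
      end
    end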
That video was excellent, thank you! Sandi Metz always comes up with great analogies that clear up the conceptual fog in my mind. Her book Practical Object Oriented Design in Ruby is a great read too.
A common version is to have assertEquals(a, b), that will print out both a and b if they are not equal. Then add similar asserts for "not equal", "less than" and other relations.
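In plain Ruby, a hand-rolled version of those helpers might look like this (the names and messages are just one way to do it):

    # Tiny self-written assertion helpers: on failure, show both values.
    def assert_equals(a, b)
      raise "expected #{a.inspect}, got #{b.inspect}" unless a == b
    end

    def assert_not_equals(a, b)
      raise "expected something other than #{a.inspect}" if a == b
    end

    def assert_less_than(a, b)
      raise "expected #{a.inspect} to be less than #{b.inspect}" unless a < b
    end

    # Usage:
    assert_equals(4, 2 + 2)
    assert_less_than(1, 2)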
This is a good example for a small scale isolated project with a small amount of complexity. If however you are writing an enterprise level, scalable application, with many business objects and tons of complexity, things get way harder.
For example:
What do you do if your function has a call to a 3rd party webservice? You do a mock.
What do you do if you have 37 levels of state that are dependent on the objects you are receiving from the mock? Create 37 different business objects and verify that the output for each is correct.
This problem builds and gets bigger, harder, and more conceptually complex. So while it can be easy to get started, it is not by definition intrinsically easy.
However, it is well worth every minute spent. It makes refactoring and building up much much easier. It also leads a bread crumb of clues to figure out what the code is doing. It leads to better code stability, and far less regression.
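To make the webservice point above concrete, here's a rough rspec-style sketch; the PriceConverter class and its exchange_rate call are invented for the example:

    require "rspec/autorun"

    # Hypothetical code under test: converts a price using a remote FX service.
    class PriceConverter
      def initialize(client)
        @client = client
      end

      def to_euros(dollars)
        (dollars * @client.exchange_rate("USD", "EUR")).round(2)
      end
    end

    RSpec.describe PriceConverter do
      it "converts using the rate returned by the webservice" do
        # Mock the remote call so the test is fast, deterministic, and offline.
        client = double("FxClient")
        allow(client).to receive(:exchange_rate).with("USD", "EUR").and_return(0.5)

        expect(PriceConverter.new(client).to_euros(10)).to eq(5.0)
      end
    end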
I'm not criticizing your comment, just adding a little bit to it. You're totally right, there's a lot of complexity when you're testing complex features in a huge app.
Buuuuut, a huge complex app is probably not the right place to try learning any new technology. Do a few small things first, prototype something out, test it along the way. Then maybe something a little more complicated. Slowly expand your horizon of uncertainty.
That way you only need to focus on learning one thing at a time. The assert() call that OP suggested is exactly enough to start thinking about testing simple code; once you're doing well at it, you can start noticing repetition and opportunities for abstraction (and finding existing libraries that have these abstractions already done for you); then, you start doing testing on more complex code and start learning about mocks to facilitate testing there.
Some people do really well in a "big bang" kind of learning environment, but I definitely don't. For me, learning works best given: a rough map of where I'm trying to get (even just a mind map that gives me things to look up when I get stuck), and an opportunity to master each piece before expanding onto new topics.
I was beginning to believe your comment would be like many others I have read in the past trying to justify why testing, and specifically TDD, should not be done in "real world" development. "Real systems are real complex, which means tests become complex, so we shouldn't waste our time with tests," and so on. But really you put it so well! Systems are complex and complicated, and tests are as well. But the effort we put into both have value in the end. Well said!
I am constantly confused by this idea that because testing can get hard and complicated we should not do it. It's as if to say we should trust our untested code purely because the testing would be more complicated than not testing.
I develop software in a research environment, turning PhD students' code into production-quality systems. I think there are a lot of developers out there who don't write tests, or only write tests because you're "supposed" to (as evidenced by the popularity of the Stack Overflow question "Is Unit Testing worth the effort?" http://stackoverflow.com/questions/67299/is-unit-testing-wor...).
They don't see the benefit for themselves, but like proper documentation, it's mainly a courtesy to other developers.
What I wish I was told about testing is that if you modify your workflow, it benefits YOU as well as your colleagues. Without testing, trying out a feature means:
1. Make changes.
2. Restart application or server (possibly refreshing page).
3. Go through a series of clicks or keystrokes.
4. Didn't work? Go to step 1.
Steps 2 and 3 can take a lot of time, and you'll likely be doing them over and over again (boring!). With testing this becomes a 2-second, cognition-free process, and even that can be automated with Grunt/gnotify notifications. The boring part of coding is now automated. With loose coupling, you can quickly isolate your change/compile/run cycle to the part of the code you're working on.
Programmers already know the benefits that testing brings 6 months down the line when something breaks, but it's wishful thinking to expect people to think that far ahead (and smaller companies are often in a hurry to get something out the door before the money dries up). If you want intrinsic motivation to write tests, you learn how to work such that tests provide immediate benefit.
> I think there are a lot of developers out there who don't write tests
I like to imagine a world where automated analysis supplements testing by providing a middle ground of "moderate gain + no programmer effort" (if you're interested, http://bugchecker.net)
But testing doesn't eliminate 2 and 3. Testing only checks for bugs you've thought of. You still need to do manual tests of new features to catch bugs you didn't think of.
It doesn't eliminate them, but it minimises the effort of performing these steps. You are of course correct in saying that you need to do manual testing to discover issues that you never thought of, but this is a separate process (which should not usually be done by developers who are too entrenched in the inner workings of the system to see it with fresh eyes).
It can be, but both the test code and the error messages from that are worse than with many existing testing frameworks to my eye, so I wouldn't choose that over the existing alternatives.
:: you've written an equality, rather than truth, assertion named "assert" rather than "assertEquals".
:: you're writing what would be the code of a truth assertion as a string for the message text.
For the same role, most unit testing frameworks have an equality assertion / expectation assertion that serves essentially the same role but reads better.
> I don't want.to.sitThere.writingThis(kindOf, crap)
Then don't use spec-style frameworks. There's plenty of xUnit and similar style frameworks that use simple assertions (and, yeah, you can just use bare truth assertions all the time if you want, but you've usually got a handful of nice tools for common cases like equality assertions to go with them.)
If you want to write your own one-function test microframework on that model, feel free; I don't see any reason to prefer using it to existing frameworks, whether xUnit-style or spec-style.
Testing is only simple if you work on simple code; it does not work nearly as well for real-time, embedded, or low-level code.
However, the beneficial side effects of TDD are much greater than just the verification aspects so I think more foundational work needs to be done on test frameworks for hard problems.
I respectfully beg to differ with your first statement. I TDD in embedded and low level code, well, always. There is a great work on this by Grenning, worth reading by anyone who wants to understand TDD better, whether it is for embedded C or any other type of application.
I have that book and really like it. I have been trying to apply it to my embedded work. It's a great start but we need a lot more development in that area, including more active support from the big embedded tool vendors.
So he doesn't like Rspec because it's in the spec style... which if anything makes tests a bit easier on the eye and more human-readable? Ooook
My experience is that a simple "it works" test covers most of the potential bugs in the test subject, so I just write {method name}_itWorks and just call the method inside asserting whatever the method was supposed to do.
I only write more tests if there are important business rules (new employees should be created with "Pending" status), or to cover edge cases uncovered by QA.
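A rough sketch of that convention, using an invented Employee class and the "Pending" rule above (minitest-style):

    require "minitest/autorun"

    # Hypothetical Employee class, just enough to run the example.
    class Employee
      attr_reader :name, :status

      def initialize(name)
        @name   = name
        @status = "Pending"
      end
    end

    class EmployeeTest < Minitest::Test
      # The basic "it works" test: call the method, assert what it was supposed to do.
      def test_new_itWorks
        assert_equal "Ada", Employee.new("Ada").name
      end

      # An extra test only because there is an important business rule behind it.
      def test_new_employeesStartAsPending
        assert_equal "Pending", Employee.new("Ada").status
      end
    end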
This little article is pernicious. Please do not start thinking this is proper testing. What he shows is a lame replacement for some manual browser testing.
There's no talk about clean state between tests, no concern about emulating data sources, etc.
These are what should be called half-assed functional tests, not even unit tests as everyone here says.
Man, tests really are black magic around those parts...