Is it just me? I don't know if I'm just particularly cranky today. After a major refactoring of our app, I find myself drained by unit testing every minor change. Ugh, I wish Vim had an automatic unit testing suite like some major IDEs have... anyone else share my rant?
Before I got the bug, I was wary of refactoring and adding new features. I would do a few simple tests when I originally wrote the code to satisfy myself that it worked. But then if I'd change the program flow or move things around, the benefit of that ad hoc testing was now lost, and I'd have to do it again.
Unit tests make me feel confident that the stuff I've got really works, even after I've modified it substantially. I'm a lot more likely to make big sweeping improvements because of it.
I actually enjoy writing unit tests _because_ I'm lazy :).
When I write a piece of Ruby code, I have a choice: I could either write a unit test or load the code into IRB and manually test it (or even load the real app and test the code in there).
For me, doing it manually is often more work. Let's say I have five tests in my head. My code passes the first four, but fails the fifth. After I fix the bug I found, I have to manually go back and test all five behaviors again. Unfortunately, my code often has more than one bug, so this process can take a while. Once my code passes the tests, I always run the real app to make sure there are no bugs in my tests.
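To make that concrete, here's the workflow sketched with Python's unittest (the function and the specific behaviors are invented for illustration; the idea is the same in any xUnit framework): each check I'd otherwise repeat by hand becomes a test method, and one command re-runs all of them after every fix.

    import unittest

    # Hypothetical function under test -- stands in for whatever I'd
    # otherwise poke at by hand in the interpreter or the running app.
    def normalize_username(name):
        return name.strip().lower()

    class NormalizeUsernameTest(unittest.TestCase):
        # Each of the "tests in my head" becomes a method, so a single
        # run covers all of them after every fix.
        def test_lowercases(self):
            self.assertEqual(normalize_username("Alice"), "alice")

        def test_strips_whitespace(self):
            self.assertEqual(normalize_username("  bob "), "bob")

        def test_leaves_clean_input_alone(self):
            self.assertEqual(normalize_username("carol"), "carol")

    if __name__ == "__main__":
        unittest.main()  # re-running everything is one command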
Plus, for me, writing code is just more fun than doing tedious manual testing, even if it takes a little longer initially.
As with everything, there are exceptions. If I'm fixing bugs, I tend to take a TDD approach, but if I'm doing exploratory programming, I tend not to write tests until the code is pretty stable. It's also hard to unit test GUI code, so think about the expected return on your time investment.
One thing to keep in mind: tests are more fun to write (and more importantly, more effective) as you get better at them. Writing tests is an art unto itself - just because you write good production code doesn't mean you'll instantly write great test code. It takes practice.
I sometimes hear people say, "Unit testing takes too much time to write and maintain!" While there will always be some costs, I've found that as I've gotten better at writing tests, those costs have gone way down, while the benefit has gone up.
But that's exactly the point. You can be MORE lazy. You spend so much more time than you realize testing and re-testing your own code... doing monkey-level QA! Stop doing all that work. Be more lazy. Write your tests first!
I do. I used to consider it a necessary evil - I hated unit testing, but I hated not unit testing more. But then I started using the green bar as my first indication that the code works. Everybody likes seeing their code work, so naturally I liked seeing the green bar. I'd just write a test and have it pass instead of running the program.
(Naturally, I'd run the program afterwards to make sure it really worked - but if it didn't, I could just consider that a bug in the tests, write the test, fix the code, and close the bug. Everybody likes closing bugs, so there's the reward...)
Doctest is great too, since it's pretty common to use the Python interpreter as the first indication that your code works. Then you just paste the session transcript into the docstring, and you're done.
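For concreteness, a tiny made-up example: the interpreter session gets pasted into the docstring, and running python -m doctest on the file replays it as a test.

    def mean(values):
        """Return the arithmetic mean of a non-empty sequence.

        The examples below were pasted from an interpreter session;
        python -m doctest this_file.py replays them as tests.

        >>> mean([1, 2, 3, 4])
        2.5
        >>> mean([10])
        10.0
        """
        return sum(values) / len(values)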
The idea is that you write a few lines of code, test it (most likely you'll get red), and you keep fixing your code until you get green, and only then you proceed to the next task.
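One turn of that loop, sketched in Python with unittest (the names here are invented for illustration):

    import unittest

    # Step 1: write the test first. Running it now is red, because
    # parse_price doesn't exist yet.
    class ParsePriceTest(unittest.TestCase):
        def test_strips_currency_symbol(self):
            self.assertEqual(parse_price("$19.99"), 19.99)

    # Step 2: write just enough code to turn the bar green, then
    # move on to the next small task.
    def parse_price(text):
        return float(text.lstrip("$"))

    if __name__ == "__main__":
        unittest.main()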
As far as I know, it started with JUnit on Eclipse, and then others implemented the concept in other languages/IDEs.
In Smalltalk/Squeak, you can write the Unit tests and the code you're testing alongside each other, live, from the debugger. It certainly makes things easier being able to see the live objects, in deciding how they should behave. And by testing the objects in situ, you pretty much know your code is going to run as you expect.
It's a feature in some implementations: a progress bar advances while your tests execute, and finally turns red if any test fails, or green if all passed.
The term "green bar" is used, by extension, to refer to the notification that all tests passed, even in colorless console-based frameworks.
For anyone reading this w/ a .NET/C#/Visual Basic bent, NUnit is one of the big unit testing tools (and yes, you get the big green bar [or you aspire to]). VS2008 has unit testing built in.
I tried it for a few weeks. I got mildly test infected, but I don't do it anymore.
I found unit tests are most useful for the parts of my code I'm already the most confident about anyway (i.e., the functional parts). Testing side-effect-intensive code (e.g., games) can be done, and the discipline can improve your design skills and taste. But it can also drag you into boilerplate addiction and architecture astronautics.
But what really keeps me from writing unit tests is that they are a hindrance for the types of changes that scare me the most: those that introduce far-reaching changes to the interfaces between parts of my codebase. Those kinds of changes force you to rewrite your tests anyway. Thus, they add a cost and a mental barrier without offering the desired confidence in the transition (in a way, tests and code 'validate' each other; part of that confidence is lost while you are refactoring your tests).
To summarize: they help where I least need it, and they are a net hindrance where I'd need their help the most.
I don't do pacemakers or financial software, so I'd rather get the occasional bug during development, and use heavy human testing before release.
Test-driven development turns unit testing into a game where you're trying to improve your code to get more 'points' (passing tests). It's very fun to do in pair programming, with one of you improving the code and the other writing tests to catch any problems.
I never did testing until my current app and it has made my life much easier.
Especially when working in a group, they're a great way to verify that your parts are working the way someone else needs them to work. It means neither of you has to go chasing after what's wrong.
One thing I don't like about unit testing is that it can give you false positives that everything is working correctly.
When you follow the red-green pattern and write the test, have it fail, then write the code to make it pass, I find that my brain is in the same mindset when I write the test as when I write the code. So, as a simple example, if I write a test that says add(2,2).should == 5, it fails since there is no add() method. I then quickly write the add method to allow the test to pass, and I am happy that I get a passing test even though the logic is incorrect. Obviously this is an over-simplification, but I have experienced situations that have some resemblance.
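Here's that trap as a runnable sketch (in Python's unittest rather than RSpec, with the expectation deliberately wrong as above): the test and the implementation agree with each other, the bar goes green, and the logic is still incorrect.

    import unittest

    # The test encodes the wrong expectation (5 instead of 4)...
    class AddTest(unittest.TestCase):
        def test_add(self):
            self.assertEqual(add(2, 2), 5)

    # ...and the code is then written just to satisfy the test, so
    # both pass together even though the arithmetic is broken.
    def add(a, b):
        return a + b + 1

    if __name__ == "__main__":
        unittest.main()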
I can't say I always agree with the laziness argument. I can't even count the number of times it would take me only a few lines of code to get something done, but writing all the unit tests increases the time to complete the task by a factor of 10. Other times the tests easily save me hours in a day. So it depends on exactly what they are testing and how they are doing so.
I am not saying I don't like them all the time, just sometimes :)
I know some people who enjoy testing other people's code as a challenge: "let's see what happens when I enter a negative number of days/make an employee their own boss/paste word.exe into that text field..."
The first job I had after college, I remember playing around with my company's online commerce software. I found out I could enter a negative number of items to purchase and my total cost would be negative. Or, I could buy 1.537 dresses.
Amazingly, no one else knew this. (That probably should have been a big red flag for me.)
If I'm using Python, I'm happy because it has built-in testing modules I can count on.
There are also cases where Python unit tests are worthwhile for other languages. For instance, I've subclassed unittest to quickly run and diagnose arbitrary Unix programs in isolation, and further subclassed it to help launch specific compiled programs under test.
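I don't have those classes handy, but the general shape is something like this sketch (the program and the expected output below are just placeholders):

    import subprocess
    import unittest

    class ProgramTestCase(unittest.TestCase):
        """Base class: run one external program and capture its output."""
        program = None      # subclasses set this to a command line
        input_data = ""

        def setUp(self):
            # Launch the program in isolation for every test method.
            self.result = subprocess.run(
                self.program,
                input=self.input_data,
                capture_output=True,
                text=True,
            )

    class SortNumericTest(ProgramTestCase):
        # Example subclass exercising the standard Unix sort(1).
        program = ["sort", "-n"]
        input_data = "10\n2\n1\n"

        def test_exit_status(self):
            self.assertEqual(self.result.returncode, 0)

        def test_sorts_numerically(self):
            self.assertEqual(self.result.stdout, "1\n2\n10\n")

    if __name__ == "__main__":
        unittest.main()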
If I join a team where someone has already done the work of setting up a reasonable test infrastructure, adding and maintaining tests is often OK.
But anything starting from scratch, count me out. :) Sometimes you're thrown into a semi-mature C++ project where they weren't making modules "testable" from the start, and that's way too much effort to retrofit.
I love it. But I only have, like, two tests, so it's not very useful.
Honestly, I've only recently started writing tests, so it doesn't impact my code very much, but it's awesome for being able to know that a function change is doing what I think it's doing. Just tonight I added a new regex to a function that generates a "safe" string for use as an ID for DOM elements, and being able to test it from the command line--and see just the relevant output--rather than reloading the page several times and searching for the converted string, saved me several minutes.
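Something in the spirit of that change, sketched in Python (the helper name and the regex here are invented; the point is just that a quick command-line run shows the converted strings without a page reload):

    import re

    def dom_safe_id(text):
        # Hypothetical helper: collapse anything that isn't a letter,
        # digit, hyphen, or underscore into a single hyphen, and make
        # sure the result starts with a letter.
        cleaned = re.sub(r"[^A-Za-z0-9_-]+", "-", text).strip("-")
        return cleaned if cleaned[:1].isalpha() else "id-" + cleaned

    if __name__ == "__main__":
        # Quick command-line check instead of reloading the page:
        for sample in ["Hello, World!", "2 fast 2 furious", "already-safe"]:
            print(repr(sample), "->", dom_safe_id(sample))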
As the comments here show, people tend to come down on unit testing in a binary way (love it or hate it). Like a lot of things in software, especially in software process and double-especially when consultants and authors are involved, it became a religious question. Advocates insist up and down that they love it, it saves time, it's all good. But it's not all good. Unit testing has benefits and it also has costs: you have to write the tests and you have to maintain them. The cost of an entire second codebase is hardly insignificant. So the rational question to be asking is whether the benefits exceed the cost, or rather, when they do.
My experience is that it depends on things like language, environment, and team. I've written many thousands of unit tests in C# and Java and trained quite a few people to write them, too. But when I began working in Common Lisp, I was surprised to find that I didn't need them as much. I write and test my code by evaluating expressions in the REPL, and I mostly write functional code without side effects. This turns out to yield many of the benefits of unit testing without many of the costs. The overall tradeoff becomes different, making unit testing less valuable, except in targeted places. Other things that in my experience increase the overall benefit of unit testing include OO designs, larger teams, and corporate environments.
How much I enjoy writing unit tests depends on my gut feeling of whether the benefits are exceeding the costs. If I'm struggling with a tricky algorithm that I want to work out a bunch of examples for, I enjoy writing unit tests very much because they're helping me solve my problem. But if I'm going through the motions of updating a bunch of unit tests that aren't relevant to what I really want to do (and this happens a lot on larger XP projects despite what the advocates say), I don't enjoy it.
I hate it but I still try to force myself to do it. It really, really helps to find bugs if refactoring or a new feature breaks something, so it's a necessary evil. I call it evil because unit-tests themselves have to be written and debugged, which takes time. I believe I should just practice more, so that I could write them incredibly quickly, then they won't suck.
I have been using Behavior driven development (BDD):
http://en.wikipedia.org/wiki/Behavior_driven_development
This has made unit testing more intuitive (and fun) for me. Specifically, I have been using RSpec, http://rspec.info/ , which is an excellent BDD framework for Ruby. I am not a purist who writes the tests (behaviors) before writing the actual code. However, my experience has been that it is important to eventually have thorough unit tests for your code to ensure that your application is more robust. It allows you to add new features with greater confidence.
I enjoy unit testing when it all lights up green and I'm doing TDD. What I don't enjoy is retrofitting unit tests to my code. Time spent debugging the unit tests themselves (when I know the code is ok for at least a small value of "working") always feels frustrating and wasteful.
I find that TDD doesn't work well for exploratory programming where I'm feeling my way around a new library, or trying to work out a satisfactory architecture for a new component.
It's well worth it for future maintainability but certainly sometimes it is a chore.
Even aside from support for pretty unit testing, having a good tool for automating the generation of test cases makes a world of difference. http://www.haskell.org/haskellwiki/Introduction_to_QuickChec... for example is a great tool that allows for pretty expressive specifications and counterexample generation.
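QuickCheck itself is Haskell, but the same property-based idea exists elsewhere; for example, a rough Python analogue using the hypothesis library (assuming it's installed) looks like this -- the framework generates the inputs and shrinks any counterexample it finds:

    from hypothesis import given, strategies as st

    # Property: sorting is idempotent and preserves length. The
    # library generates the input lists; on failure it shrinks them
    # to a minimal counterexample.
    @given(st.lists(st.integers()))
    def test_sort_is_idempotent(xs):
        once = sorted(xs)
        assert sorted(once) == once
        assert len(once) == len(xs)

    if __name__ == "__main__":
        # Calling the decorated function runs it over many generated cases.
        test_sort_is_idempotent()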
I find that by building a test, I get to know what the code is doing and why better than any other way. It's a great learning experience.
Also, while thinking about and building a test, I think of simplifications and new features to add. Why? It's because technically, a test actually uses the code. If you and your code are friends, prepare to become lovers.
I absolutely loathe writing them, but that's easily overcome by the sense of relief I get at being able to extensively test my software with a single automated script.
Automated regression testing while I enjoy a coffee at the local cafe? That's a win!
Not exactly: it forces you to throw away bad code inside a component _that conforms to the unit test_. If the unit test inherently tests something that isn't optimal for the larger problem, then you're no further ahead.
There's a difference between checking that the code works overall, and checking that each individual component works.
Anyways, I realize I'm in the minority (in many ways..), but I find that if I have a crapload of unit tests, then I'm more inclined not to want to change how something works (because then I'd need both new code AND new unit tests - for everything).
To me, that's a bad thing.
As an aside, are there any studies showing that the proliferation of unit testing in the last few years has actually reduced the number of bugs and/or decreased development time _overall_?
It does make it harder for me to completely throw out older versions, but it makes it way easier for me to refactor my old version into a better new version (which I think is often more productive than just starting from scratch).
I find this to be very true: writing the tests at the beginning is exciting because you get to see what your code is going to accomplish. Then filling in the code to pass the tests is exciting because you're working toward a very clear goal with a very clear payoff. Best way to work, imho.
Fastest way to work, too, since you figure out in advance specifically what you're trying to accomplish. It helps avoid YAGNI syndrome (http://c2.com/xp/YouArentGonnaNeedIt.html)!