What are the most important things to test? What are some good intros to pick up the basics on writing automated tests?
The cool thing about testing (and TDD in particular) is that even introducing a bare minimum of that sort of discipline into your work consistently will quickly pay pretty big dividends. A quick way to get started is to read this Wikipedia article: http://en.wikipedia.org/wiki/Test-driven_development. Focus on understanding the TDD cycle. Mocks and stubs are useful to know, but you can safely consider them "advanced" and skip them if they confuse you initially.
Then, pick a test harness for your platform, say RSpec (http://rspec.rubyforge.org), if you're a Ruby person. Read the documentation and search for blogs and tutorial articles on the framework you've chosen. Then just start designing a simple module using it!
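To make the cycle concrete before you pick a harness, here's a minimal sketch in plain Ruby with no framework at all (`leap_year?` is a made-up example, not from any tutorial): write the failing check first, then the simplest code that passes it, then refactor while the check stays green.

```ruby
# Red: you'd write the checks at the bottom first; they fail because
# leap_year? doesn't exist yet.
# Green: write the simplest implementation that passes.
# Refactor: clean up while the checks keep passing.

def leap_year?(year)
  (year % 4 == 0 && year % 100 != 0) || year % 400 == 0
end

# The "tests": honk loudly on failure, stay silent on success.
raise "expected 2000 to be a leap year" unless leap_year?(2000)
raise "expected 1900 not to be a leap year" if leap_year?(1900)
raise "expected 2024 to be a leap year" unless leap_year?(2024)
puts "all green"
```

A real harness like RSpec gives you the same idea with nicer reporting and grouping, but the loop is identical.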
As you practice, you'll find it useful to read up on Mocks and Stubs and dig deeper into the philosophy and best practices. Dave Astels has some great writings out there.
From Ganssle's article at http://www.ganssle.com/articles/deconstructing%20xp.htm
"XP's motivating guru, Kent Beck, also has a new book out. Test Driven Development (Addison Wesley, 2003, ISBN 0-321-14653-0) focuses on XP's testing practice.
I bought the book because I'm fascinated with testing. It's usually done as an afterthought, and rarely hits the hard conditions. Unit tests are notoriously poor. The programmer runs his new function through the simplest of situations, totally ignoring timing issues or boundary conditions. So of course the function passes, only to inflict the team with agony when integrated into the entire system. XP's idea of writing the tests in parallel with the code is quite brilliant; only then do we see all possible conditions that can occur.
Beck argues for building tests first and deriving the code from these tests. It's sort of like building a comprehensive final exam and then designing a class from that. An intriguing idea.
But what if the test is wrong? Test driven development (TDD) then guarantees the code will be wrong as well.
Worse, TDD calls for building even the smallest project by implementing the minimal functionality needed to do anything. Need a function to compute the Fibonacci series? Create the simplest possible test first: in this case, check to see that fibonacci(0) is 0. Then write a function that passes that test. Try fibonacci(1); that breaks the test, so recode both the test and the function. Iterate till correct.
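Those iterations might look like the following Ruby sketch (the function name follows the article; the hard-coded intermediate versions are paraphrased, not Beck's actual code):

```ruby
# Iteration 1: test fibonacci(0) == 0; the simplest passing code returns 0.
# Iteration 2: test fibonacci(1) == 1; hard-code the second case too.
# Iteration 3: tests for fibonacci(2) and beyond force the generalization.

def fibonacci(n)
  return n if n < 2                    # covers the first two tests
  fibonacci(n - 1) + fibonacci(n - 2)  # generalization forced by later tests
end

# The accumulated test suite after a few iterations:
[[0, 0], [1, 1], [2, 1], [3, 2], [10, 55]].each do |n, expected|
  raise "fibonacci(#{n}) != #{expected}" unless fibonacci(n) == expected
end
```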
The book shows how to build a factorial program, which results in 91 lines of code and 89 of test... after 125 compilations! Klaxons sound, common sense alarms trip.
Agile proponents love the interactive and fast action of this sort of method. But programming isn't about playing Doom and Quake. If you're looking for an adrenaline rush try bungee jumping.
125 compilations for a trivial bit of code is not fast. Dynamic? You bet. But not fast."
Test Driven Development is not quite the same as having unit tests or automated tests, which I believe the poster asked for. The jury is still very much out on how useful TDD (and XP for that matter) really is.
Many talented programmers believe XP/TDD is essentially hype.
This is not to say Pius is wrong, just to add a note of caution: be skeptical of the XP hype machine.
"Try it and decide for yourself" is probably the best advice possible.
Anyone who wants to test in Python should really check out nose (nosetests). I made a slight modification to Django so that nose gets run any time a code change is detected in my project (i.e. whenever the development server restarts). So every time I make a code change, the tests run. Combining this feature with doctests makes testing a total breeze (and confidence booster).
If you don't like TDD (and I have to admit I'm not a fan), you can still produce tests while you write code (you probably already are). Whenever you write a bit of code, you probably do something to check whether it is producing the expected outcome - even if it is as simple as writing a little script to generate some output that you visually inspect as it flashes by on the screen. If it looks good, you keep developing. If it fails, you figure out why.
Just store these mini-testing scripts as unit tests, and verify the expectation/output in a short statement. If you use a framework, you'll have some methods for this available to you - if you don't, no biggie, just write something that honks loud and gives you a message diagnosing the failure.
It's not going to satisfy the TDD crowd, and honestly your coverage won't be as good as if you had done TDD, but there you go - you just got yourself some unit tests, and you didn't have to disrupt the mental flow that you prefer to TDD.
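That "honk loud and diagnose" helper can be a couple of lines; here's a sketch (no framework assumed, and `titleize` is just a made-up function standing in for whatever you're eyeballing):

```ruby
# A bare-bones assert: prints a diagnostic and aborts on failure,
# stays quiet otherwise.
def assert_equal(expected, actual, label = "check")
  unless expected == actual
    abort "FAIL #{label}: expected #{expected.inspect}, got #{actual.inspect}"
  end
end

# The kind of mini-script you'd otherwise visually inspect,
# captured as a repeatable test:
def titleize(s)
  s.split.map(&:capitalize).join(" ")
end

assert_equal "Hello World", titleize("hello world"), "titleize basics"
```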
Unless your software will kill people if it fails or something, my advice is: observe what proportion of your development time you spend debugging. If it's one half or more, unit testing would probably benefit you. If it's significantly less than 50%, then you're probably better off riding bareback, because writing and updating those tests will take at least half of your time.
The problem I see with the "you'll test anyway, you may just as well automate it" argument is that unit tests are no substitute for testing with real users.
I find that (at least in early development), when I'm in the zone I refactor several times a day (different parts of the system, of course). If my break-even point is twenty times, I'd never get there. Every refactoring would virtually obliterate all of the previously automated tests.
Remind me again, how many web startups have failed because of a bug? How many? Hm.
Or maybe you're with the aaron crowd in believing that it has to be absolutely perfect or it's useless. Heck, even that only could really apply to the design and architecture of an app.
Perfection is rarely necessary.
edit: And what's with so many downvotes? I thought yc was more mature.
I am not a 100% TDD fanboy either, but it's not something to dismiss out of hand.
Adding tests up front, from my experience, adds about 40% overhead to a project (time spent actually writing the tests). You may get some of this time back later, as the author claims, as the automated test suite is run.
I've also been on projects where the code & entire test suite were scrapped & re-written after 6 months of initial development, pretty much making the whole test-up-front thing of little value.
I was referring to the latter.
The same environment and thinking that brought us Java, and made it a clusterfuck of "abstractions" and "design principles".
I'd rather just code and design at will, and be able to refactor, change, or throw away code without being held hostage by these tests, which will break at some point, not because your code is wrong, but because they were designed to test something according to one way of thinking, and when that something changes, all those tests are obsolete.
In fact, that's one of the major benefits of testing: a refactoring safety net!
Refactoring is inevitable. I find that on projects where I've written tests, refactoring goes a lot better.
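A tiny made-up illustration of that safety net: the test pins the behavior down, so the implementation can be rewritten freely underneath it.

```ruby
# Behavior pinned by the test below: sum of squares of a list.
def sum_of_squares(xs)
  # Original version, an explicit loop:
  #   total = 0
  #   xs.each { |x| total += x * x }
  #   total
  # Refactored version: same behavior, more idiomatic. The test passes
  # before and after the change, so the refactoring is safe.
  xs.sum { |x| x * x }
end

raise "regression!" unless sum_of_squares([1, 2, 3]) == 14
raise "regression!" unless sum_of_squares([]) == 0
```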
Which, I agree, is one of the shortcomings of unit testing: placing an additional burden on improving interfaces and semantics.
No, it just wasn't clear from your original comment.
I can see how TDD is important in critical software (avionics, banking, trading, etc.), but for the average web 2.0 app, speed of execution should be paramount, and I think TDD just slows you down. You end up writing 2x the code, plus every time you have to change something, you have to do it twice.
In mobile client development, I found TDD pretty useless, unless you are doing mission-critical things, where speed of writing software is less important than its accuracy (imagine phones crashing; not a good thing).
and how sloppy you were up until now
The question for you is: would you rather spend your time in your web browser/console & debugger, or in your text editor writing code? Clearly the latter has significant advantages when it comes to reliability, etc., but if you're willing to forgo all of that, then you're really talking about where you want to spend your time.
Personally, I'd rather be in my text editor than my debugger (Okay, I don't have a debugger, since I write tests). My text editor is where I find my zone.
My strategy has generally been to throw some tests that cover most things in a general way, and when I find bugs, add lots of detailed tests that make sure those bugs will never come back again. That makes it feel less pedantic.
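A sketch of that strategy (the bug scenario is hypothetical): one broad test up front, then a pointed regression test added the day a bug turns up.

```ruby
# The function under test; `word_count` is a made-up example.
def word_count(s)
  s.split.size
end

# The broad, general test written up front:
raise unless word_count("one two three") == 3

# Hypothetical bug found later: an older implementation miscounted
# padded input. Pin it down with detailed regression tests so the
# bug can never come back:
raise unless word_count("  padded   input  ") == 2
raise unless word_count("") == 0
```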
That's actually the opposite of the truth. The second D is for Design (or Development). The idea is that you organically develop the spec and the code in tandem.
If you weren't using TDD, you might go off and create a BowlingPin class and a BowlingBall class (if one doesn't already exist), and hack together a method in some arbitrary order. That's cool, but the TDD approach would have you write a mini "test" first. In pseudoRuby:
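(A sketch of what such a mini-test might look like; `BowlingPin`, `got_hit`, and `fell?` are taken from the surrounding comments, and `ball` is assumed to already exist:)

```ruby
pin = BowlingPin.new
pin.got_hit(ball)
assert pin.fell?
```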
This, of course, will fail because there is no BowlingPin class. So you write the class skeleton and re-run the tests.
Fails again. Why? There is no got_hit method on BowlingPin? Shit. Now you've got to go and write the simplest got_hit method that'll work. So you write something like:
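(Something like this, sketched in the same pseudoRuby; the instance variable name is an assumption:)

```ruby
class BowlingPin
  def got_hit(ball)
    @fell = true  # the simplest thing that could possibly work
  end
end
```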
Rerun the tests. Dammit! There's no "fell?" method -- gotta implement it! And so on and so on.
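Carried one step further, the class ends up with just enough code to pass (a sketch; the method names follow this thread, not the book's actual example):

```ruby
class BowlingPin
  def got_hit(ball)
    @fell = true  # simplest implementation that satisfies the test
  end

  def fell?
    !!@fell       # coerce nil to false for a pin that hasn't been hit
  end
end

pin = BowlingPin.new
pin.got_hit(:ball)  # :ball stands in for a BowlingBall instance
raise "test failed" unless pin.fell?
```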
There are a few things to note here. First, it's pretty critical to use a decent test harness. Running and writing tests has to be crazy fast otherwise this approach won't work well.
Next, in running through the TDD cycle you're actually (1) specifying the interfaces and behavior of your code and (2) implementing as narrowly as you can. The actual "tests" you get out of it are almost secondary! If implemented properly, it's really a design methodology. That's why the new trend is to change the language from Test Driven to Behavior Driven. The use of the word "test" seems to (understandably) evoke the same response you had of needing an iron-clad spec to work from initially. It's just not the case.
I think a lot of people here are confused about what exactly a unit test is. When you write a unit test, you don't say: "I want a bowling application, with such and such a scoring method, and such and such a pin layout".
It's more like - "I am writing a class (or portion of a class) here that's going to perform a certain function in my 'blue sky', exploratory phase project". If you don't know what your code is going to do, how can you really write it? That is to say, exploratory coding isn't just typing random characters. You have some ideas.
So, instead of just writing the first function/method/class/controller action/model, you write a test first, and watch it fail. Then, you implement your method. That way, you know that your method works.
You really don't need to have an overall big picture of your application to write tests. That view is more of a misunderstanding of testing than anything else.
That doesn't strike me as an optimal use of time.
On the other hand, once I'm pretty sure of what things should look like, yeah, test cases are invaluable in demonstrating that everything works like it should, that corner cases don't break things, and that any new bugs stay fixed. Especially if other people have to work with the code and don't know it well, they can wade into it with confidence that they won't break anything (or at least are less likely to).