We don't write tests. There just isn't time for luxuries. (jamesgolick.com)
24 points by byosko 3463 days ago | 43 comments



I'm a chronic manual tester. But I want to start getting into automated tests.

What are the most important things to test? What are some good intros to pick up the basics on writing automated tests?


I personally think that test/behavior driven development is the way to go, so my advice will be geared in that direction. With that in mind, the "reference" text on it is probably "Test Driven Development: By Example" by Kent Beck. I've only skimmed it, myself, but it seems solid and Kent Beck is a smart guy.

The cool thing about testing (and TDD in particular) is that even introducing a bare minimum of that sort of discipline into your work consistently will quickly pay pretty big dividends. A quick way to get started is to read this Wikipedia article: http://en.wikipedia.org/wiki/Test-driven_development. Focus on understanding the TDD cycle. Mocks and stubs are useful to know, but you can safely consider them "advanced" and skip them if they confuse you initially.

Then, pick a test harness for your platform, say RSpec (http://rspec.rubyforge.org), if you're a Ruby person. Read the documentation and search for blogs and tutorial articles on the framework you've chosen. Then just start designing a simple module using it!
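To make that concrete, here's the kind of first exercise a harness walks you through. The sketch below uses plain Ruby assertions so it runs without any gems; in RSpec you'd wrap the same checks in `describe`/`it` blocks. The `Stack` class is just an illustration, not anything from the thread:

```ruby
# A tiny module under design: a stack with the usual operations.
class Stack
  def initialize; @items = []; end
  def push(x); @items.push(x); self; end
  def pop; @items.pop; end
  def empty?; @items.empty?; end
end

# Spec-style checks: state an expectation, then make the code pass it.
stack = Stack.new
raise "a new stack should be empty" unless stack.empty?

stack.push(1).push(2)
raise "pop should return the last item pushed" unless stack.pop == 2
raise "one item should remain" if stack.empty?

puts "all checks passed"
```

Write the checks first, watch them fail against an empty class, then fill in just enough implementation to make them green.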

As you practice, you'll find it useful to read up on Mocks and Stubs and dig deeper into the philosophy and best practices. Dave Astels has some great writings out there.


I personally found Kent Beck's book very tedious and boring. In it, simple classes are bludgeoned into submission through the overuse of tests.

From Ganssle's article at http://www.ganssle.com/articles/deconstructing%20xp.htm

"XP's motivating guru, Kent Beck, also has a new book out. Test Driven Development (Addison Wesley, 2003, ISBN 0-321-14653-0) focuses on XP's testing practice.

I bought the book because I'm fascinated with testing. It's usually done as an afterthought, and rarely hits the hard conditions. Unit tests are notoriously poor. The programmer runs his new function through the simplest of situations, totally ignoring timing issues or boundary conditions. So of course the function passes, only to inflict the team with agony when integrated into the entire system. XP's idea of writing the tests in parallel with the code is quite brilliant; only then do we see all possible conditions that can occur.

Beck argues for building tests first and deriving the code from these tests. It's sort of like building a comprehensive final exam and then designing a class from that. An intriguing idea.

But what if the test is wrong? Test driven development (TDD) then guarantees the code will be wrong as well.

Worse, TDD calls for building even the smallest project by implementing the minimal functionality needed to do anything. Need a function to compute the Fibonacci series? Create the simplest possible test first; in this case, check to see that fibonnaci(0) is 0. Then write a function that passes that test. Try fibonnaci(1); that breaks the test, so recode both the test and the function. Iterate till correct.

The book shows how to build a factorial program, which results in 91 lines of code and 89 of test... after 125 compilations! Klaxons sound, common sense alarms trip.

Agile proponents love the interactive and fast action of this sort of method. But programming isn't about playing Doom and Quake. If you're looking for an adrenaline rush try bungee jumping.

125 compilations for a trivial bit of code is not fast. Dynamic? You bet. But not fast."

Test Driven Development is not quite the same as having unit tests or automated tests, which I believe the poster asked for. The jury is still very much out on how useful TDD (and XP for that matter) really is.

Many talented programmers believe XP/TDD is essentially hype. This is not to say Pius is wrong, just to add a note of caution: be skeptical of the XP hype machine.

"Try it and decide for yourself" is probably the best advice possible.


I totally agree with your last statement.


All great advice. Thanks!


I've never read any books on the subject, but one thing I like to do is, when a bug is found, ask yourself, "What's a test that would have detected this bug?" Then write that test. Run your new test and make sure that it fails. Then fix the bug. After that, re-run the test to make sure it passes. Finally, check in both test and patch.
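That workflow can be sketched with a small, entirely hypothetical example; the price-formatting bug and the `format_price` function are invented for illustration:

```ruby
# Step 1: the buggy original printed "$5.5" for 505 cents, because it
# didn't zero-pad the cents. The regression test below reproduced that
# (and failed), then the fix made it pass.
def format_price(cents)
  format("$%d.%02d", cents / 100, cents % 100)
end

# Step 2: the regression tests, checked in alongside the fix and
# kept in the suite forever so the bug can't quietly return.
raise "regression: cents must be zero-padded" unless format_price(505) == "$5.05"
raise "regression: whole dollars" unless format_price(200) == "$2.00"
puts "regression tests pass"
```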


+1 to "Test-Driven Development: By Example" by Kent Beck. It won me over.

Anyone that wants to test in Python should really check out nose (nosetests). I made a slight modification to Django so that it gets run any time a change in code is detected in my project (i.e. whenever the development server restarts). So every time I make a code change, the tests run. Combining this feature with doctests makes testing a total breeze (and confidence booster).


"Everybody Tests" is a good point. Some devs throw away their tests, and other devs keep the tests around and run them periodically, but devs who write code iteratively (which seems like pretty much all of them) write something that could be easily turned into a unit test.

If you don't like TDD (and I have to admit I'm not a fan), you can still produce tests while you write code (you probably already are). Whenever you write a bit of code, you probably do something to check whether it is producing the expected outcome - even if it is as simple as writing a little script to generate some output that you visually inspect as it flashes by on the screen. If it looks good, you keep developing. If it fails, you figure out why.

Just store these mini-testing scripts as unit tests, and verify the expectation/output in a short statement. If you use a framework, you'll have some methods for this available to you - if you don't, no biggie, just write something that honks loud and gives you a message diagnosing the failure.
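A minimal version of that "honks loud" helper might look like this; the `expect` helper and the `slugify` function under test are hypothetical, just to show the shape:

```ruby
# A bare-bones assertion helper: silent on success, loud on failure.
def expect(label, got, want)
  return if got == want
  abort "FAILED #{label}: expected #{want.inspect}, got #{got.inspect}"
end

# The mini-script you used to eyeball becomes a stored, rerunnable check.
def slugify(title)
  title.downcase.strip.gsub(/[^a-z0-9]+/, "-").gsub(/\A-|-\z/, "")
end

expect("basic slug", slugify("Hello, World!"), "hello-world")
expect("extra whitespace", slugify("  Testing  101 "), "testing-101")
puts "OK"
```

Run it after every change instead of squinting at output as it flashes by.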

It's not going to satisfy the TDD crowd, and honestly your coverage won't be as good as if you had done TDD, but there you go - you just got yourself some unit tests, and you didn't have to disrupt the mental flow that you prefer to TDD.


I wrote some opinions on this in other thread:

http://news.ycombinator.com/item?id=46526

Unless your software will kill people if it fails or something, my advice is: observe what proportion of your development time you spend debugging. If it's one half or more, unit testing would probably benefit you. If it's significantly less than 50%, then you're probably better off riding bareback, because writing and updating those tests will take at least half of your time.

The problem I see with the "you'll test anyway, you may just as well automate it" argument is that unit tests are no substitute for testing with real users.


"the break even point for an automated tester is when they've run their tests twenty times"...

I find that (at least in early development), when I'm in the zone I refactor several times a day (different parts of the system, of course). If my break even point is twenty times, I'd never get there. Every refactoring would virtually obliterate all of the previously automated tests.


I've recently begun using automated tests of basically all my code. Both frontend and backend, when I'm doing web stuff. I usually write the code to a minimum first, and then implement tests. It's a huge relief to use these tests when I refactor and refine code. I definitely think automated testing is the way to go in most cases.


I have always believed that there is an inverse correlation between the amount of precise attention devoted to design/development and the amount of time spent in any debugger. Nothing here changes that thinking.


Sometimes it just does slow you down overall.

Remind me again, how many web startups have failed because of a bug? How many? Hm.


Are you implying that the number is very small or very large?


SMALL. Are you implying that it's not?

Or maybe you're with the aaron crowd in believing that it has to be absolutely perfect or it's useless. Heck, even that only could really apply to the design and architecture of an app.

Perfection is rarely necessary.

edit: And what's with so many downvotes? I thought yc was more mature.


How many teams fail to execute and lose precious time to market debugging things that they could've caught earlier?

I am not a 100% TDD fanboy either, but it's not something to dismiss wholeheartedly.


That's why I said sometimes. If you read the article, it claims that this is never the case. I presented my disagreement with this.


Sorry, I think you're right. There are cases when it can slow you down overall.

Adding tests up front, from my experience, adds about 40% overhead to a project (time spent actually writing the tests). You may get some of this time back later, as the author claims, as the automated test suite is run.

I've also been on projects where the code & entire test suite were scrapped & re-written after 6 months of initial development, pretty much making the whole test-up-front thing of little value.


To be fair, it seems that everyone other than I took this in the context of a more traditional software project, not an ill-specified one-man startup project.

I was referring to the latter.


ok, so now I get it. TDD is better suited for those large companies where people don't know what the f they are doing, so they need to test everything (just making sure), and where speed doesn't matter, as at big companies everything goes slowly.

The same environment and thinking that brought us java, and made it a clusterfuck of "abstractions" and "design principles".

I'd rather just code and design at will, and be able to refactor, change, or throw away code without being held hostage by tests which will break at some point, not because your code is wrong, but because they were designed to test something according to one way of thinking, and when that something changes, all those tests are obsolete.


I think the problem that you might be facing is more related to the quality of your tests than testing itself. If your tests are written correctly, you should never have to throw them away when you refactor.

In fact, that's one of the major benefits of testing: a refactoring safety net!


Right. How can you refactor code that has no tests? Basically, you've got no clue if your new refactored code works because all the precious time you've spent using the old code went towards ephemeral tests that weren't written down. That makes refactoring a lot more risky than it needs to be.


What I meant to say is "How can you effectively refactor code that has no tests?"

Refactoring is inevitable. I find that on projects where I've written tests, refactoring goes a lot better.


By refactor he means to change the purpose of the code, not just the implementation. Your tests will not cover this.


Does he? Then he's using the wrong word. Refactoring is changing the implementation while keeping the interface and semantics.

Which, I agree, is one of the shortcomings of unit testing: placing an additional burden on improving interfaces and semantics.


> Are you implying that it's not?

No, it just wasn't clear from your original comment.


I dislike writing more code for my tests than the "actual" code. I do not swallow the "test driven development" drivel hook, line, and sinker. Sorry, call me an unenlightened developer.


I am in the same boat. Maybe TDD was invented for a certain kind of people who like to "plan" stuff well in advance? Maybe my brain is not wired for TDD, but I tried to like it, and I couldn't.

I can see how TDD is important in critical software (avionics, banking, trading), etc., but for the average web 2.0 app speed of execution should be paramount, and I think TDD just slows you down. You end up writing 2x the code, plus every time you have to change something, you have to do it twice.

In mobile client development, I found TDD pretty useless, unless you are doing mission critical things, where speed of writing the software is less important than its accuracy (imagine phones crashing, not a good thing).


Just a quick note on "mission critical." I've always found that to be an interesting term. I'd argue that software components critical to your business are mission critical, practically by definition. It doesn't matter if I'm an old-school avionics company or a Web 2.0 social bookmarking site . . . my software is mission critical to my business.


Crashes are a lot worse for business if you're an avionics company than if you're a social bookmarking company.


I second that. If you have something like a display bug or some strange inconsistency in a website, having that bug there is actually a chance for you to show how responsive you can be to user bug reports. That is, having and quickly fixing a bug could turn out to be a good thing. That is not the case if you are NASA or your local nuclear power plant.


"having that bug there is actually a chance for you to show how responsive you can be to user bug"

and how sloppy you were up until now


I want to be clear in saying that the argument I was putting forth in this article was that you're already investing the time in testing your code, whether you test it by eye, or by code.

The question for you is - Would you rather spend your time in your web browser/console & debugger, or your text editor writing code? Clearly the latter has significant advantages when it comes to reliability, etc., but if you're willing to forgo all of that, then you're really talking about where you want to spend your time.

Personally, I'd rather be in my text editor than my debugger (Okay, I don't have a debugger, since I write tests). My text editor is where I find my zone.


Yes, TDD implies that there is a more or less exact specification. Otherwise, if you're just experimenting, you would have to write both the test and your code, and that's going to make you less inclined to throw it away and test out something else (see "Planning is highly overrated").

My strategy has generally been to throw together some tests that cover most things in a general way, and when I find bugs, add lots of detailed tests that make sure those bugs will never come back again. That makes it feel less pedantic.


"Yes, TDD implies that there is a more or less exact specification."

That's actually the opposite of the truth. The second D is for Design (or Development). The idea is that you organically develop the spec and the code in tandem.


You are assuming that development only goes in the forward direction. When I really have latitude in my goals, my code is just about impossible to pin down until it's 95% implemented.


How can you test something if you don't even know how or if it works? You need to hack on it and see if you can get things going before you nail it down, no?


Let's say you know you've gotta design a bowling pin from scratch. TDD and "standard" coding practices both require the same initial thought. You'd say, ok, what's this thing actually have to do? Maybe you start with, "it has to fall when hit by a bowling ball hard enough."

If you weren't using TDD, you might go off and create a BowlingPin class, a BowlingBall class (if it's not extant), and hack together a method in some arbitrary order. That's cool, but the TDD approach would have you write a mini "test." In pseudoRuby:

[http://pastie.caboo.se/103470]

This, of course, will fail because there is no BowlingPin class. So you write the class skeleton and re-run the tests.

Fails again. Why? There is no got_hit method on BowlingPin. Shit. Now you've got to go and write the simplest got_hit method that'll work. So you write something like:

[http://pastie.caboo.se/103471]

Rerun the tests. Dammit! There's no "fell?" method -- gotta implement it! And so on and so on.

There are a few things to note here. First, it's pretty critical to use a decent test harness. Running and writing tests has to be crazy fast otherwise this approach won't work well.

Next, in running through the TDD cycle you're actually (1) specifying the interfaces and behavior of your code and (2) implementing as narrowly as you can. The actual "tests" you get out of it are almost secondary! If implemented properly, it's really a design methodology. That's why the new trend is to change the language from Test Driven to Behavior Driven. The use of the word "test" seems to (understandably) evoke the same response you had of needing an iron-clad spec to work from initially. It's just not the case.
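Putting the cycle together, the finished pin and its driving test might end up like this sketch. The `Ball` struct and the force threshold are my own assumptions (the pasties only describe `got_hit` and `fell?`):

```ruby
# A ball carries some amount of force when it hits a pin.
Ball = Struct.new(:force)

class BowlingPin
  FALL_THRESHOLD = 5  # arbitrary: how hard a hit must be to topple the pin

  def initialize
    @fallen = false
  end

  # "It has to fall when hit by a bowling ball hard enough."
  def got_hit(ball)
    @fallen = true if ball.force >= FALL_THRESHOLD
  end

  def fell?
    @fallen
  end
end

# The mini "test" that drove each step of the design:
pin = BowlingPin.new
pin.got_hit(Ball.new(10))
raise "pin should fall when hit hard enough" unless pin.fell?

soft_pin = BowlingPin.new
soft_pin.got_hit(Ball.new(1))
raise "pin should stay up on a soft hit" if soft_pin.fell?
puts "pin behaves"
```

Each `raise` line existed (and failed) before the method it exercises did; the class grew only as fast as the expectations demanded.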


What if you want to create "some kind of game with a ball"? You have a lot of things already pinned down (ha ha) in your example, it's not a "blue sky" project where you don't know exactly how it should be.


Still, you can write your unit tests as you write your code. All a unit test does is verify that your code behaves the way you want it to.

I think a lot of people here are confused about what exactly a unit test is. When you write a unit test, you don't say: "I want a bowling application, with such and such a scoring method, and such and such a pin layout".

It's more like - "I am writing a class (or portion of a class) here that's going to perform a certain function in my 'blue sky', exploratory phase project". If you don't know what your code is going to do, how can you really write it? That is to say, exploratory coding isn't just typing random characters. You have some ideas.

So, instead of just writing the first function/method/class/controller action/model, you write a test first, and watch it fail. Then, you implement your method. That way, you know that your method works.

You really don't need to have an overall big picture of your application to write tests. That view is more of a misunderstanding of testing than anything else.


So basically, if you decide to toss out your method or class or whatever, you now have twice the code (or whatever the Test/Code ratio is) to throw away.

That doesn't strike me as an optimal use of time.

On the other hand, once I'm pretty sure of what things should look like, yeah, test cases are invaluable in demonstrating that everything works like it should, that corner cases don't break things, and that any new bugs stay fixed. Especially if other people have to work with the code, and don't know it well, they can wade into it with confidence that they won't break anything (or at least are less likely to).


Exactly.


I recently started following a hybrid principle for my Rails apps: do not write unit tests, but write functional tests once your UI gets relatively stable.



