
I dislike writing more code for my tests than the "actual" code. I don't swallow the "test driven development" drivel hook, line, and sinker. Sorry, call me an unenlightened developer.



I am in the same boat. Maybe TDD was invented for a certain kind of people who like to "plan" stuff well in advance? Maybe my brain is just not wired for TDD; I tried to like it, and I couldn't.

I can see how TDD is important in critical software (avionics, banking, trading, etc.), but for the average web 2.0 app, speed of execution should be paramount, and I think TDD just slows you down. You end up writing 2x the code, plus every time you have to change something, you have to do it twice.

In mobile client development, I found TDD pretty useless, unless you are doing mission-critical things, where the speed of writing the software is less important than its accuracy (imagine phones crashing; not a good thing).


Just a quick note on "mission critical." I've always found that to be an interesting term. I'd argue that software components critical to your business are mission critical, practically by definition. It doesn't matter if I'm an old-school avionics company or a Web 2.0 social bookmarking site... my software is mission critical to my business.


Crashes are a lot worse for business if you're an avionics company than if you're a social bookmarking company.


I second that. If you have something like a display bug or some strange inconsistency in a website, having that bug there is actually a chance for you to show how responsive you can be to user bug reports. That is, having and quickly fixing a bug could turn out to be a good thing. That is not the case if you are NASA or your local nuclear power plant.


"having that bug there is actually a chance for you to show how responsive you can be to user bug"

and how sloppy you were up until now


To be clear, the argument I was putting forth in this article was that you're already investing time in testing your code, whether you test it by eye or in code.

The question for you is: would you rather spend your time in your web browser/console and debugger, or in your text editor writing code? Clearly the latter has significant advantages when it comes to reliability, etc., but if you're willing to forgo all of that, then you're really talking about where you want to spend your time.

Personally, I'd rather be in my text editor than my debugger (Okay, I don't have a debugger, since I write tests). My text editor is where I find my zone.


Yes, TDD implies that there is a more or less exact specification. Otherwise, if you're just experimenting, you would have to write both the test and your code, and that's going to make you less inclined to throw it away and try out something else (see "Planning is highly overrated").

My strategy has generally been to throw together some tests that cover most things in a general way, and when I find bugs, add lots of detailed tests that make sure those bugs will never come back again. That makes it feel less pedantic.
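Roughly like this, as a sketch (Test::Unit, with a hypothetical parse_amount helper made up purely for illustration):

    require 'test/unit'

    # Hypothetical helper under test, only here to keep the example self-contained.
    def parse_amount(str)
      str.strip.sub(/^\$/, '').to_f
    end

    class ParseAmountTest < Test::Unit::TestCase
      # Broad-strokes coverage written up front.
      def test_plain_number
        assert_equal 42.5, parse_amount("42.5")
      end

      # Detailed case pinned down after a real bug turned up: a leading "$" broke parsing.
      def test_leading_dollar_sign
        assert_equal 19.99, parse_amount("$19.99")
      end
    end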


"Yes, TDD implies that there is a more or less exact specification."

That's actually the opposite of the truth. The second D is for Design (or Development). The idea is that you organically develop the spec and the code in tandem.


You are assuming that development only goes in the forward direction. When I really have latitude in my goals, my code is just about impossible to pin down until it's 95% implemented.


How can you test something if you don't even know how or if it works? You need to hack on it and see if you can get things going before you nail it down, no?


Let's say you know you've gotta design a bowling pin from scratch. TDD and "standard" coding practices both require the same initial thought. You'd say, ok, what does this thing actually have to do? Maybe you start with, "it has to fall when hit hard enough by a bowling ball."

If you weren't using TDD, you might go off and create a BowlingPin class, a BowlingBall class (if one doesn't already exist), and hack some methods together in whatever order suits you. That's cool, but the TDD approach would have you write a mini "test" first. In pseudoRuby:

[http://pastie.caboo.se/103470]
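(The pastie is long gone; something along these lines, assuming Test::Unit-style assertions:)

    require 'test/unit'

    class BowlingPinTest < Test::Unit::TestCase
      def test_pin_falls_when_hit_hard_enough
        pin  = BowlingPin.new
        ball = BowlingBall.new
        pin.got_hit(ball)   # whack the pin with the ball
        assert pin.fell?    # a hard enough hit should knock it over
      end
    end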

This, of course, will fail because there is no BowlingPin class. So you write the class skeleton and re-run the tests.

Fails again. Why? There is no got_hit method on BowlingPin. Shit. Now you've got to go and write the simplest got_hit method that'll work. So you write something like:

[http://pastie.caboo.se/103471]
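(Again, the pastie has rotted; the simplest thing that could possibly work is roughly:)

    class BowlingPin
      # Simplest implementation that could possibly satisfy the test:
      # any hit knocks the pin over.
      def got_hit(ball)
        @fell = true
      end
    end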

Rerun the tests. Dammit! There's no "fell?" method -- gotta implement it! And so on and so on.
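(Closing the loop, that might be nothing more than a reader for the flag:)

    class BowlingPin
      # Reopen the class and expose the flag the test asserts on.
      def fell?
        @fell == true
      end
    end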

There are a few things to note here. First, it's pretty critical to use a decent test harness. Running and writing tests has to be crazy fast otherwise this approach won't work well.

Next, in running through the TDD cycle you're actually (1) specifying the interfaces and behavior of your code and (2) implementing as narrowly as you can. The actual "tests" you get out of it are almost secondary! If implemented properly, it's really a design methodology. That's why the new trend is to change the language from Test Driven to Behavior Driven. The use of the word "test" seems to (understandably) evoke the same response you had of needing an iron-clad spec to work from initially. It's just not the case.


What if you want to create "some kind of game with a ball"? You have a lot of things already pinned down (ha ha) in your example; it's not a "blue sky" project where you don't know exactly how it should be.


Still, you can write your unit tests as you write your code. All a unit test does is verify that your code behaves the way you want it to.

I think a lot of people here are confused about what exactly a unit test is. When you write a unit test, you don't say: "I want a bowling application, with such and such a scoring method, and such and such a pin layout".

It's more like: "I am writing a class (or portion of a class) here that's going to perform a certain function in my 'blue sky', exploratory-phase project." If you don't know what your code is going to do, how can you really write it? That is to say, exploratory coding isn't just typing random characters. You have some ideas.

So, instead of just writing the first function/method/class/controller action/model, you write a test first, and watch it fail. Then, you implement your method. That way, you know that your method works.
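For example, sketched test-first with a made-up slugify helper (nothing more than an illustration of the red/green flow):

    require 'test/unit'

    # The test comes first and fails, because slugify doesn't exist yet...
    class SlugifyTest < Test::Unit::TestCase
      def test_titles_become_url_friendly_slugs
        assert_equal "hello-world", slugify("Hello, World!")
      end
    end

    # ...then the simplest implementation that makes it pass.
    def slugify(title)
      title.downcase.gsub(/[^a-z0-9]+/, '-').gsub(/^-+|-+$/, '')
    end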

You really don't need to have an overall big picture of your application to write tests. That view is more of a misunderstanding of testing than anything else.


So basically, if you decide to toss out your method or class or whatever, you now have twice the code (or whatever the Test/Code ratio is) to throw away.

That doesn't strike me as an optimal use of time.

On the other hand, once I'm pretty sure of what things should look like, yeah, test cases are invaluable in demonstrating that everything works like it should, that corner cases don't break things, and that any new bugs stay fixed. Especially if other people have to work with the code and don't know it well, they can wade into it confident that they won't break anything (or at least are less likely to).


Exactly.


I've recently started following a hybrid approach for my Rails apps: don't write unit tests, but write functional tests once your UI gets relatively stable.
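Something like this, as a rough sketch (assuming a hypothetical PostsController; Rails functional tests in the Test::Unit style):

    require 'test_helper'

    # Functional test exercising the controller through the full Rails stack,
    # written once the index page has settled down.
    class PostsControllerTest < ActionController::TestCase
      def test_index_renders_successfully
        get :index
        assert_response :success
        assert_template 'index'
      end
    end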



