Hacker News
Making Too Much of TDD (michaelfeathers.typepad.com)
43 points by philbo on Dec 30, 2010 | 16 comments



I think what the author is really talking about is the "build the simplest thing that could possibly work" approach, which is strongly related to TDD, but can be done without TDD as well.

It makes for a great motto, but in practice, developers often disagree about what the "simplest thing" actually is. Mostly, I think this is a function of experience. When you have more experience with a type of development, it's much easier to see the "obvious" future needs.

So in the case of a text editor, it's obvious (to me) that a giant string just won't work. That may not be as obvious to someone with less experience.
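To make the "giant string" point concrete, here is a toy gap buffer sketch in Python (my illustration, not from the thread): it keeps the edit point inside a gap so that local inserts and deletes don't copy the whole buffer, which is roughly why editors avoid a single flat string.

```python
# Toy gap buffer: a single string makes every insert copy the whole
# buffer; a gap buffer keeps a movable "gap" at the cursor so edits
# near the cursor are O(1) amortized.
class GapBuffer:
    def __init__(self, text=""):
        self.before = list(text)  # characters left of the cursor
        self.after = []           # characters right of the cursor, reversed

    def move_to(self, pos):
        # Shift characters across the gap until the cursor sits at pos.
        while len(self.before) > pos:
            self.after.append(self.before.pop())
        while len(self.before) < pos and self.after:
            self.before.append(self.after.pop())

    def insert(self, s):
        self.before.extend(s)

    def delete(self, n=1):
        del self.before[-n:]

    def text(self):
        return "".join(self.before) + "".join(reversed(self.after))
```

For example, `b = GapBuffer("hello world"); b.move_to(5); b.insert(",")` yields `"hello, world"` from `b.text()`, without re-copying the untouched tail on every keystroke.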

Determining what the simplest thing that could possibly work is turns out to be anything but simple!


Thanks for posting this. It puts into words something I've long felt to be true.

Everyone likes to say they're cutting things down to the simplest thing possible. But the entire premise is subjective.

One interesting dynamic to note is that nobody notices the things you build in from the beginning that become necessary, but everyone notices the things you build in from the beginning that turn out to be unnecessary. (Similar to how, if you clean a house, everyone notices the things you've missed but not the unobvious things you noticed to clean.)

I think this psychological trick results in an overly sensitive aversion to "over-engineering", simply because we remember the things that were over-engineered, and never go back to reflect on the "extra work" that turned out to save our project.

The sad and depressing realization is that there is no good answer to how much is too much and how little is too little. You have to use your intuition, and you also have to roll the dice. Sometimes you'll win, sometimes you'll lose, but if you're paying attention, in the long run you'll get better at it.


YAGNI.


Oh, yeah, of course. You ARE going to need it. You're different. Mod me down for my lack of understanding how awesome you are.


One thing I liked about the article is the breakdown of programmers as either engineers or scientists. They say that physics people make some of the best programmers, and from what I've seen it is because they adopt a 'scientific attitude' towards the systems. I don't mean scientific in the sense of traditional CS, but rather the ability to quickly devise experiments that offer the most information for the least amount of work. When you come at a modern IT system, you have to accept that you won't have the time or resources to understand most of it at the source code level. It is better to be able to quickly experiment and synthesize the results to arrive at a working model of what is really going on. I think people with science (or even philosophy) backgrounds can be better at this than people who did an actual CS degree.


where TDD and refactoring fall apart

This too is what I find most fascinating: Up to a point refactoring is far easier without TDD, but after that point, it's much harder without it.

I know it's not popular, but I think aspect oriented programming may be a nice way to eliminate this conflict.

One question for HN is: Why is it considered bad practice to worry about the range of valid inputs and outputs for a method? Wouldn't this basic consideration, combined with functional testing, offer a far greater degree of confidence in the system than hundreds of unit tests alone, and be far less brittle?


"Why is it considered bad practice to worry about the range of valid inputs and outputs for a method?"

Is it? I don't think I've heard people complain about ensuring valid arguments and return values. But they do argue over what's the best|right|coolest way to handle it.

"Wouldn't this basic consideration, combined with functional testing, offer a far greater degree of confidence in the system than hundreds of unit tests alone, and be far less brittle?"

As would appropriate use of automatic type checking. Rather than having my method call explicit code to check that an argument is an integer between 0 and 100 (for example), I'd rather define the method signature such that the argument has to adhere to the behavior of type IntBetweenZeroAndOneHundred. Basically, move some unit tests or explicit method code for argument properties into a type.
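As a sketch of that idea in Python (where such constraints are checked at runtime rather than by the compiler; `IntBetweenZeroAndOneHundred` is the commenter's hypothetical type name):

```python
# Move the argument check into a type: a value of this type can only
# exist if it is an int in [0, 100], so methods that accept it never
# need to re-validate the range themselves.
class IntBetweenZeroAndOneHundred:
    def __init__(self, value):
        if not isinstance(value, int) or not 0 <= value <= 100:
            raise ValueError(f"expected int in [0, 100], got {value!r}")
        self.value = value

def set_volume(level: "IntBetweenZeroAndOneHundred") -> str:
    # No range check needed here: the type guarantees the invariant.
    return f"volume set to {level.value}"
```

Constructing `IntBetweenZeroAndOneHundred(101)` raises immediately, so the invariant is enforced once, at the boundary, instead of in every method body or unit test.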

But, as the article suggests, this gain in cleaner code or a nicer abstraction may reduce the granularity, making it harder to change later.


Some languages have this already, specifically safety-critical languages like Ada. Ever so slowly the entire industry is moving towards what Ada had in 1983. Beyond this, you can use the SPARK annotations on top of Ada to verify that code does what it's expected to.

Here are some Ada range examples:

  type My_Int is range 1 .. 100;
  type Byte is mod 2**8;
  type Sin_Values is digits 10 range -1.0 .. 1.0;

Then you can query the object for its range with Z'Range, or check whether it's even valid (useful for finding stray bits switched by cosmic rays; I'm serious here):

  if Z'Valid then
     Do_Whatever;
  else
     Print_Warning;
  end if;


Having worked on both sides, I dislike the author's depiction of high-end consultants as "scientists" as opposed to the "engineers" who ship things. These "scientists" who sell their models of how software development is supposed to work are mostly, in my experience, hucksters. They haven't actually figured it out (no one has), but there's a market for software processes because software is hard and unpredictable, many people want to be told how to do it, and some are prepared to write cheques. People may go into the authority game with good intentions, valid experience, and talent, but without the discipline of hands-on work on real systems (and no, an occasional trip to the client doesn't cut it), it's too easy to fall for one's own hype. All the more because you are surrounded by people who are paying you to tell them what to do and thus themselves have a vested interest in believing that you know.

The deep problem underlying this is that we are a long way yet from having large-scale economic organizations suitable for developing software efficiently. My money's on small autonomous units (a.k.a. startups), but that's still a minority view. There's an impedance mismatch between how most people want to fund and run software projects, and the work itself. The markets this mismatch creates (like the market for software processes) are unhealthy ecosystems in polluted environments. But it is very difficult to see that from the inside, especially when you're a high-end consultant with more money and higher status than lowly practitioners.

I know of only one way around this conundrum of how to tell bullshit apart from knowledge, and it goes like this: write nontrivial programs and show me the code. Then we can talk.


write nontrivial programs and show me the code. Then we can talk.

I think there should be a development group rating organization based on this idea.


The thing that bothers me the most about TDD is that it really seems to mean unit test driven development. Unit testing has its place, to be sure, but it is only one tool in the box.

By its very nature, unit testing will always be limited to checking simple, well-defined, low-level behaviour. There is no unit test to tell you that you made a good choice of data structure or algorithm. There is no unit test to tell you that your domain model is sound and relevant. There is no unit test to tell you that your calculation code is a faithful implementation of incorrect mathematics that you thought was correct. There is no unit test to tell you that your requirements were the result of poor analysis and as a result you're building a product your customers won't buy.

If we take a single tool like this and elevate it to the point where the entire development process is built around it, then must the process itself not be limited as well?

The TDD community seem to compensate for this by adding the well-known third step:

1. Fail.

2. Pass.

3. Magic happens.

During that final step, we have an opportunity to compensate for any failure of the small-increments, simplest-thing-first approach to converge efficiently on a good end result. This in itself doesn't seem unreasonable, but I have never seen a single piece of TDD advocacy or training material that explains how this is supposed to happen, at least not in any serious detail or in any form that is amenable to training developers in how to do it.

The trouble is, almost all of the hard stuff in programming is in areas that unit tests don't address, such as those I mentioned above. That being the case, the whole TDD process can also only address the easy stuff until the "refactoring" step is explored in very much more detail than it has been so far. I think perhaps this is what Michael Feathers is starting to realise and trying to highlight in this article.


"There is no unit test to tell you that your requirements were the result of poor analysis and as a result you're building a product your customers won't buy."

No, but that's why there is integration and acceptance testing, and the sooner you get feedback from actual users the better off you are.

TDD doesn't have to mean unit tests.


> TDD doesn't have to mean unit tests.

Really? I'm not sure I've ever seen any TDD-related source claim that the tests that initially fail and are then made to pass are anything but automated unit tests.


Generally, you write an acceptance test, watch it fail, drop down into the unit tests that get your code going, pull back up into your acceptance test, move on to the next part...

Here, I wrote something about this: http://timeless.judofyr.net/bdd-with-rspec-and-steak

Most of these are not 'unit' tests in the traditional sense of the word. Unit tests are useful, but only as part of a larger testing strategy.
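The outside-in rhythm described above can be sketched with Python's unittest (`parse_amount` and `apply_discount` are made-up stand-ins for whatever units the inner red/green cycles would drive out):

```python
import unittest

# Hypothetical units under test; in practice each exists only because
# a failing test demanded it.
def parse_amount(s: str) -> int:
    return int(s.strip())

def apply_discount(amount: int, percent: int) -> int:
    return amount - amount * percent // 100

class UnitTests(unittest.TestCase):
    # Inner loop: small, fast tests written just before each unit.
    def test_parse_amount(self):
        self.assertEqual(parse_amount(" 100 "), 100)

    def test_apply_discount(self):
        self.assertEqual(apply_discount(100, 20), 80)

class AcceptanceTest(unittest.TestCase):
    # Outer loop: written first, exercises the behaviour end to end
    # rather than one class, and stays red until the units exist.
    def test_discounted_order_total(self):
        self.assertEqual(apply_discount(parse_amount("250"), 10), 225)
```

Run with `python -m unittest`; the acceptance test is written first and fails until the inner unit-test cycles drive out each piece.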


> Unit tests are useful, but only as part of a larger testing strategy.

I agree, but I think you're making my point for me. TDD and unit tests alone are not enough, which motivates further developments like BDD.

I think we can apply an analogous argument even to a BDD process, because there are many advantageous properties code might have that can not be determined effectively using a finite number of specific, automated, behavioural tests, no matter what level of abstraction those tests run at. Your BDD process is not as limited as TDD, but it still contains a "magic happens" line, "Refactor code as appropriate."


The problem Feathers describes has a simple cause: the classes in Java, C#, etc. are a premature optimization. The rewrite described is caused, in my experience, by the fact that having to reclassify everything is annoying and error-prone when:

a) classes are part of the language and,

b) classes are defined by a stream of characters in a flat file on a disc

C++ makes it even more difficult to automate the redoing of classification by requiring that the definition of a class is stored in multiple such files.



