I think TDD is useful, but practitioners should not be dogmatic about it.
As I discussed in my talk "Testing is Overrated" (http://www.scribd.com/doc/8585284/Testing-is-Overrated-Hando... http://www.scribd.com/doc/8585444/Testing-is-Overrated), programmer testing finds a different class of bugs than manual testing or code reviews.
One of the problems is that programmers are not very good at writing tests. They are much more likely to test "happy path" scenarios. Steve McConnell reports in Code Complete that mature testing organizations have 5 times more dirty tests than clean tests, while immature testing organizations write 5 clean tests for every dirty test.
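To make the clean/dirty distinction concrete (McConnell's "clean" tests exercise the happy path, "dirty" tests exercise error and edge cases), here is a rough sketch with a made-up parseAge function - one clean test, two dirty ones:

    import Test.Hspec

    -- Hypothetical function under test, just for illustration.
    parseAge :: String -> Either String Int
    parseAge s = case reads s of
      [(n, "")] | n >= 0 && n < 150 -> Right n
      _                             -> Left ("invalid age: " ++ s)

    main :: IO ()
    main = hspec $
      describe "parseAge" $ do
        -- "Clean" test: the happy path most of us write first.
        it "parses a normal age" $
          parseAge "42" `shouldBe` Right 42
        -- "Dirty" tests: the failure cases mature suites are full of.
        it "rejects garbage input" $
          parseAge "forty-two" `shouldBe` Left "invalid age: forty-two"
        it "rejects negative ages" $
          parseAge "-1" `shouldBe` Left "invalid age: -1"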
Another big problem is that unit tests are never going to show you that your software blows. Only user testing can find that.
If you want to ship good software, I think you need to do a combination of code reviews, unit tests, system tests, user studies, and manual testing. Different organizations will find a balance that works for them.
That last is a key point. I like writing tests, but I hate the dogmatism that if you're not writing tests you're doing it wrong. Obviously, good software has been written without tests, and buggy crap has been fully TDD'ed. In the end, the team matters more than the methodology.
The article misses an important point: People who've done TDD know that it is the easiest/fastest way to unit-test. Few aspects of programming are more painful than trying to unit test code after the fact, especially if it was written without testing in mind.
It's not the fastest way, by far. It tends to lead to more testable code, sure, but it's not fast.
For example, TDD encourages code churn: you write tests for a lot of throwaway code.
Personally, it usually takes me two or three attempts before I'm satisfied with the version of the code that I have. Writing tests for these throwaway attempts is a waste of time.
As you become more experienced in writing tests and testable code, you realize that you no longer need TDD to write code that can be easily tested, so I see TDD more as "training wheels" for beginners.
I fail to see how unit tests are required for a shippable product. For a wide range of things, buggy and shipped is better than clean and ready in 3-6 months.
Having the flexibility to choose between those is better than having your hands tied and not being able to make the choice.
According to this article, the effect of TDD is to remove your ability to defer that decision until a time when you have more insight into which of those is better.
In the marginal case where you only care about one iteration, TDD is a little bit worse. However, I am confident that without TDD, your second iteration will be slower - you'll get less done, and your bug rate will still be higher.
If something is faster but starts later, it does not necessarily win. A 15% to 35% longer initial release cycle is fine for a 1-month web app (roughly a week or two extra), but if it's a game that takes 2 years to release, that's roughly four to eight extra months - you're starting from a deep hole.
PS: I suspect TDD ends up being something of a premature optimization on larger projects: you write tests before you know which direction you want to take the project in. Don't get me wrong, tests are great from a process improvement standpoint, but every major shift ends up breaking so many tests that you either don't make the shift, or you've just wasted all that time writing tests.
Thanks, BDD looks like an interesting idea, I think I will look into that some more.
Anyway, I tend to build the happy path by any means necessary, verify that's what you want, then go for TQM style process improvement to add quality. I think of it like pulling a tiny string across the river before you build a rope bridge.
It's not hard to add heavier-gauge rope to work your way up to a bridge. But good luck crossing the chasm if you don't know how to swim or where to find a boat. That original string may have little to do with the finished bridge, but at least you have something to fall back on when you're trying to get the GPU to do something, etc.
PS: Ok, I think I took that analogy well past the breaking point.
What is the source of your confidence? Do you have evidence for this?
TDD advocates make assertions based on a certain common sense ("tests are scaffolding"), but I don't see them present evidence supporting such assertions.
The thing is that TDD also often calls for not spending a lot of time on "Big, upfront design". I could just as easily argue that without a coherent design, your next iteration is going to be longer and that the critical thing is to master good design principles. But that belief also needs evidence behind it.
Programming is ultimately a mysterious thing and so looking at real, long term data is good, especially after you become good enough that everything seems clear to you.
You could argue that without a coherent design, your next iteration is going to be longer. I'd argue the reverse - but it depends on what you mean by "big upfront design".
Every program, TDD or not, needs to get a basic architectural outline in place early on - e.g. decide if you're building a web app or a desktop client, and which framework you'll use to build it. Usually you have a good idea of a lot of the other major pieces that you want in your architecture - they're there in "best practices" such as SOLID, the repository pattern, CQRS, MVVM, etc. TDD encourages "spikes" that prove the concept with a vertical slice through all layers of the app.
In my opinion and experience, "Big, upfront design" is a 1980s-1990s idea that all details of the program, not just the outlines, can be designed once before coding commences. And it is a completely false idea. TDD and agile say that you're going to redesign anyway, so you may as well accommodate that. This is where TDD speeds things up. Change is not just adding things to a program, it involves redesign on existing parts.
But I agree that an ounce of data is better than a pound of theory in this case. And numbers are far better than anecdotes. That's what the article is for; I guess we should be reading it.
"If we control the bug rate, is TDD faster?" is the central question. I assert that TDD is the Cause of the bugs, so the question is nonsense.
Code should be developed with as large a view of the design as possible. TDD restricts development (by definition) to satisfying the tests. The effect is one of 'moving the bugs around', since the existing tests rarely completely describe the whole design. You get into a loop of "oh, yeah, well we Really meant to write a test to cover that case too", repeated ad nauseam.
I think one problem with TDD is that it doesn't lean towards testing a whole feature or scenario (describing the whole design). That is, TDD focuses on testing a unit, and it is effective and fast at testing the fringe cases of units.
Behavior Driven Development really shines in the aspect of "completely describing the design" as it focuses on driving development against a well defined behavior (using scenarios). BDD lets you know where the bug is: in the code or in the specifications (the behavior as specified by the customer). So there is a lot less "chasing around". If you are trying to cover behavior of a system only using TDD then I agree you can end up chasing around a bug.
However, both are necessary and highly effective when used together.
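To illustrate the split (all names invented), here is a rough Hspec sketch of a narrow unit test next to a scenario-level, BDD-flavoured spec for a made-up discount feature:

    import Test.Hspec

    -- Hypothetical checkout code; prices are in cents.
    data Cart = Cart { itemPrices :: [Int], discountCode :: Maybe String }

    applyDiscount :: Maybe String -> Int -> Int
    applyDiscount (Just "WELCOME10") t = t - t `div` 10
    applyDiscount _                  t = t

    total :: Cart -> Int
    total (Cart xs code) = applyDiscount code (sum xs)

    main :: IO ()
    main = hspec $ do
      -- TDD-style unit tests: one unit, good at its fringe cases.
      describe "applyDiscount" $ do
        it "ignores unknown codes" $
          applyDiscount (Just "BOGUS") 10000 `shouldBe` 10000
        it "does nothing to an empty total" $
          applyDiscount (Just "WELCOME10") 0 `shouldBe` 0

      -- BDD-style scenario: named after the behaviour the customer asked for.
      describe "Scenario: new customer uses the welcome discount" $
        it "a 100.00 cart with WELCOME10 comes to 90.00" $
          total (Cart [6000, 4000] (Just "WELCOME10")) `shouldBe` 9000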
I'm not sure what you mean by "TDD is the Cause of the bugs" - the TDD projects had 40-90% fewer bugs. Even if their remaining bugs would have been caught by test coverage of things that occur in live use, that's still a good result, right?
Well, sort of. TDD defines bugs as "things the unit tests catch". And the unit tests are very narrow.
So yes, you record fewer bugs, fix those narrow cases quickly and pretend you're done.
Then it gets to the field, and whatever you didn't catch becomes a 3-alarm fire.
I vastly prefer shakedown tests of the sort where you launch a 'bot army' on cloud servers to thrash the app. Especially if there is Any network/client/server component.
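For what it's worth, the same idea scaled down to a single process looks roughly like property-based testing - random inputs thrown at the code until an invariant breaks, rather than only the cases someone thought to write. A sketch with QuickCheck (the handler is made up; the real thing would be a live service behind a network):

    import Test.QuickCheck

    -- Stand-in for a real request handler, just for illustration.
    handleRequest :: String -> Either String Int
    handleRequest body
      | null body        = Left "empty request"
      | length body > 80 = Left "request too large"
      | otherwise        = Right (length body)

    -- Invariant checked against thousands of random inputs: we always get
    -- some response, and a success never exceeds the size limit.
    prop_alwaysResponds :: String -> Bool
    prop_alwaysResponds body =
      case handleRequest body of
        Right n -> n > 0 && n <= 80
        Left _  -> True

    main :: IO ()
    main = quickCheck (withMaxSuccess 10000 prop_alwaysResponds)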
TDD defines bugs as "things the unit tests catch". And the unit tests are very narrow.
I think that if you read the article linked, you'd find that the study didn't work that way. Any study that measures different things between experiment and control groups is not going to be sound.
Then it gets to the field, and whatever you didn't catch becomes a 3-alarm fire.
I've worked on TDD projects, and those definitely are counted as bugs. Severity 1.
... and then you get a regression test, so that bug never happens again. And the regression test is easy to write, since all of your code is already testable, you have all the testing infrastructure in place, etc.
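Something like this, as a rough sketch (the bug and names are invented): once the harness exists, pinning a field failure down to a test is a few lines.

    import Test.Hspec

    -- Hypothetical code that shipped with a field bug: splitting "a,,b"
    -- used to drop the empty field and corrupt downstream column counts.
    splitFields :: String -> [String]
    splitFields s = case break (== ',') s of
      (field, ',' : rest) -> field : splitFields rest
      (field, _)          -> [field]

    main :: IO ()
    main = hspec $
      describe "splitFields" $
        -- Regression test pinned to the incident; this exact failure
        -- can't silently come back once it's green.
        it "keeps empty fields (regression reported from the field)" $
          splitFields "a,,b" `shouldBe` ["a", "", "b"]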
Never happens again? You already have the 3-alarm fire; it's too late. That's my point exactly. You catch only what you can think of and write tests to cover.
I object to the fundamental TDD precept that you code to the tests. That narrows the 'testing' until it's a self-fulfilling prophecy that you will have 'fewer bugs', since you defined the bugs as 'what we're looking for'.
Reminds me of the old joke "A: I'm looking for my wallet. B: where did you lose it? A: over there. B: then why look here? A: the light's better"
I would argue that TDD makes you think more about the design of your system. Also, it makes it easier for you to refactor (as you don't worry too much about accidentally breaking something) and hence keep the code quality high over time.
In addition to making you think more about your design, I'd say it makes your design inherently more flexible. Once your design is flexible enough to do two different things, it's much more likely to be able to do three things than if it could only do one thing. And when you do TDD, your design is always capable of two things: fulfilling the requirements and testing.
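A rough sketch of what that looks like in practice (names invented): because the tests needed to control the clock, the code ends up taking one as a parameter, and a third use - say, replaying historical data - comes almost for free.

    import Data.Functor.Identity (Identity (..))

    -- The tests needed to control time, so the code takes a clock as a
    -- parameter instead of reading a global one.
    newtype Clock m = Clock { now :: m Integer }   -- e.g. seconds since epoch

    isExpired :: Monad m => Clock m -> Integer -> m Bool
    isExpired clock deadline = do
      t <- now clock
      pure (t > deadline)

    -- Test wiring: a pure, fixed clock. Production would wrap the real
    -- clock in IO; a replay tool could feed in recorded timestamps.
    fixedClock :: Integer -> Clock Identity
    fixedClock t = Clock (Identity t)

    main :: IO ()
    main = do
      print (runIdentity (isExpired (fixedClock 1000) 999))   -- True
      print (runIdentity (isExpired (fixedClock 1000) 2000))  -- False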
"Dependent Types: A Look At The Other Side" by Larry Diehl
It talks about using types to drive your development. So it is TDD, but without tests, and without their runtime and programmer overhead.
When using TyDD, you isolate the core that should go as bug-free as possible and write an embedded DSL for it in a language with a strong type system. The talk above uses Agda, but we and some number of other companies in the world (Eaton http://eaton.com/, Echo http://aboutecho.com/ - off the top of my head) successfully employ Haskell for that task.
That way you kill two birds with one stone: you get provably correct software and you get an ideal language to express your core problems.
Best of all, you get free "requirement tracking": whenever your requirements change, you express them in types and then apply changes where they're needed, guided by a friendly compiler. You still get provably correct software (modulo types) at the end.
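A tiny Haskell sketch of that idea (all names invented): the requirement "user input must be escaped before it reaches the page" is stated as a type, so violating it is a compile error rather than something a test has to catch, and changing the requirement means changing the type and following the compiler.

    -- Raw and escaped strings are different types; 'escape' is the only
    -- way to get from one to the other.
    newtype RawInput = RawInput String
    newtype SafeHtml = SafeHtml String

    escape :: RawInput -> SafeHtml
    escape (RawInput s) = SafeHtml (concatMap esc s)
      where
        esc '<' = "&lt;"
        esc '>' = "&gt;"
        esc '&' = "&amp;"
        esc c   = [c]

    -- The requirement lives in the type: render only accepts SafeHtml.
    render :: SafeHtml -> String
    render (SafeHtml s) = "<p>" ++ s ++ "</p>"

    main :: IO ()
    main = do
      putStrLn (render (escape (RawInput "<b>hi</b> & bye")))
      -- putStrLn (render (RawInput "oops"))  -- rejected by the compiler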
That was "interesting" in that I like expanding my horizons, but it was a terrible presentation. Plus, it looks like Haskell and I got the impression from his lead-in that he was going to introduce a new language.
However, I actually came here to say something else about his material: if it is harder to decompose the business requirement into Haskell than it is to write monkey tests for one's Blub language, I am not certain anyone wins when going with a provably correct implementation.
My experience using advanced type systems tells me that you can get away with just functional tests. Types also take up much less code space and programmer time than tests.
So I do not see how they could be harder to use. Certainly types will be unusual, but not much harder.
Anecdote: I'm not smart enough to design software without having an idea of how it can be automatically tested. My design has to prove that it can be easily tested (both via unit tests and integration tests), and I've found that writing some of the tests up front helps with that.