The reality is that testing everything before you even write it locks you in, emotionally, to a certain design. ("If I clean up this module, the bar goes red! Ahh, fuck it!")
You end up writing the code to make your tests pass, not the code to solve your problem in the best way. Without any iteration during the design phase, you're not going to get the right design. At that point, you don't care if you get the right answer, you care that you have the framework for getting the right answer. Once you like the design, then you can worry about making sure it works (sure, use TDD for that).
Now, the reality is, people who are forced to use TDD weren't going to come up with a good design anyway, so at least they have some idea that their code works and can change it without losing too much sleep. Fine. But when you aim for 75%, you get 75%. I like to set my sights higher.
(I use TDD approximately never, and I have hundreds of thousands of lines of code in production that never give me a headache. They are tested, but they were designed first, not tested first.)
Well, it's not really much of a debate when one side is bluntly asserting, for example:
"Good programmers will write good code regardless of whether or not they use TDD or not. It becomes ridiculous when people start to think that this tool that may be useful to some is going to save us all from bad code and crappy software."
Did you know that working software existed for 30+ years, performing its job day after day, before TDD was conceived and marketed to the world?
People might take TDD advocates a bit more seriously if they dropped this assertion that if you aren't using TDD, or even worse, if you're not even writing tests for your code, you are doomed!
I've been writing code for well over 15 years without TDD, and I've never had a major problem with bugs or refactoring, because I think while I'm working and anticipate future problems. Yet I have seen many young people come along with many different silver bullets to solve all our problems, and I've had to clean things up after they were done.
This is not to say TDD is bad, and especially not to say testing is bad, I'm just saying, it is definitely not a panacea, and often not even necessary.
Approaches to programming improve over time. Yeah, people got by without TDD for 30+ years, but I'd much rather be a programmer today than 30+ years ago.
No, I did not. Thanks for that tidbit. To return the favor, here's another data point:
Did you know that shitty software has existed for 30+ years, failing at its job day after day, before people were knocking TDD and its marketing to the world?
That's right: there's been good and bad code for quite some time.
Who wrote that bad code? Well, the assertion was that "[g]ood programmers will write good code", so it wasn't them.
I find that an amazing claim, absent any evidence. At best, it's either tautological or a No True Scotsman sort of thing: anyone who ever wrote bad code is by definition not a good programmer.
My own anecdotal evidence says that good programmers sometimes write bad code. Maybe not often, maybe not on purpose, but it happens. You code yourself into a corner; you make a bad guess on something. Just like great athletes have a bad game, or great musicians an off gig.
Let's suppose I'm over-interpreting the assertion. Suppose the claim was really meant to mean that good programmers will usually or typically write good code regardless of whether they use TDD.
Well, that invites the awkward question: would they write good code more often using TDD? I think yes, but I have no more of a data-backed argument for that than the article has against it.
The existence of working code says nothing about whether good programmers always write good code regardless of TDD, or whether they would or would not write good code more often using TDD.
"People might take TDD advocates a bit more seriously if they dropped this assertion that if you aren't using TDD, or even worse, if you're not even writing tests for your code, you are doomed!"
People might take TDD critics a bit more seriously if they stopped trotting out shit like that because no one with an ounce of brains argues that, except perhaps to troll people into writing shrill blog posts.
For things that really matter (authentication, payment processing, and service orchestration), I have resorted to ad-hoc specification: a cheesy pseudo-algebra based on axiomatic semantics, guards, and boatloads of die()-type assertions :-|
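That guard style can be sketched roughly like this, using Python asserts as a stand-in for die(); the Account type and transfer function are invented for illustration, not from any real payment system:

```python
from dataclasses import dataclass


@dataclass
class Account:
    balance_cents: int


def transfer(src: Account, dst: Account, amount_cents: int) -> None:
    # Preconditions, in the axiomatic-semantics spirit: die loudly
    # rather than let a bad state propagate any further.
    assert amount_cents > 0, "amount must be positive"
    assert src.balance_cents >= amount_cents, "insufficient funds"
    total_before = src.balance_cents + dst.balance_cents

    src.balance_cents -= amount_cents
    dst.balance_cents += amount_cents

    # Postcondition: money is conserved across the transfer.
    assert src.balance_cents + dst.balance_cents == total_before
```

The point of the boatload of assertions is that a violated invariant kills the operation at the exact line where the assumption broke, instead of corrupting state quietly.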
Once upon a time I wrote a story about a person who didn't use TDD and how he suffered: http://ryanbigg.com/2010/02/congratulations/
As a person that is genuinely interested in why TDD is so important but doesn't understand the importance or practice it, could you point me to an article that explains it?
I'm a developer who really doesn't have serious problems with bugs. Maybe I write my code a bit slower because I'm always thinking about edge cases as I go. Yet everyone keeps telling me I have to write all these tests, but for what?
Is it mostly for the following developers? I suppose that makes sense. But then, I've inherited projects where I had to throw 50% of the code away, and all of the tests, because it was hotshot young developers who were hip to all the new things, but couldn't code their way out of a paper bag.
I see your point, but I think some sort of reasonable balance has to be reached here.
For developers who are new to a largish code base, it can help them ensure they don't break important subsets (as would any good unit-testing strategy). It also proves quite handy when you have a really complex system that grows over time: when you refactor it to keep it evolving and working, the tests throughout (while costly to maintain) give you more reassurance that you didn't wreck the world.
TDD adds cost, but it buys reassurance that you don't break major things during a refactor. It can get a bit cumbersome at times, but it's hard for me to avoid seeing the value for long-lived code (i.e. non-prototypes) and/or shared code.
It's not for everyone or everything, but I think it's pretty reactionary for anyone to say it's not a useful process to have in your toolbox.
For example, I recommend TDD for cases where you're making a public API for unknown people. It's much more likely you'll break some nuance of unknown importance in a refactor unless you have a good number of comprehensive unit tests. If you can generate those in some other fashion, awesome, but TDD puts you ahead of the game there.
For me, I also don't have the best long-term memory for details, given how many vastly different projects and technologies I work on. TDD basically encodes that memory for me, so I don't make multi-tasking mistakes on problems I already solved a few weeks or months ago, and it helps pass that memory on to the people who inherit the code later. I'm an aggressive commenter for the same reason, but nothing keeps cross-component issues solid like unit tests designed to be the canary in the coal mine when a refactor breaks something.
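As a small illustration of that canary role, here is a sketch of tests pinning down the edge-case behavior of a tiny public API. The slugify function and its rules are invented for this example; the idea is that a refactor which silently changes a nuance fails a test immediately:

```python
import unittest


def slugify(title: str) -> str:
    """Lowercase, replace non-alphanumerics with '-', collapse runs, trim ends."""
    slug = "".join(ch if ch.isalnum() else "-" for ch in title.lower())
    while "--" in slug:
        slug = slug.replace("--", "-")
    return slug.strip("-")


class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_messy_punctuation(self):
        # The kind of nuance a refactor could silently change:
        # runs of separators are collapsed and the ends are trimmed.
        self.assertEqual(slugify("  C++ -- FAQ!  "), "c-faq")
```

Run with `python -m unittest` before and after a refactor; whether the tests came first (TDD) or after, they encode the behavior callers may depend on.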
Similarly, the introduction of an accurate progress bar might require the introduction of threads to the backend, or force you to update a global variable, or even rewrite it as a stream-based algorithm.
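For instance, a bare-bones version of that threaded arrangement might look like this (all names are illustrative, and the sleep calls stand in for real work): the work runs on a worker thread while the main thread reads a shared counter to redraw the bar.

```python
import threading
import time

progress = 0  # global progress counter, updated by the worker
lock = threading.Lock()
TOTAL = 100


def do_work():
    global progress
    for _ in range(TOTAL):
        time.sleep(0.001)  # stand-in for one unit of real work
        with lock:
            progress += 1


worker = threading.Thread(target=do_work)
worker.start()
while worker.is_alive():
    with lock:
        done = progress
    print(f"\r[{'#' * (done // 5):<20}] {done}%", end="")
    time.sleep(0.01)
worker.join()
print(f"\r[{'#' * 20}] 100%")
```

The design intrusion is the point: a previously single-threaded backend now needs a thread, a lock, and shared mutable state just to keep the bar honest.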
For CLI applications, independent utilities (a la git) might lead to cleaner code, but sometimes you want to pass more state between phases, and a per-application shell (a la OpenSSL) might be better, since you can then implement variables and pass intermediate representations in memory instead of through disk files and sockets.
This misses the point. The fundamental fact of the software industry is that the demand for software exceeds the supply of good programmers by about an order of magnitude. Many of the irrational properties of the industry can be deduced from that.
You can't have an off day; there's no net to catch you.
1. First, develop some code
2. Think about whether it solved the problem
3. If not, repeat until it does
...which I fear is a rather more common variant.
Lately I've seen lots of FBP: Faith-Based Programming.