This seems much less about TDD and more about not letting a paradigm obscure your judgement. In this case TDD helped someone break through a bit of fog. Not disagreeing, but if that's the case then there should be additional ways to clear that fog.
I currently practice TDD where I work, but this part seems weird to me:
> "Try not to think of the bigger picture and don’t start create classes you think you might need."
You don't need TDD for this. Even when not doing TDD, if you're dividing your work up at all, you will naturally achieve this. It's only when you're trying to build the Golden Gate Bridge in one giant go that you run into situations where you're over-engineering, over-designing, and over-thinking prematurely.
Divide your problem into subproblems and solve them one at a time; this leads to a natural growth in complexity and minimizes poorly conceived abstractions. You don't need TDD for this to be a good idea.
I like TDD, but I dislike how religious it is. Not every problem is best solved via TDD. It's a tool in your toolbox, not a life philosophy. A strict adherence to testing has done wonders where I work (and we test iOS apps, no simple feat in the current ecosystem).
I'm not sure I understand the hostility towards TDD I sometimes see here. It's true there are zealots -- people who think that just because you've written tests you will magically, automatically have good design and never have any bugs. But surely there are plenty of reasonable people who recognize there are trade-offs, and who find that writing tests first helps them make sure they're writing the interface they want, ensures that test coverage is pretty good even if you don't have the budget/manpower for a whole QA team, and ultimately makes the code easier to maintain, since you'll be alerted to unintended consequences.
People like the first commenter on the article are even more bewildering. You have to test your code. Why wouldn't you automate that?
You see such hostility towards TDD for the same reason you see zealots. In general (I observe) the stronger some people argue for something, the stronger others will argue against.
Also, this SMBC comic expresses how I think these things often end up working out:
Agree. To me this smells like a lack of experience. TDD is what it is--for some parts of the process, I don't think it's appropriate--just like other "Extreme programming" practices.
But TDD certainly has helped me write cleaner, simpler code in the past, and having a set of regression tests to fall back on has saved me time and time again (which, let's be clear, is a separate benefit from TDD itself). In fact, I spent this morning writing a good bit of TDD code, and in the afternoon felt like I had to finish something up and foolishly eschewed the testing ("I'll write 'em later!", right?)... and I ended up spending more time mucking about with stupid errors, and ironically in the end wrote less code. This is anecdotal, but it has happened to me a good number of times. As a result, I tend to prefer TDD when I can use it. Sometimes I like spiking out a solution first, if I don't fully understand the algorithm, for example--but then I will often scrap it and re-write it with tests, which usually produces a clearer statement of the solution since I have already thought it through.
From which follows my next point: of course writing tests, and TDD in particular, does not preclude thinking hard about your code's structure. But without experience testing, and especially doing TDD/BDD, I feel that it is very hard to understand the tradeoffs and challenges involved, as well as the pluses. So when I hear either a vehement denouncement or statement of support, I tend to suspect a lack of experience more than anything else. Although, I suppose Uncle Bob is a special case...
There is a big difference between "true" TDD and simply writing the tests before you write the code.
According to the TDD evangelism I've seen, it means (1) writing tests one at a time, each of which should fail at the time it's written; (2) after adding each test, writing code that will pass that one test while being deliberately ignorant of any requirements that will show up in future tests; (3) refactoring, rewriting, etc., as needed as new tests are added.
I suppose this could be meant to mimic how code in long-lived systems evolves as requirements are discovered and changed. Or it could be meant to promote awareness of you-ain't-gonna-need-it and reduce over-engineering. Either way, it will also necessarily cause extreme short-sightedness and err strongly on the side of under-engineering, and in the case of mimicking natural code evolution it would be an example of cargo-cult thinking.
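For concreteness, one turn of that test-first cycle might look something like this (a minimal Python sketch; the `add` example and test names are made up for illustration, not taken from the article):

```python
import unittest

# (1) Write one failing test first -- "red".
class TestAdd(unittest.TestCase):
    def test_two_plus_three(self):
        self.assertEqual(add(2, 3), 5)

# (2) Write just enough code to pass that single test, deliberately
#     ignoring any requirements future tests might bring -- "green".
#     A strict first pass could literally be `return 5`; it only gets
#     generalised when a later test forces it to.
def add(a, b):
    return a + b

# (3) Add the next failing test, make it pass, refactor, repeat.
class TestAddAgain(unittest.TestCase):
    def test_one_plus_one(self):
        self.assertEqual(add(1, 1), 2)

if __name__ == "__main__":
    unittest.main()
```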
The first comment seems rather harsh. Right tool for the job: TDD invariably is that tool in my domain, platform, and language, but I wouldn't dogmatically insist on TDD being mandatory for every software system ever written, anywhere.
One thing I tell a lot of people who are getting into TDD is to focus on what you want the class's interface (AKA API) to look like. This means that you can start with the assertion and work your way backwards.
As the author alludes to, this doesn't mean building out the whole architecture. For me it means just starting with a simple case, making it pass, questioning whether you need to refactor or not, and then moving on to the next simplest case. A great example of this is Corey Haines's String Calculator https://vimeo.com/7961506
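A rough sketch of what "starting with the assertion and working backwards" could look like for the first couple of String Calculator cases (the class and method names here are my own illustration, not taken from the video):

```python
import unittest

# Assertion first: decide what the call and the expected result should
# look like, then let that drive the shape of the class's interface.
class TestStringCalculator(unittest.TestCase):
    def test_empty_string_returns_zero(self):
        self.assertEqual(StringCalculator().add(""), 0)

    def test_single_number_returns_its_value(self):
        self.assertEqual(StringCalculator().add("7"), 7)

# The simplest implementation the two tests above will tolerate;
# later cases (multiple numbers, delimiters) would grow it step by step.
class StringCalculator:
    def add(self, numbers):
        return int(numbers) if numbers else 0

if __name__ == "__main__":
    unittest.main()
```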
I agree that TDD is not for everyone, but I've found a lot of that is because people are either taking steps that are too big or not spending enough time refactoring their code AND their tests.
In an ideal world you'd have a test suite where, when something in your application breaks, exactly one test notifies you of it. I've found this is often very far from the case.
I don't do TDD all the time, but I've used it in a few strategic projects in day jobs. I was taught TDD in college by an excellent professor who used a similar approach to the one in the article (write a test, then write the least code that will pass it, then build from that), and I think it's a healthy and interesting way to look at TDD.
Another view of TDD that I share is that it can be considered a spec of (some strategic parts of) a system written in a programming language, which makes it unambiguous and - obviously - automatable. I've had good results using this approach to introduce TDD to non-technical managers.
I've also found that writing tests for a few corner cases and typical cases before coding is a good way to make the problem itself (what comes before the design of the first solution, as the author treats it in his article) clearer in one's head.
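As a hypothetical illustration of that "corner cases as an up-front spec" idea (the `parse_quantity` function and its rules are invented for the example, not from the article):

```python
import unittest

# Before any implementation exists, the corner cases and typical cases
# below double as an unambiguous, executable spec of the behaviour.
class ParseQuantitySpec(unittest.TestCase):
    # Typical case
    def test_plain_integer(self):
        self.assertEqual(parse_quantity("3"), 3)

    # Corner cases decided up front, before any design work
    def test_empty_string_means_zero(self):
        self.assertEqual(parse_quantity(""), 0)

    def test_surrounding_whitespace_is_ignored(self):
        self.assertEqual(parse_quantity("  12 "), 12)

    def test_negative_quantities_are_rejected(self):
        with self.assertRaises(ValueError):
            parse_quantity("-1")

# Written only after the spec above was pinned down.
def parse_quantity(text):
    text = text.strip()
    if not text:
        return 0
    value = int(text)
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

if __name__ == "__main__":
    unittest.main()
```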
Also very subjective, in that you're either a fan or you're not. I've written maybe 10 or 15 tests[1] in my entire career (which formally started in 1989), and have little use for it.
The same guy, further down the blog, in what I think is a fairer presentation of his position:
"TDD is the way to go, I’m absolutely sure of it. It brings more benefits than you might first think. As well as the simple fact that your code will be more reliable, it forces you to write your code in a way that is testable, which in turn will make it loosely coupled and modular. It forces you to think about what you want your code to do before you start writing it. It brings other additional benefits that I won’t bother to mention, suffice to say theres no excuse for not using it in modern development.
[...]
In short unit tests will be more hinderance than help if there is a lot of prototyping and learning in your application. In cases like this your best doing away with the tests and just writing the code. Wait until your code starts to stabilise and then go back and review the situation then.
As much as unit tests give you flexibility by allowing you to change code quickly with more assurance that you won’t break anything inadvertently. It can also really slow down prototyping and rapid development if you don’t have an understanding of your environment.
The reason I mention this is so many tutorials and advice on the internet tell you to not write a line of code until you’ve written a test. I think you need to be a little more pragmatic about it."
The problem I have with strong advocacy for TDD is the same problem I had with strong advocacy for OOP, and one which the fella above seems to be at least somewhat familiar with: it's good if you don't have a deeper understanding of the subject, but in the long run slavish adherence to a paradigm will cripple your ability to do things. This is exacerbated by the fact that, in the short run, you probably won't want to use the tests unless you understand why they'd be good - and if you understand why they'd be good, then just go ahead and write the program that way in the first place and save yourself a bunch of time.
The key fact with relation to the original article here is that he didn't see any of these advantages in TDD until he went to a talk where someone told him to do TDD another way and pointed out the advantages. Which rather implies that the advantages aren't so much a result of TDD as they are the result of already knowing something and levering it into a TDD framework...