That attempt is discussed in Ravi's article: http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-s...
Peter Norvig wrote a sudoku solver, not by using TDD, but using old-fashioned engineering: http://norvig.com/sudoku.html
Ron Jeffries' attempts:
And, as dessert (Ron is very frank about his failures):
"This is surely the most ignominious debacle of a project listed on my site, even though others have also not shipped. (Sudoku did not ship and will not. [...])"
Suppose Norvig had used TDD while writing his solver. He would have used constraint propagation and come up with a good solution that way too. Similarly, any other technique would have yielded just as poor a result for Jeffries. Knowing things in advance is not a technique. Norvig made this point in Coders At Work:
I think test-driven design is great. I do that a lot more than I used to do. But you can test all you want and if you don’t know how to approach the problem, you’re not going to get a solution.
The important question is: how can you benefit from existing techniques (like constraint propagation) that reduce your hard problem to an easy one if you don't know about them to begin with? It is a genuine conundrum. Norvig gives a very non-technical answer in Coders At Work: general education and intuition. To that list I suppose one could add: asking around. What you don't know about, other people may.
You don't exactly need to know any formal theory of constraint propagation to solve this, after all - I wrote a Sudoku solver many years ago based on the algorithm "place a random valid value in the cell with the least valid values; cross off any now-impossible values in the same row/column/block; backtrack if there are no valid moves" which is not impossible to come up with on the spot. (I did.)
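That improvised algorithm can be sketched in a few lines. This is a minimal Python sketch of the approach as described (grid representation and helper names are mine, not from any original solver):

```python
import random

def candidates(grid, r, c):
    """Values not already used in the cell's row, column, or 3x3 block."""
    block = {grid[(r // 3) * 3 + i][(c // 3) * 3 + j]
             for i in range(3) for j in range(3)}
    used = set(grid[r]) | {row[c] for row in grid} | block
    return [v for v in range(1, 10) if v not in used]

def solve(grid):
    """Fill the empty cell (0) with the fewest valid values; backtrack on dead ends."""
    empties = [(r, c) for r in range(9) for c in range(9) if grid[r][c] == 0]
    if not empties:
        return True  # no empty cells left: solved
    r, c = min(empties, key=lambda rc: len(candidates(grid, *rc)))
    values = candidates(grid, r, c)
    random.shuffle(values)  # "place a random valid value"
    for v in values:
        grid[r][c] = v
        if solve(grid):
            return True
    grid[r][c] = 0  # no valid move works here: undo and backtrack
    return False
```

Crossing off now-impossible values falls out of recomputing `candidates` on demand rather than keeping explicit pencil marks, which keeps the sketch short at some cost in speed.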
I also think, however fair you want to be to TDD methodology, that it's hard to get around the fact that Norvig didn't do intensive formal testing on his solution. Jeffries presumably applies TDD to lots of problem domains where solutions are obvious ("what's the cleanest way to wire this form to this database table"). TDD has to do more than "not prevent you from discovering solutions"; it also has to demonstrate value.
Ummm, he solved a hundred or so sudoku problems from Project Euler and verified they were correct, and did a performance test on a million random boards. That sounds like as much testing as one would need for correctness.
You might want to add automated tests for regressions, and maybe Norvig did so, but it would add little to the blog post (which is not really about testing at all).
I think the lesson to take away is that Jeffries never sat down and thought about the problem enough to come up with the key piece of insight from Norvig: "Coding up strategies like this is a possible route, but would require hundreds of lines of code (there are dozens of these strategies), and we'd never be sure if we could solve every puzzle." I personally consider this to be a failing of TDD: It encourages you to write code before you understand your problem. Others may take away different things.
TDD advocates tend to assume that you can always iterate your way to a solution. But what you get by iteration is sensitive to how you start. Technically, yes, you can evolve any program A into any other program B, but in practice no: the class of programs that A will evolve into is sharply constrained by A. I think this is true no matter how small A is. If that's correct, then initial conditions are a lot more important than it's fashionable to think they are.
Still, this isn't a weakness of TDD per se, but of iterative approaches in general, and it's something that advocates for iterative development of software (and other things) haven't yet taken into account. That's understandable, because the advent of iterative approaches was so necessary and has proven so valuable in other ways. These things come in historical waves.
A point about the Sudoku "debacle", though. The posts by Ron Jeffries are indicative of something other than TDD. They're indicative of mucking around in public. The difference isn't that other people don't make embarrassing mistakes; it's that they hide them. Why would Jeffries exhibit his so blatantly? If he were just a bad programmer or a zealot or a dishonest guru, obviously he'd have suppressed them. Not one of those types is ever too dumb to do that. (Typically, they're quite good at it. Maybe that's where their intelligence goes!) So something else is going on, and I found it unfair that no one who wrote about the Sudoku "debacle" ever asked what it might be.
My guess is that it's the original XP culture. These guys practice a let-it-all-hang-out style in which they highlight their mistakes and affect being stupider than they really are. Kent Beck affects this "I'm an idiot" style in the original TDD book. I say "affect" because they're not idiots, and I find the tone annoying. But I can see why they do it. It's an educational tactic to say "see, I make dumb mistakes too". They advocate a way of making software that embraces dumb mistakes as part of the process and encourages people to get over the fear of looking like an idiot. Underlying that is a psychological view of software development that can be traced back to Weinberg's "egoless programming".
I may be way off base because I haven't read the posts. I tried once and quickly lost interest. But I do think I recognize the culture.
Exactly. The criticism of TDD is (or should be, imo) about the "Driven" part, not so much the "Tests" part. Using conformance to an increasing number of tests as a hill-climbing metric and a substitute for deep thinking (sometimes expressed as "TDD is not about testing, it is about design") gets you stuck on local minima, a point Peter Seibel delineates clearly in his blog post on the subject.
I do disagree with you somewhat in that I think Ron Jeffries ( and most of the Agile evangelist/conference speaker/methodology-book-author types for that matter) are dishonest gurus who couldn't code their way out of a paper bag, but reasonable people can disagree here.
Also, it's another example of iterative change causing something to go in circles around a local maximum. One issue with TDD is that this can still feel like progress - your tests are still changing from red to green, after all.
The article presents a straw-man in the form of someone who believes TDD is the One True Way, and then attacks this by suggesting that the purpose of tests is—more or less—to prevent regressions. Given that this is an important thing, and since TDD isn’t really about that, TDD is clearly not the One True Methodology.
Quite honestly, that makes perfect sense to me. Love it or hate it, TDD is a design technique, not a regression prevention technique. Much of TDD is the creation of tests that validate an implementation, not a requirement. Thus, changing the implementation breaks the tests and you have to fix them. In that sense, the tests TDD produces are useful after the fact in the same way that Design By Contract’s contracts are useful after the fact. And if Design By Contract wasn’t somebody’s trade mark, I would honestly say that TDD is a way of practising DBC when you don’t have a language like Eiffel handy.
Is Design By Contract a useful technique? I think so. Is it the only technique to use? No. Are implementation tests the only tests needed in a project? No. Are they useful? Yes. Do they impose a maintenance overhead? Absolutely. Are there other paths to success? Absolutely.
So in summary, if I take the article at its face value of railing against TDD being the ONLY methodology, I agree. It isn’t. It isn’t even the only testing methodology. However, the TDD baby is staying right here while I toss out the fanaticism bath water. I believe that TDD is a useful tool and that one way to think about it is as a form of Design by Contract at the implementation level.
When I've worked on something in the early stages of design with someone who has used TDD (for example, if someone junior asks for help designing their project), I end up having to throw out or rewrite major parts of the tests, and explain to them why each test is useless. Usually the problem is one of two things: the functionality changed because their design changed (maybe a subclass where there wasn't one before, or the data structure goes from an array with constants to a hash/map), or the test was actually useless and was testing that a library worked.
Are there places where you'd want to do TDD? I'm sure there are, though I don't use it. Do I think less of programmers who use TDD? No. Do I think less of programmers who try to convince me to use TDD? It depends how. If they say "it produces better code" or some such bull, then yes: Simply writing tests first does not help me produce better code, nor have I seen it help other people in my company produce better code. If they don't know what they're doing, or aren't familiar with something, writing tests first just means they're writing broken tests, and increases the overhead for fixing it ("I can't change that, the test will break!")
Personally I prefer to write code and then write tests for the parts of it which I think are critical. I also prefer to write tests that ensure the code itself is solid, not just "does it return what I expect when I expect it to". By that I mean writing tests that feed the method bad data, nulls, edge cases, etc. The way I've seen TDD described/practiced is that tests are written to say "this is how I expect this method to perform under normal circumstances".
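To illustrate what I mean by hostile-input tests written after the fact (the `parse_age` function and all names here are hypothetical, invented for the example):

```python
import unittest

def parse_age(value):
    """Hypothetical function under test: parse a user-supplied age string."""
    if value is None:
        raise ValueError("age is required")
    age = int(value)  # raises ValueError on garbage like "abc"
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

class TestParseAge(unittest.TestCase):
    # Written after the implementation, aimed at bad data and edge
    # cases rather than the happy path TDD tends to start from.
    def test_rejects_none(self):
        with self.assertRaises(ValueError):
            parse_age(None)

    def test_rejects_garbage(self):
        with self.assertRaises(ValueError):
            parse_age("not a number")

    def test_boundaries(self):
        self.assertEqual(parse_age("0"), 0)
        self.assertEqual(parse_age("150"), 150)
        with self.assertRaises(ValueError):
            parse_age("151")
```

None of these tests pin down the implementation; they pin down how the code behaves when the world misbehaves.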
As you said, it does seem like TDD tries to be DBC. However, it also seems like the people selling TDD sell it as the One True Way, not as another form of DBC with all the benefits/drawbacks of DBC.
I’m sorry that you are frustrated with your work environment; however, we seem to have a misunderstanding about the form of the OP’s original argument.
If you read it as:
(a) There exist TDD OneTrueMethodologists, and (b) TDD is not the One True Methodology
Therefore: (c) The OneTrueMethodologists are wrong.
Then this argument sets up One True Methodologists and attacks their belief that TDD is the One True Methodology. I consider this a well-formed argument, even though obviously half of your company disagrees with its conclusion because they disagree with (b).
The other way to read the OP is:
(a) There exist TDD OneTrueMethodologists, and (b) the OneTrueMethodologists are wrong.
Therefore: (c) Using TDD is criminal.
This argument attacks TDD by showing that TDD OneTrueMethodologists are wrong. I do not consider this a well-formed argument against TDD, because I do not believe that using TDD is synonymous with believing that TDD is the One True Methodology. There are also some other small issues, such as the question of whether the only tests in a project are those produced by TDD.
I have seen projects that use TDD part of the time, and use TDD as well as other types of automated tests. An argument against OneTrueMethodologists that believe TDD is the only way and that other tests are not useful or that other design practices are secondary to TDD is not really an argument against using TDD in a wider context.
I personally read the argument as taking the second form. If you read it as taking the first form, I can understand your objection to the term “strawman”.
"Writing tests first as a tool to be deployed where it works is "Developer Driven Testing" - focusing on making the developer more productive by choosing the right tool for the job. Generalizing a bunch of testing rules and saying This Is The One True Way Even When It Isn't - that's not right."
I feel that some of the comments on this thread already illustrate that some people believe TDD to be The One True Way, and I've seen the same reception elsewhere too. I've been bored to tears (and annoyed) when clients I'm working with have brought in training consultants who frame TDD as The Only Way Forward. It's certainly possible that I've got some kind of selection bias going on here, but I really believe that most times I see mentions of TDD, I see it mentioned in the context of "YOU MUST DO EVERYTHING LIKE THIS".
I tried hard in the article to make a real distinction between "Writing Tests First" as a tool and "Test-Driven Development" as an all-encompassing philosophy. I actually have a separate article sketched out about design by contract, too. Hopefully as I write more I'll get better at articulating my points!
More often than not, developers lack discipline and instead adopt a high-testosterone "cowboy coding" attitude. To make matters worse, companies often don't have a culture of improving the code base.
Now, if I am a consultant being hired (I'm not a software consultant, by the way; I'm always working on products) and looking at the culture of the company, or at the trend in software development out there (more patterns, more code, hipster Actor-styled, functional, currying, and less about testing techniques), what choice do I have other than to sing the TDD song?
FYI, I don't care if you write test-first or test-last. As long as your code is testable and there's no silly bug found because you forgot to write a test, I'm cool with that. Otherwise, if I'm your supervisor, I'll make sure you don't go home until you change your coding style.
Forcing TDD on a company that does not have a good software development practice and culture tends to be one way to ensure that, going forward, things don't get worse. Often the problem is the excuse of "I can't test this because there's a hard dependency on the database". So if you're a consultant hired to turn around a company with 20 or so developers who lack discipline, this is one way to make things a little bit better, one bug fix at a time.
Would you suggest a double standard? The less disciplined programmers must practice TDD, while the more experienced programmers don't have to?
Isn't it true what they always say? People are the problem.
Conversely, some people believe that anything that is a direct result of adhering to TDD is automatically well designed, regardless of other metrics.
Being dogmatic and using One True Approach for everything is not good, but if we are to advance, we need to be able to define specifically what is good and what is bad. Then fruitful discussion can follow.
A blanket accusation like this helps no one and is just crying for attention, IMO.
Writing tests first is great for - for example - writing a regression test for a bug you've found. Or for adding a small piece of well-defined functionality to an existing interface.
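For instance (a hypothetical bug in a hypothetical `word_count` function - the names and the bug are invented for illustration), the regression test goes in first, while the bug report is in hand, and then pins the fix forever:

```python
def word_count(text):
    """Count whitespace-separated words.

    A buggy earlier version used text.split(" "), which returns [""]
    for the empty string and so reported 1 word. Splitting with no
    argument treats runs of whitespace correctly and yields [] for "".
    """
    return len(text.split())

# Regression tests written before the fix: they fail on the buggy
# version and pass on this one.
assert word_count("") == 0
assert word_count("   ") == 0
assert word_count("one two  three") == 3
```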
But that's writing tests first in a specific instance, not Test-Driven Development, which is ALWAYS writing tests first.
Writing tests first falls down as soon as your ideas about what needs to be implemented may change as implementation progresses. You find yourself reluctant to change your interface or implementation because dang-it, the test said it should work that way, and now you don't want to change the test. Or you wrote a test for a simple piece of sub-functionality, and then it turns out that it relied on an architecture you don't want to use and ...
And if you're not a senior dev with a bucket load of experience, you're unlikely at that point to want to go back and change things.
TDD definitely has been _very_ positive for my software and there really is no comparison between what I did before and what I've done since as far as bug rates, customer satisfaction, and time to completion. I'm sure you'd agree that those are positive outcomes.
I appreciate you've had lousy examples and that maybe even the majority of TDD practiced is a lousy example. However, the examples you give and what you're describing from these consultants is not remotely the way I've done TDD for the last 5 years. I work extremely hard on refining the concept of testing and when and where to apply different approaches.
As for "you're unlikely at that point to want to go back and change things": I make way more changes in my code and tests with TDD than without. Granted, I did spend a year learning how to write tests that hit the right boundaries and wouldn't break everything when I changed the architecture significantly. I think any developer needs to dive deep into HOW TO TEST anyway, test-first or after.
Also, I don't know why anyone would have a hard time deleting code that no longer applied. There is always source control if you really feel like you need it again.
As for "and then it turns out that you relied on an architecture that you don't want to use":
Many TDD practitioners have this concept called "spikes": code that you write without tests to get a good idea of how a particular algorithm will work for you and what approach you want to take. However, it's throwaway code that is often very procedural and is more about thinking through an issue. This minimizes some of the shifting-architecture pain you're referring to.
1) TDD encourages testing the smallest unit of functionality possible
2) When you are first developing a large piece of functionality from scratch, you often need to upend the entire architecture a few times as you solidify the design. These revamps are the type of thing that cannot be done incrementally as a large number of small changes, and doing design on paper in advance only goes so far. These architectural revamps are best done early, as they are far easier when you have 1,000 lines of code than when you have 100,000 lines of code.
3) smallest-unit-of-functionality tests generally get thrown away in system-wide revamps
4) Therefore, the cost of these high-level revamps is greatly increased if you need to throw away all these tests every time, hindering productivity.
5) Because of that, I write tests afterwards, once I am comfortable that the architecture has stabilized.
TDD is a methodology, not a religion. Do what works best for you and the specific project you are working on.
It kills the flexibility of your code way too early in development. At best it doubles the inertia preventing any code change. If you know the ideas of "build one to throw away" or "you don't know what you're building until it is built", you understand that you can design as much as you'd like, but at the end of the day you don't know how good your implementation design is until you actually implement it. Writing tests first locks you into implementation details before you have even used the implementation to know if they're a good idea.
The author won't convince any TDD disciples to change their ways by attacking them. A reasoned post that objectively weighs the pros and cons might. But even if the author could effect change, is it really the case that no one would ever benefit from TDD? It might certainly be used as a crutch by some developers, but if it helps them develop better software, what's the harm? That's their issue to deal with, not anyone else's.
The goal is to write good programs. There are many paths there, and TDD, anti-TDD, or part-time testing whenever you feel like it are all viable. If TDD can abstract away a level of thought with regard to testing, speed up the process, and help someone write better code, then that's great.
Thanks for the feedback. The next article planned is a detailed discussion of how to retro-fit an automated test-suite to a web-app that doesn't already have one.
To summarize, the author seems to have the One True Programmer Evaluation Methodology, and is able to determine that someone is shitty, inexperienced, or a criminal from their answer to a single question. How is this different from someone who says that they can determine whether your project is shitty from the answer to the question “Do you use TDD?”
My feeling is that the author has a great deal in common with people who say that X is the One True Programming Methodology, for any value of X, including TDD.
Once you establish that 'there are no silver bullets', you don't have to go around to every bullet and say 'that bullet isn't silver, either!'
I liked this part; it made me laugh. I also get red flags whenever I hear "X is the one true way to do Y" statements. Personally, I don't think that there is much in life that is so black and white that there could be "one true way" to do anything.
My understanding of the original agile methodologies was that each team/company needed to do what worked for them. Sure, you could follow some book to the letter and hire some certified SCRUM master but odds are, some deviations from the written plan are needed for the best results.
Either way, I'm generally a fan of TDD, test automation and metrics ... where they make sense and where they aren't going to be abused. Use tools and methodologies that make sense for the current situation, not just because some book/person tells you that you'll fail unless you use them.
It's first and foremost a design tool. Secondary to that it tests functionality of your code.
TDD leads to better designed software. Period. No it's not the only way to design software, nor should it be the only tool used when designing the software you're writing, but it will show problems with any design you've got and help you fix them.
If you're going to call it flame-bait, could you expand on some of your ideas or address some of the points made in the article?
Care to expand on that? Other than forcing low coupling, I don't really see what TDD does for the design of the code, especially because it emphasizes unit testing, and those are pretty small in the scope of overall design. And even the low coupling is debatable, a good test framework is going to have enough hooks to be able to isolate the unit under test to the point that the surrounding design doesn't really matter to the test.
As for "especially because it emphasizes unit testing" that doesn't mean it frowns on integration or acceptance testing. In fact most practitioners I know write high level end to end tests for the core functionality.
I think this guy doesn't understand the concept of mocking and stubbing in your tests.
TDD works great in bottom up development ("zooming down to tiny bits of functionality"), but it also works great in top down via stubbing.
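A sketch of what top-down stubbing looks like in practice, using Python's `unittest.mock` (the `top_scores` function and the storage layer are hypothetical, made up for the example):

```python
from unittest.mock import Mock

def top_scores(store, n):
    """High-level function written first; the storage layer doesn't exist yet."""
    return sorted(store.all_scores(), reverse=True)[:n]

# Stub out the not-yet-written storage layer so the top-down
# test can run today, before any database code exists.
store = Mock()
store.all_scores.return_value = [12, 99, 45, 7]

assert top_scores(store, 2) == [99, 45]
```

The high-level behavior gets tested and pinned down first; the real `all_scores` implementation can be driven out later, bottom-up, against the same interface.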
I feel like his complaint isn't against TDD, but about badly written tests. I think we can all agree: Crappy code bad. Nice code good. Both in tests and in production.
My workflow is usually:
1. Write the tests and design the module/class/whatever at the same time.
2. Code it. (Repeat)
You made them more than clear. People just want to argue the headline. Good post, BTW - I'm subscribing to your RSS feed.
Tests should never dictate the design of an application.
"You can down-vote all you want, only thing your post shows is that you are struggling with insert development practice here development. You should learn it well, and then you will be in position to say what part you don't think is useful."
The blog post, on the other hand, is not generic. I've had similar experiences: in many cases TDD makes trivial coding issues influence overall application design (in a bad way), instead of application design driving those trivial decisions. I can even tell you when this happens: when the complexity in the application comes mainly from structuring large chunks of mostly trivial functionality. I've had pretty positive TDD experiences with other types of applications - ones where the complexity comes mainly from some data-processing algorithms, while the overall structure of the app is fairly simple. (Think web app vs. language parser.)