That attempt is discussed in Ravi's article: http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-s...
Peter Norvig wrote a sudoku solver, not by using TDD, but using old-fashioned engineering: http://norvig.com/sudoku.html
Ron Jeffries' attempts:
And, as dessert (Ron is very frank about his failures):
"This is surely the most ignominious debacle of a project listed on my site, even though others have also not shipped. (Sudoku did not ship and will not. [...])"
Suppose Norvig had used TDD while writing his solver. He would still have used constraint propagation and come up with a good solution that way too. Conversely, any other technique would have yielded just as poor a result for Jeffries. Knowing things in advance is not a technique. Norvig made this point in Coders At Work:
I think test-driven design is great. I do that a lot more than I used to do. But you can test all you want and if you don’t know how to approach the problem, you’re not going to get a solution.
The important question is: how can you benefit from existing techniques (like constraint propagation) that reduce your hard problem to an easy one if you don't know about them to begin with? It is a genuine conundrum. Norvig gives a very non-technical answer in Coders At Work: general education and intuition. To that list I suppose one could add: asking around. What you don't know about, other people may.
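For readers who haven't seen constraint propagation in this setting, here is a minimal sketch of just one propagation rule (elimination), not Norvig's full solver: each cell holds the set of digits still possible, and whenever a cell is down to one digit, that digit is struck from all of its peers, which may cascade.

```python
# Hedged sketch of one constraint-propagation rule for Sudoku (elimination).
# This is NOT Norvig's complete solver, just the core idea: a grid maps each
# cell (row, col) to the set of digits still possible there; a settled cell
# removes its digit from every peer, possibly settling further cells.

def peers(r, c):
    """All cells sharing a row, column, or 3x3 box with (r, c)."""
    same_row = {(r, cc) for cc in range(9)}
    same_col = {(rr, c) for rr in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    same_box = {(rr, cc) for rr in range(br, br + 3) for cc in range(bc, bc + 3)}
    return (same_row | same_col | same_box) - {(r, c)}

def propagate(grid):
    """Apply elimination until nothing changes.
    Returns False if some cell loses all candidates (a contradiction)."""
    changed = True
    while changed:
        changed = False
        for cell, cands in grid.items():
            if len(cands) == 1:
                digit = next(iter(cands))
                for p in peers(*cell):
                    if digit in grid[p]:
                        grid[p] = grid[p] - {digit}
                        if not grid[p]:
                            return False  # contradiction: dead end
                        changed = True
    return True
```

The reason this "reduces a hard problem to an easy one" is that every digit eliminated here is a branch the eventual search never has to explore.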
You don't exactly need to know any formal theory of constraint propagation to solve this, after all. I wrote a Sudoku solver many years ago based on the algorithm "place a random valid value in the cell with the fewest valid values; cross off any now-impossible values in the same row/column/block; backtrack if there are no valid moves", which is not impossible to come up with on the spot. (I did.)
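That algorithm fits in a screenful. A sketch of it, assuming a 9x9 board as a list of lists with 0 for empty, and trying candidates in order rather than at random:

```python
# Sketch of the algorithm described above (not the commenter's actual code):
# always fill the empty cell with the fewest legal values, and backtrack
# when a cell has no legal value left. Candidates are tried in order here
# rather than randomly; that changes which solution is found first, not
# correctness.

def legal_values(board, r, c):
    """Digits not already used in (r, c)'s row, column, or 3x3 box."""
    used = set(board[r]) | {board[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {board[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return [d for d in range(1, 10) if d not in used]

def solve(board):
    """Backtracking search; mutates board in place, returns True on success."""
    empties = [(r, c) for r in range(9) for c in range(9) if board[r][c] == 0]
    if not empties:
        return True  # no empty cells: solved
    # Pick the most constrained cell (fewest legal values).
    r, c = min(empties, key=lambda rc: len(legal_values(board, *rc)))
    for d in legal_values(board, r, c):
        board[r][c] = d
        if solve(board):
            return True
        board[r][c] = 0  # undo and backtrack
    return False
```

Picking the most constrained cell first is the same "minimum remaining values" heuristic Norvig uses for his search step; the difference is that his constraint propagation prunes far more before the search ever branches.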
I also think, however fair you want to be to TDD methodology, that it's hard to get around the fact that Norvig didn't do intensive formal testing on his solution. Jeffries presumably applies TDD to lots of problem domains where solutions are obvious ("what's the cleanest way to wire this form to this database table"). TDD has to do more than "not prevent you from discovering solutions"; it also has to demonstrate value.
Ummm, he solved a hundred or so sudoku problems from Project Euler and verified they were correct, and did a performance test on a million random boards. That sounds like as much testing as one would need for correctness.
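And verifying a Sudoku solution is far cheaper than finding one: every row, column, and box must be a permutation of 1..9. A checker along these lines (my sketch, not Norvig's code) is the kind of oracle you'd verify those hundred puzzles against:

```python
# Hedged sketch of a solution checker: a completed 9x9 board is valid
# iff every row, column, and 3x3 box contains each digit 1..9 exactly once.

def is_valid_solution(board):
    """True iff board is a completed, legal 9x9 Sudoku grid."""
    digits = set(range(1, 10))
    rows_ok = all(set(row) == digits for row in board)
    cols_ok = all({board[r][c] for r in range(9)} == digits for c in range(9))
    boxes_ok = all(
        {board[br + i][bc + j] for i in range(3) for j in range(3)} == digits
        for br in (0, 3, 6) for bc in (0, 3, 6)
    )
    return rows_ok and cols_ok and boxes_ok
```

Because the checker is trivial and the solver is not, "run the solver on a pile of real puzzles and check the outputs" covers correctness about as well as any hand-written test suite would.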
You might want to add automated tests for regressions, and maybe Norvig did so, but it would add little to the blog post (which is not really about testing at all).
I think the lesson to take away is that Jeffries never sat down and thought about the problem enough to come up with the key piece of insight from Norvig: "Coding up strategies like this is a possible route, but would require hundreds of lines of code (there are dozens of these strategies), and we'd never be sure if we could solve every puzzle." I personally consider this to be a failing of TDD: It encourages you to write code before you understand your problem. Others may take away different things.
TDD advocates tend to assume that you can always iterate your way to a solution. But what you get by iteration is sensitive to how you start. Technically, yes, you can evolve any program A into any other program B, but in practice no: the class of programs that A will evolve into is sharply constrained by A. I think this is true no matter how small A is. If that's correct, then initial conditions are a lot more important than it's fashionable to think they are.
Still, this isn't a weakness of TDD per se, but of iterative approaches in general, and it's something that advocates for iterative development of software (and other things) haven't yet taken into account. That's understandable, because the advent of iterative approaches was so necessary and has proven so valuable in other ways. These things come in historical waves.
A point about the Sudoku "debacle", though. The posts by Ron Jeffries are indicative of something other than TDD. They're indicative of mucking around in public. The difference isn't that other people don't make embarrassing mistakes; it's that they hide them. Why would Jeffries exhibit his so blatantly? If he were just a bad programmer or a zealot or a dishonest guru, obviously he'd have suppressed them. Not one of those types is ever too dumb to do that. (Typically, they're quite good at it. Maybe that's where their intelligence goes!) So something else is going on, and I found it unfair that no one who wrote about the Sudoku "debacle" ever asked what it might be.
My guess is that it's the original XP culture. These guys practice a let-it-all-hang-out style in which they highlight their mistakes and affect being stupider than they really are. Kent Beck affects this "I'm an idiot" style in the original TDD book. I say "affect" because they're not idiots, and I find the tone annoying. But I can see why they do it. It's an educational tactic to say "see, I make dumb mistakes too". They advocate a way of making software that embraces dumb mistakes as part of the process and encourages people to get over the fear of looking like an idiot. Underlying that is a psychological view of software development that can be traced back to Weinberg's "egoless programming".
I may be way off base because I haven't read the posts. I tried once and quickly lost interest. But I do think I recognize the culture.
Exactly. The criticism of TDD is (or should be, imo) about the "Driven" part, not so much the "Tests" part. Using conformance to an increasing number of tests as a hill-climbing metric and a substitute for deep thinking (sometimes expressed as "TDD is not about testing, it is about design") gets you stuck on local maxima, a point Peter Seibel delineates clearly in his blog post on the subject.
I do disagree with you somewhat in that I think Ron Jeffries (and most of the Agile evangelist/conference speaker/methodology-book-author types, for that matter) are dishonest gurus who couldn't code their way out of a paper bag, but reasonable people can disagree here.
Also, it's another example of iterative change causing something to go in circles around a local maximum. One issue with TDD is that this can still feel like progress: your tests are still changing from red to green, after all.