Test Driven Development? You've got to be kidding me... (writemoretests.com)
49 points by peteretep 2006 days ago | hide | past | web | 48 comments | favorite



Discussions about TDD always make me think of Ron Jeffries' attempt to develop a sudoku-solver using TDD.

That attempt is discussed in Ravi's article: http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-s...

Peter Norvig wrote a sudoku solver, not by using TDD, but using old-fashioned engineering: http://norvig.com/sudoku.html

Ron Jeffries' attempts:

http://xprogramming.com/articles/sudokumusings/

http://xprogramming.com/articles/oksudoku/

http://xprogramming.com/articles/sudoku2

http://xprogramming.com/articles/sudoku4

http://xprogramming.com/articles/sudoku5

And, as dessert (Ron is very frank about his failures):

http://xprogramming.com/articles/roroncemore/

"This is surely the most ignominious debacle of a project listed on my site, even though others have also not shipped. (Sudoku did not ship and will not. [...])"


"Old-fashioned engineering vs. TDD" is misleading. The difference is that Norvig knew in advance how to solve the problem - constraint propagation - while Jeffries presumably did not.

Suppose Norvig had used TDD while writing his solver. He would have used constraint propagation and come up with a good solution that way too. Similarly, any other technique would have yielded just as poor a result for Jeffries. Knowing things in advance is not a technique. Norvig made this point in Coders At Work:

I think test-driven design is great. I do that a lot more than I used to do. But you can test all you want and if you don’t know how to approach the problem, you’re not going to get a solution.

The important question is: how can you benefit from existing techniques (like constraint propagation) that reduce your hard problem to an easy one if you don't know about them to begin with? It is a genuine conundrum. Norvig gives a very non-technical answer in Coders At Work: general education and intuition. To that list I suppose one could add: asking around. What you don't know about, other people may.


TDD seems to encourage you to dive straight into implementation. For certain problems - e.g. a Sudoku solver - it's much more effective to think through the entire algorithm before starting to write code, and I think that the above debacle supports this argument.

You don't exactly need to know any formal theory of constraint propagation to solve this, after all - I wrote a Sudoku solver many years ago based on the algorithm "place a random valid value in the cell with the least valid values; cross off any now-impossible values in the same row/column/block; backtrack if there are no valid moves" which is not impossible to come up with on the spot. (I did.)
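
The algorithm described above can be sketched in a few lines. This is a hypothetical Python rendering, not the commenter's actual code; `grid` is assumed to be a 9x9 list of lists with 0 marking an empty cell.

```python
def candidates(grid, r, c):
    """Values not already used in the cell's row, column, or 3x3 block."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return [v for v in range(1, 10) if v not in used]

def solve(grid):
    """Fill the most-constrained empty cell first; backtrack on dead ends."""
    empties = [(r, c) for r in range(9) for c in range(9) if grid[r][c] == 0]
    if not empties:
        return True  # no empty cells left: solved
    # "the cell with the least valid values"
    r, c = min(empties, key=lambda rc: len(candidates(grid, *rc)))
    for v in candidates(grid, r, c):
        grid[r][c] = v
        if solve(grid):
            return True
        grid[r][c] = 0  # backtrack: this value led to a dead end
    return False
```

The most-constrained-cell heuristic does most of the work here; plain left-to-right backtracking also terminates, just more slowly on hard puzzles.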


I was actually thinking the opposite thing: that TDD gives you another excuse to model your problem instead of moving towards solving it; it's an analysis paralysis trap.

I also think, however fair you want to be to TDD methodology, that it's hard to get around the fact that Norvig didn't do intensive formal testing on his solution. Jeffries presumably applies TDD to lots of problem domains where solutions are obvious ("what's the cleanest way to wire this form to this database table"). TDD has to do more than "not prevent you from discovering solutions"; it also has to demonstrate value.


> Norvig didn't do intensive formal testing on his solution.

Ummm, he solved a hundred or so sudoku problems from Project Euler and verified they were correct, and did a performance test on a million random boards. That sounds like as much testing as one would need for correctness.

You might want to add automated tests for regressions, and maybe Norvig did so, but it would add little to the blog post (which is not really about testing at all).

I think the lesson to take away is that Jeffries never sat down and thought about the problem enough to come up with the key piece of insight from Norvig: "Coding up strategies like this is a possible route, but would require hundreds of lines of code (there are dozens of these strategies), and we'd never be sure if we could solve every puzzle." I personally consider this to be a failing of TDD: It encourages you to write code before you understand your problem. Others may take away different things.


I'm sorry, I was imprecise; I was referring to piecemeal unit testing.


I agree that there is a serious criticism here, and that sufficiently deep thinking is underrated.

TDD advocates tend to assume that you can always iterate your way to a solution. But what you get by iteration is sensitive to how you start. Technically, yes, you can evolve any program A into any other program B, but in practice no: the class of programs that A will evolve into is sharply constrained by A. I think this is true no matter how small A is. If that's correct, then initial conditions are a lot more important than it's fashionable to think they are.

Still, this isn't a weakness of TDD per se, but of iterative approaches in general, and it's something that advocates for iterative development of software (and other things) haven't yet taken into account. That's understandable, because the advent of iterative approaches was so necessary and has proven so valuable in other ways. These things come in historical waves.

A point about the Sudoku "debacle", though. The posts by Ron Jeffries are indicative of something other than TDD. They're indicative of mucking around in public. The difference isn't that other people don't make embarrassing mistakes; it's that they hide them. Why would Jeffries exhibit his so blatantly? If he were just a bad programmer or a zealot or a dishonest guru, obviously he'd have suppressed them. Not one of those types is ever too dumb to do that. (Typically, they're quite good at it. Maybe that's where their intelligence goes!) So something else is going on, and I found it unfair that no one who wrote about the Sudoku "debacle" ever asked what it might be.

My guess is that it's the original XP culture. These guys practice a let-it-all-hang-out style in which they highlight their mistakes and affect being stupider than they really are. Kent Beck affects this "I'm an idiot" style in the original TDD book. I say "affect" because they're not idiots, and I find the tone annoying. But I can see why they do it. It's an educational tactic to say "see, I make dumb mistakes too". They advocate a way of making software that embraces dumb mistakes as part of the process and encourages people to get over the fear of looking like an idiot. Underlying that is a psychological view of software development that can be traced back to Weinberg's "egoless programming".

I may be way off base because I haven't read the posts. I tried once and quickly lost interest. But I do think I recognize the culture.


"I agree that there is a serious criticism here, and that sufficiently deep thinking is underrated."

Exactly. The criticism of TDD is (or should be, imo) about the "Driven" part, not so much the "Tests" part. Using conformance to an increasing number of tests as a hill-climbing metric and a substitute for deep thinking (a stance sometimes expressed as "TDD is not about testing, it is about design") gets you stuck on local maxima, a point Peter Seibel delineates clearly in his blog post on the subject.

I do disagree with you somewhat in that I think Ron Jeffries (and most of the Agile evangelist/conference-speaker/methodology-book-author types, for that matter) are dishonest gurus who couldn't code their way out of a paper bag, but reasonable people can disagree here.


While I find Jeffries's attempt to make a sudoku solver with TDD a bit embarrassing, he's still done more to contribute to the debate than people who are standing on the sidelines being smug (or posting snarky comments on github). I admire his openness and humility.

Also, it's another example of iterative change causing something to go in circles around a local maximum. One issue with TDD is that this can still feel like progress - your tests are still changing from red to green, after all.


I don't know how much TDD matters here. I read 2 articles, and it looks like he doesn't really know how to write a sudoku solver. Unless he was going to invest some time in learning constraint propagation and backtracking, writing or not writing tests would have ended up in the same state.


Having responded to the tone elsewhere, I’ll respond to the content here :-)

The article presents a straw-man in the form of someone who believes TDD is the One True Way, and then attacks this by suggesting that the purpose of tests is—more or less—to prevent regressions. Given that this is an important thing, and since TDD isn’t really about that, TDD is clearly not the One True Methodology.

Quite honestly, that makes perfect sense to me. Love it or hate it, TDD is a design technique, not a regression prevention technique. Much of TDD is the creation of tests that validate an implementation, not a requirement. Thus, changing the implementation breaks the tests and you have to fix them. In that sense, the tests TDD produces are useful after the fact in the same way that Design By Contract’s contracts are useful after the fact. And if Design By Contract weren’t somebody’s trademark, I would honestly say that TDD is a way of practising DBC when you don’t have a language like Eiffel handy.
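
The DBC analogy can be illustrated with a toy example (entirely hypothetical names; the point is only that the same contract can live either in assertions inside the implementation or in tests written up front):

```python
# Design By Contract style: the contract lives in the implementation.
def withdraw(balance, amount):
    assert amount > 0, "precondition: amount must be positive"
    assert amount <= balance, "precondition: sufficient funds"
    new_balance = balance - amount
    assert new_balance >= 0, "postcondition: balance never goes negative"
    return new_balance

# TDD style: the same contract expressed as tests written first.
def test_withdraw_reduces_balance():
    assert withdraw(100, 30) == 70

def test_withdraw_rejects_overdraft():
    raised = False
    try:
        withdraw(10, 30)
    except AssertionError:
        raised = True
    assert raised, "expected the overdraft precondition to fail"
```

Either way the contract is checked mechanically; the difference is whether it travels with the code or sits alongside it.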

Is Design By Contract a useful technique? I think so. Is it the only technique to use? No. Are implementation tests the only tests needed in a project? No. Are they useful? Yes. Do they impose a maintenance overhead? Absolutely. Are there other paths to success? Absolutely.

So in summary, if I take the article at its face value of railing against TDD being the ONLY methodology, I agree. It isn’t. It isn’t even the only testing methodology. However, the TDD baby is staying right here while I toss out the fanaticism bath water. I believe that TDD is a useful tool and that one way to think about it is as a form of Design by Contract at the implementation level.


Straw man? Where I work about half the developers believe that TDD is the One True Way. Unfortunately, this includes the CTO, who wants to force every developer to practice TDD. (along with other 'agile' methods)

When I've worked on something in the early stages of design with someone who has used TDD (for example, if someone junior asks for help designing their project), I end up having to throw out or rewrite major parts of the tests, and explain to them why that test is useless. Usually the problem is one of two things: the functionality changed because their design changed (maybe a subclass where there wasn't before, or the data structure goes from an array with constants to a hash/map) or the test was actually useless and was testing that a library worked.

Are there places where you'd want to do TDD? I'm sure there are, though I don't use it. Do I think less of programmers who use TDD? No. Do I think less of programmers who try to convince me to use TDD? It depends how. If they say "it produces better code" or some such bull, then yes: Simply writing tests first does not help me produce better code, nor have I seen it help other people in my company produce better code. If they don't know what they're doing, or aren't familiar with something, writing tests first just means they're writing broken tests, and increases the overhead for fixing it ("I can't change that, the test will break!")

Personally I prefer to write code and then write tests for the parts of it which I think are critical. I also prefer to write tests that ensure the code itself is solid, not just "does it return what I expect when I expect it to". By that I mean tests that feed the method bad data, nulls, edge cases, etc. The way I've seen TDD described/practiced is that tests are written to say "this is how I expect this method to perform under normal circumstances".
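
That test-last, hostile-input style might look like this (a hypothetical `parse_age` function, not code from the thread):

```python
def parse_age(value):
    """Parse an age field; return an int in [0, 150], or None for bad input."""
    try:
        age = int(value)
    except (TypeError, ValueError):
        return None
    return age if 0 <= age <= 150 else None

# Tests written after the fact, aimed at hostile input and boundaries
# rather than the happy path.
def test_parse_age_edge_cases():
    assert parse_age("42") == 42        # normal case, for completeness
    assert parse_age(None) is None      # null input
    assert parse_age("abc") is None     # garbage
    assert parse_age("-1") is None      # below range
    assert parse_age("151") is None     # above range
    assert parse_age("0") == 0          # boundary value
```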

As you said, it does seem like TDD tries to be DBC. However, it also seems like the people selling TDD sell it as the One True Way, not as another form of DBC with all the benefits/drawbacks of DBC.


Straw man? Where I work about half the developers believe that TDD is the One True Way. Unfortunately, this includes the CTO, who wants to force every developer to practice TDD. (along with other 'agile' methods)

I’m sorry that you are frustrated with your work environment, however we seem to have a misunderstanding about the form of the OP’s original argument.

If you read it as:

(a) There exist TDD OneTrueMethodologists, and (b) TDD is not the One True Methodology

Therefore:

(c) The OneTrueMethodologists are wrong.

Then this argument sets up One True Methodologists and attacks their belief that TDD is the One True Methodology. I consider this a well-formed argument, even though obviously half of your company disagrees with its conclusion because they disagree with (b).

The other way to read the OP is:

(a) There exist TDD OneTrueMethodologists, and (b) TDD is not the One True Methodology

Therefore: (c) Using TDD is criminal.

This argument attacks TDD by showing that TDD OneTrueMethodologists are wrong. I do not consider this a well-formed argument against TDD, because I do not believe that using TDD is synonymous with believing that TDD is the One True Methodology. There are also some other small issues, such as the question of whether the only tests in a project are those produced by TDD.

I have seen projects that use TDD part of the time, and use TDD as well as other types of automated tests. An argument against OneTrueMethodologists that believe TDD is the only way and that other tests are not useful or that other design practices are secondary to TDD is not really an argument against using TDD in a wider context.

I personally read the argument as taking the second form. If you read it as taking the first form, I can understand your objection to the term “strawman.”


I read it as the first form. I was trying to offer at least one example of why I do not think TDD is the OTM. I read the criminal part of the OP as hyperbole. The article does say there are uses for TDD (not sure how to do quotes on HN):

"Writing tests first as a tool to be deployed where it works is "Developer Driven Testing" - focusing on making the developer more productive by choosing the right tool for the job. Generalizing a bunch of testing rules and saying This Is The One True Way Even When It Isn't - that's not right."


Thanks again for taking the time to share your comments.

I feel that some of the comments on this thread already illustrate that some people believe TDD to be The One True Way, as does its reception elsewhere. I've been bored to tears (and annoyed) when clients I'm working with have brought in training consultants who frame TDD as The Only Way Forward. It's certainly possible that I've got some kind of selection bias going on here, but most times I see TDD mentioned, it's in the context of "YOU MUST DO EVERYTHING LIKE THIS".

I tried hard to make a real distinction in the article between "Writing Tests First" as a tool and "Test-Driven Development" as an all-encompassing philosophy. I actually have a separate article sketched out about design by contract, too. Hopefully as I write more I'll get better at articulating my points!


Sometimes it's about trust between the two parties.

More often than not, developers tend to lack discipline and instead have a high-testosterone "cowboy coding" attitude. To make matters worse, companies often don't have a culture of improving the code base.

Now, if I am a consultant being hired (I'm not a software consultant by the way, am always working on products) and looking at the culture of the company or the trend in software development out there (more patterns, more code, hipster Actor-styled, functional, currying, and less about testing techniques), what choice do I have other than to sing the TDD song?

FYI, I don't care if you write test-first or test-last. As long as your code is testable and there's no silly bug found because you forgot to write a test, I'm cool with that. Otherwise, if I'm your supervisor, I'll make sure you don't go home until you change your coding style.

Forcing TDD on a company that does not have a good software development practice and culture tends to be one way to ensure that, going forward, things are not going to get worse. Often the problem is the excuse of "I can't test this because there's a hard dependency on the database". So if you're a consultant hired to turn around a company with 20 or so developers who lack discipline, this is one way to make things a little bit better, one bug fix at a time.

Would you suggest a double standard? The less disciplined programmers must practice TDD while the more experienced programmers don't have to?

Isn't it true what they always say? People are the problem.


It's not a straw-man argument. I've seen countless developers and managers who believe... well, pretty much they believe that unless you're in love with TDD you suck as a programmer, period. This arrogant attitude is very real. In fact, it's commonplace.

Conversely, some people believe that anything that is a direct result of adhering to TDD is automatically well designed, regardless of other metrics.


I am hoping for a more detailed discussion of the cases in which writing tests first hinders productivity.

Being dogmatic and using One True Approach for everything is not good, but if we are to advance, we need to be able to define specifically what is good and what is bad. Then fruitful discussion can follow.

A blanket accusation like this helps no one and is just crying for attention, IMO.


Hey, thanks for your comments. Perhaps I should have made more of that in the article. Specifically, I find TDD to be completely unsuitable for any process where you're not 100% wedded to what the result needs to look like at the outset.

Writing tests first is great for - for example - writing a regression test for a bug you've found. Or for adding a small piece of well-defined functionality to an existing interface.

But that's writing tests first in a specific instance, not Test-Driven Development, which is ALWAYS writing tests first.

Writing tests first falls down as soon as your ideas on what needs to be implemented may change as implementation progresses. You find yourself reluctant to change your interface or implementation because dang-it, the test said it should work that way, and now you don't want to change the test. Or you wrote a test for a simple piece of sub-functionality, and then it turns out that relied on an architecture you don't want to use and ...

And if you're not a senior dev with a bucket load of experience, you're unlikely at that point to want to go back and change things.


I hope you can appreciate that TDD is not terrible for all developers and that possibly there is something useful in there for some people. As you imply, there are many paths to quality software. You risk being as guilty as those you criticize if you claim that people who produce decent code in a different way than you believe possible are "less than".

TDD definitely has been _very_ positive for my software and there really is no comparison between what I did before and what I've done since as far as bug rates, customer satisfaction, and time to completion. I'm sure you'd agree that those are positive outcomes.

I appreciate you've had lousy examples and that maybe even the majority of TDD practiced is a lousy example. However, the examples you give and what you're describing from these consultants is not remotely the way I've done TDD for the last 5 years. I work extremely hard on refining the concept of testing and when and where to apply different approaches.

As for "you're unlikely at that point to want to go back and change things": I make way more changes in my code and tests with TDD than I did without. Granted, I did spend a year learning how to write tests that hit the right boundaries and wouldn't break everything when I changed the architecture significantly. I think any developer needs to dive deep into HOW TO TEST anyway, test-first or after.

Also, I don't know why anyone would have a hard time deleting code that no longer applied. There is always source control if you really feel like you need it again.

As for "and then it turns out that you relied on an architecture that you don't want to use":

Many TDD practitioners have a concept called "spikes": code that you write without tests to get a good idea of how a particular algorithm will work for you and what approach you want to take. However, it's throwaway code that is often very procedural and is more just thinking through an issue. This minimizes some of the shifting-architecture pain you're referring to.


My experience is as follows.

1) TDD encourages testing the smallest unit of functionality possible

2) When you are first developing a large piece of functionality from scratch, you often need to upend the entire architecture a few times as you solidify the design. These revamps are the type of thing that cannot be done incrementally as a large number of small changes, and doing design on paper in advance only goes so far. These architectural revamps are best done early, as they are far easier to do when you have 1000 lines of code than when you have 100000 lines of code.

3) Smallest-unit-of-functionality tests generally get thrown away in system-wide revamps.

4) Therefore, the cost of these high-level revamps is greatly increased if you need to throw away all these tests every time, hindering productivity.

5) Because of that, I write tests afterwards, once I am comfortable the architecture has stabilized.

TDD is a methodology, not a religion. Do what works best for you and the specific project you are working on.


>which case writing test first hinders productivity.

It kills the flexibility of your code way too early in development. At best it doubles the inertia against any code change. If you know the ideas of "build one to throw away" or "you don't know what you're building until it is built", you understand that you can design as much as you'd like, but at the end of the day you don't know how good your implementation design is until you actually implement it. Writing tests first locks you into implementation details before you've even used the implementation enough to know if they're a good idea.


Good point about doubling inertia, never considered it that way before.


The problem is that most TDD advocates are so dogmatically strong, that blanket accusations seem to be one of the best ways to trigger debates.


These types of posts come up every so often, and I don't really understand why. Sure, the main point is valid (for some people). I just don't understand the vitriol behind the author's post. So what if someone thinks TDD is the bee's knees? So what if they think less of YOU because you don't drink the same Kool-Aid?

The author won't convince any TDD disciples to change their ways by attacking them. A reasoned post that objectively weighs the pros and cons might. But even if the author could effect change, is it really the case that no one would ever benefit from TDD? It might certainly be used as a crutch by some developers, but if it helps them develop better software, what's the harm? That's their issue to deal with, not anyone else's.

The goal is to write good programs. There are many paths there, and TDD, anti-TDD, or part-time testing whenever you feel like it are all viable. If TDD can help abstract away a level of thought with regard to testing, speed up the process, and help someone write better code, then that's great.


I appreciate the article may be a little polemic for some tastes, but you know, I have often had my mind changed in this way before. I've read something deliberately provocative that enraged me briefly but got me thinking, and a week later I realized I'd started to come around to a different perspective.

Thanks for the feedback. The next article planned is a detailed discussion of how to retro-fit an automated test-suite to a web-app that doesn't already have one.


You're light on actual counter-examples and heavy on words like "shills", "hocus", "criminal", and "idiocy", so that most of your argument has to rely on your ability to paint TDDers as stupid or evil. The vast majority of thinking people will be unmoved by that.


There is something ironic about an article that begins: When I hear someone start advocating Test-Driven Development as the One True Programming Methodology, that's a red flag, and I start to assume you're either a shitty (or inexperienced) programmer, and carries on to say that The whole concept of Test-Driven Development is hocus, and embracing it as your philosophy, criminal.

To summarize, the author seems to have the One True Programmer Evaluation Methodology, and is able to determine that someone is shitty, inexperienced, or a criminal from their answer to a single question. How is this different from someone who says that they can determine whether your project is shitty from the answer to the question “Do you use TDD?”

My feeling is that the author has a great deal in common with people who say that X is the One True Programming Methodology, for any value of X, including TDD.


I'll be honest and say the article started with the most inane troll about people who run their own mail servers and have beards, and was quickly changed when I realized people weren't reading beyond that ;-) Clearly my sense of humour needs work!


Well, I for one thought it funny (I read your post via Zite at first), and the rest of your article was a nice take on the TDD philosophy and its adherents. It should merely be another tool in one's tool chest. Maybe a less confrontational headline would have produced a more measured response.


Why does the author pick out TDD here? All he says is that "People that say TDD is perfect and the best technique in all situations are wrong."

Once you establish that 'there are no silver bullets', you don't have to go around to every bullet and say 'that bullet isn't silver, either!'


Because I teach people how to write tests effectively, and run in to this TDD silver bullet bullshit over and over :-)


"Testing is a tool for helping you, not for using to engage in a 'more pious than thou' dick-swinging my Cucumber is bigger than yours idiocy. Testing is about giving you the developer useful and quick feedback about if you're on the right path, and if you've broken something, and for warning people who come after you if they've broken something. It's not an arcane methodology that somehow has some magical 'making your code better' side-effect..."

I liked this part; it made me laugh. I also get red flags whenever I hear "X is the one true way to do Y" statements. Personally, I don't think that there is much in life that is so black and white that there could be "one true way" to do anything.

My understanding of the original agile methodologies was that each team/company needed to do what worked for them. Sure, you could follow some book to the letter and hire some certified SCRUM master but odds are, some deviations from the written plan are needed for the best results.

Either way, I'm generally a fan of TDD, test automation and metrics ... where they make sense and where they aren't going to be abused. Use tools and methodologies that make sense for the current situation, not just because some book/person tells you that you'll fail unless you use them.


Nothing but flame-bait here. The OP obviously doesn't understand Test Driven Development or why people swear by it.

It's first and foremost a design tool. Secondary to that it tests functionality of your code.

TDD leads to better designed software. Period. No it's not the only way to design software, nor should it be the only tool used when designing the software you're writing, but it will show problems with any design you've got and help you fix them.


It's your "Period" that reveals something to me. Like there's no better way than TDD to enforce good design. Like it's the ultimate thing you can do, and beyond that there's nothing. Like higher expressivity or code generation can't even compete with TDD in that regard. Like if you're not doing TDD, your software must have a bad design.


Automated testing is the raison d'être of the whole blog. TDD is a methodology that also involves automated testing.

If you're going to call it flame-bait, could you expand on some of your ideas or address some of the points made in the article?


>It's first and foremost a design tool.

Care to expand on that? Other than forcing low coupling, I don't really see what TDD does for the design of the code, especially because it emphasizes unit testing, and those are pretty small in the scope of overall design. And even the low coupling is debatable, a good test framework is going to have enough hooks to be able to isolate the unit under test to the point that the surrounding design doesn't really matter to the test.


I can put in 2 cents here. It makes you think about your API up front. Can you do that other ways (write your documentation first, etc.)? Of course, so use the approach that works for you. TDD just happens to be my preferred approach.

As for "especially because it emphasizes unit testing": that doesn't mean TDD frowns on integration or acceptance testing. In fact most practitioners I know write high-level end-to-end tests for the core functionality.


"Even if you write only some tests first, if you want to do it meaningfully, then you either need to zoom down in to tiny bits of functionality first in order to be able to write those tests, or you write a test that requires most of the software to be finished, or you cheat and fudge it."

I think this guy doesn't understand the concept of mocking and stubbing in your tests.

TDD works great in bottom up development ("zooming down to tiny bits of functionality"), but it also works great in top down via stubbing.
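
Top-down TDD via stubbing might look like this sketch (hypothetical names, using Python's `unittest.mock` for the stub):

```python
from unittest import mock

def build_report(fetch_sales):
    """High-level unit under test: summarizes whatever the data layer returns."""
    sales = fetch_sales()
    return {"total": sum(sales), "count": len(sales)}

def test_build_report_with_stubbed_data_layer():
    # The real fetch_sales (database, API, ...) isn't written yet;
    # a stub stands in for it so the top-level logic can be driven by tests.
    stub = mock.Mock(return_value=[10, 20, 30])
    report = build_report(stub)
    assert report == {"total": 60, "count": 3}
    stub.assert_called_once()
```

The test pins down the high-level behaviour and the shape of the lower-level interface before that layer exists, which is the "top down" half of the argument above.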

I feel like his complaint isn't against TDD, but about badly written tests. I think we can all agree: Crappy code bad. Nice code good. Both in tests and in production.


Still, starting with testing helps so much to create a modular design, because it forces you to write clear, clean interfaces between modules.

My workflow is usually:

  1. Write the tests and design the module/class/whatever at the same time.
  2. Code it. (Repeat)
That said, I rarely write unit tests in TDD; only tests providing a high-level design view (such as functional or integration tests).


And perhaps I didn't make my points clearly enough in the article; I was really trying to differentiate between the Test-Driven Development dogma as a panacea for all development problems ever, and developers writing tests first where it makes sense to do that.


>And perhaps I didn't make my points clearly enough in the article

You made them more than clear. People just want to argue with the headline. Good post, BTW; I'm subscribing to your RSS feed.


I never understood why people can't write well designed modular code without writing the test first.

Tests should never dictate the design of an application.


:) Flame-bait. Seems that Hacker News has become a place for such things. The blog where this article was posted was set up so the author could publish there while hiding their identity. If you want to make fun of 'agile consultants', go ahead, but put your name on it.


"Posted by Peter Sergeant at 02:30"


Yeah, so you don't have a blog, yet you have an opinion :). You can down-vote all you want; the only thing your post shows is that you are struggling with test-driven development. You should learn it well, and then you will be in a position to say which part you don't think is useful. And to be constructive, I would suggest that instead of starting with unit tests, you go with integration testing first and just do that initially; you might enjoy it more.


This is awfully generic. Almost a template.

"You can down-vote all you want, only thing your post shows is that you are struggling with insert development practice here development. You should learn it well, and then you will be in position to say what part you don't think is useful."

The blog post, on the other hand, is not generic. I've had similar experiences: in many cases TDD makes trivial coding issues influence overall application design (in a bad way) instead of application design driving those trivial decisions. I can even tell you when this happens. It happens when the complexity in the application comes mainly from structuring large chunks of mostly trivial functionality. I've had pretty positive TDD experience with other type of applications - ones where complexity comes mainly from some data-processing algorithms, while the overall structure of the app is fairly simple. (Think web app vs language parser.)


I would like to respond to the comments on this post. It is not generic; it is how the mind works. If something is hard, the mind starts finding excuses. Once you master it better, suddenly the excuses are gone. I read the post and it doesn't sway me one bit in my view.

More experienced developers usually say that they find some level of testing more useful, while another level is more tedious and doesn't feel as useful. Consultants use TDD because it gives more predictable results. There are cases when you need to do a spike first to discover how to go about a problem, which doesn't mean you don't test; you just do that when it is appropriate. I think what you suggest as an example is exactly that.

Also, I don't believe you should write tests every time. If you are a startup and want to do a quick and dirty prototype, I don't think it is a bad idea to skip testing. That's not to say startups don't do testing; they are the ones who need it more than most. You should just be flexible in your approach to coding and software development.




