In TDD Russia, developers write production code to cover their tests (uselessdevblog.wordpress.com)
11 points by gsky 8 months ago | 4 comments



Let's be honest, though. Most developers who've had a test-coverage goal imposed on them still write the code first, then write just enough test code to exercise all of it. Maybe, if they're clever, they'll delete a little hard-to-exercise code here and there, like some error checks. No need to test whether a null-pointer check ever fires if you just delete the check.
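
The move looks something like this (a toy sketch with hypothetical names; any language works the same way):

    # Before: the null check is a branch no test exercises,
    # so it drags the coverage number down.
    def total_price(cart):
        if cart is None:
            raise ValueError("cart is required")
        return sum(item.price for item in cart)

    # After: delete the check and the metric goes green, while a
    # caller that passes None now dies with a bare TypeError instead.
    def total_price(cart):
        return sum(item.price for item in cart)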

Come to think of it, given how often catastrophic failures can be traced to untested, broken error-handling code[1], maybe it's better if the code flat-out blows up rather than trying to limp along after an error it can't really do much about.
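
For illustration, a minimal sketch (hypothetical config loader and file name, not from the paper):

    import json

    def load_config(path):
        with open(path) as f:
            return json.load(f)

    # Limping along: swallow the error and substitute a default. The
    # program keeps running on an empty config and misbehaves somewhere
    # far from the actual failure.
    try:
        config = load_config("app.json")
    except OSError:
        config = {}

    # Blowing up: let the OSError propagate. The process dies right at
    # the failure, with a stack trace pointing at the real cause.
    config = load_config("app.json")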

1 https://www.eecg.utoronto.ca/~yuan/papers/failure_analysis_o...


I'm somewhat conflicted on principled TDD. Often when writing code I find issues with the specification, sometimes serious enough that interfaces need to be redesigned. Even more often, problems that were assumed to be simple turn out to be far more complex, with edge cases no one thought of when writing the user stories, so substantial refactoring has to be done. In the more extreme yet still frequent cases, that means throwing out not only the implementation but also the tests. If you write the code first and test after, you throw out maybe a quarter of the code you would in TDD (in Solidity, tests are usually several times longer than the implementation).

Any thoughts on how to manage these risks?


To me, thinking of tests as only code makes TDD more difficult. You presumably started from a desired behavior, something that didn't exist or didn't work before you began. Whatever way you know it isn't working, before you've even started to code, counts as a test.

In your examples you discovered a specification bug that required rewriting. That's a failure to "measure twice, cut once", and it was likely upstream of you. I find that specifying the integration/very-high-level tests first makes those bugs easier to discover. As for writing tests first and then throwing them out: that happens a lot when you write lots of unit tests against low-level functions, rather than starting high and only moving low once you're sure those new functions are needed.
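
A sketch of what "starting high" can look like (hypothetical summarize function, pytest-style):

    # Written before summarize() exists: a top-level test that pins
    # down the behavior the feature is for, not its helpers. The first
    # run fails with a NameError, which is the "red" proving the test
    # actually measures something.
    def test_totals_by_month():
        csv = "2024-01-05,10\n2024-01-20,5\n2024-02-01,7\n"
        assert summarize(csv) == {"2024-01": 15, "2024-02": 7}

    # Unit tests for the parsing/grouping helpers wait until this
    # passes and you know which helpers were actually needed.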

In other cases like "problems that were assumed to be simple turn out to be far more complex", that's… how software development goes. It's a "wicked problem"[1]. I don't think any ____-driven development can help with that. The Agile Manifesto offers a solution, though!

[1] https://en.m.wikipedia.org/wiki/Wicked_problem


How does agile fit into TDD? Thinking about it while writing this, perhaps the answer is to write a set of specifications deliverable within a two-week period, write tests based on those specs, deliver code that passes those tests within the period, then review. That could be a smart way to handle projects, and one I'd consider trialing, especially since longer-term waterfall schedules don't work anyway, even though non-software stakeholders demand them.



