Ask HN: Do you write tests before the implementation?
248 points by MichaelMoser123 20 days ago | 317 comments
I mean how many of you stick with this test driven development practice consistently? Can you describe the practical benefit? Do you happen to rewrite the tests completely while doing the implementation? When does this approach work for you and when did it fail you?



No. Tests are like any other code. They incur technical debt and bugs at the same rate as other code. They also introduce friction to the development process. As your test suite grows, your dev process often begins to slow down unless you apply additional work to grease the wheels, which is yet another often unmeasured cost of testing in this fashion.

So, in short, I view tests as a super useful, but over-applied tool. I want my tests to deliver high enough value to warrant their ongoing maintenance and costs. That means I don't write nearly as many tests as I used to (in my own projects), and far fewer than my peers.

Where I work, tests are practically mandated for everything, and a full CI run takes hours, even when distributed across 20 machines. Anecdotally, I've worked for companies that test super heavily, and I've worked for companies that had no automated tests at all. (They tested manually before releases.) The ratio of production issues across all of my jobs is roughly flat.

This issue tends to trigger people. It's like religion or global warming or any other hot-button issue. It would be interesting to try to come up with some statistical analysis of the costs / benefit of automated tests.


Thankfully we don't have to. Hillel Wayne links to a few of the studies that have been done on TDD [0]. It doesn't have a conclusive effect on error rates in software. While I do write tests before I write code (more so in a dynamic language without a strong, static type system), it appears there isn't any correlation with a reduced number of bugs.

But I still do it. And I think that's because, while I may prevent a few obvious errors with the practice, the benefit I get is from the informal, automated specifications of the units of a module that I'm building. I also get the benefit of a continuous refactor cycle. Less experienced developers wonder how I write clean, simple code: I refactor, a lot, all the time, because I have tests checking my code.

If it's functional correctness we're after, TDD is only one small piece of the puzzle. A strong, formal specification will go a lot further toward ensuring correctness than fifty more unit tests.

[0] https://www.hillelwayne.com/talks/what-we-know-we-dont-know/


Don't you think that if tests are effective, they may let you go faster?

To put it another way, it is not surprising that error rates are similar since, presumably, you keep coding till you produce an error, fix it and then repeat.

The question is how much error free code you wrote between the errors. If testing -- or any technique for that matter, like relaxing, jogging, discussing, planning -- reduces errors, it will manifest as more functionality per error.


"Effective tests" are a bit like the sufficiently smart compiler, or No true Scotsman fallacy.

Tests either take a long time to run because they're integration tests in disguise, or they mock and stub module boundaries and are an impediment to design refactoring because lots of tests are invalidated.

I tend towards favouring unit tests for leaf modules, especially those that get reused a lot, and fewer but rich (and slow) integration tests for a whole stack of functionality, end to end, where the test doesn't need much rewriting even if the design changes substantially.

Unit testing everything usually undertests the composition of modules. You can fool yourself (especially using code coverage as a metric) that unit testing makes it easier to have complete testing, but actually the space of code under composition of units is much larger, and unit tests don't really test that at all.


Depending on your use of the term "unit" and "integration" (testing taxonomy is bad), I tend to do the opposite, but with the addition of acceptance (user feature testing) to tie the whole thing together.

By this I mean that I unit test (as driven out by Discovery Testing, i.e. mocking collaborators) down the dependency graph until I hit a leaf node, which I will integration test against real external systems, or black-box unit test if the logic is self contained.

This sort of drives out a world where slow tests are kept to the leaf components, or the e2e acceptance, and everything else is quick with confidence in all the paths.

It certainly does lead to a world where refactoring is more costly because of the burden of these mocks, but this isn't trying to follow Chicago TDD, where the refactor step is part of the cycle; it looks more like London TDD, where the testing puts pressure on the design itself, so refactors are far less common.


> (testing taxonomy is bad)

So true, it is terrible!

What I say is:

Unit tests - small tests that show the intent of one line of code or small set of lines, what the developer thought it was meant to do - can't prove correctness of an entire process

Integration tests - test one specific process works (login success, login fail, etc.) starts to prove correctness of one part of an application

Acceptance tests - larger, test that processes interact together and prove that multiple processes work

Production tests - (roll out to x users and monitor or something similar) - prove that the entire application works, in production.

Tests go from: Small to large, fast to slow, no risk to complete risk


I haven't written tests as part of my regular workflow in... years now, but one of my earlier mind-blown / level-up moments as a developer was when I wrote tests, which forced my code to actually be clean and conform to a load of best practices that weren't in my mind's eye yet.

The other day I was doing a part of Go Blueprints which touched upon TDD; one thing it highlighted was that if you use TDD you should only write code until your test is green, no more - this (supposedly) keeps you from writing too much code. It has some merits. I'll need to do a lot more TDD and co to actually see it though.


I take the perspective that tests - and particularly unit tests - are a living specification for the software that you know cannot be out of date. If you write tests in a way that you understand the business purpose of the system, you are providing a significant part of the documentation while also providing a regression suite that is automated.

There are other ways to handle this. You can have a design document that is continually updated. Some teams can do this well. You can use literate programming a la Knuth. (Many code bases have abbreviated forms of this.) You can assume that the small number of developers who have been around for these decisions will never leave the company and forgo all of it. (I do not recommend this.)

So, what is your preferred alternative to unit tests as a specification? (And if your set of unit tests doesn't provide clarity to the source code, that may be a source of the frustration.)


I'm wary of "testing theater" (in the spirit of "security theater"); but I've come to think of testing as similar to two-factor authentication: it doesn't guarantee correctness, but it does reduce the likelihood of bugs and regressions, especially during refactors.

I think the other benefit of testability has nothing to do with the tests themselves, but rather the discipline of writing testable code: in general, writing code that is easy to test will tend to be higher-quality and easier to reason about.

The thing I'm not fully sold on is mocking, which ends up being a huge timesink, and may or may not improve reliability since you're testing against a fake system and not the real thing. I vastly prefer a combination of small functional/unit tests, and E2E integration tests in a real environment (cypress/etc); the uncanny valley in between has a poor ROI IMO.


This is a good point. Just like with anything in engineering understanding the process is more important than just following some steps because you think you should.

When developing software there are many steps before testing even occurs to catch problems, and the earlier you catch problems the better. Coding standards, having requirements, and peer reviews all are important too.

I find tests useful for having a "checklist" of things to do before releasing a new build. In robotics, automated tests are especially helpful since there is a lot of code which only runs under certain physical conditions that are hard to recreate manually (e.g. in a low battery condition the robot should do this behavior). But just having the checklist is more important than how you execute it.
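
A toy sketch of that low-battery case (the class, threshold and action names are made up; the physical condition is simulated in software rather than on real hardware):

    class Robot:
        def __init__(self, battery_level):
            self.battery_level = battery_level  # 0.0 .. 1.0

        def next_action(self):
            # The behaviour that's hard to trigger on real hardware:
            # below 15% battery the robot should head back to its dock.
            if self.battery_level < 0.15:
                return "return_to_dock"
            return "continue_task"

    def test_low_battery_sends_robot_to_dock():
        assert Robot(battery_level=0.10).next_action() == "return_to_dock"

    def test_healthy_battery_keeps_working():
        assert Robot(battery_level=0.80).next_action() == "continue_task"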


> This is a good point. Just like with anything in engineering understanding the process is more important than just following some steps because you think you should.

Music to my ears. We've all, at some point or another, been followers of the cargo cult. But taking a step back and evaluating things from first principles greatly helps get one's self out of being stuck in a rut. At the end of the day, there are no silver bullets. Everything has trade-offs; just pick whichever is the least bad :)


Spot on with cost, as with everything you have to be pragmatic.

Tests are great for:

* High risk items (large consequence when it goes wrong)

* Documentation

* Weird unintuitive things

We had a C# project recently that needed to detect changes between a DTO's properties. At implementation time, all the comparisons were done over value types, but if someone later added a reference type that didn't properly implement equality, this would silently fail (likely for months). Good case for adding a test that ensures the change detection works for all properties.
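
Not C#, but a rough Python analogue of that kind of test (the DTO and helpers below are invented for illustration): mutate each field in turn and check that the change detection reports exactly that field, so a field whose type has broken equality shows up immediately.

    from dataclasses import dataclass, fields
    import copy

    @dataclass
    class CustomerDto:
        name: str
        age: int
        # A later field whose type lacks proper equality makes this test fail,
        # which is exactly what we want to catch.

    def changed_fields(old, new):
        # Field-by-field diff between two DTO instances.
        return [f.name for f in fields(old) if getattr(old, f.name) != getattr(new, f.name)]

    def _different_value(value):
        # Crude helper for the sketch: produce something unequal to the input.
        if isinstance(value, str):
            return value + "_changed"
        if isinstance(value, int):
            return value + 1
        raise TypeError(f"add a case for {type(value)}")

    def test_change_detection_sees_every_field():
        original = CustomerDto(name="Ada", age=36)
        for f in fields(CustomerDto):
            modified = copy.deepcopy(original)
            setattr(modified, f.name, _different_value(getattr(original, f.name)))
            assert changed_fields(original, modified) == [f.name]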


I used to be a TDD zealot. In recent years, I've taken a much more selective approach to test coverage. I typically focus my testing on pieces of code that contain business logic whereas I used to test everything. I've also found automated UI testing is not worth the squeeze and I've had better luck just looking at impacted objects and manually testing those.

I'd be interested to hear if anyone has automated UI testing tools in place that are easier to write test cases for than to just do the manual testing.


I'd recommend taking a look at Cypress. https://www.cypress.io/


I’ve had really good luck with react UI testing using Jest and React Testing Library and throwing in UI screenshot testing.

At my previous job we eliminated selenium from our testing suite. We found that good UI unit and integration testing caught 99% of the bugs and the bugs that did make it through most likely wouldn’t have been caught by selenium so the added time wasn’t worth it.


IMO, snapshots, like Jest's for React, are one of my favorite ways to test UIs. They require minimal effort and address the major points of testing a UI.

Actual pixel "perfect" UI testing should still be done from time to time in a real browser. It's nearly impossible to properly capture some of the layout differences/bugs that can come simply from a new browser version.
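
For what it's worth, the underlying idea is framework-agnostic; a bare-bones "golden file" sketch, with a hypothetical render function and no Jest involved, looks roughly like this:

    from pathlib import Path

    SNAPSHOT_DIR = Path(__file__).parent / "snapshots"

    def render_greeting(name):
        # Stand-in for the component/render function under test.
        return f"<h1>Hello, {name}!</h1>"

    def test_greeting_matches_snapshot():
        SNAPSHOT_DIR.mkdir(exist_ok=True)
        snapshot = SNAPSHOT_DIR / "greeting.html"
        output = render_greeting("Ada")
        if not snapshot.exists():
            snapshot.write_text(output)  # first run records the snapshot
        assert output == snapshot.read_text()

An intentional change means re-recording the stored file, which is the same explicit review step the snapshot frameworks give you.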


We ran screenshot comparisons and looked for a percentage match, because 100% is almost impossible. Where it saved us over snapshots: we were using MUI, and they introduced a breaking UI change that our snapshots didn't catch, but the screenshots did, because the menu was no longer hidden even though our code hadn't changed.


Maybe you can help me understand this.

Since you don't write as many tests, that means you're not actually testing all your code branches, because tests incur technical debt after all. So does this mean you test every single branch manually? Just don't bother with it at all? Do you just have a few integration tests, and when they break you spend a good chunk of time figuring out which logical branch broke?

What happens if you make a typo, comment out a piece of code and forget to uncomment it, etc.?

I'd love to write fewer tests but don't know how to do it.


Yeah, as soon as code has more than two "real" branches, I don't trust myself to manually test them all. One of them will be broken quickly if I keep hacking in a particular branch. (This is, secretly, also an argument for writing code in sufficient generality to avoid this phenomenon in the first place.)

I also never trust a test that passes the first time I run it. I am both terrible at writing correct code, and completely normal in that regard.


I believe he's using the term branch differently than you: 'alternate code path'


No, alternate code path is exactly what I mean. What's the functionality I'm coding in support for today, in addition to the stuff I was supporting yesterday? How do I know the stuff from yesterday still works? TESTS!


I write tests when I know the answers beforehand to some complicated code.

A good example of this is writing some code to tell if a point is inside a triangle or not. Setting up the test case is easy with three points for the triangle, a point or two inside, points outside, etc... Writing the code is simple enough, but possibly prone to flipping a sign for a slope, or maybe translating the equation wrong.
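
A minimal sketch of what that looks like, assuming a sign-of-cross-product approach (the coordinates are just illustrative):

    def point_in_triangle(p, a, b, c):
        def cross(o, u, v):
            # z-component of (u - o) x (v - o); its sign says which side of
            # the edge o->u the point v falls on.
            return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])

        d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
        has_neg = d1 < 0 or d2 < 0 or d3 < 0
        has_pos = d1 > 0 or d2 > 0 or d3 > 0
        return not (has_neg and has_pos)  # all on the same side (edges count as inside)

    def test_point_in_triangle():
        a, b, c = (0, 0), (4, 0), (0, 4)
        assert point_in_triangle((1, 1), a, b, c)       # clearly inside
        assert point_in_triangle((0, 0), a, b, c)       # vertex counts as inside here
        assert not point_in_triangle((5, 5), a, b, c)   # clearly outside
        assert not point_in_triangle((-1, 2), a, b, c)  # outside, left of the triangle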

If I'm writing a web app, I just don't bother to write them before hand. I mostly code until I get what I want and then write the test. I think of this as "nailing it in place". I want to make sure through the test that this code continues to produce what I expect. I'll write as many tests as I feel are necessary but probably not explore every branch.


Most of the typos and obvious errors are caught by the IDE thanks to intelligent code completion systems and linters. If you're writing in a compiled language, the compile step might catch some of the more obvious errors too.

I test my current branch manually while developing and do sanity checks when it's merged somewhere. I do write tests for some cases, but rarely, only to save time when I have to test against a wide range of input parameters.


> What happens if you make a typo, comment out a piece of code and forget to uncomment it, etc.?

Thank god this is handled out of the box by any compiled language. I can't imagine myself writing tests to catch non-logical bugs.


It's weird you bring up global warming when there is scientific consensus that it is not only happening but man made. Doesn't seem like a good analogy for something you want more data on.


I believe they're using it as an example of a thing that people have very strong, immovable opinions about, making it difficult to discuss. It just so happens that in the case of climate change one side is just objectively wrong, and its opinions are factually incorrect... but that just underscores the OP's point that it's impossible to discuss in public with them. They hold strong opinions in spite of overwhelming evidence against them, indicating strong feelings that will dominate any conversation you try to hold.


I accept the reality of global warming (I'm almost a single-issue voter on the subject), but I look at it from a Bayesian point of view: if we compared two universes, one where AGW was real, and one where the scientific consensus was mistaken (not exactly unprecedented), popular opinion would probably vary by no more than 1% between the two.

Humans tend to start with moral intuitions and tribal affiliations, and then cherry-pick data to support them; that irrational force is at work even in the cases where tribal values-signaling happens to align perfectly with reality.


Have you ever wondered why it became partisan to begin with?

Lots of other issues are not partisan like "smoking causes cancer", everyone generally agrees that it does, even many years ago before the "science proved it".

There is some science that is so obviously manipulated by power and money everyone knows something is wrong, but can't do anything about it.


Yes, the perverse incentive of the oil lobby is transparently staggering. Even if one is charitable, that the lobbyists believe their own talking points, "it is difficult to get a man to understand something, when his salary depends upon his not understanding it".

That said, it didn't help the politicization of the climate for Al Gore to have become its face, thus associating its brand with Democrats, for all his pleading that "this issue is moral, not political". He meant well, but it was painfully naive.


There are success stories in the past about how things have improved. And it seems like the only way was when leaders didn't frame the issue to focus on themselves or their party, but about the people and the country.


I believe in this case the partisanship comes less from the topic itself; if you look into the proposed solutions, they all tend to shift money and power in a single direction and don't have much to offer in regards to actually changing the climate.


I suspect that's a product of anything that becomes political: whether it's climate change or border security, the incentives are more towards theater than results, creating an attack surface for economic exploitation.

At this point, I think the most efficient (and least corruptible) solution is a carbon tax + dividend [0], AKA a Pigovian tax [1]. It's not as though markets don't work; the problem is that externalities, ecological ones in particular, are not priced into the system, meaning we're just passing the bill to our descendants, at a phenomenally high interest rate.

https://en.wikipedia.org/wiki/Carbon_fee_and_dividend

https://en.wikipedia.org/wiki/Pigovian_tax


Do you think corrupt governments, that also happen to be the largest polluters in the world will participate with a carbon tax honestly and in good faith?


Yes, I suppose that is a risk; as with our (allegedly) progressive-taxation system, a carbon tax could invite loopholes and creative accounting, both in public and private sectors. It would take a significant groundswell of grassroots political will to apply a carbon tax fairly and universally; perhaps the dividend would make it a little more salient and less abstract, making it easier to calculate that company/department X is taking $Y out of each citizen's pocket, as opposed to more convoluted federal spending, regulatory capture, etc.


> It would take a significant groundswell of grassroots political will to apply a carbon tax fairly and universally;

This is where carbon tax falls apart. There is ZERO incentive for poor people in high-polluting countries to make their lives worse for the benefit of the world. None.

Therefore, carbon tax will only ever be paid by the "rich", but then we /know/ the rich dodge taxes. So the carbon tax falls on the middle class in the rich countries.

Now the politics are more clear. Is it any surprise that the middle class don't want carbon taxes? The only ones who do are the moderately wealthy (it doesn't affect their lives much), the poor (they don't pay any taxes) and the rich (actors, politicians, etc...).

This is why Trump is popular: a whole class of people is being dumped on. It's not a conspiracy from the oil companies; it's ideology based on fantasies that dump all the costs on one group of people.

A new plan is needed, carbon tax will _never_ work.


It's a legitimate concern; but the majority of carbon-producing activity is on the production side, and can be taxed from the producer, or even as a "carbon TCO" VAT on the consumption side. While tax dodges are a risk (particularly in the open, as big corps lobby for exceptions), this makes it much more universal than something like tax havens for capital gains by a small number of the uber-wealthy.

I would consider the world-wide implications to be a bigger concern: if the tax is too high, some producers might feel incentivized to move operations to a country without a carbon tax. Though I generally think tariffs are a bad idea, a carbon tariff that essentially covers the difference (and goes to the same dividend pool) might not be the worst idea.


From what I understand about TDD, it's more about getting the design right than testing for bugs.


TDD is also a design tool. I agree for newly written code, yes.

Writing a test before fixing a bug reveals the bug and provides proof that the fix addresses it adequately. So in this context, it's less about the design (since it's already there) and more about fixing and proving that the fix works.


TDD is about getting design right, but it's also about ensuring that the design's functionality is defined, demonstrated and protected from regression.


I've honestly thought I was crazy for thinking that required tests are ridiculous. Thank you for confirming that I'm right in my beliefs.

At least 50% of my last job was writing tests, and the snail's pace of their dev process was the main reason I left.


> Anecdotally, I've worked for companies that test super heavily, and I've worked for companies that had no automated tests at all.

How did refactoring work at that latter company?


Those companies were mostly C#, with a bit of Ruby and JavaScript. The C# type checker generally made refactoring a non-issue. In the case of JavaScript and Ruby, it was not much harder. Mostly find and replace. Sometimes a bit more work, depending on the complexity of the change. In general, the code bases were pretty well designed and modular enough that refactoring was pretty simple.

I tend to be a repl-driven developer, and I also use the product as I code. If you've ever watched Jonathan Blow programming on YouTube, that's similar to my process.

I find the majority of issues as I go. The bugs that slip through tend to be the surprising edge cases where I think I'm just modifying a single vertical, but there's some bleeding over into other verticals which I'm not using while developing. These sorts of bugs can usually be caught by a handful of basic integration tests.


very good point.


The majority of the time, no.

There are a couple of circumstances I often do, though.

The first is when fixing a bug - writing the (red) regression test first forces me to pin down the exact issue and adds confidence that my test works. Committing the red test and then the fix in two separate commits makes the bug and its fix easy to review.

The second is when I'm writing something high risk (particularly from a security standpoint). In this case I want to have a good idea of what I'm building before I start to make sure I've thought through all the cases, so there's less risk of rewriting all the tests later. There's also more benefit to having a thorough test suite, and I find doing that up front forces me to pin down all of the edge cases and think through all the implications before I get bogged down in the "how" too much.


> Committing the red test and then the fix in two separate commits makes the bug and its fix easy to review.

I've done this in the past. Then I started to use `git bisect` and having a red test somewhere in your commit-history is a killer for bisect. So now I tend to include both, the test and the bug-fix, within one commit.


A tip I learned is to commit the failing test but mark it as an expected failure, if your test framework supports that.

That way you can commit the test, bisect works, and the test begins "failing" when the bug is really fixed, and you can commit the fix as well as a one-line change to amend the test from being failure-expected to just a normal test.


I see a test as a declaration of intended outcome. By writing a test to expect an intentional failure (say you have a bug in a divide: “int -> int -> Maybe int” function that causes it to return 0 when you divide by 0 instead of “None”) you are declaring that is actually intentional behavior. So I would never write a test like this - I think I would prefer committing the fix and the new test at once. I don’t see the value in reviewing them separately, because they are related and dependent changes.

Obviously if you view tests differently (eg. as a declaration of current behavior rather than intended behavior) then my argument dies.


Keep in mind the test is written with the correct behaviour and annotated to be failing — in a hypothetical language and framework your test would be

@failing testDivZero() { assertEquals(None, div(1, 0)) }

This expresses both the intent and the reality
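
In pytest, for instance, the same idea can be expressed with a strict expected-failure marker; div below is just a stand-in for the buggy function from the example above:

    import pytest

    def div(a, b):
        # Stand-in for the buggy function: it should return None on
        # divide-by-zero, but currently returns 0.
        return 0 if b == 0 else a // b

    @pytest.mark.xfail(strict=True, reason="known divide-by-zero bug")
    def test_div_by_zero_returns_none():
        # The assertion states the *intended* behaviour. With strict=True the
        # suite starts failing the moment this passes, which is the cue to
        # remove the marker in the same commit as the fix.
        assert div(1, 0) is None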


this is why i love HN. that is such an outstanding idea.


I get around this by squashing the PR commits. Reviewers see the individual commits, and good CI means they can see the test-commit failed and the fix-commit passed, but post-merge it's a single passing commit.


> Committing the red test and then the fix in two separate commits makes the bug and its fix easy to review.

I don't think this is a compelling argument. Normally the test and the fix are looked at together and are already sufficiently separated in the code.

It makes more sense to me to use a single commit so the fix can be described in a single commit message, keeping the history clean.


Keeping them separate makes it easy to ensure the test works: just revert the fix, keeping only the patch for the test, and make sure the test fails.

My workflow tends to be to keep them separate, and write a big description in the merge commit rather than the individual commits (those get a small description of the local change).

There are tradeoffs to both approaches though, and as you mention, it's easier to keep track of the provenance of a fix in the git blame when it's unified in a single commit message.


I totally understand why you would do that and it makes sense, but I personally like to keep them in one commit so that I won't get any problems with an automated git bisect later.


For maintenance & extension, yes.

For new development, no.

I've found that unless I have a solid architecture already (such as in a mature product), I end up massively modifying, or even entirely rewriting most of my tests as development goes on, which is a waste of time. Or even worse, I end up avoiding modifications to the architecture because I dread the amount of test rewrites I'll have to do.


At what point do you consider the definition to switch from new development to extension?


I think there's this myth that TDD is one of the best ways to write software and if you admit you don't do it, you'll be seen as a cowboy and will look stupid. I think the truth is TDD has its pros and cons, and the weight of each pro and con is highly dependent on the project you're doing.

- The uncomfortable truth for some is that not doing any testing at all can be a perfectly fine trade off and there's plenty of successful projects that do this.

- Sometimes the statically checked assertions from a strongly typed language are enough.

- Sometimes just integration tests are enough and unit tests aren't likely to catch many bugs.

- For others, going all the way to formal verification makes sense. This has several orders of magnitude higher correctness guarantees along with enormous time costs compared to TDD.

For example, the Linux kernel doesn't use exhaustive unit tests (as far as I know) let alone TDD, and the seL4 kernel has been formally verified, both having been successful in doing what they set out to do.

I notice nobody ever gets looked down on for not going the formal verification route - people need to acknowledge that automated testing takes time and that time could be spent on something else, so you have to weigh up the benefits. Exhaustive tests aren't free especially when you know for your specific project you're unlikely to reap much in the way of benefits long term and you have limited resources.

For example, you're probably (depending on the project) not going to benefit from exhaustive tests for an MVP when you're a solo developer, can keep most of the codebase in your head, the impact of live bugs isn't high, the chance of you building on the code later isn't high and you're likely to drastically change the architecture later.

Are there any statistics on how many developers use TDD? There's a lot of "no" answers in this thread but obviously that's anecdotal.


Can you give a run down on formal verification?


So say you were writing a sorting algorithm and with unit tests (perhaps with TDD) you wrote tests like:

- sort([]) should produce []

- sort([1]) should produce [1]

- sort([1,3,2]) should produce [1,2,3]

- sort([1,5,6,2,3,4]) should produce [1,2,3,4,5,6]

You would test a few values and edge cases until you were confident it works for all lists. However, you can't be 100% sure that there isn't some list out there like [5,5,5,5,1] that doesn't get sorted properly.

With formal verification, you can actually test it sorts for all possible lists with a mathematical proof. You write a maths proof that shows a property like the following holds:

- For all lists X, the result of sort(X) will be a permutation of X that is sorted.

For example, the proof could take the form of proof by induction where every step in the proof is confirmed correct by the machine (see Coq, Isabelle for more info).

When you were doing maths at school, you probably had exercises where you tried a few examples to see if an equation you came up with might hold in general, and then you would write a proof to show it worked for all possible cases (e.g. with induction, by case analysis). The former is similar to unit testing and the latter is similar to formal verification.

My point was there's a spectrum of how rigorous your tests are. People talk about TDD like it's the holy grail sometimes but it's nowhere close to how rigorous you can be. If you've tried some formal verification though, you'll realise it's far too expensive for most projects. Likewise, TDD doesn't make sense for all projects.

You have to pick your tradeoffs e.g. between time to market vs cost vs ease of refactoring later vs how rigorous the testing is.


> You would test a few values and edge cases until you were confident it works for all lists. However, you can't be 100% sure that there isn't some list out there like [5,5,5,5,1] that doesn't get sorted properly.

For the curious, something like this has happened before - and was found with formal verification: http://www.envisage-project.eu/proving-android-java-and-pyth...


Isn’t this formal verification more for algorithms than implementations? Eg if I have to use Coq to prove my code works, what use is that for my C application? Porting the code to Coq seems to defeat the point of formal verification, I can much better use some property based testing method.


There's lots of options. You can write an implementation in Coq (it has its own functional language you code in), prove it correct in Coq and then "extract" (like transpiling) it to another language like OCaml for executing. There are ways to map C code into Coq to prove it's correct as well. All of this is machine checked. See the seL4 kernel to get more of a feel for it.

Property based testing sits somewhere between regular software testing with unit tests and theorem proving on the spectrum. It's much less time intensive to do but much less rigorous.
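
To make that concrete with the earlier sort example, here is a rough property-based sketch using Python's hypothesis library, with the built-in sorted standing in for the sort implementation under test:

    from collections import Counter
    from hypothesis import given, strategies as st

    @given(st.lists(st.integers()))
    def test_sort_returns_an_ordered_permutation(xs):
        result = sorted(xs)  # stand-in for the implementation under test
        assert all(a <= b for a, b in zip(result, result[1:]))  # output is ordered
        assert Counter(result) == Counter(xs)                   # and a permutation of the input

It's far cheaper than a Coq proof and far more thorough than a handful of hand-picked cases, which is why it sits in the middle of that spectrum.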

My point isn't that formal verification is better than everything. It has its trade-offs, just like TDD.


I do. TDD gives me such a sense of confidence that, now that I'm used to it, it's hard not to use.

> Can you describe the practical benefit?

Confidence that the code I'm writing does what it's supposed to. With the added benefit that I can easily add more tests if I'm not confident about some behaviors of the feature or easily add a test when a bug shows up.

> Do you happen to rewrite the tests completely while doing the implementation?

Not completely; it depends on how you write your tests. I'm not testing each function individually, I'm testing behaviour, so unless there's a big architectural change or we need to change something drastic, the tests need minimal changes.

> When does this approach work for you and when did it fail you?

It works better on layered architectures, when you can easily just test the business logic independently of the framework/glue code. It has failed me for exploratory work, that's the one scenario where I just prefer to write code and manually test it, since I don't know what I want it to do...yet


> It works better on layered architectures, when you can easily just test the business logic independently of the framework/glue code. It has failed me for exploratory work, that's the one scenario where I just prefer to write code and manually test it, since I don't know what I want it to do...yet

Totally with you on this. When I am clueless about the what/how, I throw together a bit of exploratory code and test it manually or in a semi-automated fashion. But once the learning has happened I will use the acquired knowledge to feed a proper TDD cycle and do it properly now that I know a little better.


> Confidence that the code I'm writing does what it's supposed to. With the added benefit that I can easily add more tests if I'm not confident about some behaviors of the feature or easily add a test when a bug shows up.

Isn't this just the benefit of tests, not necessarily TDD?


Yes, it's a welcome side effect of TDD'ing; TDD is more of a design tool. But I have also experimented with writing tests before/after the implementation. Code with tests written first always seemed to be just to the point, and the practice gets you in the mindset of thinking ahead about your edge cases and pinning them down.


Nope. I pretty much always find it to be counterproductive.

Most of programming happens in the exploration phase. That's the real problem solving. You're just trying things and seeing if some api gives you what you want or works as you might expect. You have no idea which functions to call or what classes to use, etc.

If you write the tests before you do the exploration, you're saying you know what you're going to find in that exploration.

Nobody knows the future. You can waste a crazy amount of time pretending you do.


> You're just trying things and seeing if some api gives you what you want or works as you might expect.

I don't do most of my programming this way, because mostly I'm writing new things, not gluing together existing APIs with a tiny amount of simple glue code. But when I do need to characterize existing APIs, I find that unit tests are a really helpful way to do it — especially in languages without REPLs, but even in languages that do have REPLs, because the tests allow me to change things (parameters, auth keys, versions of Somebody Else's Software) and verify that the beliefs I based my code on are still valid.


Yeah, I'm all for unit tests when they're needed. They just aren't the first thing I'll write


You appear to be talking about unit tests in general, while GP was talking about test-driven-development (what the original question is about).


Good point, thanks.


> Be prepared to throw one away. You will anyway.

Write a POC to learn then you can write tests first for production.


You are right, but technically speaking you already did the implementation for the POC without having the tests. So the full answer is "do the POC without tests first, then continue implementation with tests first".


So, in other words... an "exploration phase" which is done prior to writing tests?


No. And also 'do you write a test for everything?'. Also No.

Tried it, ended up with too many tests. Quelle surprise. There is a time/money/cognitive cost to writing all those tests, they bring some benefit but usually not enough to cover the costs.

I'm also going off the 'architect everything into a million pieces to make unit testing "easier"' approach.

I heard someone saying that if you write a test and it never fails, you've wasted your time. I think that's quite an interesting viewpoint.

Reminded of:

"Do programmers have any specific superstitions?"

"Yeah, but we call them best practices."

https://twitter.com/dbgrandi/status/508329463990734848


Everyone's in the confessional booth here admitting dogmatic test-first-test-everything's not so hot in practice, which is nice, but how long until it becomes safe to answer with anything other than some variation of "love testing, it's always great, I love tests, more is better" when asked how you feel about testing in interviews?


I always answer this honestly in interviews and I've always had my interviewers say they feel the same way.

I think maybe it's a thing where you're not supposed to say it, but once you do, it frees the interviewer up to admit it as well and they're happy.


Oh of course you have to be enthusiastic about testing in interviews. Same as with agile!


Actually, in the interview for my current job (a pretty big corporation) they asked me about testing and I flat-out said that I think testing has some benefit in some cases, but the "100% CODE COVERAGE OMG TDD!!!" mentality is actually counterproductive and makes code much harder to adapt.

I think they appreciated my honesty.


>No. And also 'do you write a test for everything?'. Also No.

Same here, and for the same reasons, plus stuff in the backlog that takes higher priority; at least in the finance, gambling and telecom industries that I've worked in.


Do you consider coverage an effective metric? I've got some code that has a test suite which is effectively a bunch of low level driver checks, plus a bunch of common example snippets and checks for eg empty inputs etc.

Coverage gives an idea of how many lines of code have been run, but obviously no guarantees of correctness for those specific lines (eg you can't detect a double negative).

It's worked well for me so far, since the important parts are (a) the hardware communication works and (b) users can process and output data in a way that is correct. No need to obsessively check the intermediate steps if the output is good.


I definitely think coverage is a worthy metric to track. It can provide meaningful information about the "doneness" of your tests. It shouldn't drive testing though, and especially, you shouldn't write your tests specifically "to get coverage". Yes, lots of people do this in environments where "getting 100% coverage" is mandatory.

That said, I've found issues specifically after targeting blocks for testing, which were highlighted by incomplete coverage. It's crucial to always remember that coverage is predicated on having good tests. At the very least every test must test something. Sounds obvious, however it's possible to get 100% coverage with a single test, test nothing, and still miss issues.


> which were highlighted by incomplete coverage

This is the single most important part of code coverage IMO: we don't care about what the tests cover, we care about what the humans never considered.

For this reason I'm a proponent of 100% coverage with a major caveat: Any code you explicitly decided not to test gets marked "no cover", so it doesn't count for or against the coverage score. This way branches that were accidentally missed really stand out, and we're not bogged down by having to test 100% of the code.
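
With Python's coverage.py, for instance, that can be an explicit exclusion comment on the branch you decided not to test, so it neither inflates nor deflates the score (the config loader here is made up for illustration):

    import json

    DEFAULT_CONFIG = {"debug": False}

    def load_config(path):
        try:
            with open(path) as f:
                return json.load(f)
        except FileNotFoundError:  # pragma: no cover  (deliberately untested fallback)
            return DEFAULT_CONFIG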


Great idea we'll just add

// Unit Testing begin not covered

// Unit Testing end not covered

To the start and end of every file (or however it's done in your suite). Then we can leave early, go to the bar, and have a nice pint ;)


I would agree that "coverage gives an idea of how many lines of code have been run, but obviously no guarantees of correctness for those specific lines"


Perhaps I should rephrase. If you're trying to effectively unit test a project, when do you decide you've tested enough? And are there any metrics which help support that?

Coverage for example is a weak signal that you've at least run some fraction of your codebase at test time.


I just write tests for the bits that I know I'm going to have lots of trouble with. You can tell, after a while


Never done this, and don't consider it practical. Code and interfaces (even internal ones) change rapidly for me when I'm starting a new project or adding new major functionality to the point that the tests I'd write at the beginning would become useless pretty quickly.

I also believe that 100% test coverage (or numbers close to that) just isn't a useful goal, and is counterproductive from a maintenance perspective: test code is still code that has to be maintained in and of itself, and if it tests code that has a low risk of errors (or code where, if there are errors, those errors will bubble up to be caught by other kinds of testing), the ROI is too low for me.

After I've settled on interfaces and module boundaries, with a plausibly-working implementation, I'll start writing tests for the code with the highest risk of errors, and then work my way down as time permits. If I need to make large changes in code that doesn't yet have test coverage, and I'm worried about those changes causing regressions, I'll write some tests before making the changes.


That is how I used to work; then I got into finance, and two things are different from the work I did before (web/desktop/apps - or, long enough ago, there was no 'testing' at all in the 80s): the software I write now has to be certified/audited to some extent, and I cannot change/repair production software on the fly. That could cost a lot of money for certain bugs. So now I tend to write tests for everything and that helps a lot.


If I worked on software like that, I would almost certainly write more tests. But still not up-front.

Agreed, and the kind of software you’re working on is a part of this conversation that often gets left out and leads to people talking past each other. I work in cryptography, and our work needs an exceptionally high level of testing to catch subtle bugs. I used to work in games and I didn’t see the same value out of tests. Different industries require different practices.


>So now I tend to write tests for everything and that helps a lot.

Isn't that a separate issue from writing the tests before the implementation?


Yes, and I do both.


What "other kinds of testing" do you do instead then? How do you make sure the code is testable by those other tests?

Often people fall back on manual testing, which is often slow, unreliable and incomplete. And certain things might not even be testable if the system hasn't been designed to allow it.


Functional and integration testing, mostly.

I guess I'm assuming we're talking about unit testing specifically here, so I'm considering functional and integration testing as separate items.

Manual testing is terrible. I'll occasionally engage in that, but only as an extremely-short-term measure when I'm (again) not convinced my interfaces are stable enough yet to write unit tests.


I've been asked to literally test that values are incremented. Like, does ++ add 1 to the value in C, just to be safe, for the sake of 100% coverage.


I've seen tests that essentially look like that, and tests that, when you really look at it, pretty much just test the testing framework and mocking library and don't really exercise application code. It's infuriatingly useless.

99% of the code I write is test first. It makes my life easier - I always know what to do next and it reduces the amount I need to keep in my head.

TDD done the way many developers do is a PITA though. When I write a test it will start off life with zero mocking. I'll hit the db and live APIs. From here I'm iterating on making it work. I only introduce mocking/factories because it's harder work not to. I'll gradually add assertions as I get an idea about what behaviour I want to pin down.

Done this way using tests is just making life easier, you can start off testing huge chunks of code if that's what you're sketching out, then add more focused tests if that's a faster way to iterate on a particular piece. For me the process is all about faster feedback and getting the computer to automate as much of my workflow as possible.

edit: Kent Beck had a fantastic video series about working this way, I can only find the first 10 mins now unfortunately but it gives you a taste, https://www.youtube.com/watch?v=VVSSga1Olt8.


> I mean how many of you stick with this test driven development practice consistently?

I have been doing this for a while now. Practically, it saves me a tonne of time and I am able to ship software confidently.

> Can you describe the practical benefit?

Say a change is made to one section of the (enterprise-level) application and you miss addressing an associated section. This is easily identified, as your test will FAIL. As the number of features increases, the complexity of the application increases. Tests guide you. They help you ship faster, as you don't need to manually test the whole application again. In manual testing, there are chances of missing a few cases; if it's automated, such cases are all executed. Moreover, in TDD you only write the code which is necessary to complete the feature. Personally, tests act as a (guided) document for the application.

> Do you happen to rewrite the tests completely while doing the implementation?

Yes, if the current tests don't align with the requirements.

> When does this approach work for you and when did it fail you?

WORK - I wouldn't call it a silver bullet, but I am really grateful/happy to be a developer following TDD. As the codebase grows and new developers are brought in, tests are one of the things that help me ship software. NOT WORK - for a simple contact-only form (i.e. a fixed requirement with a name, email, textarea field and an upload file option), I'd rather test it manually than spend time writing tests.


The benefits you describe seems to be achievable with tests written after code as well.

We write extensive unit tests, but mostly after development work. The re-write work you mention is then avoided.


The benefit of TDD is that the code you end up with will actually be testable. Just keeping in mind that you have to write a test for your code, changes how you write it. As a bad example, imagine having a 1000 line function that just does everything you needed for the new feature... Good luck testing that afterwards.


> Just keeping in mind that you have to write a test for your code, changes how you write it.

Which is often enough to ensure the code is testable.

Generally, I'll write some tests sort of alongside, or soon after (like, a couple hours or a day) to not lose the initial thought process. Going back to code days/weeks later and trying to 'test' it when it wasn't conceived of as testable is tough.


> Just keeping in mind that you have to write a test for your code, changes how you write it.

Indeed it does! But that doesn't really change whether you write the test pre- or post-coding.


Thanks for your perspective. Did it take a lot of time to develop the required discipline? I mean, defining the interface for a single function is different from defining a set of functions in the context of a test. May I ask: what is your problem domain / field of work?


> Did it take a lot of time to develop the required discipline?

I wouldn't say a lot of time. If you get a good mentor, it isn't a steep learning curve. Moreover, I program using a framework, and it has a lot of helper functions.

> May I ask: what is your problem domain / field of work?

I develop SaaS apps and complex HA web applications/APIs. I am a web developer - PHP (Laravel to be exact).


Cannot agree more. I've worked for a year on a fast-evolving piece of software and we had to refactor things a lot. TDD helped me refactor with confidence and without regressions. Now I can't live without tests!


I've been writing software professionally for 20 years and for much of that time I was very skeptical of testing. Even after I started writing tests it was several more years before I saw the value of writing tests first. I've moved to doing this more and more, especially when doing maintenance or bug fixes on the back-end. I still struggle with writing valuable tests on the front-end, apart from unit tests of easily extracted logic functions, or very basic render tests that ensure a component doesn't blow up when mounted with valid data.

If you write your test after making the code changes, it's easier to have a bug in your test that makes it pass for the wrong reasons. By writing the test first, and progressively, you can be sure that each thing it asserts fails properly if the new code you write doesn't do what is expected.

Sometimes I do write the code first, and then I just stash it and run the new tests to be sure the test fails correctly. Writing the test first is simply a quicker way to accomplish this.

Like others have said, when there is a lot of new code - new architectural concerns etc. - it's not really worth it to write tests until you've sketched things out well enough to know you aren't so likely to have major API changes. Still, there is another benefit to writing the tests - or at least defining the specs early on - which is that you are not as likely to forget testing a particular invariant. If you've at least got a test file open and can write a description of what the test will be, that can save you from missing an invariant.

Think of tests as insurance that someone working on the code later (including yourself, in the future) doesn't break an invariant because they do not know what they all are. Your tests both state that the invariant is intentional and necessary, and ensure it is not broken.


> If you write your test after making the code changes, it's easier to have a bug in your test that makes it pass for the wrong reasons.

I see this a lot. I don't write tests first, but I always make sure my changes are properly covered by my assertions. For instance, when fixing a bug, I comment/undo my fix and make sure my test fails.

One could say I'm doing twice the work (fix, write test, comment out fix), but I find it easier than just writing the test first.


I tend to write test cases that reproduce bugs first, then fix the bug. Other than that, I don't stick too hard to test-driven development. I did for a while, but you start to get a sense of the sort of design pressure tests create and end up building more modular, testable code from the get-go anyway.

> Can you describe the practical benefit?

For a test case that reproduces a bug, you might have found the bug manually. Getting that manual process into a test case is often a chore, but in doing so you'll better understand how the system with the bug failed. Did it call collaborators wrong? Did something unexpected get returned? Etc. In those cases, I think the benefit really is a better understanding of the system.

> Do you happen to rewrite the tests completely while doing the implementation?

A TDD practitioner will probably tell you that you're doing it wrong if you do this. You write the minimum viable test that fails first. It might be something simple like "does the function/method exist". You add to your tests just in time to make the change in the real code.


It's a tool like any other and I reach for it when tests will help me write code faster and at a higher level of quality. Which is pretty often with new code.

Also always before a refactor. Document all the existing states and input and output and I can refactor ruthlessly, seeing as soon as I break something.

Tests are also great documentation for how I intend my api to be used. A bunch of examples with input, output, and all the possible exceptions. The first thing I look for when trying to understand a code base are the tests.

When do I not write tests? When I'm in the flow and want to continue cranking out code, especially code that is rapidly changing because as I write I'm re-thinking the solution. Tests will come shortly after I am happy with a first prototype in this case. And they will often inform me what I got wrong in terms of how I would like my api consumed.

When did it fail me? There are cases when it's really difficult to write tests. For example, Jest uses jsdom, which as an emulator has limitations. Sometimes it is worth it to work around these limitations, sometimes not.

Sometimes a dependency is very difficult to mock. And so it's not worth the effort to write the test.

Tests add value, but like anything that adds value, there is a cost and you have to sometimes take a step back and decide how much value you'll get and when the costs have exceeded the value and it's time to abandon that tool.


In new code, I'll usually write high level black box tests once enough code is in place to start doing something useful. I rarely write unit tests except for behavior that is prone to be badly implemented/refactored, or for stuff that's pretty well isolated and that I know I won't touch for a while.

Then as the project evolves, I start adding more high level tests to avoid regressions.

I prefer high level testing of products, they're more useful since you can use them for monitoring as well, if you do it right. I work with typed languages so there's little value in unit tests in most cases.

Sometimes I'll write a test suite "first", but then again only once I have at least written up a client to exercise the system. Which implies I probably decided to stabilize the API at that point.

Like others have said, tests often turn into a huge burden when you're trying to iterate on designs, so early tests tend to cause worse designs in my opinion, since they discourage architectural iterations.


[flagged]


To each their own. I have seen many times all the unit tests pass, but the application is broken.

It really depends on the situation. There is no silver bullet when it comes to testing.


[flagged]


In my experience, people who run around proclaiming how great they are and calling other people 'amateurs' are the real amateurs. The pros have seen enough to know that there are all sorts of ways to succeed and that (engineering) religion often gets you in trouble.


If I have time to write only one of unit or functional test, I will always choose a functional test, because these ones test what the user actually sees.

It's possible to write an app that passes 100% of unit tests and still be completely broken to the user.


Almost never. I’m roughing things out first, or iterating the APIs. When the functions, data and interactions seem to stabilize, then I’ll start to put tests in.

Once, I started with tests, but I had to rip up a lot along the way.

It is helpful to ensure testability early on. It might be easier for some devs to figure it out by actually coding up some tests early.

I won’t argue against anyone who is actually productive using hard-core TDD.


I have a similar experience, but now I am forced to use Ginkgo, and the tool doesn't make sense without TDD and BDD - behavior driven development, so some people must be using it.


Hi, I sort of maintain Ginkgo and Gomega when I have time (not much these days), having picked it up during my years at Pivotal, where it was originally developed. BDD/TDD is practiced extensively (as in, 100% of the time) at Pivotal. I'd be happy to talk to you more about the process or tools if you would like. Good luck!


I’ve always been highly skeptical of this approach. Often what you’re doing is so clear cut that tests are entirely unneeded. In fact, outside of the most complicated cases, I don’t even use unit tests. I have black box testing that I use to check for regression. My biggest reasoning for this is that test code is effectively another code base to maintain, and as soon as you start changing something it’s legacy code to maintain.

All that being said, I haven’t spent much time on teams with a particularly large group of people working in one project. I think the most has been 4 in one service. The more people working in a code base, the more utility you get from TDD, I believe. It’s just tough to have a solid grasp on everything when it changes rapidly.


I recently started doing this. My project involved using three different services, one of which was internal. I only had API documentation for these services, and for many reasons there was a delay in obtaining the required API keys, so I was stuck on testing my code. That's when I decided to write unit tests, mock these services wherever I was using them, and start testing my code. There were zero bugs in these integrations later.
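
A rough sketch of the approach, using Python's unittest.mock with a hypothetical service client (the real services and fields aren't named here):

    from unittest import mock

    def fetch_display_name(client, user_id):
        # Code under test: calls the external service and massages the response.
        profile = client.get_profile(user_id)
        return f'{profile["first"]} {profile["last"]}'.strip()

    def test_fetch_display_name_without_real_api_keys():
        fake_client = mock.Mock()
        fake_client.get_profile.return_value = {"first": "Ada", "last": "Lovelace"}

        assert fetch_display_name(fake_client, user_id=42) == "Ada Lovelace"
        fake_client.get_profile.assert_called_once_with(42)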

While doing this I also found one more benefit, at least for my use case. The backend for user login was simple when I started, but it started growing in a few weeks. Writing test cases saved me from manually logging in with each use case, testing some functionality, then logging out and repeating with other use cases.

Not sure if it is a practical benefit or not, but writing test cases initially also helped me rewrite the way I was configuring Redis for a custom module so that the module can be tested better.

My only issue is that it takes time, and selling this to higher-ups was kind of difficult.


Thanks, an interesting perspective. Do you plan to go with this approach for your other projects as well?


More fun is when you get an API documentation and no access to the actual system. You develop the whole thing and then fly out to their site, you've got 3 days to get your software and hardware certified by them, and the certification costs a fortune.


I am planning to do this for my hobby projects. I think it will help me write better code and also learn a few things. For professional projects, I will try to add tests when time allows, but it depends on my manager's approval.


TDD works best at the interface where there is the lowest likelihood of API churn.

Writing a test for something like an MP3 ID tag parser is a good case for TDD with unit tests. It's pretty clear what the interface is, you just need to get the right answer, and you end up with a true unit test.
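
For illustration, a much-simplified, hypothetical sketch of that kind of test-first unit test, together with the minimal implementation it would drive out (real ID3 parsing is more involved):

    # test_id3.py -- test written first, plus the minimal code it drove out
    import unittest

    def parse_title(frame_body):
        # Hypothetical, simplified parser: skip the text-encoding byte and
        # decode the rest as Latin-1. Real ID3 handling is more involved.
        return frame_body[1:].decode("latin-1")

    class ParseTitleTest(unittest.TestCase):
        def test_reads_title_from_simplified_tit2_frame(self):
            # Hand-crafted, simplified TIT2-style payload:
            # one encoding byte followed by the title text.
            frame = b"\x00Some Song Title"
            self.assertEqual(parse_title(frame), "Some Song Title")

    if __name__ == "__main__":
        unittest.main()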

Doing TDD with a large new greenfield project is harder. Unless you have a track record of getting architecture right first time, individual tests will have to be rewritten as you rethink your model, which wastes a lot of energy. Far better is to test right at the outermost boundary of your code that isn’t in-question: for example a command line invocation of your tool doing some real world example. These typically turn into integration or end to end tests.

I tend to then let unit tests appear in stable (stable as in the design has settled) code as they are needed. For example, a bug report would result in a unit test to exhibit the bug and to put a fixed point on neighboring code, and then in the same commit you can fix the bug. Now you have a unit test too.

One important point to add is that while I reserve the right to claim to be quite good at some parts of my career, I’m kind of a mediocre software engineer, and I think I’m ok with that. The times in my career when I’ve really gotten myself in a bind have been where I’ve assumed my initial design was the right one and built my architecture up piece by piece — with included unit tests — only to find that once I’d built and climbed my monumental construction, I realized all I really needed was a quick wooden ladder to get up to the next level which itself is loaded with all kinds of new problems I hadn’t even thought of.

If you solve each level of a problem by building a beautiful polished work of art at each stage you risk having to throw it away if you made a wrong assumption, and at best, waste a lot of time.

Don’t overthink things. Get something working first. If you need a test to drive that process so be it, but that doesn’t mean it needs to be anything fancy or commit worthy.


I do sometimes. It depends. I want to do more of it.

Here are cases where I've genuinely found it valuable and enjoyable to write tests ahead of time:

Some things are difficult to test. I've had things that involve a ton of setup, or a configuration with an external system. With tests you can automate that setup and run through a scenario. You can mock external systems. This gives you a way of setting up a scaffold into which your implementation will fall.

Things that involve time are also great for setting up test cases. Imagine some functionality where you do something, and need 3 weeks to pass before something else happens. Testing that by hand is effectively impossible. With test tools, you can fake the passing of time and have confirmation that your code is working well.
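
A minimal sketch of what I mean, assuming you inject the notion of "now" rather than calling the system clock directly; the rule and names are made up:

    # test_due_dates.py -- faking the passage of time by passing "now" in
    import unittest
    from datetime import datetime, timedelta

    def is_due(created_at, now):
        # Hypothetical rule: an item becomes due 3 weeks after creation.
        return now - created_at >= timedelta(weeks=3)

    class IsDueTest(unittest.TestCase):
        def test_not_due_before_three_weeks(self):
            created = datetime(2020, 1, 1)
            self.assertFalse(is_due(created, created + timedelta(weeks=2)))

        def test_due_once_three_weeks_have_passed(self):
            created = datetime(2020, 1, 1)
            self.assertTrue(is_due(created, created + timedelta(weeks=3)))

    if __name__ == "__main__":
        unittest.main()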

Think about when you are writing some functionality that requires involved logic behind a UI. It makes sense to implement the logic first. But how do you even invoke it without a UI? Write a test case! You can debug it through test runs without needing to invest time in writing a UI.

Bugs! Something esoteric breaks. I often write a test case named test_this_and_that__jira2987, where 2987 is the ticket number where the issue came up. I write up a test case replicating the bug with only the essential conditions. Fixing it is a lot more enjoyable than trying to walk through the replication script by hand. Additionally, it results in a good regression test that makes sure my team does not reintroduce the bug.
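
A sketch of what such a ticket-pinned test can look like (the function, values and ticket number here are made up):

    # Regression test pinned to a (made-up) ticket number
    import unittest

    def normalize_email(raw):
        # Hypothetical function that once crashed when given None.
        if raw is None:
            return ""
        return raw.strip().lower()

    class EmailRegressionTest(unittest.TestCase):
        def test_none_email_does_not_crash__jira2987(self):
            # Replicates the bug report with only the essential conditions.
            self.assertEqual(normalize_email(None), "")

    if __name__ == "__main__":
        unittest.main()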


I don't write as many tests as I'd like in general (adding tests to a legacy project that has none is a struggle - often worth it, but it needs to be prioritized against other tasks).

I once had to write an integration for a "SOAP" web service that was... special. Apparently it was implemented in PHP (judging by the URL), by hand (judging by the... "special" features) - and likely born as a refactor of a back-end for a Flash app (judging by the fact that they had a Flash app).

By trial and error (and with the help of the extensive, if not entirely accurate, documentation) via SoapUI and curl, I discovered that it expected the SOAP XML message inside a comment inside an XML SOAP message (which is interesting, as there are some characters that are illegal inside XML comments... and apparently they did parse these nested messages with a real XML library, I'm guessing libxml). I also discovered that the API was sensitive to the order of elements in the inner XML message.

Thankfully I managed to conjure up some valid POST bodies (along with the crazy replies the service provided, needed to test an entire "dialog") - and could test against these - as I had to implement half of a broken SOAP library on top of an XML library and raw POST/GET due to the quirks.

At any rate, I don't think I'd ever have got that done and working if I couldn't write tests first.

Obviously the proper fix would've been to send a tactical team to hunt down the original developers and just say no to the client...


One of my teammates has an AWESOME response to testing and here it is:

"The point of writing tests is to know when you are done. You don't have to write failing tests first if you are just trying to figure out how to implement something or even fix something. You must write a failing test before you change prod code. How do you do square this seeming circle?

- Figure out what you need to do

- Write tests

- Take your code out and add it back in chunks until your tests pass

- Got code left over? You need to write more tests or you have code you don't need

Without the tests, you cannot know when you are done. The point of the failing test is that it is the proof that your code does not do what you need it to do.

Writing tests doesn't have to slow down the software development process. There are patterns for various domains of code (e.g., controller layer, service layer, DAO layer). To do testing efficiently, you need to learn the patterns. Then when you need to write a new test, you identify and follow the pattern.

You also need to use the proper tools. If you're using Java or Kotlin, then you MUST use PIT (http://pitest.org). It is a game changer for showing you what parts of your code are untested."

- Steven, Senior Software Engineer on our team


When people say writing tests first slows you down, they are usually only looking at the upfront costs. They do not factor in the costs of maintenance, and having to fix and/or extend a previously written code.

Send my best regards to Steven. I share the same views as he does.


I tend to swap between two modes.

If I'm working with well-known tools and a problem I understand reasonably well, I'll approach it in ultra-strict test-first style, where my "red" is, "code won't compile because I haven't even defined the thing I'm trying to test yet". It might sound a step too far but I find starting by thinking about how consumers will call and interact with this thing results in code that's easier to integrate.

However, if I'm using tools I don't know well, or a problem I'm not sure about, I much prefer the "iterate and stabilise" approach. For me this involves diving in, writing something scrappy to figure out how things work, deciding what I don't like about what I did, then starting again 2 or 3 times until I feel like I understand the tools and the problem. The first version will often be a mess of printf debugging and hard-coded everything, but after a couple of iterations I'm usually getting to a clean and testable core approach. At that point I'll get a sensible set of tests together for what I've created and flip back to the first mode.


I will usually write one or a few tests that exercise ideal 'happy paths' before starting proper implementation, assuming it can be done fairly quickly. I don't hold on to these very tightly; they will often change as things go forward.

Once I have those basic tests passing, I will often write a couple more tests for less common but still important execution paths. It's ok if these take a little longer, but only a little.

Beyond the obvious 'test driven' benefits, I find that, especially for the first round, writing those tests helps me solidify what I'm trying to accomplish.

This is often useful even in cases where I go in feeling quite confident about the approach, but there are some blind spots that are revealed even with the first level, most simple tests.

I find the basic complaints that others have posted here about pre-writing tests largely valid. "Over-writing" tests too early on is, for me, often a waste of time. It works best when the very early tests can be written quickly and simply.

And if they can't be, then I'll frequently take a step back and see if I'm coming at the problem from a poor direction.


Yes, most of the time, for non-exploratory code that is not deeply entangled with an external framework.

When starting a new module / class I put a skeleton first, to establish an initial interface. Then I change it as I find, while writing tests, how it can be improved.

When dealing with bugs - red / green is incredibly helpful with pinpointing the conditions and pointing exactly where the fault lies.

When introducing new functionality I do most of the development as tests. Only double checking if it integrates once before committing.

Going test first pushes your code towards statelessness and immutability, nudging towards small, static blocks. As most of my work is with data, I find it to be a considerable advantage.

It provides little advantage if you already rely heavily on a well established framework that you need to hook to (e.g. testing if your repos provide right data in Spring or if Spark MRs your data correctly).

I tend to change/refactor a lot to minimise the maintenance effort in the long run. I would spend most of the time testing by hand after each iteration if not for the suite I could trust at least to some extent.


Sometimes.

If I'm writing something where I know what the API to use it should be and the requirements are understood, yes, I'll start with tests first. This is often the case for things like utility classes: my motivation in writing the class is to scratch an itch of "wouldn't it be nice if I had X" while working on something unrelated. I know what X would look like in use because my ability to imagine and describe it is why I want it.

There are times, however, where I'm not quite sure what I want or how I want to do it, and I start by reading existing code (code that I'm either going to modify or integrate with) and if something jumps out at me as a plausible design I may jump in to evaluate how I'd feel about the approach first-hand.

In short, the more doubts or uncertainty I have about the approach to a problem (in the case of libraries, this means the API), the longer I'll defer writing tests.


I know this will probably get downvoted. It's impossible to predict how your app will fail ahead of time. So test driven development (for the most part) is a waste of time. Every test you write will either continue to pass forever, providing no useful information, or will need to be updated when new features are added to the software, making it costly to keep around. Meanwhile such tests reveal very few defects that wouldn't easily be caught by a basic smoke test you need to do anyway.

Of course, there are always exceptions. If you have software that is highly complex but the outputs are very simple and easy to measure, then it might actually be a good idea.


Almost never.

With the kind of software I mostly write these days, I'm fortunate to be able to incrementally develop my code and test it under real-world conditions or a subset thereof.

So my approach is exploratory coding -- I start with minimum workable implementations, make sure they work as needed, and then add more functionality, with further testing at each step.

The upside is that I don't have to write "strange" code to accommodate testing. The downside is that I'm forced to plan code growth with steps that take me from one testable partial-product to the next. A more serious downside, one I'm very aware of, is that not every project is amenable to this approach.


> With the kind of software I mostly write these days, I'm fortunate to be able to incrementally develop my code and test it under real-world conditions or a subset thereof.

What kind of software do you write, if you don't mind me asking? And are your "real-world conditions" tests automated?

> The upside is that I don't have to write "strange" code to accommodate testing.

Can you elaborate more as what you mean by "strange" ?


For the past 2 years, most of my work has been in porting some fairly simple legacy message forwarding and conversion programs from C to Java. So on our test servers I can swap out the C programs for drop-in replacements in Java and watch them (via log files) working -- or not. If my programs fail I can either observe crashes and stack trace or the message receiving programs will crash or loudly object to bad data from me. Usually one day's worth of traffic will exercise enough of my program's logic that failure to fail for a day constitutes a successful end-to-end test.

Yes, this is kid stuff. My current work is about as sophisticated as typical undergrad Computer Science projects. We can't all be doing rocket science!

I used to write automated test setups for my programs, providing streams of pre-canned messages and such. That worked out OK. I suppose it's great to have test suites to avoid regression and such, but I ended up regretting all the effort I sunk into testing. So far it's been my experience that I would have to sink a lot of time into creating a test suite just to exercise my programs as thoroughly as simple exposure to real-world message traffic does.

I hope my attempt to be brief didn't come across as derogatory when I wrote "strange." Here's an example: I like to make a lot of my fields and methods private. It's handy that my IDE warns me when fields and methods aren't used, or when final fields aren't initialized. Obviously, for "classic" unit tests I'd have to at least expose my methods at the package level to call them from out of class. Another example: my apps rely on a fair bit of configuration data and some embarrassingly tight coupling between my classes. A JUnit-friendly program would call for a lot of mockups, as well as a lot more coding to interfaces rather than concrete classes, probably a lot more reliance on design patterns. My coding style for these projects yields a small number of compact classes but is very hostile to unit testing.

To be clear: For many other projects, your mileage may vary dramatically. I've successfully done TDD in other projects where that made a lot more sense.


My take on tests is that they serve two purposes:

1. As a security system for your code

2. As a tool for thought, prompting the application of inverse problem solving

Both of these have costs and benefits. If you consider the metaphor of the security system, you could secure your house at every entry point, purchase motion sensors, heat detectors, a body guard, etc. etc. If you're Jeff Bezos maybe all of that makes sense. If you're a normal person it's prohibitively expensive and probably provides value that is nowhere near proportional to its cost. You also have to be aware that there is no such thing as perfect security. You could buy the most expensive system on earth and something still might get through. So security is about risk, probability, tradeoffs, and intelligent investment. It's never going to be perfect.

Inverse thinking is an incredibly powerful tool for problem solving, but it's not always necessary or useful. I do think if you haven't practiced something like TDD, it's great to start by over applying it so that you can get in the habit, see the benefit, and then slowly scale back as you better understand the method's pros and cons.

At the end of the day, any practice or discipline should be tied to values. If you don't know WHY you're doing it and what you're getting out of it, then why are you doing it at all? Maybe as an exploratory exercise, but beyond the learning phase you should only do it if you understand why you're doing it.


I write approximately as much test code as application code, but it never makes sense to write tests first.

I frequently redesign/rewrite an implementation a few times before committing it, often changing observable behaviors, all of which will change what the tests need to look like to ensure proper coverage. Some code is intrinsically and unavoidably non-modular. Tests are dependent code that need to be scoped to the implementation details. Unless you are writing simple CRUD apps, the design of the implementation is unlikely to be sufficiently well specified upfront to write tests before the code itself. Writing detailed tests first would be making assumptions that aren't actually true in many cases.

I also write thorough tests for private interfaces, not just public interfaces. This is often the only practical way to get proper test coverage of all observable behaviors, and requires far less test code for equivalent coverage. I don't grok the mindset that only public interfaces should be tested if the objective is code quality.
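
In Python terms (just as an illustration; the helper below is made up), that simply means importing and exercising an underscore-prefixed "private" helper directly:

    # test_name_splitting_internals.py -- exercising a "private" helper directly
    import unittest

    def _split_full_name(full_name):
        # Hypothetical internal helper; the leading underscore marks it as
        # private by convention, but nothing stops a test from exercising it.
        parts = full_name.strip().split(None, 1)
        return (parts[0], parts[1] if len(parts) > 1 else "")

    class SplitFullNameTest(unittest.TestCase):
        def test_single_name_has_empty_surname(self):
            self.assertEqual(_split_full_name("Prince"), ("Prince", ""))

        def test_surrounding_whitespace_is_ignored(self):
            self.assertEqual(_split_full_name("  Ada Lovelace "), ("Ada", "Lovelace"))

    if __name__ == "__main__":
        unittest.main()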

When practical, I also write fuzzers or exhaustive tests for code components as part of the test writing process. You don't run these all the time since they are very slow but it is useful for qualifying a release.


The answer to this question depends on the type of programming you do.

1)

Working as a part of an enterprise team on a big lump of TypeScript and React? Then you probably don't write tests before you code because a) TypeScript catches all the bugs amirite? and b) Your test runner is probably too hairy to craft any tests by hand, and c) You are probably autogenerating tests _after_ writing your code, based on the output of the code you just wrote, code which may or may not actually work as intended.

2)

Working on an npm module that has pretty tightly defined behaviour and the potential to attract random updates from random people? Then you _need_ to write at least some tests ahead of time because it is the only practical way to enshrine that module's behavior. You need a way to ensure that small changes/improvements under the hood don't alter the module's API. This means less work for you in the long run, and since you are a sensible human being and therefore lazy, you will write the tests before you write the code.


I don't. My approach probably isn't ideal. But I find it really hard to start with tests for new solutions.

With a basic understanding of the problem and the expected solution, I start off directly with prototype code - basically creating a barely working prototype to explore possible solutions.

When I'm convinced that I'm on the right track (design-wise), I start adding more functionality.

When I'm at a stage where the solution is passable - I then start writing tests for it. I spend some time working through tests and identifying issues with my solution.

Then I fix the issues and clean up the solution.

At this point my test cases should cover most (if not all) of my problem statement, edge cases and expected failures.

When it comes to maintaining the solution, I do start with test cases though. Usually just to ensure that my understanding of the bug or issues is correct. With the expected failure tests, I then work on the fix. And write up any other test cases needed to cover the work.


In general I write unit tests as I implement. I would not write them unless I was convinced they were immediately beneficial to my productivity. Some parts of my code are not covered. In the beginning the test cases just output the result to standard out; in the end I use assertions, or disable tests that depend on external systems.


> I mean how many of you stick with this test driven development practice consistently?

I do for some projects. For example currently I'm working on a project that has a high test coverage and most bugs and enhancements start first as a test and then they're implemented in code. TDD makes sense when the test code is simpler to write than the implementation code.

> Can you describe the practical benefit?

It may take some time to write the initial tests, but as I'm working with some legacy enterprise tech, serializing all inputs and testing against that is a lot faster than testing and re-testing everything on real integration servers every commit.

Tests provide you with a safety net when you do refactors or new features so that the existing stuff is not broken.

> Do you happen to rewrite the tests completely while doing the implementation?

Yeah, I do. There are two forces at play. One of them pushes towards tests that cover more stuff in a black-box manner - they won't be broken as often when you're switching code inside your black box. On the other hand, if you've got finer grained tests, when they break it's obvious which part of the code is failing.

> When does this approach work for you and when did it fail you?

It works for projects that are hard to test any other way (we've got QA but I want to give them stuff that's unlikely to have bugs) and for keeping regressions at bay. It did fail me when I didn't have the necessary coverage (not all cases were tested and the bug was in an untested branch).

I also wouldn't bother to test (TDD) scratch work or stuff that's clearly not on the critical path (helper tools, etc.), but for enterprise projects I tend to cover as much as possible (that sometimes involves writing elaborate test suites), as working on the same bugs over and over is just too much for my business.


Never - I always attempt to make an end to end implementation. Once the code works I go back and examine what I did and try to make it simpler. This often requires refactoring and changing APIs etc. Writing a bunch of tests would simply add inertia to that process. After I am satisfied with the code, or when I get bug reports, I will add tests. Test code is code and often has bugs. Every line of code I write is a liability, so I try to limit it to necessary things. I have never seen the tests-first approach work. Once all the test code is written, people become reluctant to change things because they have to change both the code and the test code, doubling the work/time. One just ends up with well tested but not so good code. If you are M$ and can afford to pair program it might be more feasible.


Nope. We are judged on our individual performance by how many points (tasks) we complete. Since this became the metric, unit testing is always an afterthought. During code checks / reviews, some developers will ask others to add unit tests, which are then hastily written.


I mainly write embedded software and I tend to write my tests using Robot Framework. I generally start by writing the new feature since I need to probe around how it will work on the hardware, but generally write the test before the feature is actually finished. This is because the test itself will help me recreate the conditions I need the hardware to be in during debugging! One example is sending a specific sequence of serial commands over the CAN bus, or hitting a sequence of buttons on the user interface.

I am still trying to figure out the best way to do unit testing with embedded C (working with Unity right now), but with Python development I try to write unit tests only for more tricky code.


Sure, still, 20 years on. Not dogmatically so to reach some coverage goal, but anything with real logic, yes. I don't test-drive React components, for example (instead the goal is to get all real logic out of them).

Benefits--not pushing logic defects gives me more time to invest in other important stuff; I end up with tests that document all the intended behaviors of the stuff I'm working on (saves gobs of time otherwise blown trying to understand what code does so I can change it safely); I'm able to give a lot of attention to ensuring the design stays clean. Plus, it's enjoyable most of the time.

"They incur technical debt and bugs at the same rate as other code." Not at all true.


There's two basic kinds of code I write: the kind where I know what I'm doing before I start, and the kind where I don't.

For the latter, it's when I'm exploring a codebase or an API, writing a spike script just to see how things work and what kinds of values get returned, for example. Many times I'll turn the spike into a test of some sort, but a lot of times I just toss it when I'm through.

For the former, yes, I generally write tests before implementation, though I'm not religious about it. I'm just lazy. I'm going to have to test the code I write somehow, whether that's by checking the output in a repl or looking at (for example) its output on the command line. Why you wouldn't want to capture that effort into a reproducible form is beyond me. (And if you're one of those people who just writes up something and throws it into production, I really hope we don't end up on a team together!) I generally just write the test with the code I wish I had, make it pass somehow, rinse, repeat. It's not rocket science. It's just a good scaffold to write my implementation against.

That said, I don't usually keep every test I write. As others have noted, that code becomes a fixed point that you have to deal with in order to evolve your code, and over time it can become counterproductive to keep fixing it when you change your implementation slightly. So the stuff I keep generally has one of three qualities:

- it documents and tests a public interface for the API, a contract it makes with client code

- it tests edge cases and/or bugs that represent regressions

- it tests particularly pathological implementations or algorithms that are highly sensitive to change.

Honestly, I feel like people who get religious about TDD are doing it wrong, but people who never do TDD (i.e. writing a test first) are also doing it wrong in a different way. There's nothing wrong with test-soon per se, but if you're never dipping into documenting your intended use with a test before you start working on the implementation itself, you're really just coding reactively instead of planning it, and it would not surprise me to hear lots of complaints in your office about hard-to-use APIs.


No. Most of the time I don't really have any spec to base hypothetical tests on, and I have to be exploring what is even possible as I go. When I'm throwing things at a third-party API or service to see what sticks, writing tests first is wasteful.

If I'm doing something that is pretty well defined and essentially functional, where I know the inputs and outputs, I'll sometimes do the TDD loop. It can be good for smoking out edge cases; although unless you start drifting into brute force fuzzing or property-based testing, you still have to have the intuitions about what kind of tests would highlight bugs.
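
For the property-based side, here's a minimal sketch using the hypothesis library (run with pytest after installing hypothesis); the function under test is made up:

    # test_dedupe_properties.py -- property-based sketch with hypothesis
    from hypothesis import given, strategies as st

    def dedupe_keep_order(items):
        # Made-up function under test: drop duplicates, keep first occurrences.
        seen = set()
        return [x for x in items if not (x in seen or seen.add(x))]

    @given(st.lists(st.integers()))
    def test_dedupe_properties(items):
        result = dedupe_keep_order(items)
        assert len(result) == len(set(result))        # no duplicates survive
        assert all(x in items for x in result)        # nothing is invented
        first_positions = [items.index(x) for x in result]
        assert first_positions == sorted(first_positions)  # order preserved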


I can't even code like that except maybe for simple tasks in a mature project.

I'm more a "Make It Work, Make It Beautiful, Make It Fast" person and don't see it working by writing unit test first.


I think the value of TDD's red–green–refactor cycle is in making sure that after you "Make It Beautiful" it still works, and again after you "Make It Fast" it still works. Otherwise, if you don't automate the test first, you end up testing manually three times.


exactly, lots of people are missing the point.


I usually start with a REPL, then play around with that for a while. For languages without a REPL I usually just create a one time file in the project and play around with that.

Once I am a bit more comfortable with the code and have a better understanding of what I need, I will start writing some tests. I usually don't write too many early on, this way I don't have to go back halfway through development and change all my tests. Only when I'm confident enough with the code do I start writing extensive tests and try to cover all cases.


I look up to many people in the industry who have achieved big milestones. One of them is Chris. His talk and post - https://quii.dev/The_Tests_Talk - will provide you with a tonne of information regarding TDD. If you are into Go, you should definitely check out Chris's book - https://github.com/quii/learn-go-with-tests.


I resisted TDD at first, then became a firm believer, then dropped it, only to turn around and practice it again.

The major thing is that the tests become a boundary of sorts, which enables you to do a lot more than if you didn't have them. It can also be done horribly wrong, which was the reason why I stopped using it.

I see it as a tool to see how good your code and abstractions are. Large tests => leaky abstraction. Many details (Mocks/stubs) in the tests => leaky abstractions.

Also it reminded me that sometimes I'm trying to satisfy the language instead of just solving the problem. As soon as you are trying to satisfy your language, code style/principle or architecture, you are trying to solve something that has nothing to do with the problem, which just causes the code to be designed wrong, or signals that I should move it somewhere else. Though if I need to tweak the code to make it more testable, I always do that.

I also have a rule: never test data, only test functionality. This has worked very well over the years, producing pretty clean code and clean tests and, I believe, fewer bugs, though it's hard to be sure. My perception is that during the periods when I switched between the practices, the TDD code had fewer bugs and I could confirm them faster than in the code that had no tests. Also, the code produced with TDD was a lot easier to write new tests for, whereas the non-TDD code was really hard to write tests for, if I wanted to, for example, confirm a bug or a feature.


Depends. If it's a bug or a new feature built on top of code that is already tested, then I will write the tests because they fit nicely into already existing testing infrastructure. The test therefore "proves" the bug, and the fix eliminates it.

For outright new code (think new objects, new API, and so on), I tend not to write the tests first because they become a cognitive load that affects my early design choices. In other words, I am now writing code to make the tests pass, and have to exert effort not to do that.


This is a great hoax in the software industry. I generally use it as a conversation starter at company lunches.


I follow a practice I like to call "test minded development".

I write tests at the earliest point I feel appropriate - but rarely before I actually write code. I tend to work on greenfield projects, so writing tests before I write code rarely makes sense.

IMO, TDD only makes sense if you already know what you're going to write. This makes a lot of sense if you're working on a brownfield project or following predictable patterns (for example, adding a method to a Rails controller).

If I'm doing actual new development, as I code, I tend to write a lot of pending tests describing the situations I need to test. However, I don't typically implement those tests until after.

One of the biggest factors for me is that so much of my code deals with handling some degree of unknown - what the client will need, exactly how an API works, how errors/invalidations are handled, unexpected refactoring, etc.

In this case, it doesn't make sense to create tests before I write the underlying code. Most tests will have mocks/stubs/simulations that make assumptions about how the code works. At that point, a pre-written test is no better than code, since it's just as likely to contain errors.

I'd much rather do real-time debugging and interaction while developing, and then capture the exact interactions with outside systems.


> Do you write tests before the implementation? Absolutely, day in, day out. New code and bug fixing alike. It's the proof I need that whatever code I am writing is an exact fit for the problem it's trying to solve.

> Can you describe the practical benefit? Testing first helps me clarify my intentions, then implement a realisation of those intentions through code. Testable code has the side effect of being well modularized, free from hidden dependencies, and SOLID.

And it's also about making sure that for whatever code you write, there's a justification for it and a proof that it works. It could be seen more like a harness protecting you from writing things that you don't need - YAGNI.

> Do you happen to rewrite the tests completely while doing the implementation? I follow the classic TDD cycle, RED/GREEN/REFACTOR and I can not be any happier.

> When does this approach work for you and when did it fail you? The only exception to the above is exploratory code, i.e. the times when I don't know how to solve a given problem. I like to hack a few things together, poke the application, and see what happens as a result of what I have changed.

Having verified and learned more about how to solve that problem, I delete all my code and start afresh but this time TDD the problem/solution equipped with what I have learned from my exploratory cycle.

If you are in doubt or need further information to help you make your own decision about the matter, I can not recommend enough the classic TDD by Example from Kent Beck as a starting point.

For a more real-world view with an eye on the benefits of adopting TDD, have a look at Growing Object Oriented Software Guided by Tests, aka the Goose book.


Test Driven Development is not actually synonymous with Test First Development. Test First is a method that you can use to do TDD. It's quite a good method, but it's not the only one.

To answer your question properly, you need to back up a bit. What is the benefit of TDD? If your answer is "to have a series of regression tests for my code", then I think the conclusion you will come to is that Test First is almost never the right way to go. The reason is that it's very, very hard to imagine the tests that you need to have for your black box code when you haven't already written it.

You might be wondering why on earth you would want to do TDD if not so that you can have a series of regression tests for your code. Remember that in XP there are two kinds of testing: "unit testing" and "acceptance testing". An acceptance test is a test that the code meets your requirements. In other words, it's a black box regression test. You are very likely to do acceptance testing after the fact, because it is easier (caveat: if you are doing "outside-in", usually you will write an acceptance test to get you started, but after you have fleshed out your requirements, you normally go back and write more acceptance tests).

If acceptance tests are regression tests, why do we need unit tests? A common view of "unit tests" is to say that you want to test a "unit" in isolation. Often you take a class (or the equivalent) and test the interface, making sure it works. Frequently you will fake/mock the collaborators. It makes sense that this is what you should do because of the words "unit" and "test".

However, originally this was not really the case as far as I can tell (I was around at the time, though not directly interacting with the principal XP guys -- mostly trying to replicate what they were doing in my own XP projects. This is all to say that I feel confident about what I'm saying, but you shouldn't take it as gospel). Really, right from the beginning there were a lot of people who disliked both the words "unit" and "test" because they didn't match what they were doing.

Let's start with "test". Instead of testing that functionality worked, what you were actually doing was running the code and documenting what it did -- without any regard for whether or not it fit the overall requirements. One of the reasons for this is that you don't want to start with all of the data that you need and then start to write code that produces that data. Instead you start with a very small piece of that data and write code that produces that data. Then you modify the data and update the code to match that data. It is less about "test first" than it is about decomposing the problem into small pieces and observing the results of your development. It does not matter if you write the test first or second, but it's convenient to write the test first because before you can write the code, you need to know what change you want the code to enact.

One of the reasons why the term "BDD" was invented was because many people (myself included) thought that the phrase "Test Driven Development" was misleading. We weren't writing tests. We were demonstrating behaviour of the code. The "tests" we were writing were not "testing" anything. They were simply expectations of the behaviour of the code. You can see this terminology in tools like RSpec. For people like me, it was incredibly disheartening that the Cucumber-like developers adopted the term BDD and used it to describe something completely different. Even more disheartening was that they were so successful in getting people to adopt that terminology ;-)

Getting back to the term "unit", it was never meant to refer to isolation of a piece of code. It was meant to simply describe the code you happened to be working with. If we wanted to write tests for a class we would have called it "class tests". If we wanted to write tests for an API we would have called it "API tests". The reason it was called "unit test" (again, as far as I can tell) is because we wanted to indicate that you could be testing at any level of abstraction. It's just intended to be a placeholder name to indicate "the piece of code I'm interested in".

I think Michael Feathers best described the situation by comparing a unit to a part in a woodworking project. When you are working on a piece, you don't want any of the other pieces to move. You put a clamp on the other pieces and then you go to work on the piece that you want to develop. The tests are like an alarm that sounds whenever a piece that is clamped moves. It's not so much that you are "testing" what it should do as you are documenting its behaviour in a situation. When you touch a different part of the code, you want to be alerted when it ends up moving something that is "clamped" (i.e. something you aren't currently working on). That's all. The "unit" you want to clamp depends a lot on how you want to describe the movement. It might be a big chunk, or it might be something incredibly tiny. You decide based on the utility of being alerted when it moves.

So having said all that, what is the benefit of TDD? Not to test the code, but rather to document the behaviour. I've thought long and hard about what that means in practical terms and I've come to the conclusion that it means exposing state. In order to document the behaviour, we need to observe it. We have "tests", but they are actually more like "probes". Instead of "black box" interactions (which are fantastic for acceptance tests) we want to open up our code so that we can inspect the state in various situations. By doing that we can sound the alarm when the state moves outside of the bounds that are expected. The reason to do that is so that we can modify code in other places safe in the knowledge that we did not move something on the other end of our project.

Anything you do to expose state and to document it in various situations is, in my definition anyway, TDD. Test First is extremely useful because it allows you to do this in an iterated fashion. It's not so much that you wrote the test first (that's irrelevant). It's that you have broken down the task into tiny pieces that are easy to implement and that expose state. It just happens to be the case that it's extremely convenient to write the test first because you have to know what you want before you can write it. If you are breaking it down in that kind of detail, then you might as well write the test first. And, let's face it, it kind of forces you to break it down into that detail to begin with. That's the whole point of the exercise.

There are times when I don't do test first and there are times when I don't do TDD. I'll delve into both separately. First, I frequently don't do Test First even when I'm doing TDD if I'm working with code that has already got a good TDD shape (exposed state with documented expectations). That's because the "test" code and the production code are 2 sides of the same coin. I can make a change in the production behaviour, witness that it breaks some tests and then update the tests. I often do this to stress test my tests. Have I really documented the behaviours? If so, changing the behaviour should cause a test to fail. If it doesn't, maybe I need to take a closer look at those tests.

Additionally, I don't always do TDD. First, there are classes of problems which don't suit a TDD breakdown (insert infamous Sudoku solver failure here -- google it). Essentially anything that is a system of constraints or anything that represents an infinite series is just exceptionally difficult to break down in this fashion (woe be unto those who have to do Fizz Buzz using TDD). You need to use different techniques.

Jonathon Blow also recently made an excellent Twitter post about the other main place where you should avoid TDD: when you don't know how to solve your problem. It is often the case that you need to experiment with your code to figure out how to do what you need to do. You don't want to TDD that code necessarily because it can become too entrenched. Once you figure out what you want to do, you can come back and rewrite it TDD style. This is the original intent for XP "spikes"... but then some people said, "Hey we should TDD the spikes because then we don't need to rewrite the code"... and much hilarity ensued.

I hope you found this mountain of text entertaining. I've spent 20 years or more thinking about this and I feel quite comfortable with my style these days. Other people will do things differently and will be similarly comfortable. If my style illuminates some concepts, I will be very happy.


Thanks for your perspective. One issue is that it is often hard to tell in advance whether you know what you are doing or not. Also in my experience some implementation details can lead to a revision of the interface as well.


My best advice is to try it both ways (assuming you are interested in Test First/TDD). You'll find a sweet spot that works well for you. This is an area where I think there are lots of things that can work well. For me, my TDD is probably the sharpest knife in my kit, so I rely on it. For others, maybe there are other things. Don't let anyone tell you that there is only one way to do it. Of course you have to find a way to collaborate effectively (and that's the real difficult part), but in terms of growing as a developer I think you've got a lot of viable paths.


Thanks for your advice. I think it is an advantage to learn about different ways to look at a problem. Thankfully there are a lot of ways to look at problems when working in the software business.


Here's what I tend to do: https://medium.com/chrismarshallny/testing-harness-vs-unit-4...

Basically, sometimes, it makes sense to write tests beforehand, but most of the time, I use test harnesses, and "simultaneous test development."

Works for me. YMMV.


Nope. Can't stand TDD.

Write the code, secure it from refactoring stuff ups with your tests.


I don't do TDD on the first version. For me, the first version is a throwaway version. If it turns out to be commercially viable, that's when I start with a brand new codebase incorporating all the lessons from the first version, but this time using TDD. TDD has its place. I just don't think it's cost effective on the first version.


Yes, I am pretty consistent in it. It has the great benefit of leading to highly reliable software even in the face of complex requirements and many requests for changes.

Rewriting the tests completely does not really happen. Sometimes I am not entirely sure of all the things that the production code should do, so I go back and forth between test and executable code. In that case one needs to be very aware of whether a failure is a problem in the executable code or the test code.

It pretty much works all the time. Occasionally there are exceptions. If a thing is only visual, e.g. in a web interface, it may be best to write the production code first, because the browser in my head may not be good enough to write a correct test for it. Also, in the case of code that is more on the scientific/numeric side of things, one may start out with more executable code per test than one usually would. I still write the test first in that case, though.


In my experience TDD is great when you know what you want a particular piece of code to do.

The other place it works well is code written as a pair - with one member writing tests and the other writing the implementation - the challenge is on to pass the buck back to the other pair member - i.e. find the obvious test that will cause the code to fail / find a simple implementation that will cause the tests to pass. This is great fun and leads to some high-quality code with lots of fast tests.

The benefit of TDD is that your coverage is pretty high - and you aren't writing anything unnecessary (YAGNI).

I don't think I have ever rewritten tests that I have written (TDD or otherwise). They might get refactored.

TDD doesn't work so well when you only vaguely understand what you are trying to do. This is not a coding / testing problem - get a better vision - prototype something perhaps - i.e. no tests and very deliberately throw it away.


mostly tests while doing implementation, but not 100%.

however, I've started working on a project with others, and am becoming a bit more adamant on "this needs tests". Codebase had none after a year, and the other dev(s) are far more focused on code appearance than functionality. Fluent interfaces, "cool" language features, "conventional commit" message structure, etc are all prized. Sample data and tests? None so far (up until I started last week).

I've had push back on my initial contributions, and I keep saying "fine - I don't care how we actually do the code - change whatever I've done - just make sure the tests still run". All I've had is criticism of the code appearance, because it's not in keeping with the 'same style' as before. But... the 'style' before was not testable, so... there's this infinite loop thing going on.


yeah, that's the problem. If they don't get the value of what a test gives them, and then the focus shifts to aesthetics and conventions, etc...

Personally when I review code, I look for the test, I need to find a way to tell me why that code exists and a proof that it works.


I had this issue with a different client about 6 months ago. I understand there are 'coding styles' that some companies stick with, and I'm not strictly opposed to them. I do bristle when I'm presented with the 'one true way' from devs who spend all their time on one project, or one tech stack, or one company. I jump around a lot, and multiple companies have "the one true way", and they conflict. Processes around commits, flow, commenting, etc - these vary more than some people would care to admit.

In response to that, I've started to care and focus more on tests and sample data to illustrate the core issues, changes and value for an issue. You want to change the code from 4 lines into one 4-line chained fluent interface to match other bits of the code, or to try out your new builder syntax? I really, really have grown not to care too much - as long as I have some tests to demonstrate when something stopped working (or when our understanding of the project changed).


This is a big topic, and you're asking some good questions. Rather than tackle it all, I can recommend expanding your questions with the following.

When someone tells you they don't write tests first, ask them how they refactor. How do they know the changes they made didn't break anything?

You can fool yourself with test-first, but it's quite difficult to do if you're rigorously following the practice. First write a failing test. Next, write only enough production code to fix the failing test. Optionally refactor. Rinse and repeat.

Code created this way can prove that every line of production code resulted from a failing test. Nothing is untested, by definition. The code may be incomplete due to cases not considered, but everything present has been tested. Note that it's possible to break this guarantee by writing production code unnecessary to get the test to pass.
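
A toy illustration of that discipline (entirely made up): each test below would be written while it still fails, and only the clause needed to make it pass gets added to the production function.

    # test_leap_year.py -- red/green in miniature (hypothetical example)
    import unittest

    def is_leap_year(year):
        # Each clause here was (in this sketch) forced into existence
        # by one of the failing tests below.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    class LeapYearTest(unittest.TestCase):
        def test_divisible_by_four_is_leap(self):
            self.assertTrue(is_leap_year(2024))

        def test_century_is_not_leap(self):
            self.assertFalse(is_leap_year(1900))

        def test_every_400_years_is_leap(self):
            self.assertTrue(is_leap_year(2000))

    if __name__ == "__main__":
        unittest.main()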


I have had one excellent experience with TDD. I re-wrote the stubbing library Sinon in Lua, and as I wrote a feature I wrote the test first, then made the test pass. Since I wanted it to match Sinon as much as possible, the requirements were exact, meaning the tests I wrote never had to be refactored. The whole thing was really smooth and worked really well.

The issue I find is that generally we aren't writing code we know the exact requirements for, so doing TDD means that not only are you refactoring your code as you understand the problem better, but you're also refactoring your tests, which increases the workload.

Maybe that's a sign that we need to spend a lot more time designing before implementing, but I've never worked anywhere that happens enough to use TDD as nicely as my experience with my Sinon clone.


My understanding is that tests are mostly supposed to increase iteration speed - the opposite of what most comments here suggest.

> The change curve says that as the project runs, it becomes exponentially more expensive to make changes.

> The fundamental assumption underlying XP is that it is possible to flatten the change curve enough to make evolutionary design work.

> At the core are the practices of Testing, and Continuous Integration. Without the safety provided by testing the rest of XP would be impossible. Continuous Integration is necessary to keep the team in sync, so that you can make a change and not be worried about integrating it with other people.

> Refactoring also has a big effect

- Martin Fowler

https://www.martinfowler.com/articles/designDead.html


Lots of people are writing tests as an end goal rather than a means to the “malleability” end goal. They end up writing tests that satisfy code coverage metrics, but are too closely tied to implementation.


I realized recently that only 10% or so of the code I tend to write truly benefits from thorough testing. The rest can be handled more broadly through integration testing which is less about the code specifically and more about expected end use cases, like user workflows. I find those tests very useful. I only write those after the flow is established and more or less finalized.

I used to write a lot of tests and discovered over summer that it costs too much in terms of time spent writing, changing, and debugging tests for what you tend to get out of it.

I do think writing a lot of tests for a legacy or relatively old system is a great way to uncover existing bugs and document expected behaviours. With that done, refactoring or rebuilding is possible and you gain a great understanding of the software.


I have never been able to "grok" the idea of writing tests before writing the implementation. My brain just doesn't work that way. It's like a speed bump that makes me lose the idea or inspiration if I try to think of it in terms of "tests" first.

However, when I need to overhaul something that already exists, e.g. the core of a game engine, I've gotten into the habit of writing tests for current behavior, so that when I rip it out its replacement works the same way, or at least retains the same interface, so I don't have to replace the whole pyramid on top before I can compile again. :)
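
Roughly the shape of those "pin the current behaviour" tests, with a made-up stand-in for the engine code; the expected values are simply whatever the existing implementation produces today:

    # test_damage_characterisation.py -- pinning current behaviour before a rewrite
    import unittest

    def legacy_damage(base, armour):
        # Made-up stand-in for the old engine code about to be replaced.
        return max(0, base - armour // 2)

    class DamageCharacterisationTest(unittest.TestCase):
        def test_matches_what_the_current_engine_does(self):
            # Expected values recorded from running the existing code,
            # whether or not they are what the code "should" do.
            self.assertEqual(legacy_damage(10, 4), 8)
            self.assertEqual(legacy_damage(3, 10), 0)

    if __name__ == "__main__":
        unittest.main()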

This has also helped me realize the value of tests, but later on in the development cycle, not as the base before actually writing anything.


I've never had success doing this in languages like Java or Python, but where it's been very very helpful is in hardware description languages. Since you are often implementing a piece of hardware with known input and output specs, writing tests first can work and show you how much of you design meets the spec as you go forward.

Plus writing HDL without tests is basically guaranteed to create something nonfunctional.

I hate unit testing in, for example, Java though; individual functions are typically very basic and don't do much. A service? Integration tests? Sign me up. But unit testing a ten-line function that reads a bytestream into an object and sets some fields to 100% coverage is boring, and fairly difficult to mock.


I usually practice Test Along The Way rather than Test Driven Development. A lot of problems aren't well understood in the beginning, in which case it doesn't make sense to me to write tests in a way that will end up forcing a stupid design upon my code.


No, my clients wouldn't work with me then...

On a serious note, I find it hard to write tests at the beginning for code when I'm not sure what it's going to do or how. What do I mean by that? Well, as you have all probably experienced, requirements change during development, and sometimes 3rd party/microservice/db constraints don't let us achieve what we want. We have to come up with hacky/silly solutions that require us to rewrite most of the tests we wrote.

A lot of times I don't even know how to code the stuff I'm required to build. How am I supposed to write tests in that kind of situation? I think it would be like building abstractions for problems that I don't know very well yet.


Depends entirely on the code and the context - if it's something like a library, mostly yes. There I already know what I want the library to do, and writing the test first is a great way to get a feel for what the ergonomics are like in actual use. Also a great way to spec out what the library will and will not do. Then I write to make it work, and once the tests are passing I keep refactoring to make it neater and easier to understand, and then maybe add benchmarks and move on to optimization.

If it's an application or framework, I usually drive it from the UI, so tests are more an afterthought or a way to check / ensure something.

I find the best balance is to have thick libraries and thin applications, but YMMV.


I write both client-side and server-side code.

In server-side, almost all code is atomic, very functional and is very easy to cover with tests, on different levels. I start with several unit tests before implementation and then add a new test for any bug.

The client, however, is a completely different story. It's a thick game client, and throughout my career I have honestly tried adopting TDD for it - but the domain itself is very unwelcoming to this approach. The tests were very brittle, with a ton of time spent setting them up, and they didn't catch any significant bugs. In the end, I abandoned trying to write tests for it altogether - at least I'll be able to write my own, functional and test-driven game engine, to begin with.


Not always before, but always eventually and usually once I’m done with “exploration”. Testing done right should be a time saver in the long run! I think many people are turned off by testing and especially unit testing because it ends up being difficult and maintenance is more of a pain than it’s worth. There are many good strategies to make it easier that in my experience have yet to be well adopted:

https://m.youtube.com/watch?v=URSWYvyc42M https://www.destroyallsoftware.com/talks/boundaries


Extremely rarely. Probably the thing I do the most is write the implementation as I think it should function, then write the test to meet the result I want, and correct from there. Sometimes the test requires a correction and sometimes the implementation does.


I am quite curious to know what percentage of working devs ever write tests at all. Because I don't think it's the majority, based on my own professional anecdata.

But I've not yet been convinced that any of the various polls are very authoritative. So I dunno.


I write tests for each bugfix and feature with rare exception.

It's a great practice to have a regression test suite that you can use to run your code in a simple context. The unit test suite can catch all kinds of low-hanging fruit instead of waiting until you deploy the code to your target device or production service (or even the release cycle for those).


No, in fact, we don’t use coded tests at all, deliverables are tested by the product owner(s). On the code level, we hardly see bugs, not even when refactoring. I often wonder if it is luck or expertise, and whether we would benefit from writing tests.


Maybe you hardly see bugs because you don't test for them.


I used to wonder the same thing, until I inherited a project with tests.

The tests never caught a bug for the first 3 years. Got in the way a lot though.

Then they finally caught 1.

Not worth it.


I don't see unit tests "catch bugs" often, either, in the sense that the CI build fails due to defective code pushed up.

And even with TDD, I don't often find myself breaking a lot of things that were already working, though it does happen. In those infrequent cases, it's extremely valuable to know I broke stuff that was working. I.e., it's pretty sad to ship changes that break other behaviors - things you hadn't the faintest clue you were impacting.

What I do see gobs of, when doing TDD, is the tests preventing crap code from getting integrated in the first place, i.e. when I or others first write the code (or change the code of others). From the testing perspective, that's the real thing they do--gate the defects from ever leaving your desktop, and in a far faster manner than most other routes.

Unless, of course, one is a perfect coder.

In any case, TDD has more important benefits that I've also gotten. Easily worth it for me.


If you tested, you wouldn’t have to wonder.


That’s Schrödinger's unit test, I guess.


I found TDD impractical. I can recall one situation when I used TDD: I was writing a printf-like formatter in C++ and prepared a lot of test cases in advance. That approach worked well at an early stage of development; however, further development revealed quirks I hadn't predicted. As a result, the number and complexity of the tests increased.

My typical practice is to work out an API, write early sketches of the implementation and test only simple cases. Then I can inspect two things: how the API works in real code and what else should be tested. In other words, tests help to establish an API, then to stabilize the implementation.


No, although sometimes I do wish I were working on problems that are so simple to test.


I've always found it easier to write the skeleton of a module first (or an interface, depending on the language), and then write the tests to cover the main functionality, then the tests for the edge cases, and then finish the implementation.

I usually finish by checking the test coverage and trying to make it reach 100% branch coverage if I have the time. The coverage part is important because it usually makes me realize things could be made simpler, with fewer if/else cases.

I could never get used to writing only the tests first simply because all the compilation errors get in the way (because the module doesn't exist yet).
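
A rough sketch of that order, with a made-up slugify function: the skeleton exists first so nothing fails to compile, the tests pin down the main behaviour and the edge cases, and the body gets filled in last (step 1's stub is shown as a comment so this snippet runs green):

    # step 1 was just the skeleton:  def slugify(title): raise NotImplementedError
    # step 4 replaces it with the real body once the tests below exist.
    def slugify(title: str) -> str:
        return "-".join(title.lower().split())

    # step 2: tests for the main functionality
    def test_lowercases_and_joins_with_dashes():
        assert slugify("Hello World") == "hello-world"

    # step 3: tests for the edge cases
    def test_strips_surrounding_whitespace():
        assert slugify("  Hello  ") == "hello"

    def test_empty_string_stays_empty():
        assert slugify("") == ""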


Acceptance tests, yes. Nothing puts my mind at ease like knowing that when I feel I am done coding, I can follow a list of actions on a spreadsheet and call it done when every row is green.

Integration tests, sometimes. Depending on the complexity of the system, I might skip this part. If it is a collaborative work then integration tests are (in my view) mandatory for ensuring that everyone’s code plays nice with everyone else’s.

Unit tests, almost never. Unless it’s something absolutely production critical, pull-your-hair-out-at-5-on-a-Friday kind of feature, it’s usually not worth the extra time putting unit tests together.


Definitely, whenever I have a clearly defined design to implement. If I don't, I generally try to design the API/interface first, then write at least the basic tests before implementing.

That being said, I never write all the possible tests before starting with the implementation. They're called unit tests for a reason -- I generally write at least a few tests for a particular unit (say, a function or method) and then write the implementation before starting work on another. And I often go back and add extra tests for an already implemented unit to cover some edge cases and error conditions.


I mean, I would "like" to; however, working in a startup there are some limitations. Our primary focus is building software that will improve metrics for our business, and often that software will only last a few months or so. As a result, it's just too costly for us to write tests for code that will most likely only last a couple of months. Again, we would like to, and there are certainly some (small) parts that have lasted more than a few months, but those don't have tests either; in that case, we "just never got around to it".


I often list out and sometimes actually write many of the tests that’ll need to pass before I write much code. Definitely not real TDD, but just listing out the cases helps me stay focused, keep from missing things, pick back up after a distraction, etc. Before doing this I usually have gotten to the point where I've got classes and database migrations at least scaffolded out, but mostly empty.

I find literal TDD distracting and unhelpful, but having a list of things I need to handle that doubles as tests I can't forget to write is a really nice balance.
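
One way to make that list concrete (a sketch, not real TDD): the cases go straight into the test file as skipped stubs, so the list doubles as the tests I can't forget to write. The names below are invented for illustration.

    import pytest

    # the checklist, written up front; each stub gets filled in as I go
    @pytest.mark.skip(reason="TODO")
    def test_rejects_expired_coupons():
        ...

    @pytest.mark.skip(reason="TODO")
    def test_applies_discount_to_subtotal_only():
        ...

    @pytest.mark.skip(reason="TODO")
    def test_handles_currency_rounding():
        ...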


Never did TDD as prescribed. But I try to write one test together with the implementation, to have basic coverage to cover the most silly things, and to ensure testability. This way it is easy to add more tests later, if and when I find that they are needed.

Also I try to test at subsystem / API boundaries whenever possible. Small units like a function rarely get their own tests; they are covered implicitly by being used. This avoids tests of arbitrary internals (that should be free to change) becoming a maintenance burden. External APIs should be stable.
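
A tiny illustration with invented names: only the public entry point gets a test, and the internal helper is covered by being used through it.

    # pricing.py (sketch)
    def _apply_tax(amount, rate):                  # internal detail, free to change
        return round(amount * (1 + rate), 2)

    def invoice_total(line_items, tax_rate=0.2):   # the stable API boundary
        return _apply_tax(sum(line_items), tax_rate)

    # test_pricing.py -- tests only the boundary; _apply_tax is covered implicitly
    def test_invoice_total_includes_tax():
        assert invoice_total([10.0, 5.0], tax_rate=0.2) == 18.0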


If the problem is complex, or I don't well understand the requirements, I write tests first.

For complex tasks, breaking down the problem into a single requirement per test helps me understand it better, and ensure I don't introduce regressions while refactoring or adding new requirements.

However, a lot of modern code is hooking up certain libraries or doing common tasks that don't get a lot of value from unit tests (mapping big models to other models, defining REST endpoints, etc.), so I don't generally write unit tests for those (but I do write integration tests).


No. Every time I try, I quickly fall out of the habit. These days I do write the high-level code first and then implement after establishing the API/what I need (I end up rewriting that a few times), which seems ideal for TDD, but it just doesn't seem to work out that way.

What I do do is add a unit test for (almost) every formal bug I come across (to prove the bug, and fix it) so that bug never happens again. Over the years that seems to have given the best results for backwards compatibility, stability, etc.
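
For what it's worth, those bug tests usually end up shaped like this (the issue number and checkout_total are invented for illustration):

    # regression test for issue #1234: totals blew up when the cart was empty
    def checkout_total(prices):
        return sum(prices)          # the fixed implementation

    def test_issue_1234_empty_cart_totals_to_zero():
        # proved the bug before the fix; must never regress
        assert checkout_total([]) == 0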


Almost never when writing new code. It feels like editing a manuscript before it even exists. If I "write" the code in pseudocode in my head or on paper first, I might write some tests first.


Tests can be used to clarify your intentions and what you are attempting to build. They work best when you are actually starting from a clean slate.

Just write enough code to make the test pass, no more no less. Refactor and repeat.

In recent years I've never written any new piece of code without a test first, and I could not be any happier. Besides the confidence a test gives you, it's really a great way to pin down your thoughts and write what makes them materialize.
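
A compressed example of one such cycle, with a made-up word_count function (the comments mark the red/green/refactor steps):

    # red: write the failing test first
    def test_counts_words_separated_by_spaces():
        assert word_count("to be or not") == 4

    # green: just enough code to pass, no more
    def word_count(text):
        return len(text.split())

    # refactor: nothing to clean up yet; add the next test and repeat
    def test_empty_string_has_zero_words():
        assert word_count("") == 0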


Yeah... I have read that and told that to others many times. But the thing is some of my best work (rare, though) has not been to spec. To any spec. More like, when you are doodling on a paper. "This might be a lake with a duck. Nope scratch that, it's actually a dragon and those are its scales. Yep."


Yes, I understand what you mean. When I face uncertainty, I explore with not so much emphasis on testing. But once I am done with that exploratory phase, I funnel the learning into the TDD process to solidify my implementation and guide my design.

BTW, I rarely work from a well-defined spec these days.


I almost never write tests beforehand, unless I'm working on a really complex subject, where I need to write all the tests first to grasp the problem entirely. Otherwise, I write tests before submitting the pull request.

I almost never write tests for personal projects, but 100% of the time when working in a team. IMO, tests are not here to prevent bugs, but they are part of the developer documentation: a coworker must be able to make any change they want to my code without asking me anything, and tests are the biggest part of that.


For new development, no. Over the last 10 years I've realised that stuff changes fast. When it is solid, and the feature is clear and will remain the same for at least a few months, then yes, I do it :)


I heard that Martin Fowler once said, only test what subjectively "can fail". If you make tests for more than that, you are by your own definition wasting your own time ...


Like all things in life "it depends".

I rarely practiced TDD until I started working on a piece of software that could take anywhere from under a minute to an hour to finish running. This means I must be able to isolate the specific portion of code effectively to save time and focus on the problem at hand. The APIs for this model are well written so I can recreate a bug or test out new code effectively by stitching some APIs in a unit test. It's incredibly helpful in that sense.


No.

I've had Sr. Soft. Engs. ask me why I thought we needed unit tests at all. I've had managers not know what they were. I've worked on projects where I was the only developer who wasn't afraid of the technology, but management couldn't give proper requirements. I've also worked in code bases where no testing framework (of any kind) existed.

I don't mean to sound combative. Looking back those places would benefit immensely from structured testing. Life got in the way though.


I usually write an implementation first, but when testing it I try to re-evaluate the requirements - I do not look at the implementation at all. Tests should handle all edge cases.

And I do modify tests, because sometimes assumptions are wrong (or domain experts change their mind or get something wrong).

Sadly, functional requirements are very soft in my industry (once I was asked to do a perfect fuzzy match...).

Most of the time I am required to modify untested legacy code, and that starts with a test (if it is even possible).


When I build a new web app, I start with a REST API. The input and output are known, so bugs are less likely. As I roll out new API methods, I add them to a Postman collection, along with a couple of tests. Then with each deploy, I point Postman at that environment and hit run.

When building the client application, I typically just test manually as I go along. Bugs happen, but because the foundation code is all REST API, the bugs are usually easily fixed.
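
The same Postman-style smoke checks can also be scripted; a minimal sketch using Python's requests library, assuming hypothetical /health and /users endpoints and a BASE_URL set per environment:

    import os
    import requests

    # point this at whichever environment was just deployed
    BASE_URL = os.environ.get("BASE_URL", "http://localhost:8000")

    def test_health_endpoint_is_up():
        resp = requests.get(f"{BASE_URL}/health", timeout=5)
        assert resp.status_code == 200

    def test_users_endpoint_returns_json_list():
        resp = requests.get(f"{BASE_URL}/users", timeout=5)
        assert resp.status_code == 200
        assert isinstance(resp.json(), list)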


Not at the start of a project. Yes for refactoring.

I used to write tests for all pure functions because they were the easiest tests to set up. They are also the easiest to debug, so the tests didn't really help much unless you're checking type signatures.

I think that implementation tests are important, but I found that I suck at figuring out how to set up a test before the actual implementation. So I do them after the fact, and judiciously.


Whenever I can and it makes sense, yes. When I write a test first the resulting code is always better.

Sometimes the change is so trivial or type safe that it's not worth it, so I don't.

Sometimes I don't understand the problem well enough, so I learn more about it by doing some exploratory coding and prototyping. I usually come back and write a test after the fact.

Sometimes the project is on fire and I'm just throwing mud at the wall to see what sticks.


No. I have always felt that TDD gives a false feeling of safety and satisfaction, and that it is mostly a waste of time that could be better spent optimizing and refactoring.

Testing simple code is simple and therefore pretty much useless. Testing complicated code is complicated and therefore more likely to fail by making too few or too many assumptions in the test, or by completely screwing up the test code itself.


A couple of takeaways from your post:

- knowing your code does what you expect is a _false_ sense of safety

- if something is simple, it is useless

- if something is complicated, it is useless


That's the problem. You assume that because you have a couple of passing tests, you know your code. Tests are like any other code: they incur bugs at the same rate as other code, and as such can very much give a false sense of safety.


Most of the time. It helps me design the code that I'm about to write. Sometimes I go further and even write the docs ahead of the tests.


Manual tests, yes. More along the lines of "the user should be able to do this".

The developer does a round of these tests as best they can. Then they toss it to QA who tries to break it, but must also do the same tests.

It prevents a lot of bad design bugs, but adds almost no overhead.

Automated testing should be applied only where this manual testing becomes tedious or where we often make mistakes in testing.


Only for functions that have a clear input and output. You can nicely think of all the edge cases, and then get going with the actual code.

But most of the time, fixing some bug or implementing a feature is more of experimenting and prototyping at first. Writing tests for every futile attempt would be a waste of time.

At best we design some small architecture first with interfaces, and then create the tests off that.


I use tests when I need to debug certain sections of my code. It's faster in the overall development process.

Also most of my tests revolve around business logic, where I need to test multiple versions of data.

The best advice I could give would be test cases around errors.

That's usually where most bugs are found, when something doesn't return what you expect.


If I'm changing software that's in production: yes. If I'm just messing around with a prototype I'm the only user of and don't yet know if it's going to be useful or go into production: no. I do like to write tests that actually write tests though; see such patterns in django-dbdiff, django-responsediff, or cli2 (autotest function).



Personally, nope.

I could write a test plan first, but I haven't always fully designed the interfaces until I've ploughed into the code and figured out what needs to be passed where, so there would be a lot of repeat effort in fixing up the tests afterwards.

Effectively, in writing tests first you make assumptions about the code. These don't always turn out to be true.


Normally no, and I do not do TDD either. If, however, I am writing some complex algorithm where I know that I'll make a few bugs here and there, I will write tests.

Being too proper and doing everything by the book does not always translate to better code or good ROI.

Also, being an older fart who has been programming for so many years, I am usually pretty good at not making too many bugs anyway.


I follow the Functional Core, Imperative Shell pattern.

I do TDD on Core, especially on mission critical code.

The Shell, however, has almost zero automated tests.
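
A toy sketch of that split, with invented names: the Core is pure and gets the TDD treatment, while the Shell just wires it to the outside world and stays (mostly) untested.

    # core.py -- pure functions, no I/O; this is where the tests live
    def overdue(invoices, today):
        return [i for i in invoices if i["due"] < today]

    def test_overdue_picks_only_past_due_invoices():
        invoices = [{"id": 1, "due": "2023-01-01"}, {"id": 2, "due": "2023-12-31"}]
        assert overdue(invoices, today="2023-06-01") == [{"id": 1, "due": "2023-01-01"}]

    # shell.py -- imperative glue: reads the DB, calls the core, sends email.
    # db and mailer are hypothetical collaborators; almost no automated tests here.
    def send_overdue_reminders(db, mailer, today):
        for invoice in overdue(db.fetch_invoices(), today):
            mailer.send(invoice["id"])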


Thanks for that term, it sounds related to the style I prefer. Do you have any recommended references?


Try to watch Gary Bernhardt's "Boundaries" screencast (2012), if you haven't yet.

There are also some collections of links on GitHub, such as https://gist.github.com/kbilsted/abdc017858cad68c3e7926b0364....

The pattern articulated by Gary also best resonated with my own thoughts and experience in building software.


No. It doesn't agree with exploration.

I once had a job that didn't require any. I was given a specification and my job was to write some DSL code from it. This would have been an excellent setup to practice TDD! Unfortunately, I wrote a script that basically translates the specification from Word documents into DSL snippets, and quit soon after.


I fall under the category of: if I'm getting paid to do it, then it's up to the client, and in that case most want things done fast and cheap, so testing isn't going to get done. When I'm doing open source stuff, that's for me and others to benefit and learn from, so I test EVERYTHING I can, so I learn.


As an independent consultant, I do what the customer is doing. If they like TDD, then that's what I do. If they dislike it, then I do what they do. Most of my customers don't write tests first. I have one open source project where I always write tests first to make sure I don't get out of practice.


No, because I'm always under pressure to show something on a screen as early as possible, but I really wish I did.


Most of the time: no. But for things where certainty is a requirement, I'll do it without a second thought.


I write tests NOT for correctness, but in order to refine the API for public methods/functions. If I'm going to have some fairly complicated behavior, I want to write an interface that is easy to use first, because otherwise I end up writing an interface that's easy to implement.
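
Concretely, that means drafting the call site I wish I had before anything exists; a small invented example (the implementation is filled in here only so the sketch runs):

    # write the call I want to exist, before the implementation does
    def test_the_interface_i_actually_want_to_call():
        report = Report.from_rows([("ok", 2), ("fail", 1)])
        assert report.summary() == "ok: 2, fail: 1"

    # then implement toward that shape
    class Report:
        def __init__(self, rows):
            self._rows = rows

        @classmethod
        def from_rows(cls, rows):
            return cls(rows)

        def summary(self):
            return ", ".join(f"{name}: {count}" for name, count in self._rows)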


No. I suppose only really smart thinkers and high-level architects do that. In a book or a blog post, not in a real thing. It's similar to Aikido or a kata-based martial dance where the opponents are either imaginary or, if they exist, aren't allowed to hit back/must follow a script.


No, because even if I write the test before the code, the real test will usually be written after the real code. That is to say, I write a test, then I write code, then not much later I change the code to different code, and then I have to change the test to a different test anyways.


Not often. But when you're implementing something that needs a lot of iteration, a test case that you never commit can be a good alternative to copy-pasting statements into REPL. Then as it comes together you can just clean up that garbage test and turn it into something worth committing.


When I contemplate how I'm going to implement it and it starts to look overwhelming and tedious then I go and write some tests.

Other situations come just from experience: you know that some part will have a lot of special cases, so you write a test as soon as you think of the next special case.


I attempt to, but it often doesn't work out that way. I'd say maybe 40% of my tests are written before the implementation. For 75% of the rest, I start by writing tests, then forget about them, complete the implementation, and write the tests for it afterwards.


Others have said similar, but for better or worse the three cases where I usually write tests are

    - when it’s really easy
    - when it’s really important
    - before a refactor
The last one is arguably the most important and has saved me a lot of headaches over the years.


Writing a simple test suite as a sanity check during development can be helpful, but TDD is idiotic. If you don’t believe me, read this: https://news.ycombinator.com/item?id=3033446.


I’ve actually found it helpful to start by writing tests, and then in the actual code, documentation (just about what the code does or should do), especially when I’m not entirely sure how to solve a given problem.

But for code that’s trivial to implement, it seems unnecessary...


Kind of: I write my code, then test it via unit tests. Then I focus on achieving 100% code coverage by tuning the code and beating the devil out of it.

I have found that beating code is a great way to preserve my sleep and save the next person a headache.


One thing I have got into the habit of is live updates: saving a file triggers a rebuild, which then gets reflected by live.js in the browser. Automating as much as I can during testing to reduce manual actions saves me a bunch of time.
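
The watch-and-rebuild part can be as crude as a polling loop; a throwaway sketch (a real file watcher is nicer, and the "make build" command and src directory are just placeholders):

    import subprocess, time
    from pathlib import Path

    def newest_mtime(root="src"):
        # most recent modification time of anything under root
        return max((p.stat().st_mtime for p in Path(root).rglob("*")), default=0.0)

    last = 0.0
    while True:
        current = newest_mtime()
        if current > last:                      # something changed since the last run
            last = current
            subprocess.run(["make", "build"])   # placeholder build step
        time.sleep(1)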


I definitely think about the tests. I will sometimes write out a boundary value analysis in my notebook. But very rarely do I write tests before I write code.

I just realized that a "Never Have I Ever" version for programmers would be quite interesting.


I do sometimes. I do it when I really know what the outcome should be, and I know that this will speed up my implementation. But when I'm just throwing ideas around, the tests are never written before the implementation.


I prefer BDD with parameterized test templates over TDD.

I give priority to end to end working of the software stack.

I always make sure that the test suite can be executed in parallel threads.

I make sure that tests are written before I merge my code in the master branch.
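
In pytest terms, the parameterized-template part looks roughly like this (shipping_cost and the scenario data are invented for illustration):

    import pytest

    def shipping_cost(weight_kg):           # invented system under test
        return 5.0 if weight_kg <= 1 else 5.0 + 2.0 * (weight_kg - 1)

    # one behaviour, many scenarios: given a weight, then a cost
    @pytest.mark.parametrize("weight_kg, expected", [
        (0.5, 5.0),
        (1.0, 5.0),
        (3.0, 9.0),
    ])
    def test_shipping_cost_scenarios(weight_kg, expected):
        assert shipping_cost(weight_kg) == expected

Running such a suite in parallel is then typically just a matter of something like pytest-xdist's -n flag, assuming pytest is the runner.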


Kind of. For a lot of my work I first elicit the bad behavior, then fix the code to no longer be broken, then write a test that automates what I did manually initially. The manual stage is the first step in writing the test.


Yes.

I like BDD, it helps me focus on the goal.

I feel that it lets me find the right approach faster.

Also, helps avoid distractions and optimizing things too early.

Related: „Write tests. Not too many. Mostly integration.”, https://kentcdodds.com/blog/write-tests/



I write my manual acceptance tests before I write a big fix. It forces me to straighten out in my head what the intended behaviour is and how to prove it. IMHO, it has saved me a lot of time and many confused starts.


I've written integration test rigs prior to development, not unit tests.

