I'm all for splitting up the definitions of "test-first development" (TFD) and "test-driven development" (TDD), because I think for many forms of software development TFD is an anti-pattern that actively discourages people from writing tests.
Unfortunately this article appears to want to treat TDD as a subset of TFD. I'd like to redefine TDD to mean "you bundle tests with your implementation when you commit to source control" - where it doesn't matter if you wrote the tests before or after the implementation.
I write a lot of my tests after, often using snapshot testing. I get the key benefits of a robust test suite - protection against future changes introducing bugs that my test suite would have caught - but I don't go through the torment of trying to figure out how to test something that I've not yet even fully designed.
TDD has always meant to write your tests first. Red, green, refactor. If you're not doing TFD, you're not doing TDD; you're just developing with tests. TDD is test-driven precisely because all implementation code is written to make some failing test pass. Code not so written is by definition broken and does not make it into production. (If you need to try out an idea without writing tests first, that's called a spike and it should be thrown away after proving what you set out to prove with it.)
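To make the cycle concrete, here's a minimal sketch in OCaml with OUnit2 - `fizzbuzz` is just a stand-in for whatever you're actually building:

```ocaml
(* Red: write this suite first, before fizzbuzz exists, and watch it fail. *)
open OUnit2

(* Green: the simplest implementation that makes the suite pass. *)
let fizzbuzz n =
  match n mod 3, n mod 5 with
  | 0, 0 -> "FizzBuzz"
  | 0, _ -> "Fizz"
  | _, 0 -> "Buzz"
  | _, _ -> string_of_int n

let suite =
  "fizzbuzz" >::: [
    "multiple of 3"  >:: (fun _ -> assert_equal ~printer:Fun.id "Fizz" (fizzbuzz 9));
    "multiple of 5"  >:: (fun _ -> assert_equal ~printer:Fun.id "Buzz" (fizzbuzz 10));
    "multiple of 15" >:: (fun _ -> assert_equal ~printer:Fun.id "FizzBuzz" (fizzbuzz 15));
    "anything else"  >:: (fun _ -> assert_equal ~printer:Fun.id "7" (fizzbuzz 7));
  ]

(* Refactor: reshape the implementation freely while the suite stays green. *)
let () = run_test_tt_main suite
```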
I've found this approach super useful for anything that has a potentially complex implementation with relatively simple output (e.g. compilers, query generators, format converters). You can comprehensively cover your code with simple snapshot tests and not really anything else.
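For illustration, here's roughly what that looks like with OCaml's ppx_expect - `select` is a toy stand-in for a real query generator, and the entire test is "render the output and snapshot it":

```ocaml
(* A toy "query generator" standing in for the real thing - the whole
   suite is just: render the output, print it, snapshot it. *)
let select ~table ~columns =
  Printf.sprintf "SELECT %s FROM %s" (String.concat ", " columns) table

let%expect_test "simple select" =
  print_endline (select ~table:"users" ~columns:[ "id"; "name" ]);
  [%expect {| SELECT id, name FROM users |}]

let%expect_test "star select" =
  print_endline (select ~table:"orders" ~columns:[ "*" ]);
  [%expect {| SELECT * FROM orders |}]
```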
It's critically important to review the snapshots very carefully before committing them for the first time, though - it's far too easy to run the test, look at the output, think "yup, that looks like some sql/assembly/whatever that does the thing I'm trying to do", and carry on, only to realize days/weeks/months later that there are a bunch of bugs you never caught because your supposed "correct" output was never actually correct.
100% agree - the risk of snapshot tests is that you can get lazy, at which point you're losing out on the benefit of using tests to help protect against bugs!
This is exactly why I strongly dislike snapshot tests most of the time in big repositories with lots of collaborators. The snapshot isn’t encoding any logic, it’s just saying “are you sure you wanted to change that?” Whereas a good unit test will tell you exactly what’s broken.
It’s just too easy to update the snapshot, and when you glance at changes to a large snapshot, it’s impossible to tell what the test is actually trying to check.
This reminds me of expect tests in OCaml[0]. You create a test function that prints some state, and the test framework automatically handles diffing and injecting the snapshot back into the test location. It helps keep your code modular, because you need to create some visual representation of it. And it's usually obvious what's wrong from the diff.
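For anyone who hasn't seen them, a minimal sketch (assuming the usual dune setup): leave the [%expect] block empty on the first run, let `dune runtest` show the captured output as a diff, then `dune promote` writes it back into the source file:

```ocaml
(* Assumes (inline_tests) and (preprocess (pps ppx_expect)) in the dune
   stanza. On a mismatch, `dune runtest` shows a diff; `dune promote`
   rewrites the [%expect] block below in place. *)
let%expect_test "state renders the way I expect" =
  List.iter (fun (k, v) -> Printf.printf "%s=%d\n" k v) [ ("a", 1); ("b", 2) ];
  [%expect {|
    a=1
    b=2
  |}]
```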
My main challenge is that I don't know what to call the tests I write, or the process that I use. Calling them "tests" can be confused with manual testing. I usually call them "automated tests" but it's a mouthful and not a term I hear from other people.
You call the tests you write tests. I'm not even sure how this is a question. Manual tests and automated tests are terms used in industry, but mostly to distinguish manual versus automated end-to-end or integration tests. And then the only time you'd use the terms is when you have both and feel a need to distinguish them. Otherwise, just call them tests.
https://tidyfirst.substack.com/p/canon-tdd isn't particularly new; Beck has been consistent about "write tests before the code, one test at a time" for about 25 years now.
Same idea, different spelling: do you really think TDD should get credit for your good results, when you aren't actually shackling yourself to the practices that the thought leaders in that community promote?
I want to credit "writing automated tests" with the good results that I get from that practice. The problem is I need terminology that's widely used by other developers.
I once worked with a guy who took the opposite approach: I had a PR that made some simple changes to existing code to make it possible to test it. He refused to approve my PR, because he considered this cheating.