I do like Go tests. Would love the ability to find untested parts of the code base to increase coverage too; turns out the standard toolchain can already measure this.
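The built-in cover tool writes a profile during go test and can render the source with covered and uncovered lines highlighted, which is exactly the "untested parts" view:

    go test -coverprofile=coverage.out ./...
    go tool cover -html=coverage.out

go tool cover -func=coverage.out prints per-function percentages instead, if you just want numbers.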
I have yet to write integration tests that require a live database and other dependencies, though. That will be fun...
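One pattern that keeps such tests manageable, sketched here assuming a Postgres database and a hypothetical TEST_DATABASE_URL environment variable, is to put them behind a build tag and skip when the dependency is absent:

    //go:build integration

    package store_test

    import (
        "database/sql"
        "os"
        "testing"

        _ "github.com/lib/pq" // Postgres driver; an assumption, swap in any driver
    )

    func TestUserStoreRoundTrip(t *testing.T) {
        dsn := os.Getenv("TEST_DATABASE_URL") // hypothetical variable name
        if dsn == "" {
            t.Skip("TEST_DATABASE_URL not set; skipping integration test")
        }
        db, err := sql.Open("postgres", dsn)
        if err != nil {
            t.Fatal(err)
        }
        defer db.Close()
        if err := db.Ping(); err != nil {
            t.Fatalf("cannot reach database: %v", err)
        }
        // ... exercise code against the real database here ...
    }

Run it with go test -tags=integration so the ordinary go test run stays fast and dependency-free.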
Anyway, my main gripe with tests and TDD is that even when the code base is 95% finished (I'm not talking about libraries here), even small code changes can cause a cascade of changes that need to be addressed, and tests can easily multiply the workload by a factor of 10. And I am not talking about big changes: it might be the simple addition of a new struct field that suddenly breaks half of your test suite. Hence tests should, in my experience, be written as the absolute last step before going live. Otherwise they can impose massive costs in time, and potentially money (if we're talking about an actual business and not a one-man-show type of project).
You must be using a dynamic language with a heavy framework? Rails maybe?
I code mainly in Go (also Java and Rust) and have never experienced what you describe: simply adding a field to a struct changes nothing if the field isn't used anywhere, and any use of it is checked by the compiler.
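A toy illustration (hypothetical names) of why: as long as tests construct structs with keyed literals, a new field just takes its zero value and nothing else moves:

    package billing

    import "testing"

    type Invoice struct {
        ID    string
        Total int
        // Adding a field later, e.g. Currency string, is invisible to
        // the test below: keyed literals leave it at its zero value.
    }

    func TestTotal(t *testing.T) {
        inv := Invoice{ID: "a1", Total: 100}
        if inv.Total != 100 {
            t.Fatalf("Total = %d, want 100", inv.Total)
        }
    }

The one exception is positional literals like Invoice{"a1", 100}, which do stop compiling when a field is added; that is one reason keyed literals are the idiomatic choice in tests.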
However, I did work alongside a Rails team that had major gripes with this. They called it brittle tests: whenever they made a simple change (like adding a field), half of their tests would fail. This really lowered the devs' confidence in their codebase and slowed changes to a crawl.
Small code changes breaking a wide swath of tests is indicative of testing the wrong thing. If tests start breaking on changes when you are 95% complete, it is almost certain that they were testing the wrong thing.
I don’t have particular experience here with golang, but in other languages the biggest reason for this is mocking and stubbing everything.
Junior or intermediate developers start writing code. Most functions manipulate internal state instead of acting cleanly on inputs and outputs. Writing tests for this style of code is hard, so developers reach for the nearest mocking library. Now, instead of checking that given inputs produce given outputs, the tests effectively verify only that functions are implemented the way they are currently implemented.
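A sketch of the contrast in Go (all names hypothetical; the tests would sit in a _test.go file):

    package user

    import (
        "strings"
        "testing"
    )

    // Hypothetical code under test.
    type Store interface{ Save(name string) error }

    func Normalize(name string) string { return strings.ToLower(strings.TrimSpace(name)) }

    func Register(s Store, name string) error { return s.Save(Normalize(name)) }

    // Anti-pattern: the mock only re-asserts how Register happens to be wired.
    type mockStore struct{ calls []string }

    func (m *mockStore) Save(name string) error {
        m.calls = append(m.calls, name)
        return nil
    }

    func TestRegisterCallsSaveOnce(t *testing.T) {
        m := &mockStore{}
        _ = Register(m, "  Ann ")
        if len(m.calls) != 1 { // pins the current call pattern, not the behavior
            t.Fatal("want exactly one Save call")
        }
    }

    // Better: assert on inputs and outputs of the actual logic.
    func TestNormalize(t *testing.T) {
        if got := Normalize("  Ann "); got != "ann" {
            t.Fatalf("Normalize() = %q, want %q", got, "ann")
        }
    }

Restructure Register without changing what callers observe and the first test breaks anyway; the second keeps passing across any refactor that preserves the contract.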
Tests in this style literally have negative value (NB: not all mocking is bad, but these tests are). Delete them when you find them.
Testing should help you accomplish two things: find bugs and allow confident refactoring. These do neither.
They don’t help you find bugs because they don’t look for bugs; they look for “the code is currently implemented in a certain way”. That of course means that if you implement the same logic a different way, they will fail.
Negative value. Delete them, and whenever possible rewrite modules that are designed such that they need to be “tested” in this manner.
> Testing should help you accomplish two things: find bugs and allow confident refactoring. These do neither.
Technically, the primary goal of testing is to document for other developers (and future you) how something is intended to be used. The documentation being self-verifying most definitely helps with refactoring and may surface bugs, but those benefits are largely incidental.
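Go bakes this documentation role right into the toolchain with testable examples: go test compiles and runs every ExampleXxx function and checks its stdout against the // Output comment, so the documented usage can't silently rot. A minimal sketch with a hypothetical Normalize helper (example functions live in a _test.go file):

    package user

    import (
        "fmt"
        "strings"
    )

    func Normalize(name string) string { return strings.ToLower(strings.TrimSpace(name)) }

    // ExampleNormalize shows up as documentation in godoc and is
    // executed by `go test`, which verifies the printed output below.
    func ExampleNormalize() {
        fmt.Println(Normalize("  Ann "))
        // Output: ann
    }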