I can see how that would be useful, but I also think it's a matter of priorities.
I'm basically saying I rarely have bugs in my tests because I verify them first. In fact, I can't think of a single bug in my tests over the last 4 years (or even 10), but I can think of dozens of bugs in my code.
For example, here are some pretty exhaustive tests I've written for shell, which have exposed dozens of bugs in bash and other shells (and in my own shell, Oil):
https://www.oilshell.org/release/0.7.pre11/test/spec.wwz/osh...
I would rather spend time using the tests to improve the code than improving the tests themselves. But I don't doubt that technique could be useful for some projects (likely very old and mature ones).
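To make "verify them first" concrete, here's a minimal sketch in Python (hypothetical, not the actual spec-test framework linked above; run_echo and test_echo_newline are names I made up): before trusting a green test, run it against something known to be broken and confirm it actually goes red.

```python
import subprocess

def run_echo(shell, args):
    """Run `echo` with the given args under the given shell binary."""
    cmd = [shell, "-c", "echo " + " ".join(args)]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

def test_echo_newline(shell="bash"):
    # Behavior under test: echo prints its args plus exactly one newline.
    out = run_echo(shell, ["hi"])
    assert out == "hi\n", f"expected 'hi\\n', got {out!r}"

if __name__ == "__main__":
    # First, the test should pass against a correct shell.
    test_echo_newline("bash")
    # Then verify the test itself: point it at a "shell" that produces
    # no output and make sure the assertion fires. If the test still
    # passes here, the test is the thing that's broken.
    try:
        test_echo_newline("/bin/false")
    except (AssertionError, OSError):
        print("good: the test catches a broken shell")
    else:
        raise SystemExit("bad: the test passed against a broken shell")
```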
ITYM "I rarely have bugs in my tests that I'm aware of". The number of tests I've seen that look like they work, have been "verified", and are nonetheless buggy is huge. Usually we only discover they were buggy when someone moves other tests around, or changes functionality in a way that should have broken them but didn't.
Please don't ever assume your tests are beyond reproach just because you verified them. Tests are software too, and are as prone to bugs as anything else.
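A classic shape of such a bug, as a hypothetical Python sketch (find_errors and both tests are invented for illustration, not from the thread): the assertion sits inside a loop that never executes, so the test stays green no matter what the code does.

```python
def find_errors(log_lines):
    """Toy function under test: return the lines containing 'ERROR'."""
    return [line for line in log_lines if "ERROR" in line]

def test_find_errors_buggy():
    # Bug: the fixture accidentally contains no matching lines, so the
    # loop body never runs and no assertion is ever evaluated. This
    # test passes even if find_errors() returns garbage.
    for line in find_errors(["info: ok", "warn: slow"]):
        assert "ERROR" in line

def test_find_errors_fixed():
    # Fix: assert on the whole result, so an empty or wrong list fails.
    result = find_errors(["info: ok", "ERROR: boom"])
    assert result == ["ERROR: boom"]
```

Nothing breaks the buggy version until someone reshuffles the fixtures, which is exactly how these usually surface.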