It's not only philosophical; it comes out in the code too. I've written a number of testing packages over the years, and it's a rare testing platform that can assert that a failure assertion "correctly" fails without major hoop-jumping, usually running that test in an isolated OS process and parsing that process's output.
This isn't a complaint; it's too marginal and weird a test case to complain about, and the separate OS process is always there as a fallback solution.
We once accidentally made a change to a Python project's test suite that caused it to successfully run none of the tests. Then we broke some stuff, but the tests kept "passing".
It's a little difficult to productionize an always_fail test, since you do actually want the test suite to succeed. You could affirmatively test that you have a non-zero number of passing tests, which I think is what we did. If you have an always_fail test, you could check that it's your only failure, but you have to be super careful that your test suite doesn't stop after the first failure.
Maybe you could ignore that test by default, and then write a shell script to run your tests in two stages. First you run only the should-fail test(s) and assert that they fail. Then you can run your actual tests.
> We once accidentally made a change to a python project test suite that caused it to successfully run none of the tests.
That shouldn't be an easy mistake to make.
Your test code should be clearly marked, and ideally slightly separated from the rest of the code. Also, there should be some feedback about the number of tests that ran.
And yeah, I know Python doesn't help you make those things.
Not GP, but when I feel like I'm going crazy I insert an "assert False" test into my test suite. It's a good way to reveal when you're testing a cached version of your code for some reason (for instance, integration tests using Docker Compose that aren't picking up your changes because you forgot to specify --build or your .dockerignore is misconfigured).