
In my regression suites, I make sure to include an "always fail" test, to confirm that the test infrastructure is capable of correctly flagging it.
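
In pytest terms it can be as small as this (a minimal sketch; the "canary" marker is made up so the test can be deselected in normal runs):

    import pytest

    @pytest.mark.canary  # hypothetical marker, registered in your own config
    def test_always_fails():
        # if the harness reports this as passing, something is very wrong
        assert False, "the runner must flag this as a failure"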

This opens up a philosophical can of worms. Does the test pass when it fails? Is it marked green or red?

Not only philosophical, it can come out in the code too. I've written a number of testing packages over the years, and it's a rare testing platform that can assert that a test failure is "correctly" reported as a failure without major hoop jumping, usually having to run that test in an isolated OS process and parse the output of that process.

This isn't a complaint; it's too marginal and weird a test case to complain about, and the separate OS process is always there as a fallback solution.
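
For what it's worth, that fallback usually looks something like this (a rough sketch, assuming pytest and a known-failing test in a hypothetical test_canary.py):

    import subprocess
    import sys

    def test_harness_reports_failures():
        # run the always-failing test in its own process and parse the result
        result = subprocess.run(
            [sys.executable, "-m", "pytest", "test_canary.py"],
            capture_output=True,
            text=True,
        )
        assert result.returncode != 0          # pytest exits non-zero on failure
        assert "1 failed" in result.stdout     # and says so in its summary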


You want both, to test green and red pixels.

So basically you want yellow? That's what you get when you test the red and green subpixels simultaneously.

Could you give a concrete example of what you mean by this?

We once accidentally made a change to a Python project's test suite that caused it to successfully run none of the tests. Then we broke some stuff, but the tests kept "passing".

It's a little difficult to productionize an always_fail test, since you do actually want the test suite to succeed. You could affirmatively test that you have a non-zero number of passing tests, which I think is what we did. If you have an always_fail test, you could check that it's your only failure, but you have to be super careful that your test suite doesn't stop after a failure.
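
One way to do the "non-zero tests" check with pytest, as a rough sketch (it uses the collected count as a proxy for "tests actually ran", and assumes a top-level conftest.py):

    # conftest.py
    def pytest_sessionfinish(session, exitstatus):
        # pytest already exits with status 5 when nothing is collected,
        # but this also protects wrapper scripts that only look for failures
        if session.testscollected == 0:
            session.exitstatus = 1  # force an overall failure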


Maybe you could ignore that test by default, and then write a shell script to run your tests in two stages. First you run only the should-fail test(s) and assert that they fail. Then you can run your actual tests.
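
The same two-stage idea as a small Python driver rather than a shell script (a sketch; it assumes the should-fail tests carry a hypothetical "canary" marker):

    # run_tests.py
    import subprocess
    import sys

    def run(*args):
        return subprocess.run([sys.executable, "-m", "pytest", *args]).returncode

    # Stage 1: only the should-fail tests; they must produce a failing exit code.
    if run("-m", "canary") == 0:
        sys.exit("canary tests unexpectedly passed: failures are not being detected")

    # Stage 2: everything else, propagating the result.
    sys.exit(run("-m", "not canary"))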

Sounds like the old George Carlin one-liner. Or maybe it's a two-liner:

The following statement is true.

The preceding statement is false.


Even older than George Carlin. The liar paradox is documented from at least 400 BC.

https://en.m.wikipedia.org/wiki/Liar_paradox


I have to imagine it's about as old as propositional logic (so, as old as the hills).

I most closely associate it with Gödel and his work on incompleteness.


> We once accidentally made a change to a Python project's test suite that caused it to successfully run none of the tests.

That shouldn't be an easy mistake to make.

Your test code should be clearly marked, and ideally kept slightly separate from the rest of the code. Also, there should be some feedback about the number of tests that run.

And yeah, I know Python doesn't help you with those things.


Not GP, but when I feel like I'm going crazy I insert an "assert False" test into my test suite. It's a good way to reveal when you're testing a cached version of your code for some reason (for instance, integration tests using Docker Compose that aren't picking up your changes because you forgot to specify --build, or your .dockerignore is misconfigured).

But I delete it when I'm done.

