> It would be an enormous amount of effort to go through every flaky result and reason about whether the problem is with the test or the code under test, especially in larger codebases.

IMO a "flaky test" is one for which the investigation has been performed and the design of the test assessed to be "flaky." i.e. not a test that passes some times and not others, but one whose design cannot reliably expect a given result or one whose implementation can be impacted by factors outside of the control of the test suite. Timeouts are a classic source of flaky tests.

Once a test is determined to be flaky, it should be evicted from the test suite, or at least demoted from the "reliable/frequently run" tier to a lower one that can tolerate some hand-holding.
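
In pytest, for example, that demotion could be a custom marker plus a split between the per-commit job and a slower scheduled job (the "quarantine" name is made up for this sketch):

    # conftest.py
    def pytest_configure(config):
        # Register the (hypothetical) marker so pytest doesn't warn about it.
        config.addinivalue_line(
            "markers",
            "quarantine: flaky by design; excluded from the per-commit run",
        )

    # test_payments.py
    import pytest

    @pytest.mark.quarantine
    def test_payment_gateway_roundtrip():
        ...  # hits an external system, so it can't be trusted on every commit

The every-commit job would then run pytest -m "not quarantine", while a scheduled job runs pytest -m quarantine with retries and someone watching the results.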

I'm not sure it's an exhaustive rule, but any test with a dynamic dependency is one I would likely classify as "flaky", i.e. anything hitting another system. Draw your own boundaries as you see fit around what "dynamic" means (e.g. some would say static files are fine, others not).
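
As a sketch of the kind of test I mean (the endpoint and helper are hypothetical):

    import urllib.request

    def fetch_status(url):
        # Talks to a live system; its availability and latency are outside
        # the test suite's control.
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status

    def test_service_is_up():
        # Any network blip or remote outage fails this test even when the
        # code under test is fine.
        assert fetch_status("https://example.com/health") == 200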

Also, for context, I'm talking about every-commit, unit-level testing.