
How many tests does a PR need? One? Five? When would you not write tests for something (that something not necessarily being a PR, but maybe a unit or feature)?

Tests, like any process, should be serving you and your goals. You shouldn't be serving your processes or testing practices. This sort of un-nuanced thinking isn't indicative of a high-performing startup or CTO, IMHO. Perhaps your policies are not directly indicative of your real thoughts on the matter?




I guess it might sometimes be fine to be a relativist and write off the need for tests as a result of "nuanced thinking," but I think you have to accept that you are running a risk by shipping untested code into your product.

As others have said, line coverage is a misleading metric. Ideally, your tests would fully cover all _program states_, and even 100% line coverage doesn't guarantee full state coverage. If you have untested states, then the following facts are true:

- You don't have a formalized way of modeling or understanding what will happen to your program when it enters an untested state.

- You have no way to detect when a different change causes that state to behave in an undesired way.

So the answer to how many tests a PR needs: as many as needed to reduce your software's risk of failure to a minimal level... And this means failure right now and in the future, because you will likely be stuck with this code for a while. Since it's difficult to know how much a future failure will cost your company, IMO I always try to err on the side of testing as much as possible. Plus, good comprehensive tests have other benefits, such as making other changes / cleanups safer by reducing the risk that they unintentionally affect other code.
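A minimal sketch of the line-coverage-vs-state-coverage gap (function and values are hypothetical): two tests can execute every line while never exercising the combined state.

```python
# Hypothetical pricing function with two independent flags.
def apply_discount(price: int, is_member: bool, has_coupon: bool) -> int:
    if is_member:
        price -= 10
    if has_coupon:
        price -= 20
    return price

# These two tests together execute every line: 100% line coverage.
assert apply_discount(100, True, False) == 90
assert apply_discount(100, False, True) == 80
# But the state (is_member AND has_coupon) was never exercised. If a later
# change caps combined discounts, nothing here detects the regression.
```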


Those facts are untrue. If I am using a sound static type system, I have a formal way of modeling and understanding what will happen to my program, even without tests.

If a function has been statically proven to return an int, I know it will either return an int or not return at all. It can't suddenly return a hashmap at runtime, no matter what untested state it enters.
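An illustrative sketch (names hypothetical): Python's type hints are not a sound type system, so treat this as a stand-in for what a sound checker would enforce at compile time rather than a real demonstration of the guarantee.

```python
def parse_count(raw: str) -> int:
    """Declared to return int; a sound static checker proves it can't
    return anything else (or it fails to compile)."""
    try:
        return int(raw)
    except ValueError:
        # return {"error": raw}  # <- a sound checker rejects this line:
        #                        #    dict is not int, caught before runtime
        raise ValueError(f"not a count: {raw!r}")

assert parse_count("42") == 42
```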


Code without test code doesn't mean untested code. And vice-versa.

Unless you're actually writing complex tools - no, you're probably not getting a "formalized way of modeling" what happens to your program.

If somebody tells me "hey, I have to keep manually testing this and that, I'm losing a lot of time, how about I spend 2 days writing my test thing?" - I'll say Sure!

But if someone tries to convince me in the abstract - I'll be skeptical. Developer busy-work is real.


If you have any concurrency in your system then you aren't going to cover all the states using unit tests. You'll need some sort of formal model for that.
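A sketch of why (counter and thread counts are arbitrary): even a two-thread unsynchronized counter has an astronomical number of interleavings, and a passing unit test only ever witnesses the interleavings that happened to occur on that run.

```python
import threading

counter = 0

def bump(n: int) -> None:
    global counter
    for _ in range(n):
        # Read-modify-write: another thread can interleave between the
        # read and the write, silently losing updates.
        counter += 1

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter may be 200_000 on this run and less on the next; a green test
# proves nothing about the interleavings that weren't taken. Model
# checkers (e.g. TLA+) reason over all interleavings instead.
```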


> How many tests does a PR need?

Enough tests for each of your specs. Adding new functionality to your product? Your tests should cover the cases you put in your specs. Fixing a bug? Your test should trigger it with the old code.

You can have 100% code coverage with unit testing, and it will do jack-shit for your users when they enter a negative number and there was no code meant to handle that case, so it was never tested.
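A tiny sketch of that failure mode (function is hypothetical): coverage measures lines that exist, so validation code that was never written can't show up as uncovered.

```python
def remaining_stock(stock: int, ordered: int) -> int:
    # One line; any single test yields 100% line coverage.
    return stock - ordered

assert remaining_stock(10, 3) == 7  # coverage: 100%
# Yet remaining_stock(10, -3) happily "restocks" the warehouse to 13,
# because the missing negative-number check is invisible to coverage.
```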


> How many tests does a PR need? One? Five?

Enough so the overall coverage doesn't go down.


Be careful: coverage is a proxy metric for good tests. Striving for high coverage can mislead you about the quality of your tests.


On the flip side, encouraging good coverage usually ends up uncovering some bugs that might otherwise have gone unnoticed until they bit someone.


On the other flip side, it encourages writing tests that have zero business value.
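The classic shape of such a test (names hypothetical): it executes lines, so it bumps the coverage number, but it asserts nothing and therefore catches nothing.

```python
def format_price(cents: int) -> str:
    return f"${cents // 100}.{cents % 100:02d}"

def test_format_price_runs():
    # Counts toward 100% coverage; would still pass if the function
    # returned garbage, because nothing is asserted.
    format_price(1999)

test_format_price_runs()
```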


High coverage is necessary but not sufficient, sure. I don't think you can have a good test suite with low coverage (< 80%).



