Tests, like any process, should serve you and your goals. You shouldn't be serving your processes or testing practices. This sort of un-nuanced thinking isn't indicative of a high-performing startup or CTO, IMHO. Perhaps your policies aren't directly indicative of your real thoughts on the matter?
As others have said, line coverage is a misleading metric. Ideally, your tests would fully cover all _program states_, and even 100% line coverage doesn't guarantee full state coverage. If you have untested states, then the following facts are true:
- You don't have a formalized way of modeling or understanding what will happen to your program when it enters an untested state.
- You have no way to detect when a different change causes that state to behave in an undesired way.
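A toy example of the gap between line coverage and state coverage (all names hypothetical):

```python
def apply_discount(price, rate):
    # Every line here is executed by the single test below,
    # so a coverage tool reports 100% line coverage.
    return price - price * rate

# This test fully "covers" the function...
assert apply_discount(100, 0.2) == 80.0

# ...but the state rate > 1 is never exercised, and in that
# state the "discounted" price silently goes negative:
#   apply_discount(100, 1.5) -> -50.0
```

The function has only one line, so one test is enough for full line coverage, yet whole regions of its input/state space remain untested.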
So the answer to how many tests a PR needs: as many as needed to reduce your software's risk of failure to a minimal level. And that means failure right now and in the future, because you will likely be stuck with this code for a while. Since it's difficult to know how much a future failure will cost your company, IMO it's best to err on the side of testing as much as possible. Plus, good comprehensive tests have other benefits, such as making later changes and cleanups safer by reducing the risk that they unintentionally break other code.
If a function has been statically proven to return an int, I know it will either return an int or not return at all. It can't suddenly return a hashmap at runtime, no matter what untested state it enters.
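To make that concrete, here's a toy sketch (assuming a static checker like mypy; `parse_port` is made up):

```python
def parse_port(s: str) -> int:
    # A checker like mypy verifies statically that every return
    # path yields an int. On bad input the function raises
    # instead of returning -- so callers get an int or nothing,
    # never some other type, regardless of runtime state.
    value = int(s)  # raises ValueError on non-numeric input
    if not 0 <= value <= 65535:
        raise ValueError(f"port out of range: {value}")
    return value
```

That guarantee holds even for inputs no test ever exercised, which is exactly what untyped, untested code can't promise.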
Unless you're actually writing complex tools - no, you're probably not getting a "formalized way of modeling" what happens to your program.
If somebody tells me "hey, I have to keep manually testing this and that, I'm losing a lot of time, how about I spend 2 days writing my test thing?" - I'll say Sure!
But if someone tries to convince me in the abstract - I'll be skeptical. Developer busy-work is real.
Enough tests to cover each of your specs. Adding new functionality to your product? Your tests should cover the cases you put in your specs. Fixing a bug? Your test should trigger it with the old code.
You can have 100% code coverage with unit testing, and it will do jack-shit for your users when they enter a negative number and there was no code meant to handle that case, so it was never tested.
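A toy sketch of that failure mode (the `deposit` function is made up): the validation branch was never written, so there's no line for coverage to flag as missed.

```python
def deposit(balance, amount):
    # No branch handles amount < 0, so there is nothing here
    # for a coverage tool to report as uncovered -- the test
    # below yields 100% coverage while a negative "deposit"
    # silently drains the balance.
    return balance + amount

assert deposit(100, 50) == 150  # full line coverage achieved
```

Coverage measures the code you wrote, not the code you should have written.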
Enough so the overall coverage doesn't go down.