Good question. It depends on the reference template, if there is one. Otherwise, *Unit Testing: Principles, Practices, and Patterns* (Khorikov 2020) suggests:
---
> Let me repeat myself: coverage metrics are a good negative indicator, but a bad positive one. Low coverage numbers--say, below 60%--are a certain sign of trouble. They mean there's a lot of untested code in your code base. But high numbers don't mean anything. Thus, measuring the code coverage should be only a first step on the way to a quality test suite.
Getting <100% coverage means you have no idea what some of the code does when run in dumb, non-parallel mode. Thar be bugs!
And those bugs are easier to find than the ones that might still exist if you have 100% coverage.
No one ever anywhere (except for numerous strawmen) has claimed 100% coverage "will eliminate all bugs and produce the perfect software."
> This article is useless. Everybody knows that 100% code coverage doesn’t eliminate bugs.
A double-secret strawman is still a strawman, dude.
> Junior developers certainly don’t know this. Misguided team managers don’t know it as well.
Great. Do they read your clickbait? If so, do they believe it (on authority, I guess)? Because unless they do, you're certainly a problem and not part of a solution.
Code coverage only tells you whether a line or statement was hit. I’ve been working with offshore development teams that get sufficient code coverage but don’t have any asserts. They call the code, but the test doesn’t test anything (except, maybe, as a “smoke test” - it doesn’t throw an exception).
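To make that concrete, here's a minimal Jest/TypeScript sketch (the `calculateDiscount` function and the file names are made up): the first test earns full coverage without checking anything; the second one actually tests.

```ts
// discount.ts (hypothetical module under test)
export function calculateDiscount(price: number, isMember: boolean): number {
  if (isMember) {
    return price * 0.9;
  }
  return price;
}

// discount.test.ts
import { calculateDiscount } from "./discount";

// This "test" executes every line, so the coverage report is green,
// but it only fails if the call throws. It asserts nothing about the result.
test("coverage without testing", () => {
  calculateDiscount(100, true);
  calculateDiscount(100, false);
});

// A real test pins down the expected behaviour.
test("members get 10% off", () => {
  expect(calculateDiscount(100, true)).toBe(90);
  expect(calculateDiscount(100, false)).toBe(100);
});
```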
JaCoCo measures both line and branch coverage, same as Jest.
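A small sketch of where the two diverge (hypothetical `formatName`, same Jest/TypeScript setup as above): a single-statement conditional gives you 100% line coverage from one test, while the branch report still shows the untested path.

```ts
// format.ts (hypothetical)
export function formatName(name: string | null): string {
  // One statement, two branches. A test that only ever passes a real name
  // yields 100% line coverage but only 50% branch coverage.
  return name ? name.trim() : "(anonymous)";
}

// format.test.ts
import { formatName } from "./format";

test("trims the name", () => {
  expect(formatName("  Ada  ")).toBe("Ada");
});
// Missing: formatName(null). Line coverage stays at 100%; the branch report flags it.
```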
The preferred thing to do is to establish a baseline, and then:
* Write tests for new features going forward.
* Write tests for fixes to production incidents.
Don't let things get worse than baseline. (The number will vary from repo to repo.)
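One way to hold the line is a ratchet in the coverage tool itself. Here's a sketch using Jest's `coverageThreshold` option (the numbers are placeholders; set them to whatever your repo reports today, and JaCoCo's check rules can do the same on the Java side):

```ts
// jest.config.ts -- Jest can load a TypeScript config file (ts-node needed).
// Baseline numbers are illustrative, not aspirational: set them to the repo's
// current values so the build fails only if coverage drops below today's level.
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      lines: 72,
      branches: 60,
      functions: 70,
      statements: 72,
    },
  },
};

export default config;
```

Bump the thresholds whenever the real numbers go up, so the baseline only ever moves in one direction.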