There is no silver bullet. Personally, I let a combination of complexity and importance guide my tests.
The more likely it is that a piece of code will break, and the more business damage it will do if it does break, the more tests I wrap around it.
For self-contained algorithms that have a lot of branches or complex cases, I use more unit tests. When the complexity is in the interaction with other code, I write more high-level tests. When the system is simple but critical, I write more smoke tests.
If I’ve got simple code that’s unlikely to break and it doesn’t matter if it does break, I might have no tests at all.
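To make the first heuristic concrete, here is a minimal sketch of wrapping a branchy, self-contained function in a table-driven unit test with pytest. The `shipping_cost` function and its pricing tiers are hypothetical stand-ins, not anything from a real codebase.

```python
import pytest

def shipping_cost(weight_kg: float) -> float:
    """Hypothetical tiered shipping calculator: small, self-contained,
    but with enough branches and boundaries to deserve unit tests."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1:
        return 5.0
    if weight_kg <= 10:
        return 5.0 + (weight_kg - 1) * 1.5
    return 18.5 + (weight_kg - 10) * 1.0

# One parameterised test covers every branch and boundary.
@pytest.mark.parametrize("weight, expected", [
    (0.5, 5.0),     # smallest tier
    (1.0, 5.0),     # boundary: exactly 1 kg
    (2.0, 6.5),     # middle tier
    (10.0, 18.5),   # boundary: exactly 10 kg
    (12.0, 20.5),   # top tier
])
def test_shipping_cost(weight, expected):
    assert shipping_cost(weight) == pytest.approx(expected)

def test_rejects_non_positive_weight():
    with pytest.raises(ValueError):
        shipping_cost(0)
```

Each row in the table targets a branch or a boundary, which is exactly where branchy code tends to break.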
100%. In critical areas, parameterised tests are worth the effort, especially in conjunction with generators: property-based testing in FP, for example, or simply test data generators that produce a good range of inputs.
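As a sketch of that property-based style, here is what it can look like in Python with the Hypothesis library (`pip install hypothesis`); the run-length encoder is a hypothetical example, and the point is that the generator (`st.text()`) explores a far wider range of inputs than a hand-written table would.

```python
from hypothesis import given
from hypothesis import strategies as st

def rle_encode(s: str) -> list[tuple[str, int]]:
    """Hypothetical run-length encoder: 'aab' -> [('a', 2), ('b', 1)]."""
    runs: list[tuple[str, int]] = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def rle_decode(runs: list[tuple[str, int]]) -> str:
    return "".join(ch * n for ch, n in runs)

# Instead of hand-picking examples, state one invariant (the round-trip
# law) and let the generator produce the test data.
@given(st.text())
def test_rle_round_trips(s):
    assert rle_decode(rle_encode(s)) == s
```

Hypothesis also shrinks any failing input to a minimal counterexample, so generated failures stay as debuggable as hand-picked ones.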