Hacker News

There is no silver bullet. Personally, I let a combination of complexity and importance guide my tests.

The more likely it is that a piece of code will break, and the more business damage it will do if it does break, the more tests I wrap around it.

For self-contained algorithms that have a lot of branches or complex cases, I use more unit tests. When the complexity is in the interaction with other code, I write more high-level tests. When the system is simple but critical, I write more smoke tests.
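As a sketch of the first case (a self-contained algorithm with several branches), a table-driven unit test covers each branch with one row. The function and values here are entirely hypothetical, just to illustrate the shape:

```python
def shipping_cost(weight_kg: float, express: bool) -> float:
    """Hypothetical branchy, self-contained function worth unit-testing."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    base = 5.0 if weight_kg <= 1 else 5.0 + (weight_kg - 1) * 2.0
    return base * 2 if express else base

# One row per branch or edge case; the table doubles as documentation.
CASES = [
    (0.5, False, 5.0),    # light, standard rate
    (1.0, False, 5.0),    # boundary: exactly 1 kg
    (3.0, False, 9.0),    # heavy: 5 + 2 * 2
    (3.0, True, 18.0),    # heavy + express surcharge
]

for weight, express, expected in CASES:
    assert shipping_cost(weight, express) == expected, (weight, express)
```

The table format makes it cheap to add a row whenever a new edge case is found, which is usually how these suites grow.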

If I’ve got simple code that’s unlikely to break and it doesn’t matter if it does break, I might have no tests at all.




100%. In critical areas I would suggest parameterised tests are worth the effort, especially in conjunction with generators: property-based testing in FP, for example, or just test data generators that'll generate a good range of inputs.
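To make the idea concrete without pulling in a library like Hypothesis or FsCheck, here is a minimal hand-rolled sketch in Python (the function and invariants are hypothetical): generate many random inputs and assert properties that must hold for all of them, rather than checking a few fixed examples.

```python
import random

def slug(s: str) -> str:
    """Hypothetical function under test: lowercase, spaces become hyphens."""
    return s.strip().lower().replace(" ", "-")

def random_text(rng: random.Random, max_len: int = 20) -> str:
    """Tiny test-data generator: random strings over a small alphabet."""
    alphabet = "abcXYZ 123  "
    return "".join(rng.choice(alphabet) for _ in range(rng.randint(0, max_len)))

def check_slug_properties(runs: int = 500) -> None:
    """Assert invariants over many generated inputs, not fixed examples."""
    rng = random.Random(42)  # seeded so any failure is reproducible
    for _ in range(runs):
        s = random_text(rng)
        out = slug(s)
        assert " " not in out          # no spaces survive
        assert out == out.lower()      # output is always lowercase
        assert slug(out) == out        # idempotent: re-slugging changes nothing

check_slug_properties()
```

A real property-based library adds the important extras on top of this: automatic shrinking of failing inputs to a minimal counterexample, and much smarter generators.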


I'm not sure what's involved there. Do you have more detail, like a good blog post, to read about how to do that?


Not OP, but I particularly enjoy the fsharpforfunandprofit article about it.

https://fsharpforfunandprofit.com/posts/property-based-testi...



