
I'd also put a static code analysis tool in there, because it gives you a number for how bad things are (the total count of issues at severity level 1 will do). That number can be given to upper management, and then, whilst doing all of the above, you can show that it is going down.
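
A rough sketch of what "get a number and track it" can look like, assuming a Python codebase and flake8 as the analyzer (any tool with machine-readable output works the same way; the file names here are just illustrative):

    #!/usr/bin/env python3
    """Record the current static-analysis issue count so the trend can be charted."""
    import csv
    import datetime
    import subprocess

    def count_issues() -> int:
        # flake8 prints one line per finding; empty output means zero issues.
        # Swap in whatever analyzer your stack uses -- only the counting matters.
        result = subprocess.run(["flake8", "."], capture_output=True, text=True)
        return len([line for line in result.stdout.splitlines() if line.strip()])

    def append_to_log(count: int, path: str = "issue_trend.csv") -> None:
        # One row per run: date and total issue count, easy to plot for management.
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([datetime.date.today().isoformat(), count])

    if __name__ == "__main__":
        total = count_issues()
        append_to_log(total)
        print(f"Total issues: {total}")

Run it from CI on a schedule and the CSV becomes the downward-trending chart you show upstairs.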



There is significant danger that management will use these metrics to micromanage your efforts. They will refuse changes that temporarily drive that number up, and force you to drive it down just to satisfy the tool.

For example, it is easy to see that low code coverage is a problem. The correct takeaway from that is to identify the spots where coverage is weakest, rank them by business impact and actual risk (judged by code quality and expected or past changes), and add tests there. Iterate until satisfied.
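
A minimal illustration of the first step, assuming coverage.py and its JSON report (generated with something like "coverage run -m pytest && coverage json"); the business-impact and risk ranking still has to come from someone who knows the code:

    #!/usr/bin/env python3
    """List the least-covered files from a coverage.py JSON report."""
    import json

    with open("coverage.json") as f:
        report = json.load(f)

    files = [
        (info["summary"]["percent_covered"], path)
        for path, info in report["files"].items()
    ]

    # Weakest coverage first; weighting by business impact or change
    # frequency (e.g. from git log) is a manual judgment on top of this.
    for percent, path in sorted(files)[:20]:
        print(f"{percent:5.1f}%  {path}")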

The wrong approach would be to set something above 80% coverage as a strict goal and force inconsequential, laborious test suites onto old code.


Many tools allow you to set the existing output as a baseline. That's your 0 or 100 or whatever. You can track new changes from that, and only look for changes that bring your number over some threshold. You can't necessarily fix all the existing issues, but you can track whether you introduce new ones.
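
A minimal sketch of the baseline idea, assuming an analyzer that emits one line per finding (flake8 does; the file names are illustrative): record today's findings once, then on later runs only complain about findings that weren't in the baseline.

    #!/usr/bin/env python3
    """Fail only on analyzer findings that are not in the recorded baseline."""
    import subprocess
    import sys

    BASELINE = "lint_baseline.txt"  # created once with: flake8 . > lint_baseline.txt

    def current_findings() -> set[str]:
        result = subprocess.run(["flake8", "."], capture_output=True, text=True)
        return {line for line in result.stdout.splitlines() if line.strip()}

    def baseline_findings() -> set[str]:
        try:
            with open(BASELINE) as f:
                return {line.strip() for line in f if line.strip()}
        except FileNotFoundError:
            return set()

    if __name__ == "__main__":
        new = current_findings() - baseline_findings()
        for finding in sorted(new):
            print(finding)
        # Existing issues are tolerated; newly introduced ones break the build.
        sys.exit(1 if new else 0)

Real baseline features in these tools do fuzzier matching, so unrelated line shifts don't resurface old findings, but the principle is the same.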


The results might also be overwhelming in the beginning.



