
> I was also going to say around 60-70 pct, but for a different reason: what's left in my code is mostly checking of assertions, debug logging and handling rare errors, i.e. code that's not supposed to run.

I grant you assertion-checking, but code for handling rare errors is not the code you should skip writing unit tests for. If it runs infrequently then you're far less likely to stumble on a regression during other testing, so it's even more important to unit test.
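
Concretely, the rare path is easy to hit on purpose if you can inject the failure. A minimal sketch (JUnit 5; the Store interface and all names here are made up, the point is just forcing the failure on demand):

    import static org.junit.jupiter.api.Assertions.assertFalse;

    import java.io.IOException;
    import org.junit.jupiter.api.Test;

    class RareErrorPathTest {
      interface Store {
        void save(String record) throws IOException;
      }

      // Returns false instead of propagating I/O failures.
      static boolean trySave(Store store, String record) {
        try {
          store.save(record);
          return true;
        } catch (IOException e) {
          return false; // the "rare" branch, forced directly below
        }
      }

      @Test
      void saveFailureIsReported() {
        Store alwaysFails = record -> { throw new IOException("disk gone"); };
        assertFalse(trySave(alwaysFails, "r1"));
      }
    }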




Depends on the rare errors. I mean things like: my database crashed halfway through a transaction, my file system drops out from under my application, my back-end service gave up.

You can't do much here. Dump some info, abort the half-done work, maybe try again sometime in the future. And yes, that last part might deserve a test.
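
A sketch of that last, testable part (runWithRetry and the rollback hook are illustrative names, not from any real library):

    import java.util.concurrent.Callable;

    final class Retry {
      // Dump some info, abort the half-done work, try again up to maxAttempts.
      static <T> T runWithRetry(Callable<T> work, Runnable abortHalfDoneWork,
                                int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
          try {
            return work.call();
          } catch (Exception e) {
            last = e;
            System.err.println("attempt " + attempt + " failed: " + e); // dump some info
            abortHalfDoneWork.run(); // roll back partial state before retrying
          }
        }
        throw last; // give up; let the caller decide what's next
      }
    }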


If you can't do much, it shouldn't be much code, so coverage should stay high.

I used to be of the "70% is good enough" school, but I found that as I wrote better programs, I also achieved better test coverage. Not because I was writing more tests, but because I was choosing core designs with fewer edge cases, simplifying my error handling with fewer reachable paths, and admitting I might as well just crash in more cases. So now my line/branch coverage is more like 95-100%, but my code is easier to read and I write _fewer_, mostly functional/behavioral, tests. Most lines I don't reach are fatal errors, and most fatal errors are the only statement in their branch.
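
For illustration, the shape that style tends toward (a hypothetical enum/switch; the only line a test can't reach is a single-statement fatal branch):

    enum DoorState { OPEN, CLOSED }

    final class Door {
      static String describe(DoorState s) {
        switch (s) {
          case OPEN:   return "open";
          case CLOSED: return "closed";
          default:     // unreachable unless the enum grows: just crash
            throw new IllegalStateException("unhandled state: " + s);
        }
      }
    }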

Code you can't reach from test cases isn't a sign you should write more tests, it's a sign you should remove that code from the program.

Today I view 90-95% really as a baseline, and focus more on path coverage (most standard tools are still quite bad at this) and on edge cases in the data (e.g. denormal floats, or different kinds of invalid data that I want to make sure stay invalid) that don't affect coverage one way or another.
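
An example of the kind of data edge case meant here (JUnit 5; isValidRatio is a made-up validator): a subnormal double runs through exactly the same lines as any other input, so coverage can't tell you whether you checked it.

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    class DataEdgeCaseTest {
      // NaN fails both comparisons, so there is no separate NaN branch.
      static boolean isValidRatio(double r) {
        return r > 0.0 && r <= 1.0;
      }

      @Test
      void smallestSubnormalIsStillValid() {
        assertTrue(isValidRatio(Double.MIN_VALUE)); // 4.9e-324, subnormal
      }

      @Test
      void invalidDataStaysInvalid() {
        assertFalse(isValidRatio(0.0));
        assertFalse(isValidRatio(Double.NaN));
      }
    }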


It adds up surprisingly fast. I live mostly in the Java world, and libraries that throw three or so checked exceptions you're required to catch are common.

I'd agree that 90 pct is doable for your own algorithmic code. Glue code for libraries, which are commonly not very well thought out, requires a very different style: defensive to the point of paranoia, catching every gremlin at the source. It's not uncommon to have more than half of the code dedicated to exception handling. Dumping said libraries is rarely an option, unfortunately.
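
Something like this, sketched with a made-up LegacyClient (the three checked exceptions are stand-ins, though all from the JDK):

    import java.io.IOException;
    import java.util.concurrent.TimeoutException;

    final class Glue {
      interface LegacyClient {
        String fetch(String key)
            throws IOException, TimeoutException, InterruptedException;
      }

      static final class FetchFailed extends RuntimeException {
        FetchFailed(String key, Throwable cause) {
          super("fetch failed: " + key, cause);
        }
      }

      // Catch every gremlin at the source; callers see one unchecked type.
      static String fetchOrThrow(LegacyClient client, String key) {
        try {
          return client.fetch(key);
        } catch (IOException | TimeoutException e) {
          throw new FetchFailed(key, e);
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt(); // preserve the interrupt flag
          throw new FetchFailed(key, e);
        }
      }
    }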

I'm not necessarily disagreeing with you, but maybe we live in different code worlds.


We definitely do, and I see how that might apply - I do write some Java, but avoid the disaster of checked exceptions as much as possible. Mostly I'm talking about Python, Go, TypeScript, and other more esoteric stuff.

I don't agree that this defensive style is necessary, though - I do (like everyone these days) write a ton of glue code, and that's as I described. Java is nearly as bad as JavaScript for language-cart-pulling-program-horse designs; it's just a different set of mistakes. It could be so much better if developers just did less.


> ...abort the half-done work...

I have seen more than a few problems caused by this not being done right. IIRC it was an element in the events that destroyed Knight Capital in just a few minutes.

If there is something in your code that can put your system in an inconsistent state, and your testing has not covered that scenario, then it is irrelevant what percentage of the code was covered by that testing. This combinatorial complexity and history-dependence is the reason why code coverage is a misleading indicator of quality: 100% code coverage is very far from 100% scenario coverage.
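
A tiny illustration (hypothetical Buffer class, JUnit 5): the two tests below reach every line, yet the history-dependent scenario, five adds with no reset, which throws ArrayIndexOutOfBoundsException, is never exercised.

    final class Buffer {
      private final int[] data = new int[4];
      private int n;

      void add(int x) { data[n++] = x; } // no capacity check: only fails
                                         // after enough history accumulates
      void reset()    { n = 0; }
    }

    class BufferTest { // 100% line coverage, far from 100% scenario coverage
      @org.junit.jupiter.api.Test void addOnce()   { new Buffer().add(1); }
      @org.junit.jupiter.api.Test void resetOnce() { new Buffer().reset(); }
    }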



