
> I've heard this aversion to unit tests a few times in my career, and I'm unable to make sense of it.

It's very simple: most of the time, people are told by management that they MUST achieve 80-90-95% code coverage (with unit tests), which leads to a lot of absolutely worthless tests - tests for the sake of it. The irony is that the pieces that really count don't get tested properly, because you unit-test the happy path and maybe 1 or 2 negative scenarios, and that's it, missing a bunch of potential regressions.
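As a rough sketch of what that looks like (hypothetical parse_port function, not taken from any real codebase), a single happy-path test can clear a typical 80% coverage gate while leaving every failure mode untested:

    def parse_port(value: str) -> int:
        port = int(value)  # raises ValueError on non-numeric input
        if port < 1 or port > 65535:
            raise ValueError(f"port out of range: {port}")
        return port

    def test_parse_port_happy_path():
        # Executes 4 of the 5 statements above (~80% line coverage),
        # so the coverage gate is satisfied...
        assert parse_port("8080") == 8080

    # ...yet nothing checks the cases that actually regress in practice:
    # parse_port("0"), parse_port("70000"), parse_port("http").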

EDIT: This is just to say that I don't believe the author of the comment said "don't write unit tests" (I hope not, at least!) but, if I can rephrase it, "well, integration tests give you a better dopamine effect because they actually help you catch bugs". Which would also be partially true with properly written unit tests (and they would do so in a fraction of the time you need with integration tests).




> most of the time, people are told by management that they MUST achieve 80-90-95% code coverage (with unit tests), which leads to a lot of absolutely worthless tests - tests for the sake of it

So strict rules from management in a company that likely doesn't understand software development, and lazy developers who decide to ignore this by intentionally writing useless tests, lead to thinking that unit tests and coverage are useless? That doesn't track at all.

I'd say that the answer is somewhere in the middle. If the company doesn't understand software development, it's the engineer's job to educate them, or to find a better place to work. It's also the engineer's job to educate lazy developers to care about testing and about metrics like code coverage.

> if I can rephrase it, "well, the integration tests give you a better dopamine effect because they actually help you catch bugs"

And unit tests don't? I would argue that unit tests give you much more of that dopamine, since you see the failures and passes much more quickly, and there should be many more of them overall. Not that we should structure our work around chasing dopamine hits...

I'd say that most of the people who advocate for this position haven't worked with a well-tested codebase. Sadly, not all of us have the privilege of working with codebases like SQLite's, which go far beyond 100% line/statement coverage[1]. Is all that work in vain? Are they some crazy dogmatic programmers who like wasting their time? I would say no. They just put a lot of effort and care into their product, which speaks for itself, and which I would think makes working on it much safer, more efficient and pleasant.

I would also argue that the current state of our industry, and in turn of everything that depends on software, where buggy software is the norm, would be much better overall if that kind of effort and care were put into all software projects.

[1]: https://www.sqlite.org/testing.html


> Sadly, not all of us have the privilege of working with codebases like SQLite's, which go far beyond 100% line/statement coverage[1]...

The linked SQLite page mentions this:

> 100% branch test coverage in an as-deployed configuration

Branch test coverage is different from line coverage, and in my opinion it should be the only coverage metric used in this context.

90-95% line coverage targets are exactly why many unit tests are garbage and why so many people come up with the argument "I prefer integration tests, unit tests are not that useful".


I'm not sure if I understand your argument.

> Branch test coverage is different from line coverage, and in my opinion it should be the only coverage metric used in this context.

It's not different, just more thorough. Line and statement coverage are still useful metrics to track. They might not tell you whether you're testing all code paths, but they still tell you that you're at least testing some of them.
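A minimal sketch of the difference, using a made-up apply_discount function: one test executes every line, yet branch coverage still flags the untaken path.

    def apply_discount(price: float, is_member: bool) -> float:
        if is_member:
            price *= 0.9  # member discount
        return price

    def test_member_discount():
        assert apply_discount(100.0, True) == 90.0
        # Line coverage: 100% -- every line ran at least once.
        # Branch coverage: 50% -- the is_member == False path was never
        # taken, so a bug on that path would slip through unnoticed.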

Very few projects take testing seriously enough to also track branch coverage, and even fewer go the extra mile of reaching 100% on that metric. SQLite is the only such project I know of.

> 90-95% line coverage targets are exactly why many unit tests are garbage

Hard disagree. Line coverage is still a useful metric, and the only "garbage" unit tests are those that don't test the right thing. After all, you can technically cover a block of code without the test making the correct assertions. Or the test could make the right assertions but not actually reproduce the scenario correctly. And so on. Coverage only tracks whether the SUT was executed, not whether the test is correct or useful. Pointing that out is the job of reviewers.
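For example (hypothetical compute_tax function, purely for illustration), both tests below count identically toward coverage, but only the second would catch a regression:

    def compute_tax(amount: float) -> float:
        return amount * 0.2

    def test_compute_tax_runs():
        compute_tax(100.0)  # executed: counts toward coverage
        # No assertion, so changing the rate to 0.02 would still pass.

    def test_compute_tax_correct():
        assert compute_tax(100.0) == 20.0  # the check a reviewer should insist on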

> and why so many people come up with the argument "I prefer integration tests, unit tests are not that useful".

No. Programmers who say that either haven't worked on teams with a strong testing mindset, haven't worked on codebases with high quality unit tests, or are just being lazy. In any case, taking advice from such programmers about testing practices would not be wise.



