
That is not an apt analogy. Unit tests improve code quality: code that is easily testable has high cohesion and low coupling. Tests can also serve as valuable documentation.



I work in a few codebases at my job that require 100% line and branch coverage.

There are many cases where someone will write tests that hit an endpoint directly and then assert on the whole response, which in this case is quite huge. They'll then do so for all branches.

Their library / service / etc. code is technically exercised, sure, but doTheFooThing() isn't directly tested, so it could harbor a bug that only surfaces when another caller passes different parameters, exactly the kind of bug direct testing would catch. This sort of extreme coupling happens all the time.
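
A minimal Python sketch of what I mean (doTheFooThing's body, the limit parameter, and the bug are all invented for illustration):

    # Hypothetical unit buried behind the endpoint.
    def doTheFooThing(items, limit):
        # Intended: return at most `limit` items. A negative limit
        # silently slices off the tail instead of returning nothing,
        # so a caller passing limit=-1 gets garbage.
        return items[:limit]

    # The endpoint tests only ever pass positive limits, so line and
    # branch coverage both read 100% while this behavior goes unchecked.
    # A direct unit test with a different caller's parameters catches it:
    def test_do_the_foo_thing_negative_limit():
        # Fails against the buggy version above: returns [1, 2], not [].
        # That failure is the point of testing the unit directly.
        assert doTheFooThing([1, 2, 3], limit=-1) == []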

Now I'm slowly converting it to sanity, and my teammates are copying me.

To be fair, it was one of those "get this out now because we're dying" kind of codebases, not due to a lack of skill.

But once code is written, it's hard to undo. Then the bad pattern gets justified as "keeping the same style."


I am in favor of testing the way you complain about. The advantage is primarily that you can refactor code without getting bogged down in having to change tests. An API a) is more stable, so you are maintaining compatibility for that surface anyway, and b) exercises your code under exactly the preconditions that really matter.

If doTheFooThing() is called from somewhere else, then that somewhere else should also have tests. So I find that to be an argument from "purity" more than a practical consideration about bug probability.

Also, if you only test doTheFooThing() but not the API, then you could accidentally refactor yourself into breaking the API in a backwards-incompatible way (or into not being bug-compatible, which is sometimes required; at the very least you should detect the change, check logs, and warn consumers). So the API tests are needed anyway.
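
Concretely, an API-level contract test in that spirit might look like this (Python/pytest sketch; the `client` fixture, URL, and field names are assumptions, not anyone's real API):

    # Pinned to the public surface, not to internals. Renaming or
    # merging internal helpers doesn't touch this test; it only fails
    # when the wire format that consumers depend on actually changes.
    def test_foo_endpoint_contract(client):
        resp = client.get("/api/v1/foo/42")
        assert resp.status_code == 200
        body = resp.json()
        assert body["id"] == 42
        # Assert the documented fields rather than the whole blob, so
        # additive changes stay backwards-compatible.
        assert {"id", "name", "created_at"} <= set(body)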

There's a balance, of course: if doTheFooThing() is an important internal cross-road, or algorithmically non-trivial, or important for other reasons, then it should be tested in separation. But between only semi-integration tests (hitting endpoints and checking responses) and only lots of small unit tests that break or need rewriting for the simplest refactorings but don't catch subtle API breakage, I'd want to work with the former any day. The units of code are often trivial enough that mistakes aren't made there; the mistakes come when stringing the units together, and at that point it is harder to trust the human capacity to enumerate failure scenarios than to just run the real handlers.
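
For the "important internal cross-road" case, a direct unit test still earns its keep. A sketch (merge_intervals is an invented stand-in for an algorithmically non-trivial unit):

    def merge_intervals(intervals):
        # Merge overlapping [start, end] pairs -- the kind of unit that
        # deserves direct tests regardless of endpoint coverage.
        out = []
        for start, end in sorted(intervals):
            if out and start <= out[-1][1]:
                out[-1][1] = max(out[-1][1], end)
            else:
                out.append([start, end])
        return out

    def test_merge_intervals_edge_cases():
        assert merge_intervals([]) == []
        assert merge_intervals([[1, 3], [2, 6], [8, 10]]) == [[1, 6], [8, 10]]
        # Touching intervals merge too.
        assert merge_intervals([[1, 4], [4, 5]]) == [[1, 5]]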


I've gone through various stages in my feelings about full-coverage unit testing vs. functional testing.

Having full unit coverage with full path coverage is a great ideal. The reality is that for most companies the overhead of writing and maintaining these tests is prohibitive from a business POV. Getting buy-in to spend more than 50% of your time maintaining (writing, updating, verifying) tests is a very hard sell. For companies with under-staffed, over-worked development teams, it just doesn't happen.

At this point in my career I'm firmly in the camp that functional testing is what really matters in most cases. It is a compromise between the realities of business and the needs of engineering. I can test the end product of the development work as a whole and verify that everything works as expected and has no known strange side effects. This also serves as a contract with your business users as to the functionality of the product. If bugs are discovered, you can add specific use-case tests that trigger the bug and prevent regression. None of this precludes limited unit testing for critical code paths. I find this to be a much more pragmatic approach.
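
In practice that looks like pinning every discovered bug with a small end-to-end case (Python sketch; the ticket number, endpoint, and `client` fixture are made up):

    # Regression test added after a real bug report: an empty search
    # query used to return a 500 instead of an empty result set.
    # Naming it after the ticket keeps the "contract" traceable.
    def test_bug_1234_empty_search_returns_empty_results(client):
        resp = client.get("/api/search?q=")
        assert resp.status_code == 200
        assert resp.json() == {"results": []}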


High cohesion and low coupling are good, and being easily testable correlates with both, but having lots of unit tests doesn't imply that the code is easily testable, and not having unit tests doesn't imply that the code has low cohesion or high coupling. It's even possible that code needs many lines of tests partly because it isn't easy to test.

In the case of SQLite, I think the test volume mostly comes from hard work in service of the ambition to deliver a robust project, and from the existence of fuzzing, which can automate test generation.
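
SQLite's fuzzing (AFL, OSS-Fuzz, and their own dbsqlfuzz, per their testing docs) hammers the SQL parser with generated inputs. The same idea in miniature is property-based testing; a sketch with Python's hypothesis library (the sorting invariants are just an illustration, nothing SQLite-specific):

    from hypothesis import given, strategies as st

    # Fuzzing in miniature: generate many random inputs automatically
    # and check an invariant that must hold for all of them, instead
    # of hand-writing each case.
    @given(st.lists(st.integers()))
    def test_sorting_invariants(xs):
        once = sorted(xs)
        assert sorted(once) == once                    # idempotent
        assert sorted(xs, reverse=True) == once[::-1]  # reverse agrees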



That is an analogy I copied from McConnell, Steve (2009-11-30). Code Complete (Kindle location 16276). Microsoft Press, Kindle edition. I tend to rely on what is written in this book.


That's fine. It's a good book. That doesn't mean the analogy works in every situation, and I happen to think it doesn't in this one. Dr. Hipp is a great coder; telling him to "develop better" on the strength of that analogy just falls apart. Tests are there for a reason, and Dr. Hipp uses them to great effect on the quality of SQLite.


Yeah, I agree. Actually, the whole point of the original comment was that SQLite's robustness is the result of the multiple development practices they use (including testing, of course); test coverage can't lead to success by itself. That's it.



