
> Striving for 100% test coverage is nonsense

Tell that to the SQLite guy.




Striving for very high test coverage seems like exactly the kind of thing you want for your database infrastructure.


And that's exactly why you need fewer tests in a project that uses a DB: there's no need to test the DB itself, because it's already covered by its own tests.


Interactions with your DB are often the most fragile part of your code, because there's an impedance mismatch between the language you're writing in and SQL. Some languages/frameworks abstract this more safely than others, though.


Interaction, in general, is where many, if not most, errors lie. Unit testing verifies that things work in isolation. But if two components interact and their unit tests encode different assumptions, the unit tests alone won't do you any good: X generates a map like `%{name => string()}`, while Y assumes X generates a map like `%{username => string()}`. Hopefully that won't happen if you're writing both parts at the same time, but things like this do happen. Your unit tests pass because each is based on the assumptions of its own unit of code, but put them together and boom!
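A minimal sketch of that failure mode in Python (the functions here are hypothetical, standing in for the map example above): each unit test mirrors its own unit's assumption and passes, yet the composition fails.

```python
import pytest

# Unit "X": produces a user map keyed by "name".
def build_user():
    return {"name": "alice"}

# Unit "Y": assumes the map is keyed by "username".
def greet(user):
    return f"Hello, {user['username']}!"

# Both unit tests pass, because each encodes its own unit's assumption:
def test_build_user():
    assert build_user() == {"name": "alice"}

def test_greet():
    assert greet({"username": "alice"}) == "Hello, alice!"

# Put the units together and boom: the mismatch only shows up here.
def test_integration():
    with pytest.raises(KeyError):  # the bug neither unit test could see
        greet(build_user())
```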


Exactly, though I believe there's still a thin line between testing the interaction and testing the DB itself. It's like the difference between testing your code and testing the language itself.


You should definitely watch this presentation on SQLite exploits.

SELECT code execution from using SQLite - DEF CON 27 Conference

https://www.youtube.com/watch?v=HRbwkpnV1Rw

Don't get me wrong, it's a great product and I use it often, but 100% test coverage does NOT equal 100% safe.


Something like SQLite can actually be unit tested to 95% or so, if you stretch the definition of 'unit test' a bit (don't mock I/O).

Try that with a networked application that takes user input, though...


Mocking networked services is very much worth doing, because you can then check that you've set timeouts correctly, that you handle incomplete or junk responses gracefully, etc. Those are the kinds of hidden problems that can bite you in production deployments.
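For instance, a rough sketch in Python with unittest.mock standing in for the network (fetch_status and the URL are made-up names for illustration):

```python
from unittest import mock

import requests

def fetch_status(url):
    """Return the parsed status, or None on timeouts and junk responses."""
    try:
        resp = requests.get(url, timeout=2)
        return resp.json()["status"]
    except (requests.Timeout, ValueError, KeyError, TypeError):
        return None

# Did we remember to set a timeout at all?
@mock.patch("requests.get")
def test_timeout_is_set(get):
    fetch_status("http://example.test")
    assert get.call_args.kwargs.get("timeout") == 2

# Do we survive the timeout actually firing?
@mock.patch("requests.get", side_effect=requests.Timeout)
def test_timeout_is_handled(_get):
    assert fetch_status("http://example.test") is None

# Do we survive a body that isn't JSON at all?
@mock.patch("requests.get")
def test_junk_body_is_handled(get):
    get.return_value.json.side_effect = ValueError("not JSON")
    assert fetch_status("http://example.test") is None
```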


You can mock networks all you want and you'll still cover only what you can think of.

It will always break on the user's internet, because it's too diverse to predict.

Doesn't mean you can't have some networking unit tests, just that you shouldn't put too much faith in them.

Edit: you said services. Are you thinking of servers? I'm thinking of clients.


"mocking networked services" is exactly what you would do when testing clients.

And it doesn't have to be a static mock. It's not too hard to inject a fuzzer into your mock service's responses, although that's probably best left to a separate testing routine rather than your unit test setup. But if you have no mock for your network service, you can't fuzz it either.
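A seeded fuzz loop through a mock might look like this (same hypothetical fetch_status client as above, repeated so the snippet stands alone):

```python
import json
import random
from unittest import mock

import requests

def fetch_status(url):
    """Return the parsed status, or None on any garbage from the service."""
    try:
        return requests.get(url, timeout=2).json()["status"]
    except (requests.Timeout, ValueError, KeyError, TypeError):
        return None

@mock.patch("requests.get")
def test_fuzzed_responses_never_crash(get):
    rng = random.Random(0)  # seeded, so any failure is reproducible
    corpus = [b"", b"{}", b"null", b'"a string"', b"{truncated"]
    corpus += [bytes(rng.randrange(256) for _ in range(rng.randrange(32)))
               for _ in range(200)]
    for junk in corpus:
        # The mocked service hands back raw junk as the JSON body.
        get.return_value.json.side_effect = lambda j=junk: json.loads(j)
        fetch_status("http://example.test")  # must degrade, never raise
```

The point is the structure: once the service boundary is a mock, the response becomes an input you control, so you can feed it anything.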


And yet they had bugs and could lose data.

Which shows that 100% unit test coverage is not better than spending that time on other kinds of tests.


Unit tests don't account for timing-based effects, and you generally can't test for ACID properties with them.


TBH, SQLite is well suited to 100% coverage.

A lot of application code or workflow-style code is hard to bring to 100% coverage, because many of its paths are rarely triggered.


The rarely triggered code is exactly where you want unit tests.

That's an opportunity to describe, in code, what that path is supposed to do, and then make sure it does it.
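As a tiny made-up illustration: a disk-full fallback might fire once a year in production, and its test is the one place its intended behavior is written down.

```python
PENDING = []  # hypothetical in-memory fallback queue

def save_report(report, write):
    try:
        write(report)
        return "saved"
    except OSError:
        # The rarely triggered path: don't lose the report, queue it.
        PENDING.append(report)
        return "queued"

def test_disk_full_queues_the_report():
    def failing_write(_report):
        raise OSError("No space left on device")
    assert save_report({"id": 1}, failing_write) == "queued"
    assert PENDING == [{"id": 1}]
```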



