Hacker News

100% branch, line coverage means nothing. It's about logical coverage. What are you testing for? You are not testing lines of code, but logic.

Right. The actual standard is called "modified condition/decision coverage" or MC/DC. In languages like C, MC/DC and branch coverage, though not exactly the same, are very close.

Achieving 100% MC/DC does not prove that you always get the right answer. All it means is that your tests are so extensive that you managed to get every machine-code branch to go in both directions at least once. It is a high standard and is difficult to achieve. It does not mean that the software is perfect.

But it does help. A lot. When I was young, I used to think I could write flawless code. Then I wrote SQLite, and it got picked up and used by lots of applications. It will amaze you how many problems crop up when your code runs in millions of applications on billions of devices.

I was getting a steady stream of bug reports against SQLite. Then I took 10 months (2008-09-25 through 2009-07-25) to write the 100% MC/DC tests for SQLite. And after that, the number of bug reports slowed to a trickle. There still are bugs. But the number of bugs is greatly reduced. (Note that 100% MC/DC was first obtained on 2009-07-25, but the work did not end there. I spend most of my development time adding and enhancing test cases to keep up with changes in the deliverable SQLite code.)

100% MC/DC is just an arbitrary threshold - a high threshold and one that is easy to measure and difficult to cheat - but it is just a threshold at which we say "enough". You could just as easily choose a different threshold, such as 100% line coverage. The higher the threshold, the fewer bugs will slip through. But there will always be bugs.

My experience is that the weird tests you end up having to write just to cause some obscure branch to go one way or another end up finding problems in totally unrelated parts of the system. One of the chief benefits of 100% MC/DC is not so much that every branch is tested, but rather that you have to write so many tests, and such strange, weird, convoluted, and stressful tests, that you randomly stumble across (and fix) lots of problems you would have never thought about otherwise.

Another big advantage of 100% MC/DC is that once they are in place, you can change anything, anywhere in the code, and if the tests all still pass, you have high confidence that you didn't break anything. This enables us to evolve the SQLite code much faster than we could otherwise, using relatively few eyeballs.

Yet another advantage of 100% MC/DC is that you are really testing compiled machine code, not source code. So you worry less about compiler bugs. "Undefined behavior" is a big bugbear with C. We worry less than others about UB because we have tested the output of the compiler and we know that the compiler did what we wanted, even if the official C-language spec didn't require it to. We still avoid UB, and SQLite does not currently contain any UB as far as we know. But it is nice to know that even if we missed some UB in the code someplace, it probably doesn't matter.

Nicely written, and thank you for providing such a great piece of engineering!

A thought: would it help to have a modified C compiler that would crash the app whenever UB was encountered? It might help find some bugs where a non-default C compiler was used (which I assume happens, given the large number of platforms SQLite supports). Or am I missing something?

There is ASan, the address sanitizer. You can enable it by passing -fsanitize=address to gcc or clang. It will make your program crash as soon as there is an out-of-bounds read or write; its sibling UBSan (-fsanitize=undefined) does the same for many kinds of undefined behaviour. If debug symbols are enabled, it will also tell you which line of code was responsible. It can save you countless hours of debugging.

I believe there are some things you have to do that aren't C-compatible, e.g. storing fat pointers of base+length+offset instead of raw pointers, to catch OOB accesses.

It would not help, because modern compilers treat UB as an optimization opportunity, including a license to do whatever they want (even eliminating code).

That 100% branch coverage does not include indirect calls via function pointers, or jumps to signal handlers caused by division by zero or invalid memory access, right?

True, but SQLite is still one damn well-crafted codebase that has been explored by thousands of pairs of eyes over time. And that's the point.

Isn't that the definition of branch testing, to test all possible branches within code and also testing the logic in all of those branches?

Consider a line like:

    value = 1 / input;
You can get "100% coverage" if you test that with `input = 1`, but unless you check with `input = 0` you're missing a quite important logical check.

Can't you just have a NonZeroInt type?

Only if your language supports dependent types.

It's perfectly possible in languages with ordinary ADTs.

  data Nat = Z | S Nat
  data NonZeroNat = OnePlus Nat
  data NonZeroInt = Negative NonZeroNat | Positive NonZeroNat

C isn't one of those languages, though.

The analogous approach would be the Go-style represent-a-sum-badly-as-a-product:

  struct nonzero_t {
    int is_negative;
    unsigned int one_less_than_the_absolute_value;
  };

which, under that interpretation, ranges from -(2^32) to -1 and from +1 to +(2^32).

In Ada, you can define integer types that only accept a given range of values.

Not really - internally inputs 0 and 1 use different branches.

That's not a branch; otherwise you would have an infinite (or impossibly large) number of branches for just that one line of code. A branch is when you execute one set of code upon a given condition, and another if that condition is not met.

I didn't say every number is a different branch. But on many processors, divide by zero triggers an interrupt. That's semantically the same as a branch.

It depends on the language. In C it is not a branch because division by zero is undefined and not a path you consider. In Java you can argue that there are two branches. One branch that throws an exception and one that does not.

No coverage reporting library will attempt to tell you that kind of coverage. You are essentially in violent agreement with the op but turning it into an argument by using different words for the same concept.

Testing all possible branches (each considered individually) won't get you very far in terms of testing all possible logic flows.

Consider a function which checks 5 simple if-statements in a row, always in the same order. Getting each branch means you tested 10 things.

But there are 32 ways for 5 if-statements to jointly evaluate. If there is a logical dependency between the state checked by one if-statement and the state checked by another one, your perfect coverage may not pick up on that.

If the if-statements might be checked in an arbitrary order... there are 120 ways to order 5 things. But you'll still get perfect branch coverage by checking 10 of them.

  int returns_less_than_twelve(int input1, int input2) {
    int sum = 0;
    if (input1 >= 5) {
      sum += 5;
    }
    if (input2 >= 5) {
      sum += 8;
    }
    return sum;
  }
The following test cases will pass and achieve 100% branch coverage.

  int x = returns_less_than_twelve(5,0);
  test_assert(x < 12);
  int y = returns_less_than_twelve(0,5);
  test_assert(y < 12);
However, this does not cover the entire input space, so

  int z = returns_less_than_twelve(5,5);
  test_assert(z < 12);

will fail (the function returns 13 in that case). Yet branch coverage won't tell you that there's a hole in your test coverage.

Generally however, writing full branch coverage will find a lot of issues, and also cause you to really think through how your code works; but still, it doesn't guarantee correctness. If you want that you need to start bringing tools that either exhaust your input space (a function which takes 5 booleans can be exhaustively tested for correctness in trivial amounts of time), or you start modeling your chosen language well enough that you can use a mathematical prover to demonstrate that your program or function is safe on all inputs.

This of course requires you to come up with a definition of 'correct' or 'safe'. For the above program, it's clear how to define correctness. For things like "Don't let an unauthorized individual access this data or data that is derived from it in a way contrary to the desires of the owner of said data" it gets 'tricky' ;).

You only test all possible executions of your program when you test with all possible data inputs, and that set is effectively infinite.

You can do this with machine verified programs. It's like proving a maths theorem; you don't check it for all values but you create a robust argument that it must be true for all values.

That is why I like using random data generators for tests. You can input some static data and then the rest is random. Every once in a while a bug pops out when you see a test fail that was previously passing.

SQLite is heavily subjected to fuzzing already ... maybe this vulnerability was discovered that way.

And the probability of finding the corner case ('input = 0') for even the simplest expression ('value = 1 / input') this way is not exactly astounding.

None of the above methods are used for testing. You use boundary testing, branch testing, equivalence partitioning etc. Random data is not a good method of testing.

Except for the fact that it is exactly the method that has been used to discover a large number of critical bugs in the most popular OSS projects (including SQLite): http://lcamtuf.coredump.cx/afl/

Fuzzing isn't really practical if all you do is just generate a totally random bit stream for input. There are many much more clever and robust strategies to hit as many edge cases as possible. Check AFL[1] for some details on generating smart random input files. You can also combine that with pretty advanced dynamic execution analysis to fuzz against unknown processor instruction sets, like in sandsifter[2].

[1]: http://lcamtuf.coredump.cx/afl/

[2]: https://github.com/xoreaxeaxeax/sandsifter

On the contrary, for years, the most prolific fuzzers basically did just generate random bitstreams, and that technique will still find vulnerabilities in all the memory-unsafe software that hasn't been fished out by those same dumb fuzzers.

Sorry, I strongly disagree. Random data with a few static arguments is an incredibly effective way to test. Adding in some chaos finds bugs. "Why did that test fail after 100 times... ohhhh."

I try to use random data whenever possible; with a proper setup there are fewer and smaller tests to write. End result: more bugs found.

Random data is a great method of testing.

This is exactly why I don't find code coverage tools useful. Now, a tool that can show what the tests assert against? THAT would be useful.

What you’re describing sounds very similar to “mutation” testing. https://en.m.wikipedia.org/wiki/Mutation_testing

That would be interesting. In what form would you express the result?
