
MinUnit: a minimal unit testing framework for C - Tomte
http://www.jera.com/techinfo/jtns/jtn002.html
======
MaulingMonkey
If we're going this far - C already has a "unit testing framework", assert.h.
Just ensure !NDEBUG.

~~~
filmor
A failing `assert` interrupts execution, though.

~~~
MaulingMonkey
Right on one of the file/line combinations I want to look at when the debugger
is attached, even. Feature!

When the debugger is not attached, it provides additional incentive to fix the
tests ;).

(Hanging CI on failed tests on remote servers is, admittedly, annoying at
best. MinUnit will also only report a single failed test in that case, I
note.)

------
int_19h
This is obviously not an option for embedded stuff, but on many common
platforms, it's actually easiest to unit-test C from Python. While you do need
to write extra glue code (although ctypes is often "good enough" for tests,
which significantly reduces that amount), it's just so much easier to set
things up in Python - e.g. generating inputs (especially strings!), setting up
files, etc.

~~~
pagnol
Would you mind giving an example of how you would unit test a function?

~~~
int_19h
The same way you would unit test a Python function, except you invoke the
actual C function using e.g. ctypes:

[https://docs.python.org/3/library/ctypes.html](https://docs.python.org/3/library/ctypes.html)

but your unit test is written in Python, including all the setup/teardown
stuff and asserts. And you run it using the stock Python test runner (or some
wrapper around that):

[https://docs.python.org/3/library/unittest.html](https://docs.python.org/3/library/unittest.html)

(In theory, this can all be done even more smoothly if you use Cython, because
that lets you interface with C more naturally than ctypes. I haven't tried
that, though.)

Note that this all makes sense for C APIs, not C++ ones. With C++ APIs, as
soon as classes etc come into the picture, and especially once you start
dealing with STL and Boost data types, it becomes too difficult to invoke that
stuff from Python. However, the _implementation_ can be in any language - what
matters is what the tested API looks like.
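
As a concrete sketch of this workflow: a unittest case that calls a C function through ctypes might look like the following. To keep the example self-contained it calls libc's `strlen` rather than a function from your own shared library; in a real project you would `CDLL("./libmycode.so")` instead.

```python
import ctypes
import ctypes.util
import unittest


class TestStrlen(unittest.TestCase):
    def setUp(self):
        # Load the system C library; for your own code, build a shared
        # library (e.g. `cc -shared -fPIC -o libmycode.so mycode.c`)
        # and load that instead.
        self.libc = ctypes.CDLL(ctypes.util.find_library("c"))
        # Declare the prototype so ctypes marshals arguments correctly.
        self.libc.strlen.argtypes = [ctypes.c_char_p]
        self.libc.strlen.restype = ctypes.c_size_t

    def test_strlen(self):
        self.assertEqual(self.libc.strlen(b"hello"), 5)
        self.assertEqual(self.libc.strlen(b""), 0)


if __name__ == "__main__":
    unittest.main()
```

All the setup/teardown, assertions, and test discovery come from the stock Python runner; only the function under test is C.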

------
chriswarbo
For those asking what the point of this is, I'm not the poster but I found it
very useful a few months ago in a sample solution to a programming class:
[https://github.com/Warbo/aai-2016-17/tree/master/lab01](https://github.com/Warbo/aai-2016-17/tree/master/lab01)

Most of the students would waste most of their time in lab sessions adding and
removing printf statements to one huge `main` function, and trawling through
the resulting mess on the console.

It was quite satisfying to guide them through the process of:

\- Articulating exactly what they're looking for in the console (e.g. "these
numbers should add up to the same as that")

\- Writing that down in the code (e.g. `mu_assert("results add up", x + y ==
z);`)

\- Removing the corresponding printf calls

Due to the sheer amount of output from printf, many students had cobbled together
elaborate nested-if-based one-char-at-a-time scanf-parser menu systems to
choose which parts of the program to run with which inputs; the results being
far more complicated than the actual programming assignments they'd been
given! Since tests only output when they fail, all of this complication could
be thrown away.

Then, the magic parts:

\- _Leave the tests in place_ , so they'll be checked _every time_

\- _Keep adding more tests_ to narrow down a problem more specifically

\- Wrap the tests in `for (int i = 0; i < 1000000; i++) { ... }` to check _far
more_ cases than would be feasible manually

Besides driving home the idea of automation, testing across many inputs in a
loop usually made debugging easier. Due to the effort of setting up, checking
and tearing out printf debugging statements, many students had skipped
checking trivial cases like 0 and 1, and instead were trying to wade through
the output for an input like 50. Adding a loop caused those trivial cases to
be checked, and the problems became obvious :)

------
mauvehaus
If you're looking for something a little more featureful, there are a couple
of other options out there: Check[0] and CUnit[1].

I've used both, although I wouldn't say I've used any of the advanced features
of Check. Check seems to be a bit better maintained and documented, but I don't
remember having any particular problems with CUnit the last time I used it.

Check's documentation also includes a list of the other C unit testing
frameworks its developers know about[2]. I'm curious to check out AceUnit,
which is said to be usable in an embedded context.

My approach for testing embedded functionality has been to shim any hardware-
specific stuff in the code I want to test and run the tests using Check on the
machine hosting the build. This "works", but it misses errors that occur
because of differences in e.g. the size of size_t. Mind you, I've only been
doing embedded for fun recently.

[0] [https://libcheck.github.io/check/](https://libcheck.github.io/check/)

[1] [http://cunit.sourceforge.net/](http://cunit.sourceforge.net/)

[2] [https://libcheck.github.io/check/doc/check_html/check_2.html#Unit-Testing-in-C](https://libcheck.github.io/check/doc/check_html/check_2.html#Unit-Testing-in-C)

ETA: Disappearing from the internet. Won't be around for discussion. Apologies
for the post-and-run.

------
to3m
Meh. Like, who the hell wants to have to restate their conditions twice? Not
me, that's for sure.

What you need is something more like this. (I'll write out one case, and you
can figure out the rest.) It's more lines, but I don't think you'll feel like
it's bad value per LOC.

For the actual condition you write in your code, you want a macro that expands
to something with __FILE__ and __LINE__, and uses the stringizing operator to
get the LHS and RHS as strings. Here's one that compares two ints (hence II)
for equality (hence EQ).

    
    
        #define TEST_EQ_II(A,B) (DoTestEqII((A),#A,(B),#B,__FILE__,__LINE__))
    

The function that implements that. Note that because C is stupid, the int
parameters basically have to be int64_t... this isn't really a problem in
practice.

    
    
        void DoTestEqII(int64_t a,const char *a_str,
                        int64_t b,const char *b_str,
                        const char *file,int64_t line)
        {
            DoTestII(a,a_str,b,b_str,file,line,a==b,"==");
        }
    

Clearly I'm all about the layering, and here's the next layer.

    
    
        void DoTestII(int64_t a,const char *a_str,
                      int64_t b,const char *b_str,
                      const char *file,int64_t line,
                      int result,const char *oper)
        {
            if(!result) {
                Print("%s%s%" PRId64 "%s: test failed\n",file,LINE_PREFIX,line,LINE_SUFFIX);
                Print("    Expected expression: %s\n",a_str);
                Print("               Operator: %s\n",oper);
                Print("      Actual expression: %s\n",b_str);
                Print("         Expected value: %" PRId64 " (0x%" PRIx64 ")\n",a,a);
                Print("           Actual value: %" PRId64 " (0x%" PRIx64 ")\n",b,b);
                Break();
            }
        }
    

For VC++:

    
    
        #define Break() __debugbreak()
        #define LINE_PREFIX "("
        #define LINE_SUFFIX ")"
    

Then make Print print via OutputDebugString (VS2015 does C99-style snprintf,
so this is easy to arrange perfectly), so the output goes to the output
window. Visual Studio treats output window lines beginning like "FILE(LINE):"
as referring to source locations, so when you get a failing test you can just
tap F8 - I think? I'm not in Windows right now - and you'll be taken to the
line and file with the error.

For POSIX-type stuff, e.g., Emacs:

    
    
        #define Break() asm volatile ("int $3\n") /* if you get a build error here, your computer is shit. */\
                                                  /* Throw it away, and buy a proper one, with an x64 CPU. */
        #define LINE_PREFIX ":"
        #define LINE_SUFFIX ""
    

And make Print print to stderr. I typically run my tests as part of the build
process, invoked via M-x compile, and Emacs then picks up the FILE:LINE:
markup when you use M-x next-error and M-x previous-error.

(If you use vim, or whatever... I'm sorry. But lesser editors often understand
the standard FILE:LINE: markup, so even when you're slumming it, you're not,
like, _totally_ slumming it.)

You can probably figure out how this extends from int equals int to int not
equals int, int less than int, blah blah... and from there to string vs
string, and double vs double, and so on. It's not really the operations that
are the key thing, because of course they're easy to add; for my money the
benefit comes from auto-generating the output text from the condition, and
from having it integrate tidily with the calling process - Visual Studio,
Emacs, other (notepad, vim, etc.) - in such a way that you can jump
immediately to the source location when a test fails.

Where the article and I _do_ agree: the conclusion, all of it. Writing a unit
test system isn't hard, and writing your own isn't crazy. It probably won't
take you even one hour.

P.S. if you like C++, you can go one better, and have the output text auto-
generated without even having to say what sort of comparison it is:
[https://github.com/philsquared/Catch](https://github.com/philsquared/Catch)

Only downside is that while you'll save one hour not writing your own, you'll
lose ten hours waiting for it to compile. Also, good luck trying to write test
programs that have command line options. This is what I believe they call
"technical debt".

~~~
keithnz
I wrote [https://github.com/keithn/seatest](https://github.com/keithn/seatest)
a while back so I could test on embedded devices. It's pretty simple, but one
thing I needed to avoid was memory allocation, so test suites are built via
function chains; the downside is that tests have to be explicitly specified,
though I have scripts for that.

~~~
be_erik
Thank you so much for seatest! I've used it before on a couple of small
projects and really liked it. Yes, it is a bit of a pain to specify the tests,
but if you're red/greening it works out great.

I'd love to see the scripts you used to manage your suite if you wouldn't mind
sharing.

------
mfukar
This is possibly the most useless piece of code I've ever run across, in any
language or context.

~~~
alexhultman
Maybe it's a joke? Or some kind of hint about testing "frameworks" being too
complex? Personally I think this is hilarious because I honestly cannot
understand how people even need a library to do testing in the first place.

------
jlebrech
Testing from the same file - how it should be done.

Future languages should have testing built into the method definitions (I'd
call it first-class-citizen testing).

------
GoToRO
Tessy.

------
floor_
Unit testing in C? Why???

