CBMC found some huge bugs that the test suites were not able to find.
KLEE has a similar status to Z3: hugely popular, but rarely used in practice. You mostly see them in educational toys or side projects.
But the capacity for achieving that is a function of the engine and how it integrates with the language. Like formal verification, it's strongly computationally bound in practice, so the best results (e.g. more comprehensive coverage) are achieved when you structure your code so that components (functions, groups of functions, etc.) can be symbolically executed individually, and when you can drop annotations into the code to narrow down value ranges the engine can't determine by itself within a reasonable amount of time. So in principle a symbolic execution engine could determine whether a page table could be mangled, but in practice it probably couldn't definitively tell you one way or the other if you simply tossed the entire FreeBSD kernel source at it.
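For a concrete sense of that workflow, here's a minimal sketch of a KLEE harness that tests one component in isolation (klee_make_symbolic and klee_assume are real KLEE calls; the lookup table is a made-up stand-in):

    #include <klee/klee.h>

    #define TABLE_SIZE 512
    static int table[TABLE_SIZE];

    /* Component under test, harnessed on its own rather than as
       part of an entire kernel. */
    static int lookup(unsigned idx) {
        return table[idx];  /* out of bounds when idx >= TABLE_SIZE */
    }

    int main(void) {
        unsigned idx;
        klee_make_symbolic(&idx, sizeof(idx), "idx");
        /* Annotation narrowing the value range: without it the
           solver has to consider all 2^32 values of idx; with it,
           exploration is bounded but KLEE still finds the
           out-of-bounds region 512 <= idx < 1024. */
        klee_assume(idx < 1024);
        return lookup(idx);
    }

Compiled to LLVM bitcode (clang -emit-llvm -c) and run under klee, this reports the out-of-bounds access along with a concrete idx that triggers it.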
I'm not particularly knowledgeable in this space, and I'm abstaining from giving a proper description of what symbolic execution actually is because I'd probably just confuse things. But I have used KLEE and even found bugs with it. It's a steep learning curve, but once you get to the point where you can make use of KLEE, much of the theory will begin to make sense even if you can't describe it properly ;) And if you've ever used formal verification systems, especially ones with strong language integration, things will make sense much earlier.
I don't think there's a clean dividing line between symbolic execution and static analyzers. Some modern static analysis checks and optimizations in GCC and clang could be described as employing symbolic execution; a proper symbolic execution engine just performs such checks more comprehensively, exploring much longer and deeper control-flow paths over far larger state spaces.
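To illustrate the difference, here's a toy example of mine (not from any real codebase) where the bug sits four branches deep behind an exact 32-bit value. Pattern-matching static checks typically give up on path conditions like this, while a symbolic executor solves them directly:

    #include <klee/klee.h>

    /* Division by zero only on the single path where x == 0xdeadbeef. */
    int process(unsigned x) {
        if ((x & 0xffu) == 0xefu)
            if (((x >> 8) & 0xffu) == 0xbeu)
                if (((x >> 16) & 0xffu) == 0xadu)
                    if (((x >> 24) & 0xffu) == 0xdeu)
                        return 1 / (x - 0xdeadbeefu);  /* denominator is 0 here */
        return 0;
    }

    int main(void) {
        unsigned x;
        klee_make_symbolic(&x, sizeof(x), "x");
        return process(x);
    }

A fuzzer would need luck to stumble onto 0xdeadbeef; the constraint solver derives it in one query.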
In fairness, that's a big deal, especially when trying to port a large existing code base.
> A successor to the programming language B, C was originally developed at Bell Labs by Dennis Ritchie between 1972 and 1973 to construct utilities running on Unix.
> In 1972, Ritchie started to improve B, which resulted in creating a new language C.
The only alternative to it, besides improved static analyzers, is the ongoing work from CPU and OS vendors to bring hardware tagged memory (e.g. Arm's MTE, CHERI) into mainstream computing.
Isn't that actually kind of significant?
See here for more info (it transitively links to a tutorial on using 3C): https://github.com/correctcomputation/checkedc-clang/blob/ma...
That really irks me. It's apparently a holdover from the days when papers were published in magazine-style journals that carried an issue date: putting a date on the paper itself would have made it clear how far behind the publisher was, so dates were left off. Papers should have dates on them now.
Published in: 2020 IEEE Secure Development (SecDev)
Date of Conference: 28-30 Sept. 2020
Date Added to IEEE Xplore: 21 October 2020
It seems to me that the path of least resistance for memory safety in C-based OS kernels is to port them to a memory-safe C compiler and runtime.
Of course, if it's feasible to refactor existing C code as in the paper, that seems like a good path forward, versus trying to "start over" in whatever checked language you'd otherwise pick.
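For anyone curious what that refactoring looks like concretely, here's a rough before/after sketch in Checked C (my approximation; the annotations 3C emits for real code may differ):

    /* Plain C: nothing ties the pointer to its length. */
    int sum(int *a, int n);

    /* Checked C: the bounds travel with the pointer, and the
       compiler checks accesses to a[i] against them. */
    int sum(_Array_ptr<int> a : count(n), int n) {
        int s = 0;
        for (int i = 0; i < n; i++)
            s += a[i];   /* checked against count(n) */
        return s;
    }

The appeal is that the function body stays ordinary C; only the declarations grow bounds annotations.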
Compared to C++?