

The “too small to fail” memory-allocation rule - Athas
http://lwn.net/Articles/627419/

======
greenyoda
Even if you're not a kernel developer, this article is well worth reading: it's
a warning about design decisions that seem harmless at the time but become very
painful to undo later.

Here's the punch line from the article:

 _The alternative would be to get rid of the "too small to fail" rule and make
the allocation functions work the way most kernel developers expect them to.
Johannes's message included a patch moving things in that direction; it causes
the endless reclaim loop to exit (and fail an allocation request) if attempts
at direct reclaim do not succeed in actually freeing any memory. But, as he
put it, "the thought of failing order-0 allocations after such a long time is
scary."

It is scary for a couple of reasons. One is that not all kernel developers are
diligent about checking every memory allocation and thinking about a proper
recovery path. But it is worse than that: since small allocations do not fail,
almost none of the thousands of error-recovery paths in the kernel now are
ever exercised. They could be tested if developers were to make use of the
kernel's fault injection framework, but, in practice, it seems that few
developers do so. So those error-recovery paths are not just unused and
subject to bit rot; chances are that a discouragingly large portion of them
have never been tested in the first place.

If the unwritten "too small to fail" rule were to be repealed, all of those
error-recovery paths would become live code for the first time. In a sense,
the kernel would gain thousands of lines of untested code that only run in
rare circumstances where things are already going wrong. There can be no doubt
that a number of obscure bugs and potential security problems would result._

