

Handling out of memory conditions in C - yan
http://eli.thegreenplace.net/2009/10/30/handling-out-of-memory-conditions-in-c/

======
tptacek
The "segfault policy" is an extremely bad idea.

Over the past couple years, several major security vulnerabilities have been
caused by programs that took the "segfault" path making attacker-controlled
array references to NULL pointers. It's not hard to see that if you index
something with a standard native integer, 0 + index can mean any word in
memory. If you write to that, that's game over.

It's for a similar reason that I don't recommend developers explicitly check
malloc, or use xmalloc wrappers. Both are opportunities to make an error that
shouldn't be possible in properly deployed code. Adobe Flash had explicit
error checking with a convoluted O-O-M policy, forgot a check, and Mark Dowd
was able to use a Flash header field to create a 32 bit offset to a NULL
pointer, and:

http://chargen.matasano.com/chargen/2007/7/3/this-new-vulnerability-dowds-inhuman-flash-exploit.html

xmalloc addresses this, except if you forget to use xmalloc, or if you forget
that "strdup" needs an "x" too, or if you use a library that calls malloc.

The safest approach is just to configure malloc to abort (or invoke your O-O-M
handler via longjmp or signal). Most C libraries have explicit support for
this, and every development platform will let you preload a malloc wrapper if
yours doesn't.

For what it's worth, I came by this opinion not because of security, but
because as a systems programmer I was tired of dealing with useless
conditionals on a function that's called so often; at one job, I shrank my
codebase by 30% (!) by losing malloc checks and the associated "this function
should return -1 to indicate malloc failed inside of it" detritus it carried.
The "just preload a malloc that aborts" rationale I got from a friend (then)
at Juno.

~~~
jrockway
The article says:

"Why abort with an error message, when a segmentation fault would do? With a
segfault, we can at least inspect the core dump and find out where the fault
was."

abort() also produces a core dump, so the segfault policy is a bad idea for
that reason too.

------
DarkShikari
Perhaps I'm missing this, but what about the "return failure" policy (for
libraries)? If malloc fails, log an error and return a failure to the calling
application. It isn't recovery, since it doesn't attempt to continue working,
but it doesn't abort() the whole program (including calling application)
either.

The calling application can then close the current handle to our library if it
chooses (thus freeing up all of the memory again).

(This is what our library, x264, does, and IMO it makes the most sense in
situations where it would be insane to try to recover, but where one doesn't
want to kill the calling app either. We added malloc checking when it became
possible to set options that would hit 2GB allocations on a 32-bit system.)

------
tsandall
Checking the malloc return value doesn't guarantee that you'll be OK.

From the malloc man page (under BUGS):

"By default, Linux follows an optimistic memory allocation strategy. This
means that when malloc() returns non-NULL there is no guarantee that the
memory really is available."

It goes on to describe how this behavior can be turned off.

------
bcl
Very good article; I like the examination of various open source projects' use
of memory allocation and failure checking.

For me personally it depends on the project. If there is nothing to clean up
in the rest of the code, an abort in the malloc function is fine. But if there
are other things to deal with at shutdown, you have to bite the bullet and
check your return values all the way back up the call stack.

------
vlisivka
Allocate 1MB of memory upfront. When OOM happens, free that memory and run the
recovery mechanism.

A similar policy is used in Linux for disk space: a portion of each filesystem
(5% by default on ext filesystems) is reserved for root.

PS. The Linux OOM killer already does something similar: it kills a process to
free some memory.

~~~
nuclear_eclipse
_Allocate 1MB of memory upfront. When OOM happens, free that memory and run
recovery mechanism._

No OOM recovery is foolproof. What if the OOM situation is caused by another
process constantly consuming memory? As soon as you free your 1MB reserve,
the other process could immediately consume it and again you're in an OOM
state.

Granted, the likelihood of this is slim, but then again, so is an OOM
situation, period. In the world of virtual memory and swap space, it pretty
much takes a rogue process to trigger an OOM error in the first place.

~~~
vlisivka
Older JVMs behave this way: in an OOM situation they run a garbage collection
pass to reclaim some unused memory. The GC itself requires some memory to
operate, so the JVM reserves some memory for the GC.

