
This is actually a fine thing to do. I don't remember where I read it, but a good analogy for trying to free memory at program exit is trying to clean the floors and walls of a building right before it is demolished.

This is also why Valgrind separates reachable and unreachable memory and only considers unreachable memory as leaks.

How do you know the program is even supposed to terminate? Maybe it's a server, as in this case. Maybe the code will be reused someday as a subfunction or library within a larger application, instead of being launched and terminated directly by the OS. Maybe it's an embedded application in a 24/7 factory somewhere. Maybe it's on its way to the Kuiper Belt. Or maybe it's just supposed to stay up and running for longer than the average Windows 10 update period.

In any case, hiding this sort of behavior in a way that sucks down days of debugging time on the part of one expert programmer after another, after another, after another, is terrible engineering.

Freeing memory at program exit may be unnecessary, but being unable to free memory while the program is running is terrible.

On iOS and Android you are expected to free whatever unused memory you can when you are notified of a low-memory situation.

It's OK to do, but it makes memory analysis difficult. If your app exits with a lot of allocated memory, it's hard to tell what's a real leak and what's not.

This behaviour is one of those cache memory leaks, where even though the memory is reachable it is still effectively leaked, because it's soaked up by some data structure and never released to the rest of the system.

So it's not a traditional leak but because memory usage would continue to grow it causes the long lived process to choke itself and die.

A good example is something like doing a "cp -a". To preserve hardlinks you need a mapping of inodes you've already copied, and trying to free that at exit can and will take time. This was an actual bug in coreutils.

I have to wonder why the C standard library didn't include a Pascal-style mark/release allocator. A naive implementation wouldn't be much faster than free()'ing a bunch of allocations manually, but the general idea offers possibilities for optimization that otherwise aren't available to a conventional heap allocator.

Because (m)alloc predates mmap by a decade or so [1], and you expressly don't want stack-like allocation semantics when using malloc (otherwise you'd just put it on a/the stack).

[1] And you cannot have more than one heap without mmap. Without mmap, you only have sbrk = the one heap. On UNIX and those that pretend to be, anyway.

Raymond Chen often uses this analogy on his blog, The Old New Thing.
