
A large complex application seems like exactly the kind of environment where being stuck with a single allocator would be enraging. I personally like the idea of being able to give each component of a large system its own fixed chunk of memory (and an allocator over that chunk), such that if one component goes crazy with memory consumption it's isolated to that component instead of choking out the whole system.
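In Zig (the topic at hand) that's just a few lines with std.heap.FixedBufferAllocator; a minimal sketch (the 64 KiB budget and the names are made up):

    const std = @import("std");

    pub fn main() !void {
        // Give one component a fixed 64 KiB slab; anything past that fails
        // with error.OutOfMemory instead of starving the rest of the system.
        var buf: [64 * 1024]u8 = undefined;
        var fba = std.heap.FixedBufferAllocator.init(&buf);
        const component_allocator = fba.allocator();

        const scratch = try component_allocator.alloc(u8, 1024); // comes out of buf
        defer component_allocator.free(scratch);
    }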



That makes a little bit of sense if by "component" you mean not a software component but a unit of work users care about (e.g. a document).


It's applicable in either case:

- As you mentioned, if I'm editing a document, it's useful to have an allocator on a chunk of memory dedicated to that document. When I close the document, I can then simply free that chunk of memory - and everything allocated against it. (Zig's ArenaAllocator, sketched after this list, is exactly this pattern.)

- If I'm implementing an operating system, I'm probably going to want to give each application, driver, etc. a limited amount of memory and an allocator over that memory, both so that I can free a whole process's memory at once when it terminates and so that a single process can't gobble up memory unless my operating system specifically grants it more (i.e. by itself allocating more memory and making the process's allocator aware of it).
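Zig's std.heap.ArenaAllocator is the off-the-shelf version of the document case; a minimal sketch (the Document wrapper is my own, hypothetical):

    const std = @import("std");

    // Everything a document allocates goes through its own arena, so
    // closing the document is one deinit() that frees all of it at once.
    const Document = struct {
        arena: std.heap.ArenaAllocator,

        fn open(backing: std.mem.Allocator) Document {
            return .{ .arena = std.heap.ArenaAllocator.init(backing) };
        }

        fn close(self: *Document) void {
            self.arena.deinit(); // frees every allocation made for this document
        }
    };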


> When I close the document, I can then simply free that chunk of memory - and everything allocated against it.

You probably don't want to do this directly. Instead you want to walk the object graph and run cleanup code for everything in that graph, because in general there will be resources that aren't just memory that need to be released, and for consistency with normal operation, it should deallocate memory as it goes.
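Roughly (a sketch with made-up Document/Attachment types, against a Zig 0.11-era std where ArrayList carries its allocator):

    const std = @import("std");

    const Attachment = struct { file: std.fs.File };
    const Document = struct { attachments: std.ArrayList(Attachment) };

    // Walk the document's objects and release the non-memory resources
    // (here, file handles) first, deallocating memory as we go - the
    // same order of operations as during normal editing.
    fn closeDocument(gpa: std.mem.Allocator, doc: *Document) void {
        for (doc.attachments.items) |att| att.file.close();
        doc.attachments.deinit();
        gpa.destroy(doc);
    }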

You probably don't want to allocate an actual "chunk of memory" either. That just creates unnecessary fragmentation. All you really need is accounting and the ability to report when you're consuming too much memory.

Your driver example isn't really a case of allocating memory per software component. You'd actually want to allocate per device, not per driver module; it's just confusing because in many cases there's only one device per driver. But if you can plug in many devices that use the same driver, you'd want independent allocation accounting per device.


> in general there will be resources that aren't just memory that need to be released

Zig already handles this with its "defer" feature; as a resource goes out of scope, it can be released automatically. In the document example, that document's existence would likely be a running function, and as that function terminates, its "defer" statements would kick in, freeing the document's chunk of memory and releasing any file descriptors and such.
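For instance (a sketch; pairing an arena with the document-as-a-function idea is my assumption):

    const std = @import("std");

    // The document's lifetime is a function call; defers run in reverse
    // order on every exit path, releasing memory and descriptors alike.
    fn runDocument(backing: std.mem.Allocator, path: []const u8) !void {
        var arena = std.heap.ArenaAllocator.init(backing);
        defer arena.deinit(); // frees the document's whole chunk on the way out

        const file = try std.fs.cwd().openFile(path, .{});
        defer file.close(); // releases the descriptor on the way out

        // ... edit the document, allocating via arena.allocator() ...
    }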

> You probably don't want to allocate an actual "chunk of memory" either. That just creates unnecessary fragmentation.

If anything, that should help reduce fragmentation, or at least help reduce its impact, since you have better control over whether that allocation exists as a contiguous block.

> All you really need is accounting and the ability to report when you're consuming too much memory.

Which is trivial to do when you know for sure that a given component can only work with a given chunk of memory.

But yeah, there's nothing stopping anyone from implementing an allocator that cares nothing about where its bytes actually live and just keeps a running tab of how much memory it has handed out (sketched below). That is: custom allocators are an elegant and simple way to implement that accounting, since that's basically what an allocator already is.
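Zig's std.mem.Allocator is just a context pointer plus a vtable, so a pure-accounting wrapper stays small. A sketch against the 0.11-era interface (alignments passed as log2 u8 there; the exact signatures have shifted between releases):

    const std = @import("std");

    // Forwards everything to a child allocator and keeps a running tab,
    // failing any allocation that would push past a fixed budget.
    const BudgetAllocator = struct {
        child: std.mem.Allocator,
        budget: usize,
        used: usize = 0,

        fn allocator(self: *BudgetAllocator) std.mem.Allocator {
            return .{
                .ptr = self,
                .vtable = &.{ .alloc = alloc, .resize = resize, .free = free },
            };
        }

        fn alloc(ctx: *anyopaque, len: usize, ptr_align: u8, ret_addr: usize) ?[*]u8 {
            const self: *BudgetAllocator = @ptrCast(@alignCast(ctx));
            if (self.used + len > self.budget) return null; // over budget: report OOM
            const result = self.child.rawAlloc(len, ptr_align, ret_addr);
            if (result != null) self.used += len;
            return result;
        }

        fn resize(ctx: *anyopaque, buf: []u8, buf_align: u8, new_len: usize, ret_addr: usize) bool {
            const self: *BudgetAllocator = @ptrCast(@alignCast(ctx));
            if (new_len > buf.len and self.used + (new_len - buf.len) > self.budget) return false;
            if (!self.child.rawResize(buf, buf_align, new_len, ret_addr)) return false;
            self.used = self.used - buf.len + new_len;
            return true;
        }

        fn free(ctx: *anyopaque, buf: []u8, buf_align: u8, ret_addr: usize) void {
            const self: *BudgetAllocator = @ptrCast(@alignCast(ctx));
            self.child.rawFree(buf, buf_align, ret_addr);
            self.used -= buf.len;
        }
    };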

> But if you can plug in many devices that use the same driver, you'd want independent allocation accounting per device.

We're probably talking about the same thing here, then, but with slightly different terminology (and perhaps different structure); I'd be pushing for each device to be controlled by an instance of a driver (much like how an ordinary process is an instance of a program), and it would be those per-device instances that would each have their own allocator. Those instances are what I'm calling "drivers" in this context; they might share the same code, but they run independently (or at least they should run independently; a single malfunctioning disk shouldn't bring down all the other disks).
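Something like this, say (hypothetical names, with FixedBufferAllocator standing in for whatever the kernel would actually hand each instance):

    const std = @import("std");

    // One driver *instance* per device, each owning its own slab, so one
    // runaway disk can't starve its siblings.
    const DiskDriver = struct {
        fba: std.heap.FixedBufferAllocator,

        fn attach(slab: []u8) DiskDriver {
            return .{ .fba = std.heap.FixedBufferAllocator.init(slab) };
        }
    };

    pub fn main() !void {
        var slab_a: [16 * 1024]u8 = undefined;
        var slab_b: [16 * 1024]u8 = undefined;
        var disk_a = DiskDriver.attach(&slab_a);
        var disk_b = DiskDriver.attach(&slab_b); // same code, independent memory

        _ = try disk_a.fba.allocator().alloc(u8, 512); // draws only from slab_a
        _ = try disk_b.fba.allocator().alloc(u8, 512); // draws only from slab_b
    }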


> that document's existence would likely be a running function

No, that would mean an application managing multiple documents would need one thread per document, which is not normal practice for GUIs. In fact it would then need one event loop per document thread, which isn't even possible on many platforms.

"defer" simply doesn't serve as a wholesale replacement for destructors, but that's a tangent to this discussion.

> If anything, that should help reduce fragmentation

No, there would be fragmentation at document granularity. For example, if you create a document, add a lot of content to it, then delete some of that content, then do that again for several documents, the memory used would be the sum of the maximum sizes of the documents.

I agree with the rest of your comment.


> No, that would mean an application managing multiple documents would need one thread per document, which is not normal practice for GUIs.

Unless those functions are async, which Zig also supports (even on freestanding targets!). Single OS thread, single event loop, many concurrent cooperatively-scheduled functions. Or you can get fancy and implement a VM that runs preemptively-scheduled userspace processes, essentially reinventing Erlang's abstract machine (which is exactly a pet project I'm working on, on that note).

And even keeping each document in its own (OS) thread ain't really that unprecedented; browsers already do this, last I checked (each open tab being a "document" in this context) - in some cases (like Chrome) even doing one "document" per process.

> For example, if you create a document, add a lot of content to it, then delete some of that content, then do that again for several documents, the memory used would be the sum of the maximum sizes of the documents.

Would that not also be the case if all those documents used a single shared block of memory? Again, splitting things up helps contain fragmentation here, especially if you know that most documents won't exceed a certain size (in which case fragmentation is only an issue for data beyond that boundary) - or, better yet, if you ain't storing the whole document in memory at all, in which case the buffer of actively-in-use data can be fixed. Further, if each allocation is a whole page of memory, that's about as much control over fragmentation as an application can hope for short of being the OS itself (and it probably won't make much of a difference if those pages are scattered across RAM anyway; swapping would definitely suffer on spinning rust, but that's already bad news performance-wise).
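For what it's worth, page-granularity allocation is exactly what Zig's std.heap.page_allocator gives you (minimal sketch):

    const std = @import("std");

    // page_allocator requests whole pages straight from the OS
    // (mmap/VirtualAlloc), so each allocation is page-granular.
    pub fn main() !void {
        const pages = try std.heap.page_allocator.alloc(u8, 4096);
        defer std.heap.page_allocator.free(pages);
    }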


> And even keeping each document in its own (OS) thread ain't really that unprecedented; browsers already do this, last I checked (each open tab being a "document" in this context) - in some cases (like Chrome) even doing one "document" per process.

That is not correct. (Source: I am a former Mozilla Distinguished Engineer.)

Chrome (and Firefox, with Fission enabled) do one process "per site", e.g. one process for all documents at google.com. (In some cases they may use finer granularity for various reasons, but that's the default.) In each process, there is one "main thread" that all documents share.

> Would that not also be the case if all those documents used a single shared block of memory?

No. Memory freed when you delete content from one document would be reused when you add content to another document.



