If the purpose is to "use a limited amount of memory", I would suggest using a GeneralPurposeAllocator and setting "enable_memory_limit" and "requested_memory_limit": https://github.com/ziglang/zig/blob/8f481dfc3c4f12327499485.... If the purpose is to "only use the stack", then "allocating a huge chunk and using it with a bump allocator" feels a bit like cheating, to be honest...
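For reference, a minimal sketch of what that config could look like (assuming a Zig ~0.11-era std; `enable_memory_limit` is a comptime config flag and `requested_memory_limit` a runtime field, but the exact API may differ between versions):

```zig
const std = @import("std");

pub fn main() !void {
    // enable_memory_limit is part of the comptime GPA config;
    // requested_memory_limit is then settable on the instance.
    var gpa = std.heap.GeneralPurposeAllocator(.{ .enable_memory_limit = true }){};
    defer _ = gpa.deinit();
    gpa.requested_memory_limit = 8 * 1024 * 1024; // e.g. match an 8 MiB budget

    const allocator = gpa.allocator();
    // Allocations past the limit fail with error.OutOfMemory
    // instead of silently growing.
    const buf = try allocator.alloc(u8, 1024);
    defer allocator.free(buf);
}
```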
Another potential challenge is to pre-allocate instead: Have an _initialize_ phase which is allowed to allocate memory and then an _execution_ phase where you're using the allocated memory. This pattern is very common in high-performance programs.
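A rough sketch of that split (names like `samples` are illustrative, not from any real codebase):

```zig
const std = @import("std");

pub fn main() !void {
    const allocator = std.heap.page_allocator;

    // --- initialize phase: the only place allowed to allocate ---
    const samples = try allocator.alloc(f64, 10_000);
    defer allocator.free(samples);

    // --- execution phase: only reads and writes into pre-allocated memory ---
    for (samples, 0..) |*s, i| {
        s.* = @floatFromInt(i);
    }
}
```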
> use a GeneralPurposeAllocator and setting "enable_memory_limit" and "requested_memory_limit"
Interesting! I hadn't looked at GeneralPurposeAllocator too closely, but yes these seem like the right way to do things instead of abusing FixedBufferAllocator as I did.
> If the purpose is to "only use the stack"...
Not really, I just had to decide on some arbitrary upper bound on the mem usage, and the default stack size (8MiB) seemed like a decent choice. In retrospect, this challenge only took shape because my solution to Day1 used a FixedBufferAllocator backed by a buffer on the stack, and I realized how easy Zig made it to track allocs. I didn't fiddle too much with the general structure of the solution after that, and made it a "challenge" to see how far I could take it.
> Another potential challenge is to pre-allocate instead
Ah, that sounds much more difficult. This is also what TigerBeetle is doing. But one thing I didn't understand, even from that post: how would one deal with data structures that really depend on the input data, like the hashsets in TFA?
The simplest way I can think of is to have an arbitrary upper bound on the allocated memory and then keep checking it before every operation on any dynamic structure. That sounds tedious. Is there a better way?
If this is correct, it's likely an undocumented implementation detail, so probably not something you should rely on always being the case.
Just let the kernel handle it. The virtual memory and mapped memory abstraction the kernel has makes your program's implementation simpler.
This is simply false. Most devices are moving in this direction. Practically all phones people use are using a kernel that supports virtual memory. TVs now come with Linux too. All sorts of random devices use Linux now that powerful and cheap chips exist.
But how does it determine when it should write to disk? Does every potentially-OOMing operation get preceded by a check?
Take the case of a HashAggregate. The DB clearly cannot know at compile time how many unique keys will be present in the hashtable; it needs to resize at runtime. So does that mean all the hashtables are still using some form of Bump/Arena allocators backed by the pre-allocated memory?
Maybe I should just read the source code :)
You write a fixed-size number of key-value pairs to the file at a time. This is how LSM trees work: you chunk your data up into N sorted keys per chunk. I don't myself understand all the specifics, but this is the gist.
> Does every potentially-OOMing operation get preceded by a check?
If you allocate memory upfront and don't allocate any more memory, you can't OOM after the initial allocation. That's what TigerBeetle does.
Zig has some nice standard library containers for adding items while asserting that there's capacity. If we miscalculate, it is caught during tests because assertions fail.
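A small illustration of that pattern with `std.ArrayList` (assuming current std names; `ensureTotalCapacity` and `appendAssumeCapacity` have been stable for a while, but verify against your Zig version):

```zig
const std = @import("std");

test "reserve once, then append without allocating" {
    var list = std.ArrayList(u32).init(std.testing.allocator);
    defer list.deinit();

    // Reserve capacity up front; a miscalculated bound fails here,
    // not somewhere in the middle of the hot loop.
    try list.ensureTotalCapacity(3);

    // appendAssumeCapacity asserts (in safe builds) that room exists,
    // instead of silently reallocating.
    list.appendAssumeCapacity(1);
    list.appendAssumeCapacity(2);
    list.appendAssumeCapacity(3);
}
```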
I found this bit lovely: the author has independently reinvented the core idea of semispace copying garbage collectors (see eg https://wingolog.org/archives/2022/12/10/a-simple-semi-space...).
I am actually kind of surprised the author spent so much time figuring it out. The name of the allocator is not that descriptive, but at least to me it hints at it being a simpler rather than a full-featured allocator. I would also imagine he's using this in a very anti-patternic way: one would guess the point of this is to destroy the entire allocator on every iteration, rather than trying to free everything 'nicely', which would be a lot of wasted work. This is a rather common pattern in a lot of "high-level" embedded development like this.
I wouldn't be. People learn different programming techniques at different times. I'd actually assume quite a lot of programmers don't know what a bump allocator is until they eventually run into one in the wild. Kudos to the author for successfully debugging and discovering that.
> One would guess the point of this would be to destroy the entire allocator on every iteration
Not necessarily every iteration, but yes, these allocators are meant to be reset to 0 at a known time, when it's safe to do so.
Here's the documentation page: https://ziglang.org/documentation/master/std/#A;std:heap.Fix.... It just lists a couple methods, most of which have the description "No documentation provided". From those docs, it doesn't even look like there's a way to allocate memory using the FixedBufferAllocator. You might guess that it implements a bump allocator based on the fact that it only has an 'end_index' and a slice, but wow, I feel like an allocator is the kind of thing you really want to have documented well; especially a bump allocator, and especially especially a bump allocator whose name makes it sound like a general purpose allocator which happens to allocate within a fixed buffer.
I knew it was still early for Zig, but that's a bit disappointing.
They're pretty explicit about not putting too much effort into documenting the stdlib, because any given thing in there may or may not make the final cut when the stdlib is stabilized (this is deliberate: they don't want to leave people pissed off or burned out for putting effort into documentation that winds up getting nuked). While FBA (or something like it) will likely make the cut, I'd say maybe give them a break?
There is, but not directly through the FBA; you need to get a concrete Allocator struct out of it by calling the allocator() or threadSafeAllocator() functions.
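For example (a minimal sketch against the current std API):

```zig
const std = @import("std");

pub fn main() !void {
    var buffer: [1024]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&buffer);

    // The FBA itself is not an Allocator; allocator() hands out the
    // std.mem.Allocator interface backed by it.
    const allocator = fba.allocator();

    const slice = try allocator.alloc(u8, 16);
    allocator.free(slice);
}
```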
To fix this issue you need a completely different allocator design, e.g. a bitmap, which can keep track of individual locations within its buffer.
patternic is not a word.
I agree with all that, but patternic is not a word.
I’m not defending nor criticizing that fact or the OP, but it is the state of things today. Even the existence of the library docs is marked “experimental” on https://ziglang.org/learn/
Maybe it’s not emphasized enough.
(As someone who did it for AOC 2021)
Yep, neither the name nor what little documentation there is are really helpful, and that looks to be a long-standing issue (https://github.com/ziglang/zig/issues/3049).
Seems to me like this allocator should be renamed something like "FixedBufferBumpAllocator", which:
- leaves room for other fixed-buffer allocators (e.g. bitmap, slabs)
- spells out that there's something of note about the allocator, whose drawbacks the developer would either already be aware of or be able to look up easily
Transient allocators that do little to nothing on free, so you can do all the work at once at end of scope, are often what you want; if anything, a bump allocator freeing its tip is an optimisation.
The issue is not that it behaves this way, it’s that it’s not obvious at first glance that this is a bump allocator.
That's kinda my point? free is there and does something, but also silently does nothing if you violate a fairly subtle invariant. Kinda the definition of "error-prone", and the whole blog post seems to prove it, as the leak was essentially caused by the author not realizing that free was silently doing nothing. I understand why bump-allocators exist, I'm just saying this particular one's API has quite the footgun.
It’s the exact opposite of your point.
> free is there and does something, but also silently does nothing if you violate a fairly subtle invariant.
Again, not an invariant.
> the leak was essentially caused by the author not realizing that free was silently doing nothing
The leak was caused by the author not knowing this is a bump allocator because that was not clear from the naming (and the documentation is essentially non-existent).
> I'm just saying this particular one's API has quite the footgun.
It’s not the API that’s a footgun, it follows the standard allocator API so it can be used wherever an allocator is expected. If it did not, its usage scope would be extremely limited as you'd only be able to use it for bespoke allocations, and wouldn't be able to use it for allocating e.g. arrays or sets or maps.
There is absolutely no such invariant here that allocations must be freed in the reverse order that they were allocated in. This was never a part of the contract.
> this particular one's API has quite the footgun
Then how would you use it in the cases where you want free to be a no-op?
I think that's half the point of the allocator: free shouldn't do anything, certainly not throw an error. You can free the buffer behind the allocator later, or for some simple command-line tools you'll just let the OS free the memory when the process finishes.
Perhaps some kind of debug message could be OK. It would be nice if, when you have problems with allocation, you could activate allocation-related debug messages, one of which would be "free was called on something other than the last allocation, so it was ignored".
Every allocator other than a general purpose allocator has a use case where it's faster, and assumes you know what you're doing with it.
For games it's easy to find a safe spot (end of frame). For a server, you might have a pool of bump allocators (one per connection), and you'd reset them after every HTTP request. Etc.
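A sketch of that per-request pattern with a FixedBufferAllocator (the loop and sizes are hypothetical; `reset()` exists on the current FBA, but check your std version):

```zig
const std = @import("std");

pub fn main() !void {
    var buffer: [64 * 1024]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&buffer);

    var request: usize = 0;
    while (request < 100) : (request += 1) {
        // The "safe spot": reclaim the whole region at the end of the request.
        defer fba.reset();

        // Scratch memory lives only for this iteration.
        const scratch = try fba.allocator().alloc(u8, 512);
        _ = scratch; // ... handle the request using scratch memory ...
    }
}
```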
But it should be clearly spelled out.
When your type is already called
This allocator is useful to write extremely efficient algorithms in certain scenarios.
If you wanted to throw on double free, you'd have to track what was freed and this tracking doesn't come for free.
The documentation has a section on choosing an allocator.
1. Frees are never supposed to error
2. You want code to be interchangeable so that you can try out different allocators.
3. Eventually when someone writes a lifetime analysis tool for zig, you'll want to signify the memory as freed
4. If your program is long-running (the bumping is part of a subtask) you probably want the bump allocator to be itself freed on its own lifetime, so you'll eventually free that memory.
This makes this allocator fast, but it should clearly be named/described I agree.
Personally, I would have called this a StackAllocator; that way the alloc/free ordering requirement is obvious. I would have made the default behavior to panic() if you don't satisfy the precondition of freeing the most recently allocated buffer.
If somebody really wanted to make free a no-op, I'd offer a feature flag to turn that on.
It doesn't reclaim space on free - it's a no-op.
The only thing you can do without extra tracking is reclaim space for the last allocated buffer - and Zig does just that. You can do it because all the information needed is right there; that's the only reason.
You could add an extra rule where free on the last allocated buffer triggers all pending reclamations on the tail - but you'd have to add extra tracking, e.g. at the end of the buffer, growing inwards.
But this adds extra complication which is out of scope for this allocator. You can have another one that does it.
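The tip-reclamation behavior is easy to see in a small test (a sketch; `end_index` is a public field on the current FBA, but that's an implementation detail that may change):

```zig
const std = @import("std");

test "only the most recent allocation is reclaimed" {
    var buffer: [128]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&buffer);
    const allocator = fba.allocator();

    const a = try allocator.alloc(u8, 32);
    const b = try allocator.alloc(u8, 32);

    allocator.free(a); // not the tip: silently a no-op, space stays used
    allocator.free(b); // the tip: end_index moves back, space is reclaimed

    // After both frees, only b's 32 bytes came back.
    try std.testing.expectEqual(@as(usize, 32), fba.end_index);
}
```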
Let me know if you need a full explanation as to why.
However, neither matches Zig's expected alloc/free allocator interface, which is an interesting design challenge on its own.