
Tinyalloc: replacement for malloc/free in unmanaged, linear memory situations - ingve
https://github.com/thi-ng/tinyalloc
======
abcd_f
If I read the code correctly, ta_free() is O(n), where n is the number of
allocated blocks. That's not very good.

PS. Also, the block alignment is aligning the wrong thing. It should be
aligning the start of the public (returned) part, not the block size. That's
done by padding the Block struct as required (if required)... and not to a
hardcoded 8 bytes. Otherwise you'll get SIGBUS on RISC boxes.
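
A minimal sketch of what that padding could look like (the header layout here is illustrative, not tinyalloc's actual Block struct):

```c
#include <stdalign.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative block header, not tinyalloc's actual Block. */
typedef struct {
    size_t size;
    void  *next;
} BlockHeader;

/* Round the header size up to alignof(max_align_t) so the address
   handed back to the caller (raw block + HEADER_SIZE) is aligned
   for any object type, instead of trusting a hardcoded 8 bytes. */
enum {
    HEADER_SIZE = (sizeof(BlockHeader) + alignof(max_align_t) - 1)
                  / alignof(max_align_t) * alignof(max_align_t)
};

/* Given a suitably aligned raw block, compute the public pointer. */
static void *user_ptr(void *raw_block) {
    return (char *)raw_block + HEADER_SIZE;
}

/* Demo block for exercising user_ptr(). */
static alignas(max_align_t) unsigned char demo_block[64];
```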

PPS. At the risk of stating the obvious, this is nothing more than a toy
allocator, with lots of loose ends. A lab exercise for the second or third
year of an average CS course, maybe, but this is not something suitable for
production. Perhaps it works for some specific cases, but these aren't
specified. So seeing this massively upvoted and at the top of HN is a bit
strange.

~~~
ChuckMcM
And this: _Also, allocation will fail when all blocks in the fixed size block
array are used, even though there might still be ample space in the heap
memory region..._

Basically you can fail memory allocation with a lot of small allocations that
aren't using all of a block. This situation bit me when using some C++ code on
a Cortex-M class processor where the C++ code ended up creating a bunch of
objects with very small allocations in them.

That said, the code is pretty clean and easy to read, so it's a pretty
_understandable_ allocator. Some malloc implementations are so opaque you have
to study them for a week to tease out how they work.

------
AHTERIX5000
Looks nifty and simple. Last time I had to work in a low-memory environment
(64k heap for Lua running on MIPS) I ended up implementing something close to
TLSF [1], which lets you choose some trade-offs between performance and
allocation efficiency.

In the end we still had to add one layer of hack^H^H engineering tricks to
work around fragmentation issues in the worst case :-)

[1]
[http://www.gii.upv.es/tlsf/files/ecrts04_tlsf.pdf](http://www.gii.upv.es/tlsf/files/ecrts04_tlsf.pdf)

------
Nursie
That's cool. I've written small but less flexible things before, based on
arrays as used/freed block maps rather than lists, for embedded devices with
128K of RAM.

This seems like it has some nice features. Though a 3K overhead could be
significant in such a constrained environment (I guess if it's only managing
128k, that overhead could shrink).

~~~
SlowRobotAhead
3k of overhead where you're trying to reduce from malloc is definitely a lot.
One of my products has 8K total; I'm not going to waste 3 of that replacing
malloc.

I’m fairly happy that MISRA C forbids malloc. I can’t use it even if I wanted
to, so I need to be more careful about how memory is used. Annoying at first,
but it makes a lot of sense later on.

~~~
okl
It is quite possible (and it can be reasonable) to deviate from that rule. For
example you could write a wrapper for malloc that only allows dynamic
allocation in the initialization phase of your program. This can be useful to
allocate memory for opaque-pointer-style types.
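
A sketch of such a wrapper (the names `init_alloc` and `init_done` are made up for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical wrapper: dynamic allocation is allowed only during
   the init phase; once init_done() runs, any further call asserts. */
static bool g_init_phase = true;

void init_done(void) { g_init_phase = false; }

void *init_alloc(size_t n) {
    assert(g_init_phase && "dynamic allocation after init phase");
    return malloc(n);
}
```

After init_done() is called, every remaining allocation site becomes a hard failure in debug builds, which keeps the MISRA-style discipline while still letting opaque types size themselves at startup.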

~~~
CyberDildonics
There are other ways to get chunks of memory separate from the stack - static
uninitialized byte arrays.
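
For example (the 16 KB size is arbitrary):

```c
#include <stdalign.h>
#include <stddef.h>
#include <stdint.h>

/* A static, uninitialized byte array lives in .bss: it costs no
   stack space and needs no heap call. Aligning it to max_align_t
   makes it safe to hand to a sub-allocator as its memory region. */
static alignas(max_align_t) unsigned char heap_region[16 * 1024];

size_t heap_region_size(void) { return sizeof heap_region; }
```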

~~~
monocasa
Yeah, but it's nicer to define that kind of stuff in your linker script IMO,
where sizes of large memory regions can be thought of holistically.

~~~
CyberDildonics
The parent post was talking about using malloc once at the start of the
program, are you talking about defining that with your linker script?

Also if the goal is to get one chunk of memory with a single call, why use
malloc at all? Why not use the direct memory mapping command?

~~~
monocasa
I don't think he's talking about using malloc "once", he's talking about only
using it during init. In practice you end up calling it a bunch in that model;
the last design I shipped like that ended up making a few hundred malloc
(well, operator new, because this was C++) calls at init.

Under that system malloc/new was just a pointer bump, and an assert if there
wasn't enough room. Free wasn't linked, and delete asserted. That heap region
(or mine had several heaps that lived in different memory with different
characteristics that could be specified with a new overload) is way easier to
define in your linker.
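
A minimal sketch of that scheme (a static array stands in for the linker-script region, and all names here are illustrative):

```c
#include <assert.h>
#include <stdalign.h>
#include <stddef.h>
#include <stdint.h>

/* Init-only heap: malloc is a pointer bump, free is not supported.
   In a real build this region would come from the linker script. */
static alignas(16) unsigned char heap[8 * 1024];
static size_t heap_top = 0;

void *bump_alloc(size_t n) {
    n = (n + 15u) & ~(size_t)15u;  /* keep 16-byte alignment */
    assert(heap_top + n <= sizeof heap && "heap exhausted");
    void *p = heap + heap_top;
    heap_top += n;
    return p;
}

void bump_free(void *p) {
    (void)p;
    assert(0 && "free() is not supported in this model");
}
```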

------
felixguendling
Since some comments point out that this code might not be production-ready:
can anyone recommend other solutions that are able to manage a given block of
memory? My understanding of libraries like jemalloc is that they replace
malloc (using the system calls brk and mmap [0]), but I can't hand them an
already allocated block of memory to manage, right?

Something like

    
        memory_manager m(some_buffer);
        allocation a = m.allocate(512);

would be nice. :-)

[0]
[https://medium.com/iskakaushik/eli5-jemalloc-e9bd412abd70](https://medium.com/iskakaushik/eli5-jemalloc-e9bd412abd70)

~~~
lfy_google
[https://android.googlesource.com/platform/external/qemu/+/em...](https://android.googlesource.com/platform/external/qemu/+/emu-
master-dev/android/android-emu/android/base/SubAllocator.h)

[https://android.googlesource.com/platform/external/qemu/+/em...](https://android.googlesource.com/platform/external/qemu/+/emu-
master-dev/android/android-emu/android/base/SubAllocator.cpp)

[https://android.googlesource.com/platform/external/qemu/+/em...](https://android.googlesource.com/platform/external/qemu/+/emu-
master-dev/android/android-emu/android/base/address_space.h)

These three files should be a fairly self-contained, minimal way to allocate
into existing buffers. No attempt is made at thread safety, though (we assume
the user synchronizes that themselves).

------
graycat
Consider using _sequential_ instead of _linear_.

------
smileypete
If handles and setters/getters can be used, then it opens up the possibility
of defragging or 'garbage collecting' the memory.

A neat trick would be the ability to 'trim' the data used by a handle; then,
when it comes to defrag time, just deal with the remaining part and free up
the trimmed space.
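
A sketch of the handle idea (everything here is illustrative, not tinyalloc's API): callers hold an index into a handle table instead of a raw pointer, so the allocator is free to move the underlying storage during compaction and just update the table.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

enum { MAX_HANDLES = 8, ARENA_SIZE = 256 };  /* sizes are arbitrary */

static unsigned char arena[ARENA_SIZE];
static void *table[MAX_HANDLES];     /* handle -> current address */
static size_t arena_used = 0;
static int n_handles = 0;

typedef int handle_t;

/* Bump-style allocation that returns a handle, never a raw pointer. */
handle_t handle_alloc(size_t n) {
    assert(n_handles < MAX_HANDLES && arena_used + n <= ARENA_SIZE);
    table[n_handles] = arena + arena_used;
    arena_used += n;
    return n_handles++;
}

/* Callers dereference the handle on every access instead of caching
   the pointer, so moves stay invisible to them. */
void *handle_get(handle_t h) { return table[h]; }

/* After compaction the data has moved, but the handle stays valid. */
void compact_move(handle_t h, void *new_addr, size_t n) {
    memmove(new_addr, table[h], n);
    table[h] = new_addr;
}
```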

------
saagarjha
Is this missing code? I can’t find a definition of Heap anywhere.

~~~
Ace17
[https://github.com/thi-
ng/tinyalloc/blob/master/tinyalloc.c#...](https://github.com/thi-
ng/tinyalloc/blob/master/tinyalloc.c#L50)

~~~
saagarjha
Thanks. In my defense, GitHub search is garbage: [https://github.com/thi-
ng/tinyalloc/search?q=Heap&unscoped_q...](https://github.com/thi-
ng/tinyalloc/search?q=Heap&unscoped_q=Heap)

~~~
johnmarcus
GitHub search is optimized for finding accidentally committed credentials, not
useful code.

------
howard941
How does it stack up to dlmalloc?

~~~
zik
It's a tiny allocator designed for tiny embedded systems. It's not intended
for large desktop machines the way dlmalloc() is. dlmalloc() can't run on tiny
embedded systems because it has relatively high overhead. This one provides
far fewer features but has a very small memory footprint. It's really intended
for cases where your embedded program mostly allocates memory once on startup
and doesn't usually free() much at all.

------
lbj
It's a nifty tool, but I can't help feeling a little disappointed that we're
still having to deal directly with issues such as memory allocation in 2019.

~~~
chrisseaton
How do you think we should be dealing with memory allocation instead?

~~~
blattimwind

        addr = getrandom(8);
        mem = mmap(addr, len, PROT_RWX, MAP_FIXED|MAP_ANON, -1, 0);

~~~
saagarjha
> PROT_RWX

Hmm…

~~~
blattimwind
The X is for fleXibility.

