Used alloca to allocate a string on the stack. Depending on the size of the string, the colours on the dialog would either look normal or have a rainbow-like colour.
Turns out alloca occasionally allocated the data at an unaligned memory address; this caused the Quartz compositor, which depended on aligned memory, to draw text with goofy colours.
Moral of the story: I don't use alloca anymore, and if I do, I ensure that it's aligned properly.
Et voilà, a library function that could have been portable between the smallest microcontroller and the largest workstation becomes restricted to a single application.
Not to mention that using stack allocation on a low-memory device is extremely dangerous... at least with malloc you get a nice diagnostic (a NULL return), while alloca will just do random program damage.
In really small embedded systems you should not be allocating anything meaningful on the stack anyway.
Isn't size_t supposed to be large enough to fit the cardinality of the largest type you can create?
(Used · for multiplication, because HN turns stars into emphasis.)
Regardless, even without these unobtainium big machines, the cardinality of the largest array for a platform (with any number of dimensions) must be representable in size_t. Too large an array would not fit in RAM, and you should get a compile/link/out-of-memory error.
Except we're talking about variable sized arrays, where the wanted size is not known until runtime (e.g. from user input). We can't check this at compile time.
This also isn't the same problem as an out-of-memory error; we need to check that n·n·sizeof(float) didn't overflow size_t, as well as that the memory allocation succeeded. Something like:
int n = /* run-time value */;
// Check n is usable as a size (a negative n would wrap when cast to size_t).
if (n <= 0) <error>
size_t sz = (size_t)n;
// Compute n·n·sizeof(float), checking each multiplication for overflow.
if (sz > SIZE_MAX / sz) <error>
sz *= sz;
if (sz > SIZE_MAX / sizeof(float)) <error>
sz *= sizeof(float);
float *matrix = malloc(sz);
// Check allocation succeeded.
if (matrix == NULL) <error>
> The type size_t is an implementation-defined unsigned integer type that is large enough to contain the size in bytes of any object.
And this is why runtimes with moving garbage collectors can be faster than malloc - allocation with such a GC is always this cheap even when it's to the heap. (The costs move elsewhere, of course, but ideally to another thread, and are temporally and spatially grouped so that the work uses cache better.)
Best approach is still to minimize dynamic allocation as much as possible. One can often get surprisingly far with an entirely static memory layout that's defined upfront at application start.
How does this work when the heap address space becomes fragmented, with randomly distributed available space and objects that are still live? You can't just bump when the next piece of memory may still be in use, and when you're not able to move memory.
How do you know this to be true?
And why isn't a GC a 'proper' allocator?
There isn't any.*
The author shows what's at best a minor convenience, which is in no way worth all the downsides the author himself lists. For any actual use he'd have to wrap it in a function, if merely to have an assertion checking the array ranges. Oh, and have a version for compilers which don't support VLAs at all. At which point he might as well do the right thing always instead of using a VLA.
The author must be aware this isn't worth it. I guess the actual intent of the post was to show off the author's knowledge, and indeed the author knows C and taught me some things I didn't know (VLAs are even worse than I thought they were).
* Note that I'm talking about VLA, not stack allocation. There are better ways to do that when that's useful.