Basically, C function calls are doing the equivalent of alloca all the time. The size of stack needed by a function call is not known in advance. It may be a constant in that particular function, but what function is called depends on run-time tests, and indirect calls. (*fptr)(foo) could call one of fifty possible functions, whose stack consumption ranges from 50 bytes to 50,000. That's kind of like calling alloca with a random number between 50 and 50,000. And like with alloca, there is no detection. C is not robust against the problem of "no more room for the next stack frame"; it is undefined behavior.
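A minimal sketch of that point (function names and sizes are made up; the 50/50,000 figures just echo the comment):

```c
#include <string.h>

/* Each callee's frame size is fixed, but which one runs is a run-time choice. */
static void touch_small(char *out) { char buf[50];    memset(buf, 0, sizeof buf); out[0] = buf[0]; }
static void touch_large(char *out) { char buf[50000]; memset(buf, 0, sizeof buf); out[0] = buf[0]; }

/* The call site cannot know whether ~50 or ~50,000 bytes of stack are about
 * to be consumed -- much like alloca with a run-time size. */
static char call_indirect(int which)
{
    void (*fptr)(char *) = which ? touch_large : touch_small;
    char out = 1;
    fptr(&out);          /* frame size is decided only here */
    return out;          /* 0: the callee ran and zeroed its buffer */
}
```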
The simplified idea that "all alloca is bad" comes from parochial, in-the-box thinking.
(Sure, alloca for, say, temporary storage for catenating strings that come from "read a line of any length" routines, whose inputs are controlled by a remote attacker, is undeniably bad. However, even without using alloca, we can cause the problem that some input available to an attacker controls the depth of some (non-tail-optimized) recursion.)
If you are willing to publish that in a public header, you will break existing binaries whenever your stack struct changes size. Like, I get the point of using `alloca(stack_size())`... but I don't see why `alloca(STACK_SIZE)` is any better than `struct stack my_stack;`.
1. Stable ABI
2. A library that can change the size of its opaque struct
3. Stack allocation of the opaque struct
4. Statically sized stack frames (i.e. no VLAs or alloca)
Pick (any) three of them. You can't have all four.
An alternative would be to use padded structs: when the struct is newly introduced, make it substantially larger than necessary. Then all the clients are reserving the room already. A transparent (or not) union between the real struct type and some array can be used, or padding members. Additional requirements may be that the unused stuff must be initialized to zero by everyone, or else some version field has to be initialized or whatever.
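A sketch of the padding idea with made-up names; the reserved bytes give future versions room to grow without changing sizeof, and the zero-fill requirement lets a newer library detect fields an old client never set:

```c
#include <string.h>

/* v1 of the public struct: one real field, generous reserved space. */
struct widget {
    int mode;                   /* the only field used today */
    unsigned char reserved[60]; /* room for future growth */
};

static void widget_init(struct widget *w, int mode)
{
    memset(w, 0, sizeof *w);    /* zero everything, including reserved */
    w->mode = mode;
}
```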
Versioned symbols are another solution. Binary clients that are allocating the smaller, older structure are routed to compatibility functions, at least for those functions where it matters. This approach is seen inside glibc for instance.
An approach found in numerous places in Microsoft's WIN32 is to store the structure's size, as known at compile time to the given client, into a dedicated size field. The API then knows it is called by older compiled clients when the size they are passing is smaller than the current sizeof (that_struct).
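A sketch of that size-field handshake (the names are invented; WIN32 typically calls the field cbSize):

```c
#include <stddef.h>

/* Current (v2) layout.  An old client compiled against v1 passes a
 * struct that ends right after 'color'. */
struct params {
    size_t size;   /* client stores sizeof(struct params) as it knew it */
    int    color;  /* present since v1 */
    int    depth;  /* added in v2 */
};

/* The API checks the caller-supplied size to decide which fields exist. */
static int get_depth(const struct params *p)
{
    if (p->size >= offsetof(struct params, depth) + sizeof p->depth)
        return p->depth;   /* new client: the field is really there */
    return 8;              /* old client: fall back to a default */
}
```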
If someone finds a security bug in OpenSSL (Heartbleed, for instance), you only need a single update to the OpenSSL libs to fix every program that dynamically links against them. But that only works if you have a stable ABI: otherwise, every single program would have to be recompiled to fix the bug.
yes, and what I am saying is that in 2019, the cost of recompiling $everything is quite low.
You aren't, but your distro maintainer should be.
> not everything is available as source code to build from,
and most software not available as source code actually does ship its dependencies, because it would be madness to suppose that there will never be any ABI or API break at any point in the future
> not every OS is a GNU/Linux clone
GNU/Linux is actually the exception here - the two big others just ship and replace the whole OS on every major update (and I've heard that this is the case for minor Windows updates too nowadays, not sure how true that is)
I have never seen proprietary software that does not ship all of its shared libraries anyway - so, about the same time, I would say? E.g. look at all the Linux games that just come with the entire Ubuntu 12.04 or 14.04 userspace.
Please explain what you mean by "safe" in this context.
Edit: you may answer: "use alloca then". But the syntax for alloca is not that convenient, and the semantics are very different. A common use for a VLA is to declare a temporary array inside the body of a loop; this is not possible with alloca.
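A sketch of the difference (names made up): a VLA declared in the loop body goes out of scope, and its stack space is reclaimed, at the end of every iteration, whereas memory from alloca lives until the enclosing function returns, so alloca in a loop accumulates.

```c
#include <string.h>

/* VLA version: 'tmp' is a fresh array each iteration, released at the
 * closing brace.  Replacing the VLA with alloca(cols * sizeof(int))
 * would keep all 'rows' allocations alive until the function returns. */
static long sum_rows_vla(int rows, int cols, const int *data)
{
    long total = 0;
    for (int r = 0; r < rows; r++) {
        int tmp[cols];                               /* per-iteration VLA */
        memcpy(tmp, data + (size_t)r * cols, sizeof tmp);
        for (int c = 0; c < cols; c++)
            total += tmp[c];
    }   /* tmp's stack space is reclaimed here */
    return total;
}
```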
My problem is this:
void f(int n)
{
    char x[n];
    // what do I write here to check if everything is OK?
}
Allocating a non-VLA is not safe either. Your running program's fixed-size char x could go spectacularly wrong too.
This is a tiny size, and it would never pose a problem. I'm talking about allocating a few high-resolution video frames on a VLA for temporary computation. It is extremely natural to declare this temporary space inside a for loop as
float frame[3*width*height]; // temporary data
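Spelled out in context, that might look like the following; the loop, the size cutoff, and the heap fallback are illustrative additions (the parent comment only shows the declaration), added here to hedge against the stack-overflow concern raised elsewhere in the thread:

```c
#include <stdlib.h>

#define STACK_FRAME_LIMIT (256 * 1024)  /* arbitrary cutoff, tune per platform */

static void process(float *frame, size_t n)
{
    for (size_t i = 0; i < n; i++)
        frame[i] = 0.0f;                /* stand-in for real per-frame work */
}

static int run(int nframes, int width, int height)
{
    size_t n = (size_t)3 * width * height;
    for (int i = 0; i < nframes; i++) {
        if (n * sizeof(float) <= STACK_FRAME_LIMIT) {
            float frame[n];             /* VLA: released every iteration */
            process(frame, n);
        } else {
            float *frame = malloc(n * sizeof *frame); /* too big: use the heap */
            if (!frame)
                return -1;
            process(frame, n);
            free(frame);
        }
    }
    return 0;
}
```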
Cribbed from some Unix programmer's epitaph.
That doesn't mean that they're not safe, so long as the program reliably terminates. And you can catch stack overflow, it's just not easy, and made especially difficult from a software architecture perspective because neither C nor POSIX support per-thread signal handlers, so catching stack overflow requires cooperation across components and libraries. (Fortunately, POSIX does provide per-thread signal stacks via sigaltstack(2). And I suppose it should be possible to write a shared library that interposes pthread_create and signal/sigaction to support per-thread signal handlers.)
TL;DR: VLAs are fine if you're doing something you might otherwise be doing in FORTRAN. Otherwise, just don't. It's trivial to use GNU obstacks or to write your own, as I do. Just don't go down the rabbit hole of trying to solve all allocation needs; it's not a solvable problem.
Sadly, before Stack Clash was published, neither GCC nor clang/LLVM did this :(
Reason? Several security exploits.
- #define snd_pcm_info_alloca(ptr)
- size_t snd_pcm_info_sizeof(void)
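The pattern behind those two ALSA entries, sketched for a made-up library: the public header never exposes the struct's layout, only a function returning its current size, so callers can still stack-allocate it, and the size tracks whatever the installed library was built with.

```c
#include <alloca.h>
#include <stddef.h>
#include <string.h>

/* --- what would live in the library's .c file: the layout stays private --- */
struct ctx { int state; char buf[32]; };
size_t ctx_sizeof(void) { return sizeof(struct ctx); }
void   ctx_init(void *p) { memset(p, 0, sizeof(struct ctx)); }
int    ctx_state(const void *p) { return ((const struct ctx *)p)->state; }

/* --- what would live in the public header: size is known only at run time --- */
#define ctx_alloca(ptrp) \
    do { *(ptrp) = alloca(ctx_sizeof()); ctx_init(*(ptrp)); } while (0)
```

Note that because the size comes from a function call, a client binary keeps working even if a newer library version grows struct ctx, which is exactly the opaque-struct-on-the-stack case from the pick-three list above.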
Is this still bad if you have the proper alignment (by attribute or implicit placement in a struct), or have it wrapped in a packed struct? There's a lot of code that does this, e.g. overloading the definition of struct sockaddr.