> Suppose you have 100 functions that each need 100 bytes of scratch space. Statically allocating everything like you describe means you need 10KB of memory reserved. But suppose there is one main function, and the rest are helper functions; the main function calls each helper function one at a time, and the helper functions don't call anything. With the static approach, you still need 10KB. With the stack-based approach, you only need 200 bytes (the stack space for the main function, plus the stack space for the helper function it is currently calling).

This is not necessarily the case: if you statically analyze the call graph, you may be able to prove that two functions will never be live at the same time, and thus reuse memory regions for their local variables: https://github.com/llvm-mos/llvm-mos/blob/main/llvm/lib/Targ...

Of course, recursion, function pointers, and dynamic linking will defeat (or at least complicate) this scheme.
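
Here's a rough sketch of the overlay idea in Python (the dict-based call-graph representation and every name in it are made up for illustration; this is not how llvm-mos actually implements it): walk the call DAG in topological order and place each function's frame just past the deepest caller frame that can be live when it runs. Functions that can never be live at the same time end up sharing addresses.

    from graphlib import TopologicalSorter

    def assign_frame_offsets(call_graph, frame_size, entry):
        """Return a mapping: function -> static offset for its locals."""
        # graphlib wants each node mapped to its predecessors,
        # i.e. callee -> {callers}.
        preds = {fn: set() for fn in frame_size}
        for caller, callees in call_graph.items():
            for callee in callees:
                preds[callee].add(caller)

        offsets = {}
        for fn in TopologicalSorter(preds).static_order():
            callers = preds[fn]
            if not callers:
                offsets[fn] = 0  # the entry point starts the region
                continue
            # Any caller's frame may be live below us, so start past
            # the deepest one; sibling branches are free to overlap.
            offsets[fn] = max(offsets[c] + frame_size[c] for c in callers)
        return offsets

    graph = {"main": ["helper_a", "helper_b"], "helper_a": [], "helper_b": []}
    sizes = {"main": 100, "helper_a": 100, "helper_b": 100}
    print(assign_frame_offsets(graph, sizes, "main"))
    # {'main': 0, 'helper_a': 100, 'helper_b': 100} -- the helpers overlap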



That's a good point; you can definitely optimize. I'd be curious how far you could go with static analysis. Assuming no recursion, function pointers, dynamic linking, etc., can you optimize to the point that you use as little memory as a stack approach would? I think you ALMOST can! The only blocker would be specific call patterns that would require a lot of memory but are in fact unreachable; deciding that kind of reachability is where the halting problem hits.

You just make a call graph where the nodes are functions and directed edges indicate which function calls which. Without recursion, function pointers, and the like, you can compute this exactly, and it forms a DAG. Each node is given a weight: the amount of local memory that function needs. Then start at the entry-point function and compute the longest path, measured by total node weight; that total is the most memory any reachable call chain can need. A sketch follows.
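
As a quick sketch of that (assuming the call graph has already been extracted into a dict; all names are hypothetical), the longest-path computation is a one-pass memoized DFS, which is safe precisely because the graph is a DAG:

    from functools import lru_cache

    def peak_memory(call_graph, frame_size, entry):
        """Longest node-weighted path from entry through the call DAG."""
        @lru_cache(maxsize=None)
        def deepest(fn):
            callees = call_graph.get(fn, [])
            # Our frame plus the worst chain of frames below us.
            return frame_size[fn] + max((deepest(c) for c in callees), default=0)
        return deepest(entry)

    # The example from the quoted comment: main calls 100 helpers in turn.
    graph = {"main": [f"h{i}" for i in range(100)]}
    sizes = {"main": 100, **{f"h{i}": 100 for i in range(100)}}
    print(peak_memory(graph, sizes, "main"))  # 200 bytes, not 10KB

That maximum is exactly the worst-case depth a real stack would reach, so under these assumptions the static layout can match the stack's memory use.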



