
> Working on GC-managed memory is faster still.

Care to elaborate?



Replying to both the GP comment and you.

Hypothetically (I'm not sure how widespread this technique is), some (Java) compilers turn heap allocations into stack allocations when they can prove that an allocated object never becomes transitively reachable from other threads.

This general technique, "escape analysis" (i.e. determining whether a reference to this object /escapes/ a given reachability type[1]), can be used to transform general heap allocations into thread-local or stack allocations.

[1] Let me define a reachability type as {can be accessed from anything; can be accessed from this thread only but has indefinite lifetime; can be accessed from this thread only and its reference is not taken by any object allocated on my local heap; ...}
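
As a rough illustration (hypothetical names, not any particular compiler's guaranteed behavior): the kind of allocation such an analysis targets is one whose reference never leaves the method and is never published anywhere another thread could see it, so in principle the object could live on the stack or in registers instead of the GC heap.

    // Hypothetical example: 'a' and 'b' are never stored in fields, never
    // passed to unknown code, and never returned, so they cannot become
    // reachable from any other thread (or from later code). An escape
    // analysis pass could, in principle, replace these heap allocations
    // with stack slots or registers.
    final class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    final class Distances {
        static double distance(double x1, double y1, double x2, double y2) {
            Point a = new Point(x1, y1);  // non-escaping: candidate for stack allocation
            Point b = new Point(x2, y2);  // likewise
            double dx = a.x - b.x;
            double dy = a.y - b.y;
            return Math.sqrt(dx * dx + dy * dy);
        }

        // Here the Point does escape (it is returned to the caller), so the
        // heap allocation has to stay.
        static Point midpoint(double x1, double y1, double x2, double y2) {
            return new Point((x1 + x2) / 2, (y1 + y2) / 2);
        }
    }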


The HotSpot JVM C2 compiler does this optimization, with varying degrees of success, but it is generally brittle and completely opaque. Technically speaking, it doesn't actually stack-allocate objects; it scalarizes them into registers ("scalar replacement"). I should also mention that, at a minimum, this requires sufficient inlining to take place, which doesn't always happen for a variety of reasons (unfortunately, HotSpot does not do flow-sensitive analysis, so any conditional logic that makes the object escape, even if the branch is never taken, causes the object to be regarded as escaping - Graal handles this better).
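
A rough sketch of that flow-insensitivity caveat (hypothetical names; whether the allocation is actually eliminated depends on JDK version, inlining decisions, etc., and would need to be verified, e.g. with a JMH benchmark or by comparing runs with -XX:-DoEscapeAnalysis):

    // Hypothetical illustration of the caveat above: even if 'leak' is always
    // false at runtime, the conditional store to 'sink' is a potential escape,
    // which (per the flow-insensitive analysis described above) pessimizes
    // escape analysis for 'box' in the second method.
    final class EscapeDemo {
        static Object sink;  // a global the object could escape through

        // 'box' never escapes: a candidate for scalar replacement.
        static long sumWithoutEscape(int n) {
            long total = 0;
            for (int i = 0; i < n; i++) {
                int[] box = new int[]{i};
                total += box[0];
            }
            return total;
        }

        // The mere presence of the conditional publish is enough to mark
        // 'box' as escaping, so the allocation stays on the heap.
        static long sumWithConditionalEscape(int n, boolean leak) {
            long total = 0;
            for (int i = 0; i < n; i++) {
                int[] box = new int[]{i};
                if (leak) {
                    sink = box;
                }
                total += box[0];
            }
            return total;
        }
    }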

But what does this have to do with heap being faster than stack? This is doing roughly the same thing as a compiler that supports explicit stack allocations.



