
Don't forget scanning. Yes, moving blocks of memory around is expensive, but it can also be done concurrently. Scanning, AFAIK, cannot be done concurrently, and thus remains the primary blocker to lower latency. And scanning is something that is entirely eliminated with static memory management.


Scanning is most certainly done concurrently with ZGC. Even root scanning is on its way to becoming fully concurrent, which is why we're nearing the goal of <1ms latency.
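
For anyone who wants to try it, ZGC is enabled with a single flag on a recent JDK (production-ready since JDK 15); app.jar below is just a stand-in for your application:

    java -XX:+UseZGC -Xlog:gc -jar app.jar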


I didn't know about fully concurrent root scanning, thank you!

How does root scanning work with respect to Loom? Are stacks of virtual threads treated as roots? I guess there is no other option?


No, virtual thread stacks are not roots! This is one of the main design highlights of the current Loom implementation. In fact, at least currently, the VM doesn't maintain any list of virtual threads at all. They are just Java objects, but the GC does treat them specially.
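
To make the "just Java objects" point concrete, here's a minimal sketch using the standard Loom API (Thread.ofVirtual(), JDK 21+); the class name is just for illustration:

    public class VirtualThreadSketch {
        public static void main(String[] args) throws InterruptedException {
            // A virtual thread is created and started like any other object;
            // the VM keeps no global registry of virtual threads.
            Thread vt = Thread.ofVirtual().start(() ->
                    System.out.println("running on " + Thread.currentThread()));
            vt.join();
            // After this point the thread object is just an unreachable heap
            // object like any other, and the GC can reclaim it.
        }
    }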


Interesting! In what way are they handled specially?


Unlike with any other kind of object, the locations of references in a stack can change dynamically, so the GC needs to recognise those objects and walk their references differently. There are other subtleties, too.


Right, it is concurrent, but it is still costly. It brings rarely used data into the caches and pushes useful data out of them. If some parts of the heap have been swapped out, the impact of concurrent scanning can be quite dramatic.


Ah, but you can pin GC threads to specific cores, and a reference-counting GC also has such non-trivial costs. In practice, however, people in the 90-95% "mainstream" domain that Java targets are very happy with the results. Of course, there are some applications that must incur the costs of not having a GC. In general, though, the main tangible cost of a GC today, for a huge portion of large-scale applications, is neither throughput nor latency but RAM overhead.


I also enjoy the low-level, C++-like tooling that .NET offers, but I am looking forward to the fruits of Panama and Valhalla in that direction.


Pinning doesn't solve the problem of L3+ cache thrashing.


It could if the GCs used non-temporal instructions that bypass the L3. Of course, how much of a problem this is in practice in most applications is something that would need to be measured.


Point taken. What about swap?


Yeah, swapping could be really bad and should be avoided. So: don't swap :) Java's memory consumption can't go up indefinitely. The most important setting is the maximum heap size. Set it to a good level and don't swap.
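
Concretely, something like this (the 4g figure is made up; pick a size so that heap plus metaspace, thread stacks and off-heap buffers fits comfortably in physical RAM; app.jar is a stand-in):

    java -Xms4g -Xmx4g -jar app.jar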


All modern GCs scan the heap concurrently; the hardest problem is scanning the GC roots in the call stacks. ZGC is currently implementing concurrent stack scanning.


Yes, it can. Stop-the-world collection is a thing of the past. See for example: https://developers.redhat.com/blog/2019/06/27/shenandoah-gc-...


Modern garbage collectors do concurrent scanning.

I believe most GC implementations have a non-concurrent "initial marking" phase, but that's typically fairly quick. It has to scan the roots of your object graph: think stacks, JNI references, etc.


Scanning can be done incrementally with each allocation (such that allocations become slightly more expensive, but no individual allocation does loads of scanning work). Scanning can also be done concurrently.
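
A toy sketch of the amortisation idea, purely illustrative (no real collector is written like this; the class and the MARK_BUDGET constant are made up): each allocation drains a bounded slice of the grey set, so no single allocation pays for a whole scan.

    import java.util.ArrayDeque;
    import java.util.Collections;
    import java.util.Deque;
    import java.util.IdentityHashMap;
    import java.util.Set;

    // Toy incremental marker: every allocation performs at most MARK_BUDGET
    // units of marking work, amortising the scan across allocations.
    class IncrementalHeap {
        private static final int MARK_BUDGET = 8;
        private final Deque<Object[]> grey = new ArrayDeque<>();
        private final Set<Object[]> marked =
                Collections.newSetFromMap(new IdentityHashMap<>());

        Object[] allocate(int size) {
            markSome();                  // bounded scanning work per allocation
            return new Object[size];     // the actual allocation
        }

        void addRoot(Object[] root) {
            if (marked.add(root)) {
                grey.push(root);
            }
        }

        private void markSome() {
            for (int i = 0; i < MARK_BUDGET && !grey.isEmpty(); i++) {
                Object[] obj = grey.pop();
                for (Object field : obj) {
                    if (field instanceof Object[] child && marked.add(child)) {
                        grey.push(child);   // grey the child; scan it later
                    }
                }
            }
        }
    }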


Scanning is also entirely eliminated by using no global heap allocations. With copying and small stack allocations there's no need to scan much, and you can easily stay below 1ms.



