Hacker News

What is not shown in those benchmarks is how much CPU each GC uses. For example, ZGC's latency is much lower, but does that mean it uses more CPU than G1?



I came here to say something similar. ZGC and Shenandoah run the GC in extra threads. I don't see any mention of the number of cores in the article, or in the linked SPECjbb2015 benchmark [1].

It would be interesting to see how much of the improvement is just down to the use of extra "unused" cores, and how much CPU is actually used by the GC. Equivalently, run a CPU-bound task on one core and measure GC and application performance, with an eye on how much the fancy GC slows down the program.

[1] https://www.spec.org/jbb2015/
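One rough way to start measuring this from inside the JVM is the standard `GarbageCollectorMXBean` API, which reports accumulated collection counts and times per collector. A sketch (note the caveat: `getCollectionTime` reports elapsed collection time, not CPU consumed by concurrent GC threads, so for ZGC or Shenandoah you would also want something like `-Xlog:gc+cpu` or OS-level per-thread accounting; the class name and allocation loop here are made up for illustration):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

public class GcOverhead {
    public static void main(String[] args) {
        // Churn through some short-lived allocations to trigger collections.
        long sink = 0;
        for (int i = 0; i < 1_000_000; i++) {
            byte[] junk = new byte[1024];
            sink += junk.length;
        }

        // Report accumulated collection count and time per collector.
        List<GarbageCollectorMXBean> gcs =
                ManagementFactory.getGarbageCollectorMXBeans();
        long totalMs = 0;
        for (GarbageCollectorMXBean gc : gcs) {
            System.out.printf("%s: %d collections, %d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            totalMs += Math.max(0, gc.getCollectionTime());
        }
        System.out.println("Total reported GC time: " + totalMs + " ms (sink=" + sink + ")");
    }
}
```

Run it with different collectors (`-XX:+UseZGC`, `-XX:+UseShenandoahGC`, `-XX:+UseG1GC`) to compare, keeping in mind that for the concurrent collectors most of the cost shows up on background threads rather than in these figures.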


CPU is almost never the real bottleneck. When I worked with SQL Server, we always had data compression enabled: a little more CPU, but much less IO to do.




