Hacker News

> "...From what I've heard, Azul has a great GC, but the throughput is extremely low. It's really only a practical solution for high frequency finance and places like that where latency is everything, and throughput is nothing..."

Well, you heard wrong.

Zing is used in plenty of throughput-intensive and throughput-centric applications, and sustainable throughput on Zing tends to be higher (not lower) than with other JVMs. E.g. Cassandra clusters tend to carry higher production loads per machine when powered by Zing (compared to OpenJDK or HotSpot on the same hardware), all while dramatically improving their latency behavior and consistency.

Specifically, on similarly sized heaps and workloads, the C4 collector's throughput is better than CMS's and close to ParallelGC's. And since its throughput scales linearly with the amount of empty heap configured, and since (unlike OpenJDK/HotSpot) Zing places no practical pause-related cap on how much memory can be applied, it tends to beat both on efficiency in real-world configurations.
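The linear-scaling claim follows from simple GC arithmetic: each collection cycle of a compacting collector reclaims roughly (heap size minus live set) bytes, so the number of cycles needed to absorb a given allocation volume falls linearly as empty heap grows. A back-of-the-envelope sketch (all numbers hypothetical, not Azul benchmarks):

```java
public class GcCycleEstimate {
    // Each GC cycle frees roughly (heap - live set) bytes, so the cycles
    // needed to absorb a given allocation volume scale inversely with the
    // amount of empty heap. Purely illustrative arithmetic.
    static long cyclesNeeded(long totalAllocatedGb, long heapGb, long liveGb) {
        long emptyGb = heapGb - liveGb;
        return (totalAllocatedGb + emptyGb - 1) / emptyGb; // ceiling division
    }

    public static void main(String[] args) {
        // Hypothetical run: 1 TB allocated over its lifetime, 4 GB live set.
        System.out.println(cyclesNeeded(1024, 8, 4));  // 4 GB empty heap -> 256 cycles
        System.out.println(cyclesNeeded(1024, 36, 4)); // 32 GB empty heap -> 32 cycles
    }
}
```

Eight times the empty heap means an eighth of the GC cycles for the same work, which is why a collector whose pauses don't grow with heap size can simply be given more memory to buy throughput.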

The notion that good latency behavior has to come at the expense of throughput is just a silly myth. There are plenty of examples that disprove it. Zing/C4 is just one of many.

I thought Zing was basically HotSpot licensed and modified to use C4 (plus a few other minor things). How come it's faster than HotSpot overall? Are there a lot of Azul-specific compiler optimisations there too?

- Zing is based on HotSpot, and its biggest visible change is C4, but it changes a lot more than just the collector. E.g. it addresses pretty much all the causes of JVM glitches. (You can see a discussion of the many other reasons JVMs pause here: https://www.youtube.com/watch?v=Y39kllzX1P8).

- The reason Zing tends to carry higher throughput in production is that in most Java-based systems, production throughput levels are limited not by system capacity, but by how far you can drive the JVMs before the glitches become unbearable; what looks like occasional small hiccups at lower throughputs starts looking more like epileptic seizures at load. Capacity planning and sizing usually aim to keep peak production loads below the levels that lead to these "I don't want to go there" behaviors. By taking out the various glitching/pausing/stalling behaviors typically associated with JVMs under load, Zing extends the smooth operating range, bringing it much closer to the traditional "how much can this hardware handle?" capacity and sizing behavior people are used to in non-Java, non-GC'ed environments.

- When you compare raw throughput or speed (with no SLAs, e.g. "how long does this multi-hour batch job take to complete?"), with similar configurations Zing is usually comparable to OpenJDK/HotSpot [where comparable typically means within a +/-15-20% range; sometimes faster, sometimes slower]. But once people apply simple knob twists in their applications (like turning up heap sizes, cache sizes, and other "using memory no longer hurts" settings), they often get more raw throughput per instance or machine through simple efficiency benefits (like the elimination of work that comes from higher in-process, on-heap cache hit rates).
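The cache-hit-rate point above can be made concrete with a toy example (the class and numbers are hypothetical, not anything Azul-specific): when heap size is capped by pause concerns, operators keep on-heap caches small, and an LRU cache smaller than its working set can thrash badly; lifting the cap lets the same cache absorb the working set entirely.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache built on LinkedHashMap's access-order mode.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // access-order: get() refreshes recency
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry
    }

    // Count cache hits for a cyclic scan over `keys` distinct keys.
    static int hits(int capacity, int keys, int accesses) {
        LruCache<Integer, Integer> cache = new LruCache<>(capacity);
        int hits = 0;
        for (int i = 0; i < accesses; i++) {
            int k = i % keys;
            if (cache.get(k) != null) hits++;
            else cache.put(k, k);
        }
        return hits;
    }

    public static void main(String[] args) {
        // Same workload, two cache sizes: cyclic scan over 1000 keys.
        System.out.println(hits(100, 1000, 10_000));  // undersized: LRU thrashes, 0 hits
        System.out.println(hits(1000, 1000, 10_000)); // working set fits: 9000 hits
    }
}
```

Every one of those extra hits is recomputation or a remote fetch that no longer happens, which is the "elimination of work" being described.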

Thanks, fascinating answer.

