
New Project: ZGC – “Z Garbage Collector” for Java - dfabulich
http://mail.openjdk.java.net/pipermail/announce/2017-October/000237.html
======
BenoitP
> A core design principle/choice in ZGC is the use of load barriers in
> combination with colored object pointers (i.e. colored oops). [...] In
> addition to an object address, a colored object pointer contains information
> used by the load barrier to determine if some action needs to be taken
> before allowing a Java thread to use the pointer.

This looks strikingly like an incremental improvement on Shenandoah's model
[1]; hurray for more teams exploiting this new way of doing things. Cache
lines are getting bigger; we can definitely afford some memory-management
metadata.

The difference here seems to be that they intend to explore different uses of
this metadata:

> this could be used to track heap access patterns to guide GC relocation
> decisions to move rarely used objects to "cold storage".
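Purely as an illustration of the mechanism described in the quote, the check might look something like the sketch below. The bit layout, field names, and color choices are invented for the example; they are not HotSpot's actual scheme.

```java
// Toy sketch of a load barrier over colored object pointers.
// Assumption: the low 42 bits hold the object address, and a couple of
// higher bits carry GC metadata ("colors"). Real ZGC uses a different,
// more elaborate layout.
public class ColoredPointerSketch {
    static final long ADDRESS_MASK = (1L << 42) - 1;
    static final long MARKED_BIT   = 1L << 42; // seen by the current mark cycle
    static final long REMAPPED_BIT = 1L << 43; // already points at the new copy

    // The color the GC currently considers "good"; flips between phases.
    static long goodColor = REMAPPED_BIT;

    /** Run on every reference load, before the mutator may use the pointer. */
    static long loadBarrier(long coloredPtr) {
        if ((coloredPtr & ~ADDRESS_MASK) != goodColor) {
            // Slow path: in a real collector this would mark the object or
            // follow a forwarding pointer, then heal the loaded reference.
            coloredPtr = slowPath(coloredPtr);
        }
        return coloredPtr & ADDRESS_MASK; // hand back the bare address
    }

    static long slowPath(long coloredPtr) {
        // Stand-in for mark/relocate/remap: just recolor to the good color.
        return (coloredPtr & ADDRESS_MASK) | goodColor;
    }
}
```

The fast path is a single mask-and-compare, which is why this kind of check can afford to run on every reference load.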

I've always wondered whether a relocating GC could subtly build
cache-friendly layouts. I'm speculating here, but if I'm not mistaken,
current copying GCs traverse the heap breadth-first and reuse that order
when copying. This new approach could open up the option of going
depth-first at selected points. For example, in a JPA graph of objects,
some access patterns never touch lots of the fields. There is something to
be done here.

[1]
[https://wiki.openjdk.java.net/display/shenandoah/Main](https://wiki.openjdk.java.net/display/shenandoah/Main)

------
joneholland
The use of read barriers is exactly the technique Azul uses in Zing, its
“pauseless” GC.

This allows for concurrent heap compaction, at the cost of throughput.

Azul used to sell commercial “Java machines” with hardware support for the
barriers.

------
haglin
What does the Z stand for? Zettabytes?

If GC pause times do not increase with the heap- or live-set size, it could
handle such large heaps.

------
ateesdalejr
Wow! Multi-terabyte heaps? That sounds like way too much stuff in memory.
When would you need multiple terabytes in RAM? I mean, come on... What kind
of application even needs that? Though I do admit that JavaScript needs this
kind of GC implementation, because the hangs JS causes are a real pain.

~~~
rand0mthought
Run Apache Spark on a box with 256+ GB of RAM. GC stalls of 60 or more
seconds are quite typical for heavy queries. I hope the new GC will bring
improvements.

~~~
jerdavis
For Spark etc. I only care about throughput. It can stall all it wants. The
exception (possibly) being Spark Streaming.

