- https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gc...
 - http://www.azulsystems.com/zing/pgc
 - http://mishadoff.com/blog/java-magic-part-4-sun-dot-misc-dot...
Plus, sun.misc.Unsafe is probably going away, according to Oracle. http://www.infoq.com/news/2015/07/oracle-plan-remove-unsafe
From what I've heard, Azul has a great GC, but the throughput is extremely low. It's really only a practical solution for high-frequency finance and places like that where latency is everything and throughput is nothing (you can just buy another 100 servers or high-end hardware). Note: I'm talking about their software product which runs on vanilla hardware, not their hardware product, which I understand is far superior.
A lot of people on HN also seem to be taking the statement that maximum GC latency will be 10ms as a statement that there will often be 10ms pauses. Hopefully, the average latency will be far lower, in the 1-2ms range, and 10ms will be something that only happens on huge heaps in certain conditions. This should be similar to what has happened on Android, where GC pauses are pretty rare and typically only 1 or 2ms.
All Java collections (which are what really matters) are being retrofitted alongside the introduction of value types. As soon as value types are introduced (and the HotSpot team is working hard on that right now), all collections will be fully value-ready.
> Go already gives better tools than Java for managing native memory through cgo, which is a far less painful interface than JNI
JNI is being replaced by Project Panama: http://openjdk.java.net/projects/panama/ (you can already use a similar FFI with JNR, which serves as a blueprint for Panama's FFI: https://github.com/jnr I've used JNR to write a FUSE filesystem in Java without a line of C, and unlike JNA, it's fast!)
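If anyone's curious what that looks like, here's roughly the shape of a jnr-ffi binding (a minimal sketch from memory, so treat the exact details as approximate):

    import jnr.ffi.LibraryLoader;

    public class GetPidDemo {
        // Declare the native functions you need as a plain Java interface.
        public interface LibC {
            int getpid();
        }

        public static void main(String[] args) {
            // jnr-ffi binds the interface to libc at runtime: no JNI headers, no C stubs.
            LibC libc = LibraryLoader.create(LibC.class).load("c");
            System.out.println("pid = " + libc.getpid());
        }
    }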
Besides, HotSpot runs C now, too, and quite well:
> Plus, sun.misc.Unsafe is probably going away, according to Oracle.
... only to be replaced by something much better: https://www.youtube.com/watch?v=ycKn18LtNtk (Unsafe isn't going away until replacements are available).
> From what I've heard, Azul has a great GC, but the throughput is extremely low.
Not at all. Just a little lower than with HotSpot's throughput collector, and possibly higher than with G1 (although G1 changes a lot, so that might not be true).
Consider the wish list: I want a garbage collected language where, for a handful of large/important data structures, I can sidestep gc and carefully control memory layouts for cache friendliness. I'd also like direct interop with blas and my aforementioned data structures.
It looks like I may get all of this!
And yes, I've done a bunch of work with sun.misc.Unsafe but it's nowhere near as nice as it could be. What the jvm really buys you is not having to build once for each platform; I distributed code that relied on C++11 features on 3 platforms while there was mixed compiler support and it was a bloody nightmare.
Memory layout and GC are two completely orthogonal issues. You will be able to control memory layout quite well with Valhalla (value types) and even on a finer-grained level with Panama if you need C interoperability. VarHandles (hopefully in Java 9) will give you safe access to off-heap memory. Currently you can do that with Unsafe, which is more work but still less than C++.
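For reference, off-heap access with Unsafe today looks roughly like this (a sketch; the reflection dance to get at the Unsafe instance is the usual hack):

    import sun.misc.Unsafe;
    import java.lang.reflect.Field;

    public class OffHeapDemo {
        public static void main(String[] args) throws Exception {
            // Unsafe isn't meant to be obtained directly, hence the reflection.
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            Unsafe unsafe = (Unsafe) f.get(null);

            // Allocate 1 KiB outside the GC'd heap, write and read a long, then free it.
            long addr = unsafe.allocateMemory(1024);
            try {
                unsafe.putLong(addr, 42L);
                System.out.println(unsafe.getLong(addr)); // 42
            } finally {
                unsafe.freeMemory(addr);
            }
        }
    }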
> What the jvm really buys you is not having to build once for each platform
Oh, I'd say it buys you a lot more: seamless polyglotism, exceptional performance even for dynamic stuff (dynamic languages, esp. w/ Graal, but even cool bytecode manipulation in Java or even simple code loading/swapping), and you get all that performance with unprecedented observability into the running platform.
I have almost never seen this make a difference outside of, say, GPU programming. The fact that Java's optimizer is much better than that of Go will make a much larger difference in execution speed.
The other aspect of layout control is cacheline padding, which is also not present in the JVM. There's @Contended, but it's a blunt tool and not currently a public API (it's in sun.misc).
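The usual workaround today is manual padding, which is just as blunt (a sketch of the pattern; the JVM doesn't guarantee field order, so this is best-effort, and using @Contended from application code needs -XX:-RestrictContended):

    // False-sharing workaround by hand: two hot counters updated from different
    // threads get pushed onto (hopefully) different cache lines by filler fields.
    class PaddedCounter {
        volatile long value;
        long p1, p2, p3, p4, p5, p6, p7; // ~56 bytes of padding after the hot field
    }

    class Counters {
        final PaddedCounter a = new PaddedCounter();
        final PaddedCounter b = new PaddedCounter();
    }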
>The fact that Java's optimizer is much better than that of Go will make a much larger difference in execution speed
Yes, but that's orthogonal.
Yes, you can group but as I mentioned in the other reply, you get "blob" granularity/control. This is unlikely to be an issue for a value type instance itself, but say you want to have multiple value types (each with several fields) embedded into a heap-allocated object. Assume this total embedded blob spans multiple cachelines, but you have clustered access patterns. So you've grouped some data together into value types, but you have no control over how that will be laid out in the "host" object.
Again, I'm happy that Oracle will be adding value types -- they will definitely help. I contend, however, that calling it "layout control" is a slight stretch in the general sense; it's perhaps fair to call it that when comparing against today's java, but not against languages that allow explicit field-by-field layout (along with padding, alignment, etc).
It's fair to call it that when comparing against any language or runtime targeting general-purpose development, but not compared to languages designed for different purposes. I agree with that.
However, we're not discussing those languages here so I don't see the relevance. I'm also sure that many statements are wrong when compared to quantum computing, but I don't think there's any need to make that qualification. It's pretty clear from the context.
Unless you consider, e.g., C, C++, and Rust as not general-purpose. Even C# has StructLayout where you can control the struct layout. Heck, Go and C# allow you to embed a fixed size array into a struct (this facility is not in the works for Java at the moment).
Of course they're not! Even Stroustrup now calls C++ "a language for those who need or want to work as close to the hardware as possible". None of these languages -- unlike Java or Go -- has simplicity as a major design goal.
> Even C# has StructLayout where you can control the struct layout.
Yeah, that's precisely my point about giving you something you don't need. The fact that C# does it doesn't mean it actually matters for any significant number of programs.
> this facility is not in the works for Java at the moment
Paul Sandoz is working on something similar with one of the VarHandles variants (he said there's a prototype already), namely, indexed access to object fields, which would make this a language, rather than a JVM concern.
Also, I've noticed that whenever I say Java does X, you say, "Oh, no! It does X - ε!" Now, to me, that's nitpicking, especially considering that a perfect general-purpose language/runtime designed to be simple (for some definition of simple) should give you 90+% performance in 99% of general-purpose use cases (or 95% in 95% etc.). If it does any better then one of two possibilities is true: 1/ it's magic, or 2/ it's not a perfect simple language/runtime because it could have been made simpler (by whatever definition of simple it's chosen).
Anyone who can't settle for anything less than 100% performance or does something that's outside 95% of the use cases knows not to use such a general-purpose language/runtime, and, instead, uses a more domain-specific language/runtime or one that's not designed to be simple.
They let you treat the fields of a value type as a "blob", you have no control over how they're laid out within that blob.
>Also, I've noticed that whenever I say Java does X, you say, "Oh, no! It does X - ε!" Now, to me, that's nitpicking, especially considering that a perfect general-purpose language/runtime designed to be simple (for some definition of simple) should give you 90+% performance in 99% of general-purpose use cases (or 95% in 95% etc.). If it does any better then one of two possibilities is true: 1/ it's magic, or 2/ it's not a perfect simple language/runtime because it could have been made simpler (by whatever definition of simple it's chosen).
Nothing personal, but I find your JVM-related posts borderline fanboyish (and I say this as someone who greatly respects the engineering in HotSpot, despite certain things bugging me). It's not about 100% or 95% performance; it's about not making exaggerated claims since, as you say, there's no magic.
I think they're much less fanboyish than any other language/runtime discussion on HN.
> it's about not making exaggerated claims
I am not making exaggerated claims because perfect for me (and, by definition 95% of people) is precisely how I've defined the requirements from a runtime like Java (or Go, or Python, for that matter). So, if I say "it gives you all the layout control you need", I don't mean all the layout control you need to write a DSP, or all the layout control you need to get 100% performance -- only 99% performance. I think most people make the same assumption (because they're not writing DSPs), and I find your posts nitpicky and possibly misleading for the target audience (who, by and large, don't write DSPs or medical devices or particle accelerator beam controllers -- at least certainly on threads not discussing those interesting but extreme use cases). I don't think it's reasonable to qualify every statement for some negligible (possibly nonexistent) portion of the participants, especially considering that that minority already knows the statements don't completely apply to them. Not doing so doesn't qualify as exaggeration IMO, but reasonable expressiveness, or else every discussion will be bogged down in irrelevant detail that will only distract from the main point.
So, yes, I stand behind my claim that value types give you all the control you need to get 99% performance for 99% of the use cases people would normally use Java for.
And for profiling apps on production, I've yet to encounter a more thorough, low-overhead profiler than Java Flight Recorder.
And as pron mentioned, plenty of tools exist for lower-level access.
In particular, the LVB (loaded value barrier) is akin to an array range check on each reference read.
Well, you heard wrong.
Zing is used in plenty of throughput-intensive and throughput-centric applications, and sustainable throughput on Zing tends to be higher (not lower) than with other JVMs. E.g. Cassandra clusters tend to carry higher production loads per machine when powered by Zing (compared to OpenJDK or HotSpot on the same hardware). All while dramatically improving their latency behavior and consistency.
Specifically, on similarly sized heaps and workloads, the C4 collector's throughput is better than CMS's and close to ParallelGC's. And since its throughput scales linearly with the amount of empty heap configured, and since (unlike OpenJDK/HotSpot) Zing places no practical pause-related caps on how much memory can be applied, it tends to beat both on efficiency in actual configurations.
The notion that good latency behavior has to come at the expense of throughput is just a silly myth. There are plenty of examples that disprove it. Zing/C4 is just one of many.
- The reason Zing tends to carry higher throughput in production is that in most Java-based systems, production throughput levels are limited not by system capacity, but by how far you can drive the JVMs before the glitches start being unbearable, and what looks like occasional small hiccups at lower throughputs starts looking more like epileptic seizures at load. Capacity planning and sizing usually aim to keep peak production loads below the levels that lead to these "I don't want to go there" behaviors. By taking out the various glitching/pausing/stalling behaviors typically associated with JVMs under load, Zing extends the smooth operating range such that it comes much closer to the traditional "how much can this hardware handle?" capacity and sizing behavior people are used to in non-Java and non-GC'ed environments.
- When you compare raw throughput or speed (with no SLAs, e.g. "how long does this multi-hour batch job take to complete?"), with similar configurations Zing is usually comparable to OpenJDK/HotSpot [Where comparable typically means within +/-15-20% range. Sometimes faster, sometimes slower.] But once people apply simple knob twists in the applications (like turning up heap sizes, caches, and other using-memory-no-longer-hurts related settings) they often get more raw throughput per instance or machine through simple efficiency benefits (like the elimination of raw work that comes from higher in-process, on-heap cache hit rates).
 - https://groups.google.com/d/msg/golang-dev/GvA0DaCI2BU/SmEel...
1) GC itself operates on virtual addresses.
2) If you want concurrent collection you're probably going to need a read barrier, and that will require some GC / MMU interaction.
The Azul Vega had a lot of interesting features to support GC (and other Java constructs), but the most important by far is the HW read barrier.
2. Barriers are one method. They're among the most expensive. An IBM prototype used careful lock management (supported by lock registers) and scheduling to avoid barriers. There's actually quite a few ways in the literature to avoid barriers in concurrency. I figure a team would have to leverage them plus asynchronous I/O from CPU to FPGA for optimal performance. I see it as a series of hand-offs to FPGA which, once it has necessary information, acts on those hand-offs with CPU assuming it completed after certain time, seeing a part of memory saying so, or receiving an interrupt.
That's a rough sketch.
Though I'm convinced custom hardware is doomed here for the usual custom hardware reasons. Maybe GPUs have gotten good enough at pointer chasing to be usable here?
I agree that the risk is high, though, to the point that one shouldn't depend on it. So, I'd advise selling a profitable system w/ services that just happens to use such custom hardware. A high-performance, easy-to-manage, easy-to-integrate... already worth buying... platform that also has hardware-supported GC and/or memory safety. The sales of the system & licensing of the software subsidize hardware costs, which are structured to be cheap anyway. Start with FPGA's, then S-ASIC's, then advanced S-ASIC's or finally ASIC's. The NRE stays as low as volume can support.
Relevant example of this model (and evidence for my GC idea) is Azul Systems Vega machines. Those are custom hardware for Java supporting native bytecodes, a bunch of RAM, a pauseless GC, and easy enterprise integration. So, while we're all speculating, they're selling custom hardware w/ pauseless GC's. I'm just trying to work out a different, cheaper design hopefully integrating with Intel/AMD.
Note that they support a whole range of hardware, software, and services to diversify income. Any one thing shouldn't sink them, esp unfavorable hardware. That's the model to copy.
If the FPGA on the bus is doing concurrent GC, we'd need a way to mark pointers and integers reliably. It means breaking all C code that does pointer manipulation, and breaking custom tagged pointer implementations. Niche languages and applications wouldn't have the buy in, and justifiably so, to set global policy on memory. On a developer machine you could experiment, but on a consumer device you would need to use whatever the OS and the bulk of applications have decided on.
Maybe that's not a bad thing though? The OS would need to provide implementations of malloc and free, and other primitives, that are tied to the hardware. But I suppose that's not unreasonable.
Except when it comes to virtualization. The FPGA isn't fixed function, but on the timescales that modern VMMs provide slices of CPU time to virtual machines, reprogramming an FPGA is an eternity. So we would gain in potential performance but lose in flexibility, substantially.
The biggest problem would be legacy C code. Code that treats "void*" and "long" as interchangeable values. And that includes a lot of hand-written JITs, where coercing values between pointers and word-sized integers is prevalent.
Rambling aside on potential upsides and downsides: The world would probably be a better place if pointers and integers were separated and strongly typed down to the hardware level. It would be a pain, but if the null pointer is a billion dollar mistake, placing all bets on the Princeton architecture may yet be a trillion dollar mistake this century.
FPGA-aware garbage collection in Java (2005)
(Modified Jikes VM to use FPGA coprocessor for collection. Result was good performance with around 2.32% overhead on memory-intensive benchmarks.)
Fine-Grained Parallel Compacting Garbage Collection through Hardware-Supported Synchronization (2010)
Stall-free, real-time collector for FPGA's (2012)
(This is on one chip, with tiny resources dedicated to GC, around 1%, and 4-17% overhead.)
Maybe I can get something better for you after some rest. Note that my comment to the other person has more details of where I was going with this.
The advantages of implementing it in raw hardware are many: easy to build into a state machine with a high clock rate (even HLS should work); won't waste CPU time or cache; can be set up to analyse segments of memory in parallel because it's hardware; latency with the right algorithm can be lower. That combined with multicore RISC and some concurrency extensions I know of would give plenty of performance while knocking out tons of errors. Can't exactly do that with Intel/AMD chips unless we buy their semi-custom designs for untold millions, now can we?
So, next idea (for Intel/AMD) customers is to put it on the memory bus with some synchronization mechanism between CPU task and FPGA. This will use far less CPU time than a CPU that's doing GC stuff non-stop. That's on top of above benefits. Plus, if a workload doesn't need low-latency GC, it has a whole FPGA to use for acceleration. :)
It would be a cool thing to try still, and maybe doable with COTS hw:
https://www-ssl.intel.com/content/www/us/en/embedded/technol... + http://www.hotchips.org/wp-content/uploads/hc_archives/hc21/...
Fine-Grained Parallel Compacting Garbage Collection through Hardware-Supported Synchronization (2010)
Stall-free, real-time collector for FPGA's (2012)
The question is, "Can modern CPU's and off-chip FPGA's keep in sync without performance getting dragged down?" The FPGA's have gotten faster. The CPU's I/O have gotten faster. So, I'm sure it can be done but it might be difficult enough to be someone's Master thesis. ;)
Besides, I call for replacing current chips with open ones easy to modify for acceleration and security. Gaisler LEON4 SPARC, Rocket RISC-V, Cambridge's BERI/CHERI MIPS64... these all come to mind. Plan was to put them onto a high-end FPGA w/ concurrent GC's to test the scheme. Once it worked, ASIC conversion time baby. S-ASIC's are $200-500k on average with resulting production & packaging being way cheaper after that. Just hoping there's a few companies that would split the cost to eliminate most memory and control flow issues forever. ;)
(See section 7 for a radical... err really old... way to do concurrent GC. Full paper available at ACM/IEEE or if you Google LISP processors Guy Steele enough.)
Scheme machine by Burger 1995
(See the main graphic and specifics in the storage section later. Once again, GC-like stuff is handled in the memory management part of the processor. This processor knows that, though, to assist GC a bit. Also different in that it was specified and then synthesized to heterogeneous hardware with the DDD "correct-by-construction" toolkit.)
So, have fun with those. Plus, Google hardware-assisted or hardware garbage collection to get lots of interesting results already done.
There is also some work being done to make the CMS failures a little less terrible by parallelizing them.
Still, Samsung is here to samsung things up. Comparisons of scrolling performance between a flagship Samsung device and a Nexus are just as expected: https://www.reddit.com/r/GalaxyS6/comments/3ck9no/scrolling_...
That's another topic entirely, but I think that another key factor is to move as much of the UI work as possible to a separate thread, not just to do heavy work in the background. It is starting to happen with ripple animations on a RenderThread, but I think that this strategy could be generalized.
3rd time I've laughed with more than 60% intensity at something I read on the internet in the last 2 years. thank you!!! hahahaaha
for reference, my last 100% laugh that returned me to the days of cartoons and childhood was this comment about how to install and activate your non-rental-Cable-modem: https://www.reddit.com/r/technology/comments/27szuw/comcast_...
At best they have the same SOC as a good phone but they typically have way more pixels to carry around.
NoName Chinese tablets are often a disaster and even high end tablets are hard to get right.
Even putting graphics aside, things like slow internal memory can also wreck performance.
Just remembered though: a friend who is an Azul developer said that x86 processors did not have the right instructions to support pauseless GC up until very recently.
So probably a chicken and the egg problem.
A little internet searching leads to this and some other stuff.
I think the deal is, if the GC moves something, it can leave a bunch of dangling pointers. Those need to be fixed up to point to the new place before they get dereferenced. Except Azul uses a read barrier to trap when a dangling pointer is dereferenced, so the fix up can be lazy.
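A toy model of that lazy fixup idea (not how Zing actually does it; the forwarding table here stands in for what is really a cheap page-range check plus forwarding pointers emitted by the JIT):

    import java.util.IdentityHashMap;
    import java.util.Map;

    // Toy read barrier: objects the collector has "moved" get an entry in a
    // forwarding table, and the barrier consults it on every reference load,
    // healing the stale reference before the mutator ever uses it.
    class ToyReadBarrier {
        static final Map<Object, Object> forwarding = new IdentityHashMap<>();

        static Object read(Object ref) {
            Object moved = forwarding.get(ref);   // real collectors do a cheap
            return (moved != null) ? moved : ref; // page/range check here instead
        }

        public static void main(String[] args) {
            Object stale = new Object(), relocated = new Object();
            forwarding.put(stale, relocated);             // pretend the GC moved 'stale'
            System.out.println(read(stale) == relocated); // true: fixed up lazily
        }
    }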
So, the best GC for the next decade was proposed in 1978. Whatever computer scientists have proposed since then apparently hasn't been as good.... so they can all just pack it up and go home?
I'd be really really curious to hear from some academic computer scientists if the best that can be done was first proposed back then, and they've all just failed to find anything better since. Cause this is just downright depressing otherwise (If you actually want to believe that we're making progress).
1. Most modern high-performance garbage collectors are both concurrent/incremental and generational. Generational GC is really important for throughput and I think Go will need to get it eventually.
2. The tri-color marking algorithm plus the idea of a nursery and tenured generation are all very old as far as ideas go, but the devil is in the details, and that's where the research lies. Hash tables, for example, date back to the '50s, but there's been constant research on refinements of the technique up to the present day.
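For anyone who hasn't seen it, the tri-color idea itself fits in a few lines; the hard (and heavily researched) part is doing it concurrently with running mutator threads. A toy, stop-the-world sketch:

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.Deque;
    import java.util.IdentityHashMap;
    import java.util.List;
    import java.util.Set;

    // Toy tri-color mark phase: everything starts white, roots are shaded gray,
    // gray objects are scanned (their children shaded) and turn black.
    // Whatever is still white at the end is garbage.
    class TriColorMark {
        static class Obj { final List<Obj> refs = new ArrayList<>(); }

        static Set<Obj> mark(List<Obj> roots) {
            Set<Obj> black = Collections.newSetFromMap(new IdentityHashMap<>());
            Deque<Obj> gray = new ArrayDeque<>(roots);
            while (!gray.isEmpty()) {
                Obj o = gray.pop();
                if (!black.add(o)) continue;                      // already scanned
                for (Obj child : o.refs)
                    if (!black.contains(child)) gray.push(child); // white -> gray
            }
            return black;                                         // live set
        }

        public static void main(String[] args) {
            Obj a = new Obj(), b = new Obj(), dead = new Obj();
            a.refs.add(b);
            Set<Obj> live = mark(Arrays.asList(a));
            System.out.println(live.contains(b));    // true
            System.out.println(live.contains(dead)); // false
        }
    }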
An example from compilers: Register allocation for expressions (no cyclic dependencies) is an old and solved problem. For an expression like a+b+c+d+e+f, you wanted to order it like (a+(b+(c+(d+(e+f))))), because that minimizes register use. Then newer CPUs got out-of-order processing. Now you want ((a+b)+c)+(d+(e+f)), because in contrast to the previous order the CPU can now sum in parallel. Another case where digging up an old algorithm helps.
What probably also happens a lot is that people reinvent stuff without knowing previous work.
For what it's worth, many of today's finest, most important algorithms are from the 50s-70s, or maybe special adaptations of them. Theory doesn't obsolete like technologies do.
No need to get depressed by this. Hard problems are still hard (until proven otherwise), and taking out the trash is the same today as it has ever been.
This is only the case/sensible for algorithms that have not been practically improved.
The problem is that you have a really complicated system (the GC) meeting a low-level language. You need to write it in C or C++ for performance and for direct memory access. But those languages aren't well suited to complex algorithms, so if you could choose, you'd rather pick Lisp or something. It wouldn't surprise me if it takes Google a very long time to get the awesome GC they are planning for Go up and running, even if they throw heaps of engineers at the problem.
This is a potentially dangerous worldview. While it is true for the server/desktop/laptop segment, it is less of a surety for phones, and downright scary for smaller, low-power embedded/IoT devices.
Do you want a GC there anyways?
Latency under 1ms per their material.
They had standard Java support, a better native API, GC pauses down to 100 microseconds, AOT compilation, and so on. I especially liked how they supported separation kernels like INTEGRITY and non-x86 architectures. The ability to use strong, user-mode isolation on the VM plus obfuscation of codebase and processor ISA makes for high resistance to whatever Internet throws at you. ;) Plus, embedded boards can be really cheap w/ Freescale's doing I/O, crypto, and TCP acceleration.
So, I thought it was cool shit that could've benefited mainstream market if they thought of things like this and applied it properly. Could always build, in parallel, a FOSS-based setup in case they pulled out or some other crap that happens with proprietary companies. Can just painlessly move off their stack. Meanwhile, their stack rocked in its domain, still does given numbers I'm seeing in Go etc, and it's worth copying in some ways.
I ask because I don't have much low-level/embedded experience, and I'm unaware of why you'd want a gc on such low power systems. If there's a use case though, I'd love to learn about it!
I think it's important to be clear here because Swift and Objective-C rely on a technology known as "Automatic Reference Counting" (ARC), which does imply the usual dynamic cost of reference counting.
those tools are the same freaks that make Chrome take 150MB-200MB per Tab and ruined my perfectly usable 8GB-RAM system in a matter of 2 years.
So basically, it's the same GC that Mike Pall had planned to implement for LuaJIT 3.0. Too bad Mike recently announced he was transitioning away from doing daily LuaJIT work.
Synchronization whenever a heap pointer is changed? That does not sound like "low latency".
A typical implementation of a write barrier involves adding an item to some kind of thread-local queue.
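Something with roughly this shape, i.e. no locks on the hot path (a toy sketch; real barriers are emitted inline by the JIT, and the "queue" is usually a card table or sequential store buffer):

    import java.util.ArrayList;
    import java.util.List;

    // Toy write barrier: on every pointer store the mutator also logs the mutated
    // object into a per-thread buffer that the concurrent collector drains later.
    // The hot path is a plain store plus an append: no locks, no synchronization.
    class ToyWriteBarrier {
        static final ThreadLocal<List<Object[]>> dirty =
                ThreadLocal.withInitial(ArrayList::new);

        static void writeRef(Object[] holder, int slot, Object newRef) {
            holder[slot] = newRef;     // the actual pointer store
            dirty.get().add(holder);   // remember the holder for re-scanning
        }
    }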
Apart from cycles, reference counting has several other issues. Some of my pet objections are:
* The need to add a mutable field to every object increases the active memory footprint significantly (especially bad for caches).
* The frequent updates to that field must be done using atomic operations if the objects may be accessed by different threads (even though the objects themselves may be immutable); see the sketch below.
* The updates to the reference-count field trash the caches in a multi-core processor.
* The amount of work done managing these updates can easily become a significant part of the run-time of the program, in some cases dwarfing the actual work.
* Uncontrolled, arbitrarily long pauses will happen when deletions cascade.
There are really cool and smart algorithms that make reference counting much better, but then the appeal of a simple natural algorithm for collecting garbage has come and gone.
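To make the atomic-update point concrete, a manually reference-counted handle looks roughly like this (a toy sketch; the atomic read-modify-write on every retain/release is exactly the cache traffic mentioned above):

    import java.util.concurrent.atomic.AtomicInteger;

    // Toy reference-counted handle: every object drags a mutable count around,
    // and every share/release is an atomic RMW, even when the payload itself
    // is completely immutable.
    class RcHandle<T> {
        private final T value;
        private final AtomicInteger count = new AtomicInteger(1);

        RcHandle(T value) { this.value = value; }

        RcHandle<T> retain() {           // another owner: contended atomic increment
            count.incrementAndGet();
            return this;
        }

        void release() {                 // last owner triggers the free; in a deep
            if (count.decrementAndGet() == 0) {
                // structure this is where cascading deletions (and their pauses) start
            }
        }

        T get() { return value; }
    }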
That's probably true, but if the win is pauseless automatic memory management that doesn't require twice as much memory as using a GC, then it may be worth it.
I think, realistically, we'll have to accept that both tracing GC and reference counting (however smart) will always suffer from some drawbacks that manual memory management can solve. The reverse is also true.
However, while the amount of virtual memory needed for a tracing GC might seem large, it is not active memory. Reference counting on the other hand will significantly increase the amount of active used memory, especially if many small objects are common.
As for GC vs. manual memory management, both have their uses. For most systems I would argue that GC is the way to go since it is quite efficient and is much more productive. On the other hand, I actually like writing code that has manual memory management. My current interest is learning more Rust for that kind of development.
There are less fundamental issues as well, like high contention on the reference count in highly parallel environments, and more memory fragmentation compared to a compacting garbage collector. Also, memory allocation tends to be faster on a compacting garbage collected heap (simply increment the heap pointer instead of searching for a large enough available block). Additionally, freeing memory when things run out of scope cause challenges when optimizing code, like making it more difficult to optimize function invocations that otherwise would have been eliminated as tail calls with a garbage collector. In practice, in some cases, the cost of a tracing garbage collector might be well worth when you amortize it over the lifetime of the program, compared to keeping track of reference counts at runtime.
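The bump-pointer point above is easy to see in a sketch (a toy version; real TLAB allocation is essentially this fast path plus a slow path when the buffer is exhausted):

    // Toy bump-pointer allocator: on a compacted heap, allocation is just
    // "check space, advance a cursor", which is why it beats free-list search.
    class BumpAllocator {
        private final byte[] heap;
        private int top = 0;

        BumpAllocator(int size) { heap = new byte[size]; }

        int allocate(int bytes) {
            if (top + bytes > heap.length) return -1; // slow path: GC / compact / grow
            int addr = top;
            top += bytes;                             // the entire fast path
            return addr;
        }
    }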
There's also the issue that ARC vs. GC is a tradeoff between CPU overhead and memory overhead, and that in the scenarios Go was initially designed for (Google's servers) memory is much cheaper.
Upon re-reading, it's distressing to hear that the GC latency has been raised from "not significant" to "low."
Are there any well used GC that aren't tricolor?
Server GC, for example, typically tends to be less concurrent to improve throughput. Client GC is more responsive, but also sacrifices throughput. Since Go is positioned for server work, it is very interesting that they are choosing what are typically seen as client-side priorities.
Also, I think it's really important that pause times be as short as possible on the server as well. If they aren't, you'd probably see an unhealthy spike in 90+ percentile response times.
They say: "Limiting GC pause times is, in general, more important than maximizing throughput. Switching to a low-pause collector such as G1 should provide a better overall experience, for most users, than a throughput-oriented collector."
Not kidding, though, the latency hit from stop the world only gets worse scaling from 32 cores to 64. May as well call it "batch" GC. On the other hand, concurrent GC will keep the same latency but hopefully double throughput on such a machine.
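For what it's worth, trying the low-pause collector recommended above is just a couple of flags on HotSpot (the pause target is a hint to the collector, not a guarantee; app.jar is just a placeholder):

    java -XX:+UseG1GC -XX:MaxGCPauseMillis=50 -jar app.jar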