
Yes! While not quite mature yet, Graal is now an industrial-strength implementation of some heretofore-theoretical compsci ideas (Futamura projections [1]). You give it an interpreter and it gives you an optimizing compiler. It is, IMO, the biggest technological breakthrough in compilers in the last decade. I picture it as automatically specializing or "templatizing" (as in C++ templates), on a use-site basis, and doing so speculatively, even when proving that the transformation is correct is impossible. Other JITs do so as well, but Graal does it generally for any language expressed as a Truffle interpreter.

BTW, the amount of groundbreaking technological innovation going on in the Java space these days -- be it optimizing compilers like Graal, GCs like ZGC, or low-overhead production profiling (JFR) -- at Oracle (where I work) is quite phenomenal.

[1]: https://en.wikipedia.org/wiki/Partial_evaluation




Note that PyPy does something similar, but based on (meta-)tracing instead of partial evaluation.

See https://stefan-marr.de/papers/oopsla-marr-ducasse-meta-traci... for a discussion.


There was a lot of work on partial evaluation in the 1990s, including getting compilers via Futamura projections. But that research mostly withered. Speaking to some of the old timers, I get the impression that the core problem was that compilers obtained from interpreters via the Futamura projections didn't perform well. Indeed, in his 2010 PEPM keynote, Augustsson asked: "O, partial evaluator, where art thou?" [1]

Is there a brief summary of what Graal etc. is doing differently to overcome the 1990s-style partial evaluation performance problems?

[1] https://dl.acm.org/citation.cfm?id=1706356.1706357


Futamura projections were not theoretical-only before Graal.

There exist several partial evaluators and benchmarks for the use case of Futamura projections.

I will admit that I know too little about the internals of Graal. But I haven't seen any papers describing how the traditional issues with partial evaluation have been solved for the general case, so my guess is that all this still only works and yields good results under certain assumptions and designs. That is, just throwing generic interpreter code at the partial evaluator alone will not necessarily yield a compiler that is particularly good. [1]

The idea of general partial evaluation and the Futamura projections is, well, rather general: take code and a static input, and produce a program specialized to that input. When the code is an interpreter and the static input is a program, the result is compiled code, etc.
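To make the idea concrete, here is a minimal, heavily simplified Python sketch of the first Futamura projection: specializing a toy interpreter with respect to a fixed program yields straight-line "compiled" code. All names (`interpret`, `specialize`) are invented for illustration; a real partial evaluator does this automatically and far more generally.

```python
def interpret(program, x):
    # A tiny interpreter for a list of (op, operand) pairs.
    acc = x
    for op, arg in program:
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
    return acc

def specialize(program):
    # Partial evaluation by hand: the program is the static input,
    # so we unroll the dispatch loop into straight-line code.
    lines = ["def compiled(x):", "    acc = x"]
    for op, arg in program:
        if op == "add":
            lines.append(f"    acc += {arg}")
        elif op == "mul":
            lines.append(f"    acc *= {arg}")
    lines.append("    return acc")
    env = {}
    exec("\n".join(lines), env)
    return env["compiled"]

prog = [("add", 2), ("mul", 3)]
compiled = specialize(prog)
assert compiled(5) == interpret(prog, 5) == 21  # interpretive overhead gone
```

The hard part, as the thread discusses, is making this profitable for realistic interpreters rather than toys like this one.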

It's rather hard to deliver on the promise that this always works and in particular yields performant results on arbitrary code and inputs. And I don't think Graal delivers this either (not saying it has to).

It's possible that Graal can be considered the first industrial compiler for a full mainstream language that does a high degree of partial evaluation and allows tooling around it (i.e. Truffle for implementing language interpreters that can be optimized/become compilers). However, it would be disingenuous to claim it's the first compiler/partial evaluator to yield practical results from applying the Futamura projections. (You didn't quite claim that, but the previously "theoretical" part kind of goes there.)

[1] > Writing language interpreters for our system is simpler than writing specialized optimizing compilers. However, it still requires correct usage of the primitives and a PE mindset. It is therefore not straightforward to convert an existing standard interpreter into a high-performance implementation using our system.

> Our experience shows that all code that was not explicitly designed for PE should be behind a PE boundary. We have seen several examples of exploding code size or even non-terminating PE due to infinite processing of recursive methods. For example, even the seemingly simple Java collection method HashMap.put is recursive and calls hundreds of different methods that might be processed by PE too.

https://chrisseaton.com/truffleruby/pldi17-truffle/pldi17-tr...


Instead of "theoretical" I should have said "research-grade". People sometimes don't realize how big the gap is between a research finding and a product that actually delivers market benefits. Often the discovery is only 5% of the work, and the rest isn't just grunt programming, but deep applied research with big technical challenges. That Graal has shown the idea can actually yield competitive benefits in practice, running on real applications, and at acceptable costs (that are being continuously reduced) is a great achievement. Graal/Truffle is, obviously, not magic, and cannot convert just any interpreter into a competitive optimizing compiler (it doesn't even beat C2 on some important workloads, which is why it is not replacing it just yet), but as a first approximation of what it does, I think interpreter->optimizing compiler is a fair description. Even from a full-picture point of view, Truffle languages are competitive at a significantly reduced cost compared to other approaches we see in industry. Applied research that yields significant bottom-line benefits is precisely what a technological breakthrough is.


> I picture it as automatically specializing or "templatizing" (as in C++ templates), on a use-site basis, and doing so speculatively, even when proving that the transformation is correct is impossible

In isolation, this sounds like a description of how HotSpot works, no? (For the uninitiated: HotSpot's JITs make dangerous assumptions, optimise accordingly, and discard compiled objects if those assumptions ever break.)


> In isolation, this sounds like a description of how HotSpot works, no?

First of all, Graal is a compiler that serves as a HotSpot compiler when running as a Java bytecode JIT (HotSpot is the name of OpenJDK's JVM: it includes two compilers, with Graal serving as a third, an interpreter, several GCs, and various other runtime features). But yes, HotSpot's default optimizing compiler, called C2, also does speculative optimizations and deoptimizes on "mistakes". Graal is more general in the sense that it's easier to teach it various optimizations for many languages. C2 is very good, but because Graal is believed to be easier to maintain, it may match and surpass it one day (it already surpasses it for some important workloads). Project Metropolis investigates the possibility of eventually making Graal the default optimizing compiler in HotSpot (https://openjdk.java.net/projects/metropolis/)
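A rough Python sketch of the speculate-then-deoptimize pattern both compilers use: compile a fast path under an assumption observed at runtime, guard it, and throw the compiled code away when the guard fails. This is purely illustrative; all names here (`call_site`, `Deoptimize`, etc.) are invented, and real JITs do this at the machine-code level.

```python
class Deoptimize(Exception):
    """Raised when a speculative assumption is violated."""

def generic_add(a, b):
    # Fully general (slow) semantics -- the interpreter's fallback.
    return a + b

def make_speculative_add(observed_type):
    # Speculate that future arguments match the type seen during profiling.
    def fast_add(a, b):
        if type(a) is observed_type and type(b) is observed_type:
            return a + b          # optimized, type-specialized path
        raise Deoptimize()        # assumption broken
    return fast_add

def call_site(a, b, state={"compiled": None}):
    # First call: "profile" the argument type and compile speculatively.
    if state["compiled"] is None:
        state["compiled"] = make_speculative_add(type(a))
    try:
        return state["compiled"](a, b)
    except Deoptimize:
        state["compiled"] = generic_add   # discard speculative code
        return generic_add(a, b)

assert call_site(1, 2) == 3          # compiles int-specialized code
assert call_site("a", "b") == "ab"   # guard fails -> deoptimize, fall back
```

The guard is the "dangerous assumption" mentioned above; deoptimization is what makes the speculation safe even when the transformation can't be proven correct ahead of time.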



