Smalltalk with the GraalVM (javaadvent.com)
109 points by mpweiher 51 days ago | 33 comments



Just a clarification: when used as a JIT, Graal does not compile languages defined with the Truffle framework into Java bytecode. Rather it compiles both Java bytecode and Truffle languages directly into machine code (in this case, the Truffle language is Smalltalk bytecode), which it then installs in Hotspot's code cache. The diagram places the Graal compiler above Hotspot when it would be more appropriate to place it as a component inside of Hotspot, where it replaces the C2 compiler (Java bytecode to machine code), as well as directly compiles Truffle languages into machine code.

Graal can also be used as an AOT compiler ("GraalVM Native Image") -- from Java bytecode (only!) to a native executable that runs independently of Hotspot. If the Java program so compiled is itself a Truffle interpreter, the result is a native interpreter for the Truffle language, that contains an optimizing JIT (for the Truffle language only).

To make things even more interesting, when Graal is used as a Hotspot JIT, because Graal itself is written in Java, it must either compile itself (bootstrapped by Hotspot's Java bytecode interpreter) -- in which case it is both a Java component run by Hotspot as well as a component of Hotspot -- or it could be AOT-compiled into a native shared library, which is then loaded by Hotspot like any other shared library.


Yes, you're right. I've updated the diagram accordingly.


As of Graal 19.1, libGraal, a pre-compiled C2 JIT compiler, is bundled with GraalVM.

You can use it by adding the following JAVA_OPTS:

-XX:+UseJVMCICompiler -XX:+UseJVMCINativeLibrary
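In full, that looks something like the following (a sketch, assuming a GraalVM JDK is on your PATH; myapp.jar is a placeholder for your own application):

```shell
# Run a Java app with the Graal JIT enabled, loading the precompiled
# libgraal shared library instead of bootstrapping Graal on the bytecode
# interpreter at startup.
java -XX:+UseJVMCICompiler -XX:+UseJVMCINativeLibrary -jar myapp.jar
```

Using the native library noticeably reduces warmup, since Graal no longer has to compile itself before reaching full speed.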


> libGraal, a pre-compiled C2 JIT compiler, is bundled with GraalVM

libgraal isn't a C2 JIT compiler - it's (the clue is in the name here) a Graal compiler.

And the person you're replying to already mentioned it:

> or it could be AOT-compiled into a native shared library, which is then loaded by Hotspot like any other shared library


The embedded video showing a Smalltalk inspector being used to examine Python objects - that put a big smile on my face.


Happy to hear that! If you'd like to see some Smalltalk tools rendered in Java Swing, check out this demo [1]. More details on this are in the blog post on our Polyglot Programming seminar [2].

[1] https://www.youtube.com/watch?v=If7xNBYA0Bk [2] https://medium.com/graalvm/hpi-polyglot-programming-seminar-...


Hi everyone, I'm the author of the blog post. Feel free to let me know if you have any questions or suggestions!


Smalltalk and especially Squeak is something I'll always love, congratulations on such an awesome project!

One question I have: does GraalVM open up any possibilities for extending Smalltalk into a more multi-threaded domain using its capabilities and those of the JVM? I guess the one thing that has always held me back from doing more Smalltalk is watching that single thread peak at 12.5% of my machine's capacity, with the other 7 cores staying quiet. Or is the "single-threadedness" of Squeak too deeply baked into the implementation to be worked around with the JVM?

I know there were some multi-core experiments a few years ago with the RoarVM, but nothing seemed to make its way into the main implementation.


Thanks!

RE your question: Squeak's scheduler (also written in Smalltalk) would need to be extended with support for multi-threading. Since that requires a significant amount of work, it's probably out of scope for the project at this point (unless someone is interested in working on it, of course). However, it's already possible to do multi-threading from Smalltalk through some other language, such as Java. We have some students doing just that with Ruby, parsing HTML content in multiple threads.
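To illustrate the idea, here is a purely illustrative plain-Java sketch of that fan-out pattern (this is not GraalSqueak API; countTokens is a made-up stand-in for real HTML parsing):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical host-side helper: distribute per-document work across a
// thread pool, the way the students fan out HTML parsing to Ruby workers.
public class ParallelWork {
    static int countTokens(String doc) {
        // Stand-in for real parsing: count whitespace-separated tokens.
        return doc.isBlank() ? 0 : doc.trim().split("\\s+").length;
    }

    static int totalTokens(List<String> docs) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        try {
            List<Callable<Integer>> tasks = docs.stream()
                    .map(d -> (Callable<Integer>) () -> countTokens(d))
                    .toList();
            int total = 0;
            for (Future<Integer> f : pool.invokeAll(tasks)) {
                total += f.get(); // each document was processed on a pool thread
            }
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(totalTokens(List.of("a b c", "d e", "f")));
    }
}
```

One task per document fits the polyglot setting well, since Truffle language contexts are (for most languages) accessed from one thread at a time.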


That's interesting work, thank you very much. I would be very interested in the performance gain of the GraalVM JIT compared to the original Squeak VM (Cog) or e.g. VisualWorks, the performance of which is comparable to CPython (https://benchmarksgame-team.pages.debian.net/benchmarksgame/...). I'm aware of another publication which concludes that the performance gain of a Truffle-based solution is about the same as that of an RPython-based one. Did you also look at this? And finally: couldn't you improve performance even more by using a more optimized bytecode set (instead of the Blue Book one, as was done e.g. here http://strongtalk.org/downloads/bctable.pdf)?


We compared the runtime performance of GraalSqueak with OpenSmalltalkVM (formerly Cog) and RSqueak (RPython-based VM) in Figure 4 of our MPLR’19 paper [1] for the first time. Since then, we've worked on a couple of performance optimizations. So at the moment, I think GraalSqueak is only slower in the DeltaBlue benchmark... in all others, it's (often significantly) better than OpenSmalltalkVM. However, please also note its limitations, which we discuss in Section 5.2, especially with regard to Smalltalk's interrupt handler and the partial evaluation performed by the Graal compiler.

So far, I'd say our observations wrt RPython match the ones discussed in "Tracing vs. Partial Evaluation" [2]: Language implementers have to do more work in Truffle, but probably get better peak performance in return (not talking about warmup or memory consumption here).

RE "a more optimized bytecode" set: Sista [3] (an extension of the OpenSmalltalkVM) is doing something like that, too. I'd guess performance would be more or less the same using GraalVM as Truffle produces highly specialized code. But, of course, this would need to be benchmarked. The advantage of the Sista approach is that it's managed on the image level, so specialized versions of methods are persisted as part of the image. AFAIK the GraalVM team is working toward persisting compiled code caches, which is kind of similar but on the level of the language implementation framework.

Hope this answers your questions!

[1] https://fniephaus.com/2019/mplr19-graalsqueak.pdf [2] https://stefan-marr.de/papers/oopsla-marr-ducasse-meta-traci... [3] https://hal.inria.fr/hal-01596321/document


Thank you for your detailed, interesting answer. I look forward to reading the referenced papers. At the moment I am building frontends for LuaJIT and would like to compare performance with analogous implementations in RPython and Truffle. Maybe I will build a Smalltalk-80 frontend and then compare it with the implementations you mentioned.


> ... VisualWorks the performance of which is comparable to CPython...

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


I already posted the link in my comment.


No, I posted a different link.


In what respect is it different? What did you intend to show?


You posted a link to normalized boxplot averages which-programs-are-fastest.html for 28 different language implementations.

I posted a link to side-by-side measurements vw-python3.html of individual Smalltalk and Python programs.


Ok, I see. It's a pity Pharo is not on the diagram.


You should make the measurements you want to see, and publish them in the way you want to see them.


It's not my site, and the proponents of Pharo and Smalltalk should have a vested interest in their language being presented well.


You're the one saying what you think should be shown.


Does anyone outside of Oracle actually use Graal?


Twitter uses Graal in production right now - if you send a Tweet it goes through Graal.

Goldman Sachs use Graal for their Slang language experiments.

I use Graal at Shopify.

There are others.


And there are other products based on GraalVM. For example, the Gluon Client plugins can compile Java apps to iOS (https://gluonhq.com/java-on-ios-for-real/) with GraalVM Native Image.

There are also a bunch of Java frameworks (e.g. Quarkus, Helidon, Micronaut; see "In the Highlights" section on the bottom of https://www.graalvm.org/ for more) that come with special support for GraalVM.


Interesting. Is it the cross-language support or are there other reasons to use it?


I'm not aware of anyone using GraalVM for its language interoperability in production. However, it's used by GraalVM languages internally for supporting things like C extensions and FFIs.

So far, there's been a lot of interest in the performance improvements that the Graal compiler provides [1] and GraalVM Native Image [2], the toolchain that allows the compilation of Java apps and Truffle languages into binaries.

But, there's a lot more you can do with GraalVM [3].

[1] http://www.ssw.uni-linz.ac.at/Research/Papers/Stadler14/Stad... [2] https://www.graalvm.org/docs/reference-manual/native-image/ [3] https://www.graalvm.org/docs/why-graal/
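For a sense of that Native Image workflow, a minimal sketch (assuming a GraalVM distribution with the native-image tool installed; HelloWorld is a placeholder class):

```shell
# Compile a Java class ahead of time into a standalone native executable
# that runs without a Hotspot JVM at run time.
javac HelloWorld.java
native-image HelloWorld
./helloworld
```

The resulting binary starts in milliseconds, which is a large part of why frameworks like Quarkus and Micronaut target it.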


For example, NextJournal is using it for React SSR [1]. There are other companies using GraalVM's JS engine for similar functionality.

The Dutch Police is using GraalVM's interop between Scala and R for their data science [2].

I'm sure there are other projects which explore the benefits of having a polyglot runtime, would love to hear about those efforts.

[1] https://nextjournal.com/kommen/react-server-side-rendering-w... [2] https://vimeo.com/360837119


Twitter use it simply for performance in Scala.

Goldman Sachs use it as an easy way to implement languages, to integrate existing C code, and possibly integrate Python in the future.

I use it for Ruby performance.


This is very cool! Thank you for your work and the inspiration it creates to do similar things.


Very cool! Congratulations. :-)


Thanks, glad you like it!


Is Graal finally there? Years ago there was a lot of promise that it would be THE “one true VM” that allowed Python and Ruby to fly. But I don’t really hear much about it on those fronts. Where are we at?


Twitter is using it; they are seeing gains of 10% for Scala.

It is a big project supporting many runtimes, and it already seems production-ready.



