The assertion in this video, https://www.youtube.com/watch?v=JUJ85k3aEg4 (from the es4x author), is that node programs hop across the http/libuv/v8/etc. boundaries all the time, and that's where the majority of the overhead is, not in actual business/program logic.
So, es4x sitting on graal brings all of those http/event loop/js runtime infra projects into a single optimizing VM.
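Roughly, the shape is something like the sketch below. This is my own illustration, not es4x's actual wiring: plain Vert.x for the HTTP server and event loop plus a GraalJS polyglot context for the JS, all in one JVM process; es4x hides this plumbing behind its own tooling.

    import io.vertx.core.Vertx;
    import org.graalvm.polyglot.Context;
    import org.graalvm.polyglot.Value;

    public class SingleVmSketch {
        public static void main(String[] args) {
            // HTTP server + event loop (Vert.x) and JS engine (GraalJS) live in one JVM,
            // so handling a request never crosses a native/engine boundary per hop.
            Vertx vertx = Vertx.vertx();
            Context js = Context.create("js");
            Value handler = js.eval("js", "(path) => 'hello from js: ' + path");

            vertx.createHttpServer()
                 .requestHandler(req -> req.response().end(handler.execute(req.path()).asString()))
                 .listen(8080);
            // Note: a GraalJS context must not be entered by two threads at once,
            // so real code would keep one Context per event-loop thread.
        }
    }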
My memory is ~fuzzy on the details, but I believe I got our apollo server running on es4x a few months back, and the latency was ~double node's, which was still impressive IMO, but I didn't see the amazing gains the author reports. Not sure why.
Hi, I'm the author of ES4X. I'd love to check your graphql test. I've no experience with Apollo, so if you could share a test/benchmark it would be great to see where the bottlenecks come from.
the author of [1] basically says the same thing. in terms of io/throughput you gotta side-step Node (but v8 itself is plenty fast).
he's famous for trolling the Node & Deno repos (i think he's banned from both). he also refuses to participate in the benchmarks (despite being able to score very well) because he says their methodology is flawed.
I integrated a really simple JS runtime into a C# project once and found the same thing. V8 was orders of magnitude slower due to the interop overhead. Even though the IronJS compiler was very simplistic, it was just so much faster to do it inside the CLR and let the MSIL JIT optimize to the degree it was able. An example of “worse is better” I suppose.
I'd say it's an example of optimizing the right thing. Business code is relatively short and uncomplicated, compared to the interop code. So optimizing the slowest part (but not the most obvious part) gave good results.
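Same principle on the JVM side: GraalVM's polyglot API keeps the guest JS on the host's heap and under the host's JIT, so a call from Java into JS is a plain in-process call instead of marshalling across an engine boundary. A minimal sketch (GraalVM polyglot API, not the IronJS/CLR setup described above):

    import org.graalvm.polyglot.Context;
    import org.graalvm.polyglot.Value;

    public class InProcessInterop {
        public static void main(String[] args) {
            try (Context ctx = Context.create("js")) {
                // Guest function and host caller share one heap and one optimizing compiler;
                // arguments and results are not serialized across a process/native boundary.
                Value discount = ctx.eval("js", "(price, pct) => price - price * pct / 100");
                System.out.println(discount.execute(200.0, 15).asDouble()); // 170.0
            }
        }
    }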
JS inside C# being faster than Node is great news!
AIUI, es4x essentially replaces Node with Eclipse Vert.x, so the main question for me is how it compares to running on a normal JVM. Do you have any figures there?
It does work on stock OpenJDK 11 with similar performance (but I don't have numbers to show). What happens in this case is that we load the Graal compiler to replace the default JIT, but it won't let you use other languages, as only GraalJS currently runs in this mode.
Oh, cool! I tested clojure and graal some months back and, at the time, it was a lot clunkier to get working and only worked with clojure 1.9. This looks much simpler and works with 1.10, awesome!
As well as several internal tools. It mostly works fine. Graal is actually the default JDK on my development machine.
If there's one thing I'd fix, it's the ability to bake dynamic libraries into the single binary. E.g. if you want good elliptic curve TLS, you need sunec (.so/.dylib/.dll) shipped alongside, and part of the point of native-image for us is the ability to ship a single binary.
Indeed you need to buy the enterprise version for use in production. This is funding the project so that the majority of the features can stay open source.
Maybe I am reading that issue incorrectly, but it seemed there was some back and forth on startup time penalties and what constituted a valid benchmark to compare the two systems.
If all they wanted was ES6 syntax, why not use Babel? But I guess what they really wanted was to get off their deprecated JS runtime (Nashorn).
Great to see GraalJS is a realistic option. This polyglot VM idea is fascinating, though it's not likely to go anywhere for my personal use until GraalVM Python gets some traction. I check in on it now and then - every time it hits HN - but it seems to be going nowhere. Anyone have any experience with that?
GraalVM Enterprise costs $18/mo/core[1]. Every machine running GE needs a license so a 16-core production server, 4-core staging box and 4-core dev laptop will cost you $18*24 = $432/mo.
GraalVM has several parts. If you want to run polyglot apps directly, yes, you need to install GraalVM. If you compile your JVM apps to native, no, you get a standalone binary.
It needs to support wasm on the front and the back, as well as RISC-V on the back (and maybe on the front). When it does those things, it will be The Ring.
GraalVM/SubstrateVM will compile to native ONLY the code that is actually reachable. So the native executable is MUCH LESS bloated than the JAR running on a regular JVM.
Customers running js on your server in a Java VM doesn't sound very secure. I could be wrong but when I see Java I get the suspicion that user code and privileged server code are running in the same process. Of course they could have a properly sandboxed java process and are just using Java because that is what they are familiar with.
That perspective is somewhat dated, to be blunt. A modern Java deployment would be identical, security-wise, to node, with one app per Java process running with the minimum of privileges.
I have no worries, because I make sure to sacrifice chickens to my Tomcat apps each day. Although going into the cloud is making it trickier, chickens generally can't fly that high.
It is unsafe to mix trusted and untrusted code in the same process. This is why Chrome runs the renderer in a separate process. My concern is that using Java to execute JS increases the probability they are doing this bad mixing. They could be doing it safely but it increases my suspicion.
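For what it's worth, when the JS is hosted via GraalVM's polyglot API the host decides what the guest can reach, and it's deny-by-default. A minimal sketch of a locked-down context (this limits in-process reach; it's not a substitute for the process isolation you're describing):

    import org.graalvm.polyglot.Context;
    import org.graalvm.polyglot.PolyglotException;

    public class RestrictedGuest {
        public static void main(String[] args) {
            // No host class access, no IO, no native access, no thread creation
            // unless the embedder explicitly grants it.
            try (Context ctx = Context.newBuilder("js").allowAllAccess(false).build()) {
                ctx.eval("js", "1 + 1");                          // fine
                ctx.eval("js", "Java.type('java.lang.Runtime')"); // throws: host classes unreachable
            } catch (PolyglotException e) {
                System.out.println("guest blocked: " + e.getMessage());
            }
        }
    }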