
Eclipse Vert.x goes Native - Samtaran
https://vertx.io/blog/eclipse-vert-x-goes-native/
======
simias
This is impressive, although the cynic that I am can't help pointing out that
we must have gone wrong somewhere when a "hello world" HTTP server application
"only" taking 40MB of RES is seen as an improvement. That's more than 40
million bytes to open a socket, receive a few bytes, parse them and then write
a few bytes back. The overhead of modern software development is insane.

Of course I'm not being entirely honest: a lot of this overhead is
probably a constant and wouldn't grow linearly for more complex applications.

~~~
pavlov
Every application comes with an entire operating system's worth of code
nowadays.

Download a desktop chat client or a little menu bar utility — each app
includes its own copy of Electron at 150MB.

Make a "Hello world" web client app using React with the recommended "create-
react-app" toolchain — your project will first download nearly 200MB of code
via npm (and each project you start will have its own copy of this stuff).

Make a web server with a framework like Rails, Vert.x or whatever — be
prepared to spend hours downloading dependencies and setting up your local
environment just so.

I don't have any good ideas for how to break this cycle. By 2028, I expect
every to-do app will include its own 4GB copy of Ubuntu because it's
convenient.

~~~
koolba
> I don't have any good ideas for how to break this cycle. By 2028, I expect
> every to-do app will include its own 4GB copy of Ubuntu because it's
> convenient.

2028? More like 2015. It’s already like that for much of what runs in Docker
containers.

~~~
gnur
Kind of, except that the base ubuntu image is more like 70MB.

~~~
pavlov
I was thinking of the Ubuntu desktop. Each desktop utility could be its own
virtual machine, and it wouldn't be so far removed from what already happens
with Electron.

~~~
homarp
see [https://www.qubes-os.org/](https://www.qubes-os.org/) and
[https://www.qubes-os.org/news/2018/01/22/qubes-air/](https://www.qubes-
os.org/news/2018/01/22/qubes-air/)

------
kodablah
Missing from the limitations is the lack of Windows support (though they say
it's coming). Also, I would caution anyone against putting their eggs in this
Oracle basket. This is not a community project; we know who the owner is, and
they do have a premium version. Sure, if you have existing JVM stuff and want a
native app, go ahead, just don't build a reliance on it. Otherwise, even if
you believe Oracle will be good stewards of the project (which I do believe),
it's still OK to avoid it on principle because of the owner's other activities.
Of course, stances like these can't be 100% consistent across everything
(especially legacy or de facto standards), but it's something to keep in mind.

------
djsumdog
I hadn't heard of Vertx until my most recent job where we use it for our
platform. We currently deploy in openjdk docker containers.

This seems neat, but I'm wondering about dependencies. Does Graal rebuild all
your dependent jars to be native? What if you have unsupported things like
reflection in a dependency? I'm looking through their website and can't seem
to find any answers.

We've already run into dependencies for Vertx that puke on Java 9
(deprecated/removed APIs) and are currently still on Java 8. This seems nice,
but I have a feeling that, for production, you'd end up writing a lot around
GraalVM.

~~~
nwatson
The "limitations" link in the posted page has a section on "Reflection", [1].
It says: "Support Status: Mostly supported ... Individual classes, methods,
and fields that should be accessible via reflection must be specified during
native image generation in a configuration file ..."

I'd assume that libraries your app depends on can be reached via reflection
too, and that their own calls to the reflection APIs should work, as long as
the classes that your code or the library code wants to reach are included in
the Java --> Native conversion configuration files.

[1]
[https://github.com/oracle/graal/blob/master/substratevm/LIMI...](https://github.com/oracle/graal/blob/master/substratevm/LIMITATIONS.md#reflection)
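
For dependencies that use reflection, the affected classes have to be listed
in a reflection configuration file passed to the image builder (via
`-H:ReflectionConfigurationFiles=...`). A rough sketch of what such a file
looks like, per the linked limitations doc; the class and method names here
are made-up placeholders:

```json
[
  {
    "name": "com.example.ReflectiveBean",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  },
  {
    "name": "com.example.Other",
    "methods": [
      { "name": "frobnicate", "parameterTypes": ["java.lang.String"] }
    ]
  }
]
```

The same format covers classes reflected on by your dependencies, as long as
you know (or can trace) what they look up at run time.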

------
AndrewSChapman
Excellent! I've started using Vert.x with Kotlin instead of PHP for building
RESTful APIs and it's wonderful. Event driven, strongly typed and performant.
Looking forward to trying this out.

~~~
nobleach
Did you find the documentation to be a bit painful? I gave it a shot for API
endpoints too, and was very frustrated that Kotlin felt like a fourth-class
citizen... and the existing docs displayed one of my biggest pet peeves with
most Java-esque docs: "we won't mention the imports, you'll probably have an
IDE that figures those out magically". As a Vim user, that's a bit more
difficult.

------
tannhaeuser
AOT compilation for Java on its own is just a technique to improve startup
time for command-line apps versus actual native apps; AOT-compiled shared
libraries still need JVM infrastructure for GC.

Also, going all-in on async doesn't seem that helpful on the JVM where the
vast majority of existing code (= what makes the JVM valuable) uses
synchronous I/O.

So what's the point of this? To start a webserver really fast? I always
thought the value of vert.x was to become part of a node.js/CommonJS runtime
for the JVM (like what Oracle and RedHat were trying a couple of years ago).

~~~
thermodynthrway
I've messed with async on JVM. In my experience, it doubles speed at most for
CRUD apps with heavy DB access. Async is only useful if you're thread starved,
and most DB calls are fast enough that it never happens.

The one place it makes a huge difference is when you're waiting on other,
possibly slow, services like third party REST calls. If you have 100 threads
and each call takes 1000ms you will run out of threads at 100 req/sec which is
quite low for the JVM.

Using async in these situations allows thousands of threads to be paused while
your 100 still do work, allowing you to handle maybe 10k req/sec even with a
slow partner service.

A lot of JS programmers don't understand the relation between thread
starvation and async; they act like async is magic. If you're not thread
starved (everything you do is pretty quick), then async is useless.
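
The 100-threads-at-1000ms arithmetic above is essentially Little's law. A
tiny sketch of that math (the class and method names are made up for
illustration; the numbers are the ones from the comment):

```java
// With blocking I/O, a fixed pool caps throughput at threads / latency,
// no matter how idle the CPU is. Async frees the thread while waiting,
// so the cap moves to how many in-flight requests you can hold instead.
public class ThroughputSketch {

    // Maximum requests/sec a pool of blocking threads can sustain.
    static double maxBlockingRps(int threads, double latencySeconds) {
        return threads / latencySeconds;
    }

    public static void main(String[] args) {
        // 100 threads, each parked ~1000ms on a slow third-party call:
        System.out.println(maxBlockingRps(100, 1.0)); // caps at 100 req/sec
    }
}
```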

~~~
apta
> I've messed with async on JVM.

What library or framework did you use? Using ExecutorService or similar would
still have those 100 threads blocked even if they run in their own future,
correct? The only thing it would allow is queuing the other incoming requests,
but it does not time slice or switch to another request upon hitting an IO
request.

~~~
thermodynthrway
You can use fiber libraries like Quasar as your executor, much like what Go
uses for its regular threading.

The double edged sword of Java is that you can do anything. But the learning
curve is huge because there's so much crap out there
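
For what it's worth, the lightweight-thread model Quasar implements as a
library eventually shipped in the JDK itself as Loom's virtual threads (Java
21+). A minimal sketch of the same idea, blocking-style code that still
scales, using the stock JDK API rather than Quasar (the numbers are
arbitrary):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Ten thousand tasks that each block on sleep; with platform threads this
// would need 10k OS threads, but virtual threads park cheaply instead.
public class VirtualThreadSketch {
    public static void main(String[] args) {
        AtomicInteger done = new AtomicInteger();
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                pool.submit(() -> {
                    try {
                        Thread.sleep(10); // blocking call; parks the virtual thread
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    done.incrementAndGet();
                });
            }
        } // try-with-resources: close() waits for all submitted tasks
        System.out.println(done.get());
    }
}
```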

------
marktangotango
Great, but what's the GC story? Does Substrate VM include a generational GC?

~~~
tom_mellior
Yes, according to [https://nirvdrum.com/2017/02/15/truffleruby-on-the-
substrate...](https://nirvdrum.com/2017/02/15/truffleruby-on-the-substrate-
vm.html):

"... the SVM has a new generational garbage collector (GC) that’s different
from the JVM’s. The SVM ahead-of-time (AOT) compiler uses a 1 GB young
generation size and a 3 GB old generation size by default."

------
specialist
My bro and my bestie worked on a Vert.x project. So I got to hear all about
it.

I can only comment thru comparison. Due to multiple poor life choices, I'm
currently stuck maintaining multiple large nodejs code bases.

I'm "maturing" our code base from callbacks to promises (futures) to
async/await as able.

It's terrible.

Whatever the future of async I/O programming is, it's not events, promises,
or callbacks. I've been reading about and playing with actors, CSP, and
Erlang/Elixir/Phoenix. If I were doing greenfield work, that's where I'd
place my bets.

~~~
BenoitP
There is a great approach in Quasar [1] where you write simple blocking
code, and the runtime (bytecode manipulation via a Java agent) unmounts and
remounts thin contexts on the fat thread you are in. A light thread, if you
will. No events/promises/callbacks. It does require unmounting-aware APIs,
though, making you incur the usual fat-thread mounting/unmounting cost when
they are not available.

The author of this project is currently implementing the base element of this
approach (continuations) in the JVM [2], probably along with the required
rewrite of the Java standard library. I'm very excited about this!

You could start writing code in Quasar now; IIRC they said it will use these
continuations when they are made available. I would not be surprised if it is
one of the first frameworks to make use of them.

[1]
[http://docs.paralleluniverse.co/quasar/](http://docs.paralleluniverse.co/quasar/)

[2] [http://cr.openjdk.java.net/~rpressler/loom/Loom-
Proposal.htm...](http://cr.openjdk.java.net/~rpressler/loom/Loom-
Proposal.html)

~~~
bpicolo
> unmounting-aware APIs

Does that mean libraries need to be rewritten to support it? In my experience,
important IO-driven libraries like database access/ORMs tend to be extremely
poorly supported for non-language-native concurrency strategies, or even
concurrency strategies added late in a language's lifecycle.

~~~
BenoitP
> Does that mean libraries need to be rewritten to support it?

Yup. Here is the list of supported libraries [1].

> even concurrency strategies added late in a language's lifecycle

Quasar code uses a regular Java Exception as a trick to stop execution and
unmount, to sort of piggyback on the language's features. Fibers do also
implement the Thread interface; I think what you pointed at was indeed a
concern of the author. I don't know what the Loom initiative will look like.

If you want to see how a library is adapted, here is JDBC [2].

[1] [http://docs.paralleluniverse.co/comsat/#getting-
started](http://docs.paralleluniverse.co/comsat/#getting-started)

[2] [https://github.com/puniverse/comsat/tree/master/comsat-
jdbc/...](https://github.com/puniverse/comsat/tree/master/comsat-
jdbc/src/main/java/co/paralleluniverse/fibers/jdbc)

------
randomsearch
Does native code execution outweigh the loss of JIT optimisation?

~~~
gabcoh
If I understand it correctly, the JIT compiles hot spots down to native code,
so if the entire executable is already native it must be just as fast or
faster.

~~~
jhomedall
JIT compilation makes decisions about how to optimize the generated code
based on run-time analysis of the program. AOT compilation doesn't have
access to that information, which can lead to slower code.

However, because AOT compilation isn't delaying the execution of the program,
it is allowed to take much more time to optimize the output.

~~~
pjmlp
You can AOT compile with PGO (Profile Guided Optimization) though.
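
For context, GraalVM's native-image supports this workflow: build an
instrumented binary, run it under a representative workload to collect a
profile, then rebuild with the profile fed back in. A rough sketch of the
documented flags (a GraalVM Enterprise feature at the time; `app.jar` and
`com.example.Main` are placeholders):

```shell
# 1. Build an instrumented image
native-image --pgo-instrument -cp app.jar com.example.Main

# 2. Run it under a representative load; it writes default.iprof on exit
./com.example.main

# 3. Rebuild, feeding the profile back into the AOT compiler
native-image --pgo=default.iprof -cp app.jar com.example.Main
```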

~~~
Bjartr
And if you can record profiling data from every machine the code will run on,
you might be confident that no JIT could do better. But when a new processor
comes along (architecture-compatible, yet unavailable at compile time), a JIT
can still tune for things like its cache sizes or new processor features,
while the AOT executable is stuck, unable to take advantage of them.

~~~
pjmlp
Agreed, but there are ways around it.

For example, Android P will upload PGO data into the store, which is then
distributed across all devices, and fed into the on-device AOT compiler.

~~~
Bjartr
I would argue that is still a JIT in the spirit of this discussion so far.
That, or we need another term to disambiguate AOT (pre-distribution) from
AOT (post-distribution).

~~~
pjmlp
It is called PGO.

