Hacker News
Using jlink to cross-compile minimal JREs (jakewharton.com)
96 points by rabenblut on Jan 17, 2023 | hide | past | favorite | 64 comments



These days, all Java runtimes are created with jlink, including the one bundled in the JDK, so it's worth taking a few minutes to learn how to do it yourself for a custom image. The result is not only drastically smaller but also more secure, as the potential attack surface is much smaller.

If your application is modularised, it can become a part of the image, but, as the article shows, creating a custom runtime is easy and recommended even if your application is not modularised.

BTW, the JDK contains not just a Java runtime but also development tools, as well as an additional copy of the entire class library (this copy, in jmod files, is stored in a format that jlink uses as its input; i.e. it's there only to allow generating new runtime images). Using the entire JDK as a runtime is a real waste of space since, among other things, it contains all libraries twice.

One minor comment, though. jlink produces Java runtime images or, in short, Java runtimes -- not a JRE. The name JRE refers to a particular kind of Java runtime from a bygone era, when there was a global Java runtime environment used by applets (and Web Start applications). When applets and Web Start were removed, the JRE, and the very concept of one, went away with them. Some people still use the anachronistic term JRE to refer to a Java runtime (and some companies distribute pre-linked Java runtimes and call them JREs), but the real JRE is gone.


How is modularization progressing in practice?

Stuff like Hibernate, Spring, the big libraries.

How easy is it for the average greenfield enterprise to start as a modular app?

How easy is it for the average brownfield enterprise app started 10 years ago to convert to a modular app?


First, just to repeat, you don't need to modularise your app to use jlink. You modularise to enjoy additional security and evolution benefits.

> How easy is it for the average greenfield enterprise to start as a modular app?

As easy as a non-modularised one.

> How easy is it for the average brownfield enterprise app started 10 years ago to convert to a modular app?

This one is harder to answer because you can only modularise (i.e. encapsulate in modules) code that is actually modular (cleanly separates API and implementation into different packages, has no circular dependencies among the components that are to become modules, etc.). So it depends on how modular your codebase already is. In many situations it could require a not-insignificant refactoring, and so it's only worth it if you really want the best security and encapsulation (which was the case for the JDK itself, which is now fully modularised). In other situations it could make sense to leave things alone and only modularise new project components (modules and code outside modules can mix).

Either way, modular or not, using jlink to produce a custom runtime is a good idea!


It's getting better to be sure.

It's definitely taken time for the libraries to catch up, and 17 coming out put the kibosh on the workarounds folks relied upon when using libraries that were not properly modularized.

From a JakartaEE perspective, that's a different world. The containers were modular runtimes of their own sort anyway, many are built upon OSGI, etc. I don't think mainstream EE projects are using the module system.

As a developer at the end of the line, building personal and in-house projects, I don't personally get much value from the module system. It's extra bookkeeping to manage, and it seems as much a game of whack-a-mole with the module-info file as any deterministic plan. You mostly learn you have a problem when something won't load, making the whole thing rather reactive.

I don't really care about the JDK/JRE size, I don't distribute projects that way. They're just jars, or they're WARs for the container. If you have to send up the entire runtime, all the time, then sure there can be value. But as mentioned, you don't need a modular project to use jlink.

So the JDK itself certainly benefits from this. But it's imposed from above onto everything, and I'm not sure that everyone else gets real value from it.


It's a real shame that modularization focused almost entirely on solving the JDK vendor's problems rather than solving the application author's problems.

Specifically, modularization can and should have been taken as a chance to allow multiple versions of the same dependency to exist within a single scope within an application. At present, this is sort of achievable by taking control of classloading and invoking defineModulesWithManyLoaders, but not many people will do this and key features are missing (it still does not allow a single module to directly reference two versions of the same dependency).


> Specifically, modularization can and should have been taken as a chance to allow multiple versions of the same dependency to exist within a single scope within an application....

Modules do it to the best extent possible, but they don't make it prominent because, quite simply, it's impossible to do in a way that's good enough to be advisable. In order for a library to be properly isolatable, it has to be written in a specific way and be quite limited. Any kind of input or output, including the reliance on configuration outside of the source code can immediately make the library susceptible to horrible problems when multiple copies of it exist in the same process, even if isolated. So modules allow you to use class loader isolation if you really need it, but they don't try to pretend it can actually work in general.

The main (though not only) thing that modules exist to provide is strong encapsulation, which is crucial for security and makes code evolution much easier.


It seems to work just fine without anything horrific happening in Rust. Direct dependencies can be renamed and indirect dependencies simply don't clash with each other.

While libraries directly parsing some sort of conf file may be a little bit of a pattern on the JVM, it doesn't have to be... And the libraries could certainly define a new pattern to allow the user to define different versions of their configs for different library versions.

I agree that, as it stands, modules enhance security by providing a stronger version of the visibility system. IMO that is important, but it's such a low bar to shoot for, since it doesn't really improve the situation much more than everyone just following the rules would.


> It seems to work just fine without anything horrific happening in Rust. Direct dependencies can be renamed and indirect dependencies simply don't clash with each other.

It suffers from the same problems -- doesn't work if a library has an implicit assumption it's the only copy, and does otherwise. It's the best anyone can do, and modules do it as well as it can be done, it's just not something you want to encourage because there are too many cases where even the best is just not good enough. I don't think that instantiating multiple instances of a library whenever there's a version conflict is the right default. It turns an accidental configuration situation into a significant runtime choice.

Still, if you want a tool that makes this best-effort attempt easier, take a look at layrry: https://github.com/moditect/layrry. It basically exposes the module system's "multi-versioning" as configuration rather than something you do programmatically.

> IMO that is important but it's just such a low bar to shoot for since it doesn't really improve the situation much more than everyone just following the rules

Actually, it's something that can only be done deep in the VM. Modules exist to provide some runtime guarantees that nothing else can (except the SecurityManager, but that was so problematic that few used it and fewer still did it correctly). Without modules, there can be no strong security guarantees made.


As far as I can tell, Layrry does not allow two versions of the same library to be in scope at the same time.


I don't know what you mean by "in scope", but modules do allow multiple versions to exist in the same process in different layers. That's what the module system's entire layer mechanism is about (internally, it uses class-loader isolation), and Layrry configures module layers (that's where it gets its name). It's in one of the first examples: https://github.com/moditect/layrry

So modules absolutely do support that by design, but again, for that to work well you need to really know what that duplicated library does (same goes for Rust or any language), and it's not a recommended practice in general because it can fail in really horrible ways if you're unaware of some implicit assumptions in the library.


Even without being modular, switches like --strip-debug --no-man-pages --no-header-files net you a fair amount of space savings.


> Some people still use the anachronistic term JRE to refer to a Java runtime

Starting with Oracle which, on its Java download page, offers the choice to download the "JRE for consumers" so there's that.


That actually is the JRE for Java 8, the last JRE.


It's a pity that system wide JREs are not a thing anymore. I understand that it was difficult to get an updated JRE on the system in the early 2000s, but nowadays almost every computer is online - certainly one where you are currently downloading an application on - and automatic updates are commonplace.

You used to be able to just double click on a .jar, and that was a Java application. And it would be trivial engineering-wise to include a little shim that downloads the JRE if necessary, or the OS could do it itself.

And why stop with Java? Why not have the OS detect the most common cases of "hey you are about to open <thing>" (where thing is a Python / Java / .NET app, or a document you can't open yet) "click here to install the needed bits and pieces from a trusted source, it will take 300 MB and 3 minutes". I think this is a case of the perfect being the enemy of the good. This is a relatively simple addition that would make computers much nicer IMO.


We already have that, it's called a package manager. The concept has a number of problems (multiple runtime versions for multiple programs, who pays for and controls the repository, etc etc).


I don't know, my package manager doesn't let me download an application without its runtime - say qBittorrent without Qt, or Deluge without Gtk and Python - double click on an icon, and then say "It looks like you are trying to <strike>write a letter</strike> run an app that needs a library" and then proceed to fetch it... from my package manager.

I mean, 80/20 principle. You would only have to write code for the dozen or so runtimes, and then add metadata for a few hundred or so popular apps. Totally in reach for a volunteer community (like many distros, or stuff like PortableApps) and especially for companies like Canonical or Microsoft. And then once it got going, people would add metadata to their apps themselves, and we would no longer have to ship the runtimes together with the apps.

Think of the metadata like a 21st-century shebang line. Not `#!/usr/bin/env python`, but `#"§% require python>3.9 and gtk-stack>4.0` or `require java-xxx`.


> my package manager doesn't let me download an application without runtime

That's not true (most package managers have options to force downloads without dependencies), but anyway why would you even want to do that, considering you eventually have to download the runtime anyway...?

> And then when it is going, people would add metadata to their apps themselves, and we would no longer have to ship the runtimes together with the apps.

We had that too, it's Java Web Start and the current MS equivalent (clickonce or whatever it's called). Never worked particularly well for Java (which is why it was removed), works a bit better for Windows because, surprise surprise, the OS vendor made deployment conditions more predictable, as it controls both OS and runtimes. They would never have an incentive to provide the same to third-party runtimes.

> Think of the metadata it like a 21st century shebang line.

The problems of that approach are legion.

The hard truth is that shared libraries (and hence shared runtimes) are an accident of history, meant to cope with a scarcity of storage and bandwidth that we've long overcome. The ideal program is a single static executable that Just Runs regardless of what is going on around it. Everything else is a source of unnecessary complexity for both user and developer.


> multiple runtime versions for multiple programs

For Java, the latest one. Current latest java still runs 20 year old jars.

> who pays for and controls the repository

The OS distributor.


"For Java, the latest one. Current latest java still runs 20 year old jars."

Not necessarily anymore. Some standard JDK features were removed in 11 and later releases, so things like _some_ of the JVM's JMX APIs were removed for one reason or another. I'm not saying the removals weren't justified, simply clarifying that not all Java applications written N years ago are still entirely supported.


> Current latest java still runs 20 year old jars.

Dude, I've been using Java since 1.2. Sure, all versions will try to "run" anything, but they'll do that with all sorts of slight differences and incompatibilities in behaviour - which make it effectively impractical for any serious program to run on anything but a small subset of tried and tested versions.

> The OS distributor

Careful what you wish for. Do you really want to hand Microsoft the power to decide which runtimes are allowed to run on Windows...?


Sounds like https://0install.net which has been around for a while.

Personally I prefer to avoid 'installing' anything: if something's written in Java, its launcher should reference some specific java binary; if something's written in Python, it should reference some specific python3 binary; etc.

For example, my job is mostly writing Scala and building it with Maven; yet I have neither installed system-wide. Instead, they're just dependencies of the build script (along with Bash, etc.).


Yeah, I have trouble believing a private runtime image per app isn't going to add up to more than one complete runtime image provided by the system package manager or shared Docker layer.


But unfortunately you'd end up with `n` complete runtime images for `n` different versions. This is not a Java problem, Java just followed suit; the preferred way of delivering executables nowadays seems to be bundling everything.


Yeah, that’s not my preference, and I haven’t seen a team do it.


I wonder how this would play out in a container context, aside from the benefits mentioned in regards to including only what you need and other things like that, but focusing purely on distribution sizes and space reuse. For example, if you would have your application built as a .jar (perhaps with an embedded app server), but use a separate full JDK install, then the latter could be cached on the nodes running the containers and re-used, which would be often if it doesn't change much.

With this setup, deliveries would look a bit like the following:

  #  SIZE     WHAT
  1  ~100 MB  base OS image (reused if not changed, cached)
  2  ~340 MB  JDK image (reused if not changed, cached)
  3  X MB     the app .jar file (changes with every release)
Whereas with the approach in the article, it would look a bit like the following:

  #  SIZE     WHAT
  1  ~100 MB  base OS image (reused if not changed, cached)
  2  Y MB     minimal JRE + the app (changes with every release)
For example, consider the example in the article:

   36M    zulu-hello-jre-linux-x64
  338M    zulu19.30.11-ca-jdk19.0.1-linux_x64
While it's hard to say what any given app would look like if it were just a .jar (or what other real-world examples would be like, not just a "Hello world" program), it would only take about 10 individual releases for the accumulated minimal-runtime images to exceed the size of the JDK. In this case, if you deploy once every day, then by the end of the first month you'd be using ~2x more space with the minimal-runtime approach. However, if you update your JDK version more than once a month, then that might as well go out of the window.

So it appears that it all depends on how much you care about space in the first place, as well as what technologies you use, since the above example only seems relevant for container technologies that have the whole layer mechanism, and only then if you don't squash them for up front space savings for individual container images. On the other hand, if you update your base image and/or the runtime often, then using jlink makes a lot of sense, since the approach with the separate JDK would send the whole thing often anyways.


Seems you could also do this:

    # SIZE     WHAT
    1 ~100 MB  Base OS
    2 Y MB     Minimal standalone runtime built with JLink
    3 X MB     App code (not in above image)
Then later you only change #2 if you need more modules or you're updating the runtime. What makes sense will be situation specific. But presumably, if you're after the security benefits of a minimal runtime and don't want to pay too much of a premium in storage, this is the optimal configuration. Of course, doing it this way increases your build complexity somewhat significantly.


> Then later you only change #2 if you need more modules or you’re updating the runtime.

This is an excellent point, albeit one more thing to think about (which can still be worth it).

> What makes sense will be situation specific.

This is true, honestly the variety of the configurations out there (e.g. dynamic scaling, where the base layers won't benefit from the ability to cache things, or PaaS solutions where nodes might change for different deployments) makes me doubt the usefulness of my post.

Guess that's why there is no one best solution for ALL circumstances.


Those caring about space usage presumably wouldn't include an entire OS in their containers?


You include the Base as a layer rather than bundle it so it’s amortized over several releases. Google has a tool to actually bundle your base and cut it down to the very essentials. This is only a thing with Docker and similar. With something like Flatpak it doesn’t even matter if you bundle because OSTree can deduplicate on file level.


Hmm, it depends - sometimes that's a decent tradeoff for development velocity vs distroless containers (from scratch), outside of particular project requirements. In other cases it can just be nice to have some common packages or even tools in your containers for debugging/troubleshooting, especially if you don't change the base image too often, so it can also benefit from the caching.

As for the (compressed) size of some common base images:

  SIZE   WHAT
  3 MB   alpine:3.17.1
  29 MB  ubuntu:jammy-20221130
  31 MB  debian:stable-20230109-slim
  53 MB  debian:stable-20230109
  32 MB  almalinux:9.1-minimal-20221201
  66 MB  almalinux:9.1-20221201
  44 MB  rockylinux:9.1.20221221-minimal
  61 MB  rockylinux:9.1.20221221
In most cases the OS/userland related layers (which will generally be more cut down than a "full" OS install) will be smaller than the runtimes for languages like Java, .NET, Python, Ruby, Node and so on, though things can get interesting with updates (e.g. slower builds if you cut out package cache in any layer where you need to install software, so it doesn't bloat the layer size, if you need more than one install command per container).


You don't need to run jlink yourself to avoid shipping the entire JDK. Here's a java19 runtime docker image at 62MB: https://hub.docker.com/_/eclipse-temurin/tags?page=1&name=19...

Not customizing the runtime per build or per app also potentially helps reuse of docker image layers across your container registries and clusters.


You're adding the whole of Docker as a dependency if you do that.

Not everyone wants that.


JLink is a pretty nice tool - easy to use and understand, relatively fast, big app size wins. A few misc thoughts:

Figuring out what modules you need to ship can be done with the jdeps tool, but, you have to watch out for some gotchas. One is that you have to run it on each JAR or set of jars. It's slow so it helps to do this incrementally. Another is that some critical JVM functions are implemented as 'plugins' that are loaded reflectively, and the static analysis won't find them. The one that trips up my users the most is jdk.crypto.ec which implements elliptic curve cryptography. The Java TLS stack will be present but fail to connect to many TLS servers nearly at random if this module is missing. Nonetheless it still needs less configuration than a typical ProGuard/R8/native-image style dead code elimination analysis.

JLink can replace JARs with an optimized file format called "jimage". It uses a very interesting perfect-hash algorithm to minimize the number of seeks/page faults required to find a class or file, and it also creates a unified string table. So the space savings come not only from deleting dead code but also from reducing the space required. Unfortunately it only works for JARs that are explicitly modularized (with a module-info.class file). Slowly more libraries are getting this, but most still don't. A good improvement would be to use the jimage format for everything. It also does a bunch of other optimizations, for startup time and the like. Also, you have to figure out which JARs are modular and put them on the module path at link time. There are some build system plugins that can help with that, but they mostly don't do cross-linking.

Final thing to realize is that jlink doesn't create a bundled app. It just shrinks and customizes the JVM for your app. To allow your app to start up, be installed etc requires other stuff.

Conveyor [1] has extensive support for jlink. You can name a JDK or it will learn from your Gradle build, then it'll download the JMOD files you need for each platform, run jdeps on all your JARs incrementally, figure out which modules are explicit, figure out which modules are broken and have to be put on the classpath, add back the TLS ECC support, allow you to override all these decisions via config, run jlink for each target OS and architecture, finally bundle up the results in self-updating packages for each OS and do it all in parallel. It's doing a lot of work but the result is pretty magical - you just take the JARs from your build system e.g. via the Gradle plugin, feed it to the tool and out pops a nicely optimized set of downloads with HTML download page for your cross-platform app.

[1] https://hydraulic.software/


Speaking of JVM 'plugins' that are loaded reflectively, don't forget the Java Access Bridge on Windows, as I reported to Hydraulic some months ago. (Fortunately, that one is fixed in Conveyor.)


Oh yes good point. Thanks for reporting that. The module name is "jdk.accessibility" for anyone jlinking an app that depends on "java.desktop".


BTW, the reason why the Java Access Bridge is a separate thing on Windows, as opposed to AWT just directly implementing the platform accessibility API, is an interesting bit of history. Basically, in the late 90s, Microsoft's accessibility API was way too simplistic. Maybe I can talk about that in some detail if you ever get your podcast going and we talk about accessibility there.


I know that you're hawking your own software, but it looks quite cool. However, I wouldn't want to be a pioneer. Who uses it and for what? Do you have testimonials?

Or maybe other HNers, have you used it?


Yeah, hawking things is a bit of a new world to me. Good timing for the question, though. We've been collecting testimonials just last week and have a brands block for the website ready to go, pending one more approval. So that's coming RSN.

Some firms who said we can point to their usage of it (these are all JVM based):

- HEBI Robotics. They make robot kits and ship apps that allow you to control them with Conveyor.

- GoToTags. They make NFC tags and just launched an app to work with them.

- IonSpin. "mementō is a simple and modern file management solution". They aren't fully launched yet I think.

- AdCentral. An app for managing various kinds of ad campaigns in physical stores (if I understood correctly).

There are others, that's just a subset of the ones we've asked for testimonials. There's also some open source projects that use it. So far there's definitely a theme of apps that do things with specialist hardware, which isn't a big surprise.

We also dogfood it for systemd managed servers, but that's kinda experimental. I'm still trying to figure out if there's anything useful to do there, like to go from a build.gradle to a set of pushed/updated servers in one step without Docker (or maybe with Docker). Like it'd make a linked standalone app, upload the files to the server(s), integrate it with systemd then start the apps. But maybe nobody would find that useful. Server ops is such a heavily invested-in space already.


We now have a brands and testimonials section!

https://hydraulic.software/


This is awesome to know. I'm using Dart these days (a Java-like language), which has a much smaller runtime (hello world is 6MB). Has anyone tried it with Swing for GUI to see how large the final size ends up?


Minimal Swing app is about 27mb compressed, 28mb with FlatLaf. That's for Windows. A bit smaller if you drop ECC and accessibility support.

That 6mb is just for Dart right? I thought a minimal Flutter app is about the same size as a minimal Swing app.


thanks, yes it is the Dart-only size; need to measure Flutter size vs Swing as a single exe


A couple of years ago I did some experiments to see how small you could make HotSpot and still have something usable. I got it down to a 7mb download but that required some tricks:

- .tar.zstd

- A custom build of HotSpot that compiled out some optional features.

- It was just java.base, no swing

You can get smaller if you do DCE / tree shaking more aggressively, or use a custom JVM designed for size. The smallest I've seen was Avian which can produce GUI apps that are 1mb standalone binaries. It's abandoned unfortunately.


I also use Dart to create native binaries! Just because it's so easy. I have a real CLI app that does quite a lot of stuff, and it's less than 8MB. And runs really fast! At least around as fast as a Java native app compiled with native-image (GraalVM), but I hate native-image because it takes MINUTES to compile and sometimes does not behave exactly like the JVM-version (Dart doesn't even need to compile, you can run from source, and when you want to compile to binary it still takes just a few seconds).


in some way, Dart is a better and easier Java, I hope Google will change Dart from a client-optimized language to a general-purpose language working at both client and server. It works for server now, but still it's client-optimized. Anyways, I really like Dart a lot.


Reflection or any other kind of dynamic execution (JNI?) will break this, no?


No. You are thinking of GraalVM native image which will compile java to native machine code.

jlink is more similar to tree shaking: it strips the JRE of anything your program doesn't need.


But I think the parent's point is given reflection, how can jlink statically know the complete set of classes that your program needs?


It's only tree shaking the JRE itself, not your whole program (unfortunately..). So as I understand it, it means no dynamically calling arbitrary classes in the JRE, but that's a much narrower limitation.

Final binary size is naturally vastly reduced b/c you won't have the whole JRE, but last I tested you still end up with very chunky executables (minimal JFX GUIs were coming out to 100-200 MB)


100-200mb is a bit too large for a minimal JFX app, though it may depend on how you define minimal:

    conveyor generate javafx com.example.jfx-test && cd jfx-test
    ./gradlew jar
    conveyor make windows-app
    du -h output
71mb on disk. 31mb package size. That's a bit bigger than strictly necessary; it includes FXML, and see the discussion of optional plugin modules elsewhere.

However you can easily get to 150-200mb on disk by using javafx.web because that includes a custom build of WebKit which is ~75mb all by itself.


Yeah, removing javafx-media and javafx-web saves a ton. It's good to point that out. It's been a couple of years, so I'll need to revisit this again later and try again

> It includes FXML and see the discussion of optional plugin modules elsewhere.

Where is this?

And do you know if it's still not possible to generate an .exe? I remember that while I was turned off by the final file sizes, what really made me drop jlink was that the final built target would always be some baroque installer (which makes sense for large GUIs that need to maintain state between runs). But you couldn't just generate a double-clickable .exe or .appimage file that'd be equivalent to running your uberjar. So I stuck to the uberjar.. (now users have problems installing weird Java 11 runtimes from not-Oracle)

(Further confusing things was that there is some intermediary build target in jlink also called `appimage` that's not actually an appimage, but it's quite similar..)


Elsewhere in this thread (look at my other comments).

jlink itself just outputs a directory tree. If you want the user to be able to double click an exe then there are several options but two are:

1. Use Gluon Native Image. This will AOT compile your Java code and statically link JavaFX to give you a genuine single EXE program, with no JVM, no JIT compilation, and fast startup. However your app needs to be native-image compatible and I don't know if every app turns into a single EXE.

2. Use (surprise) Conveyor, which will make an EXE that when opened downloads, installs and then immediately runs your app using a bundled JVM. If your app is already installed then it'll do an update check and then open it. And of course your app can be then invoked from the start menu.

The latter isn't genuinely a single EXE of course but the UX is similar and it handles the common case of needing to update either your app, or your JVM, or both.


Is there something that can do the tree-shaking of the Java program?


There's ProGuard, originally intended for obfuscation, but it also serves well for dumping unused code from your project and, most importantly, from third-party libraries. Virtually every Android app has it in its build pipeline - or these days actually a ProGuard reimplementation by Google (R8) that reads ProGuard's rather byzantine but effective configuration syntax (you usually need to keep some stuff in libraries that talk to themselves only via reflection).


I think if you do Graal native compilation then you'd effectively get that. The linker should chuck unused code. (though tbh I haven't tried it myself)

But to just say "no reflection" and tree shake your JVM code - not that I'm aware of unfortunately! I'd love to just treeshake entire unused dependencies. At the moment I do it manually - but it's a chore and it's hard to do comprehensively.

Now that, post-Java 8, you're supposed to jlink the runtime, I somehow doubt this will ever happen. The people that care about executable size would probably be doing Graal Native.


I've used proguard to remove unused classes from dependencies.


Proguard is usually the tool to do that (commonly used on Android with its newest incarnation named R8).

You do need to manually annotate classes used by reflection so it doesn't remove (or obfuscate) them.


If you have a module-info file, you have all the used modules listed there, and then jlink only has to look at that. If you use jdeps, then it can indeed only find the statically known modules.


GraalVM can use reflection just fine, with the caveat that you have to specify which classes can potentially be targeted (so dynamically loading a random class file and reflecting on it is not possible).


Could be "fixed" by disabling reflection.


JLink is very cool, it's a shame it came too late to make a significant difference in the market.


curious if it also improves startup time? For me that would be more of a win than the size.


It does a little bit, depending on how much of your app is modularized. The win isn't large.

A bigger win is AppCDS but most apps don't use that due to workflow issues. It's usually about a 30% startup time improvement, in my experience.

The biggest win is native-image but that's also the biggest compatibility hit. Jlink and AppCDS are compatible with all (bytecode based) JVM apps.


Not to be confused with the embedded tool from Segger.



