All that work, which broke plenty of Java applications, violating the core Java mantra of backward compatibility - all for nothing.
2. Modules are becoming the core of Java's security strategy. Without them, your code only does what you think it does until some library decides it should do something else.
3. The reduction of security and maintenance concerns has already yielded tremendous dividends, as we the JDK developers have been freed to work on other things. Your complaint is like someone finally getting a fast internet connection and saying that all the noise and dirt of tearing up the street to put in the cable ducts was in vain; they should have just given us fast internet, because that's all we wanted.
So, we've got better security and a drastic reduction in future breakages, all for the price of negligible backward-incompatible changes to the spec.
Helidon is probably the only library I know of that is modularized.
I think once the community finally moves past JDK 8 (I'm hoping Spring making 17 the baseline next year will help) we'll see more adoption as more libraries modularize.
One is that native code is really verbose compared to bytecode, and jlink optimizes bytecode to shrink it when it's modularized (collapsing string tables and other things). For tiny hello-world apps native-image can produce smaller downloads, but it doesn't take long before a jlinked JVM is smaller, because the JIT compiler is in effect "decompressing" native code on the fly. A minimal jlinked JVM can be small-ish, like 20 MB or less, and if you compile a custom JVM with configure flags for size it can be smaller still. The smallest jlinked hello world I managed to get when I tried this was a 7 MB download. Not small for a program that only prints something, but, obviously, most of that is a one-time cost.
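For reference, a minimal jlink invocation looks roughly like this (the output path is just an example):

    jlink --add-modules java.base \
          --compress=2 --strip-debug \
          --no-header-files --no-man-pages \
          --output build/minimal-jre

Everything under build/minimal-jre is then a self-contained runtime with its own bin/java.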
Another is that jlink+modules is way easier to use and configure than native-image. Tree shaking breaks stuff all over the place. You end up needing complex config files, agent-based config auto generation and/or lots of runtime testing to ensure you got everything. The point of "reliable configuration" is to avoid all this by building a less aggressive form of tree shaking into the language itself.
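To give a flavor of that config: native-image wants reflection metadata spelled out in JSON, e.g. a reflect-config.json along these lines (the class name is made up):

    [
      {
        "name": "com.example.internal.Handler",
        "allDeclaredConstructors": true,
        "allDeclaredMethods": true
      }
    ]

Multiply that by every reflectively-accessed class in every dependency and you see why the agent-based generation exists.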
Quite a few libraries support modules at this point, but you may not actually have noticed. It depends on your build tool and setup.
One thing that's slowed it down is that a lot of widely used libraries that sit at the bottom of the dependency stack aren't actually well maintained. And modules can only depend on other modules, so SLF4J, Guava, etc. have been slowing things down a lot.
My feeling is that the module system is kind of underrated and waiting for better tooling to be fully exploited. Quite a few parts of Java are like this. Like, it took quite some years for the ecosystem to start using annotation processors to full advantage, and some popular libraries still have it as a pending feature (like PicoCLI).
It will only make sense for library and framework maintainers to invest in modules once there is a large user base on JDK 11.
Either way, modules are still useful for application development. I used them years ago to close off internal APIs in one of my applications, which helped me to not cross levels of abstraction accidentally.
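Concretely, that just means a module-info.java that exports the API package and nothing else (names invented):

    module com.example.billing {
        exports com.example.billing.api;
        // com.example.billing.internal has public classes too,
        // but no other module can touch them
    }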
> Everyone still downloads and runs an entire JVM
Nope, we (anecdote, I know) use jlink in Docker containers, mostly to avoid useless dependencies. The problem with GraalVM native image is that it comes with a subpar GC.
> which broke plenty of Java applications
Yes, I agree, modules were not worth the trouble.
How did modules break Java applications? Having recently upgraded a number of applications from 8 to 11, none of the things we had to fix were due to modules. They were largely due to libraries relying on internal implementation details of Java 8 which needed dealing with, the exact sort of thing that modules are looking to prevent going forwards (see "strong encapsulation" in TFA).
We've ignored modules so far with no issues.
Or better protection of Java internals (JDK 11 warns about using internal classes; JDK 17 doesn't warn, it just denies access - you need --add-opens or to drop the broken lib).
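For the record, the escape hatch looks like this (java.base/java.lang is just the classic case of a library poking at JDK internals):

    java --add-opens java.base/java.lang=ALL-UNNAMED -jar app.jar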
WRT why I think strong encapsulation is a bad goal: I've worked extensively in .NET, where assembly-private members are the norm for core libraries. It has issues. Classes and methods that are theoretically extensible aren't so in practice because core logic is inaccessible.
In the end, encapsulation, no matter how strong, doesn't prevent people from relying on internal implementation details. If the concern is people relying on "internal" APIs, either suck it up and make those APIs public (with all that entails) or come up with a deprecation strategy (or just freeze them and refactor around them).
They do? I work in the Scandinavian public sector and we abandoned Java two or three years ago because of the continued security issues it posed. Software that requires an installed JVM to operate doesn't get past our IT department.
I'm not sure modules would help, though, as I don't know much about them. But not everyone still installs a JVM.
Java on the server, on the other hand, is as secure as any other runtime, i.e. only as good as the application written on it.
TeaVM Website: https://teavm.org/
Tutorial in Java Magazine: https://blogs.oracle.com/javamagazine/java-in-the-browser-wi...
If that’s not the JVM then my mistake.
This article presents modules as a solution to some classpath problems, but doesn't quite give me the detail on how.
Can anyone elaborate further on how modules will augment/replace/obviate/worsen my need to care about the classpath?
Modules try to solve this by distinguishing between exported and internal dependencies. The idea is that if your library depends on some other library and uses that as part of its API, that should be an exported dependency (and a program that uses your library has to use the same version of that). But if your library depends on another library just for internal implementation, that gets hidden away from the application that uses your library.
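In module-info terms that distinction is requires transitive vs plain requires, something like this (module names invented):

    module com.example.http {
        // types from this module appear in our public API,
        // so consumers get it on their module path too
        requires transitive com.example.urls;
        // implementation detail; consumers never see it
        requires com.example.pool;
    }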
Personally I'm extremely doubtful of this working, because for it to work, library authors would have to specify their metadata correctly. So I suspect it's only going to lead to bigger problems than before, as the classpath gets more complicated and there are more ways to go wrong. But we'll see.
It's not meant to solve the problem of having different library versions on the same classpath, and it doesn't solve that at all as you still must ensure only one version of each class exists even if no modules expose those packages (see the split-package problem).
The module system is for one thing above all: one more level of visibility, allowing Java developers to specify which packages should be visible to other modules.
If you expose a package whose public types expose another library's public types, then you must re-export those packages to make it clear that you have compile-time dependencies on the other library, and that's checked, so it's not at all hard to specify the appropriate re-exports. The only thing you need to worry about when writing a library is to expose only those packages you want, allowing you to have public classes that are not visible to the outside world unless they are in a public package.
Another clarification: Java doesn't really have a single, flat classpath: you can create your own "nested" classpath if you want by instantiating your own class loaders. OSGi works by doing that for each "bundle" (library), which then does allow you to have multiple versions of the same library co-existing peacefully as long as no two packages which use different versions of that library attempt to re-export the library packages to each other (which would fail when you tried to load those bundles into the runtime). You can absolutely get the Rust problem you mentioned in Java if you use your own class loaders, and I've seen that happen several times, even when both libraries are of the same version (a class loaded by a different classloader is NOT the same as the class of the same exact name loaded by another, which is why classloaders are hierarchical and not supposed to load classes already loaded by their parents).
It kind of is - there's a single API call that lets you instantiate a module graph with every module mapped to its own classloader. At that point modules work as you'd expect, and package overlaps only create errors if they're both being imported into a single module (where it'd be ambiguous at the language level).
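For anyone curious, the sketch below is roughly that call, assuming a mods/ directory containing a module named com.example.plugin:

    import java.lang.module.Configuration;
    import java.lang.module.ModuleFinder;
    import java.nio.file.Path;
    import java.util.Set;

    ModuleFinder finder = ModuleFinder.of(Path.of("mods"));
    Configuration cf = ModuleLayer.boot().configuration()
            .resolve(finder, ModuleFinder.of(), Set.of("com.example.plugin"));
    // every module in the new layer gets its own classloader
    ModuleLayer layer = ModuleLayer.boot()
            .defineModulesWithManyLoaders(cf, ClassLoader.getSystemClassLoader());
    ClassLoader pluginLoader = layer.findLoader("com.example.plugin");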
As the article explains, this isn't on by default only because (supposedly) some popular frameworks would break due to assumptions about how classloaders are used in practice.
And here we hit a sort of weakness in how Java is developed. I've seen this justification given several times over the years for why module isolation isn't on by default. At no point are the frameworks in question named. Which frameworks break when this is on? And why can't I just flip a command line flag to enable or disable it? This compatibility argument is left hanging but no details are given, which is unfortunate - if the frameworks in question were named, maybe they'd fix their assumptions or other people would and classpath conflicts could be fixed.
This is what I would expect -- and hence would likely solve most of my issues managing the classpath.
The Java classpath is "flat" only if you use a single classloader. If you use multiple classloaders (for instance, with a plugin system where each plugin is loaded on its own classloader, to allow it to be dynamically loaded and unloaded), you can have multiple versions of any given class. At work, we have more than once had that "Foo is incompatible with Foo" problem (made more maddening because the "Foo is incompatible with Foo" error messages don't mention which classloader each "Foo" came from).
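A contrived sketch of how you get there (jar path invented): two sibling classloaders with no common parent each define their own Foo, and the two classes are not interchangeable even though the bytes are identical:

    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.Path;

    URL[] jar = { Path.of("plugins/foo.jar").toUri().toURL() };
    // parent = null, so each loader defines Foo itself instead of delegating
    ClassLoader a = new URLClassLoader(jar, null);
    ClassLoader b = new URLClassLoader(jar, null);
    Class<?> fooA = a.loadClass("com.example.Foo");
    Class<?> fooB = b.loadClass("com.example.Foo");
    System.out.println(fooA == fooB); // false: same name, different runtime classes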
I always thought that Rust is perfectly happy to link conflicting versions of a crate into the same binary.
Of course, for many programs, those values are not returned outside the original scope and all works well.
Most languages deal with paths in mostly similar ways (except Go until recently). What precisely are your issues with the Java classpath that someone who has been writing Clojure code for years hasn't hit yet?
Just because I was curious, I tried searching for all directories containing ":" and besides /sys and /run, I got three hits: one because of "https://", one because of "localhost:8080" (both from mirrors made with wget), and one where it actually seems to be on purpose.
It seems, to me at least, that having colons in path names is unpopular enough for this not to be a real problem.
Edit: the desktop I tried the command (`# find / -name ':' -type d`) on has been running daily since the beginning of 2018 (2196 packages installed atm), with daily pacman updates, so it should have enough data for an average workstation.
I actually used OSGi back in the day (a more powerful module system) that relied on this for managing different components that each had their own dependencies and dependency versions. Neat, but no longer that common these days. OSGi actually emerged out of the embedded devices world. Later Eclipse used it to componentize the Eclipse platform, and I know some companies that use OSGi for various things. These days you'd use microservices in some kind of Kubernetes container; or not (I'd consider that overkill for most of the stuff I work on).
The Java module system is much simpler and not really intended for this (at least not primarily). Instead, the main use case is ahead-of-time compilation. Modules allow parking things that can't be compiled ahead of time (e.g. because of reflection) in separate modules, which you can then exclude. Breaking up the JDK platform into modules was a big project. As far as I know, they are not that commonly used for anything else; at least, I've not encountered many projects that rely on this. It's a pretty low-level mechanism.
These days, for servers, you package up applications as Docker containers, which contain your application and the full JDK. Spring Boot, Ktor, and other modern frameworks are commonly packaged up as single-jar applications. Nothing magical about that; they are literally zip files with a manifest listing their content (classes, resources, more jar files). Of course, modularizing a single-jar application is kind of pointless. Likewise, there is no real need to tweak the Java runtime you run it with either. Just not a thing. The notion of stuffing zip files inside zip files is also a bit weird, but it kind of works as a packaging mechanism.
I have harbored similar levels of hatred as the OP toward Java packages for years; then with age I started to appreciate the abstraction (independence from the file storage namespace, etc.), but I still find clean Python so much nicer 80% of the time.
I'm really curious about your point of view.
And modules have solved the issue of "you're actually using something internal from our package that might break in the future".
But in Python, upgrading something minor quickly wreaks havoc. Everything depends on different versions of something, which are quickly incompatible. So you have to pin and do lots of manual testing and fight with pip for a while. It's very easy to import and use stuff not supposed to be part of a package's public API, etc. And I find it even harder to know what's actually in use between the global packages, the virtual environment, etc., than just looking inside my jar.
Using the classpath, I can specify exactly the search order of classes, including overrides and whatnot, when the VM searches for them.
Need to debug a Jar? Unzip it (it's just a Zip file).
Jars can contain non-code resources, essentially allowing you to run an entire app using a single file, without the need to first decompress to the filesystem. Java gives you great utilities to find those resources.
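For example (resource path invented), loading a bundled file straight out of the jar, where Main is any class in your app:

    import java.io.InputStream;
    import java.util.Properties;

    Properties defaults = new Properties();
    try (InputStream in = Main.class.getResourceAsStream("/config/defaults.properties")) {
        defaults.load(in); // read directly from the jar, no unpacking needed
    }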
Classloaders can be customized and you have complete control over how the JVM sees code.
It's boring, to the point, and simple to understand. I don't get how people have so many issues with them...
Barring the system classloader - where one ought not to place anything unless in dire circumstances - one gets very nice class separation.
In fact, every JSP is compiled to a Java servlet and then loaded in its own classloader. To see the speed, edit a JSP page and watch how fast the change is reflected: a fresh classloader picks up the new class file and the earlier classloader is discarded. Due to classloader inheritance, the new servlet can still access Java classes from the parent classloader (the webapp).
For production deployments, and ideally for continuous integration setups where one should compile once and test and use the artifacts everywhere, one can use the pre-compilation feature, where the JSP pages are compiled as servlets during the build itself (and not at runtime in production environments).
The solution in the Java world kind of allows shipping both conflicting libraries and still having a go at it.
Other ecosystems might make you rethink the whole dependency tree instead.
I guess it was similar, I never used WebLogic.
More concretely, if one application packages log4j 1.2 and another packages log4j 2, the application server guarantees that the two log4j libraries are isolated and do not interfere with each other.
To me the issue is that it creates parallel graphs on top of the FS, which people will have barely any visibility into unless they spend some time reading the spec (Oracle's basic doc website was utterly useless for me).
What I appreciate is that it's fully standardized in the spec, so you don't have to deal with competing packaging systems / hygiene like in other languages.
It's literally modeled after PATH, which behaves exactly the same and has been used just fine for decades. I don't really see the argument here.
Not really any different than PATH.
The classpath lists folders with jars, each of which embeds its own subtree of classes; that's one difference, in my (poor) understanding.
Then, as far as I can recall, each class file can define its package namespace, so looking at the classpath I have no clue what is there or what isn't, so classpath errors felt more daunting to me.
Now that the JVM does a tree search over all of this in an obvious linear manner, I can understand it, but as a user I still find the classpath and armies of jars very hard to map in my head (add to that the verbosity and the large number of classes in many Java APIs).
If I'm still off, then there's nothing saving me :) except maybe if you have an article about classloading and all that, which may remap my brain.
No, the classpath lists folders of classes, or actual paths to jars. Not paths to folders of jars.
> Then, as far as I can recall, each class file can define its package namespace, so looking at the classpath I have no clue what is there or what isn't, so classpath errors felt more daunting to me
Classes are resolved from the root of the jar. Just inspect the jar to see what's in it. It's not common for third-party libraries to have conflicting package names.
I don't know why you're talking about tree searches.
And I've had so many times when I've wished I could do this for Python dependency trees.
For the former: the classpath is a colon-separated list of jars (each a zip containing classes). At runtime, the VM tries to find a class when needed by scanning all the jars linearly, or panics with a NoClassDefFoundError.
The module path is a colon-separated list of directories that contain jars. At runtime, the VM resolves the dependencies (stored in the module descriptor of each jar) before starting the application.
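Concretely (paths and names invented):

    java -cp "lib/guava.jar:build/classes" com.example.Main
    java --module-path mods -m com.example.app/com.example.app.Main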
For the latter, when you define a module you have to explicitly export packages and open them to reflection.
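A sketch (module and package names invented; com.fasterxml.jackson.databind is the real Jackson module name, if memory serves):

    module com.example.app {
        exports com.example.app.api;   // normal compile-time and runtime access
        opens com.example.app.model    // deep reflection only, e.g. for serializers
            to com.fasterxml.jackson.databind;
    }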
Notice that on Android, these kinds of tricks earn you the prize of the system killing the process and being done with it.
From the JDK's point of view it was really important, because some libs were depending on the internals of OpenJDK, which are prone to change. With modules enforced, the internals can change much more freely while still preserving backwards compatibility.
I agree that this one is annoying. It happens rarely, but every time it happens it is frustrating for whoever is dealing with it.
I think there are ways to automatically check for this, but yeah.
This seems interesting to me because most of the time, dependency management happens to include two versions of the same library by accident and a simple exclusion rule fixes the problem, but obviously only if you know about it in the first place.
This is also a reason why you can't use regular Java reflection to do something like finding all references to a method, or even finding all classes - classes are loaded on the fly, only as needed (there are libraries that modify this behavior, though).
We run this plugin where I work. We have no plans to modularize our build, since the duplicate-class protection would just be worse than this plugin.
This can potentially reduce the size of java binaries by many megs.
This post explains in detail all the options: https://developer.ibm.com/tutorials/java-modularity-5/
I found it pretty confusing to get them to 'work', especially with libraries that are not module-enabled.
Also, there is no support for Java modules in Scala or Kotlin. Maybe we need more tooling and IDE support.
Use case: JavaFX
Some execs at my company caught the bug and had to have OSGi because it promised a "service-oriented architecture"...
except that was for services inside a giant monolithic VM.
A more modern approach is to separate anything that constitutes a "service" and requires encapsulation into its own process/server/pod.
OSGi and the like really only benefit systems that are run in mainframe-like environments (see the IBM connection?). Otherwise they're needless complexity.