JDK 19 Release Notes (java.net)
264 points by ludovicianul on Sept 20, 2022 | 133 comments



The first release to contain bits for Project Loom (the "virtual threads" part of the release notes). That is extremely exciting, since it is designed in a way that mostly allows drop-in replacement of existing threading and thread pool code, without requiring an utterly different programming model like async, coroutines or reactive programming do.
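To make "drop-in" concrete, here is a minimal, hypothetical sketch (class and method names are made up): the submit/get code stays exactly as it is in today's thread-pool programs, and adopting virtual threads is just a change of executor factory.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class DropIn {
    // Runs a blocking task and waits for its result. The call-site code is
    // identical whether the executor is backed by platform threads (below) or,
    // on JDK 19 (preview) / 21+, by virtual threads via
    // Executors.newVirtualThreadPerTaskExecutor() -- that factory swap is the
    // whole migration.
    static int runBlockingTask(ExecutorService executor) {
        Future<Integer> result = executor.submit(() -> {
            Thread.sleep(10); // stands in for blocking I/O
            return 42;
        });
        try {
            return result.get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            System.out.println(runBlockingTask(pool)); // prints 42
        } finally {
            pool.shutdown();
        }
    }
}
```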


Virtual threads are nice but they don't cover the same use cases as async and coroutines. Loom will make embarrassingly parallel workloads seamless but it won't give you structured concurrency. That does require a different programming model as seen in this very release.

Should be interesting to see what shakes out of all this. When you need structured concurrency, imo async and coroutines are nicer than fork/join and what Java has had so far.


As I understand it, structured concurrency is part of the same deliverable, tracked in this JEP: https://openjdk.org/jeps/428

Here's the example from the JEP:

    Response handle() throws ExecutionException, InterruptedException {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            Future<String>  user  = scope.fork(() -> findUser());
            Future<Integer> order = scope.fork(() -> fetchOrder());
    
            scope.join();           // Join both forks
            scope.throwIfFailed();  // ... and propagate errors
    
            // Here, both forks have succeeded, so compose their results
            return new Response(user.resultNow(), order.resultNow());
        }
    }


I'm aware of the JEP. I said "as seen in this very release" after all.

I don't see the advantage to this over async/await style programming. I think you'll see very similar error cases with unobserved exceptions and such.

The advantage is that the method signature is indistinguishable from a blocking method. The subtle difference is that scopes close within a method, whereas async tasks can continue. I suppose you could see an even worse coloring where scopes and futures are used in half the methods instead of async.

I'm trying to wrap my head around how this would play out if you were attempting to write a GUI or something where the pattern is a single main thread. I suppose it would work just fine and look very similar to async style programming but with a lot more code one tab to the right.


> I don't see the advantage to this over async/await style programming

Being harmonious with the platform (the JVM), which is based on threads; better stack traces; accurate profiling info; and a simpler programming model (no async/await/suspend keywords sprinkled everywhere) are some of the advantages.


I've not normally seen people call async/await a form of structured concurrency.

As I understand it, structured concurrency mostly implies automatic cancellation and supervision that allows retries/restarts.

Does async/await give you that?


Automatic? I suppose not. But excuse me, I'm just comparing the APIs for a specific use case. I didn't mean to call them the same thing.


The additional features of structured concurrency affect the API though, because you need to introduce scopes of supervision/cancellation/retries.

If any async operation fails, at what level up the waiting chain do you want to cancel all other concurrent tasks under it and retry the entire async flow?

So you need the APIs/syntax to have ways to define these scopes of cancellation/retries.

As I understand it, this is why it is called "structured": it refers to the similar "structured" programming that introduced lexically scoped procedures, conditionals and loops.

Edit: Also, just want to point out that you shouldn't get mixed up between async/await and future/promise. Async/await often uses future/promise, but doesn't necessarily have to. Future/promise is an async API that existed before async/await, and it supports threaded concurrency just fine. You can use future/promise in Java with both threads and virtual threads. Using it with virtual threads won't color functions, because at any time you can block on a future/promise and extract the value, so it's possible to go from a function that returns a future/promise to one that doesn't.
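A small hypothetical Java sketch of that last point: a future-returning function can be wrapped by a plain blocking one simply by joining, so the future type never leaks to callers (and on a virtual thread that join parks cheaply).

```java
import java.util.concurrent.CompletableFuture;

public class Uncolor {
    // An async-styled API that returns a future instead of a value.
    static CompletableFuture<String> findUserAsync(int id) {
        return CompletableFuture.supplyAsync(() -> "user-" + id);
    }

    // A plain blocking wrapper: join() extracts the value, so this method's
    // signature is indistinguishable from ordinary synchronous code.
    static String findUser(int id) {
        return findUserAsync(id).join();
    }

    public static void main(String[] args) {
        System.out.println(findUser(7)); // prints user-7
    }
}
```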


As far as defining scope goes, C# gets away with unchecked exceptions; exceptions just bubble up naturally. Task scope is more implicit in the syntax, but the scheduling context is used in a similar way to scopes. The difference is that the context is usually inferred instead of explicitly defined at every use.


It's not about the exception, it's about the exception handling.

If 4 tasks are started asynchronously and one of them throws, what do you want to happen to the other 3?

In structured concurrency you could say: if any of them fails, then cancel all the others and retry them all.

Or for example, if one task is waiting on another and that first task throws an exception, you might want to say, ok, retry the first and re-schedule the other one to wait on that retried task.

These are all things you can handle yourself as well, but structured concurrency is just trying to make that less effort and automatic. So if you define a supervision scope, everything inside it, no matter how complex the async graph is, will cancel/retry appropriately at that scope.

The scope will also guarantee that when we leave the scope, all concurrent tasks are done and nothing is left running, either because it's all been cancelled or because the scope waits for all tasks to complete.
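Spelled out by hand with CompletableFuture (a hypothetical sketch; the helper name is made up), the "if any fails, cancel the others" policy looks roughly like this — wiring a structured scope such as ShutdownOnFailure bundles into one construct:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class CancelSiblings {
    // Cancels every other task as soon as any one of them fails.
    static void cancelOthersOnFailure(List<CompletableFuture<?>> tasks) {
        for (CompletableFuture<?> task : tasks) {
            task.whenComplete((value, error) -> {
                if (error != null) {
                    for (CompletableFuture<?> other : tasks) {
                        other.cancel(true); // best-effort cancellation
                    }
                }
            });
        }
    }

    public static void main(String[] args) {
        CompletableFuture<String> ok = new CompletableFuture<>();
        CompletableFuture<String> failing = new CompletableFuture<>();
        cancelOthersOnFailure(List.of(ok, failing));
        failing.completeExceptionally(new RuntimeException("boom"));
        System.out.println(ok.isCancelled()); // prints true
    }
}
```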

Here are some good articles that explain the motivation for it:

https://elizarov.medium.com/structured-concurrency-722d765aa...

https://github.com/apple/swift-evolution/blob/main/proposals...


async/await is just syntactic sugar for futures/promises. The future API itself generally lends itself towards structured concurrency, but your particular implementation may not be.

For example, Rust's `Future` is structured (although executors may introduce escape hatches), while JavaScript's `Promise` is unstructured.


Can you explain a little how Rust adds cancellation and retry supervision using future? I'm not familiar with Rust.


Does this not give you working stack traces?


There’s another JEP for structured concurrency.

https://bugs.openjdk.org/browse/JDK-8277129


I wish more languages would have gone this route (looking at you, Rust)


Rust started down this route, but decided that easy C interop and no required runtime were more important, and trying to have everything made green threads slower than native threads.


They wouldn’t have had to include a runtime. They just needed to set a standard interface for kernel threads and user threads, and then the community could’ve built the runtimes in libraries, like they’re currently doing. But they didn’t create those interfaces early enough, so the community built them, and now the async ecosystem is so fragmented you can’t build libraries generically for async runtimes or for both async and sync.


A different runtime wouldn't be able to make the compiler use a different stack allocation strategy (like Go's segmented stacks), for that the compiler needs to know what you're doing. Even if they also did that via having two ABIs for every platform (green vs native) it would essentially bake in a particular runtime as you wouldn't be able to experiment with these lower level primitives, only how you interface with them. If they ever define a stable ABI having two ABIs depending on thread type would also fragment the ecosystem at least as bad as the async runtimes do.

As for async runtime fragmentation, you can write code that is generic across runtimes; it's just less convenient to use. If your code only works with plain data and futures, you're fine on any runtime; it's when you want to spawn new tasks or block on a task that you need a runtime dependency. Async vs sync is the classic "what color is your function" problem, which green threads kind of solve, but not completely, so you still have to understand what is happening so you don't stall every task on a native thread with CPU-heavy or blocking work.


> is so fragmented you can’t build libraries generically for async runtimes

How does futures.rs work generically with different async runtimes then?


It doesn't work for all async runtimes like Monoio


> now the async ecosystem is so fragmented you can’t build libraries generically for async runtimes or for both async and sync

This is true, but I think there are a few notable caveats:

1. Although you can't generically build libraries per runtime, it is possible to write a library supporting an explicit set of runtimes with some boilerplate; the simplest way is just to abstract all of the async primitives you want into an API to use internally, use feature flags to implement them based on which runtime you want to support, and then only use that API in the rest of your library (which both avoids the need for conditional compilation outside of that one wrapper module and reduces the surface of places you'd need to update to add or remove support for a runtime, or to update a runtime to a version that isn't backwards compatible).

2. Although there are quite a lot of runtimes that exist in the ecosystem right now (and more could enter the scene in the future), in practice the usage of a lot of these runtimes is quite small. For a few examples, at the time of my looking up, tokio has 66.7 million downloads overall and 10.3 million "recent" ones; async-std has 9 million overall and 1.3 million recent; smol has 1.6 million overall and 158k recent. There's diminishing returns for each new runtime you add support for, so while there's some up front burden in terms of supporting more than one runtime (like I describe above), the burden over time is not going to be super high.

3. Rust has a history of starting out by providing lower-level primitives for things, letting the community iterate over various ideas in the space for a few years, and then eventually settling on a single or small set of pseudo-official crates for them. I think the error handling APIs are the best example of this; Rust 1.0 launched with the standard library Error trait and the `try!` macro, and the community iterated over a bunch of potential solutions (error-chain, failure, and probably a bunch of others I don't even remember at the moment), and for a few years it was a bit messy. Meanwhile, the standard library added the `?` operator and added a replacement for one of the `Error` trait methods (deprecating the old ones), and eventually the churn settled down and most people just use `thiserror` and `anyhow` now. There are still some lingering things to standardize, like backtraces for errors, but overall the error handling space is way less fragmented now than basically at any point in the past. I'd argue that the async runtime churn is already starting to trend towards equilibrium; if this turns out to be the case, the costs of supporting multiple runtimes will continue to go down, and I'm guessing (hoping?) that within a few years people will have either mostly standardized on a single runtime or we'll have a standard solution for wrapping the small set of runtimes that retain any significant usage in the community.


1. Library creators do not want to deal with the headache of managing multiple runtime libraries, and also of bridging sync/async. IMO this is going to make Rust's library offerings very fractured, to the point where we will either be stuck on a legacy runtime for a long time or the hype for Rust will fizzle out because it's too difficult to write good code.

2. Tokio has the best support right now but Tokio is very difficult to write libraries for and may not be the best designed async library. Monoio looks like a very good design and may be the best option for web servers because you don't have to deal with Send and Sync.

3. This method usually works, but for async they should have been more opinionated. Stackful coroutines like [May](https://github.com/Xudong-Huang/may) would have worked best for 99% of the use cases, and maybe there could be a separate zero-cost async setup for embedded. But trying to have both under the same async paradigm is causing huge issues.


It’s not really possible in the niche Rust plays in. If you want to remain close to low-level details, you have to deal with low-level details.


In the JEP they reference Go and Erlang as other examples of successful runtimes adopting this approach.

Which is true around the M:N stuff, but the missing piece is the communication. Go has channels and Erlang uses the actor model; do you know if there are any plans to have something like this in Java?


You already have queues in Java. They should work just fine with virtual threads, as they do on regular threads.
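For example, a plain java.util.concurrent BlockingQueue already gives you channel-style hand-off between threads (a minimal sketch); with Loom, only the thread-creation line would change, e.g. to Thread.ofVirtual().start(...) on JDK 21+.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueDemo {
    // One producer thread, one consumer: the queue blocks on put() when full
    // and on take() when empty, much like a bounded channel.
    static String roundTrip() {
        BlockingQueue<String> channel = new ArrayBlockingQueue<>(16);
        Thread producer = new Thread(() -> {
            try {
                channel.put("hello");
                channel.put("world");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        try {
            String result = channel.take() + " " + channel.take();
            producer.join();
            return result;
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip()); // prints hello world
    }
}
```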


Java already has better ways to do that on java.util.concurrent, and structured concurrency is coming as well.


I wasn't too impressed when I read the docs on structured concurrency but they might flesh them out.

Frequently I've written applications that use Executors, Reactive Streams, etc. and found that frameworks like that rarely, if ever, provide a good answer for how to tear a processing system down at the end. It's completely practical to use a stream processing framework to process batch jobs, but to get correct answers you have to shut the thing down at the end. I've frequently built teardown systems and it's always left me with the feeling that system programmers should occasionally climb down from their high horse and write an application.


> but to get correct answers you have to shut the thing down at the end

That's one of the things structured concurrency improves on. By defining a scope in a try-with-resources block, you make sure it's torn down at the end of whatever operation you perform.
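The manual equivalent of that guarantee is the shutdown/awaitTermination dance below (a sketch; the helper name is made up). Since JDK 19, ExecutorService is itself AutoCloseable, so a try-with-resources block performs this same drain-and-wait implicitly on exit.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class Teardown {
    // Stop accepting new work, then wait for in-flight tasks to drain --
    // the step that is easy to forget and that a scoped block makes implicit.
    static boolean drainAndClose(ExecutorService executor) {
        executor.shutdown(); // no new tasks accepted
        try {
            return executor.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> System.out.println("processing a batch item"));
        System.out.println(drainAndClose(pool)); // prints true once the task has finished
    }
}
```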


I have to look at it in more depth.

In the systems I've built there are frequently a few queues and also some resources such as a MapDB or Jena instance, "shutting down" is a matter of flushing out the queues and closing the resources in dependency order.

One of them used the Jena Rules Engine as a control plane, effectively a group of production rules that build all the queues, initialize the processing elements, resources, etc. I got told by the Jena people that this use case was not supported but it worked pretty well.


Go has those inbuilt because before generics the language simply wasn’t expressive enough for such constructs.


Now that blocking APIs are no longer a problem, I wonder if green-field Java web backend projects should use an HTTP server based on Netty with a blocking layer on top, a servlet implementation like Jetty that was designed for blocking APIs but might have other legacy baggage, or something else.


Helidon Nima is a new contender, made with virtual threads in mind.


Virtual threads, real threads, and structured concurrency are quite literally what co-routines are about.

From a Kotlin co-routine point of view Loom is about the JVM gaining some low level optimization features that will slightly enhance co-routine performance on newer JDKs (not that this was actually much of a problem) but will otherwise not really add anything new in terms of features as co-routines already had pretty much all of the features.

Of course it's nice for Java to gain a few new APIs for this. But it's not like there was a shortage of asynchronous programming APIs. If you intend to use this as a drop-in replacement for threads, read up on blocking vs. non-blocking IO or you'll find out the hard way why Java uses a lot of real threads when dealing with blocking IO. If, on the other hand, you were already using non-blocking IO, you wouldn't have been using threads for this anyway. Hint, virtual threads in the same thread will all block if you use blocking IO.

With Kotlin co-routines, the compiler will issue warnings if you call into blocking Java stuff from a co-routine and nudge you to deal with that by off-loading to a threaded co-routine context.


> Hint, virtual threads in the same thread will all block if you use blocking IO.

This is apparently not the case in the new JDK implementation of virtual threads. According to JEP 425 (linked in the OP):

> Application code in the thread-per-request style can run in a virtual thread for the entire duration of a request, but the virtual thread consumes an OS thread only while it performs calculations on the CPU. The result is the same scalability as the asynchronous style, except it is achieved transparently: When code running in a virtual thread calls a blocking I/O operation in the java.* API, the runtime performs a non-blocking OS call and automatically suspends the virtual thread until it can be resumed later. To Java developers, virtual threads are simply threads that are cheap to create and almost infinitely plentiful. Hardware utilization is close to optimal, allowing a high level of concurrency and, as a result, high throughput, while the application remains harmonious with the multithreaded design of the Java Platform and its tooling.

So new Java virtual threads really are meaningfully different from Kotlin coroutines.


Nice. With Loom, you have to use a thing called continuations, which include a method called suspend. So, under the hood, it does very much the same thing as Kotlin does with its suspend functions. Which isn't surprising if you understand how lightweight/green-thread schedulers work. Basically, you allow each "thread" to run until it suspends in some kind of loop.

What Loom likely does for blocking IO is implement some continuation wrappers around it that do the right thing, which is indeed nice.

With Kotlin co-routines, you'd mostly avoid using blocking IO in favor of non-blocking IO, with the exception of legacy code that uses blocking IO, which up until now you'd simply park on a threaded co-routine context for that reason. So, yes, this is a very nice feature that I wasn't aware of, and thanks for pointing that out. It won't massively change my life because there's not a whole lot of blocking IO happening in most things that I use. But it's great for transitioning legacy code to more modern frameworks without a lot of invasive changes.


As I understand it, the point of Loom/virtual threads (same thing) is that blocking APIs, the kind that you can easily debug, profile, and otherwise use with standard JVM tooling, shouldn't be "legacy". Loom fixes the scaling problem so we can go back to doing things the older, simpler, now better way. In time, the explicitly non-blocking APIs, and Kotlin's compiler-based coroutines, will be legacy.


After recently taking on a relatively hefty Kotlin coroutine project for the express purpose of dealing with many suspends from IO, I can say coroutines are pretty good at what they do. There are things that Loom does better that I've been waiting years to get back into the JVM, but there are facets of coroutines that are just really straight forward that simply green threads can't easily resolve. My hope is that the Java and Kotlin crews take the best of both worlds and create some compelling frameworks, because in the end, a better set of products is good for everyone.


> but there are facets of coroutines that are just really straight forward that simply green threads can't easily resolve

Could you elaborate on some of those?


The primary differences between Loom's virtual threads and Kotlin's virtual threads (coroutines) are:

  - Coroutines have a higher overhead (though not a huge one); Loom threads are nearly free.

  - Kotlin coroutines aren't pre-emptible (and this is the big one). Loom's threads are pre-emptible. This property is why they can be more efficient.


From the spec: "they are preempted when they block on I/O or synchronization." That's nice, but if you write some inefficient loop, your virtual thread won't be pre-empted.

Also, Kotlin co-routines will obviously use Loom when it is available and offer the same benefits. All that takes is a few tweaks in the co-routines library. Mostly, that should not even be needed because Loom is API compatible with the old threadpool and thread APIs which already have extension functions to turn them into coroutine scopes. So, a lot of stuff should just benefit from Loom just by upgrading the JVM. Which is great of course.


What I read in JEP 425 is that Loom threads aren't actually preempted by the JVM's user-space scheduler. They can be mapped onto an arbitrary number of OS threads, but if you use an OS thread per core, and one of your virtual threads is CPU-bound, it won't be preempted IIUC.


Kotlin adds a huge overhead for async code. I measured over 50% slower for function heavy code.

The other problem is that you end up with two worlds, the non-async and the async ones, and they can't really call each other without a lot of problems.

The new JVM features will allow you to write async code while still being able to call any functions, that'll make it actually useful.


My servers mostly idle waiting for IO (like most servers do), so I never saw any slowdowns with this; nor would I expect to. Also this is not really an issue on e.g. Android where a lot of UI code is inherently asynchronous and co-routines are used a lot. And it's a complete non issue on kotlin-js as well where co-routines and js promises are essentially the same thing. I use it; it's fine.

Kotlin's co-routines are simply syntactic sugar for simple callbacks but without the boilerplate. If it has some kind of success/fail callback mechanism, you can create a co-routine from it. Futures, Promises, a Flux, whatever. That's all that happens, and the co-routine library comes with extension functions for all of these.

I'm not sure what you measured relative to what. But I bet it is some blocking/expensive code actually taking a while before it suspends or something similarly sub-optimal. Of course the key bottleneck with virtual threads would be the scheduler loop that runs all these co-routines. The main contribution of Loom is moving that stuff into the JDK and possibly optimizing the hell out of that. But other than that it does something very similar to what other green/light weight threads have to do necessarily.

Basically any time any virtual thread does something expensive/blocking, all the other virtual threads suffer. If you see a browser app freeze for a bit, that's usually what happened. Loom adds a thing called continuations which literally have a suspend method that tells Loom that it is now safe to switch to another Fiber. Failure to use suspend correctly will result in the same stuttering behavior on Loom: things will freeze or block until whatever is running calls suspend; only 1 virtual thread executes at the same time. That's the whole point. For non-blocking IO this is fine. If you are doing computationally intensive stuff, you either need to suspend frequently or use some thread pool. Otherwise the virtual thread will hog the CPU whenever it runs.

As for the two worlds, I can call any function from a suspend function. Just not the other way around without creating a co-routine context. What Loom adds is basically a better light weight thread scheduler. Under the hood it likely does many of the same things that Kotlin already did.
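The standard fix in either model is to off-load the hot loop to a dedicated, bounded pool so the scheduler threads are never starved (a hypothetical Java sketch; Kotlin's Dispatchers.Default plays the same role):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Offload {
    // A bounded pool sized to the machine for CPU-heavy work, kept separate
    // from the threads that serve the many I/O-bound tasks.
    static final ExecutorService CPU_POOL =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    static CompletableFuture<Long> sumAsync(long n) {
        return CompletableFuture.supplyAsync(() -> {
            long sum = 0;
            for (long i = 1; i <= n; i++) sum += i; // stands in for a CPU-bound loop
            return sum;
        }, CPU_POOL);
    }

    public static void main(String[] args) {
        System.out.println(sumAsync(100).join()); // prints 5050
        CPU_POOL.shutdown();
    }
}
```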


> Basically any time any virtual thread does something expensive/blocking, all the other virtual threads suffer.

Most Java APIs like socket and file have been re-written to do this for you under the hood.

You're not expected to manually call suspend.

> Just not the other way around without creating a co-routine context.

This goes beyond function calling though. Same restrictions apply for implementing interfaces or passing functions as arguments, right?

With loom, there is just one world. A function is a function is a function. It's good that both Java and Kotlin can take advantage of this improved model.

> Under the hood it likely does many of the same things that Kotlin already did.

Maybe, but this also extends to debugging, heap dumps and stack traces. Having this builtin has a lot of benefits.


It's heavily CPU intensive code that needs to perform async operations on rare occasions. Since async adds overhead in every function call, code that calls a lot of functions will be slower.

The problem with the Kotlin implementation is that if you ever want to be able to call a single async function, every call higher in the stack must also be async (unless you have a separate runner, but that's not always possible, and even if it is you can have surprising results)


> Kotlin's co-routines are simply syntactic sugar for simple callbacks

That's only partly true. And yeah, they're callbacks, but not like the ones we know from e.g. JS.

As far as I know CPS basically means that the Kotlin compiler turns a function like that:

    suspend fun myFunction(param: String) {
      val asyncVal = someSuspendingFunction()
      val asyncVal2 = anotherSuspendingFunction(asyncVal)
      return someCalculation(asyncVal2)
    }

Into a modified method and *class* like that:

     // The different parts of the function get put into different ifs
     // The state that normally lives in the JVM stack frame is held in a custom class which is a subclass of https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.coroutines/-continuation/
     // An integer tracks where in the function we are
     fun myFunction(param: String, continuation: MyFunctionContinuation?) {
        continuation = continuation ?: MyFunctionContinuation(param)
        if (continuation.state == 0) {
          continuation.state = 1
          someSuspendingFunction(continuation)
          return SUSPEND
        }
        if (continuation.state == 1) {
           // The result mechanism is in the Continuation interface/abstract class. It can be used to later receive the async result
           // of calling another suspending function
           continuation.asyncVal = continuation.getResult()
           continuation.state = 2
           anotherSuspendingFunction(continuation.asyncVal, continuation)
           return SUSPEND
        }
        if (continuation.state == 2) {
           continuation.asyncVal2 = continuation.getResult()
           return FINISHED(someCalculation(continuation.asyncVal2))
        }
     }

     class MyFunctionContinuation : Continuation {

        // Function arguments
        val param: String

        // Every suspending function needs this. It tracks where in the function we currently are
        var state: Int = 0

        // Local variables inside the function. Those would normally live in the JVM function stack,
        // but the coroutine runtime needs to be able to "rehydrate" the JVM function stack once a non-blocking call finishes
        var asyncVal: <Don't remember>
        var asyncVal2: <Don't remember>

     }
I hope the basic idea comes across.

I read this some time ago in a book about coroutines. But you can see that there is some overhead. The basic idea is to replicate the function call stack in a custom data structure. This way we can be deep inside a normal JVM call stack and also have a separate Kotlin coroutine call stack.

Once the coroutine wants to suspend, it can simply return from the JVM call stack, but the Kotlin call stack is preserved. Later, when we want to continue, we can call the same function again, with the Kotlin call stack now containing the result of the async operation.

If the JVM can do this natively, we wouldn't need this second Kotlin coroutine call stack, because the native JVM call stack would support this.


Nice. This explains why the stack traces are kinda useless unless you take special care with Kotlin coroutines.


That’s a huge misunderstanding. With a tiny caveat, Loom magically transforms every blocking Java call into a non-blocking one. The caveat is native FFI, which is extremely rare in the case of the JVM. Kotlin’s coroutines are cooperative by definition, while Loom allows for non-cooperative concurrency.


There are some applications where coroutines are a comfortable way to code even if you don't care about concurrency. For instance there are many cases where you write an Iterator that wraps an Iterator, like

https://paulhoule.github.io/pidove/apidocs/com/ontology2/pid...

that involve a slightly awkward state machine (and some complexity around initialization and teardown) that can be avoided with coroutines such as Python's generators.
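For a concrete (made-up) example of that awkwardness in Java: even a simple filtering Iterator needs a hand-rolled one-element-lookahead state machine just to answer hasNext(), state that a generator's yield carries implicitly.

```java
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.Predicate;

// A filtering Iterator wrapper: note the explicit lookahead buffer and flag,
// the small state machine a coroutine/generator would make implicit.
public class FilterIterator<T> implements Iterator<T> {
    private final Iterator<T> inner;
    private final Predicate<T> keep;
    private T next;              // buffered lookahead element
    private boolean hasBuffered; // is a matching element buffered?

    public FilterIterator(Iterator<T> inner, Predicate<T> keep) {
        this.inner = inner;
        this.keep = keep;
    }

    @Override
    public boolean hasNext() {
        while (!hasBuffered && inner.hasNext()) {
            T candidate = inner.next();
            if (keep.test(candidate)) {
                next = candidate;
                hasBuffered = true;
            }
        }
        return hasBuffered;
    }

    @Override
    public T next() {
        if (!hasNext()) throw new NoSuchElementException();
        hasBuffered = false;
        return next;
    }

    public static void main(String[] args) {
        Iterator<Integer> evens =
                new FilterIterator<>(List.of(1, 2, 3, 4).iterator(), n -> n % 2 == 0);
        while (evens.hasNext()) System.out.println(evens.next()); // prints 2 then 4
    }
}
```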


Apart from the performance improvements already mentioned, there are two more benefits I can think of:

* clean call stacks - stack traces returned by current async libraries are very difficult to understand

* no need to colour functions, no need to worry if your function is suspendable or not and in which context you are using it


Related:

Java record pattern matching in JDK 19 - https://news.ycombinator.com/item?id=31378896 - May 2022 (155 comments)

Using Java's Project Loom to build more reliable distributed systems - https://news.ycombinator.com/item?id=31314006 - May 2022 (91 comments)

JEP proposed to target JDK 19: 425: Virtual Threads (Preview) - https://news.ycombinator.com/item?id=31236855 - May 2022 (212 comments)

Achieving 5M persistent connections with Project Loom virtual threads - https://news.ycombinator.com/item?id=31214253 - April 2022 (145 comments)

Loom: Project Loom Early-Access Builds - https://news.ycombinator.com/item?id=28191308 - Aug 2021 (76 comments)

Project Loom and Structured Concurrency - https://news.ycombinator.com/item?id=25300233 - Dec 2020 (108 comments)

Project Loom: Fibers and Continuations for the Java Virtual Machine - https://news.ycombinator.com/item?id=15599854 - Nov 2017 (41 comments)


Any previous coverage of JEP 424 (Panama), the planned JNI alternative?


As someone who spent the first 15 years of their career in Java, but has been out of the ecosystem for some time now, it's cool to see Java get such big improvements, especially virtual threads.

However, given the number of excellent JVM languages, especially Kotlin, I'm curious what the general consensus is for starting new projects. That is, if I were starting a greenfield project, I think I would definitely use Kotlin over Java - Kotlin doesn't have to deal with Java's backwards-compatibility concerns, and thus does away with some of its more annoying complexity. E.g., first-class language support for nullable/optional types is, I think, critical these days - Java has bolted on an Optional wrapper, but it's cumbersome and doesn't provide the kind of strong guarantees that a language-implemented version does.

So what does the HN community think? Would you use another JVM language over Java if you were starting a new project?


Kotlin is getting some of its own baggage, at least by virtue of having to deal with multiple compilation destinations (JVM, native, Android, JS). This is why you end up with things like @JvmStatic or @JvmInline.

Java is coming along quite nicely. Its pattern matching looks to be better implemented than Kotlin's (e.g. destructuring implementation). Same with string interpolation (https://openjdk.org/jeps/8273943). And now with Loom, no need to maintain two separate API surfaces for sync vs async. It has the last mover advantage.


I like Kotlin. However, these decisions aren't always simple - and are based on more than the technical merits of the language.

Some years ago, my division at <large company> decreed Scala was the way of the future. All new development was to be done in Scala. We were offered training classes. We formed book clubs. We paired, shared, and opined as we tried really hard to do functional programming correctly. We genuinely tried, and then we gave up. The specific reasons would better suit another post, but it was a grassroots developer-led effort that led us to abandoning Scala.

At this point, we have backend code in NodeJS, Scala, GoLang (this is currently a performance-sensitive one-off), and Java. That's a problem for code reuse, tooling reuse, and general maintenance. If we were to write new components in Kotlin, we would be making the problem worse. At this point in time Java is our safe fallback.

In the future we may try a different language again, but for now there needs to be a really good reason to add to our tech stack. I suspect most new projects will continue to be Java for quite some time, because we're not going to rewrite all the Scala and NodeJS code "just because". It will slowly get replaced as it stops working, or as we find reasons to replace it other than "it's in a language we don't like". Until then, our polyglot set of codebases is too intimidating and devs are going to resist adding another language to it.


> If we were to write new components in Kotlin, we would be making the problem worse. At this point in time Java is our safe fallback.

I don't agree with this. Kotlin is essentially a drop-in replacement for Java, so much so that you can replace your Java classes _one by one_ with Kotlin classes. The build tools, frameworks etc are mostly the same, so no real changes required in the environment. (All this is not true for Scala, for example.)

From one perspective, one might argue that Kotlin didn't do enough (e.g. it has ADTs but no pattern matching); on the other hand, this is exactly why it is so easy to step into. You can also get productive in it very quickly coming from Java: there is no need to use advanced stuff like async/await, and there are very few caveats you need to worry about.

Just my 2c, based on experience.


Did golang live up to the performance requirements you wanted from it? Unless it is some very small microservice which can get away with only value types, I fail to see go beating java.


I would use Scala. I like FP and Scala comes with some awesome libraries for concurrent/async programming like Cats Effect or ZIO. Good choice for creating modern style micro-services to be run in the cloud (or even macro-services, Scala has a powerful module system, so it's made to handle large codebases).

https://typelevel.org/cats-effect/

https://zio.dev/

The language, the community and customs are great. You don't have to worry about nulls, things are immutable by default, and domain modelling with ADTs and pattern matching is pure joy.

The tooling available ranges from good to great, and Scala is big enough that there are good libraries for typical use cases, if not the vast majority of them, with Java libs as a reliable fallback.


If you have a small team of engineers and you need the added productivity of a more ergonomic language, then go ahead with Kotlin. In particular it's good for "lite-functional" style with OO mixed in, and your engineers get to keep all the familiarity they have with the JVM ecosystem.

It's not some panacea though, and at the end of the day Java is the system language of the JVM. New VM features are not built to cater to the guest languages. In the case of virtual threads we have a directly competing "async" solution to Kotlin coroutines. So you may see further ecosystem split between libraries catering to Android and those catering to backend work.


Virtual threads are NICE, but not truly critical for most Java devs. What I'm really happy with is the pattern matching implementation. Coupled with exhaustive switch-case, it's a whole new world for Java devs. And it's great that the design they came up with fits so nicely into the existing Java language that they didn't have to introduce much new syntax.
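A minimal sketch of what that combination looks like (the Shape hierarchy here is made up for illustration; pattern matching for switch is still a preview feature in JDK 19, so it needs --enable-preview there, though it became final in later releases):

```java
// A sealed hierarchy plus an exhaustive switch. Hypothetical types,
// just to show the shape of the feature.
sealed interface Shape permits Circle, Square {}
record Circle(double radius) implements Shape {}
record Square(double side) implements Shape {}

public class PatternDemo {
    static double area(Shape s) {
        // No default branch needed: the compiler knows Circle and Square
        // are the only permitted subtypes, and rejects the switch if one
        // of them is not handled.
        return switch (s) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Square sq -> sq.side() * sq.side();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Square(3)));
        System.out.println(area(new Circle(1)));
    }
}
```

The exhaustiveness check is the interesting part: add a third permitted subtype to Shape and this stops compiling until the switch handles it.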


> Would you use another JVM language over Java if you were starting a new project?

No. I'm never a fan of anything transpiled like TypeScript or Kotlin (yes, I know Kotlin is compiled down to JVM bytecode); you end up dealing with more headaches and troubleshooting in the end.


FWIW, after doing "plain" JavaScript for a couple years before moving to TypeScript, I would never do another project without TS, and these days I think not using it is a colossal mistake.

First, yes, I agree that setting it up initially and understanding all the various tsconfig options can be a pain. But it's also largely a one-time cost, and honestly over the past couple years I really didn't have any "weird troubleshooting" problems once I got it working. Also, these days more is being done to make TypeScript easier to run out of the box, e.g. Deno.

But the benefits of using TypeScript are vast, even from an individual programmer perspective, nevermind the huge benefits of using it to allow for refactoring and better comprehension in a larger team. The autocomplete benefits alone that I get with it in VSCode are worth it.


Been working on a greenfield project for 3 years now. The initial config isn't even the problem. It seems like every time I run into a bug or update packages, TypeScript is the cause behind it. Just a couple of weeks ago I had to update mongoose to the latest version to satisfy some STIG requirements, and TypeScript basically broke all the compatibility, so I had to go back and rewrite a bunch of sort functions. I'm in more of an infrastructure role now, but I'm constantly having to deal with other devs' TypeScript bugs. We are sticking with TypeScript because of how much easier it is for them to write, but personally I would never use transpiled languages if it were up to me or for my personal projects.


TypeScript vs JavaScript is a completely different story of Kotlin vs Java though. All of the typing benefits are already present in Java with the benefit that Java is the system language of the JVM.


Actively coding in Java in a large system at the moment (so we can't Kotlin), but if I were to start a greenfield project I'd give serious consideration to doing it in Kotlin.


Loom is the talk of the town (for great reasons!), but let's not forget about Project Amber and Record Patterns [1] ... there's an arc of features moving the platform towards a data-oriented programming model. Brian Goetz talks about this quite a bit [2] including the latest 19 launch podcast [3].

[1] https://openjdk.org/jeps/405

[2] https://inside.java/u/BrianGoetz

[3] https://inside.java/2022/09/20/podcast-026


Nice new language features for Java users! sobs in Android-Java8


Is there a path to upgrading Java on Android? Surely new Android releases could do this easily?


No, because Dalvik (https://en.wikipedia.org/wiki/Dalvik_(software)) / ART (https://en.wikipedia.org/wiki/Android_Runtime) are forks of Java. New features need to be re-implemented due to licensing issues. And Google seems to be focussed more on Kotlin than on newer Java features.


Android? Why not Kotlin then?


Because regardless of Kotlin, modern Java libraries won't be usable on Android, despite the marketing how "compatible" Kotlin happens to be with Java.


Kotlin is certainly a good general purpose language. But many developers would prefer to be able to use Java for Android development including the excellent tool chain and numerous libraries. It's a shame that commercial disputes between Google and Oracle have left us with this fragmented platform.


Kotlin is still heavily dependent on the Java ecosystem, so this excuse is nonsense; there is hardly any Kotlin without Java.

See how successful Kotlin/JS and Kotlin/Native are in the wild.


> the excellent toolchain and numerous libraries

The same libraries that can be used from Kotlin?


Only if they work within ART's capabilities and can be desugared into DEX opcodes.


> the excellent toolchain and numerous libraries

The toolchain in both cases is basically IntelliJ (rebranded as Android Studio).


Give Kotlin an honest try. I think you will be impressed... and that's coming from a crusty old curmudgeon "get off my compiler" java fanboy.


You're right, and I use Kotlin for all new projects. However, when working with legacy code and with teams that aren't familiar with Kotlin, introducing a newer Java version is much easier than introducing an entirely new language.


Why learn a whole new language just because the vendor refuses to upgrade?

At that point one could also say "why Android?".


Because you can use the same toolchains, APIs, libraries, and even progressively migrate with both languages in the same project. Honestly, people who whine about Java 8 in Android and refuse to consider Kotlin are suffering by choice.


Only those that can be desugared into Android Java; everything else will fail to either compile or map onto ART features.


All of Kotlin runs fine on Android; that's its point, that all of it can be desugared to run on a 1.6 or 1.8 JVM.


If you don't want to use modern Java libraries as third-party dependencies, then all good.

Now if you plan to dynamically link to Java libraries that require JVM opcodes without counterparts in DEX, SIMD, Panama, or the Java standard library post Java 11, then tough luck.

See it as an opportunity to learn how to use the NDK (for SIMD), or write your own Kotlin versions.


Java 8 is fine and the argument to replace it with kotlin has to be very compelling.

Kotlin was designed to replace Java 1.6 and that's about all it's good for.


> Kotlin was designed to replace Java 1.6 and that's about all it's good for.

Not sure what K was designed for. But we use it to fix J's billion dollar "implicit nulls" mistake, for its terse syntax (making it useful for templating: kotlinx.html), and for method references you can pass around as values. We love its lean towards immutability.

K is not Haskell, but it sure feels a lot safer than J (where a lot of strings were used to refer to methods, and strings+Groovy was used to do templates).

Bottom line: I'm too old --and life's too short-- for NPEs.


Have you tried static analysis? When I used Java, most (if not all) of our NPE problems were pointed out by SonarQube.


Ugh, to hear what hackarounds folks have to deal with in the Java ecosystem is kind of frightening.

I moved from Java to Node (then Typescript) about 8 years ago. Moving to a language that doesn't allow implicit nulls makes a world of difference, importantly, at "programmer" time, not just build time.

That is, when programming in VSCode, I immediately get feedback if I attempt to use a nullable variable in an unsafe manner. That has huge productivity benefits compared to if I only got that feedback after my build ran.

Similarly, simple things like safe optional chaining (i.e. foo?.bar?.baz) makes code so much faster and easier to write, and AFAIK this hasn't been added to Java yet (correct me if I'm wrong).
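For comparison, the closest Java gets today is chaining Optional.map calls (the Foo/Bar/Baz types below are hypothetical, just mirroring the foo?.bar?.baz example):

```java
import java.util.Optional;

public class ChainDemo {
    // Hypothetical types mirroring foo?.bar?.baz
    record Baz(String value) {}
    record Bar(Baz baz) {}
    record Foo(Bar bar) {}

    public static void main(String[] args) {
        Foo foo = new Foo(new Bar(null)); // the chain breaks at baz

        // Each map() is skipped once a link in the chain is null, which
        // is what foo?.bar?.baz gives you in a single expression elsewhere.
        String value = Optional.ofNullable(foo)
                .map(Foo::bar)
                .map(Bar::baz)
                .map(Baz::value)
                .orElse("fallback");

        System.out.println(value);
    }
}
```

It works, but it's a lot of ceremony next to `foo?.bar?.baz ?? "fallback"`.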


Yes. But coupled with our Groovy templates (which we're now moving to kotlinx.html) it did not really help much. This is not Java's problem per se (though one could argue that J's verbose syntax prohibits in-code HTML templating).


What if I don't want to put @Nullable/@NotNull all over the code?


Can't you set up your project to be `@NotNull` by default, and explicitly mark `@Nullable` fields? Those should be rarer anyway.


You can use Java 17 on Android, except features that require new classes and/or methods, obviously.


sobs in j2objc AND sobs in GWT/J2CL


Virtual threads preview! The beginning of a big change to java!


This is the feature I've been waiting for. Bring on the era of the loom!


It's incredible what competition can do to spur innovation. Does everyone remember how stagnant Java had become during the 6-7 era?


It is like being busy with an acquisition, and uncertainty about your job, would be an impediment to doing software development.


It was stagnant even before the acquisition. Sun apparently focused on JavaFX, trying to make an Android competitor, all of which failed.


To me it sucks because it's a space that would get me to use Java if they had a really powerful GUI stack that is modern and feature-filled. I feel like Oracle is dropping the ball by not investing there.

We went from having RAD IDEs that produced native UIs to everything having to be done in Electron, because nobody is innovating in the desktop GUI space.


JavaFX was a missed opportunity. A bit more effort and you'd have an electron killer.


Nobody liked cross platform UI toolkits (they just don't feel native). And all UI development was happening with web tech, where somehow the divergence didn't matter.


People severely overestimate how positively users perceive native vs non-native apps. Especially since:

>And all UI development was happening with web tech, where somehow the divergence didn't matter.

If you're using non-native web technologies 95-99% of the time, why would you care that much about the rest being native?


I guess cross platform toolkits fall into an uncanny valley for UIs.


Web is cross platform, and actual native platform for 99% of people.


Sun put a huge amount of effort into JavaFX. It was literally hundreds of engineers.


JavaFX predates Android.


That competitor was SavaJe, and it's hard to compete when a Do No Evil company screws the Java creators with free beer.


That too, but I think the parent is correct as well. Java has definitely increased the pace of development, with incremental changes going out quicker, especially after 11.


It's been on a date-based rather than feature-based release cycle since 9. I think I recall there being talk of doing that back in the Sun era at one point, but then it ended up being a multi-year wait regardless.

It's much nicer this way, and although a few businesses are still stuck on 6 or 8, most places seem to be aiming to stay current with Long Term Support (LTS) releases (of which 17 is the current LTS and 21 will be the next one).


We switched to 17 this summer.

I am hoping that many of the things in the pipe right now are finalized for 21.


It's been a long time since I thought about this, but wasn't IBM really the one pushing and innovating Java (not just EE)?


IBM has done some innovations in their own implementation, which nowadays lives on as OpenJ9.

Some of those innovations were value type extensions, AOT compilation, and JIT caches, and until OpenJ9 came to be, they were usually only available to commercial clients.


The slow period helped other JVM languages - Clojure, Scala and Kotlin - which are now preferred by many. A fortunate stagnation I think.


I don't think Java has ever faced as much serious competition as it did circa 2005 from PHP and Ruby. What caused Java's stagnation was Sun's decline (which both reduced resources as well as made some big missteps), and as Oracle ramped up the investment in Java, we were able to get back to the roadmap (as well as come up with new ideas).


Do you not think Node.js in the 2010s was serious competition? After all, it combined the dynamism of something like PHP with a high-performance VM built by (IIUC) former HotSpot devs.


Yeah, you're right. Anyway, my point is that what caused Java's stagnation is not lack of competition, and what's caused its resurgence is not greater competition. Both were just a matter of resources.


The 6-7 era was, as far as I remember, blocked by the community process, as there was a dispute over open-source Java implementations and the restrictions in Sun's certification process (no embedded use without a license). Sun instead invested time and money into OpenJDK, which probably also pissed off a lot of people who wanted free rein over their Java implementations without having Sun or the GPL dictating terms.


Someone has to have money to spend. Sun did not have much money during 6-7 era.


GraalVM JDK19 dev builds are available at https://github.com/graalvm/graalvm-ce-dev-builds/releases/la....


Does anyone know if there are any plans to improve typing and null pointer safety in future releases? I tried to google for existing JEPs, but I'm not sure I'm using the correct terms.

I mean, I'd like to see better typing because the current generics are too basic, and you always end up with casts and `isAssignableFrom`. Is there anyone working on types for Java, like in TypeScript and Rust?
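To illustrate: because generics are erased at runtime, APIs routinely fall back to carrying Class tokens and casting. The Registry class below is a made-up example of the pattern, not any particular library:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical service registry showing the erasure workaround: the
// runtime Class token stands in for the type parameter the JVM forgot.
public class Registry {
    private final Map<Class<?>, Object> services = new HashMap<>();

    public <T> void register(Class<T> type, T impl) {
        services.put(type, impl);
    }

    public <T> T get(Class<T> type) {
        // Class.cast() is a checked cast driven by the token; without the
        // token we'd be stuck with an unchecked (T) cast.
        return type.cast(services.get(type));
    }

    public static void main(String[] args) {
        var registry = new Registry();
        registry.register(CharSequence.class, "hello");
        System.out.println(registry.get(CharSequence.class));
    }
}
```

A language with reified generics (or TypeScript-style structural types) wouldn't need the token at all.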

Another huge issue for me is null safety; it's basically the only reason we use Kotlin right now. We would be happy to migrate to the latest Java if it fixed the nulls. I'm wondering what prevents adding simple syntax sugar like a !-suffix for non-nullable definitions, e.g. `void fooBar(String! path)`, which would prevent passing a null argument at the compiler level. Could that work?


So... Green threads are back?


But this time they are multiplexed over more than one platform thread.
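A quick sketch of what that buys you (virtual threads are a preview API in JDK 19, so this needs --enable-preview there; the task count and sleep are arbitrary):

```java
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class LoomDemo {
    public static void main(String[] args) throws Exception {
        // One virtual thread per task: blocking in sleep() parks the
        // virtual thread and frees its carrier (platform) thread, so
        // 10,000 "threads" run on a small pool of OS threads.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            var futures = IntStream.range(0, 10_000)
                    .mapToObj(i -> executor.submit(() -> {
                        Thread.sleep(10); // blocking, but cheap on a virtual thread
                        return i;
                    }))
                    .toList();

            long sum = 0;
            for (var f : futures) sum += f.get();
            System.out.println(sum); // 0 + 1 + ... + 9999
        }
    }
}
```

The same code with a fixed platform-thread pool would either need 10,000 OS threads or serialize the sleeps; here the scheduler multiplexes them transparently.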


There's a lot more to it than that. Loom is Actually Useful Green Threads^TM.


I think it is funny that originally Java only had green threads, then they were replaced with native threads, and now Java is getting green threads again. Obviously a lot of development has happened in that area since, but still funny.


Seems like Java has been ported to RISC-V on Linux now.


Not sure why one would post release notes when the release is not out yet. My first instinct was to download and try out the new features, but it's just the release candidate on the OpenJDK site.


Not sure if it was there 2 hours ago, but you can download GA now: https://jdk.java.net/19/


I'm mostly excited about JEP 405: Record Patterns + JEP 425: Virtual Threads.

https://openjdk.org/jeps/405 https://openjdk.org/jeps/425


Not officially announced. What's ready to download is the release candidate from Saturday: https://jdk.java.net/19/


Yes. But today is official release day.


AFAICT there are no major language changes released in JDK 19, only previews.


Is it faster?


Looks like they just won't stop digging the Loom hole deeper, huh.


Why is it a "hole"?



