
State of Loom - mxschumacher
https://cr.openjdk.java.net/~rpressler/loom/loom/sol1_part1.html
======
ceronman
This is my favorite JVM project and I think it's going to be huge!

This model of concurrency is so much better than the async/await model used by
many other languages. No more colored functions [1], or worse
Completable<Future>, promises et al. Nice stacktraces, debuggers that work
plus no need for thread pools anymore. I can't wait for this to be ready for
production.

The only drawback seems to be when calling native code. I guess it's the same
problem that Golang has. The good thing is that the Java ecosystem is not that
dependent on native stuff, so I think it's a fair tradeoff to make.

[1] [https://journal.stuffwithstuff.com/2015/02/01/what-color-
is-...](https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-
function/)
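
For anyone who hasn't tried the early-access builds: the point about stack traces and debuggers comes from plain blocking code running unchanged on a virtual thread. A minimal sketch, assuming a Loom-enabled JDK with `Thread.startVirtualThread`:

```java
// A minimal sketch, assuming a Loom-enabled JDK: ordinary blocking code
// on a virtual thread, with no futures, callbacks, or thread pools.
public class BlockingOnVirtual {
    public static void main(String[] args) throws InterruptedException {
        Thread t = Thread.startVirtualThread(() -> {
            try {
                Thread.sleep(100); // parks the virtual thread, not an OS thread
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("ran on: " + Thread.currentThread());
        });
        t.join(); // joins like any other Thread
    }
}
```

An exception thrown inside the lambda produces an ordinary stack trace rooted in the virtual thread, which is what makes debugging nicer than with future chains.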

~~~
albertzeyer
Can you maybe elaborate on why this model is better than async/await? I have
to admit I don't really know Loom, but at first glance it looks like this is
just (green) threads, and async/await is an orthogonal concept to this, or
not? But maybe I am confusing things here.

Can you recommend a good overview of all the current concurrency approaches
and concepts, like async/await and its alternatives, with their advantages and
problems? I would love a more recent overview of this.

I had the impression that most recent languages have adopted the async/await
concept. At least JS and Python. Maybe also Rust? Go? Erlang? And there are
probably libraries for C++ that do the same, or maybe it is already
integrated? I have to admit that I did not fully follow all this development.

~~~
GolDDranks
Async/await splits your functions up at their "blocking points" into fragments
that can be run fragment-by-fragment, in an interleaved way, by an executor /
event loop, enabling a high level of concurrency within a single OS thread.

The problem is that your split-up async function ceases to be a normal
function. It doesn't use the stack in the same way as normal functions, and
the executor / event loop is required to run those async functions.

So you can't call async functions from normal functions, because they aren't
normal functions. You need to create an event loop and then hand the task over
for it to process. Your world splits into sync and async functions.

Go and Erlang go with the same "lightweight thread" approach as Java's Project
Loom. On the other hand, JS, Python and C# have gone with async. I'm not sure
these languages actually _need_ async; something similar to Project Loom would
also have fit, and would perhaps have been simpler from the user's
perspective. But hindsight is 20/20.

Rust has also gone async, and I'd argue it's the only one of the pack that
really has async/await as a necessity. This is because of two design
requirements that differ from Java, C#, Python and JS: 1) it must have native-
level performance when calling foreign code that expects a C-like stack
(something like Loom doesn't provide that), and 2) it must not have an
implicit default runtime.
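
The sync/async split described above is visible in Java itself. A hedged sketch (the method names are illustrative, not from any real API):

```java
import java.util.concurrent.CompletableFuture;

public class TwoColors {
    // A "normal" (blue) function: any code can call it directly.
    static String fetchBlocking() throws InterruptedException {
        Thread.sleep(50); // stands in for blocking I/O
        return "result";
    }

    // The async (red) variant returns a future, so every caller must
    // either block on it (defeating the point) or become async itself.
    static CompletableFuture<String> fetchAsync() {
        return CompletableFuture.supplyAsync(() -> "result");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchBlocking());     // direct call, ordinary stack
        System.out.println(fetchAsync().join()); // caller must unwrap the future
    }
}
```

With virtual threads, only the blue variant is needed: it can be made concurrent by the caller's choice of thread, not by rewriting its signature.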

~~~
yrio
Python is popular in data science and AI, and those fields depend on a lot of
native libraries: NumPy, SciPy, etc. Maybe that's why they chose the
async/await model.

~~~
GolDDranks
Yeah, sounds plausible. A lot of Python stuff is actually wrapped C / C++ /
Fortran code, so it makes sense to optimize for FFI.

------
jmartrican
I've been waiting for this project for a while. I saw an amazing demo of it in
a YouTube video. In the demo, a regular Jetty server running a simple endpoint
that slept for 1 sec was DoSed with multiple concurrent requests. You could
see the execution time increase as the number of concurrent requests
increased. The presenter then changed the Jetty server's code to create a new
Fiber instead of a Thread, reproduced the test, and this time there was no
increase in execution time.

What I really like about this project is that we can keep Spring MVC
applications in non-reactive form. Spring came out with a reactive framework
for implementing APIs, but I'm not a fan of that style for a few reasons, plus
there are millions of LOC using the non-reactive implementation. By utilizing
Project Loom, we do not have to switch over to the Spring reactive way, and we
can increase the performance of existing code.

EDIT: Here is the video:
[https://www.youtube.com/watch?v=Csc2JRs6470](https://www.youtube.com/watch?v=Csc2JRs6470)

- he goes into Loom at 19:30.

- he goes into the demo at 24:00.
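
The effect in that demo is easy to reproduce without Jetty. A sketch, assuming a Loom-enabled JDK with `Executors.newVirtualThreadPerTaskExecutor` from the early-access API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class SleepyEndpoints {
    public static void main(String[] args) {
        long start = System.nanoTime();
        // 10,000 concurrent "requests" that each sleep 100 ms: with one
        // virtual thread per task this finishes in roughly 100 ms of wall
        // time, because a parked virtual thread occupies no OS thread.
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i -> exec.submit(() -> {
                Thread.sleep(100); // stands in for the slow endpoint
                return i;
            }));
        } // close() waits for all submitted tasks to finish
        System.out.printf("elapsed: %d ms%n",
                (System.nanoTime() - start) / 1_000_000);
    }
}
```

Doing the same with a fixed pool of a few hundred OS threads would take seconds, which is the increase in execution time the presenter showed.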

------
MrBuddyCasino
> A virtual thread is a Thread — in code, at runtime, in the debugger and in
> the profiler.

To explain: if you debug async code on Kotlin right now, you cannot single-
step in a debugger, because the debugger attaches to a thread. But coroutines
get scheduled on different threads all the time!

I wonder what this means for ThreadLocals and locks. Will they work as before?
If so, that is huge, because locks don't work with async code either, for
obvious reasons. The consequence is that the whole ecosystem is split into two
parts: you cannot just use a Guava cache in Kotlin coroutine-based code.

If Loom manages to avoid all of this, this is fantastic news.

~~~
pron
> I wonder what this means for ThreadLocals and Locks? Will they work as
> before?

Yes. You can try that today:
[http://jdk.java.net/loom](http://jdk.java.net/loom)
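
A quick sketch of what "works as before" means, assuming a Loom-enabled JDK: each virtual thread gets its own ThreadLocal slot, and a plain ReentrantLock parks only the virtual thread that holds it.

```java
import java.util.concurrent.locks.ReentrantLock;

public class LocalsAndLocks {
    static final ThreadLocal<String> NAME = ThreadLocal.withInitial(() -> "unset");
    static final ReentrantLock LOCK = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Thread t = Thread.startVirtualThread(() -> {
            NAME.set("virtual"); // visible only inside this virtual thread
            LOCK.lock();         // blocking lock parks the virtual thread if contended
            try {
                System.out.println(NAME.get()); // prints "virtual"
            } finally {
                LOCK.unlock();
            }
        });
        t.join();
        System.out.println(NAME.get()); // main thread still sees "unset"
    }
}
```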

~~~
MrBuddyCasino
This is outstanding. They solved the "what color is your function" problem!
The Rust ecosystem has a completely separate std-lib just for async. I think
there is one other language that managed to avoid that problem too, by using
compile-time magic [0].

[0]
[https://github.com/ziglang/zig/issues/1778](https://github.com/ziglang/zig/issues/1778)

~~~
The_rationalist
Kotlin has solved colouring before loom -> [https://medium.com/@elizarov/how-
do-you-color-your-functions...](https://medium.com/@elizarov/how-do-you-color-
your-functions-a6bb423d936d)

Go was the first to solve colouring, but theirs seems to be a less type-safe
solution.

~~~
MrBuddyCasino
Golang occupies an interesting spot here. They never had to migrate from a
predominantly blocking, thread-based ecosystem to async. Does Golang really
have two colors, is explicit threading a thing (I honestly don't know)? Or is
it really just one color, namely the async one?

~~~
Skinney
Golang only has one colour. `funcA` works the same way in sync (`funcA()`) or
async (`go funcA()`) context.

~~~
aliceryhl
Golang doesn't really have a sync context. It has one color because everything
is async. The `go` operation is not comparable to `await`, rather it is
comparable to spawning.

~~~
Skinney
There’s a difference between sync and async code in Go, which is why you have
all the normal threading primitives like mutexes, semaphores and blocking
queues/channels.

The point is that functions themselves don’t come in sync or async flavors.
Just like in Java.

~~~
aliceryhl
I don't think the existence of mutexes, semaphores and queues/channels implies
that there is a sync version of Go. You can totally use those primitives in
asynchronous Rust too.

You call the queues blocking, but they aren't really in the sense of
"blocking" usually used when talking about async in Rust. The Go runtime can
and will preempt your Go code in the middle of waiting for a channel to run
some other task, and this preemption is what makes it different from a
blocking Rust channel. An async Rust channel will also make the calling
function wait for messages when you await the receive method.

Basically my point is that because any Go code can be preempted at any point,
that makes all Go code async. The language not making you type await on
everything doesn't make it sync.

------
doctor_eval
I was reading about Go threads just the other day and the article mentioned
that Go uses green threads because they are more efficient.

I thought this was weird because IIRC Java 1.0 used green threads in Linux and
it was a big deal when they moved to OS threads.

I’ve long believed that the IT world moves in cycles but this is a very clear
example of exactly that. Java has gone from green threads to Posix threads and
now back to green threads.

I do think it’s awesome (I love goroutines), and threads in Java have become a
bit of a nightmare, made a little easier with executors and
CompletableFutures. So this further improvement is great news.

But still... call me when the builder pattern is dead.

~~~
dullgiulio
As the article explains (I know it's hard to comment after actually reading),
Java moved away from using several green threads on top of a single system
thread.

What Loom and Go do is to schedule green threads on a bunch of system threads
and spawn more system threads when they get blocked doing synchronous system
calls.

~~~
doctor_eval
No need for snark, I did read the article. I was just pointing out the irony
that it was a big deal when the JVM moved away from green threads and now it’s
a big deal when it moves back. It’s a comment on the hype cycle. And, as I
said in my OP, it’s a good change and I’m happy to see it.

(And if you want to be pedantic, IIRC the green threads were originally mapped
to the Java process, not a thread, because threads were either unavailable or
immature in Linux when Java 1 came out; I can’t remember which)

~~~
codr7
You're missing the point, which is what the person replying was trying to
explain. We're not simply going back and forth, we're learning from past
experience and improving implementations.

The green threads on the JVM were not the same kind as the green threads in Go
(and Loom): they would block on IO. I can't speak for Loom, but Go
automagically reschedules your green thread when it blocks, which allows other
threads to run while waiting.

The point is that on the JVM they weren't rescheduled when they blocked; every
process has a main thread.

~~~
doctor_eval
It’s hard to understand how I can miss my own point... yes, I know that we are
not simply going back and forth; I did read the article. I know that the new
implementation of green threads is more sophisticated than the original
implementation.

But I find the cycle - what I called the hype cycle - from internal scheduling
to external scheduling and back again, interesting, and I wonder what, if
anything, we as an industry can learn from this?

ISTM that Java 1.2 could have improved on green threads instead of moving to
OS threads. So, is there something we can learn from these two transitions
that will help us all make better decisions in the future? The use of OS
threads and all the complexity this has caused has cost the industry hundreds
of thousands of hours of developer time. If we can learn some lessons from
this, isn't that a good thing?

~~~
dullgiulio
I don't think this is a matter of hype cycle at all. There are two things that
changed:

1. Threading got much faster and more lightweight. This is what Java was
initially trying to work around, until it didn't have to any more.

2. The problem moved to handling as many sockets concurrently as possible.
Even lightweight system threads are too heavy for scaling linearly with the
number of connections (too much context-switching overhead, too much space for
stacks, etc.)

Green threading has become a good idea again because we now have a kernel API
that can be used to multiplex a lot (but not all) of the I/O system calls.

Today the Go runtime uses epoll/kqueue to read from a big bunch of sockets,
whenever something new happens on any of them. This takes only one system
thread.

The API model of epoll/kqueue implies some way to handle concurrency in your
user code: this can either be callbacks (or async/await syntactic sugar) or
green threads and CSP (channels and so on.) This is why green threading is
having a comeback.
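
In Java terms the multiplexing primitive is `java.nio.channels.Selector`, which is built on epoll on Linux and kqueue on the BSDs. A minimal sketch of one OS thread watching sockets for readiness:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class OneThreadManySockets {
    public static void main(String[] args) throws IOException {
        // Selector multiplexes readiness events for many channels on a
        // single OS thread, via epoll/kqueue under the hood.
        try (Selector selector = Selector.open();
             ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress(0)); // any free port
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            // A green-thread runtime would park user threads here and wake
            // the ones whose channels become ready; we just poll once.
            int ready = selector.selectNow();
            System.out.println("ready channels: " + ready);
        }
    }
}
```

A runtime like Loom's or Go's wraps exactly this loop so user code can stay in the blocking, thread-per-task style while the scheduler does the multiplexing.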

(Sorry for implying you did not read the article!)

~~~
doctor_eval
That's OK, lots of people fail to RTFA, but I really like reading about this
stuff.

> Green-threading has become a good idea again because we now have a kernel
> API that is used to multiplex a lot (but not all) I/O systemcalls.

OK, that makes a lot of sense. I had to read up about how epoll is different
from select/poll (that's how long it's been since I worked in C :). Clearly
epoll was needed to make green threads efficient, but from what I can see, by
the time epoll and friends were widespread, the pthread model was entrenched
in Java.

------
nahuel0x
How does Loom manage killing a vthread from another vthread? In Erlang you can
do it safely, without needing to check a cancellation signal in the process to
be killed. Also, Erlang has per-process heaps, a usually overlooked feature
that gives soft-realtime capabilities by avoiding GC hiccups. JVM+Loom seems
to be closer to Go than to Erlang.

~~~
aidenn0
Does that mean message-passing in Erlang necessarily includes copying?

~~~
toast0
I'm not qualified to say whether copying is necessary at the language level,
but in terms of implementation, BEAM always copies for message passing, AFAIK.

A small caveat is that Erlang has a refc binary type that is reference
counted. When sending a message with a refc binary to another process on the
same node, the content isn't copied, just the reference value (like a
pointer), and the reference count is incremented.

There is an area where copies could potentially be avoided, but I don't think
they are. Recent versions of BEAM have an optional off-heap message queue
feature, and some software patterns have a proxy process that accepts messages
and sends them as-is to another process. If the message is off-heap for the
proxy process, and it would send to the next process off-heap, it could be
possible to avoid copying it, but it might be a bit tricky. (I don't think
this is done, but that's why I said AFAIK earlier.)

------
klysm
I'm really excited to see how this impacts the performance of akka and other
stream-like things in the JVM ecosystem. I've spent far too much time
tinkering with thread pools and digging through profiles with Unsafe.park
everywhere.

------
dang
See also:

[https://news.ycombinator.com/item?id=17646169](https://news.ycombinator.com/item?id=17646169)
from 2018

[https://news.ycombinator.com/item?id=15599854](https://news.ycombinator.com/item?id=15599854)
from 2017

[https://news.ycombinator.com/item?id=23194215](https://news.ycombinator.com/item?id=23194215)
- 3 comments from a few days ago

~~~
joobus
The project has been in development for 3 years, only "early access" binaries
are available now, and the parent article has multiple TBD notes. It's going
to be another few years before this is released, IMO.

~~~
vips7L
Java moves slowly on purpose, because it has strict backwards-compatibility
guarantees.

Moving this slowly is why the implementation is much better than the same in
C#, Kotlin, or JS.

------
aidenn0
Does Loom make JNI unusable? [edit] C-f "JNI" is my friend: they pin a fiber
to an OS thread for the dynamic extent of a JNI call.

------
pizlonator
This is great. If I grok right this is basically the best of native threads
and the best of green threads put together.

I think I proposed this at a crazy idea talk at VEE a long ass time ago (10
years or more). It’s still a good idea but I guess not crazy anymore.

------
e12e
> A server can handle upward of a million concurrent open sockets, yet the
> operating system cannot efficiently handle more than a few thousand active
> (non-idle) threads.

Does anyone have a reference to some up-to-date notes on the state of Linux
and/or Free-/DragonFly-BSD scalability at the process level?

I don't doubt processes are a bit heavy for scaling to a million active
threads of execution, but it would be nice to see what one could expect on a
low-end server (say 16 cores, 64 GB RAM) today.

~~~
zokier
I'd also be curious about this. C10k was a problem at the turn of the century,
but the world has changed a lot since, both in terms of hardware and kernel.

The best reference I could easily find was this experiment with 100k
connections to MySQL, which has a thread-per-connection model. It seems to
handle it well:

[https://www.percona.com/blog/2019/02/25/mysql-
challenge-100k...](https://www.percona.com/blog/2019/02/25/mysql-
challenge-100k-connections/)

------
artemonster
Can Loom be used to simulate RTL, with a verification language on top of that
(similar to SV or e)? I mean practically (since theoretically the answer is
yes).

------
haspok
Why is there still no timeline? I don't see any previews, let alone GA
features, in JDK 15, which means that probably nothing's going to be released
by JDK 17. Which is sad, because that pushes GA to the next LTS, an extra few
years down the line... (2023/2024 at the earliest) :(

~~~
BenoitP
> I don't see any previews

There is an early access build [1].

----

As for having it in JDK 17, Ron provides an answer in the reddit thread [2].

[1] [http://jdk.java.net/loom/](http://jdk.java.net/loom/)

[2]
[https://old.reddit.com/r/programming/comments/gkgzld/state_o...](https://old.reddit.com/r/programming/comments/gkgzld/state_of_project_loom_on_jvm/fqulhig/)

------
elric
In case anyone else is confused about some of the internal links appearing to
be broken (e.g. the Scope Variables link), the content is on the second page.

------
eiopa
This reminds me a lot of python’s gevent.

I often wish the language had adopted it instead of the C#-like async/await,
since it's just more straightforward.

~~~
dehrmann
gevent has some serious drawbacks. Stack traces are incomprehensible, it works
by monkey patching existing code, and it falls apart if you have a blocking
operation gevent can't make async. Loom is a bit like monkey patching, but at
the VM level, so I expect it to be much more stable.

~~~
jpgvm
It's worth mentioning those drawbacks are implementation problems, not
soundness or ergonomic problems with the model itself.

I would consider myself a pretty harsh critic of Python but even I appreciate
the elegance of the gevent approach to concurrency.

------
The_rationalist
Are there still legitimate use cases of using real threads vs virtual threads?

------
chewbacha
Hmm I love having cake...

...and eating it too

------
MattTse
Anyone else read this, and read it as google's Project Loon, and wonder what
java has to do with it?

------
Igelau
Anyone else disappointed this wasn't about Lucasarts Loom (1990)?

~~~
davidjhall
Have to admit - I was excited to think this was about a SCUMM update or maybe
a sequel.

------
jayd16
Seems fine, but I'm confused by the async/await hate. At least in C#, it seems
to me there is already a superset of Loom through async/await.

    
    
        Thread.startVirtualThread(() -> {});
    

Starts a new virtual thread that will run a synchronous method. Ok cool. C#
already has:

    
    
        Task.Run(()=>{})
    

This runs a synchronous delegate on the shared thread pool. Tasks are futures
and can return objects. Same as the Loom proposal.

Optionally you can decide to opt into the await unboxing sugar by adding async
to your method signature. There's arguably some double dipping with the async
keyword on a method. It allows the await keyword to be used inside the method
but also forces the caller into a different calling style. You can argue this
shows API intent for cooperative threading. That said, you can still use async
and sync interchangeably.

The syntax seems very similar to me.

Is there something else going on in Loom that I'm missing? Is it a matter of
how the virtual threads are scheduled/preempted vs how other languages with
async/await schedule their tasks?

~~~
jsiepkes
Maybe I don't understand correctly what you're saying, but "Task.Run" just
schedules something on a normal (kernel) thread via a common thread pool. The
Java equivalent is probably "CompletableFuture.runAsync", which does the same
thing.

Loom's "Thread.startVirtualThread" will run something on a userland / green
thread (i.e. not a kernel thread, so there's no context switching and blocking
it "costs" practically nothing). async/await is actually a subset of what Loom
does, in the sense that Loom allows for far more than just async/await. Most
of the "hate" for async/await is probably because it leads to the "what color
is your function" [1] problem.

[1] [https://journal.stuffwithstuff.com/2015/02/01/what-color-
is-...](https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-
function/)

