As the JEP states, pinning due to synchronized is a temporary issue. We didn't want to hold off releasing virtual threads until that matter is resolved (because users can resolve it themselves with additional work), but a fix already exists in the Loom repository, EA builds will be offered shortly for testing, and it will be delivered in a GA release soon.
Those who run into this issue and are unable or unwilling to do the work to avoid it (replacing synchronized with j.u.c locks) as explained in the adoption guide [1] may want to wait until the issue is resolved in the JDK.
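For anyone who hasn't seen what that replacement looks like, here is a minimal sketch (the class and field names are mine, purely for illustration): a synchronized block becomes an explicit ReentrantLock, which a virtual thread can block on without pinning its carrier.

    import java.util.concurrent.locks.ReentrantLock;

    class HitCounter {                       // hypothetical class, for illustration only
        private final ReentrantLock lock = new ReentrantLock();
        private long hits;

        // Before: synchronized (this) { return ++hits; }
        long recordHit() {
            lock.lock();                     // a virtual thread blocked here can unmount
            try {
                return ++hits;
            } finally {
                lock.unlock();               // unlike synchronized, the release is explicit
            }
        }
    }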
I would strongly recommend that anyone adopting virtual threads read the adoption guide.

[1]: https://docs.oracle.com/en/java/javase/21/core/virtual-threa...
The problem is that it's rare to write code that uses no third-party libraries, and these third-party libraries (most written before Java virtual threads ever existed) have a good chance of using "synchronized" instead of other kinds of locks; and "synchronized" can be more robust than other kinds of locks (no risk of forgetting to release the lock, and on older JVMs, no risk of an out-of-memory while within the lock implementation breaking things), so people may prefer to use it whenever possible.
To me, this is a deal breaker; it makes it too risky to use virtual threads in most cases. It's better to wait for a newer Java LTS that can unmount virtual threads on "synchronized" blocks before starting to use them.
> have a good chance of using "synchronized" instead of other kinds of locks; and "synchronized" can be more robust than other kinds of locks (no risk of forgetting to release the lock, and on older JVMs, no risk of an out-of-memory while within the lock implementation breaking things),
I haven't professionally written Java in years, but from what I remember synchronized was considered evil from day one. You can't forget to release it, but you'd better go out of your way to allocate an internal object just for locking, because you have no control over who else might synchronize on your object, and at that point you are only a bit of syntactic sugar away from lock.lock(); try { ... } finally { lock.unlock(); }.
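To make the "internal object just for locking" point concrete, here is a sketch of that pattern (the class is hypothetical): you synchronize on a private object so outside code can't contend on, or deadlock against, your monitor.

    import java.util.HashMap;
    import java.util.Map;

    class Registry {                          // hypothetical class, for illustration only
        // Dedicated monitor: nothing outside this class can synchronize on it,
        // unlike synchronized(this) or synchronized methods.
        private final Object lock = new Object();
        private final Map<String, String> entries = new HashMap<>();

        void put(String key, String value) {
            synchronized (lock) {
                entries.put(key, value);
            }
        }

        String get(String key) {
            synchronized (lock) {
                return entries.get(key);
            }
        }
    }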
The fact that the monitor is public rarely causes issues, and in those cases where it's used on internal objects, it's not really public anyhow.
There's an additional benefit to using the built-in monitors, and that has to do with heap allocation. The data structure for managing the monitor is allocated lazily, only when contention is actually encountered. This means that "synchronized" can be used as a relatively low-cost defensive coding practice in case an object which isn't intended to be used by multiple threads actually is.
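A sketch of what that defensive use can look like (the class is hypothetical): the methods are synchronized "just in case", and as long as only one thread ever touches the object, no full monitor is inflated, so the guard stays cheap.

    class ReportBuilder {                     // hypothetical: normally confined to one thread
        private final StringBuilder out = new StringBuilder();

        // Synchronized defensively: cheap while uncontended, but keeps the
        // object consistent if it is ever shared across threads by accident.
        synchronized void addLine(String line) {
            out.append(line).append('\n');
        }

        synchronized String build() {
            return out.toString();
        }
    }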
Is there a similarly low-level synchronization mechanism that doesn't work this way? .NET's does the same thing.
I guess I might have preferred it if both Java and .NET had chosen to use a dedicated mutex object instead of hanging the whole thing off of just any old instance of Object. But that would have its own downsides, and the designers might have had good reasons to decide that they were worse. Not being able to just reuse an existing object, for example, would increase heap allocations and the number of pointers to juggle, which might seriously limit the performance of multithreaded code that uses a very fine-grained locking scheme.
In .NET, async won, and lock and Mutex don't work with it (lock is like synchronized, though not exactly the same). That's why most libraries use SemaphoreSlim, which would work with green threads. But that's more because of the ecosystem. I've barely stumbled upon locks, and Mutex is mostly used in the Main method, since it acquires a real OS mutex; not exactly cheap, but for GUIs it's a clever way to check if the app is already running. Most libs that use System.Threading.Tasks use SemaphoreSlim, though.
Yeah, definitely. But for a fair comparison I think you have to look at how .NET did things before async/await hit the scene. And, for that, the aspect of the design in question is quite similar between the two.
Early .NET is hardly an independent data point from early Java. Not only was .NET directly influenced by Java, it also had to support a direct migration from the Microsoft-JVM-specific Visual J++ to J#.
The handful of languages I know either do not have a top-level object class that supports an arbitrary set of features (C++) or prioritize a completely different model of concurrent execution (Python, JavaScript).
Hi Ron. Thanks a lot for the amazing work you are doing on Loom and the whole JVM platform. Can the EA builds and GA release you mentioned make it into 22, or did you mean EA builds for 23?
We make scalable graphics rendering servers to stream things like videogames across the web. When we started the project to switch to virtual threads we had that as number one on the big board. "Rewrite for reentrant locks."
Maybe we have more fastidious engineers than a normal company would, since we are in the medical space? But even the juniors were reading up and familiarizing themselves with how to lock properly back in Loom's infancy.
All that only to point out that, yes, they had communicated the proper use of reentrant locks long ago.
I do understand what you're saying from an engineering management perspective though. That effort cost a fortune. Especially when you have the FDA to deal with.
It was more than worth it though! In the world of cloud providers, efficiency is money.
We use the same technologies to deliver, say, remote CT review capability that you would use to stream a videogame. It's just far more likely that the audience I'm communicating with here on HN is familiar with the requirements of videogame streaming than with remote medical dataset viewing. Obviously the requirements of our use case are far more stringent, but there's no need to go into all that to illustrate the point.
1 - Use virtual threads with reentrant locks if you need to do "true heavy" scaling.
2 - Kind of implied, but since you gave the opportunity to make it explicit with your comment =D, there is no need to waste your life on earning no money in videogames when the medical industry is right there willing to pay you 10x as much for the same skills. (Provided your skill is in the hard backend engine and physics work. They pay more for the ML too, if I'm being honest.)
In the Virtual Threads: An Adoption Guide section there is this passage:
When using virtual threads, if you want to limit the concurrency of accessing some service, you should use a construct designed specifically for that purpose: the Semaphore class.
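For context, a minimal sketch of the approach that passage is pointing at (the limit and the backend call are made up): rather than capping concurrency by sizing a thread pool, each virtual thread acquires a permit around the call.

    import java.util.concurrent.Semaphore;

    class LimitedClient {                     // hypothetical wrapper around some backend service
        private final Semaphore permits = new Semaphore(10);   // at most 10 concurrent calls

        String call(String request) throws InterruptedException {
            permits.acquire();                // virtual threads block here cheaply, without pinning
            try {
                return backend(request);      // placeholder for the real I/O
            } finally {
                permits.release();
            }
        }

        private String backend(String request) {
            return "response to " + request;  // stand-in for the actual service call
        }
    }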
That language only obliquely mentions the issue. It is nowhere near clear and direct enough for someone who is just, for example, using a third-party library that is affected. And then it's stuck inside detailed documentation that anyone who wasn't personally planning on adopting virtual threads is unlikely to read.
This seems like it's at least vaguely headed in the direction of that famous scene from early in The Hitchhiker's Guide to the Galaxy:
“But the plans were on display…”
“On display? I eventually had to go down to the cellar to find them.”
“That’s the display department.”
“With a flashlight.”
“Ah, well, the lights had probably gone.”
“So had the stairs.”
“But look, you found the notice, didn’t you?”
“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard.’”
I would like to take this opportunity to thank pron and the amazing JDK developers for working on a state-of-the-art runtime and language ecosystem and providing it for free. Please ignore the entitled; there are many, many happy devs who can't thank you all enough.