Fundamentally, an async API is either data-oriented (Futures/Promises: tell me what data this task produced) or job-oriented (Threads: tell me when this task is done). You can think of it like functions vs subroutines.
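To make the distinction concrete, here's a minimal sketch in plain Java, using `java.util.concurrent.Future` for the data-oriented side and a raw `Thread` for the job-oriented side (class and variable names are just illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class DataVsJob {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Data-oriented: the handle carries the value the task produced.
        Future<Integer> future = pool.submit(() -> 21 * 2);
        int answer = future.get(); // "what data did this task produce?"

        // Job-oriented: the handle only signals completion.
        Thread thread = new Thread(() -> { int result = 21 * 2; /* trapped in here */ });
        thread.start();
        thread.join(); // "is this task done?" -- the result is not observable

        pool.shutdown();
        System.out.println(answer);
    }
}
```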
Since you typically care about the data produced by the task, threads require you to sort out your own backchannel for communicating that data back (a channel, a mutexed variable, or something else). Unscientifically speaking, getting this backchannel wrong is the source of ~99% of multithreading bugs, and they are a huge pain to fix.
You can implement futures on top of threads by using a thread + oneshot channel, but that requires that you know about the pattern and keep the two coupled yourself. The point of futures is that the correct-by-default API becomes the default, unless someone goes out of their way to do it some other way.
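A minimal sketch of that pattern, assuming nothing beyond the standard library. `OneshotFuture` and the capacity-1 `ArrayBlockingQueue` standing in for a oneshot channel are illustrative choices, not any particular library's API:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// A future built from a thread plus a oneshot channel. The whole point
// is that the coupling lives inside this class instead of being
// re-derived (and gotten wrong) at every call site.
final class OneshotFuture<T> {
    private final BlockingQueue<T> channel = new ArrayBlockingQueue<>(1);

    OneshotFuture(Supplier<T> task) {
        // The thread is the job; the channel is the backchannel carrying
        // the data it produces. (Error propagation elided for brevity.)
        new Thread(() -> channel.add(task.get())).start();
    }

    T get() throws InterruptedException {
        return channel.take(); // block until the worker publishes its result
    }
}
```

`new OneshotFuture<>(() -> expensiveComputation()).get()` then reads like any other future, and nobody downstream has to know there's a raw thread underneath.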
On the other hand, implementing threads on top of futures is trivial: just return an empty token value.
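In JDK terms, that's just a future of `Void`, which is literally what `CompletableFuture.runAsync` gives you:

```java
import java.util.concurrent.CompletableFuture;

class FireAndForget {
    public static void main(String[] args) {
        // A "thread" expressed as a future: the value is the empty token
        // Void, so completion is the only thing the handle can tell you.
        CompletableFuture<Void> job =
            CompletableFuture.runAsync(() -> System.out.println("side effect"));
        job.join(); // the moral equivalent of Thread.join()
    }
}
```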
There are also some performance implications: depending on your runtime it might be able to detect that future A is only used by future B, and fuse them into one scheduling unit. This becomes harder when the channels are decoupled from the scheduling.
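You can see a small-scale version of this in the JDK: a dependent stage registered with a non-async combinator may run on the thread that completes the upstream future rather than being rescheduled as a separate task (a sketch; exact thread placement is an implementation detail the spec deliberately leaves loose):

```java
import java.util.concurrent.CompletableFuture;

class Fused {
    public static void main(String[] args) {
        CompletableFuture<Integer> a = CompletableFuture.supplyAsync(() -> 21);
        // Non-async thenApply: b piggybacks on a's completion instead of
        // being submitted as a fresh task, i.e. one scheduling unit.
        CompletableFuture<Integer> b = a.thenApply(x -> x * 2);
        System.out.println(b.join()); // 42
    }
}
```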
Good points, but as far as I can tell, there's nothing preventing you from spawning a bunch of Loom-thread-backed `CompletableFuture`s and waiting on them.
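Something like this, on JDK 21+ (a sketch using the real `Executors.newVirtualThreadPerTaskExecutor` API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class VirtualBackedFutures {
    public static void main(String[] args) {
        // One virtual thread per task, handed to CompletableFuture as an
        // explicit executor.
        try (ExecutorService vthreads = Executors.newVirtualThreadPerTaskExecutor()) {
            CompletableFuture<String> f1 =
                CompletableFuture.supplyAsync(() -> "from one virtual thread", vthreads);
            CompletableFuture<String> f2 =
                CompletableFuture.supplyAsync(() -> "from another", vthreads);
            CompletableFuture.allOf(f1, f2).join(); // wait on the bunch
            System.out.println(f1.join() + " / " + f2.join());
        }
    }
}
```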
True, but Loom won't really help you there, since CompletableFuture.runAsync already uses a pooling scheduler. Same for cats-effect and zio, for that matter.
(And that's aside from CompletableFuture having its own separate problems, like the obtrude methods)
A bounded pooling scheduler. (The ForkJoinPool.commonPool.)
Loom, I believe, "dummies out" the ForkJoinPool.commonPool — ForkJoinTasks/CompletableFutures/etc. by default just execute on Loom's unbounded virtual-thread executor.
(Which happens to be built on top of a ForkJoinPool, because it's a good scheduler. Don't fix what ain't broke.)
Project Loom's scope explicitly encompasses more than virtual threads. As part of that, the concept of structured concurrency[1] was introduced. There /are/ going to be new APIs.
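For example, the structured-concurrency work adds `StructuredTaskScope`. A hedged sketch, since the API is still in preview and its shape may change between JDK releases; `fetchUser`/`fetchOrder` are hypothetical helpers:

```java
import java.util.concurrent.StructuredTaskScope;

// Sketch only: StructuredTaskScope is a preview API (JEP 453 and
// successors) and may change before finalization.
class StructuredExample {
    static String fetchUser()  { return "user";  } // hypothetical helper
    static String fetchOrder() { return "order"; } // hypothetical helper

    public static void main(String[] args) throws Exception {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            var user  = scope.fork(StructuredExample::fetchUser);
            var order = scope.fork(StructuredExample::fetchOrder);
            scope.join().throwIfFailed(); // both subtasks end inside this scope
            System.out.println(user.get() + " / " + order.get());
        }
    }
}
```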