Looking at the article, he's not implementing `Task` with `Thread`; he's round-robinning `Task`s through a simple `ThreadPool`. So instead of a single `Thread` making continuous progress on the work in the event loop, he has a set of `Thread`s making progress _in parallel_ on work in the event loop. This is very much Java 21's approach to virtual threads (as well as that of in-language task runners like the kind you find in Scala libraries such as ZIO, Monix, Cats, and the venerable Scalaz).
How is that materially different from just turning async invocations into thread forks and awaits into joins? I understand what the code is doing, I just don't understand what the point is, when the net effect seems the same as just writing threaded code.
The difference is that you can spin up only so many OS threads, but you can run several orders of magnitude more "green threads" / "tasks" like this, round-robinned onto the system threads that make up your event-loop executor. The key thing to understand is that `await` doesn't block the backing thread; it simply suspends the current task, and the backing thread moves on to picking the next ready task off the queue and running it to its next await point.
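To make that concrete, here's a minimal sketch (not the article's code) of a single "backing thread" round-robinning many tasks. Each task is a Python generator, and `yield` plays the role of an await point: it suspends the task without blocking the thread, which then picks up the next ready task from the queue. The names `task` and `run` are hypothetical.

```python
from collections import deque

# Hypothetical sketch: `yield` stands in for an await point.
def task(name, steps):
    for i in range(steps):
        # Do one slice of work, then suspend so the loop can run another task.
        yield f"{name} step {i}"

def run(tasks):
    ready = deque(tasks)           # queue of ready tasks
    trace = []
    while ready:
        t = ready.popleft()        # pick up the next ready task
        try:
            trace.append(next(t))  # run it to its next await point
            ready.append(t)        # not finished yet: requeue it
        except StopIteration:
            pass                   # task finished; drop it
    return trace

print(run([task("A", 2), task("B", 2)]))
# → ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

One thread, two tasks, and the work interleaves at every suspension point. Replace the single loop with M worker threads pulling from the same queue and you get the parallel version described above.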
If I understand correctly, it sounds like the idea is to map N tasks to M threads.
I suppose it’d only really be useful if you have more tasks than you can have OS threads (due to the memory overhead of an OS thread); then maybe 10,000 tasks can run on 16 OS threads.
If that’s the case, then is this useful in any application other than when you have way too many threads to feasibly make each task an OS thread?
The idea is to map N tasks to M threads. This is useful for more than just the case where you need more threads than the OS can spin up. As you scale up the number of threads, you increase context-switching and CPU-scheduling overhead; being able to schedule a large number of tasks with a small number of threads can reduce this overhead.
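A quick way to see the N:M (here N:1) mapping in a real runtime is Python's asyncio, where thousands of tasks share one event-loop thread because `await` suspends the task rather than the thread. This is just an illustrative sketch; `worker` and `main` are hypothetical names.

```python
import asyncio
import threading

async def worker(i):
    # Suspends this task, not the backing thread; the loop runs other tasks.
    await asyncio.sleep(0.01)
    return threading.current_thread().name

async def main():
    # 1,000 concurrent tasks, all multiplexed onto one event-loop thread.
    names = await asyncio.gather(*(worker(i) for i in range(1_000)))
    return set(names)

print(asyncio.run(main()))  # a single thread name for all 1,000 tasks
```

Spinning up 1,000 OS threads for the same job would work, but each carries stack memory and scheduler bookkeeping; the task-based version pays neither cost at this scale.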