Linux is fine with a few hundred thousand threads too. I didn't mention it because running that many OS threads, or green threads like goroutines, is significantly suboptimal. If your scalability needs are that high, the other inefficiencies in Go, such as the limited compiler optimizations and the GC, will likely dominate long before the threading model does. We usually talk about high scalability as C10K; C100K is an extreme case.
> "Async IO" in itself doesn’t even offer a means of exploiting multiple CPU cores.
Sure it does. Multithreaded epoll is the standard solution for such things; that's how nginx is multithreaded, for example. Node isn't multithreaded, but that's a Node problem, not an async I/O problem.
> then memory limitations are not going to prevent you from running 100,000 Goroutines.
100,000 Linux kernel threads with 8kB kernel stack + 2kB user stack is only 1GB, which is likewise tractable.
And that 2kB assumes that the goroutine stacks remain small, which is not always the case. They can and will grow based on the dynamic behavior of the program.
You point out yourself that Linux kernel threads use more memory. For this and other reasons (e.g. slower context switches) you cannot spawn as many Linux kernel threads as you can spawn Goroutines.
The rest of your comment is just a repeated insistence that "Go doesn't scale" without a shred of evidence in support. You really seem to have some kind of beef with Go that's clouding your judgment in this area.
>Multithreaded epoll is the standard solution for such things
Multithreaded epoll is exactly what Go provides. Rather than having to manage thread pools yourself, the Go runtime does it for you by distributing logical threads of execution over OS threads.
>We usually talk about high scalability as C10K; C100K is an extreme case.
You seem to be assuming one Goroutine per connection. As I said before, a more plausible scenario for 100,000 goroutines is C10K with 10 goroutines per connection.