
> The scenario you are describing is one where 64-128 OS threads are fully blocked waiting for IO. If that's the case, is it likely that you will have additional unused IO resources that could be utilized?

One likely scenario is that you've issued 128 RPCs to some other services and are waiting to hear back. Even if each RPC is, say, on a separate TCP connection, your network stack can handle plenty more.
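
For a concrete picture, here's a minimal sketch of that thread-per-RPC pattern: 128 threads, each parked in a blocking recv(). The socketpair() is a stand-in I chose for real TCP connections to a remote service, so read it as an illustration rather than a benchmark.

  #include <pthread.h>
  #include <stdio.h>
  #include <sys/socket.h>
  #include <unistd.h>

  #define N 128

  static void *wait_for_reply(void *arg) {
      int fd = *(int *)arg;
      char buf[4096];
      // Blocks in the kernel until the "reply" arrives; the thread's
      // stack and kernel bookkeeping stay allocated the whole time.
      recv(fd, buf, sizeof buf, 0);
      return NULL;
  }

  int main(void) {
      pthread_t tid[N];
      static int fds[N][2];

      for (int i = 0; i < N; i++) {
          socketpair(AF_UNIX, SOCK_STREAM, 0, fds[i]);
          pthread_create(&tid[i], NULL, wait_for_reply, &fds[i][0]);
      }
      sleep(1);                         // all 128 threads are now parked
      for (int i = 0; i < N; i++) {
          send(fds[i][1], "ok", 2, 0);  // "reply" arrives, thread unblocks
          pthread_join(tid[i], NULL);
      }
      puts("all replies received");
      return 0;
  }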

> Also, what overhead do you see as the main limit on spawning a lot of threads? Is it the CPU time of context switching? If so, CPU is not the bottleneck in this scenario, and switching between processes will cost almost nothing, especially on a 32-64 core CPU.

I don't remember which specific aspect of OS threads contributes the most overhead, and maybe someone else can answer this better. But context switching burdens both the CPU (saving and restoring register state, cache and TLB churn) and RAM (each parked thread keeps its stack allocated).
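
One number that's cheap to check yourself is the per-thread stack reservation. A minimal sketch, assuming a POSIX system (on glibc/Linux this typically prints 8192 KiB):

  #include <pthread.h>
  #include <stdio.h>

  int main(void) {
      pthread_attr_t attr;
      size_t stack_size;

      // Ask what stack a newly created thread would get by default.
      pthread_attr_init(&attr);
      pthread_attr_getstacksize(&attr, &stack_size);
      printf("default thread stack: %zu KiB\n", stack_size / 1024);
      pthread_attr_destroy(&attr);
      return 0;
  }

Most of that is virtual address space faulted in lazily; the pages a thread actually touches, plus the kernel's per-thread bookkeeping, are the real per-thread RAM cost.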

> This is a genuine question

I always assume this anyway. Maybe not on Reddit ;)




Thanks for the reply. I'm still having a hard time seeing why "turning up the number of threads" doesn't solve this. Maybe for languages with JIT runtimes, where each process occupies a larger chunk of memory, that could be a problem. But then I'd expect virtual memory to step in, because, as you say, most of those processes are doing nothing.
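
If it helps the research: a quick way to see both effects at once is to spawn a few thousand idle threads and read the process's own memory accounting. A sketch, assuming Linux (it reads /proc) and default stack sizes; the thread count is an arbitrary pick:

  #include <pthread.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  #define N 3000

  static void *idle_thread(void *arg) {
      (void)arg;
      pause();  // block forever; this thread only occupies resources
      return NULL;
  }

  int main(void) {
      pthread_t tid;  // handles discarded; fine for a one-shot sketch
      for (int i = 0; i < N; i++)
          pthread_create(&tid, NULL, idle_thread, NULL);

      // Print our own memory accounting (Linux-specific).
      char line[256];
      FILE *f = fopen("/proc/self/status", "r");
      if (!f)
          return 1;
      while (fgets(line, sizeof line, f))
          if (!strncmp(line, "VmSize", 6) || !strncmp(line, "VmRSS", 5))
              fputs(line, stdout);
      fclose(f);
      return 0;
  }

VmSize typically balloons by roughly one stack reservation per thread, while VmRSS grows far less; that's the virtual-memory effect you're describing, and the per-thread kernel state is what remains unavoidable.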

I think I'm going to do some research and see what benchmarks/measurements I can find.


I can't explain exactly why, but the overhead of one OS thread is much greater than the overhead of one task on an event loop; it's worth researching if you're interested. It's also worth looking into how kernel IO resources would cope with 3000 threads making calls all at once; the network stack, for example, has a queue. A while ago I ran a test of how many minimum-size UDP packets I could send per second from a multithreaded process.
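
For reference, that kind of test looks roughly like the sketch below: a few threads blasting zero-payload UDP datagrams at localhost and counting sends per second. The thread count, port, and duration here are arbitrary picks of mine, not the parameters of the original test.

  #include <pthread.h>
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <stdatomic.h>
  #include <stdio.h>
  #include <sys/socket.h>
  #include <unistd.h>

  #define THREADS 8
  #define SECONDS 5

  static atomic_long sent;
  static atomic_int running = 1;

  static void *blast(void *arg) {
      (void)arg;
      int fd = socket(AF_INET, SOCK_DGRAM, 0);
      struct sockaddr_in dst = {0};
      dst.sin_family = AF_INET;
      dst.sin_port = htons(9999);  // arbitrary sink port
      inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

      // Zero-byte payload = minimum-size UDP datagram.
      while (running)
          if (sendto(fd, "", 0, 0, (struct sockaddr *)&dst, sizeof dst) == 0)
              sent++;
      close(fd);
      return NULL;
  }

  int main(void) {
      pthread_t tid[THREADS];
      for (int i = 0; i < THREADS; i++)
          pthread_create(&tid[i], NULL, blast, NULL);
      sleep(SECONDS);
      running = 0;
      for (int i = 0; i < THREADS; i++)
          pthread_join(tid[i], NULL);
      printf("%.0f packets/sec\n", (double)sent / SECONDS);
      return 0;
  }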

I can at least say that paging to/from disk, or using memory compression, would dominate all the other overheads, so it's not something to rely on here.



