Hacker News

The difference with CGI is that CGI programs live and die with each user request. websocketd programs are more like "one per user session"... Makes sense when your user sessions are lengthy.
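To make the model concrete, here is a hedged sketch of a websocketd-style handler. Per websocketd's docs, it launches one copy of your program per WebSocket connection and shuttles messages as lines over stdin/stdout; the process lives for the whole session. The `handle` function and its message format are illustrative, not part of websocketd itself.

```python
import sys

def handle(stream_in, stream_out):
    count = 0
    for line in stream_in:
        count += 1
        # Session state like `count` survives between messages --
        # unlike CGI, where the process exits after a single request.
        stream_out.write(f"message {count}: {line.strip()}\n")
        stream_out.flush()

# Under websocketd this would be invoked as: handle(sys.stdin, sys.stdout)
```

The key contrast: a CGI program would see one request and exit, so any per-user state has to live elsewhere; here the process itself *is* the session.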

Yes — that’s exactly the problem. Handling ten thousand concurrent users with CGI is easy, even on 2000-era hardware. Ten thousand concurrent users might be four requests per second. But ten thousand concurrent users using websocketd means you have ten thousand processes. And if you’re doing some kind of pub/sub thing, every once in a while, all of those processes will go into the run queue at once, because there’s a message for all of them. Have you ever administered a server with a load average over 1000?
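The arithmetic behind that claim can be sketched out; the 100 ms request duration below is an assumed figure for illustration, not from the comment:

```python
# "Ten thousand concurrent users might be four requests per second":
# that implies each user acts only once every ~42 minutes on average.
concurrent_users = 10_000
requests_per_second = 4
seconds_between_requests = concurrent_users / requests_per_second  # 2500 s

# CGI's process count tracks requests *in flight*. Assuming each
# request takes 100 ms to serve (illustrative assumption):
request_duration_s = 0.1
avg_cgi_processes = requests_per_second * request_duration_s  # ~0.4

# websocketd's process count tracks *connected users* instead.
websocketd_processes = concurrent_users  # 10,000 long-lived processes
```

So the same workload is a fraction of one live process under CGI, but ten thousand under a process-per-connection model, and a broadcast can make all ten thousand runnable at once.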

Still, the O(1) scheduler work in current Linux might make that kind of thing survivable.

Yes, I often have 10K-plus processes running on a production server. It's caused trouble at times due to misbehaving processes, but mostly it's been OK. Linux is surprisingly good at this (it wasn't always the case).

For the times when some of my processes were misbehaving, it was easy to identify the culprits with "ps" or "top" and resolve things with "nice" or "kill". That killed the bad connections without bringing the rest of the app down. Sysadmins like this.
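A small sketch of that per-connection cleanup in Python (the sleeping children stand in for per-connection handlers; which one counts as "misbehaving" is arbitrary here):

```python
import os
import signal
import subprocess
import sys

# Spawn a few stand-ins for per-connection handler processes.
children = [
    subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
    for _ in range(3)
]

# Pretend the middle one is misbehaving (in practice you'd spot it in
# `ps` or `top` by its CPU or memory use) and kill just that one.
bad = children[1]
os.kill(bad.pid, signal.SIGTERM)
bad.wait()

# The other sessions keep running, untouched.
survivors = [p for p in children if p.poll() is None]
print(len(survivors))  # 2

# Clean up.
for p in survivors:
    p.terminate()
    p.wait()
```

This is the operational upside of one process per connection: the kernel's ordinary process tools are your per-connection management tools.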

Have you been waking all 10K of them up at once? Handling 10,000 sleeping processes is not so surprising.
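That distinction shows up directly in the process state Linux reports. A Linux-only sketch (the child count and sleep durations are arbitrary):

```python
import subprocess
import sys
import time

# Spawn a handful of stand-ins for idle per-connection processes.
children = [
    subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
    for _ in range(5)
]
time.sleep(1)  # let them reach their sleep

# On Linux, the third field of /proc/<pid>/stat is the process state:
# 'S' is interruptible sleep (costs the scheduler nothing), 'R' is
# runnable. The trouble starts when a broadcast flips thousands of
# handlers from 'S' to 'R' at the same instant.
states = []
for p in children:
    with open(f"/proc/{p.pid}/stat") as f:
        states.append(f.read().split()[2])
print(states)

for p in children:
    p.terminate()
    p.wait()
```

Ten thousand processes in 'S' are just memory; load average only climbs with the count of processes in 'R' (or uninterruptible 'D').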

rapind did a nice experiment in the comments above.
