
> Each inbound WebSocket connection runs your program in a dedicated process. Connections are isolated by process.

That sounds bad; it is like “CGI, twenty years later”, as they say. In 2000 at KnowNow, we were able to support over ten thousand concurrent Comet connections using a hacked-up version of thttpd, on a 1GHz CPU with 1GiB of RAM. I’ll be surprised if you can support ten thousand Comet connections using WebSockets and websocketd even on a modern machine, say, with a quad-core 3GHz CPU and 32GiB of RAM.

Why would you want ten thousand concurrent connections? Well, normal non-Comet HTTP is pretty amazingly lightweight on the server side, because REST is stateless: the server only does work during the brief moment of each request, not while the user reads the result. Taking an extreme example, this HN discussion page takes 5 requests to load, which takes about a second, but much of that is network latency; call it ½s of time on the server side. But the page contains 7000 words to read, which takes about 2048 seconds at a typical reading speed. So a single process or thread on the server can handle about 4096 concurrent HN readers, and a relatively normal machine can handle hundreds of thousands of concurrent users without breaking a sweat.
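To make the arithmetic explicit, here is the same back-of-the-envelope calculation in a few lines of Python (all numbers are the rough estimates above, not measurements):

    # Back-of-the-envelope numbers from the paragraph above: rough
    # estimates, not measurements.
    server_time_per_pageview = 0.5     # seconds of actual server-side work
    reading_time_per_pageview = 2048   # seconds to read ~7000 words

    # A server process is busy for only ½s of each reader's 2048s visit,
    # so the number of concurrent readers one process can sustain is:
    readers_per_process = reading_time_per_pageview / server_time_per_pageview
    print(readers_per_process)  # 4096.0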

On the other hand, Linux has gotten a lot better since 2000 at managing large numbers of runnable processes and doing things like fork and exit. httpdito (http://canonical.org/~kragen/sw/dev3/server.s) can handle tens of thousands of hits per second on a single machine nowadays, even though each hit forks a new child process (which then exits). http://canonical.org/~kragen/sw/dev3/httpdito-readme has more performance notes.
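For illustration, the fork-per-hit pattern looks roughly like this (a Python sketch of the pattern only; httpdito itself is written in assembly, and this is not its code):

    import os
    import signal
    import socket

    signal.signal(signal.SIGCHLD, signal.SIG_IGN)  # auto-reap exited children

    RESPONSE = (b"HTTP/1.0 200 OK\r\n"
                b"Content-Type: text/plain\r\n"
                b"\r\n"
                b"hello\n")

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("", 8080))
    server.listen(128)

    while True:
        conn, _ = server.accept()
        if os.fork() == 0:       # child: serve exactly one hit, then exit
            server.close()
            conn.recv(4096)      # read (and ignore) the request
            conn.sendall(RESPONSE)
            conn.close()
            os._exit(0)
        conn.close()             # parent: drop its copy of the socket and loop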

On the gripping hand, httpdito’s virtual memory size is at most 16KiB, so Linux may be able to handle httpdito processes better than ordinary-sized processes.
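If you want to check that comparison yourself, one way (Linux-specific; my own sketch) is to read VmSize from /proc, the same figure "ps -o vsz" reports:

    import os

    # VmSize in /proc/<pid>/status is the process's virtual memory size
    # in KiB, the same figure "ps -o vsz" shows.
    def vm_size_kib(pid):
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("VmSize:"):
                    return int(line.split()[1])

    # Even an idle CPython interpreter weighs in at tens of thousands of
    # KiB, versus httpdito's 16KiB.
    print(vm_size_kib(os.getpid()))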




The difference with CGI is that CGI processes live and die with each user request. websocketd programs are more like "one per user session", which makes sense when your user sessions are lengthy.
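For concreteness, here's a minimal sketch of such a per-session program in Python. The stdin/stdout line protocol is websocketd's documented convention (each incoming message is a line on stdin; each line printed to stdout goes back as a message); the echo behavior and filename are made up for illustration.

    #!/usr/bin/env python3
    # websocketd starts one copy of this program per WebSocket connection.
    # Each incoming message arrives as a line on stdin; each line printed
    # to stdout is sent back to the client as a message. The process lives
    # for the whole session and dies when the socket closes (stdin EOF).
    import sys

    for line in sys.stdin:                 # one iteration per message
        msg = line.rstrip("\n")
        print("echo: " + msg, flush=True)  # flush so the reply goes out now

Saved as echo.py, it would be served with something like: websocketd --port=8080 ./echo.py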


Yes — that’s exactly the problem. Handling ten thousand concurrent users with CGI is easy, even on 2000-era hardware. Ten thousand concurrent users might be four requests per second. But ten thousand concurrent users using websocketd means you have ten thousand processes. And if you’re doing some kind of pub/sub thing, every once in a while, all of those processes will go into the run queue at once, because there’s a message for all of them. Have you ever administered a server with a load average over 1000?
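If you want to see that effect, here's a rough experiment (my own sketch; N and the workload are made up, so scale to taste and watch the load average) that makes all N subscriber processes runnable at the same instant:

    import os
    import time

    N = 1000  # try larger values on a machine with memory to spare
    writers = []

    for _ in range(N):
        r, w = os.pipe()
        if os.fork() == 0:   # child: a "subscriber"
            os.close(w)
            os.read(r, 1)    # sleep until the broadcast arrives
            os._exit(0)
        os.close(r)
        writers.append(w)

    time.sleep(1)            # let things settle; load average near zero
    start = time.time()
    for w in writers:        # "publish": all N children wake at once
        os.write(w, b"x")
    for _ in range(N):
        os.wait()            # reap them all
    print(f"woke and reaped {N} children in {time.time() - start:.2f}s")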

Still, the O(1) scheduler work in current Linux might make that kind of thing survivable.


Yes, I often have 10K-plus processes running on a production server. It's caused trouble at times due to misbehaving processes, but mostly it's been OK. Linux is surprisingly good at this (it wasn't always the case).

For the times when some of my processes were misbehaving, it was easy to identify which ones with "ps", "top", etc., and to resolve the problem with "nice" or "kill". This killed the bad connections without bringing the rest of the app down. Sysadmins like me.


Have you been waking up all 10K of them at a time? Handling 10,000 sleeping processes is not so surprising.


rapind did a nice experiment in the comments above.



