That sounds bad; it is like “CGI, twenty years later”, as they say. In 2000 at KnowNow, we were able to support over ten thousand concurrent Comet connections using a hacked-up version of thttpd, on a 1GHz CPU with 1GiB of RAM. I’ll be surprised if you can support ten thousand Comet connections using WebSockets and websocketd even on a modern machine, say, with a quad-core 3GHz CPU and 32GiB of RAM.
Why would you want ten thousand concurrent connections? Well, normal non-Comet HTTP is pretty amazingly lightweight on the server side, due to REST: each request is stateless and over in a fraction of a second. Taking an extreme example, this HN discussion page takes 5 requests to load, which takes about a second, but much of that is network latency; the server itself spends maybe ½s on it. But the page contains 7000 words to read, which at a couple hundred words a minute takes about 2000 seconds. At ½s of server time per 2000 seconds of reading, a single process or thread on the server can handle about 4000 concurrent HN readers, so a relatively normal machine can handle hundreds of thousands of concurrent users without breaking a sweat.
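If you want to poke at that arithmetic, here it is as a tiny C program; the 200-words-per-minute reading speed and the ½s of server time are my rough assumptions, not measurements:

    /* Back-of-the-envelope estimate of concurrent readers per server thread. */
    #include <stdio.h>

    int main(void) {
        double words = 7000;      /* words on the page */
        double wpm = 200;         /* assumed reading speed */
        double reading_s = words / wpm * 60;  /* about 2000 seconds of reading */
        double server_s = 0.5;    /* assumed server time per page load */
        printf("reading time: %.0f s\n", reading_s);
        printf("readers per server thread: %.0f\n", reading_s / server_s);
        return 0;
    }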
On the other hand, Linux has gotten a lot better since 2000 at managing large numbers of runnable processes and at doing things like fork and exit quickly. httpdito (http://canonical.org/~kragen/sw/dev3/server.s) can handle tens of thousands of hits per second on a single machine nowadays, even though each hit forks a new child process (which then exits). http://canonical.org/~kragen/sw/dev3/httpdito-readme has more performance notes.
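To make "each hit forks a new child process" concrete, here's a minimal sketch of the process-per-connection pattern in C. httpdito itself is hand-written assembly and actually serves files, so this isn't its code, just the shape of the thing, with error handling omitted:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <signal.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        signal(SIGCHLD, SIG_IGN);          /* let the kernel reap exited children */
        int s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(8080);
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(s, (struct sockaddr *)&addr, sizeof addr);
        listen(s, 128);
        for (;;) {
            int c = accept(s, 0, 0);
            if (c < 0) continue;
            if (fork() == 0) {             /* one short-lived child per hit */
                char req[4096];
                read(c, req, sizeof req);  /* read (and ignore) the request */
                const char *resp = "HTTP/1.0 200 OK\r\n"
                                   "Content-Type: text/plain\r\n\r\nhello\n";
                write(c, resp, strlen(resp));
                _exit(0);                  /* child exits right after the hit */
            }
            close(c);                      /* parent loops back to accept() */
        }
    }

The per-hit cost is essentially one fork, one tiny address space, and one exit, which is why the scheduler and memory points below matter.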
On the gripping hand, httpdito’s virtual memory size tops out around 16KiB, so Linux may be able to handle httpdito processes better than it handles regular-sized processes.
Still, the O(1) scheduler work in current Linux might make that kind of thing survivable even with regular-sized processes.
When some of my processes were misbehaving, it was easy to identify which ones with "ps", "top", etc., and to resolve the problem with "nice" or "kill". This killed the bad connections without bringing the rest of the app down. Sysadmins like me.