Hacker News

This would scale very badly, correct? If you have to hijack the connection for each client and then spend a while on a task (waiting for it to complete, etc.), then a whole web server thread is going to sit there waiting.

That depends on the application server's concurrency model. If you have enough threads, it won't be a problem. Threads may not have the "web scale to 100,000 users!!!" reputation, but you can still get very far with them on most workloads.
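As a concrete illustration of the thread-based answer: with Rack's full-hijack API (`rack.hijack` / `rack.hijack_io` from the Rack SPEC), the slow work can be handed to its own thread, freeing the server thread immediately. A minimal sketch; the app name and the simulated work are invented for illustration:

```ruby
require 'socket'

# Sketch: a Rack app that performs a full hijack and hands the raw
# socket to a worker thread, so the calling server thread returns
# right away. (slow_app is an illustrative name, not part of Rack.)
def slow_app(env)
  env['rack.hijack'].call           # take over the socket (Rack SPEC)
  io = env['rack.hijack_io']        # the raw client IO, set by the hijack
  Thread.new do
    # ...long-running work would happen here, off the server thread...
    io.write "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"
    io.close
  end
  [200, {}, []]                     # ignored after a full hijack
end
```

The important property is that only the worker thread blocks on the slow task; the server's request-handling thread is released as soon as the app returns.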

On evented application servers, you can integrate with the event loop API.

And, I'm not 100% sure whether the spec allows this, but it appears that on application servers that are not evented, you can even offload the socket to a thread running an event loop. Phusion Passenger's implementation, for example, allows this.
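A sketch of that offloading idea (illustrative only, not Passenger's actual code): a single background thread runs an `IO.select` loop over every hijacked socket, so idle connections tie up no server threads. The class and method names are invented:

```ruby
require 'socket'

# One background thread multiplexes all hijacked sockets via select().
# Server threads hand sockets over with #adopt and return immediately.
class HijackReactor
  def initialize
    @clients = []
    @lock = Mutex.new
    @wake_r, @wake_w = IO.pipe      # used to interrupt a blocked select
    @thread = Thread.new { run }
  end

  # Called from a server thread right after env['rack.hijack'].call
  def adopt(io)
    @lock.synchronize { @clients << io }
    @wake_w.write('.')              # make select pick up the new socket
  end

  private

  def run
    loop do
      watched = @lock.synchronize { @clients.dup } << @wake_r
      ready, = IO.select(watched)
      ready.each do |io|
        if io == @wake_r
          @wake_r.read_nonblock(64) rescue nil
          next
        end
        begin
          io.read_nonblock(4096)    # a real server would parse input here
        rescue EOFError, Errno::ECONNRESET
          @lock.synchronize { @clients.delete(io) }
          io.close                  # peer went away: reap the socket
        end
      end
    end
  end
end
```

With this shape, the number of concurrent connections is bounded by file descriptors and memory rather than by thread count.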

How it scales is an interesting question.

Could it, for example, replace a COMET server that maintains 100 long-polling connections? 1,000? 10,000?

Could it also be hacked into performing a WebSocket upgrade (and subsequently maintaining a large number of concurrent WebSocket connections)?

Yes and yes, but it depends on your web server.
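As a sketch of the WebSocket half: after hijacking the socket, a server completes the RFC 6455 opening handshake by deriving the `Sec-WebSocket-Accept` token from the client's `Sec-WebSocket-Key` and a GUID fixed by the spec. The helper names here are invented; the GUID and the recipe come from the RFC:

```ruby
require 'digest/sha1'

# Fixed GUID from RFC 6455: appended to the client's key before hashing.
WS_GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11'

# The accept token is Base64(SHA-1(client_key + GUID)).
def websocket_accept(client_key)
  Digest::SHA1.base64digest(client_key + WS_GUID)
end

# The 101 response you would write to the hijacked socket before
# switching to WebSocket framing.
def upgrade_response(client_key)
  "HTTP/1.1 101 Switching Protocols\r\n" \
    "Upgrade: websocket\r\n" \
    "Connection: Upgrade\r\n" \
    "Sec-WebSocket-Accept: #{websocket_accept(client_key)}\r\n\r\n"
end
```

After writing this response, the socket carries WebSocket frames for its lifetime, so you would typically hand it to an event loop like the one sketched above.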

For example, while this API allows you to implement the WebSocket protocol, Nginx's input buffering will interfere with it.
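When the app sits behind Nginx as a reverse proxy, the usual workaround is to disable buffering and forward the Upgrade headers for the affected location. A typical snippet; the upstream name `app_backend` and the path are placeholders:

```nginx
location /ws/ {
    proxy_pass http://app_backend;      # hypothetical upstream name
    proxy_http_version 1.1;             # Upgrade requires HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_buffering off;                # don't buffer the long-lived response
    proxy_request_buffering off;        # pass the request body through unbuffered
}
```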
