That's a little disingenuous. You've selected a very specific set of frameworks, whereas most of the users here are probably thinking "great! I can run my Rails stack with no problems! Heroku engineer said so!"
Note to readers: the only thing that "fixes" this is request-handling code that is asynchronous in a way that doesn't tie up a process while connections sit idle. Most of the common web frameworks don't do this, because the coding required to make a fully asynchronous stack is nasty. Even apps written in nominally asynchronous frameworks (like node.js) can be in trouble if the request path is pathological (e.g. the websocket handler periodically makes long-running, blocking database queries).
That said, most of you will never encounter this problem, because it's the sort of problem that's "nice to have" -- by the time concurrency issues become a limit, your app will be popular.
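To make the pathological case above concrete, here's a minimal sketch in TypeScript, assuming the `ws` npm package; the synchronous busy-wait is a stand-in for a long-running, blocking database query:

    // Assumes the `ws` npm package; the busy-wait below is a stand-in for
    // any blocking call (a synchronous DB driver, execSync, heavy CPU work).
    import { WebSocketServer } from "ws";

    const wss = new WebSocketServer({ port: 8080 });

    // Simulate a blocking query: nothing else on this event loop can run
    // until the deadline passes.
    function blockingQuery(ms: number): void {
      const deadline = Date.now() + ms;
      while (Date.now() < deadline) { /* busy-wait */ }
    }

    wss.on("connection", (socket) => {
      socket.on("message", () => {
        // The stack is nominally asynchronous, but this handler pins the
        // event loop for two seconds; every other websocket and request
        // served by this process stalls until it returns.
        blockingQuery(2000);
        socket.send("done");
      });
    });

The point being that a nominally asynchronous framework doesn't save you if any handler in the request path blocks.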
"It's safe to assume that anyone who hopes to leverage websockets will not be using a blocking application architecture."
No, it isn't. I'll wager that right this very second, there's someone out there incorporating websockets into their Heroku-based Rails app and not thinking about (or understanding) the consequences.
I don't think memory waste is the problem in this case; a websocket is a long-lived connection. If you mix websockets with regular requests and don't think about the concurrency consequences, you'll serve one request, accept one websocket connection, and you're done: all other connections will sit pending until the websocket is closed.
bgentry's comment didn't seem disingenuous to me at all, and I was surprised by sync's question. WebSocket connections are long lived, so if your framework only supports one (or a few) concurrent connections, you're gonna have a bad time.
Heroku's past routing problems with certain low-concurrency frameworks/servers don't apply to WebSockets, because you'd be crazy to use such a framework for WebSockets.
"if your framework only supports one (or a few) concurrent connection you're gonna have a bad time."
Rails only supports one concurrent connection per process (by default...for good reasons), and a great many people use it at scale, including on Heroku. Asynchronous stacks are becoming more common, but they're still exotic in terms of deployment -- and most of what gets deployed on them probably isn't written very well.
I'm specifically talking about WebSockets. Do you really want to run one process for every client connected to your WebSocket server? The answer is no. Even one (OS) thread per connection can get unwieldy.
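For contrast, a rough sketch (assuming the same `ws` npm package as above) of the event-loop model: one process holds every idle connection, because an idle websocket is just a watched file descriptor plus a small object, not a parked process or OS thread.

    // Rough sketch: a single event-loop process holding all idle connections.
    // Assumes the `ws` npm package; the logging interval is just for illustration.
    import { WebSocketServer } from "ws";

    const wss = new WebSocketServer({ port: 8080 });
    let open = 0;

    wss.on("connection", (socket) => {
      open += 1;
      socket.on("close", () => { open -= 1; });
      // Each idle connection costs a file descriptor and a small object,
      // not a dedicated process or thread. Tens of thousands of mostly
      // idle clients can sit here, as long as no handler blocks.
    });

    setInterval(() => console.log(`${open} connections held by one process`), 10000);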
And I think lots of people would disagree that async stacks are still "exotic" or "not written very well".
"Do you really want to run one process for every client to connected to your WebSocket server? The answer is no. Even one (OS) thread per connection can get unwieldy."
Yes, no kidding. But people will still try to do this with frameworks that don't support anything else (like Rails), because that's the shortest path to a working product.
"And I think lots of people would disagree that async stacks are still "exotic" or "not written very well"."
Well, those "lots of people" can disagree all they want, but they're wrong. The problem isn't that the frameworks are badly written, necessarily -- it's all the stuff in the stack, including the app-specific logic. Virtually no one knows how to write asynchronous web apps of any complexity. It's a very hard problem.