
Unless you only need to handle a very small number of concurrent connections, using one process per connection seems like a lot of overhead, although I can think of some use cases.

However, I can imagine a similar tool doing multiplexing/demultiplexing, e.g. the handler process would take input text lines prepended with a connection identifier (e.g. "$socketID $message") and produce output in the same format. Pretty much like websocketd, but with multiplexing and unixy/pipe-friendly (e.g. you can apply grep/awk/etc. before and after).
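To make the idea concrete, here's a rough sketch of what such a demultiplexing handler could look like (Go chosen arbitrarily; the "$socketID $message" framing and the broadcast-to-everyone behavior are just assumptions for illustration):

    // Hypothetical mux-aware handler: reads "$socketID $message" lines on
    // stdin and writes "$socketID $reply" lines on stdout. This one simply
    // broadcasts each incoming message to every connection seen so far.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        seen := map[string]bool{}
        in := bufio.NewScanner(os.Stdin)
        out := bufio.NewWriter(os.Stdout)
        for in.Scan() {
            parts := strings.SplitN(in.Text(), " ", 2)
            if len(parts) != 2 {
                continue
            }
            id, msg := parts[0], parts[1]
            seen[id] = true
            for other := range seen {
                fmt.Fprintf(out, "%s %s\n", other, msg)
            }
            out.Flush()
        }
    }

Since it's all plain lines, you could still drop grep/awk in front of or behind it, as suggested above.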

How would this fit compared to websocketd?




Indeed, there is no free lunch.

As it stands, this is only really workable for low traffic (so it doesn't eat memory) where connections do not come and go frequently (so it doesn't eat CPU on process management).

Once you start doing multiplexing for the sake of making this more reasonable in terms of resource usage, the simplicity benefits kind of fall away as you move closer to a full concurrent web framework.

I guess it really depends on what you're tuning for, what your use case is, and how much hardware budget you have to throw at the problem.


CGI fell out of favor for this reason, but WebSockets have a different runtime profile: instead of having to deal with 10K short-lived requests per second, WebSocket endpoints have far fewer, but much longer-lived, connections. This is why the CGI model actually works well for WebSockets.


BTW, there is a VM for Dart that is experimenting with different concurrency models to provide an alternative to async programming: https://github.com/dart-lang/fletch

You can read its short wiki for some clues: https://github.com/dart-lang/fletch/wiki/Processes-and-Isola...

I like Fletch's idea very much. Imagine not having to worry about Async all the time.

Not sure how everything is implemented in Fletch, but I think I heard that Fletch can share one thread among many processes if need be. And they have been trying hard to save memory while implementing those features.

If you want to run some tests to compare with, I created some small samples using different Dart implementations and NodeJS here: https://github.com/jpedrosa/arpoador/tree/master/direct_test...

Fletch also supports a kind of Coroutine: https://github.com/dart-lang/fletch/wiki/Coroutines-and-Thre...


> Imagine not having to worry about Async all the time.

I'm nitpicking, because Fletch truly sounds very cool indeed, but when I use Elixir, Erlang, or Go, I never worry about async either. From that wiki page, I can't really see what the difference with the Erlang VM is.

(that's a good thing, the Erlang VM is awesome, and being able to write server code on an Erlang-like VM and still share code with the browser sounds like the thing that could make me adopt Dart)


About how many connections, on an average machine with 8 GB of RAM, would be deemed 'OK' with this tool?


I was just playing around with this tonight using PHP, and each process was about 5 MB of RAM. I'd imagine if you wrote your server code in, say, C instead, the memory footprint would be much smaller.
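For a sense of scale, a minimal handler in a compiled language is only a few lines: websocketd's contract is that each incoming WebSocket message arrives as a line on stdin and each line written to stdout goes back to the client. Here's a greeter sketch in Go (Go rather than C only to keep the example short):

    // Minimal websocketd-style handler: one line in per message, one line out per reply.
    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        scanner := bufio.NewScanner(os.Stdin)
        for scanner.Scan() {
            fmt.Printf("Hello, %s!\n", scanner.Text())
        }
    }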

There's also a limit to the number of processes allowed. On my OS X laptop with 16 GB of RAM, for example, the default limit is 709 (kind of a strange number?). The command

ulimit -a

will tell you the value of "max user processes" for your machine.
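If you bump into that ceiling, you can usually raise it for the current shell session (up to the hard limit and any OS caps), e.g.:

ulimit -u 2048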


FYI, I built simple count and greeter versions in Nim and they used around 350 KB each. Some napkin math suggests that's over 10k concurrent connections on a 4 GB VPS for, say, a simple chat service backed by Redis. I'm not sure how well websocketd itself will hold up at that point, though...
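(Rough check on that napkin math: 10,000 connections × ~350 KB ≈ 3.4 GB, so it just about fits in 4 GB before you count the OS, websocketd itself, and the Redis connections.)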

One process per connection could mean easier scaling too, e.g. round-robin connections balanced across multiple app servers, backed by a beefy shared Redis. I've never really understood how best to scale WebSocket services, but this could make it easier.
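One possible shape of that idea, sketched at the TCP level since a WebSocket is just an upgraded TCP connection after the handshake (the backend addresses here are made up): a dumb per-connection round-robin proxy, with all shared state living in Redis rather than in the balancer.

    // Hypothetical connection-level round-robin balancer in front of several
    // websocketd instances; fine for long-lived connections because each one
    // stays pinned to whichever backend it was handed to.
    package main

    import (
        "io"
        "log"
        "net"
    )

    var backends = []string{"10.0.0.1:8080", "10.0.0.2:8080"} // assumed addresses

    func main() {
        ln, err := net.Listen("tcp", ":80")
        if err != nil {
            log.Fatal(err)
        }
        for i := 0; ; i++ {
            client, err := ln.Accept()
            if err != nil {
                continue
            }
            go proxy(client, backends[i%len(backends)])
        }
    }

    func proxy(client net.Conn, addr string) {
        defer client.Close()
        server, err := net.Dial("tcp", addr)
        if err != nil {
            return
        }
        defer server.Close()
        go io.Copy(server, client)
        io.Copy(client, server)
    }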


Thanks for that experiment!


The problem here, of course, is that a CGI-like approach does a fork plus execve for each request, which means you don't get much benefit from memory sharing.

If you have a simple forking socket-based server on Linux (I assume OS X is no different), the amount of memory per process is much lower, because the kernel uses copy-on-write pages for forks and it's largely the same process.


That must be an OS X oddity. I just checked my Ubuntu laptop with 3 GB of RAM: 23967 processes, and a Debian server that I happened to be logged in to, with 0.5 GB: 127118 processes.


Of course, with 3 GB you could only get about 600 connections at 5 MB a pop.


There is a closed ticket (github.com/joewalnes/websocket) where we were battling this out; you're always welcome to join in with good ideas, that would be appreciated.


It's just so easy to write a WebSocket server these days in Go and other languages that this might not be worth the trouble. I can only see something like this being useful in odd-ball use cases, none of which really come to mind right now, but I'm sure I'd recognize one when faced with it.
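For reference, a bare-bones echo server with the gorilla/websocket package looks roughly like this (one of several packages you could pick):

    // Minimal WebSocket echo server using github.com/gorilla/websocket.
    package main

    import (
        "log"
        "net/http"

        "github.com/gorilla/websocket"
    )

    var upgrader = websocket.Upgrader{}

    func echo(w http.ResponseWriter, r *http.Request) {
        conn, err := upgrader.Upgrade(w, r, nil)
        if err != nil {
            log.Println(err)
            return
        }
        defer conn.Close()
        for {
            mt, msg, err := conn.ReadMessage()
            if err != nil {
                return
            }
            if err := conn.WriteMessage(mt, msg); err != nil {
                return
            }
        }
    }

    func main() {
        http.HandleFunc("/ws", echo)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }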


Sounds basically like the difference between CGI and FastCGI.



