Hacker News

Indeed, there is no free lunch.

As it stands, this is only really workable for low traffic (so it doesn't eat memory) where connections don't come and go frequently (so it doesn't eat process-management CPU).

Once you start doing multiplexing for the sake of making this more reasonable in terms of resource usage, the simplicity benefits kind of fall away as you move closer to a full concurrent web framework.

I guess it really depends what you're tuning for, what your use case is, and how much hardware budget you have to throw at the problem.

CGI fell out of favor for this reason, but WebSockets have a different runtime profile: instead of having to deal with 10K short-lived requests per second, WebSocket endpoints have far fewer, longer-lived connections. This is why the CGI model actually works well for WebSockets.
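For context, this is roughly what the CGI-style model looks like from the handler's side: the server (e.g. websocketd) spawns one process per connection and bridges stdin/stdout to the socket, one message per line. A minimal sketch in Python (the echo behavior and function name are mine, not from any particular tool):

```python
#!/usr/bin/env python3
import sys

def handle(line: str) -> str:
    # One incoming WebSocket message arrives per line on stdin;
    # one reply goes out per line on stdout.
    return "echo: " + line.rstrip("\n")

if __name__ == "__main__":
    for line in sys.stdin:
        print(handle(line), flush=True)  # flush so the bridge sees it immediately
```

The handler itself is just sequential code; all the connection juggling lives in the parent server.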

BTW, there is a VM for Dart that is experimenting with different concurrent modes to provide an alternative to async programming: https://github.com/dart-lang/fletch

You can read its short wiki for some clues: https://github.com/dart-lang/fletch/wiki/Processes-and-Isola...

I like Fletch's idea very much. Imagine not having to worry about Async all the time.

Not sure how everything is implemented in Fletch, but I think I heard that Fletch can multiplex many processes onto one thread if need be. And they have been trying hard to save memory while implementing those features.

If you want to run some tests to compare with, I created some small samples using different Dart implementations and NodeJS here: https://github.com/jpedrosa/arpoador/tree/master/direct_test...

Fletch also supports a kind of Coroutine: https://github.com/dart-lang/fletch/wiki/Coroutines-and-Thre...

> Imagine not having to worry about Async all the time.

I'm nitpicking, because Fletch truly sounds very cool indeed, but when I use Elixir, Erlang, or Go, I never worry about async either. From that wiki page, I can't really see what the difference with the Erlang VM is.

(that's a good thing, the Erlang VM is awesome, and being able to write server code on an Erlang-like VM and still share code with the browser sounds like the thing that could make me adopt Dart)

About how many connections on an average machine with 8GB RAM would be deemed 'OK' with this tool?

I was just playing around with this tonight using PHP, and each process was about 5MB of RAM. I'd imagine if you wrote your server code in, say, C instead, the memory footprint would be much smaller.

There's also a limit to the number of processes allowed. On my OSX laptop with 16GB RAM, for example, the default limit is 709 (kinda strange number?). The command

ulimit -a

will tell you the value of "max user processes" for your machine.
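On systems where Python's resource module exposes RLIMIT_NPROC (Linux and OSX both do), you can read the same limit programmatically; a small sketch:

```python
import resource

# Soft/hard caps on processes for this user -- the soft value is what
# `ulimit -a` reports as "max user processes".
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print(f"max user processes: soft={soft}, hard={hard}")
```

The soft limit can be raised up to the hard limit with `resource.setrlimit` (or `ulimit -u`), which matters if each connection costs you a process.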

FYI I built simple count and greeter versions in Nim and they used around 350KB each. Some napkin math theorizes that's over 10k concurrents on a 4GB VPS for, say, a simple chat service backed by redis. I'm not sure how well websocketd will hold up at that point though...
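The napkin math above, spelled out (assuming the 350k per-process figure and ignoring OS/redis overhead):

```python
# Hypothetical capacity estimate: RAM budget divided by per-connection footprint.
PER_PROC_KB = 350          # measured for the Nim handlers above
VPS_RAM_KB = 4 * 1024**2   # 4GB VPS

max_conns = VPS_RAM_KB // PER_PROC_KB
print(max_conns)           # roughly 12k, so "over 10k" checks out
```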

One process per connection could mean easier scaling too: e.g. round-robin new connections across multiple app servers backed by a beefy shared redis. I've never really understood how best to scale WebSocket services, but this could make it easier.
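A trivial sketch of that idea, with a hypothetical list of app servers and plain round-robin assignment (the server names are made up; shared state like chat history would live in the redis behind them):

```python
from itertools import cycle

# Each new WebSocket connection gets pointed at the next app server in turn.
servers = cycle(["app1:8080", "app2:8080", "app3:8080"])

def next_server() -> str:
    return next(servers)
```

Since each connection is an independent process talking to shared redis, no app server needs to know about the others.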

Thanks for that experiment!

The problem here of course is that a CGI-like approach does a fork plus execve for each request, and the execve throws away most of the benefit of memory sharing.

If you have a simple forking socket-based server, on Linux (I assume OS X is no different) the amount of extra memory per process is much lower, because fork uses copy-on-write pages and each child is largely the same process.
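A minimal sketch of such a forking server in Python, to illustrate: each child is a fork of the parent, so its pages start out shared copy-on-write rather than freshly allocated (the port and greeting are arbitrary):

```python
import os
import socket

def serve(host="127.0.0.1", port=9000):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(64)
    while True:
        conn, _ = srv.accept()
        if os.fork() == 0:          # child: shares the parent's pages COW
            srv.close()             # child doesn't need the listening socket
            conn.sendall(b"hello\n")
            conn.close()
            os._exit(0)             # exit without falling back into the loop
        conn.close()                # parent closes its copy of the connection
```

Only pages the child actually writes to get their own copies; the code, interpreter, and loaded libraries stay shared, which is why the per-child cost is so much lower than a fork+execve.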

That must be an OSX oddity. I just checked my Ubuntu laptop with 3GB RAM: 23967 processes, and a Debian server that I happened to be logged in to, with 0.5GB: 127118 processes.

Of course, with 3GB you could only get 600 connections at 5MB a pop.
