As it stands, this is only really workable for low-traffic servers (so it doesn't eat memory) where connections do not come and go frequently (so it doesn't eat process-management CPU).
Once you start doing multiplexing for the sake of making this more reasonable in terms of resource usage, the simplicity benefits kind of fall away as you move closer to a full concurrent web framework.
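To make the trade-off concrete, here's a minimal sketch (my own illustration, not from any of the projects mentioned) of the process-per-connection model in Python: every accepted connection costs a fork, which is exactly the memory and process-management overhead in question.

```python
import os
import socket

def serve_forever(listener):
    """Accept loop: fork one child per connection (the simple model above)."""
    while True:
        # Reap any finished children so they don't linger as zombies.
        try:
            while os.waitpid(-1, os.WNOHANG)[0] > 0:
                pass
        except ChildProcessError:
            pass  # no children yet
        conn, _ = listener.accept()
        pid = os.fork()
        if pid == 0:                    # child: handle exactly one connection
            listener.close()
            data = conn.recv(1024)
            conn.sendall(data)          # trivial echo "handler"
            conn.close()
            os._exit(0)
        conn.close()                    # parent: the child owns this socket now
```

Each connection is fully isolated, which is the simplicity win; the cost is one OS process per client, which is why the approach only holds up at low connection churn.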
I guess it really depends what you're tuning for, what your use case is, and how much hardware budget you have to throw at the problem.
You can read its short wiki for some clues: https://github.com/dart-lang/fletch/wiki/Processes-and-Isola...
I like Fletch's idea very much. Imagine not having to worry about async all the time.
I'm not sure how everything is implemented in Fletch, but I believe it can multiplex many processes onto a single thread if need be, and they have been trying hard to keep memory usage low while implementing those features.
If you want to run some tests to compare with, I created some small samples using different Dart implementations and NodeJS here: https://github.com/jpedrosa/arpoador/tree/master/direct_test...
Fletch also supports a kind of Coroutine: https://github.com/dart-lang/fletch/wiki/Coroutines-and-Thre...
I'm nitpicking, because Fletch truly sounds very cool, but when I use Elixir, Erlang, or Go, I never worry about async either. From that wiki page, I can't really see how it differs from the Erlang VM.
(that's a good thing, the Erlang VM is awesome, and being able to write server code on an Erlang-like VM and still share code with the browser sounds like the thing that could make me adopt Dart)
There's also a limit on the number of processes allowed. On my OS X laptop with 16 GB of RAM, for example, the default limit is 709 (an oddly specific number). The command `ulimit -u`
will tell you the value of "max user processes" for your machine.
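For reference, the same limit can also be read programmatically; a small sketch (assuming a Unix Python, where `RLIMIT_NPROC` is exposed):

```python
import resource

# Soft/hard limits on the number of processes for this user;
# the soft limit is the value `ulimit -u` reports.
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("max user processes:", soft)
```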
Per-process connections could mean easier scaling too, e.g. round-robin balancing of connections across multiple app servers backed by a beefy shared Redis instance. I've never really understood how best to scale websocket services, but this could make it easier.
If you have a simple forking socket-based server, then on Linux (I assume OS X is no different) the amount of memory per process is much lower than the raw numbers suggest, because fork uses copy-on-write pages and the children largely share the parent's memory.
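A quick sketch of why that's cheap (my own illustration, not from the comment above): the child below inherits a 50 MB buffer at fork time, but the kernel only copies the pages the child actually writes. (Caveat: CPython's reference counting dirties some pages on its own, so the sharing is not perfect in Python.)

```python
import os

big = bytearray(50 * 1024 * 1024)   # 50 MB allocated before the fork

pid = os.fork()
if pid == 0:
    # Child: reading shared pages does not copy them...
    checksum = big[0] + big[-1]
    big[0] = 1                       # ...but this write faults in one private page
    os._exit(0 if checksum == 0 else 1)

_, status = os.waitpid(pid, 0)
print("child exit code:", os.waitstatus_to_exitcode(status))
```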