Based on my experience, node.js comet scales really well and uses a lot less memory than Java-based solutions. I rewrote a system from JBoss Netty, and our node.js solution uses around 10x less memory with only a little more CPU, and this is on 100,000+ active connections. I have also looked at Jetty's comet solution and was generally unimpressed by its memory usage, and memory usage is really critical for comet applications [source: http://amix.dk/blog/post/19490#Plurk-Instant-conversations-u... ].
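Per-connection memory claims like these are easiest to sanity-check against node's own counters. A minimal sketch using the built-in process.memoryUsage(); the helper name and the idea of dividing RSS by connection count are mine, not from the post above:

```javascript
// Sketch: estimate memory cost per connection from node's built-in counters.
// memoryPerConnection is a hypothetical helper, not part of any library.
function memoryPerConnection(activeConnections) {
  const { rss, heapUsed } = process.memoryUsage(); // both in bytes
  return {
    rssBytes: rss,
    heapUsedBytes: heapUsed,
    // Rough per-connection figure; only meaningful once the server is
    // holding a large, steady number of open sockets.
    rssPerConnection: activeConnections > 0 ? rss / activeConnections : 0,
  };
}

console.log(memoryPerConnection(100000));
```

Comparing this figure between two servers under the same connection count is a much fairer basis for a "10x less memory" claim than eyeballing top.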
You do not even know the memory load, what else was running on the server at the same time, etc. It is like throwing one blue die and one yellow one: you get a six on the first and a three on the second, and conclude from this not terribly scientific benchmark that blue dice roll higher.
Recommended reading for the author:
Not a joke: this is a really good book about the testing side of statistics.
Is there anyone who has actually made scientific benchmarks that one could draw conclusions from?
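Short of a full scientific benchmark, the dice analogy can at least be made concrete: before concluding anything, look at the spread across repeated runs rather than a single number. A minimal sketch with plain stats helpers (no benchmarking library assumed; the sample values are made up):

```javascript
// Sketch: mean and sample standard deviation over repeated benchmark runs.
// If the two systems' (mean +/- stddev) intervals overlap heavily, a single
// run tells you roughly as much as one dice throw.
function mean(samples) {
  return samples.reduce((a, b) => a + b, 0) / samples.length;
}

function stddev(samples) {
  const m = mean(samples);
  const variance =
    samples.reduce((acc, x) => acc + (x - m) ** 2, 0) / (samples.length - 1);
  return Math.sqrt(variance);
}

const runsA = [102, 98, 110, 95, 101]; // e.g. requests/sec, system A
const runsB = [104, 99, 97, 108, 100]; // e.g. requests/sec, system B
console.log(mean(runsA), stddev(runsA));
console.log(mean(runsB), stddev(runsB));
```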
By changing the interval and reconnection settings one can reduce latency, but there are side effects, e.g. the faye client connection seems to become less stable.
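For reference, these knobs are passed to faye's client constructor. A sketch, assuming faye's documented `timeout` and `retry` client options; the server URL is a placeholder and the values are illustrative, not a recommendation:

```javascript
// Sketch, assuming faye's documented Client options (timeout, retry).
// The URL is a placeholder.
var faye = require('faye');

var client = new faye.Client('http://example.com/faye', {
  timeout: 45, // seconds to wait before treating the connection as dead
  retry: 2     // seconds to wait before reconnecting after a drop
});

client.subscribe('/messages', function (message) {
  console.log('got', message);
});
```

Shrinking `retry` makes reconnects snappier but, as noted above, seems to make the connection less stable under load.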
I've poked around in the faye codebase myself; it's non-trivial to pull apart. I haven't yet figured out what could be changed to make the library better at serving as a near-real-time messaging transport for high-bandwidth, very-high-message-count use cases.
The author seems to be a very busy man, but if enough interest was shown, JCoglan might be willing to work on optimizing it (and documenting it) further.
In a local network setting, across various tests, I've seen 40 Mbps at times, with message rates in excess of 10,000 requests per second flowing through a single faye process. The latencies were on the order of 10-100 ms, and this was sustained over a period of hours.
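One simple way to reproduce that kind of latency measurement is to embed a send timestamp in each message and compare on receipt. A sketch of the bookkeeping side; the helper names are mine (not a faye API), and `message.sentAt` is assumed to be set by the publisher:

```javascript
// Sketch: round-trip latency bookkeeping for a pub/sub style test.
// onMessageReceived / percentile are hypothetical helpers, not a faye API.
const samples = [];

function onMessageReceived(message) {
  // Assumes the publisher put Date.now() into message.sentAt.
  samples.push(Date.now() - message.sentAt);
}

// Nearest-rank percentile: p in (0, 100].
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

onMessageReceived({ sentAt: Date.now() - 42 }); // simulate one ~42 ms round trip
console.log('p50 latency (ms):', percentile(samples, 50));
```

Reporting p50/p99 over a long run says much more than a single "10-100 ms" range.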
- Way more than 12 seconds long.
- Something more in the range of 90,000 clients.
- With source code.