Comet with Bayeux: node.js vs Jetty and cometd (praxx.is)
27 points by MartinMond 1882 days ago | 6 comments



The source code of the benchmark programs seems to be missing.

Based on my experience with comet, node.js scales really well and uses a lot less memory than Java-based solutions (I have rewritten a system from JBoss Netty, and our node.js solution uses around 10x less memory and a little more CPU, and this is with 100,000+ active connections). I have also looked at Jetty's comet solution and was generally unimpressed by its memory usage, and memory usage is really critical for comet applications [source: http://amix.dk/blog/post/19490#Plurk-Instant-conversations-u... ].

-----


The test runs for only 12s. It should run for at least a couple of minutes, and several times, with clear control over what is going on on the server at the same time.

You do not even know the memory load, what else was running on the server at the time, etc. It is like throwing one blue die and one yellow one: you get a six on the first and a three on the second, and conclude from your not-terribly-scientific benchmark that blue dice roll higher.

Recommended reading for the author: http://www.amazon.com/Cartoon-Guide-Statistics-Larry-Gonick/...

Not a joke: this is a really good book about the testing side of statistics.

-----


All benchmarks I've ever seen between tornado/node.js/jetty and other "comet" servers have been, in the author's own words, "not-terribly-scientific benchmarks".

Is there anyone who has actually made scientific benchmarks that one could draw conclusions from?

-----


JCoglan's faye (Bayeux for node.js) has some built-in "intervals", i.e. timers, which regulate connection reuse and how messages get bundled together into bulk transmissions. This can cause an apparent increase in latency; I've run into this problem myself.

By changing the interval and reconnection settings one can reduce the latency, but there are side effects: the faye client connection seems to become less stable.
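For reference, these are (I believe) the knobs in question; the option names below follow faye's documented API, but the values are purely illustrative, and the localhost URL is an assumption:

```javascript
var faye = require('faye');

// Server side: `timeout` is how long (in seconds) the server holds a
// Bayeux /meta/connect request open before replying with no messages.
var bayeux = new faye.NodeAdapter({ mount: '/faye', timeout: 45 });

// Client side: `timeout` should exceed the server's, and `retry` is the
// delay (in seconds) before the client reconnects after a dropped
// connection. Lowering `retry` shrinks message gaps but, as noted above,
// aggressive values can make the connection feel less stable.
var client = new faye.Client('http://localhost:8000/faye', {
  timeout: 60,
  retry: 2
});
```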

I've poked around in the faye codebase myself; it's non-trivial to pull apart. I've not yet had success in figuring out what could be changed to make the library better as a near-real-time messaging transport for high-bandwidth, very-high-message-count use cases.

The author seems to be a very busy man, but if enough interest was shown, JCoglan might be willing to work on optimizing it (and documenting it) further.

In a local-network setting, in various tests, I've had at times 40 Mbps, with message rates in excess of 10,000 requests per second, flowing through a single faye process. The latencies were on the order of 10-100 ms, and this was sustained over a period of hours.

-----


It'd be interesting to throw some Erlang into those benchmarks.

-----


I'd like to see this again except:

- Way more than 12 seconds long.

- Something more in the range of 90,000 clients.

- With source code.

-----




