Socket.io benchmarking - how many messages can you send per second? (github.com)
65 points by drew 1782 days ago | 11 comments

Socket.IO author here. You might also be pleased to hear that our focus for the next few releases is heavily on performance and on making load balancing trivial.

We recently made the Socket.IO parser 300% faster. The new underlying websocket server is extremely fast (check the benchmarks at https://github.com/learnboost/websocket.io/), and optimizations to long-polling are coming as well.
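If you want to kick the tires on websocket.io itself, basic usage looks roughly like this (from memory, so double-check the README for the exact API):

    var ws = require('websocket.io')
      , server = ws.listen(3000);

    server.on('connection', function (socket) {
      // echo every message straight back to the client
      socket.on('message', function (data) {
        socket.send(data);
      });
      socket.on('close', function () {});
    });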

Very cool! Excited to see what comes next.

Is there anything those of us looking forward to these features should know for planning purposes? For example, I was about to start on some load balancing work soon. Is that a waste of time, i.e. should I just focus on the core logic and assume load balancing will be made easier in the near future? Or should I roll my own solution for now and plan to transition to something better-performing later?

Will there be a 'websocket.io-client', a high-performance server-side WebSocket client module, in the near future?

I noticed `websocket.io` depends on a module called `easy-websocket`, which is registered on npm with the description "plain and simple websocket client" but is nowhere to be found on GitHub. Is `websocket.io` what we're supposed to be using as the client?

I've been using websocket.io and I must say, refactoring the websocket server code out of socket.io was a great decision. There weren't really any clean, low-level websocket implementations for node that still worked after recent versions of Chrome and Firefox. Websocket.io is really clean, small, and works in all browsers.

Socket.io has a lot of bloat that isn't required if all you need is simple WebSocket communication. Please do not add long-polling to websocket.io; it's the wrong place for that kind of abstraction.

Brilliant - didn't know websocket.io had been broken out. I was patiently waiting for @miksago to update https://github.com/miksago/node-websocket-server, but it had become clear this was a very low priority for him (fair enough).

> There weren't really any clean, low-level websocket implementations for node that still worked after recent versions of Chrome and Firefox.

Not true. I continue to use https://github.com/Worlize/WebSocket-Node with great success.
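For reference, here's roughly how I wire it up (trimmed down, with no origin or protocol checks, so don't copy it verbatim into production):

    var http = require('http');
    var WebSocketServer = require('websocket').server;

    var httpServer = http.createServer(function (req, res) {
      res.writeHead(404);
      res.end();
    });
    httpServer.listen(8080);

    var wsServer = new WebSocketServer({ httpServer: httpServer });

    wsServer.on('request', function (request) {
      var connection = request.accept(null, request.origin);
      connection.on('message', function (message) {
        if (message.type === 'utf8') {
          connection.sendUTF(message.utf8Data); // simple echo
        }
      });
    });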

Nice initial analysis! I would love to see this run using one of the popular node clustering modules with multiple CPU cores, and also the other socket.io connection types.
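The built-in cluster module would probably be the simplest way to try that; something like the sketch below, where require('./server') is just a placeholder for whatever the benchmark's server entry point is:

    var cluster = require('cluster');
    var os = require('os');

    if (cluster.isMaster) {
      // fork one worker per CPU core
      os.cpus().forEach(function () { cluster.fork(); });
      cluster.on('exit', function (worker) {
        console.log('worker ' + worker.process.pid + ' died, forking a new one');
        cluster.fork();
      });
    } else {
      require('./server'); // placeholder for the benchmark's socket.io server
    }

Worth noting that, as I understand it, running multiple socket.io processes also means a shared store (e.g. the Redis store) plus sticky sessions for the non-websocket transports, which is exactly the load-balancing pain mentioned above.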

9k messages/second on a single 3.3GHz VM with 1,000 connections seems quite manageable and isn't out of line with other systems I've seen that are written in Java, .NET, or C++. 100ms is an eternity to wait for a response, however. That's concerning, especially since it's the case even at low message rates. I wonder if that is a function of how the client is acknowledging the received data, rather than the actual time the server is taking to send it back.

What was the limiting factor on the server for this test? I've seen instances where the VM network drivers get in the way in this scenario. Or you could be saturating the network bandwidth to the VM if it's a 100mbit interface. Or maybe it's simply CPU-limited in user time by Socket.IO...

This is obviously not super precise, but the CPU pegs (load average 1.0, 100% usage) once we get above ~9k messages/second. I don't know how much per-message overhead socket.io adds in terms of bandwidth, but some back-of-the-envelope math makes it seem like we're well below network bandwidth on a 100mbit interface. My instinct is that (per rauchg's comment above) if those kinds of improvements can be made, we're not yet butting up against a bandwidth limit.
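To put rough numbers on the back-of-the-envelope part (the per-message wire size is a guess, since the benchmark sends small payloads and socket.io adds its own framing):

    // assume ~200 bytes on the wire per message (payload + socket.io framing + TCP/IP headers)
    var msgsPerSec  = 9000;
    var bytesPerMsg = 200;                                 // rough guess
    var mbitUsed    = msgsPerSec * bytesPerMsg * 8 / 1e6;  // ~14.4 Mbit/s
    console.log(mbitUsed + ' of 100 Mbit/s');              // well under the interface limit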

As for comparing to other systems, I think we're certainly at the same order of magnitude, but some rough tests a friend of mine ran made it seem like a Java server/client pair could send about 4 times as many packets per second. That's a somewhat unfair comparison, though: socket.io offers a lot more than raw packet transmission, and that additional abstraction is going to come at a performance cost. Still, being on the same order of magnitude as a really low-level approach like that is a good place to be at this stage in the technology's maturity.

When I say it compares in performance to other systems, I mean other high-level, more-than-a-socket systems like XMPP servers, which I spent a lot of years benchmarking and optimizing. Of course, to-the-metal binary packet systems will be much faster, since there you're doing the buffer management yourself and choosing your copies wisely.

It would be super-awesome to see a profile of node.js when you run this test, to see where the CPU is going. Is it something low-level like string parsing, buffer copying, or socket writing, or something higher-level like an inefficient algorithm somewhere? As I read below that socket.io is focusing on performance now, I guess we'll know soon. :)
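One relatively low-effort way to get that picture is V8's built-in sampling profiler. On newer node versions it's roughly the following (server.js standing in for the benchmark entry point; older builds need the tick-processor scripts from the V8 source tree instead of --prof-process):

    node --prof server.js
    # generate load, then stop the server; a V8 tick log is written
    # (isolate-*-v8.log, or just v8.log on older versions)
    node --prof-process isolate-*-v8.log > profile.txt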

I'd love to know too, but I haven't had much luck getting profiling to work super well with node. I think I mostly just don't know what I'm doing, and someone who did could answer this pretty quickly, but I haven't been particularly successful at figuring out exactly how that CPU time is being spent.
