Hacker News
Node.js, redis, and resque (pgrs.net)
28 points by pgr0ss on Feb 28, 2010 | 5 comments



That's a pretty awesome trick. I've been playing around with Node.js for proxying a bit as well (see http://github.com/simonw/dogproxy ) and I've run into the same problem as this snippet: writing a full-featured HTTP proxy is a bunch of work that I don't really want to do just for my little experiments. If anyone's looking for a neat project, writing a spec-compliant HTTP proxy library for Node.js that makes it easy to plug in additional functionality (like load balancing, rate limiting, caching, etc.) would be incredibly useful.


I thought of this idea the other day and got stuck on how best to push messages to hanging clients in a generic way, instead of the simple requestNumber request-response correlation used here. I ended up making it app-specific, either generating a unique client ID (a UUID) per hanging connection or using the username of the logged-in human behind the hanging client.

Anyway, this is a nice architecture indeed.

However, as a pure proxy, I would think nginx would be more appropriate, given its (basic but swappable) load balancing, rate limiting (via a third-party module), caching (memcached, redis), and sheer volume of production use.

In fact, there are the beginnings of an nginx upstream-based redis module that supports redis' GET, which could be made to RPUSH the request and BLPOP the result.

FWIW, I'm the author of redis-node-client. It should be noted that Promises were removed in favor of continuations in the latest Node.js on GitHub, and I've updated redis-node-client accordingly. Thus, the sample code in the article is outdated by two days or so.

Using LPOP in a polling loop to wait for the response is not a great idea. It would be better to use BLPOP ("blocking left pop"), which blocks the client connection (think "long polling") until there's something in the given list to pop. It doesn't waste resources, and results come back without waiting out the polling interval (up to 100ms here). I haven't added BLPOP to redis-node-client yet, but it should be simple to patch in.
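To illustrate the difference without a live redis server, here is an in-memory stand-in for a redis list (my own sketch, not redis-node-client's API): rpush either hands the item straight to a blocked popper or appends it, and blpop parks the caller until data arrives instead of retrying on a timer the way an LPOP polling loop must.

```javascript
// In-memory model of a redis list, to contrast LPOP-polling with BLPOP.
class Queue {
  constructor() {
    this.items = [];   // list contents
    this.waiters = []; // callers blocked in "BLPOP"
  }

  // RPUSH: wake one blocked popper if any, otherwise append.
  rpush(item) {
    const waiter = this.waiters.shift();
    if (waiter) return waiter(item);
    this.items.push(item);
  }

  // BLPOP: resolve immediately if data is ready; otherwise park the
  // caller until rpush wakes it -- no timers, no wasted round trips,
  // and zero added latency when the result arrives.
  blpop() {
    if (this.items.length) return Promise.resolve(this.items.shift());
    return new Promise(resolve => this.waiters.push(resolve));
  }
}
```

With LPOP polling every 100ms, a result that arrives 1ms after a poll sits for ~99ms; with the blocking pop above, it is delivered the moment rpush runs.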

To scale this beyond a single frontend, popFromQueue could potentially put the response back via LPUSH when the queuedRes[requestNumber] is nil... "Oops, this wasn't for me, let me put that back." Or, it could/should use something more formal.
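A sketch of that "put it back" check (my own illustrative code; queuedRes and requestNumber come from the article, but handlePopped and requeue are hypothetical names, and note the caveat that a naive LPUSH-back can ping-pong between frontends, hence the "something more formal"):

```javascript
// Handle a [requestNumber, body] pair popped from a shared response queue.
// queuedRes maps THIS frontend's pending request numbers to response objects.
function handlePopped(queuedRes, requeue, popped) {
  const [requestNumber, body] = popped;
  const res = queuedRes[requestNumber];
  if (!res) {
    // "Oops, this wasn't for me, let me put that back."
    requeue(popped); // e.g. LPUSH back onto the response list
    return false;
  }
  delete queuedRes[requestNumber];
  res.end(body); // assumed: an http.ServerResponse-like object
  return true;
}
```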

Why does the Ruby worker shell out to redis-cli instead of using Ezra's client library?

Finally, "The previous spike..." What does "spike" mean here?


Thanks for writing redis-node-client!

For pure proxies, there are definitely more appropriate tools. However, this gave me a chance to play with Node.js, and I may develop it into a more full-featured proxy.

I haven't seen the new continuations stuff in node. I will have to check it out.

BLPOP looks great. I will check it out.

I've thought a little about scaling to multiple frontends. I thought multiple response queues might be the easiest. Along with a request number, the node server could tell the backend which response queue to use.
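One way to sketch that: each frontend owns a single response list and names it in every job it enqueues, so the worker knows where to RPUSH the result. All the names below (FRONTEND_ID, replyTo, makeJob) are illustrative, not from the article:

```javascript
// Per-frontend response queues: the frontend stamps each job with its own
// reply queue, so any number of frontends can share one request queue.
const FRONTEND_ID = 'frontend-1'; // unique per node process
const RESPONSE_QUEUE = `responses:${FRONTEND_ID}`;

function makeJob(requestNumber, path) {
  // Worker side would do roughly: result = handle(job); RPUSH job.replyTo result
  return JSON.stringify({ requestNumber, path, replyTo: RESPONSE_QUEUE });
}
```

Since each frontend only BLPOPs its own list, nothing it pops can belong to another frontend, which sidesteps the "put it back" problem entirely.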

I shell out to redis-cli purely for ease of development. I was already familiar with the CLI, so I used it. I will definitely switch to a real redis library if I continue developing this.

And finally, I used the term spike to mean a non-production-quality programming exercise to prove out the solution at a high level. See http://c2.com/xp/SpikeSolution.html for more details.


Very interesting, but it's far from having the features required for an actual web server. GET-only is very limiting, for obvious reasons, and I assume that this system of using a redis queue wouldn't work for file uploads without a lot of hackery? Or am I missing something?


You could still handle file uploads and other complex requests traditionally, and only use a queuing solution for the most important traffic. Then, at least part of the site would stay up during upgrades even if corners of the site went down.



