1600% faster app requests with Rails on Heroku (coderwall.com)
60 points by friism 1320 days ago | 14 comments

How would this compare to using JRuby and threads?

If you use EventMachine, every single network call you make has to be evented. So you'd need to use things like https://github.com/leftbee/em-postgresql-adapter, which aren't going to be as well tested as the standard pg driver.

Only for requests you'd like to have return async, as in the example given; I use EventMachine to make async posts mixed with regular blocking I/O all the time.
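
For example, a minimal sketch of mixing the two (assuming the em-http-request and pg gems; the URL and table here are made up):

    require 'eventmachine'
    require 'em-http-request'
    require 'pg'

    EventMachine.run do
      conn = PG.connect(dbname: 'myapp') # the ordinary blocking pg driver

      # Evented, non-blocking POST via em-http-request.
      http = EventMachine::HttpRequest.new('http://example.com/hook')
                                      .post(body: { event: 'signup' })
      http.callback do
        # A plain blocking query. The reactor stalls for its duration,
        # which is fine as long as the query is fast and reliable.
        conn.exec_params('INSERT INTO deliveries (status) VALUES ($1)', ['ok'])
        EventMachine.stop
      end
      http.errback { EventMachine.stop }
    end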

yep, or even MRI with threads.

EventMachine requires fundamental changes to your code.

Threads do not.

And even with MRI, you will, I predict, see a _significant_ performance improvement using an app server that can dispatch multi-threaded (say, Puma) with config.threadsafe!.

I am confused why threads aren't getting more attention on this topic.

Seriously, I just recently switched to Puma and enabled config.threadsafe!, and that was that. Now I/O calls like HTTP requests don't block the server, but are still synchronous. I didn't even need to switch to JRuby.
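
For anyone curious, the whole change is roughly this (a sketch assuming Rails 3.x, where threadsafe! is not yet the default):

    # config/environments/production.rb -- in Rails 4 this behavior
    # becomes the default and the flag is deprecated.
    MyApp::Application.configure do
      config.threadsafe! # allow concurrent, multi-threaded dispatch
    end

plus a Procfile entry along the lines of "web: bundle exec puma -t 8:16 -p $PORT" so Puma serves requests from a thread pool.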

> I am confused why threads aren't getting more attention on this topic.

I think it's because of the "threads are hard" meme. I think the Ruby community is growing beyond that, but it's not a fast process.

Ironically, doing evented/fiber code right is probably harder than doing threads right, for this kind of stuff.

I'm a bit astounded that Heroku, in their attempt to deal with, um, let's call it "routing-gate", aren't talking about multi-threaded dispatch and config.threadsafe!, but only about Unicorn with 2-4 forked processes, when it seems awfully likely that multi-threaded dispatch is going to scale a lot more efficiently with the number of overlapping requests.

I think some of it is the lack of mature, robust, 'self-managing' app server solutions. For MRI (with the GIL), what's likely needed is something that can fork multiple processes (to use all cores), with each of those processes dispatching multi-threaded (to deal with I/O blocking, as well as evening out latency when not all requests finish in identical time). So far as I know, Passenger 4 Enterprise is the only thing that can do this for you without you having to manually set it all up.

As a best practice, you're right. IO should be either 100% evented or 100% blocking.

But in this case, as long as you expect your database requests to be fast and reliable, it's fine to mix in the standard blocking pg driver.

Pardon me, but how is this better than queuing a job and checking the status of that job instead? That's my standard MO when dealing with 3rd-party API calls from a request.

How do you check the status of the job? Do you hold the client connection open and block the process while you wait? Or do you notify the client through some other mechanism (polling, WebSockets, etc.)?

The advantage of evented I/O is that you don't have to do either of these things.

The way I do it now is to tell the client to check back in X seconds, depending on the ratio of busy workers to available workers (sketched after this comment).

I already have a client-side wrapper for this on iOS and am working on a Batman.js version.

I usually do this for client-side login with different providers.

Say Facebook or Twitter: log in on the client, obtain a token, and send it to the server for validation. The server then validates it against Facebook/Twitter.

The server tells the client to check back in X seconds. The client waits X seconds and checks again; the server is either done or not.

I'd rather do that than keep a request open. It's easier to manage on iOS as well, since the user might, say, decide to check their email while the login is still processing.
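
Concretely, the check-back pattern might look like this on the server (a hedged sketch; TokenValidationJob, WorkerPool, and the route names are hypothetical):

    class SessionsController < ApplicationController
      # Kick off validation against the provider and tell the client
      # when to poll again.
      def create
        job = TokenValidationJob.enqueue(provider: params[:provider],
                                         token: params[:token])
        render json: { job_id: job.id, retry_in: suggested_wait },
               status: :accepted
      end

      # Hit by the client after waiting retry_in seconds.
      def status
        job = TokenValidationJob.find(params[:job_id])
        if job.done?
          render json: { state: 'done', session: job.result }
        else
          render json: { state: 'pending', retry_in: suggested_wait }
        end
      end

      private

      # Scale the suggested wait with queue pressure: the busier the
      # workers, the longer the client is told to wait.
      def suggested_wait
        busy  = WorkerPool.busy_count
        total = WorkerPool.size
        [1, (10.0 * busy / total).ceil].max
      end
    end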

Evented I/O sounds miraculous!

This is not a performance improvement; it's an architectural change that has to be accounted for when using Unicorn. Unicorn is NOT for slow requests, as its own documentation explains. (http://unicorn.bogomips.org/)

On Kaya.gs, I built a queue to handle third-party requests, which has the advantages of being light on the environment and having a small footprint.

The title is misleading as well, since this has nothing to do with Heroku specifically.

I use this technique on Heroku for my oauth requests and it works well. One downside is that your responses have to interface directly with Rack, and you lose out on all the functionality in Rails and the middleware stack.

To account for this, I created a controller mixin to recreate the middleware stack for responses:
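
A rough sketch of the underlying async plumbing (hedged: AsyncResponder and respond_async are made-up names, it leans on thin's async.callback convention, and the actual mixin that recreates the middleware stack would do more):

    # Sketch only: defer the Rails response, then push a raw Rack
    # triple down the still-open connection via thin's async support.
    module AsyncResponder
      def respond_async
        env = request.env
        EM.next_tick do
          status, headers, body = yield
          # thin exposes the open connection through env['async.callback'].
          env['async.callback'].call([status, headers, body])
        end
        throw :async # thin's signal: don't write a response yet
      end
    end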


The percentage isn't really fair. I'm sure it's a vast improvement, but a lot of requests were failing (before the change) due to what appears to be under-scaling ... at least the author was fair in stating he was after a link-baity title.
