
1600% faster app requests with Rails on Heroku - friism
https://coderwall.com/p/5cafjw
======
joevandyk
How would this compare to using JRuby and threads?

If you use eventmachine, _every_ single network call you make has to be
evented. So you'd need to use things like
<https://github.com/leftbee/em-postgresql-adapter>, which aren't going to be
as well tested as the standard pg driver.

~~~
jrochkind1
yep, or even MRI with threads.

EventMachine requires fundamental changes to your code.

threads do not.

And even with MRI, you will, I am going to predict, see _significant_
performance improvement using an app server that can dispatch multi-threaded
(say, puma) with config.threadsafe!.

I am confused why threads aren't getting more attention on this topic.
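
As a quick illustration of the point (a hypothetical benchmark, using sleep as
a stand-in for a blocking network call -- sleep releases MRI's GIL the same way
real network I/O does):

```ruby
require 'benchmark'

# Stand-in for a slow third-party API call. Under MRI, blocking I/O
# (like sleep here) releases the GIL, so other threads keep running.
def slow_call
  sleep 0.2
end

sequential = Benchmark.realtime { 5.times { slow_call } }
threaded   = Benchmark.realtime do
  5.times.map { Thread.new { slow_call } }.each(&:join)
end

puts format('sequential: %.2fs  threaded: %.2fs', sequential, threaded)
```

Five overlapping 0.2s calls finish in roughly 0.2s threaded versus roughly 1s
sequentially -- with no changes to the calling code, which is the whole point
versus eventmachine.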

~~~
tenderlove
> I am confused why threads aren't getting more attention on this topic.

I think it's because of the "threads are hard" meme. I _think_ the Ruby
community is growing beyond that, but it's not a fast process.

~~~
jrochkind1
Ironically, doing evented/fiber code right is probably harder than doing
threads right, for this kind of stuff.

I'm a bit astounded that heroku, in their attempt to deal with, um, let's call
it "routing-gate", aren't talking about multi-threaded dispatch and
config.threadsafe!, but only unicorn with 2-4 forked processes. It seems
awfully likely that multi-threaded dispatch is going to scale a lot more
efficiently with regard to the number of overlapping requests.

I think some of it is the lack of mature, robust, 'self-managing' app server
solutions. For MRI (with the GIL), what's likely needed is something that can
fork multiple processes (to use all cores), with each of those processes
dispatching multi-threaded (to deal with I/O blocking as well as even-ing out
latency when not all requests finish in identical time). So far as I know,
Passenger 4 Enterprise is the only thing that can do this for you, without you
having to manually set it all up.
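
For what it's worth, puma's clustered mode (where available) has the shape
described above -- forked workers, each with its own thread pool. A
hypothetical config might look like this (values are illustrative, not a
recommendation):

```ruby
# config/puma.rb -- hypothetical values, tune for your app and dyno size
workers 2          # fork 2 processes so MRI's GIL doesn't pin you to one core
threads 4, 16      # each worker dispatches up to 16 requests concurrently
preload_app!       # load the app once before forking (copy-on-write friendly)
```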

------
seivan
Pardon me, but how is this better than queuing a job and checking the status
of that job instead? That's my standard MO when dealing with 3rd-party API
calls from a request.

~~~
jbaudanza
How do you check for the status of the job? Do you hold the client connection
open and block the process while you wait? Or do you notify the client through
some other mechanism (polling, websockets, etc.)?

The advantage of evented I/O is that you don't have to do either of these
things.

~~~
seivan
The way I do it now is to tell the client to check back in X seconds,
depending on the ratio of available to busy workers.

I already have a client-side wrapper for this on iOS and am working on a
Batman.js version.

I usually do this for client-side login with different providers.

Say Facebook or Twitter. Login on the client, obtain token, send to server for
validation. Server validates against Facebook/Twitter.

Server will tell the client to check back in X seconds. Client waits X seconds
and does another check. Server is either done or not.

I'd rather do that than keep a request open. It's also easier to manage on
iOS, since the user might, say, decide to check their email while the login is
still processing.
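
The server side of that pattern can be sketched in plain Ruby (hypothetical
JobBoard class and names -- a real app would back this with something like
Resque and a status endpoint that also returns the retry interval):

```ruby
require 'securerandom'

# Hypothetical in-process job board. Clients enqueue work, get back a
# token, and poll the status endpoint until the job is done.
class JobBoard
  def initialize
    @jobs  = {}
    @mutex = Mutex.new
  end

  # Kick off the slow 3rd-party call in the background, return a token.
  def enqueue(&work)
    id = SecureRandom.hex(8)
    @mutex.synchronize { @jobs[id] = { status: :pending } }
    Thread.new do
      result = work.call
      @mutex.synchronize { @jobs[id] = { status: :done, result: result } }
    end
    id
  end

  # What the "check back in X seconds" endpoint reads.
  def status(id)
    @mutex.synchronize { @jobs.fetch(id) }
  end
end

board = JobBoard.new
id = board.enqueue { sleep 0.1; 'token-validated' }  # e.g. validate with Facebook
sleep 0.05 until board.status(id)[:status] == :done  # client-side polling loop
p board.status(id)[:result]                          # => "token-validated"
```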

------
conanbatt
This is not a performance improvement; it's an architectural change that has
to be accounted for when using Unicorn. Unicorn is NOT for slow requests, as
its own documentation explains (<http://unicorn.bogomips.org/>).

On Kaya.gs, I built a queue to handle 3rd-party requests, which has the
advantages of being light on the environment and having a small footprint.

The title is enticing as well, as this has nothing to do with Heroku.

------
jbaudanza
I use this technique on Heroku for my oauth requests and it works well. One
downside is that your responses have to interface directly with Rack, and you
lose out on all the functionality in Rails and the middleware stack.

To account for this, I created a controller mixin to recreate the middleware
stack for responses:

<http://www.jonb.org/2013/01/25/async-rails.html>
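
For anyone curious what this looks like at the Rack level, here's a rough
sketch of the async convention supported by evented servers like Thin: throw
:async instead of returning, then deliver the response later via
env['async.callback']. (A plain thread stands in for the EventMachine reactor
here, and the "server" loop is simulated.)

```ruby
# Sketch of the async Rack convention: the app throws :async instead of
# returning a [status, headers, body] triple, and hands the real response
# to env['async.callback'] once the slow work (e.g. an oauth round-trip)
# finishes.
def async_app(env)
  callback = env['async.callback']
  # A real app would fire this from EventMachine when the evented I/O
  # completes; a thread stands in for the reactor in this sketch.
  Thread.new do
    sleep 0.05
    callback.call([200, { 'Content-Type' => 'text/plain' }, ['done']])
  end
  throw :async
end

# Simulate what an async-aware server does around the app:
responses = Queue.new
env = { 'async.callback' => ->(resp) { responses << resp } }
catch(:async) { async_app(env) }        # returns immediately; worker is free
status, _headers, body = responses.pop  # blocks until the callback fires
p [status, body.first]                  # => [200, "done"]
```

Note that the response bypasses the Rails middleware stack entirely, which is
exactly the downside described above.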

------
JOnAgain
Percentage isn't really fair. I'm sure it's a vast improvement, but a lot of
requests were failing (before the change) due to what appears to be
under-scaling ... at least the author was fair in stating he was after a
link-baity title.

