If you use EventMachine, every network call you make has to be evented. So you'd need to use things like https://github.com/leftbee/em-postgresql-adapter, which aren't going to be as well tested as the standard pg driver.
EventMachine requires fundamental changes to your code.
Threads do not.
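The contrast can be sketched with nothing but the standard library: plain blocking calls run concurrently when wrapped in threads, with no restructuring of the calling code. The `sleep` here is a stand-in for a blocking pg query.

```ruby
# A minimal sketch: blocking calls overlap when run in threads.
# sleep stands in for a blocking database or HTTP call.
require "benchmark"

def blocking_call(seconds)
  sleep(seconds) # stands in for a blocking pg query
  seconds
end

elapsed = Benchmark.realtime do
  threads = Array.new(3) { Thread.new { blocking_call(0.2) } }
  threads.each(&:join)
end

# Three 0.2s "queries" overlap instead of taking 0.6s back to back.
puts format("%.2fs", elapsed)
```

Even under MRI's GIL this works, because the GIL is released while a thread is blocked on I/O (or, here, sleeping).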
And even with MRI, you will, I predict, see a _significant_ performance improvement using an app server that can dispatch multi-threaded (say, Puma) with config.threadsafe!.
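For concreteness, a sketch of what that configuration looks like, assuming a Rails 3.x app (MyApp and the thread counts are placeholders):

```ruby
# config/environments/production.rb -- MyApp is a placeholder name.
# config.threadsafe! removes the Rack::Lock mutex around each request,
# so a multi-threaded server can dispatch several requests at once
# within one process.
MyApp::Application.configure do
  config.threadsafe!
end

# Then run a threaded server, e.g.:
#   puma -t 8:16 -p 3000
```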
I am confused why threads aren't getting more attention on this topic.
I think it's because of the "threads are hard" meme. I think the Ruby community is growing beyond that, but it's not a fast process.
I'm a bit astounded that Heroku, in their attempt to deal with, um, let's call it "routing-gate", aren't talking about multi-threaded dispatch and config.threadsafe!, but only Unicorn with 2-4 forked processes, when it seems awfully likely that multi-threaded dispatch is going to scale a lot more efficiently with the number of overlapping requests.
I think some of it is the lack of mature, robust, 'self-managing' app server solutions. For MRI (with the GIL), what's likely needed is something that can fork multiple processes (to use all cores), with each of those processes dispatching multi-threaded (to deal with I/O blocking, as well as evening out latency when not all requests finish in identical time). So far as I know, Passenger 4 Enterprise is the only thing that can do this for you without you having to set it all up manually.
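Setting that hybrid model up by hand, a Puma config would look roughly like this (the counts are illustrative; you'd tune them to your core count and workload):

```ruby
# config/puma.rb -- a manual version of the hybrid model: forked
# workers to use the cores (working around MRI's GIL), plus a thread
# pool inside each worker to absorb blocking I/O.
workers 4          # forked processes, roughly one per core
threads 8, 16      # min/max threads per worker
preload_app!       # boot the app once, then fork (copy-on-write friendly)
```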
But in this case, as long as you expect your database requests to be fast and reliable, it's fine to mix in the standard blocking pg driver.
The advantage of evented I/O is that you don't have to do either of these things.
I already have a wrapper for the client side on iOS and am working on a Batman.js version.
I usually do this for client-side login with different providers, say Facebook or Twitter.
Login on the client, obtain token, send to server for validation. Server validates against Facebook/Twitter.
Server will tell the client to check back in X seconds.
Client waits X seconds and does another check. Server is either done or not.
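That flow can be sketched server-side with a background thread and an in-memory job table. Everything here (JOBS, start_validation, the 2-second retry hint) is a hypothetical stand-in, and the sleep stands in for the real call to Facebook/Twitter:

```ruby
# A sketch of the check-back-in-X-seconds flow. Names are hypothetical.
require "securerandom"

JOBS = {}

# Client logged in and obtained a token; server kicks off validation
# in the background and immediately tells the client when to check back.
def start_validation(token)
  id = SecureRandom.hex(8)
  JOBS[id] = { status: :pending }
  Thread.new do
    sleep(0.1) # stands in for validating the token with the provider
    JOBS[id] = { status: :done, valid: !token.empty? }
  end
  { id: id, retry_after: 2 } # "check back in 2 seconds"
end

# Client checks back; server is either done or not.
def check(id)
  JOBS.fetch(id, { status: :unknown })
end

job = start_validation("provider-token")
sleep(0.3) # the client waits, then polls
puts check(job[:id])[:status] # prints "done"
```

The web request returns immediately in both endpoints; the only long-lived thing is the background validation, not an open HTTP connection.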
I'd rather do that than keep a request open.
It's easier to manage on iOS as well, since, say, the user might decide to check their email while the login is still processing.
On Kaya.gs, I built a queue to handle 3rd-party requests, which has the advantage of being light on the environment and having a small footprint.
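The pattern can be sketched with the thread-safe queue in Ruby's standard library; the worker body here is a stand-in for the real 3rd-party call:

```ruby
# A minimal sketch of a small in-process job queue. Thread::Queue is
# thread-safe; the string handling stands in for external API calls.
queue   = Thread::Queue.new
results = []

worker = Thread.new do
  while (job = queue.pop)       # pop blocks until a job (or the sentinel) arrives
    results << "handled #{job}" # stands in for calling the external API
  end
end

%w[facebook twitter].each { |provider| queue << provider }
queue << nil # sentinel: no more work
worker.join

puts results.inspect # prints ["handled facebook", "handled twitter"]
```

Web requests just enqueue and return; the slow third-party work happens off the request path.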
The title is enticing as well, as this has nothing to do with Heroku.
To account for this, I created a controller mixin to recreate the middleware stack for responses: