Ironically, doing evented/fiber code right is probably harder than doing threads right, for this kind of stuff.
I'm a bit astounded that heroku, in their attempt to deal with, um, let's call it "routing-gate", aren't talking about multi-threaded dispatch and config.threadsafe!, but only unicorn with 2-4 forked processes. When it seems awfully likely that multi-threaded dispatch is going to scale a lot more efficiently with regard to the number of overlapping requests.
I think some of it is the lack of mature, robust, 'self-managing' app server solutions. For MRI (with the GIL), what's likely needed is something that can fork multiple processes (to use all cores), with each of those processes dispatching multi-threaded (to deal with I/O blocking as well as evening out latency when not all requests finish in identical time). So far as I know, Passenger 4 Enterprise is the only thing that can do this for you, without you having to manually set it all up.
Seriously, I just recently switched to puma and enabled config.threadsafe! and that was that. Now I/O calls like HTTP requests don't block the server, but are still synchronous. I didn't even need to switch to jruby.
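For what it's worth, the hybrid setup described above (forked workers, each dispatching multi-threaded) is roughly what a Puma config can express. A minimal sketch, with illustrative numbers you'd tune to your own core count and workload:

```ruby
# config/puma.rb -- illustrative values, not a recommendation
workers 2          # forked processes, one per core, to work around the GIL
threads 8, 16      # min/max threads per worker, for overlapping I/O waits

preload_app!       # load the app once before forking, to share memory

on_worker_boot do
  # per-process resources (e.g. DB connections) must be re-established after fork
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end
```

With config.threadsafe! enabled in Rails, each worker then serves concurrent requests on its thread pool instead of one at a time.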
I use this technique on Heroku for my oauth requests and it works well. One downside is that your responses have to interface directly with Rack, and you lose out on all the functionality in Rails and the middleware stack.
To account for this, I created a controller mixin to recreate the middleware stack for responses:
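I don't have the original mixin handy, but since a Rack app is just an object responding to #call(env) and middleware just wraps one, the idea can be sketched in plain Ruby (the module and middleware names here are hypothetical, not the actual code):

```ruby
# Middleware wraps an inner Rack app and returns a modified response triple.
class AddServerHeader
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    [status, headers.merge("X-Served-By" => "bare-rack"), body]
  end
end

# Hypothetical mixin: rebuild a middleware chain around a raw Rack endpoint,
# so responses that bypass Rails still pass through the pieces you need.
module MiddlewareStack
  extend self

  MIDDLEWARE = [AddServerHeader]  # list whichever middleware you want back

  def call_with_stack(endpoint, env)
    app = MIDDLEWARE.reverse.reduce(endpoint) { |inner, klass| klass.new(inner) }
    app.call(env)
  end
end

endpoint = ->(env) { [200, {}, ["ok"]] }
status, headers, _body = MiddlewareStack.call_with_stack(endpoint, {})
```

The real Rails stack is assembled the same way, just with a much longer list.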
How do you check for the status of the job? Do you hold the client connection open and block the process while you wait? Or do you notify the client through some other mechanism (polling, websocket, etc.)?
The advantage of evented I/O is that you don't have to do either of these things.
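To make that concrete, here is a toy sketch of the evented/fiber idea in plain Ruby (no real I/O; the "requests" just park themselves where a socket read would yield):

```ruby
log = []

# Each "request" is a fiber that runs until it hits (simulated) I/O,
# then yields control back to the event loop instead of blocking a process.
requests = [
  Fiber.new { log << "A: start"; Fiber.yield; log << "A: I/O done" },
  Fiber.new { log << "B: start"; Fiber.yield; log << "B: I/O done" },
]

# First pass: every request runs up to its I/O point and parks itself.
requests.each(&:resume)

# The loop is free here -- nothing is blocked, no client is polling.

# Second pass: the I/O has "completed", so resume each parked request.
requests.each(&:resume)
```

One thread interleaves both requests; neither holds a process hostage while waiting.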
This is not a performance improvement, it's an architectural change that has to be accounted for when using Unicorn. Unicorn is NOT for slow requests, as its own documentation explains (http://unicorn.bogomips.org/).
On Kaya.gs, to handle 3rd-party requests I built a queue, which has the advantage of being light on environment and having a small footprint.
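I can't speak to the Kaya.gs internals, but the general shape of such a queue, using only Ruby's thread-safe Queue from the standard library, might look like:

```ruby
# Queue is a thread-safe FIFO; #pop blocks until a job is available.
JOBS = Queue.new

# A single lightweight worker drains jobs off the request path.
worker = Thread.new do
  loop do
    job = JOBS.pop
    break if job == :shutdown
    job.call  # e.g. fire the 3rd-party HTTP request here
  end
end

# Inside a web request: enqueue the slow work and return immediately.
results = []
JOBS << -> { results << :oauth_done }  # placeholder for the real 3rd-party call

JOBS << :shutdown
worker.join
```

The web process answers fast; the slow 3rd-party call happens off to the side, which is the whole point of keeping it out of Unicorn's request cycle.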
The title is enticing as well, as this has nothing to do with Heroku.
percentage isn't really fair. I'm sure it's a vast improvement, but a lot of requests were failing (before the change) due to what appears to be under-scaling ... at least the author was fair in stating he was after a link-baity title.