

Comparing MRI Rails App Server Performance on Heroku - jrochkind1
https://github.com/jrochkind/fake_work_app/blob/master/README.md

======
evanphx
Because of the nature of Heroku (and thus EC2) there will always be unknown
and random latency associated with shared nodes and hidden network
utilization.

What's important is that you did all the tests on heroku and gave enough of a
window to subject all tests to the random latency.

This allows users to say "if I'm going to be deploying on Heroku, these
measurements are valid." If they deploy anywhere else, those variances will be
different and change the equation (though it's unlikely they'd completely flip
the results).

Thanks a ton for the great analysis!

~~~
jrochkind1
Thanks! And thanks for puma.

I think I actually probably DIDN'T give quite enough of a window to even out
the env -- I generally ran each scenario test for only around 30-60 seconds
(you can see how long each test ran in the results; it's there in the captured
output).

Still, by running so many tests and getting _largely_ consistent results, I
think it probably averages out into giving us a basic picture, and I'd be
surprised if results weren't basically consistent even with longer runs.

But if someone wanted to re-run it giving dozens of minutes or more to each
scenario, hey, you've got my code, heh. It would probably take a day or more
to run everything, but such a person wouldn't have to spend the time I took
setting it all up if they used my code.

------
rartichoke
Any thoughts on doing the same thing with Sinatra and/or Padrino? Also do you
think rails 4 will make any improvements?

~20 reqs per second with barely any concurrency seems so low. Is heroku, ruby
or rails the culprit for such low throughput?

~~~
jrochkind1
> Any thoughts on doing the same thing with Sinatra and/or Padrino? Also do
> you think rails 4 will make any improvements?

I am not familiar enough with either of those frameworks to know if they can
do multi-threaded concurrent request dispatch -- if they can, then it'll apply
to them too, I think. But I'm not interested in doing the experiment myself.

I don't think rails4 fundamentally changes the picture, as far as _comparing_
different app servers.

> ~20 reqs per second with barely any concurrency seems so low. Is heroku,
> ruby or rails the culprit for such low throughput?

It may be mostly that I was choosing to model relatively slow apps. With an
app that takes 100ms of _cpu time_ to return a response, and 4 cores -- the
_theoretical_ maximum is only 40 reqs/second; it's not possible for it to get
any higher than that.
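
That ceiling is just arithmetic -- each core can serve at most one
100ms-of-CPU request per 100ms. A sketch, using the hypothetical numbers
above:

```ruby
# Theoretical throughput ceiling for a CPU-bound app:
# each request costs 100ms of CPU time, and there are 4 cores.
cpu_time_per_request = 0.100 # seconds of CPU work per request
cores = 4

per_core = 1.0 / cpu_time_per_request  # 10 requests/second per core
max_throughput = cores * per_core

puts max_throughput # => 40.0
```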

For my own work, my apps don't get a whole lot faster than what I modelled,
but others might.

But I wasn't really setting things up to look at the actual requests/second
numbers -- in what I set up, the relationship between the scenarios is more
significant. If you wanted meaningful absolute numbers (or to compare
ruby/rails to something else) you'd probably want to test a real app doing
real things, not my simulation app.

I also think, generally, "requests per second" is not the right number to be
looking at -- what you (and the users or clients) really care about is how
long the clients spend waiting for a response (at median, and how much
variation), right?
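
For what it's worth, that median-and-variation view is cheap to compute from
raw per-request timings. A sketch in Ruby (the sample latencies are made up,
and this is a simple nearest-rank percentile, not what ab itself reports):

```ruby
# Summarize per-request latencies (ms) by median and 95th percentile --
# usually more informative than a single requests/second figure.
def percentile(sorted, pct)
  sorted[((sorted.length - 1) * pct).round]
end

latencies_ms = [110, 102, 98, 350, 105, 99, 120, 101, 97, 104].sort

median = percentile(latencies_ms, 0.50)
p95    = percentile(latencies_ms, 0.95)

puts "median: #{median}ms, p95: #{p95}ms"
```

A big gap between the median and the p95 is exactly the "how much variation"
part -- a single slow outlier (the 350ms sample here) barely moves the
median but dominates the tail.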

~~~
rartichoke
Yeah, latency is more important, but wouldn't you cache most results in a
real app?

I think 37signals showed us that a Rails app is capable of delivering low
latency responses. In some presentation David mentioned most responses are
sent out in 20-50ms, but he didn't mention the hardware involved.

------
trustfundbaby
What I've been wanting to see is how puma threads compete directly against
unicorn workers ... so as an example ... how does puma with 4 threads do vs
Unicorn with 4 processes? I can't find that particular comparison anywhere.
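
Setting up that head-to-head is mostly configuration. A sketch of matching
concurrency settings -- these are two separate files, and the filenames and
numbers are illustrative, not from the article:

```ruby
# --- config/puma.rb: one process with a pool of 4 threads ---
workers 0        # no forked workers; a single process
threads 4, 4     # min and max thread pool size

# --- config/unicorn.rb: 4 single-threaded worker processes ---
worker_processes 4
```

With both capped at a nominal concurrency of 4, benchmarking the same
endpoint would isolate threads vs processes -- though on MRI the GVL means
the threaded version only overlaps I/O wait, not CPU work.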

------
fbernier
Nice write up. I recommend
[https://github.com/wg/wrk](https://github.com/wg/wrk) instead of ab if you
ever have to run that kind of benchmark again.
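
For anyone comparing the two tools, typical invocations look like this (the
URL, counts, and durations are placeholders):

```shell
# ab: issue 1000 requests total, 10 concurrently
ab -n 1000 -c 10 http://example.herokuapp.com/

# wrk: 2 threads driving 10 open connections for 30 seconds
wrk -t2 -c10 -d30s http://example.herokuapp.com/
```

wrk also reports a latency distribution (add `--latency` for percentiles),
which fits the median-plus-variation view discussed above.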

