500 requests/s with Ruby on Rails 4 on a 5$ / month server (wiemann.name)
26 points by matwiemann on May 27, 2013 | 15 comments


Am I reading the ab output wrong or did you only make 40 requests and 13 of them failed?

Also, there's no mention of what your test actually is.


Yeah, spot on - the test is useless due to the small sample size. Rounding errors alone could give you 5000 requests/second if you're lucky.


Indeed,

  Complete requests: 40
  Failed requests: 13

is far from good.
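
For a number that actually means something, you'd want ab to push through a few thousand requests, e.g. (URL and figures are just placeholders):

  ab -n 5000 -c 20 http://your-server/

Here -n is the total request count and -c the concurrency; with only 40 completed requests the per-second figure is mostly noise.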


And 28 were non-2xx, which is also a failure. Probably DB issues, since there is no way SQLite can handle 20 concurrent connections.


I've updated the blog post with a proper sample size. Thanks for the feedback!


You're missing the ab output now, which also hurts the quality of the analysis.


I came to say the same thing. Probably best to just flag this and move on. The results show most of the requests failed.


What is the best way to establish the optimal pool size? The article seems to gloss over this and instead just says 'you need to set pool: 25', but I doubt that's a one-size-fits-all solution.

Anyone have any experience profiling these sorts of things that can share some info?


You probably want to find out max_connections for your server.

Then figure out how many other connections you'll need outside your server (rails console, cron jobs, command line), subtract that, and you have your number.

I like to have about 10 spare connections.

Running

  select name, setting from pg_settings where name = 'max_connections';

in Postgres, I see I have 100 max connections.

So that minus 10 gives me a pool size of 90.
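
To make that concrete, here's a rough sketch (assuming Postgres and Rails; the names and numbers are just illustrative) of working the figure out from the Rails console, which is what then goes into the pool: setting in config/database.yml:

  # Query the server-side limit, then leave headroom for anything
  # connecting outside the app: rails console, cron jobs, psql, etc.
  max = ActiveRecord::Base.connection.select_value(
    "SELECT setting FROM pg_settings WHERE name = 'max_connections'"
  ).to_i
  spare     = 10            # spare connections, as above
  pool_size = max - spare   # => 90 with the default max_connections of 100

Keep in mind the pool is per process, so if you run several app server processes they each get their own pool and share the same server-side limit.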


Cool write-up - I'd never heard of the Puma server. I should check it out.

BTW, similar results can also be achieved with any RoR version and JRuby 1.7 (for me, JRuby 1.7 uses much less memory than 1.6). Threads in JRuby are quite cheap. And, as always, the database is the bottleneck, but JRuby is cool because you can embed BerkeleyDB in it (so open a temporary Berkeley DB and cache things like crazy).
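
In JRuby that's roughly something like the sketch below (assuming Berkeley DB Java Edition's je.jar is on the classpath; the classes come from the com.sleepycat.je package, and the cache dir and keys are just placeholders):

  require 'java'
  require 'fileutils'
  java_import 'com.sleepycat.je.Environment'
  java_import 'com.sleepycat.je.EnvironmentConfig'
  java_import 'com.sleepycat.je.DatabaseConfig'
  java_import 'com.sleepycat.je.DatabaseEntry'

  # Open (or create) an embedded JE environment to use as a local cache.
  FileUtils.mkdir_p('/tmp/bdb_cache')   # JE expects the home dir to exist
  env_config = EnvironmentConfig.new
  env_config.allow_create = true
  env = Environment.new(java.io.File.new('/tmp/bdb_cache'), env_config)

  db_config = DatabaseConfig.new
  db_config.allow_create = true
  cache = env.open_database(nil, 'cache', db_config)

  # Store one entry; reads go through Database#get the same way.
  key = DatabaseEntry.new('user:42'.to_java_bytes)
  val = DatabaseEntry.new('rendered fragment'.to_java_bytes)
  cache.put(nil, key, val)

Since it all lives in the same JVM there's no network round trip, which is what makes it so cheap to cache things like crazy.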


If anybody wants to try it themselves, the coupon "SSDTWEET" will give you $10 of credit on DigitalOcean.


Last time I benchmarked Puma on MRI 1.9 and 2.0 it was quite fast for most requests, but some requests took more than 20 seconds to complete. The bottom line is that it would be nice to see the full ab output.


Discussion from his previous post in the series: https://news.ycombinator.com/item?id=5763282


I find a single dyno on Heroku does a good job too and costs $0 / month. So far the most traffic I've handled is 1,708 page views in an hour with Rails 4 (and a lot of action caching).


That's less than 1 request/second (1,708 page views over an hour works out to roughly 0.5 requests/second). Rails should be fine even without caching.



