
I downloaded Rails, followed a few online tutorials to build the app, and ran script/server to run the server. As mentioned in the benchmarking document, we're not experts on Rails deployment and will gladly accept suggestions for improvements that can be implemented in a few minutes.


You are running the benchmark with WEBrick, a slow, development-only web server. For Rails you should run nginx + Passenger.


For Rails you should run nginx + Passenger

No, that's for a production website and requires setup. They just need to replace WEBrick with Thin:

> gem install thin

then

> thin start

instead of

> ./script/server

Simple.


I ran the benchmark with the thin server using the following httperf command:

httperf --hog --num-conns 1000 --num-calls 1000 --burst-length 20 --port 3000 --rate 1000 --uri=/pong

The best I got was 258 requests/sec, with an average of 13.5 responses/sec. The chart on the website plots responses/sec, so Thin seems to perform worse. I'm guessing we're running into the limit of 1024 open file descriptors. I didn't test with the libev version of Snap, so my testing had that limit too; my guess is that Snap is faster and manages to stave off the limit long enough to serve a reasonable number of requests. But I don't know for sure.
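If the 1024-descriptor ceiling is the suspect, it's easy to check and raise the per-process limit in the shell before starting the server and re-running httperf. A minimal sketch (raising the soft limit to the hard limit is just one illustrative choice, not something from the benchmark):

```shell
# Show the current soft limit on open file descriptors for this shell.
ulimit -n

# Show the hard limit; the soft limit can be raised up to this without root.
ulimit -Hn

# Raise the soft limit for this shell and its children (must not exceed
# the hard limit printed above). The server started afterwards inherits it.
ulimit -n "$(ulimit -Hn)"
```

Note the change only applies to the shell that runs it and to processes launched from it, so it has to happen in the same session that starts the server.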


Um...shouldn't it also be:

script/server -e production

Running in development mode severely hampers Rails' performance.
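For reference, the big wins in production mode come from settings along these lines in config/environments/production.rb — a Rails 2.x-era sketch of the relevant defaults, not the benchmark app's actual config:

```ruby
# config/environments/production.rb (sketch of typical defaults)

# Cache classes between requests instead of reloading code on every hit --
# the single biggest difference from development mode.
config.cache_classes = true

# Enable controller and view caching.
config.action_controller.perform_caching = true

# Don't render full error reports for every request.
config.action_controller.consider_all_requests_local = false
```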


Good point. I also noticed they're running Snap with 4 threads. Since Rails is multi-process rather than multi-threaded, they would actually need to set up something like Passenger, as acangiano suggested.

Better would be to just limit the test to single-process/single-thread. The idea should be to compare web frameworks, not web servers.


Yes, but it's also useful to compare similar levels of effort. Using 4 threads with Snap is a similar level of effort to using thin. Setting up 4 separate processes with passenger requires quite a bit more effort, especially if you haven't done it before.


You can set up 4 processes with thin, just do:

> thin -e production -s 4 start

The problem is that each process listens on its own port. They can't all listen on port 3000, so you need some sort of proxy to parcel out the requests.
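A minimal nginx front end can do that round-robin proxying. A sketch, assuming a Thin cluster on ports 3000–3003 (the ports `thin -s 4 start` binds by default):

```nginx
# Round-robin requests across the four Thin processes.
upstream thin_cluster {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

server {
    listen 80;
    location / {
        proxy_pass http://thin_cluster;
    }
}
```

With this in place, httperf would point at port 80 instead of 3000, so the benchmark would then include nginx's proxying overhead as well.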


I did use -e production; I just didn't remember to mention it in the post above. I'll try Thin, though.



