Hacker News

In the past I've noticed posters on HN picking on Rails by lazily linking to these benchmarks. But click over to the average latency tab and Rails looks pretty solid, with an average response latency of 1.8 ms. That's not at the very top, but it's far better than Django, which is a comparable framework and sits near the bottom of the average latency table.

If anything, to me this data confirms that Rails is an amazing tool: not only do you get to develop quickly, but you also get pretty good average latency (or at least the potential for it, depending on which third-party libraries you add to your app). What Rails isn't good at is throughput, which is almost never a problem for an early-stage company.

Working at a startup, it's a huge success if I ever have to handle a lot of connections to my app, but today and every day, I want fast response times on a page load.




Fair warning: as far as I know, wrk's latency measurement does not distinguish between 500s and 200s. For some frameworks, you will see unnaturally low latency because the front-end web server is returning a 500 response very quickly.


As bhauer pointed out, 500s are counted in those latency figures. If you look at the error count column, Rails does miserably in everything but the "single query" test, which is not a common use case.


Great point. I missed the tabs at the very top; there's a lot more information density here than I first realized. The errors in the multiple-query case are worrisome.


That looks a lot like an error in the test setup to me; it seems the Rails example hasn't been updated for a while.


I was shocked by the rails results, and the massive number of errors, so I looked into it a little.

The setup they're using is nginx serving 8 unicorn workers with a backlog of 256. They then throw requests at that with a concurrency of 20. The DB pool is 256 too. It seems quite likely that the unicorn queue fills up very quickly and it starts rejecting requests, which would show up as errors. It's hard to see how a maximum of 8 workers would ever get close to the 256 available DB connections.

At first glance, the unicorn setup is totally inadequate for the amount of traffic being thrown at it. The first thing to do would be to massively increase both the number of workers and the backlog; otherwise this almost instantly turns into an overflowing request queue and literally millions of errors.
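For reference, here's a sketch of what a less starved unicorn config might look like. The worker count, socket path, and backlog below are illustrative guesses, not the benchmark's actual values:

```ruby
# config/unicorn.rb -- hypothetical tuning sketch, not the benchmark's real config.
# More workers so concurrent requests don't all queue behind 8 processes,
# and a deeper listen backlog so bursts wait in the queue instead of being
# refused outright (each refusal counts as an error in the benchmark).
worker_processes 32                          # was 8 in the benchmark setup
listen "/tmp/unicorn.sock", backlog: 1024    # was 256
timeout 30
preload_app true
```

Even then, 32 workers would use only a fraction of the 256-connection DB pool, so the pool size looks like leftover config rather than a deliberate choice.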

There's no denying, though, that this kind of request flood is not exactly Rails' strong point, and if you're expecting massive numbers of fairly simple requests, you're probably better off with something else.


Rails fares badly across the board. I'm afraid you're the one who's "lazily" linking in this case. Of course you can have good latency if you instantly 500 and process far fewer requests than the competition.


Did you miss the errors column? :/


> Working at a startup it's a huge success if I ever have to handle a lot of connections to my app, but today and everyday, I want fast response times on a page load.

Realistically, I doubt many humans can distinguish between 1 ms and 100-200 ms response times.


True, especially when accounting for Internet latency.

However, the purpose of this project is not actually to measure how quickly platforms and frameworks can execute fundamental/trivial operations. Rather, these tasks are a proxy for real-world applications. Across the board, we can reasonably assume that real applications will perform 10x, 50x, 100x, or even slower than these tests. The question is, where does that put your application? If your application runs 100x slower than your platform/framework, does that put your application's response time at 200ms or 2,000ms?
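The arithmetic behind that question can be sketched in a few lines. The 2 ms and 20 ms figures below are assumed framework floors for illustration, and the 100x slowdown is the hypothetical multiplier from the paragraph above, not a measured value:

```ruby
# Back-of-envelope sketch: project a real app's latency from the framework's
# benchmark floor, assuming the app does ~100x the work of the trivial test.
def projected_latency_ms(framework_floor_ms, slowdown_factor)
  framework_floor_ms * slowdown_factor
end

puts projected_latency_ms(2.0, 100)   # 200.0 ms -- still feels responsive
puts projected_latency_ms(20.0, 100)  # 2000.0 ms -- users definitely notice
```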

That's a difference users do notice.


This article by Jakob Nielsen goes into that a bit: http://www.nngroup.com/articles/response-times-3-important-l... (1993). He claims anything under 100 ms feels instantaneous.



