The Great Web Framework Shootout – On GitHub (github.com/seedifferently)
167 points by stesch on Feb 19, 2012 | hide | past | favorite | 72 comments



From the linked blog post:

  Do these results have any real world value?
  
  Probably not. (...)
Kind of sums it up. It's fun to play around with web frameworks and draw some bar charts, but please don't take this seriously. And definitely don't say "X is slower than Y" because of this.

Things like that always remind me of this great quote by Tom Lane (source: http://en.wikipedia.org/wiki/Tom_Lane_%28computer_scientist%...):

  On idiotic benchmark comparisons: "Try to carry 500
  people from Los Angeles to Tokyo in an F-15. No? Try
  to win a dogfight in a 747. No? But they both fly, so
  it must be useful to compare them... especially on the
  basis of the most simplistic test case you can think
  of. For extra points, use *only one* test case. Perhaps
  this paper can be described as 'comparing an F-15 to a
  747 on the basis of required runway length'."


The DB Query test is pretty interesting, actually. It draws results into a much closer range, and shows that your bottleneck for WebApp/DB style apps will probably be your database.


Yeah, especially when you take into account that some of them are using a proper ORM and others are just using straight SQL queries. Sinatra looks twice as fast as Rails at DB-based pages, but if they threw ActiveRecord into the Sinatra app it might be even closer.


> Yeah, especially when you take into account that some of them are using a proper ORM and others are just using straight SQL queries. Sinatra looks twice as fast as Rails at DB-based pages, but if they threw ActiveRecord into the Sinatra app it might be even closer.

Absolutely. It's also true the other way - you don't have to use ActiveRecord with Rails, or the native ORM with Django, etc.

Of course, we already know that these benchmarks don't really have any parity, but they're interesting at a very high level as long as you understand why one might be faster.

PS: A more complex test would be nice, though - at a cursory glance, Django's ORM doesn't appear to be as slow as it used to be (or as slow as it's often made out to be).


I agree. High-traffic websites are running on many of these frameworks. As long as several companies have scaled the frameworks in question, it's better to just choose the one that makes you more productive.

When building a company it's also good to choose something your team-members know and you can recruit for.


I mostly agree with you but there's a touch of real world value here... I mean, once you start adding code to 'hello world' it's not going to get faster. So, if you have an app with some stringent perf requirements, this would tell you if you can even really start with any of these frameworks.

For that same reason, I think metrics on how much RAM they settle in at would also be interesting. They could disqualify some frameworks from RAM-constrained requirements right away (ex. if you're building some hardware with an embedded-but-sophisticated web ui).


Good traffic for the blog though. Traffic is quite... valuable these days.


I'm happy that Go performed so well, but the example is a bit strange for a few reasons.

First, the version of Go they're using (r59) is very old. The current stable release is r60, and most Go users have recently switched to tracking the weekly snapshots in preparation for Go 1 (mere weeks away).

Second, and most importantly, they demonstrate the web.go and mustache.go libraries. These third-party libraries were written shortly after Go was released (in 2009 or perhaps early 2010). Since then, Go's standard http and template packages have seen a lot of development and should now perform much better than web.go and mustache.go.

Third, there's a new "database/sql" package in the recent weeklies that provides a single interface to SQL databases. There are several drivers available, including sqlite, so it's relatively easy to implement the database part of the benchmarks in Go, too.

Given the task of benchmarking frameworks, I suppose the author thought it necessary to reach for a framework. That explains why he looked outside the standard library for these tools. Fortunately for Go programmers, the Go project regards http, templates, and databases as core functionality in its standard library.

http://weekly.golang.org/pkg/


I don't think this is a resounding victory for Go. It's a compiled language and yet it can only manage just over 3000 requests a second. With such a small difference in speed, I'd be better off with Python.

If you want to see some impressive figures from a compiled language, look at Haskell's snap.


web.go is very allocation-heavy and unoptimized, so I am unsurprised at the less-than-astounding performance shown here. Also, as I said, r59 is very old and Go has come a long way since.

Go isn't just about speed. It is also a much cleaner and simpler language than Python. Go is designed to scale programs from tens to tens of millions of lines of code.


" it is also a much cleaner and simpler language than Python"

I jumped on GitHub to look at some Go code after you said this, but was quickly disappointed. It seems like a mix between Java and C.


Under the hood web.go is using the http package, so it should benefit from all the improvements you are talking about.

What I wonder more about is whether the benchmark is setting the GOMAXPROCS env variable according to the number of CPUs on his system; otherwise he has 4 cores and 3 are idling around...


That web.go handlers return a string severely handicaps its performance. It forces you to allocate for each http request, probably several times. For large responses this is crazy. The Go http package lets you write directly to the TCP socket, and as such is capable of being way more efficient.


"Hello World" tests really need to go away and be replaced by a concurrent workload that represents 100s of users accessing, writing, etc.

I remember a test that compared MySQL to SQL Server (Microsoft's SQL Server) a few years back.

MySQL had SQL Server beat hands down... For 1 concurrent user.

Once things scaled past 10-20, SQL Server started winning.

At 30-50, MySQL could no longer respond and would crash, while SQL Server scaled to 90!

But everyone was wowed by the 1 user-load test! And countless posts were made over the years showing how MySQL is superior to the obviously flawed SQL Server.


Just to be pedantic, MSQL is not the usual shortening for MS SQL Server. MSQL is actually a product of its own, and the precursor of MySQL.


> I remember a test that compared MySQL to SQL Server (Microsoft's SQL Server) a few years back.

Just curious, how many?


The test is actually not completely meaningless as it illustrates how our modern web frameworks have shifted the bottleneck from the database back to the frontend-nodes again.

That's usually a worthwhile tradeoff (hardware is cheap) but one that a growing number of junior devs don't seem to be making consciously anymore. For them it's just normal to be spending more time per request in the controller code than in the database query. Whereas for many oldtimers it will never cease to feel a little funny...

A side-effect of this trend is that you can't reasonably run your beefy rails-frontend on a mediocre CPU anymore. When latency is dominated by ruby-code crunching on strings then you really want the fastest per-thread performance that money can buy.


Hello everyone. I am the original author of these benchmarks, and first of all let me say that I am both surprised and humbled by the attention that they have gotten. Thank you!

Secondly, let me reiterate (since some people don't seem to be reading the website that's home to these benchmarks) that while I do think benchmarks like these can be interesting, I don't believe that they carry much real-world value. As the website states:

  "When it comes to code, the slightest adjustments have the potential to change
  things drastically. While I have tried to perform each test as fairly and
  accurately as possible, it would be foolish to consider these results as
  scientific in any way."
As others have said here, comparing frameworks can be very apples-to-oranges. Still though, I do think that this type of stuff can be interesting and has its place.

As to the "why isn't XYZ listed?" questions: This started as a pet project of mine out of pure curiosity, so I initially only included frameworks that I was familiar with. Thanks to GitHub, I will try to add more frameworks as the pull requests come in as long as I feel like they are popular enough to deserve a place next to Rails, Django, etc. Keep in mind that I have listed the Amazon EC2 AMI that was used, so feel free to run your own tests if for some reason I opt not to include your favorite framework in this list.

Finally, the last refresh of these tests was done in September 2011, which should explain why some of the version numbers are a bit outdated (I didn't know putting this stuff on GitHub would get so much attention or I would have waited). I hope to have another refresh done soon (Django 1.4, Pyramid 1.3, Rails 3.2, CakePHP 2.1, and a few new faces) but life/work is busy at the moment so please be patient and feel free to run your own tests in the meantime.


The database tests are meaningless, I think, since they're comparing e.g. ORM in Django and Rails to a single, non-abstracted SQL query in Bottle. It's very apples and oranges.


It serves to show that, if you want SQL performance, maybe you'll have to ignore the ORM. Every point tells you something. Whether this something answers an important question is a completely different matter.


I disagree with others that have said that these tests are worthless for a real application. It is worth knowing the baseline speed of a framework as this affects the maximum number of requests you will be able to achieve, without scaling out your application.


I strongly disagree. There's no such thing as baseline speed, there's only baseline speed on a given hardware config. Rails does not have a max speed of X reqs/second. It has a max speed of X reqs/second on a given hardware config. If you're profiling your own app you'll have to retest it on your own hardware first to figure out how close to that baseline you are.


There's no such thing as baseline speed

Of course there is. The "hello-world" microbenchmark tells you something about the core stack performance of the frameworks relative to one another. This tends to be surprisingly indicative of the relative real-world throughput per node that you may expect for a more complex application.

I.e. Rails (Ruby) scores 4 times lower than Python in that micro-benchmark. This is pretty close to the difference that I've observed between real applications on the respective platforms. If anything you'll see the difference magnified in a complex application, but it's unlikely to turn around.

Whether that matters or not in the big picture is a different question (it usually doesn't).


If anything you'll see the difference magnified in a complex application, but it's unlikely to turn around.

I wouldn't say so. The bottleneck in a hello world benchmark might be a component of the framework, but in complex applications it's likely to be something else. Just look at the "Template Test with DB Query": there, SQLite is the bottleneck, and the performance difference between Django and Rails fades into the background completely.


in complex applications it's likely to be something else

Nope. Ruby/Python app-servers always end up CPU-bound, unless you're doing something very much out of the ordinary (such as blocking on an embedded SQLite...).


With all due respect, bollocks. The dispatcher is not tested at all here, for example. To do a proper test, add in some complicated URL dispatch rules, session handling, and various kinds of nasty hard-to-abstract-out business logic, and we might be talking. The trouble is you're then talking about a week's work to write each app. On the other hand, the "benchmarks" are an interesting ultra-simple Rosetta stone.


The comparison is reasonable as long as you test against an equivalent feature-set on each platform.

It might actually be an interesting exercise to gradually layer more features on top (such as the dispatcher chain and different templating/ORM layers) to discover inefficiencies in the various frameworks by comparing the relative performance impact.

E.g. if you involve the dispatcher and Django suddenly gets 20% slower but Rails only 10% slower then that might hint at an inefficient dispatcher-impl in Django.


I'm disappointed that there are no Perl frameworks at all!

I've submitted a pull request[1] to rectify this gross oversight. Even though this benchmark is not particularly scientific, it still provides some meaningful data.

[1] https://github.com/seedifferently/the-great-web-framework-sh...


I'd like to see the results with nginx and siege, rather than Apache and ab. Other than that, it's clear that the more magic you bake in, the slower the framework will tend to be.


Interesting and pointless at the same time, and of course sometimes that is the best type of entertainment.


Kohana is slower than CodeIgniter?! My life is a lie :(


I am actually more surprised that Yii is not as fast as I expected.

Given that Symfony had such a huge buzz (aside from CodeIgniter), it's surprising that it is apparently the slowest.


Would be curious to see the other side of the coin: given a complex web app with dozens of controllers and hundreds of templates, what's the amount of developer time required to build it with each of these frameworks?

Rather harder to automate that, though. :)


Missing my favorite framework, Noir: https://github.com/ibdknox/noir


I would like to see Noir in there also. I have been using Noir a lot in the last 4 or 5 months: good performance and nice to develop with. BTW, off topic, but since there was a lot of HN discussion yesterday about DHH's article on Basecamp Next and using pjax: I just blogged about using pjax with Noir: http://blog.markwatson.com/2012/02/using-pjax-with-clojure-a... (with a link to a github repo I just created).


I spent the last 5 years in the Rails world.

Noir has been really fun to work with. It's really fast, and its API is predictable and easy to learn. It seems heavily inspired by Rails, but without the constant dependence on "magic" to do simple things.

It's also nice to break away from the Active Record overhead and be able to find new, optimal ways to model data with Redis/Mongo.

I'd still recommend Rails to the newbie developer, but if you're experienced in developing web apps, I'd highly recommend Noir.


Submit a pull request.


I'm quite surprised to see the PHP frameworks near the bottom... any ideas on why that is? As much as I love Rails, I would not have expected it to beat any mainstream PHP framework in this type of test.


PHP has a long startup time, while Python apparently starts faster. What these benchmarks prove with a simple app that displays "Hello!" is which language has the best startup time! On a full app like Wordpress, the results may be different. Facebook still uses PHP (compiled) and not Python, and so do a lot of websites.


The PHP frameworks in question are not exactly micro frameworks.

The PHP frameworks are brought up and torn down on each request - they are not daemons.

The PHP frameworks here use a lot of file reads (many include files), compared with some of the slimmer micro-frameworks.


Those towards the bottom are either more Rails-like, i.e. they do a lot of "magic" in the background; or they are huge, with hundreds/thousands of files. There are lots of lightweight PHP frameworks not listed.


Interesting comparisons. I love the idea and the discussions of what particular tests could offer.

This may ruffle some feathers of people who love their particular framework and derive satisfaction from using it over others....

I think some other questions it's bringing up are even more interesting:

- If no problem is the same, does any framework matter, except the fact that you have one that decently keeps you organized?

- Is there really a one framework fits all approach? I think not. Some benefit from ORM, some might not.

- We all know premature optimization is not productive. Does it really matter what anyone uses?

- If we don't like trivializing a particular framework to these metrics, why do we trivialize any other framework in general?

- How do we build a better comparison?


It would be nice to see a hello-world example with 1, 10, 100 sample routes.

It's pretty easy to get a naive routing implementation perform really well with a handful of routes. The tables can easily turn in the context of a larger application however.


It's an interesting but trivial test. The Rails numbers are incorrect and irrelevant now, though, because it was run with an outdated Ruby version (1.9.3 is now the gold standard) and an outdated Rails version.


So, would the test be made completely relevant if 1.9.3 were used?


It would be more interesting, but a simple test like this doesn't demonstrate much. A benchmark testsuite with a few algorithms or data processing patterns common to a domain would be more helpful.


This benchmark is missing Java frameworks and Grails.


You can submit a pull request.


I know this is going to burn me, but I'm interested in seeing more compiled languages. The speed that I saw from the D Forum [1] was quite impressive. I was quite disappointed to see that Go didn't do better.

[1]: https://news.ycombinator.com/item?id=3592769


This benchmark is pretty pointless but I can't resist. Here is a version in Quixote:

https://github.com/nascheme/quixote-shootout

It looks like mod_wsgi is quite a dog.


So PHP frameworks rock?


Surprised?


I hoped to see a comparison of expressiveness: e.g. a minimal todo list app implemented in several frameworks. I guess this is too much work...


Or, when we introduce an actual application, these tests are entirely meaningless.


Would like to see how much time Silex shaves off the Symfony bench, since it uses some of the same components but much less extra stuff...


Any reason you didn't use CakePHP 2.0?


On a scale of 1 to 10 for scientific rigour of this experiment, where 10 is Neil deGrasse Tyson and 1 is Jerry Springer, I'll give you a 2.


Why on earth do people insist on comparing web frameworks on a metric as meaningless as time-to-render-a-page? It's utterly pointless.


Awesome, another benchmark comparing different languages and different approaches to frameworks.


Why are Tornado and Node not on this? They would crush the competition.


Pretty far from crushing

https://github.com/carbonfive/hellod/blob/master/results.md

https://github.com/carbonfive/hellod/blob/61bcf7495470350ea1...

The OP benchmark is missing Netty and other frameworks meant to be high-performance, for higher-performance languages.


Node is only using one of the 8 cores in that test. It's not using express either.


Using a framework on top of plain node.js is not going to make it faster.


Deep down you know that's not what I meant


Am I missing something? Neither of those links have anything to do with Tornado.


It has node


Because Tornado and Node aren't web frameworks? Why don't we use a bus as a boat? Because it isn't a boat.


Interesting point... the difference between a language and a framework is often overlooked.

If there are cases where they may overlap, how would you want to develop a test?


Node is arguable. You can use Express.

Tornado is definitely a web framework.


I would also love to see ASP.NET MVC 3 being compared with the open source free stacks.


Laravel (laravel.com) should really be in this; it's one of the hottest new frameworks.


You seem to be missing a framework there.



