

Web Framework Benchmarks Round 3 - pfalls
http://www.techempower.com/benchmarks/#section=data-r3

======
bhauer
This is the latest update to our benchmarking of web application frameworks
and platforms. Since Round 2, we've received several pull requests. There is
more Scala, Erlang, Lua, PHP, Java, and Haskell; more of everything! (Sorry,
we've not yet received any .NET/Mono pull requests.)

Additionally, with the help of the author of Wrk, we've been able to change
the methodology to use time-limited tests (1 minute) rather than request-
limited tests (100,000 requests). This means all frameworks are exercised for
the same amount of time.

We've migrated the results to a stand-alone site so we have a little more
screen real-estate for the charts and tables.

I look forward to any feedback, comments, questions, or criticism. Thanks!

~~~
nobodysfool
I know all the languages are using MySQL, but MySQL drivers are pretty poor
for languages such as Go and Python, which makes the comparison quite
unbalanced. I'd love to see this with Postgres.

~~~
lbolla
Actually, Tornado's test uses MongoDB, which makes it hard to compare with the
others that use MySQL!
[https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...](https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/tornado/server.py)

------
jakejake
Thanks to techempower for putting this together. It's fantastic for me as the
author of Phreeze to see how my framework stacks up, I've always been curious.

One strange thing is that Phreeze rocks on the multi-query test on the EC2
instance but does poorly on dedicated hardware. Would anyone have a clue why
that would be? At the PHP level, I have to admit, I don't really factor in
performance tuning for specific hardware - I was very surprised to see such a
difference. I almost suspect the Nginx setup or something in the include path
searching for too many files on one platform but not the other. Does anybody
have any clues on where to look for something like that without having access
to the testing hardware?

~~~
bhauer
Hi jakejake, thanks for the kind words and for contributing your Phreeze test.
It's looking really good, and I hope to have at least language color-coding in
the next round so you can more easily see how it compares to your PHP peers.

As for your question, I'm not certain why the physical hardware turned in a
lower score than EC2. That suggests a configuration problem since we know the
physical hardware is in fact quite a bit higher-performance. Pat (pfalls) may
be able to find some time to help you diagnose it further.

Have you had a chance to benchmark Phreeze on some of your own hardware to
fine-tune its configuration?

~~~
jakejake
Thanks so much for the reply. I haven't been able to get the full suite
running where it creates the environment and everything via the setup scripts
- I just have tried to reproduce the Nginx environment and run tests manually.
But I really would like to duplicate the whole scaffolding so I can see what
may be going on. I have a crushing deadline in two weeks, then after that I'm
going to devote some time to getting the full thing running.

Color coding per language would be fantastic too!

------
pfalls
I want to give a quick thank you to everyone who has contributed thus far.
Being able to show frameworks that span this many different languages and
platforms is really an amazing achievement, one we were only able to
accomplish with the help of the community. For the frameworks that are still
missing: we're not done yet, so keep submitting pull requests and we'll get
them in.

~~~
tsunani
It's great to see such positive attitudes towards the project, and of course
even greater to see the healthy competition amongst frameworks and languages.
I hope it continues!

------
neya
Wow, thank you for including Lift. It's mind-blowing to see Lift perform only
marginally better than Ruby on Rails... I always thought of it as a very
performant framework...

~~~
gregwebs
Lift is optimized for maintaining persistent connections with users and
maintaining their state for an easier programming model and probably performs
well at that.

If your use case actually matches these benchmarks then yes, Lift would be a
poor choice.

------
saurabhnanda
Why is it that Clojure is consistently slower than Scala in the Alioth
Benchmarks Game [1], but Compojure is consistently faster than Play-Scala
here?

[1]
[http://benchmarksgame.alioth.debian.org/u64/benchmark.php?te...](http://benchmarksgame.alioth.debian.org/u64/benchmark.php?test=all&lang=scala&lang2=clojure&data=u64)

~~~
bhauer
Many variables are involved, but my conjecture is that it has more to do with
the particulars of Compojure versus Play than it does Clojure versus Scala.
For instance, take a look at the numbers put up by other Scala frameworks
(Unfiltered, Lift, Scalatra). There's nothing intrinsically slow about Scala.

------
pkroll
Where is all the attention that the first two rounds got? (I'm really hoping
someone will explain why Go's database results are so slow, and what can be
done to improve them.)

~~~
voidlogic
The most likely cause is in the MySQL driver. Go provides an interface for
drivers to implement, similar to how JDBC does in Java. One of the Go
developers, Brad Fitzpatrick, commented [1] that the actual code used by the
benchmark looks OK. The discussion of these test results on golang-nuts [1]
triggered a number of performance-related pull requests [2] to the MySQL
driver used.

Also of interest: <https://code.google.com/p/go/issues/detail?id=5323>

1: [https://groups.google.com/forum/?fromgroups=#!topic/golang-n...](https://groups.google.com/forum/?fromgroups=#!topic/golang-nuts/-YbB2Qjg41g)

2: <https://github.com/go-sql-driver/mysql/pull/52> <https://github.com/go-sql-driver/mysql/pull/55>

~~~
JulienSchmidt
Just for the record: it was not the driver's fault but a bug in the
database/sql package, which was fixed in
<https://code.google.com/p/go/source/detail?r=45c12efb46>

~~~
bhauer
Thanks for posting the follow up here, Julien. I'm glad this has been fixed.
We look forward to the Round 4 numbers with Go fixed up.

------
rjoshi
Can you please add CppCMS (<http://cppcms.com/wikipp/en/page/main>) and Gwan
(<http://gwan.com/>)? Both are high-performance web development frameworks
that claim to be fast.

~~~
bhauer
Hi rjoshi. I've added issues at our GitHub repository for those two. I don't
feel that we're qualified to create the benchmark tests ourselves, so it would
be ideal to receive them as pull requests.

------
bungle
Memory consumption charts would be a great thing to have. (edit: oh, I see:
[https://github.com/TechEmpower/FrameworkBenchmarks/issues/10...](https://github.com/TechEmpower/FrameworkBenchmarks/issues/108))

------
jnbek
Sure wish you'd benchmark the Perl frameworks... without them, the list is
just incomplete and discriminatory...

~~~
bhauer
We have not discriminated on pull requests, except where a proposed test is
redundant (say, testing the same framework as an existing test but with a
minor tweak) or doesn't work for us (we can't get the test to run).

Others from the Perl community submitted pull requests for some Perl
frameworks after Round 3, and we will be including them going forward.

Look to see that data included soon!

------
nazka
I was really looking forward to this one! Thanks!

------
scalabl3
It's very strange that in the multiple-query tests Vert.x performed very well
on EC2 and less well on dedicated hardware (its position changed
dramatically), but not so in other tests...

~~~
bhauer
Agreed. We only have conjecture on that one. I suspect that something in
either the test or the Vert.x code is causing a blocking behavior. If I recall
correctly, that test on i7 is not able to fully saturate the CPU cores.

