
Web Framework Benchmarks Round 5 - pfalls
http://www.techempower.com/blog/2013/05/17/frameworks-round-5/
======
amarsahinovic
It would be nice to add Beego [1], as I'm currently learning Go from an ebook
written by the author of the framework [2][3], and it would be nice to see how
it performs. Thanks for your hard work!

[1] <https://github.com/astaxie/beego>

[2] [https://github.com/Unknwon/build-web-application-with-golang...](https://github.com/Unknwon/build-web-application-with-golang_EN)
English translation

[3] <https://github.com/astaxie/build-web-application-with-golang> Original
version in Chinese

~~~
bhauer
I had not heard of Beego, but we'd love to accept a pull request with a Beego
implementation of the tests. Perhaps you could put one together as an exercise
while learning the language and framework? :) Sorry, I can't resist playing
the "pull request" card.

~~~
amarsahinovic
I might give it a Go :)

------
JulienSchmidt
INB4 questions about the Go results: nope, the issues from Round 4 have not been addressed yet.

As in Round 4 (the related code hasn't changed), the many concurrently spawned
goroutines probably get in each other's way, causing high latency and low
throughput.

There was a revision of the test without goroutines, which I think performed
better. But I was told the goroutine version is more realistic... (I don't
share that opinion.)

To be fair, this version also had a manual connection pool to address a
previous bug.

Also, Go's database connectivity is not very mature yet. There is still a lot
of work to do; I'm pretty sure it can and will be done.

~~~
bhauer
Hi Julien,

Thanks for the note. I'd like to get to the bottom of this and make the Go
test representative of best practices. A previous decision may have been made
to favor an implementation that was measured to be faster at the expense of
best practices [1], but that is not irreversible.

I am not a Go expert and I believe you are, and certainly @bradfitz is as
well. If you two tell us definitively to change the Go implementation to
better comply with best practices, I'll see that it's done for Round 6. I
apologize if you feel we stepped on your opinions in any fashion. I really
value your input in the project to date and hope it will continue.

[1] <https://github.com/TechEmpower/FrameworkBenchmarks/pull/209>

~~~
JulienSchmidt

> _I apologize if you feel we stepped on your opinions in any fashion._

I don't ;)

I'm just not so happy about making the queries this way:
<http://i.imgur.com/u9Nx5.png>

Of course I will try to help solve this issue, but it is hard to contribute
without testing, since I have no idle server or suitable spare computer around
where I could run these tests properly. But my PC seems to have a
configuration somewhat similar to your "dedicated i7 hardware". I hope I
manage to set up the test environment in a dual boot soon.

~~~
bhauer
Thanks for clarifying! It's reassuring.

Please let us know if there is anything we can do to help ease the process. It
is not strictly necessary to set up the whole benchmark platform. You can
create the very simple database with the scripts. Then just run Go alongside a
load generator such as Wrk in order to experiment with various approaches and
do spot checks along the way.
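A spot check along those lines could be as simple as a single wrk invocation against the locally running Go implementation (the URL, port, and query parameter here are illustrative; `-t`, `-c`, and `-d` set client threads, open connections, and run duration):

```sh
# Start the Go test implementation locally, then drive it with wrk:
# 8 client threads, 256 open connections, 30-second run.
wrk -t8 -c256 -d30s "http://localhost:8080/db?queries=20"
```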

------
Todd
Thank you so much for finally including the ASP.NET/IIS/Windows stack. I
understand that it was a user contribution. This finally gives me something to
compare the other stacks against. I realize that Windows/IIS versus
Linux/Apache|nginx is apples and oranges, but it's still nice!

~~~
cmircea
Note that I submitted a pull request to have JSON.net and ServiceStack.Text as
serializers as well.

<http://www.servicestack.net/benchmarks/>

------
_mikz
I still miss async Sinatra there. I've done some work on that but don't have
enough time to finish it:
[https://github.com/mikz/FrameworkBenchmarks/commit/2140775e1...](https://github.com/mikz/FrameworkBenchmarks/commit/2140775e198f9173bbf03cceab024074be889350)

The single-thread issue is the main problem with all the Ruby benchmarks there.

~~~
bhauer
We'd like to include Async Sinatra, but I don't want to rush you. We look
forward to the pull request when you find the time to wrap it up.

------
FooBarWidget
There are some strange things I noticed about this benchmark's organization:

- JRuby is a Ruby implementation. Why is it listed under Platform? It should
be listed under Language.

- Why are Unicorn and Gunicorn listed under front-end web servers? Unicorn
and Gunicorn are explicitly _not_ front-end web servers, but are meant to be
put behind a reverse proxy, by design. The Unicorn author tells users very
clearly not to put it directly on the Internet because bad things will happen:
<http://unicorn.bogomips.org/PHILOSOPHY.html>, section "Application Concurrency
!= Network Concurrency". It would be more suitable to put both of them in the
Platform category.

~~~
bhauer
Thanks for the feedback, FooBarWidget.

We have received some similar feedback previously and have discussed some
possible changes to the meta-data structure [1] [2]. As you can imagine, it is
actually a complex problem assigning consistent terminology to all of the
various parts that can compose a web application's deployment.

Consider Go (language, platform, framework, and server all in one, at least
from our perspective) versus Rails (framework only). Some frameworks embed a
web server, others don't, and so on. We have had to make several judgment
calls in classifying this very broad spectrum of frameworks, and freely admit
that there is room for improvement in that classification.

Incidentally, the Ruby deployment is Unicorn behind nginx. We opted to
identify the Ruby deployment as "Unicorn" because it is the more significant
of the two, and to clearly indicate the divergence from a previous round in
which we were using Passenger, much to the dismay of the community.

[1]
[https://github.com/TechEmpower/FrameworkBenchmarks/issues/26...](https://github.com/TechEmpower/FrameworkBenchmarks/issues/261)

[2]
[https://github.com/TechEmpower/FrameworkBenchmarks/issues/26...](https://github.com/TechEmpower/FrameworkBenchmarks/issues/260)

------
aweb
No mention of Yii? Too bad; it's the best PHP framework I've ever used. Fast
but complete at the same time, and fully OO.

------
minikomi
Just a heads up - I think this post has been flagged off the front page?
Strange.

~~~
bhauer
Yes, the score became red according to hnslapdown [1]. It has happened to the
previous rounds, and we suspect those who can downvote stories don't feel
these updates have merit. So be it. We do enjoy the feedback from readers that
we receive from the brief time they appear on the home page, so as long as
they will permit us to share the updates here, we will continue doing so.

If you want to participate in a longer-form discussion about the project, we
invite you to join the Google Group [2].

[1] <http://thomaspark.me/2012/10/the-hacker-news-slap/>

[2]
[https://groups.google.com/forum/?fromgroups=#!forum/framewor...](https://groups.google.com/forum/?fromgroups=#!forum/framework-benchmarks)

------
cmircea
You should really do a test on ASP.NET with JSON.net and ServiceStack Text.

The default JSON serializer is hopelessly slow.

~~~
bhauer
Thanks for the tip!

I need to be clear that the ASP.NET implementation that you see in Round 5 was
contributed by user @pdonald on GitHub.

That said, we'd be happy to receive more pull requests with ASP.NET changes
and improvements. We suspect there is a lot of room for improvement in the
Windows numbers.

~~~
cmircea
I've just submitted a pull request.

~~~
bhauer
You may be the fastest pull-requester I've seen in the five rounds of this
project. Good show.

~~~
cmircea
Hehe :)

It actually took more time to clone and open the code in VS than to make the
changes.

------
FooBarWidget
Please add Phusion Passenger (<https://www.phusionpassenger.com/>) to the
benchmark for Ruby apps. Right now Unicorn is the only server in that
benchmark for Ruby but it's far from the only available server.

~~~
krg
Round 1 used Passenger, but the feedback we got was that Unicorn performed
better so we switched to that. Currently, we're aiming to show each framework
in the best possible production configuration. In the future we plan to show
multiple web/app servers per framework so that you could compare Ruby on
Passenger vs. Ruby on Unicorn.

~~~
FooBarWidget
There is a simple explanation for that. Phusion Passenger always proxies data
from the web server to another process, for stability and security reasons. If
you benchmark Unicorn directly, without putting it behind a reverse proxy,
Unicorn will look faster simply because you're avoiding another kernel socket
operation.

However as I explained in <https://news.ycombinator.com/item?id=5727232>,
Unicorn is always supposed to be put behind a reverse proxy. If you do that
you should find different results.
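For reference, a minimal nginx front end for Unicorn along those lines might look like this (the socket path and document root are illustrative, not from the benchmark's actual configuration):

```nginx
upstream app_server {
    # Unicorn listening on a local UNIX socket; never exposed directly.
    server unix:/tmp/unicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name localhost;
    root /srv/app/public;   # illustrative path

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_pass http://app_server;
    }
}
```

Benchmarking Unicorn through a front end like this, rather than directly, is the comparison being argued for above.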

Also, there's a lot of tuning in Phusion Passenger that can help performance.
The default is optimized for usability and stability. For example if you don't
prespawn processes, and let Phusion Passenger spawn them on the first request,
you'll be adding tens of seconds to the benchmark time, which would greatly
disadvantage Phusion Passenger in an unfair manner. You should set at least:

passenger_min_instances

passenger_max_pool_size

passenger_pre_start
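A minimal sketch of an nginx configuration using those three directives might look like this (paths, pool sizes, and the pre-start URL are hypothetical values, not recommendations):

```nginx
http {
    # Path depends on how Passenger was installed; this one is hypothetical.
    passenger_root /opt/passenger;
    passenger_ruby /usr/bin/ruby;

    # Cap the process pool and spawn the app before the first request hits.
    passenger_max_pool_size 8;
    passenger_pre_start http://localhost/;

    server {
        listen 80;
        server_name localhost;
        root /srv/app/public;          # hypothetical app location
        passenger_enabled on;
        passenger_min_instances 8;     # keep processes warm
    }
}
```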

------
snaky
OpenResty is _twice_ as fast as raw Go on the multiple-queries DB test
(dedicated hardware), huh.

~~~
stefantalpalaru
A pure Go MySQL driver is being used for the Go test. OpenResty probably uses
a thin wrapper around the C library.

~~~
xt
The OpenResty MySQL driver is pure Lua. It has a very efficient
socket/pooling mechanism, using sockets from nginx.

~~~
snaky
It would be even more (_much more?_) efficient to use a preconfigured
<https://github.com/chaoslawful/drizzle-nginx-module> location and
'ngx.location.capture', I suppose.

And <http://leafo.net/lapis/> is missing again.

~~~
xt
I benchmarked the pure PostgreSQL Lua driver to be ~3 times as fast as the
nginx-postgresql C driver. When you use the nginx drivers from a Lua context
you have to make an internal nginx request to that location, so there's some
overhead.
overhead.

If you want to improve even further on the Lua drivers, LuaJIT FFI is probably
the right answer.

~~~
snaky
> _~3 times as fast as the nginx-postgresql-c driver_

That's interesting..

> _LuaJIT FFI is probably the right answer_

Do you mean calling nginx internal functions (DB driver or location capture,
like the ngx_eval module does in that case) via FFI (I doubt that is safe in
any way), or just using libpq from LuaJIT directly?

~~~
xt
I haven't benchmarked the MySQL drivers, so results might be different there.

There's some work being done by the OpenResty author w.r.t. FFI for OpenResty
itself; it might yield interesting results. And yes, I think both of the
options you listed are viable. But the Lua drivers already perform very well.

------
te_chris
Why is the overhead of all the PHP frameworks so high? Is it because they have
to evaluate all the framework code per request? The difference between raw PHP
and Symfony2 is massive!

~~~
FooBarWidget
I think it's because raw PHP doesn't do anything. You're literally
benchmarking how fast you can do nothing. As soon as you add _any_ kind of
logic the number goes way down. It's like saying the performance of adding 4
numbers is a massive difference from the performance of adding 2 numbers.

------
est
If I am not mistaken, the Java servlet version uses prepared statements:

[https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...](https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/servlet/src/main/java/hello/DbPoolServlet.java)

which are somewhat faster than full ORMs and mean fewer network round trips.

~~~
bhauer
Yes, it does. Additionally, several of the ORMs leverage prepared statements.

------
alberth
Why is PHP one of the fastest on the database query test but one of the
slowest on all other tests?

~~~
camus
because you did not pay attention to the results , at all...

~~~
n0mad01
Aah, and you know!

So, please, enlighten all the shortsighted.

------
room271
I normally use Scalatra for Scala stuff, but it's interesting to see Spray's
inclusion and results.

------
snaky
What I miss from that benchmark:

- CPU/memory consumption

- higher concurrency level (4096 for i7 at least)

- dependency graphs - latency/concurrency etc.

~~~
krg
Agreed, this would be interesting. It's under discussion:
[https://github.com/TechEmpower/FrameworkBenchmarks/issues/10...](https://github.com/TechEmpower/FrameworkBenchmarks/issues/108)

~~~
snaky
Nice! I hope we will see it in the next rounds.

There is a big difference between "it serves 1200 rps" and "it serves 1200 rps
and is barely seen in top", actually.

~~~
bhauer
Yes, but to set expectations before we even implement the server statistics
capture we have planned: we _want_ the CPU to be fully utilized for most of
these tests. If you barely see the server showing up in top, something is
probably going wrong (or you've run into a different limit such as disk or
network).

We're testing the maximum number of requests a server can handle per second,
so the optimum is for the CPU to be fully utilized. A different test would
measure how much CPU is used if I want to serve X requests per second.

------
continuations
The blog post links to the Round 4 results. I can't find the Round 5 results anywhere.

~~~
JoshGlazebrook
<http://www.techempower.com/benchmarks/#section=data-r5>

------
moremojo
So Java is fastest... then why does it seem that, in the real world, Java web
apps are the slowest?

~~~
rubinelli
Assuming you are talking about web applications from banks and non-tech
Fortune 500 companies in general, it's because many of them are:

* incredibly bloated (and still lacking most of the features that an actual human user would want)

* poorly coded by armies of outsourced programmers

* using over-engineered code built on top of obsolete frameworks

* running on a "homologated" stack, which is often 3 to 7 years out of date

(I know because I was partially responsible for some of them, in my dark
past.)

------
camus
Thanks for the benchmarks.

While it will not affect the way I work, it is interesting to see the
differences between languages and platforms in the context of a web
application. Clearly Go and the JVM are doing very well in terms of
performance. In some cases it can affect the hosting cost, especially when
deploying on pay-as-you-go SaaS.

Big frameworks and ORMs, especially in dynamic languages, should really get
into serious optimization; there is no excuse for some frameworks to be so
slow on trivial things like DB requests. There will always be a last one in
the list, but that last one doesn't have to manage only 1% of the performance
of the first one. It's pretty shameful.

------
stefantalpalaru
Please add Revel (the Go framework) to the dedicated hardware tests.

~~~
bhauer
Hi stefantalpalaru. Good eye! We will get that added and patch it into Round 5
soon.

