
Web Framework Benchmarks - Round 8 - curiousAl
http://www.techempower.com/benchmarks/#section=data-r8&hw=i7&test=json
======
zaroth
It's interesting to compare what the code looks like.

CPPSP (C++ Server Pages) which is putting up ridiculous numbers... here is the
Single Query test:

[https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...](https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/cpoll_cppsp/www/db)

It's quite different from the more typical implementations, where they all
sort of look the same...

(Go)
[https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...](https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/go/src/hello/hello.go)

(NodeJS)
[https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...](https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/nodejs/hello.js)

(Gemini)
[https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...](https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/gemini/Source/hello/home/handler/HelloHandler.java)

Also interesting to compare it to C# / HttpListener... which would benefit
from moving all the framework code out into a separate library;

(C#/HTTP.sys)
[https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...](https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/HttpListener/HttpListener/Program.cs)

------
RyanZAG
Interesting observation regarding the differences between the EC2 and i7
results: the platforms at the top of the EC2 benchmarks are generally
MongoDB+async io java, while the ones at the top of the i7 results are
MySQL+heavily threaded (Go, servlet, OpenResty). I think it's a pretty
interesting result because it shows how much influence your choice of available
hardware has on which platform would be best - and it's not a small difference
either.

If you're going for an EC2/digital ocean setup with a lot of small instances,
then you want to go with something like vert.x or node or whatever - while if
you are deploying directly onto bare metal high core/ram servers, you'd be
better off with something that is better at handling high thread counts -
something like Golang.

~~~
piranha
Go uses a low number of threads, usually equal to the number of your cores or
close to it. Goroutines are not threads.

~~~
RyanZAG
If you have 8 cores, you'll get 8 threads and all your requests will be nicely
distributed across the cores. This is why Go is up near the top for the i7
benchmarks. On the EC2 ones, there are far fewer cores, so the overhead of
distributing them is much more pronounced. As you add more cores, you'd
probably see Go pull further ahead of some of the competition. However, if
you're only ever going to be running Go on small instances, as many people do,
this advantage is actually a hindrance because of the added overhead. Not that
it's necessarily a big issue or anything; it's just interesting to consider.

The point I was making is that your actual hardware and workload can turn this
benchmark on its head. You may naively think you are upgrading performance by
switching to a different framework/language, yet if you don't understand why
each platform is getting the numbers it does you might end up rewriting your
app and actually decreasing performance because of your server hardware.

------
curiousAl
I've been following these for most of the rounds, and Go has been improving
impressively. Whether that's because of improvements in the language itself or
a more zealous crowd sending pull requests, I don't know, but it made me want
to try Go, so I did. It's not as comforting as the scripting languages (PHP,
Python, JS) I'm used to. Having no REPL and having to think about types
(arrays vs. slices, maps) takes a bit more getting used to than I thought. I
find having a quick build script (mine's in vim) so you can
compile+run and go back to the code quickly helps a lot. Also,
[http://play.golang.org/](http://play.golang.org/) isn't too shabby either.

It would be fun to see this project
([https://github.com/TechEmpower/FrameworkBenchmarks](https://github.com/TechEmpower/FrameworkBenchmarks))
become more and more popular, with formidable developers squeezing out
performance from their framework of choice.

~~~
dylandrop
Yeah, but I'm kind of confused, since it's my understanding that Go is a
language, not a web framework. Is this just testing how fast Go can print
out the string "{message: 'Hello World'}"? Or are they testing a specific
component/library in Go? I mean, obviously having a language just spit out a
line is going to be faster than having a full-blown framework such as Rails
work through all of the query parsing, view building, etc., so it doesn't seem
like a very fair or useful comparison.

~~~
dangrossman
It's included, alongside Go frameworks, for the same reason PHP/Ruby/ASP.NET
are included -- so that you can see how much overhead the frameworks are
adding compared to a minimal implementation in the language they're built on.
The code behind every benchmark is available under the source code tab up top.
The Go benchmark is using some JSON library, not just printing a string.

------
ritchiea
In the past I've noticed posters on HN picking on Rails by lazily linking to
these benchmarks, but click over to the average latency tab and Rails looks
pretty solid, with an average response latency of 1.8 ms - not at the very top,
but far better than Django, which is a comparable framework and is near the
bottom of the average latency table.

If anything to me this data confirms that Rails is an amazing tool because not
only do you get to develop quickly, but you also get pretty good average
latency (or at least the potential depending on what you add to your app in
terms of 3rd party libraries). And what Rails isn't good at is throughput,
which is almost never a problem for an early stage company.

Working at a startup, it's a huge success if I ever have to handle a lot of
connections to my app, but today and every day, I want fast response times on a
page load.

~~~
grncdr
As bhauer pointed out, 500s are counted in those latency figures. If you look
at the error count column, Rails does miserably in everything but the "single
query" test, which is not a common use case.

~~~
lhm
That looks a lot like an error in the test setup to me; it seems the Rails
example hasn't been updated in a while.

~~~
venus
I was shocked by the rails results, and the massive number of errors, so I
looked into it a little.

The setup they're using is nginx serving 8 Unicorn workers with a backlog of
256. They then throw requests at that with a concurrency of 20. The DB pool is
256 too. It seems quite likely to me that the Unicorn queue fills up very
quickly and starts rejecting requests, which would show up as errors. It's hard
to see how a maximum of 8 workers would ever get close to the 256 available DB
connections.

At first glance the Unicorn setup is totally inadequate for the amount of
traffic being thrown at it. The first thing to do would be to massively
increase both the number of workers and the backlog; otherwise this almost
instantly turns into an overflowing request queue and literally millions of
errors.

There's no denying, though, that this kind of request flood is not exactly
Rails' strong point, and if you're expecting massive numbers of fairly simple
requests you're probably better off with something else.

------
sker
HHVM and Dart seem to be the two new fast performers in town, showing
impressive performance in some tests. JS has been falling off the charts
compared to some of the first rounds, but it's still a good option
performance-wise. C# keeps sucking badly. I miss Nimrod/Jester; I always
wanted to see it in the top 10.

~~~
pjmlp
> C# keeps sucking badly

I wonder why; the Fortune 500 sites we have built are handling the load quite
well.

~~~
bigtones
These benchmark tests for C# are run against MySQL or PostgreSQL on Linux. In
the Fortune 500 setup you're probably connecting to SQL Server or Oracle on
the back end, for which Microsoft and the DB vendors have optimized OLE DB
drivers.

That, and JSON serialization on .NET using the default MS serializer is super
slow. Everyone uses JSON.NET or another, faster serializer in the real world.

~~~
bhauer
We have SQL Server tests, but they were not included in this round. They were
last included in Round 7, and we'll include them again in Round 9. Here is SQL
Server in Round 7:

[http://www.techempower.com/benchmarks/#section=data-r7&hw=i7...](http://www.techempower.com/benchmarks/#section=data-r7&hw=i7&test=db&b=2)

Also, we have a JSON.NET test implementation thanks to community contribution.
It's test #138 and named "aspnet-jsonnet" as seen on the following chart of C#
tests:

[http://www.techempower.com/benchmarks/#section=data-r8&hw=i7...](http://www.techempower.com/benchmarks/#section=data-r8&hw=i7&test=json&l=2)

~~~
jongalloway2
Why weren't the SQL Server tests included? Technical limitation, something
else?

~~~
bhauer
Just time. We were already two weeks late on Round 8 due to a host of other
issues. We'd like to do one round per month if we can get our routine ironed
out.

We'll make it a priority to get them run in Round 9.

~~~
JTenerife
Please don't be annoyed by my following comment, as I appreciate your effort
and like the benchmarks very much (as a hacker - as a project leader I'd prefer
the "enterprise frameworks" to perform much better :-) ): You should really
avoid publishing incomplete benchmarks. They don't do justice to either the
left-out or the included frameworks.

~~~
mhixson
> You should really avoid publishing incomplete benchmarks.

If we had followed that advice, there never would have been a round 1. I was
so uncomfortable with the idea that we'd be publishing surely-flawed
benchmarks of frameworks we didn't really understand that I requested to be
taken off the project (prior to round 1). It was only after seeing the post-
round-1 discussions and the flood of pull requests that I realized I was
wrong.

These benchmarks are always going to have flaws. I think it is better for us
to regularly publish our best attempt than to try for perfection.

------
matrix
These results are tempting me to do my next project in a modern lightweight
Java framework. No Hibernate, bloated frameworks of yore, or weird complex
build and dependency management. Play is ruled out - it's Scala (Java is a
second-class citizen in Play).

Maybe something that ties together things like ebean ORM, Jetty, Jersey,
Jackson, Guice. Dropwizard is the right idea, but is geared towards building
REST backends.

Any suggestions on a pure Java framework that has critical mass and would fit
the bill?

~~~
matrix
Following up on my own question, there doesn't appear to be any that quite fit
the bill right now, if we define the ideal framework as having the following
characteristics:

* Java as a first-class citizen

* Strong core of basic web app functionality

* REST and Search engine friendly URLs

* Action oriented – basic framework for routes, MVC etc

* Stateless

* Good documentation, active community

If we look at action frameworks only:

* Play 2: Great except it's Scala. Ruled out.

* Spring MVC: Spring is bloated old-school Java with Hibernate. Out.

* Stripes: hasn’t had a commit in over a year… which is unfortunate because it looks interesting. Out.

* Spark: appears to be a one-person project. Out.

* Google Sitebricks: ditto

* Ninja: ditto

~~~
olavgg
Grails? You just uninstall the GORM/Hibernate plugin. Controllers have to be
written in Groovy, but everything else can be Java.

~~~
vorg
When Grails developers talk about the minimum of Groovy that must be used
instead of Java in their Grails code, it doesn't paint much of a picture for
Groovy's future. I've heard Gradle devs want to add Scala as an optional build
language in Gradle 2, but is Grails thinking about moving away from Groovy as
well?

------
stesch
I like the benchmark and I appreciate the work that was put into it, but
Erlang is missing again.

If you don't even consider Erlang, you won't miss it. But if you know it has
some strengths for this kind of job and you don't mind the syntax, you'd like
to see it compared to other solutions.

~~~
agilebyte
I have heard that string operations (and thus, I suspect, JSON parsing too)
are slower in Erlang. Maybe it is not included because Erlang was built not
for raw speed but for stability, hot-swapping code, etc.?

~~~
stesch
Erlang was included in round 6 of the benchmark.

~~~
agilebyte
Ah thanks, under Cowboy and Elli.

------
banachtarski
None of these numbers are significant! Give me something that tries hundreds,
if not thousands or tens of thousands, of simultaneous requests. Then we'd
have a real benchmark that would probably push a lot of these over the edge in
terms of mean latency and especially tail/peak latency.

~~~
MetaCosm
There has been a group of us consistently pushing for exactly this. The
maintainers of the benchmark are exceptionally resistant to this idea...
[https://github.com/TechEmpower/FrameworkBenchmarks/issues/49](https://github.com/TechEmpower/FrameworkBenchmarks/issues/49)
...
[https://github.com/TechEmpower/FrameworkBenchmarks/issues/36](https://github.com/TechEmpower/FrameworkBenchmarks/issues/36)
...
[https://github.com/TechEmpower/FrameworkBenchmarks/issues/48](https://github.com/TechEmpower/FrameworkBenchmarks/issues/48)
... there are even more issues asking for concurrency increase, just search
for concurrency.

It is silly that such a rich and awesome set of benchmarks never pushes on
concurrency, one of the major points of failure "in the wild" -- more common
as you become the go-between for your users and some set of APIs -- users
stack up on one side, waiting connections stack up on the other.

~~~
bhauer
There is a very simple reason for this: we do not yet have a test that is
designed to include idling. One of the future test types [1], number 12 on the
list, is designed to allow the request to idle while waiting on an external
service.

Until we have such a test type, there is no value in exercising higher
concurrency levels. Outside of a few frameworks that have systemic difficulty
utilizing all available CPU cores, every framework is fully CPU-saturated by
the existing tests.

With that condition, additional concurrency would only stress-test servers'
inbound request queue capacity and cause some with shorter queues to generate
500 responses. Even at our 256 concurrency (maximum for all but the plaintext
test), many servers' request queues are tapped out and they cope with this by
responding with 500s.

The existing tests are all about processing requests as quickly as possible
and moving onto the next request. When we have a future test type that by
design allows requests to idle for a period of time, higher concurrency levels
will be necessary to fully saturate the CPU.

Presently, the Plaintext test spans to higher concurrency levels because the
workload is utterly trivial and some frameworks are _not_ CPU constrained at
256 concurrency on our i7 hardware. As for the EC2 instances, their much
smaller CPU capacity means the higher-concurrency tests are fairly moot. If
you switch to the data-table for Plaintext, you can see that the higher
concurrency levels are roughly equivalent to 256 concurrency on EC2.

For example, jetty-servlet on EC2 m1.large:

    
    
          256 concurrency:  51,418
        1,024 concurrency:  44,615
        4,096 concurrency:  49,903
       16,384 concurrency:  50,117
    

The EC2 m1.large virtual CPU cores are saturated at all tested concurrency
levels.

jetty-servlet on i7:

    
    
          256 concurrency: 320,543
        1,024 concurrency: 396,285
        4,096 concurrency: 432,456
       16,384 concurrency: 448,947
    

The i7 CPU cores are not saturated at 256 concurrency, and reach saturation at
16,384 concurrency.

We are not against high-concurrency tests; we are just not interested in high-
concurrency tests where they would add no value. We're trying to find _where
the maximum capacity of frameworks is_ , not _how frameworks behave after they
reach maximum capacity_. We know that they tend to send 500s after they reach
maximum capacity. That's not very interesting.

All that said, once we have an environment set up that can do continuous
running of the tests, I'll be more amenable to a wider variety of test
variables (such as higher concurrency for already CPU-saturated test types)
because the amount of time to execute a full run will no longer matter as
much.

[1]
[https://github.com/TechEmpower/FrameworkBenchmarks/issues/13...](https://github.com/TechEmpower/FrameworkBenchmarks/issues/133)

~~~
MetaCosm
Don't get me wrong, I am only annoyed because of the wonderful job you guys
do... it seems like such a glaring omission... because IMHO, it is where stuff
often actually "falls apart" in real life... and is some of the most useful
information you can possibly have.

The "trapped between APIs" scenario is one of the concurrency-stressing ones,
as are slow clients with large content and websockets. As your tests show, A
LOT of frameworks do a damned fine job of serving lots of requests quickly --
I think concurrency is a far more interesting differentiator.

Glad to see that most of what I want is "on the list": 11, 12, 15, 19. It
would be nice to see an additional "slow clients" test with large content --
where the limit is how fast the clients can receive server data... meaning
the limit on the server is how many clients it can stack up and handle
concurrently.

~~~
bhauer
Great! Please feel free to join in the discussion about future test types on
the GitHub issue if you want!

Based on your comment and some others, I am presently thinking we'll want to
bump up the priority of adding new tests in the upcoming rounds. Tentatively,
getting the caching test in is low-hanging fruit and may be next up. But the
external API test is probably next after that.

------
shijie
This is a fascinating round for WFB, with drastically different results from
round 7. I'm impressed with the strides Go has made, and also quite impressed
with JRuby. I know the banking app Simple chose it as its language/runtime of
choice, and they seem to leverage it well.

I'd still like to see a good showing from Django, maybe using uWSGI + Nginx. I
might submit a pull request and see if I can't get that included in the next
round. Gunicorn is great and incredibly easy to set up, but pales in
comparison to other platforms when it comes to raw speed.

~~~
ninjay
As far as Django goes, there hasn't been much tuning in general[0]. The only
thing I see them doing is template caching. At the least they should be
running 1.6 with persistent DB connections. Beyond that they have a lot of
middleware enabled that isn't being used.

[0][https://github.com/TechEmpower/FrameworkBenchmarks/tree/mast...](https://github.com/TechEmpower/FrameworkBenchmarks/tree/master/django)

------
riquito
I'd love to see how many lines of code each test required, but it's probably
impossible to do in a fair way.

edit: I meant in the chart, at a glance.

~~~
bhauer
In fact, we have some work in progress on that front, along with the number of
commits to the test implementation directory. Combined, these will give a
rough idea of code length and the amount of community input/review each test
has received.

~~~
riquito
Thank you for your work, it's really interesting

------
mrinterweb
I'm rather surprised to see rack-jruby up as high as it was. I had discounted
Ruby as an option for a very high performance HTTP service, but I guess I was
wrong to do that. Don't get me wrong, I love Ruby and I use it every day. I
just didn't expect to see it among the top performance contenders.

~~~
bhauer
That is principally thanks to TorqBox, the codename for Torquebox 3, which is
built on Undertow. Undertow is the web server that is scrapping with Netty and
Vert.x on the plaintext tests.

Also note that the particular Rack test that performs very well is running a
very small amount of Ruby code. Thanks to these improvements, however, _rails-
jruby_ now consistently tops _rails-ruby_ , if only by a small amount.

See more on TorqBox: [http://torquebox.org/news/2013/12/04/torquebox-next-
generati...](http://torquebox.org/news/2013/12/04/torquebox-next-generation/)

------
desireco42
What always impresses me is just how fast raw PHP is. At times it seems PHP
has been obsoleted by newer platforms, but benchmarks like these make a case
for its use. Especially because it is really easy for beginners to pick up.

~~~
kvtrew76557
The first PHP result for the JSON test comes in at 31.7% of the performance of
the top performer. PHP also occupies the three worst slots at the bottom.

~~~
krapp
That's disappointing. One thing PHP _should_ be really good at is
serializing/deserializing JSON.

------
neya
I know benchmarks should be taken with a pinch of salt, but by round 5 I was
totally into Scala (Scalatra), trying to write my own framework, so I could
get better bang for buck from my EC2 instances, which to be honest, aren't
cheap when compared to say, Digital Ocean.

Around round 6 of these benchmarks, I ditched Scala altogether (and also my
framework). The reason I ditched Scala was not its performance. It was that I
was the only developer in my company who knew Scala, having learnt it from a
couple of books (one was around 800 pages). Obviously, I needed a language
that any other developer would have no problem taking over, and Scala
developers are 1) expensive and 2) not easy to find. Also, Slick (the
database-access library for Scala by Typesafe) wasn't mature yet.

For this reason, around Round 6, I started writing my own framework in GoLang
and used it internally as an 'auxiliary framework'. I will explain more about
this framework soon. In my company, we have about a handful of backend
programmers and a couple of frontend devs. I found that GoLang was much, much
easier to teach my programmers than Scala. Please note - Scala is a brilliant
functional programming language, but if you are thinking that switching from
Ruby/Python/etc. would be easy, then you are wrong.

Now, we have a workflow that allows us to deliver as quickly as possible
without missing out on performance - we write our entire V1 in Rails. We
implement all the UI/frontend-related code and then port it to our GoLang
framework. We have an internal generator: we just feed it our Rails app, the
code is 'ported'/generated on the fly for our framework, and we just deploy
it. So far we lose a little productivity handling the type conversions, bugs,
etc., but it's totally worth it. Go outperforms Rails by a huge margin. I
noticed that using something like Puma helps a lot, but it still is in no way
comparable to our GoLang framework.

As for our framework, it's pretty simple - just organize all the files as you
would in a Rails application (models/views/controllers/config) and everything
just works without many performance hiccups. We use Gorilla components for
stuff like routing and cookies. The rest is slightly adapted from other
frameworks (like Martini).

All in all, I love having JVM-like performance with the productivity of Ruby
in a language like GoLang. And this Round 8 benchmark is nothing short of
impressive. If you haven't tried GoLang yet, you should try writing your own
framework: not only do you learn about all the trade-offs behind the 'magic'
that Rails performs under the hood, you also learn some new stuff and thus
become a better programmer.

I think GoLang is pretty impressive if someone as average as me can write a
framework like Rails, except with better performance. Give it a try, people,
you won't be disappointed.

~~~
finishingmove
The language is called "Go", not "GoLang". Just pointing this out, not because
I'm trying to be smart or anything -- it just irks me to read "GoLang".

~~~
stusmall
You get into the habit of calling it golang because googling for "go" issues
isn't very useful. Golang is the nickname that the community tends to use (or
at least did the last time I did something in Go) for SO and blog posts. It's
sort of become the language's unofficial name.

It's really frustrating that a search engine company would use such an
unsearchable name for a new product.

~~~
mseepgood
> what the community tends to use for SO and blog posts

As a tag in the tag section, not as a name in prose.

------
brickcap
Looks like Erlang frameworks are not represented...

~~~
kainsavage
We have been having trouble with Erlang frameworks since before Round 7.
Unfortunately, I was still getting up to speed and improving the suite mostly
for Round 7/8 and did not get to fix this yet. I do have it topping my todo
list for round 9, with the hope being to get them all back in and working
soon.

------
pfraze
Cppsp (top of the i7 charts) is some mad science

[http://xa.us.to/cppsp/index.cppsp](http://xa.us.to/cppsp/index.cppsp)

------
hit8run
I started a conversation in #python on freenode and people were a bit outraged
by the way frameworks are compared. Some open database connections and never
close them (example: Go) and others open and close DB connections for every
request (example: Flask). The guys at TechEmpower should review every pull
request and check that it is implemented in a fair way.

------
optymizer1
I find the JSON benchmark a bit misleading. I posted this before, but I'll say
it again: JSON serialization in Go is slow (2.5x slower than Node.js, for
example [1]). The web server, however, is very fast. When they measure
webserver+JSON, Go wins because of its webserver, not because it serializes
JSON faster. If you want to parse a lot of JSON objects in one request (or one
script), or if you have a large JSON object to parse, Node.js will outperform
Go.

That said, I rewrote my app in Go and I'm very happy with the performance,
stability and testability. The recently announced go 'cover' tool is very
useful and a breeze to use.

[1] Here are my benchmarks:
[https://docs.google.com/spreadsheet/ccc?key=0AhlslT1P32MzdGR...](https://docs.google.com/spreadsheet/ccc?key=0AhlslT1P32MzdGREdGl1X0pHWmU0d2xLcHNjbE9Yc0E&usp=drive_web#gid=0)
(includes codepad.org links to the source for each benchmark)

~~~
bradfitz
I optimized the Go JSON serialization in Go 1.2. See
[https://code.google.com/p/go/source/detail?r=5a51d54e34bb](https://code.google.com/p/go/source/detail?r=5a51d54e34bb)
... it went from 30% to 500% faster. It uses much less stack space now, so the
hot stack splits are no longer an issue (also Go defaults to 8KB stacks for
new goroutines now).

------
mmucklo
Regarding symfony2 at the bottom - I submitted a simple pull request to try
and fix some issues with the setup, but it's been sitting and sitting there...

[https://github.com/TechEmpower/FrameworkBenchmarks/pull/650](https://github.com/TechEmpower/FrameworkBenchmarks/pull/650)

~~~
bhauer
Hi mmucklo. We'll get that merged in for Round 9!

------
rartichoke
Benchmarks are fun, but I'll stick with Rails and its simple ways of letting
you cache data.

I'm OK with getting out-the-door response times of 8-15ms while serving 20,000
unique hits a day on a $5/month VPS. The server doesn't even break a sweat,
and it's doing more than serving the app.

~~~
matthewking
What kind of response times do you get on a cache miss though?

~~~
rartichoke
80ms-350ms is normal under typical traffic conditions. It depends on the
complexity of the page.

That's still not terrible though and it could easily improve by massive
amounts with a stronger server. I have not gone crazy with profiling either.
Just using fairly basic cache blocks when applicable.

------
WoodenChair
It's amazing how well a young language like Dart and its frameworks perform in
the multi-query benchmarks. There's still so much more optimization to come;
at this stage it optimistically feels like the sky is the limit!

------
saltvedt
What's up with the number of Rails errors?

------
bsaul
Could anybody explain what Gemini is? I've been to the Eclipse project home,
and I really don't see the link with a web framework benchmark.

~~~
mutagen
Gemini is the private Java framework Techempower uses on their client
projects. I believe questions regarding its performance relative to various
open source and enterprise JVM frameworks inspired the first Techempower
benchmarks.

[http://www.techempower.com/blog/2013/03/28/frameworks-
round-...](http://www.techempower.com/blog/2013/03/28/frameworks-
round-1/#section=questions)

------
liquidcool
Am I the only one shocked to see Grails beat Spring? I mean, I think it's
awesome, but part of me wonders if something went awry in the Spring code. I
know a last minute (breaking) change kept Grails out of Round 7, so perhaps
whatever that was made a big impact.

~~~
ZoFreX
Spring has dropped quite dramatically in most of the tests, I wonder what
changed.

~~~
bhauer
The following was the last notable PR processed for Spring:
[https://github.com/TechEmpower/FrameworkBenchmarks/pull/606](https://github.com/TechEmpower/FrameworkBenchmarks/pull/606)

------
atonse
Interesting to see Go moving up there.

Curious - any reason why you guys don't have ASP.NET tests in Windows with SQL
Server? I fiddled with the filters and found none.

Update: Never mind. I see it now. You don't have Windows tests on EC2.

------
Horusiath
I'm curious why ServiceStack.net has fallen so badly, since their own
benchmarks show a lot higher performance than ASP.NET web applications.

------
nikentic
I cannot find Flask in the list. Any specific reason?

~~~
krg
The submitters didn't create a JSON test, but all the other tests are present.
Switch to the Plaintext test to see Flask.

------
riffraff
dumb question: are we sure these things are doing the same thing?

AFAICT some of the larger frameworks do a bunch of things by default (CSRF and
IP-spoofing checks, session management, ETag generation based on content,
etc.) that simpler solutions don't, but these things can usually be turned
off.

~~~
jsmeaton
Exactly the same things? No, of course not. The non-framework code is the
same, but the framework-specific code (and features/functions) is going to be
very, very different. A lot of pull requests have been sent that turn off
certain features (like unnecessary Django middleware).

Barebones frameworks in the same language are generally going to outperform
heavier frameworks. Feature counts/matrices are not taken into consideration
for these benchmarks.

~~~
riffraff
I'm sorry, I do not understand how this was obvious, I'll see if I can send
pull requests.

Of course barebones platforms will be faster, but doing unnecessary work is a
different thing.

~~~
jsmeaton
It's more obvious if you read the blog posts linked to each of the rounds (but
not this one), since they describe some of the changes that were made to each
framework test to bring them closer to parity.

------
lazyshit
I'm curious as to why Finagle has 0's across the board for everything.

------
anilmujagic
I'm really surprised by the ASP.NET/C# results :-S

~~~
friism
Also note that those tests have tons of errors, so they're probably not
representative.

------
agnsaft
Why is Python doing so much worse than PHP?

------
guotie
which version of go is used?

~~~
nickpresta
Go 1.2rc3

------
veto64
I'm interested in who is financing these benchmarks. Really sorry, but to me
it looks like a new way of doing SEO marketing.

~~~
bhauer
This comment makes me dream of putting together an Indiegogo campaign for the
project so that we can stop using our workstations and finally get some proper
10 gigabit Ethernet hardware. It sure would be nice if the JSON and Plaintext
tests weren't network-limited.

