
TechEmpower Framework Benchmarks Round 12 – Race Against the Thermite - bhauer
https://www.techempower.com/benchmarks/#section=data-r12
======
douglasfshearer
Sadly this round of tests was rather rushed due to the imminent
decommissioning of the donated hardware used to run these benchmarks [1].

There was no pre-release of the results to contributors, meaning that
frameworks with unusually high error rates were not investigated. I am
thinking particularly of Phoenix [2], as this is the first round in which it
was included.

Hopefully for round 13 they will have better availability of hardware, and be
able to follow the release process they have previously used.

For anyone able to help, they are looking for a donor for test machines [3].

[1] [https://groups.google.com/d/msg/framework-benchmarks/Hq4qxj0...](https://groups.google.com/d/msg/framework-benchmarks/Hq4qxj0TL9o/WWyd1Ke1CQAJ)

[2] [https://groups.google.com/d/msg/phoenix-talk/6cjJx61FPBg/LGs...](https://groups.google.com/d/msg/phoenix-talk/6cjJx61FPBg/LGskk-XREgAJ)

[3] [https://groups.google.com/d/msg/framework-benchmarks/IiDBC6l...](https://groups.google.com/d/msg/framework-benchmarks/IiDBC6l1QuQ/gDyw6OR4BwAJ)

------
clishem
Happy to see that fasthttp (Go) and Phoenix (Elixir/Erlang) are now included!
fasthttp boasts some impressive performance! The Phoenix results are not
really reliable yet, because the implementation seems lacking (178,000+
errors!).

~~~
pdappollonio
Any links for fasthttp?

~~~
pdappollonio
Never mind, just found it
[https://github.com/valyala/fasthttp](https://github.com/valyala/fasthttp)

------
valyala
I'm disappointed that the 'prefork' version of fasthttp failed to run for a
reason that is unknown (yet). TechEmpower should publish the benchmark logs
later, so the problem with fasthttp in 'prefork' mode can be fixed in Round 13.

In this mode, fasthttp starts a distinct server process per CPU core. This
should result in near-perfect scalability on multi-CPU machines.

~~~
clishem
See [https://www.techempower.com/blog/2016/02/25/framework-benchm...](https://www.techempower.com/blog/2016/02/25/framework-benchmarks-round-12/) ,
they needed to get this benchmark out quickly, because they need to move off
the hardware soon. Maintainers weren't able to submit their patches to fix
things. If you ask me, it would have been better not to run this round at all.

------
rilut
Looking at the framework list, I've found Sabina. Why is Spark forked into
Sabina?

------
true_religion
I wonder why the PyPy setup always uses Gunicorn/Tornado while the CPython
setup always uses Gunicorn/Meinheld.

------
vpkaihla
PHP 7 is just as fast as, or a teeny bit slower than, 5. Isn't that a rather
interesting result, given how the PHP community has claimed huge performance
increases in 7?

~~~
mitm2mitm
It's actually faster. My company was able to cut a lot of servers just by
upgrading, and you can find plenty of testimonials around Reddit and Twitter.
The best thing to do is not to trust me, or this horrendous benchmark suite,
but to test it yourself.

That's why I loathe TechEmpower's benchmark, or any similar service trying to
benchmark the world: it's misleading and error-prone by nature. It's not like
comparing two GPUs doing the same thing.

Take PHP, for example: one of its most popular frameworks is benchmarked on
version 4.2 (the latest is 5.2, and there's also an LTS release). That version
was released in June 2014, before several major architecture changes and
refactorings. And they'll probably say it's my fault for not contributing.

