
This is our first follow-up to last week's web framework benchmarks. Since last week, we have received dozens of comments, thoughts, questions, criticisms, and most importantly pull requests. This post shows data collected from a second run on EC2 and i7 hardware, started on Tuesday of this week. A third round with even more community contribution is already underway.

Thanks especially to those who have contributed! We hope this is useful information.




Dear bhauer,

You have no idea how valuable this is to everyone! I know it takes a lot of effort to consolidate all the comments, requests, fixes, suggestions, etc. Personally, I've even seen you respond on the Play! framework Google Groups.

Thank you for being such a down to earth person and helping out the community. You guys rock :)

Thanks!


Thanks so much for the kind words. We're obviously having a great time working on this, and we too think there's a lot of value to this for the community at large.


It's great that you're doing this, and listing stuff like standard deviation in the tables -- but I'd say your focus/interpretation of the data isn't quite right. At least provide the option to sort by standard deviation -- as that might well be more interesting than requests/second?

Maybe I'm just being mean because I was reminded of this essay by Zed Shaw earlier today (I was looking for a rant on CC licenses he'd alluded to, which I didn't find):

http://zedshaw.com/essays/programmer_stats.html

For instance, you state that:

  > In this week's tests, we have added a latency tab (available using the rightmost tab at the top of this panel). On i7, we see that several frameworks are able to provide a response in under 10 milliseconds. Only Cake PHP requires more than 100 milliseconds.

Only Cake PHP requires more than 100 milliseconds on average, that is. But look at Django: an average around 60 ms with a standard deviation around 90 ms (!). Not to mention a "worst" score of 1.4 seconds.


Note that the results data is available as JSON if you want to play with different sorting yourself: https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...

As pfalls said, we're definitely open to different ways to present the data. More sorting would be cool.


Thanks for the suggestion, we're still in the early stages of getting the latency information incorporated, and having the ability to sort by the various metrics makes a lot of sense.


As one of the people complaining about statistics last week (and also, by coincidence, citing Zed's rant), I'm glad to see you are working on it and open to more ideas and improvements!

Also, I like the "sportsmanlike benchmarking game between different communities" vibe I'm getting from all this.

Would be nice if the community helps turn this into the de facto example of how to benchmark correctly.

Now I'll just have to wait and see how Go 1.1 compares ;).


Go 1.1 is looking very strong! Pat (pfalls) just showed me some very preliminary numbers and we are extremely happy with them.

Thanks for the input and constructive criticism, vanderZwan. It has been very helpful to get feedback from yourself and everyone else. I too am particularly happy with the sportsmanlike competition vibe. You have no idea how fulfilling that is to us.


This is the first time I'm actually happy to have built something with Servlets (I have a feeling I'm the only one here who uses them as the go-to framework for hacking something together quickly; I'm an old man). They're indeed dead annoying for REST services, but doable, and blazing fast. The question is: when does my productivity start to be more important than performance? (I think productivity matters for 99% of the lifetime of anything I'll ever build, not performance.)

I'm very surprised to see Play-Scala is not on the same playing field as other JVM-based frameworks; I had high hopes for it. I hope TypeSafe will take that into consideration...

Node.js is the biggest surprise for me, in a good way. I think it changes my plans a little as to what to learn next...

Thanks for a very valuable study; this is great.


The contributions from the community have been great. To those who submitted pull requests that didn't make it into this blog post: We've been overwhelmed (in a good way) with the response and we're working to include as much as we can. Thank you!


Why no .NET C# and Mono?


Their repo at https://github.com/TechEmpower/FrameworkBenchmarks is open for pull requests.

I don't think they have anything against C#; probably no one has committed a C# version to be benchmarked yet.

(edit: whoops, linked the fork first)


I'd be curious how it would perform in C# (Windows/.NET vs. Linux/Mono)... though the environment setup would probably be a bit more involved, and IIS is a very different beast.

I'd think it would probably land somewhere close to Java Servlets, but a bit slower. The framework stack for web requests in .NET is probably a bit heavier than it is in the servlet server in question. I would also expect Mono to be a bit slower than IIS, if only because IIS does very well at pooling resources/threads across multiple requests.

There's also the question of async .NET handling vs. blocking. Most .NET code I've seen is blocking, but there are async options, and as of the 4.x releases they are much easier to use.


Thanks for the link, errnoh. You're precisely right: we want to have C# and .NET tests included, but we didn't find the time to do so ourselves in the past week and have not yet received a pull request. I was not familiar with ServiceStack prior to the feedback we received on last week's test, but from the looks of it, I'd personally like to see how it does.


In their previous test they said that it wasn't fair to test C# on Mono, as it's not as performant as C# on Windows.


Did you guys turn on bytecode caching for all the PHP frameworks? If not, then I recommend everyone ignore these benchmarks until that is done.


So, they've put a ton of effort into this, have been very receptive to community feedback and criticism, and have followed through with a major update to the whole thing. And then you come in with a drive-by recommendation for everyone to ignore the whole thing because of your unfounded and incorrect assumption. Nice work.


I said that because it was the case with the first version of the benchmark.


We did. PHP 5.4.13 with APC, using PHP-FPM, running behind nginx.
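For the curious, the nginx side of that wiring looks roughly like this (a simplified sketch; the socket path and parameters here are illustrative, not our exact config):

  # Hand .php requests off to PHP-FPM (illustrative paths)
  location ~ \.php$ {
      fastcgi_pass   unix:/var/run/php-fpm.sock;
      fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
      include        fastcgi_params;
  }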


Yes, we are using APC for all the PHP tests in this round of benchmarking.


Is there somewhere we can see the settings for APC?

Thanks for the great work!



Having max/min spare servers set to the same amount is a bad idea. This incurs a substantial amount of process swapping, as on every single request php-fpm is going to try to ensure there are precisely 256 idle servers.


Ok, help us out: given that we have wrk set to max out at 256 concurrent requests, what would be the ideal tuning for php-fpm? A pull request would be ideal, but you can also just tell us. :)


I'd set the minimum idle to something like 16 or 32; php-fpm will not create more than 32 workers per second anyway.

What happens now is that 256 workers are running and 256 simultaneous requests occur. php-fpm sees 256 workers busy and 0 idle. Since the minimum idle is 256, it attempts to start 256 additional processes.
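In pool-config terms, that suggestion would look something like this (a sketch only; the exact values are illustrative and would need benchmarking, not taken from the actual test setup):

  ; php-fpm pool settings (sketch; values are illustrative)
  pm = dynamic
  pm.max_children = 256      ; hard cap, matching wrk's 256 concurrent requests
  pm.start_servers = 64      ; workers spawned at startup
  pm.min_spare_servers = 16  ; keep at least this many workers idle
  pm.max_spare_servers = 64  ; reap idle workers beyond this count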


I could be missing something, but it looks like you are using the default settings. Have you tried tweaking them at all? Specifically, setting apc.stat=0, which will stop APC from checking the mtime on every request. You'll need to clear the cache with apc_clear_cache() when you make code changes, though. You may also want to look at apc.php to check for fragmentation and adjust apc.shm_size if necessary.
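In config form, that suggestion amounts to something like the following (the values are illustrative assumptions, not tested settings):

  ; apc.ini (sketch)
  apc.stat=0        ; skip the per-request mtime check; clear the cache
                    ; with apc_clear_cache() or an FPM restart on deploy
  apc.shm_size=64M  ; raise if apc.php shows heavy fragmentation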


Fork, benchmark, and create a pull request. :)


I'm hesitant to just change the config to apc.stat=0, since I can't ensure apc_clear_cache() will be called during deployment.

I'll add in apc.php to help make sure other things are tuned properly, though. Different settings could be appropriate for different servers...



