I see a lot less value in these benchmarks than I did 6 years ago or so when they first started appearing. Maybe I'm showing my age but the speed argument just isn't that important to me any more.
These days when I'm evaluating a framework, I look for strong documentation, integration with tools for observability and deployment, and compatibility with a well understood runtime that makes operations easier.
These things are way more important for building software that works, that scales, and is operationally efficient for a team of engineers to work with for several years.
I think the biggest beneficiary of these benchmarks is Postgres. Its drivers are often the bottleneck and everyone who uses Postgres benefits from more work on them.
And it's also nice to point at TFB when arguing with your architects that 100 rps is not "a lot".
That said, the top C/C++ benchmarks are still using an unofficial fork of the libpq client that supports batching.
Rust and Java were at the top of the benchmark for a while just because they had their own drivers that supported pipelining.
> an unofficial fork of libpq client that supports batching
I thought they were just hand-optimizing their code to the point that the submissions were totally unidiomatic. But now I learn that even dependencies get this kind of attention.
Years ago I submitted a ticket asking for inclusion of memory usage stats (interesting in small environments) and start-up time stats (interesting in serverless environments). But nothing ever happened on that front; I guess it's because TechEmpower is a Java shop and those are not metrics where the JVM typically shines.
I think the biggest benefit of these benchmarks has been to trigger improvements to all the low-hanging fruit in various frameworks. The differences at the high end don't really matter for many people, but I find it quite comforting if a framework is fast enough that I won't run into any performance issues I didn't cause myself. Because those I can fix, but if I hit a bottleneck in the framework it gets so much harder to figure out a way around it.
6 years ago every time you started a project you'd have to argue with idiots saying "you should do it in C because it's fast". Doing some genuine benchmarks of a scenario that's at least a little reflective of the real world is a huge step up from where we were.
Most of these benchmarks are insanely optimized. The code is rarely idiomatic. There are also a few frameworks in there that just wrap C libraries to get good scores.
I wouldn't use TechEmpower as a reliable source for checking framework performance.
I see more value. For some frameworks Techempower rankings have encouraged massive optimizations that save a lot of money in real world usage.
We upgraded an app from Spring 4 to Spring Boot a while back and it divided our hosting cost by 5.
I started caring about speed when I worked on an app that became much bigger than anticipated. If you start with a really slow framework, your only option is a rewrite.
I read once on HN that a lot of modern day deployment strategies and tooling seems like it was created because of very slow apps. When you're running a rails app, you need to scale horizontally, and so you need to orchestrate that, and so you have autoscaling instances, kubernetes, and so on.
But if a single, beefy instance can handle all the traffic your app can reasonably expect to see (think, StackOverflow on just a couple instances), then a lot of operational complexity just dissolves away.
I feel the same way, and not just about these benchmarks but about benchmarks in general.
There is some value, but not a lot. At the end of the day it comes down to how much time and motivation the person implementing the benchmark has. You can get pretty far in just about any language; after all, they can all call into a native implementation, so even an interpreted language can be fast. Then you can spend lots of time micro-optimising for the specific hardware the benchmarks run on.
Agree that you want those things too. But when you are choosing between two different frameworks of similar maturity and you have to choose a tech stack it can tip the decision somewhat, especially if you don't have the time to POC a stack and you need to make a decision.
An example scenario could be a company choosing between Java (Spring) and .NET (ASP.NET Core). Both are quite popular, have documentation, a big ecosystem around them, etc. as you mentioned. If you might need to do low-level/advanced web code at any stage, the plaintext benchmarks may sway them to use .NET over Java, all else being equal. In that benchmark ASP.NET Core trumps Spring by a very large factor (Spring's 2% vs. AspCore's 100%); you then obviously look into the benchmark to compare and contrast, as you shouldn't trust the numbers at face value. Having said that, if the .NET entry is better tailored than the Spring one, it does send a signal that the .NET community is more performance-oriented and willing to maintain the benchmark, which IMO is still a positive.
Obviously, performance generally isn't the main criterion when choosing a framework. I would argue that your "software that scales" isn't important either in most cases.
I don't care much about performance either, especially compared to documentation, but that does not mean these benchmarks are useless. For instance, comparing similar frameworks can show big differences which illustrate different software architectures. Among the Nginx/phpfpm/Mysql fullstack-with-ORM frameworks, Laravel has roughly 8% of the raw stack capacity, while the other mature frameworks can do 2× or 3× more. I suspect this reflects Laravel's abuse of magic (calls to non-existent methods are redirected at runtime to other objects created on-the-fly, and so on). This may not be the real cause, but such a poor performance makes me suspicious about the complexity or the quality.
I think there are some really big differences when you walk down the list (at least in the last TechEmpower benchmarks I've looked at). It could be a factor of 10x or 100x. It might even be 1000x between some slow solutions and the fastest.
Managing 10 servers is perfectly OK; having to handle 100 servers would take a lot more work. A thousand servers would require organizations to hire a lot more people and work a lot harder on automating server management, deploys, etc.
I agree so far as that just using some standard language like C# or Java is probably good enough even though there might be another language and framework that is faster. I will personally stay away from php, python and ruby for larger systems.
To me the interesting part is when you can read the benchmark source code and learn about a high performance feature for the framework or language you’re already using. That’s happened a few times for me with Go, for example. I hope more high performance enhancements are added even for frameworks that would appear near the bottom. That’s what keeps this relevant to me. :) Well, that and knowing roughly what the performance ceiling is for raw socket HTTP in various languages ;-)
It would be very interesting to see an additional column of "prime examples" which link to the biggest or most heavily used web apps/apis built on a given stack.
I suspect the examples would be few at the top of the list and many near the bottom.
> I look for strong documentation, integration with tools for observability and deployment, and compatibility with a well understood runtime that makes operations easier.
Great, let's see those metrics tested and compared in an easy to read table, then we'll have a matrix of qualities on which to base a choice of stack.
Performance doesn't matter until it does. One thing that can be nice is that an efficient backend can handle an enormous amount of load on just one server, which can save you a ton of complexity for various reasons (deployment, integration with other services, and simply less cost). Then on the extreme end, where you have a very high-traffic service, efficiency might let you scale back from ~12 VMs to ~3, which can be a significant cost saving, especially if done with no impact on productivity, which I believe is often the case just by moving from interpreted/dynamic languages to statically typed compiled/JITted ones.
Computers are stupidly cheap slave workers and are easy to spin up compared to dev time. Does it actually matter in almost any profitable business?
The speed argument is a much better argument IMO, but we have to ask whether 100-300ms is worth trading away a great framework with a strong community. (I've not seen anything come near the beauty of Rails in Java; Spring Boot is not it. Last I checked there was no good Sidekiq alternative, which feels kind of vital for most apps.)
If a Rails app can handle only 300 req/s, that's over 1 million requests served an hour. A Spring Boot app handling, say, 2000 req/s? That's 7.2 million an hour, definitely a shit ton more, but you know what, I'll probably just set the EC2 autoscaler to add 7 more instances and pay the extra $1000 a month, because it's still 10x cheaper than developer time. I can see your point that keeping server count low for simplicity is an advantage, but it's not enough to persuade me.
I say all this but I do like the speed argument; I think it's important that we keep web requests fast for a better user experience. I mean, Stripe, Shopify, and GitHub all get by on the slowness that is Ruby, but I will say that lately GitHub has been super slow on each page request while GitLab, which is also Rails, has been very fast.
There is another viewpoint: if you are operating a slim-margin business, keeping infrastructure costs tiny is important, and that's another case for a faster backend for sure.
Now the interpreted language vs. statically typed language argument I can agree with; I hope for a language that sits somewhere between Java and Ruby. It seems Crystal fits that bill at the moment, but it has no big backer and will be years off a great ecosystem if it takes off.
Companies who are feeding 2k+ machines to run their web servers care about cost savings. Hundreds of billions of requests/day, not millions. The database systems to handle that load, networking, power, bandwidth and all the people who maintain it.
> Computers are stupidly cheap slave workers and are easy to spin up compared to dev time. Does it actually matter in almost any profitable business?
The complexities mentioned generally fall on the developers and sysadmins who have to be aware of them and work around them, those people are definitely not cheap slave workers. Screw ups handling these complexities can bring the whole system down.
I worked at an 80-person company which broke and closed because of this. We almost escaped this demise by going from PHP to HHVM, which unfortunately was very alpha at the time, but we simply ran short of money and time. If it had worked, we would have cut our AWS costs 15x and would have saved at least 40 jobs.
Postponing (often indefinitely) the need for scaling out your application can benefit you tremendously, not so much because of the instance costs, but primarily in terms of reduced complexity in all aspects of development and operations.
Also, for a SaaS, keeping the cost per served transaction low can mean the difference between making a profit or a loss in a market that has converged on a price point for a type of service.
>There is another viewpoint: if you are operating a slim-margin business, keeping infrastructure costs tiny is important, and that's another case for a faster backend for sure.
Which is why Rails is only suited to SaaS, where you get paying users early in development. If you rely on traffic for ad revenue it simply doesn't scale, and the majority of the Web is still based on the ad revenue model.
I think some people are knocking these benchmarks for the wrong reason.
I’m a .NET dev primarily. And .net has historically been a terrible performer on these benchmarks. With .net core the asp.net team and community has been working on performance.
While the raw plaintext benchmark is not really representative of a real-world scenario that we would use on a day-to-day basis, it is an indicator of the performance baseline that .NET Core can achieve before you begin adding all the fluff on top.
This coupled with sites like stack overflow showing the gains they get using asp.net and moving from 1 version to the next gives me confidence that .net is not a bad choice these days.
Prior to .NET Core I was more or less ready to switch full time to something else. But now I'm happy using .NET.
Agreed. That MS is working on performance and can show it on the TechEmpower benchmarks is great. I also think some top performers are using special tricks to be that fast, but if you move a bit down the list there are probably more realistic implementations.
Always look at the source. Some of them are very representative of high-performance C#, but others cheat a little. E.g. regex-redux just calls out to PCRE, although .NET 5's Regex would easily be competitive with Java as well.
Mad props to these guys for keeping this massive project going for so long. Seems like a massive amount of work only to get picked apart by my friends here on HN.
Thanks, tomcam. It is a lot of work, but it's what we do in between our paying work. And we find it fascinating and fun.
I think what really hits home are the stories about people upgrading elements at the foundation of their stack—their platform or framework—after those foundations have seen optimization efforts. It is rewarding to see application developers realize dramatic performance boosts within their applications with so little effort; we feel we have contributed to this delight in a small way. For example, check out this post by the Azure Active Directory team [1] where they saw massive performance improvements by upgrading from .NET Framework to .NET Core 3.1 (which isn't even the latest version).
Some people focus too much on specific rank ordering and the optimization efforts made by those who jockey for top spots. It's better to consume the data in rough tiers of performance, however you choose to define those tiers. We tend to encourage people to consider performance as one part of the puzzle when selecting infrastructure software. Several high performance platforms and frameworks (in Java, Rust, C#, Go, Python, and more) are also ergonomic and easy to work with. And using high performance infrastructure software means you can avoid premature optimization in your application code and avoid premature architectural complexity. You can enjoy the trifecta of architectural simplicity, low latency, and good scale headroom.
Obviously we know this project will always receive diverse and critical opinions. That's fine; hackers are a very opinionated people. For those who value the data, ourselves included, we are happy to keep putting the effort in.
I think it is really interesting to see the changes in different languages and platforms over time, and also very useful to the community to see the optimisations folks come up with. Thanks for the continued work, Brian and team!
I just posted a super quick addition of a title attribute on the language color box. So you should be able to hover over the little colored box at the left of framework names to see the language name for that color.
I can only imagine that, while you must have been ready for an onslaught of criticism, you never expected to end up with such a staggering workload so many years later. Literally unimaginable to me.
I was surprised to see a JavaScript framework called just-js show up at rank 9. In case people are wondering, it looks like a new V8 runtime environment. https://github.com/just-js/just
- Even though it's JS, just-js makes almost no use of dynamic memory allocation.
- The author wrote his own PostgreSQL driver that supports batching requests.
- It wraps high-performance C++ libraries.
Even if the TechEmpower implementation is far more verbose and complex than a mainstream JS framework, the amount of work behind just-js and its performance are just impressive!
> Some framework is using extreme optimization tricks, so be careful
That accurately represents how I've been forced to run applications in production too, when under huge stress and we're trying to squeeze everything out of the existing hardware.
Welcome to _Hacker_ News ;)
Edit: And absolutely yes, check out the code behind it. Every benchmark has their flaws and biases so it's important to verify before making conclusions.
The code doesn't make sense, weird-ass abstractions, but it works beautifully. I never could understand where the dependency injection parameters came from lmao
Normally I just click on Fortunes, since I think that is the most representative of a typical web workload.
And then click on Filter and select Full Stack Framework and Full ORM only. Again, most people will be using some sort of web framework with a full ORM.
I am surprised the top result is from PHP. For many years it was always Java. And it is from a framework called Ubiquity [1] which I have never heard of before.
There are also Crystal and Lucky [2], which is exciting because they are getting very close to 1.0.
Impressed by PHP's performance - it makes an entrance at position 20. If you exclude Javascript, the next interpreted language is Lua at position 81, then Python all the way down at 206. Ruby makes its first appearance at 255.
Julia and Nim
These are both new, modern languages that tout their performance as a benefit. It's a shame they did not take part in the benchmarks.
***
Regardless of what you think of the benchmarks, the rankings do affect people's perceptions of languages and frameworks (both positive and negative). For example, I don't use PHP, but these benchmarks tell me that PHP has leapfrogged over Python and Ruby in the performance stakes for web development. And not just by a small margin, but by a significant difference.
I like the TechEmpower benchmarks, but you have to read the code to really understand the comparisons being made.
In many languages, a bunch of web server code might end up being implemented in C and not the language itself. When you're creating a minimal endpoint for a benchmark, you might be exercising that C code and not the language itself. For example, the PHP implementation uses `htmlspecialchars` to encode the information which is implemented in C. That doesn't tell you much about the performance of PHP, but just that it has an optimized HTML escaper. The `asort` function used to sort the results is implemented in C and has more limited functionality compared to more general sorting functions that might be able to take lambdas in other languages. The PHP implementation even takes advantage of `PDO::FETCH_KEY_PAIR` which will only work if there are only two columns.
Likewise, the PHP implementation doesn't use templates, but manually builds strings. The fastest Python implementation actually renders a Jinja2 template: https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast.... That's much more realistic in terms of what you'd do in the real world, but you're going to be carrying the overhead of a real-world template system like Jinja2. Part of Python's failure here is that no one wanted to implement an optimized Python version that would just build a string instead of rendering a template.
Changing the test constraints a bit would ruin a lot of the advantages that PHP used there. Let's say that you had to retrieve three columns: `id, sort_key, fortune_text` and sort on the `sort_key`. Now you need to read more information back rather than just being able to make it an associative array (hash map). You need to be able to sort based on that sort_key which means probably giving a sort call a lambda.
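To make that concrete, here's a rough sketch (in Go, just to keep it short) of what the more general version looks like; the struct and column names are made up to match the hypothetical constraint above:

```go
package main

import (
	"fmt"
	"sort"
)

// Hypothetical row shape for the modified constraint: three columns
// instead of an id => text associative array.
type Fortune struct {
	ID          int
	SortKey     string
	FortuneText string
}

func main() {
	rows := []Fortune{
		{1, "b", "A computer program does what you tell it to do."},
		{2, "a", "Any program that runs right is obsolete."},
	}

	// Sorting on an arbitrary column means handing the sort a comparison
	// function, rather than leaning on a specialized helper like PHP's asort.
	sort.Slice(rows, func(i, j int) bool {
		return rows[i].SortKey < rows[j].SortKey
	})

	for _, r := range rows {
		fmt.Println(r.ID, r.SortKey, r.FortuneText)
	}
}
```

Once you need an arbitrary sort key you're paying for a comparison callback per element, instead of a specialized C-implemented helper doing all the work for you.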
This isn't limited to PHP. A bunch of Go implementations do things like allocating a pool of structs and then re-using the same structs to avoid the garbage collector. A lot of implementations create result arrays sized so that they won't need to be re-sized (creating additional allocations and additional GC work). The rules say this isn't allowed, but they do it anyway.
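For anyone who hasn't read those submissions, the pattern looks roughly like this; it's a hypothetical sketch of the technique, not code from any particular entry:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"strconv"
	"sync"
)

// World mirrors the benchmark's row type; the name is illustrative.
type World struct {
	ID           int32 `json:"id"`
	RandomNumber int32 `json:"randomNumber"`
}

// A sync.Pool lets handlers reuse result slices across requests,
// avoiding a fresh allocation (and the GC work that follows) per request.
var worldPool = sync.Pool{
	New: func() interface{} {
		// Pre-size to the maximum query count (20 in the multiple-query
		// test) so the slice never needs to grow.
		s := make([]World, 0, 20)
		return &s
	},
}

func queriesHandler(w http.ResponseWriter, r *http.Request) {
	n, _ := strconv.Atoi(r.URL.Query().Get("queries"))
	if n < 1 {
		n = 1
	} else if n > 20 {
		n = 20
	}

	wp := worldPool.Get().(*[]World)
	worlds := (*wp)[:0] // reuse the backing array, reset the length

	for i := 0; i < n; i++ {
		// A real submission would fetch a random row from the database here.
		worlds = append(worlds, World{ID: int32(i + 1), RandomNumber: 42})
	}

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(worlds)

	*wp = worlds
	worldPool.Put(wp) // hand the slice back for the next request
}

func main() {
	http.HandleFunc("/queries", queriesHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```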
So, before comparing tests, I'd look at the implementations to make sure that they're comparable. A Django implementation that actually returns objects and is rendering templates and looks like a canonical Django implementation is very different from an optimized PHP version trying to avoid running any PHP as much as possible. When we start looking at the popular PHP frameworks which will be executing a bit of PHP like CodeIgniter or Laravel, we start seeing performance similar to Python frameworks as the PHP code is doing similar things like rendering templates. It just happens that no one implemented a Python version that didn't use a fully-fledged template renderer.
And this is the weird thing: the benchmarks changed your perception of PHP while comparing things that weren't similar. I think PHP is often faster than Python and Ruby, but probably not to the extent that your perception might be given these benchmarks.
I actually find it fascinating to look at the implementations and see which communities care about realistic implementations vs. leveraging all sorts of tricks to win the benchmark.
This is intentional. E.g. not using a templating engine shows the raw power of the HTTP stack. Good contenders have multiple entries in the list (e.g. aspcore) going to various depths of the stack. For ASP.NET Core it is (a) a raw HTTP server, (b) with middleware, and (c) with MVC. And when you read it this way, you can compare apples with apples.
The original author of actix has a new framework called ntex. It looks like it was a fork of actix that then went its own way. One of the differences I was able to spot was that it appears to use a (radix?) trie for the router.
A couple of frameworks use a radix tree for routing, e.g. ASP.NET Core. Sadly, routing isn't even part of the TE benchmarks. There are some applications out there with 500+ routes, so I think it is pretty relevant.
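To make the routing discussion concrete, here's a toy sketch of the idea (in Go for brevity). A real radix-tree router compresses shared prefixes into single edges and handles route parameters; this simplified version just keys on whole path segments:

```go
package main

import (
	"fmt"
	"strings"
)

// node is one level of a path-segment trie. A real radix tree would
// additionally merge chains of single-child nodes into one edge.
type node struct {
	children map[string]*node
	handler  func() string
}

func newNode() *node { return &node{children: map[string]*node{}} }

func (n *node) add(path string, h func() string) {
	cur := n
	for _, seg := range strings.Split(strings.Trim(path, "/"), "/") {
		next, ok := cur.children[seg]
		if !ok {
			next = newNode()
			cur.children[seg] = next
		}
		cur = next
	}
	cur.handler = h
}

func (n *node) lookup(path string) (func() string, bool) {
	cur := n
	for _, seg := range strings.Split(strings.Trim(path, "/"), "/") {
		next, ok := cur.children[seg]
		if !ok {
			return nil, false
		}
		cur = next
	}
	return cur.handler, cur.handler != nil
}

func main() {
	root := newNode()
	root.add("/users/list", func() string { return "users" })
	if h, ok := root.lookup("/users/list"); ok {
		fmt.Println(h()) // lookup cost is proportional to path depth, not route count
	}
}
```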
What is the most fun and shocking to me is to browse the code at https://github.com/TechEmpower/FrameworkBenchmarks and see just how unergonomic and messy the code gets in something like this. Twenty-something files in 10 directories, plus fiddly config files, isn't uncommon.
Of the bunch, by only looking at the code, which of them would you like to get dumped on you 2 weeks before a deadline? Which is the most ergonomic?
Yeah, the benchmark is no longer useful due to all the micro-optimizations in the code that never happen in the real world.
Most of them check the Nth letter of the HTTP method to do routing, hard code the content-length, etc. This benchmark shows which language is the fastest to throw the hardcoded data into the socket.
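For anyone who hasn't looked, the trick is roughly this; a hypothetical sketch in Go, not copied from any entry:

```go
package main

import (
	"bufio"
	"net"
)

// The response is fully hardcoded: the Content-Length never changes
// because the body is fixed, so nothing is computed per request.
var plaintextResponse = []byte("HTTP/1.1 200 OK\r\n" +
	"Content-Type: text/plain\r\n" +
	"Content-Length: 13\r\n" +
	"\r\n" +
	"Hello, World!")

func serve(conn net.Conn) {
	defer conn.Close()
	r := bufio.NewReader(conn)
	for {
		// "Routing": only the first byte of the request line is checked;
		// 'G' is taken to mean "GET /plaintext" and everything else about
		// the request is ignored.
		line, err := r.ReadString('\n')
		if err != nil || line[0] != 'G' {
			return
		}
		// Skip the remaining headers up to the blank line.
		for {
			h, err := r.ReadString('\n')
			if err != nil {
				return
			}
			if h == "\r\n" {
				break
			}
		}
		conn.Write(plaintextResponse)
	}
}

func main() {
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		panic(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go serve(conn)
	}
}
```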
Meanwhile, the Django entry[1] is the simplest and closest to a real-world application.
I wrote this comment based on pre-release benchmarks, based on Physical benchmarks run against Git commits from Jan 23 - but it should still largely apply to the final benchmarks:
It’s kind of frustrating to spend a few hours investigating preview results of the latest techempower benchmarks just to come to the conclusion that (a) truly high performance requires tuning your code to linux, to the network interface cards, and a deep understanding of what your code is doing and (b) right now at the top of the benchmarks, common high performance Rust code is mostly “unsafe” memory-wise, and the same is true of C++ for obvious reasons. After that:
* When looking at garbage collected languages with very minimal implementations (Jooby mostly), Java can come out on top but requires 4 Gigs of memory which is more than I would ever like to use.
* After Java comes C# but performance slows down the second you want to do anything less optimized, like use third-party libraries.
* Finally, we have Go, which in a surprising twist is basically neck and neck with some pure PHP server implementations. And once I include those, I might as well mention there's a JS implementation that skips the Node.js async event loop and offers performance on par with C++ and Rust, but it heavily uses C++ internally and is thus more “unsafe”.
What this tells me is kind of what I expected, implementation matters more than language… at this point, memory usage aside for Java, an efficient HTTP/1.1 server can be implemented in basically any language and when tuned or stripped down tends to run faster than the commonly used web servers, like Nginx, Tomcat, Express, etc. Often this means writing plain text RAW HTTP to a socket and managing sockets efficiently.
Which brings us back to Go, though. Of all the implementations I’ve looked at so far, even the PHP one, only the Go benchmark is written like “standard Go” such that most people writing a service in Go will write something high performance without trying to make it high performance. Effectively, you don’t need to ensure all your dependencies use special optimized code routines to get something relatively optimized working quickly in Go with low memory overhead unlike Java.
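For contrast, “standard Go” here means roughly the following; this is a minimal sketch, not the actual submission:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type message struct {
	Message string `json:"message"`
}

func main() {
	// Plain net/http from the standard library: no manual epoll handling,
	// no custom event loop, no hand-rolled HTTP parser.
	http.HandleFunc("/json", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(message{Message: "Hello, World!"})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```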
I’m a bit shocked by this conclusion as I was really hoping that Rust’s high performance use cases would win out, as it’s true that Rust can get 3x faster than Go, but on the Rust side, both actix and ntex are too immature as neither’s hit 1.0 yet, while tokio has hit 1.0 but its server, warp, is slower than Go.
What's irritating is that the implementations differ across the TechEmpower entries. For example, there’s a benchmark that’s supposed to measure 20 calls to Postgres, but one benchmark gets to the top by making only one long SQL statement that changes 20 rows. Another implementation uses pgx’s Batch functionality to send multiple queries in batches (a big timesaver, but not technically standard libpq), but then the standard Go variant doesn’t use batching even when it could (which means we can’t compare custom implementations to generic Go ones fairly): https://github.com/TechEmpower/FrameworkBenchmarks/search?l=...
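If you haven't seen pgx's batching, it looks roughly like this. This is a sketch assuming pgx v4; the connection string and column names are guesses modeled on the benchmark's world table, not taken from the actual submission:

```go
package main

import (
	"context"
	"log"
	"math/rand"

	"github.com/jackc/pgx/v4"
)

func main() {
	ctx := context.Background()

	// Hypothetical connection string; the TFB setup uses its own credentials.
	conn, err := pgx.Connect(ctx, "postgres://benchmarkdbuser:secret@localhost:5432/hello_world")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// Queue all 20 lookups up front; pgx sends them over the wire together
	// instead of paying a full round trip per query.
	batch := &pgx.Batch{}
	for i := 0; i < 20; i++ {
		batch.Queue("SELECT id, randomnumber FROM world WHERE id = $1", rand.Intn(10000)+1)
	}

	results := conn.SendBatch(ctx, batch)
	defer results.Close()

	// Read the results back in the same order they were queued.
	for i := 0; i < 20; i++ {
		var id, randomNumber int
		if err := results.QueryRow().Scan(&id, &randomNumber); err != nil {
			log.Fatal(err)
		}
	}
}
```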
> For example, there’s a benchmark that’s supposed to measure 20 calls to Postgres, but one benchmark gets to the top by making only one long SQL statement that changes 20 rows.
That seems like a clear cheating case, over-optimised to the circumstances of the benchmark.
> right now at the top of the benchmarks, common high performance Rust code is mostly “unsafe” memory-wise
I only see a single "unsafe" in the Rust implementation[0]? And that's only in the "raw" Actix instance, not the pg. Or are you talking about Actix framework being mostly unsafe?
> only the Go benchmark is written like “standard Go” such that most people writing a service in Go will write
Maybe it's just me but the Actix apis seem pretty ordinary.
Yep, that's what I meant. To me, until Rust unsafe implementations hit 1.0 as web frameworks, not just Tokio, I can't consider them as stable or bug-free as more mature web servers. It's arbitrary, yes, but it's also an important distinction. 1.0 generally means "you can rely on this to not change, and to have a useful set of functionality" and bug fixes equivalent to an LTS release. This is especially true if functionality is common enough that it makes its way into a standard library or conventional best practices. Generally, it has to hit "1.0" before it does so, though, in any stable manner...
The Actix unsafe code has long since been resolved. There are very few instances of it, and each is small, specific, easy to reason about, and wholly internal. I'd guess the risk is about on par with—or safer than—VM runtime languages written in C/C++.
Far as 1.0 in Rust, the predominant culture is to treat "0.X.x" releases roughly equivalent to a "1.x" semantic. I have run into very few issues of API breaking changes in "0.X.x" libraries, and even those are basically handed to you thanks to Rust's robust compile-time checks.
Of course your mileage may vary, but I've had a way worse time with even well-established breaks in Ruby, Python, and Go libraries. I understand your risk aversion; I am similar. But I've found Rust to be an absolute delight for stability. The shaky ground of the first few years has largely given way to impressive robustness over the last few.
I strongly urge you to consider Rust if it matches your needs. You may be quite pleased!
The 0.x stability is news to me. I would suggest the community would get wider adoption by moving to 1.0 for components that others can build on, such as web servers.
To me, that Tokio hit 1.0 is a significant milestone, one that encourages its use for new projects. I personally look forward to the first web server widely adopted that hits 1.0 and relies on an async implementation that’s also 1.0…
I agree that Rust is very attractive, but it’s hard for me to argue for Rust against Go’s stability if you can live with Go’s more limited syntax. If Rust could promise the same stability as you’d have writing a JSON API or HTML-templates website in Go, I’d start picking Rust for new projects.
My perspective, to be clear, is that the heisenbugs likely still exist in Rust servers but have been mostly ironed out in Go ones. I’m reminded of Hyrum’s Law - https://www.hyrumslaw.com/ - under the assumption that as libraries hit 1.0, their behaviour can be more reasonably relied upon than pre-1.0…
> on the Rust side, both actix and ntex are too immature as neither’s hit 1.0 yet
Maybe so, but we've been running https://pernos.co on Actix for a couple of years now with no issues. We didn't have to use "unsafe" either, and I understand that Actix's use of "unsafe" internally has also been significantly reduced.
Tokio's "warp" is much less mature than Actix AFAICT. Version numbers can be misleading.
That's a good point. And since AWS put forward their support for Rust recently, along with Microsoft's interest and the new Rust Foundation, the future is looking rather bright for Rust. But it reminds me still of the early days of React, the early days of TypeScript, or the early days of Apache, or PHP before Laravel, when there was room for a lot of competition and the "best practices" had yet to work themselves out.
Part of it is that Rust as a language moves quickly, I agree, but there's right now a lot of confusion between picking the pre-1.0 Actix or the post-1.0 Tokio, with AWS seemingly endorsing Tokio but with practical web development restricted to Actix Web for now. This confusion highlights the risky nature of pre-1.0 web servers -- it's not that you can't be productive and stable, but you might need to keep living on the edge for a long time. If you have a small service, that's fine. If you're looking for an API that won't change for 3 years or more, it's harder to suggest Rust at this time...
> truly high performance requires tuning your code to linux, to the network interface cards, and a deep understanding of what your code is doing
With absolutely no snark intended: what alternative do you see (or imagine) to this? Applying quite a bit of mechanical sympathy to tune a program to the system and vice versa seems essential and inescapable.
The incorrect alternative I meant is the idea that you can "just switch languages" or frameworks and get an instant speed boost. I mean, yes, sometimes your web server, caching proxy or CDN can act as a silver bullet, and rewriting can provide its own benefits if it simplifies the code, but assuming one language is better than another because "it ranks higher" is only one way of looking at the problem. It's more important to consider other factors, from low-level Linux to high level system architecture, perhaps with something like OpenTelemetry to pinpoint distributed hotspots, for example. Yes, you can learn something from these benchmarks, but alone benchmarks and frameworks can't speed up your app. :)
These always make me feel bad. Everything I use or have used is in the < 15% performance rank.
Do the ones at the top (like Actix Pg for example) provide everything you need to do real development, or are they stripped down? In other words, is this comparing track bikes to cross country bikes?
If you read the source code, it’s fair to say that while you can use the frameworks and implementations at the top to build your own systems, they would likely perform slower than these benchmarks as you add more features to them. Now, you might not need to add features, and that’s fair, but then you should compare implementations equally and that’s impossible when not every benchmark is written with identical algorithms and implementation code, for, well, obvious reasons.
It’s worth looking at low level HTTP, socket and concurrency management if you want faster performance, but that’s not really a language-exclusive feature at that point. And the more realistic you make the benchmark — the more communication between microservices, for example - the more your application architecture, deployment hardware, kernel and network tuning, and so on can play a role.
I am reminded of http://rachelbythebay.com/w/2020/10/14/lag/ for example, as something really low level you probably don’t need to worry about… until you do. http://rachelbythebay.com/w/2020/05/07/serv/ also. The same is true of most of the performance optimization at the top. Really fast? Yes. Useful? Depends on the rest of your code. Are you writing haproxy? How much does your app really need to do? Read the benchmark source code and get inspired, maybe.
My conclusions for the moment: Switch to Go if low memory usage and high performance is as critical to you as post-1.0 stability. Go is generally stable these days ;-) Otherwise if instability can be tolerated but you want high performance, use one of these newer web server techniques and write a web server in unmanaged C++ that has very minimal functionality with language bindings. Just-js can serve as an example. Heck, the PHP benchmarks show that if you use PHP to write your own HTTP server you can still achieve high performance. That tells me the advantage here goes to web servers that literally “do less” rather than picking a language as faster over another language… especially when most (all?) languages can interface with C or C++ to do the HTTP layer at high performance while writing your code in whatever language you choose…
I've noticed Rust frameworks invading the top ranks in the last couple of rounds. Of course, there was lots of... controversy over unsafe in actix.
I wonder if the current actix-core is the rewritten/safer one once a bunch of Rusters wrestled/annoyed the original author into handing it over. If so, it's still near the top.
Actix is entirely a community project now, and a lot of scrutiny has been placed on the few instances of unsafe code that still remain. They should be correct/verified at this point.
Techempower classifies each one. Actix is what they call a "platform"
>a platform may include a bare-bones HTTP server implementation with rudimentary request routing and virtually none of the higher-order functionality of frameworks such as form validation, input sanitization, templating, JSON serialization, and database connectivity
As you move into team developed, non-trivial, real-world production apps, you move further down the list.
Even if you use the top ones in the list, you will have to build abstractions in code to handle software complexities. Those abstractions will reduce performance.
I do wonder if we could use AI to dynamically remove those abstractions, to the extent possible, and transform abstracted, multi-layer code into simple functions / compile targets, improving performance. Sort of like looking at the project, at this instant, and folding all database / processing code into higher-performance code that reduces function calls or object instantiations.
Really depends upon what you're doing. A lot of these benchmarks get a lot closer together depending upon that. E.g. compare "fortunes" to "multiple query".
Having been part of a startup using Tcl to do a Rails like stack in 2000, and later migrated to ASP.NET, I learned to only use development stacks with JIT/AOT compilers out of the box.
Yes they offer everything we need for real development at enterprise scale.
Note also, another reason to browse GitHub is to look at how the various implementations are written. For example, sometimes frameworks use fewer SQL statements to complete the same work, or they configure database libraries with different settings. I’ve noted a few instances where gaming these benchmarks is possible simply because a new framework implements the algorithm differently.
Not to say they don’t have value — they do — but keep in mind your code may look very different from benchmark code. I do think it’s worth looking at the code to examine why some approaches scale better than others, though!
Rarely is it the case that more code = faster performance, but sometimes features you need are sacrificed for extra performance even when a benchmark is classified as “realistic”. Viewing source code is how you can determine which frameworks actually best fit your needs, rather than just which ones performed well in this test…
For example, it’s easy to think that raw Go performance is terrible compared to Rust or C++, but it’s worth pointing out that the raw Go example uses commonly available standard libraries baked in to the language to accomplish almost everything it does. Most other examples at some level or another have to deal with epoll and concurrency manually themselves or via the web server replacements they use. Based on this, the overhead or raw implementation of socket management is often what is being benchmarked. Implementations like just-js use C++ and epoll under the hood to implement their server, like most of the other high performance examples at the top of the benchmarks.
The question often then becomes: do you want to write your own HTTP server, or take advantage of an API someone else has written that might have more features or middleware? And remember that configuration matters; I forget the details off the top of my head, but the Go examples with a newer web server had more performance optimization than the raw Go benchmark did. Obviously I should submit a PR to fix this, but my ultimate goal here is to say that reading the source code will both help you become a better developer and catch the shortcuts and techniques used by high-performance implementations…
Regarding Go... look at the aspcore variants. The highest-ranking one is completely "bare metal", while the normal full MVC use case ranks somewhere in the middle. The big stacks should contribute multiple variants so you can compare apples with apples and leave the oranges out.
Probably the lack of HTTP keep-alive support - it's explicitly disabled due to bugs in the older HTTP library the stable version uses (seen as a large number of errors in previous rankings), so each request needs a new TCP connection.
> These days when I'm evaluating a framework, I look for strong documentation, integration with tools for observability and deployment, and compatibility with a well understood runtime that makes operations easier.
> These things are way more important for building software that works, that scales, and is operationally efficient for a team of engineers to work with for several years.
Everything else is just vanity metrics.