Anecdote of my own: I was working on a web bug that had to generate a few v4 UUIDs on every request, using a version of Node for which the libuuid wrapper wasn't working, so I fell back to the fastest script-based generator I could find, but it was still too slow. A .NET Core version of the same code handled something like 40x the number of requests on the same hardware.
If nothing else, it demonstrates that using the same solution for all your different problems is A Bad Thing, because there are surely things that .NET Core is not particularly good at either.
Because, of course, when you get into the fun of benchmarks there's always a faster option. With .NET Core offering them 20,000 requests per second (on a c3.large), that is a terribly low bar to clear. Again, maybe they're doing something amazing, but many frameworks hit rates in the seven digits on that sort of hardware. And I know .NET Core can process basic requests in the six digits, so even the framework is hardly the limiting factor; it comes down to the logic.
Still terribly flawed, but better than someone rewriting an app and then gloating about speed improvements.
But if we want to use that benchmark, on the most trivial useful task of all -- serializing a simple object to JSON -- node.js beat .NET Core. In the most recent iteration it won by almost 2x (which makes the "node.js is slow, we all know that" bit in the linked piece humorous, given that it beats aspnetcore in every test but plaintext, which is an irrelevant test anyway given that they put their service behind nginx).
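For context, the JSON test in that benchmark suite amounts to serializing a one-field object on every request. A rough standalone sketch of just the serialization step (the loop count and timing here are illustrative, not the benchmark's actual harness):

```javascript
// The canonical one-field payload used by the JSON serialization test.
const payload = { message: 'Hello, World!' };

function serialize() {
  return JSON.stringify(payload);
}

// Illustrative timing only; real runs go through the full HTTP stack.
const N = 1e6;
const start = process.hrtime.bigint();
for (let i = 0; i < N; i++) serialize();
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
console.log(`${N} serializations in ${elapsedMs.toFixed(1)} ms`);
```

The point being: the work per request is so small that the HTTP plumbing, not the serialization, decides the score.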
I don't use node.js. I don't advocate it. I personally think most node.js solutions end up being a spaghetti mess. But these sorts of "the details are hidden but look at the magic we wrought!" benchmarks are worse than useless. They're fool's gold for people choosing platforms and thinking this sort of advocacy is guidance to learn from.
The AgeOfAscent story is not a PR piece; the writer (Ben Adams) is a very active contributor to the Kestrel server behind .NET Core. If he's writing about improved performance, it's probably because he wrote >1/3rd of the patches.
What a weirdly trite response given that you're essentially repeating what I said. And without specifics the linked piece, and its claims about benchmarks, is utterly meaningless.
"Rewrote inefficient code. Now it's faster. Story at 11!"
> The AgeOfAscent story is not a PR piece
Humorously, it was likely a "copy an L1-cached static response" type benchmark.
Okay, so it wasn't PR, it was self-aggrandizement (which is effectively PR). Got it. Though in this article it was linked as a performance improvement from "switching" to .NET Core, when really it was a story of terribly inefficient .NET Core code becoming better, which again, in a nutshell, is meaningless. Cool.
This is a mildly maddening article - not because I have any emotional attachment to either platform - but because it never really gets to the heart of the matter. It's like they can tell us, but they'd have to kill us afterwards.
A 20x improvement cannot be attributed to .NET Core; it must be attributed substantially to some implementation or algorithmic issue that they resolved once they got to the .NET Core platform. There's a big problem somewhere with the node.js version if it is 20x slower. If true, then it's simply a bug, or series of bugs, to be fixed in the node.js code.
Obviously this isn't very scientific without a benchmark suite.
That's because the empty request doesn't do much. Start adding real work and the statically typed JIT compiled .NET code shows its real teeth.
But after removing any 3rd-party libraries and external requests, .NET Core is generally faster.
Is that with reference to "faster than regular .NET" or "faster than node.js"?
You're right either way, but I'm not sure that "it's just faster" is all that relevant. The "just a basic request/response with barely anything in it" case removes anything of interest.
For any normal non-trivial, non-demo setup, the web server is fine. I would look at the performance of data stores and other backends. The performance of "asynchronously hand off to our queuing service" is most likely the overriding factor in a 20 or 30x increase.
Then they likely don't have very much to gain from a framework speedup either.
The fact that Raygun could introduce an "asynchronously hand off to our queuing service" step says quite clearly that "already optimised data retrieval and writes" is not the case we are talking about.
But assuming that dropping something onto a queue is incredibly fast, switching how you serve your HTTP requests can increase performance by some multiple, because it would then be the largest constant in the time taken.
Although I agree, if you're at that point you're probably serving all your requests fine anyway.
I'm using .NET core for a production app now. Team of 16. It's working beautifully. The data access is the bottleneck in 99% of what I have ever worked on, not the web framework. That is why Rails is so popular.
I understand your point, but your statement isn't true in all circumstances. It depends on your business model. Performance begins to matter more when the economics demand it. Say you're working on a game that has a free tier, or a service that offers some things for free and some things paid. Every customer hour costs you a certain amount, and you can expect a certain amount of revenue per customer hour on average. With this kind of a balance, a 20x performance difference can mean a fundamental shift in what kinds of business models can be profitable.
So for something like enterprise software, yeah, add another server. But for something like an MMO or 4chan, performance can be make or break. It's all about cost per user hour vs. revenue per user hour.
So let's take your hypothetical game example... What are your requests doing, exactly? Are they returning cached content? If so, the caching policy is more important. If you're running a game, game state is probably changing constantly, so you will need to connect to a database. Are you indexing things correctly, sharding, etc.? Are you aware that Eve Online runs Python code?
You can point to any hypothetical scenario where you get a million users on a free tier, but then you will either have to get some funding or start charging for your services. In the case of the chat app Discord, they wrote their server architecture in Elixir, which runs on the Erlang VM (BEAM), which destroys .NET Core at scale. Does that mean my team should stop using .NET Core? No. I have to consider the knowledge base of the people I currently have, as well as familiarity with deployment. The cost of the servers is a lot less important in the real world for 99% of applications any of us will write. I would even argue that it would be better to get to market faster with a product that scales poorly and to scale it as needed. Facebook was and still is written in PHP.
I am aware that much of Eve Online is written in Python. But they are very much premium: $15/mo. Would they be able to support their infrastructure for, say, $3 from 1 out of every 10 users once every 3 months?
I'm also aware that Facebook was originally written in PHP. Their business model is different, but they are also world leaders in PHP performance and compiling. There's a reason they invested so much in the performance.
If you really wanted throughput and large numbers of simultaneous connections, you'd be looking at Erlang, anyway.
Simply booting up a single core http server should net you around 4-5k requests per second. Spin up an instance per core and you should be _at least_ in the 10k realm.
There's no such thing as "requests per second" generally. It's requests per second for a specific workload.
So, whether Node can do 4-5k rps with "hello world" doesn't matter much. It's the same engine that needs to also do the further processing for each fuller request.
Because the program is performing CPU compute for each request. Node is single threaded.
It's a shame that the Node project killed off the multithreaded web workers pull request. It sorely needs that functionality. Pools of node processes are a poor substitute.
I recently found Node much faster than .NET Core for a very specific scenario:
Elasticsearch → Node / ASP.NET Core (Kestrel) → NGINX → Client
where every connection must be TLS. So Node / ASP.NET Core has to decrypt traffic from Elasticsearch and encrypt it for NGINX. For a minimal workload the whole trip took 50 ms with .NET Core and 20 ms with Node. Obviously there's something wrong with the .NET setup; maybe some setting with Kestrel and TLS.
Anyway, that's my anecdote. And I wouldn't dare write a blog post about how Node is 2.5x faster than .NET.
Isn't/wasn't part of the hype behind node that it's fast? Compared to what?
> 1. Of course it’s faster, Node is slow
> 2. You must be doing Node wrong, it can be fast
Allow me to offer a third :)
When you rewrite something, it had better be faster! You know a lot more about how it works and how it's used than you knew at the outset. There was a recent post, about going back to Ruby after creating the first version in Clojure, that touches on this point.