How Raygun increased throughput 20x with .NET Core over Node.js (raygun.com)
105 points by aliostad | 60 comments



Here's what I read: they had a solution written in Node.js; they rewrote the solution in .NET Core; it was 20x faster. It's anecdotal, sure, and Microsoft prompted them to write it, and YMMV, but they did a thing and it had a remarkable result so they've remarked on it. It's not realistic to expect them to take the time to build a representative sample that doesn't expose any of their proprietary business logic; they're running a business and have more important things to do.

Anecdote of my own: I was working on a web bug that had to generate a few v4 UUIDs on every request, and using a version of Node for which the libuuid wrapper wasn't working, so I was using the fastest script-based generator I could find, but it was still too slow. A .NET Core version of the same code handled something like 40x the number of requests on the same hardware.
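Something along these lines, for concreteness (a rough sketch built on crypto.randomBytes, not the exact generator I used):

    import { randomBytes } from "crypto";

    // Minimal script-based v4 UUID generator: the kind of thing you fall back
    // to when a native libuuid binding isn't available.
    function uuidv4(): string {
      const b = randomBytes(16);
      b[6] = (b[6] & 0x0f) | 0x40; // set version to 4
      b[8] = (b[8] & 0x3f) | 0x80; // set the RFC 4122 variant bits
      const h = b.toString("hex");
      return `${h.slice(0, 8)}-${h.slice(8, 12)}-${h.slice(12, 16)}-${h.slice(16, 20)}-${h.slice(20)}`;
    }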

If nothing else, it demonstrates that using the same solution for all your different problems is A Bad Thing, because there are surely things that .NET Core is not particularly good at either.


The biggest problem with .NET Core imo is that .NET just seems to be permanently uncool, despite how good it is. Maybe one day this industry will be less fashion driven...


Yeah, but not everyone is a hype-train-riding, super-cool hipster intent on switching languages every 6 months as some kind of fashion statement.


They aren't, but there's still a lot of irrational hate for anything even remotely associated with Microsoft, even amongst people who aren't chasing the hype. I can't really think of any other major general purpose platform that has this kind of PR problem.


I'd love to use it, but being Windows only for years was a non-starter for me.


.NET core is cross platform. It is also far more performant than traditional .NET. The new framework is a huge leap forward.


C / C++ is too, frameworks are for the lazy


Really wish everyone was as hard working as you.


This? Hard working? Psh. If you're not using a magnetic needle and a steady hand you're way too lazy for me.


This, and the AgeOfAscent piece it links to, both read like PR pieces commissioned by Microsoft. Both lack the information needed to determine what exactly they were testing, or what this means for anyone else's problem space.

Because, of course, when you get into the fun of benchmarks there's always a faster option. With .NET Core offering them 20,000 requests per second (on a c3.large), that is a terribly low bar to hit. Again, maybe they're doing something amazing, but many frameworks post rates in the seven digits on that sort of hardware. And I know .NET Core can process basic requests in the six digits, so even it is hardly the limiting factor; it comes down to the logic.

https://www.techempower.com/benchmarks/#

Still terribly flawed, but better than someone rewriting an app and then gloating about speed improvements.


With respect to the TechEmpower benchmarks, if you look at Round 13 [1] they note that ASP.NET got 859 times faster in their benchmarks since Microsoft dedicated effort to improving it.

[1] https://www.techempower.com/blog/2016/11/16/framework-benchm...


Indeed (though it's worth noting that they're talking about the performance of ASP.NET Core versus Mono, which was notoriously disastrous performance-wise, and that story got somewhat misrepresented). In no way am I saying that .NET Core is slow.

But if we want to use that benchmark: on the most trivial useful task of all -- serializing a simple object to JSON -- node.js beat .NET Core. In the recent iteration it won by almost 2x (which makes the "node.js is slow, we all know that" bit in the linked piece humorous, given that it beats aspnetcore in every test but plaintext, which is an irrelevant test anyway given that they put their service behind nginx).
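For a sense of how little that test exercises, the JSON case on the Node side is essentially this (a minimal sketch):

    import http from "http";

    // The TechEmpower JSON test boils down to serializing one tiny object per
    // request; it says almost nothing about real application logic.
    http.createServer((req, res) => {
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ message: "Hello, World!" }));
    }).listen(8080);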

I don't use node.js. I don't advocate it. I personally think most node.js solutions end up being a spaghetti mess. But these sorts of "the details are hidden but look at the magic we wrought!" benchmarks are worse than useless. They're fool's gold for people choosing platforms and thinking this sort of advocacy is guidance to learn from.


It's an MS 'Customer Story' so it is a press release.


Duh, yes, of course the logic makes it much slower than copying an L1-cached static response to the client. But then that is not a use case anyone cares very much about.

The AgeOfAscent story is not a PR piece, the writer (Ben Adams) is a very active contributor to the Kestrel server behind .NET Core. If he's writing about improved performance it's probably because he wrote >1/3rd of the patches.


> Duh, yes, of course

What a weirdly trite response given that you're essentially repeating what I said. And without specifics the linked piece, and its claims about benchmarks, is utterly meaningless.

"Rewrote inefficient code. Now it's faster. Story at 11!"

> The AgeOfAscent story is not a PR piece

Humorously, it was likely a "copy an L1-cached static response" type of benchmark.

Okay, so it wasn't PR, it was self-aggrandizement (which is effectively PR). Got it. Though in this article it was linked as a performance improvement from "switching" to .NET Core, when really it was a story of terribly inefficient .NET Core code becoming better; again, in a nutshell, it is meaningless. Cool.


Yeah, that is the caveat with comparing numbers from a simple HTTP payload benchmark: it doesn't really reflect real workloads. I don't think it's meaningless that .NET got better. Perhaps more competition is better; otherwise we'd all be using Java/Netty. However, I also see the duplicated work in having different stacks.


> when we started to look at .NET Core in early 2016, it became quite obvious that being able to asynchronously hand off to our queuing service greatly improved throughput. Unfortunately, at the time, Node.js didn’t provide an easy mechanism to do this, while .NET Core had great concurrency capabilities from day one.

This is a mildly maddening article - not because I have any emotional attachment to either platform - but because it never really gets to the heart of the matter. It's like they could tell us, but they'd have to kill us afterwards.


I'm with ya. All hype, no details.

A 20x improvement cannot be attributed to .NET Core; it must be attributed substantially to some implementation/algorithmic issue that they resolved once they got to the .NET Core platform. There's a big problem somewhere with node.js if it is 20x slower. If true, then it's simply a bug or series of bugs to be fixed in node.js.


Just a basic request/response with barely anything in it shows that .NET Core is faster than Node.

https://stackoverflow.com/questions/43920942/unexpected-outc...

Obviously this isn't very scientific without a benchmark suite.


It's not 30 times faster. Also, it doesn't matter which one is faster unless there is a huge speed improvement. Your web server is not the bottleneck.


>It's not 30 times faster.

That's because the empty request doesn't do much. Start adding real work and the statically typed JIT compiled .NET code shows its real teeth.


"Your web server is not the bottleneck." - Highly depends on your setup.

But after removing any 3rd-party libraries and external requests, .NET Core is generally faster.


Actually, 3rd-party libraries and application code are where .NET is likely to be faster, unless the Node lib is wrapping a C library. .NET can easily beat JS by an order of magnitude simply by having control of memory layouts and high-performance/specialized data structures. Meanwhile JS has crap structures (it doesn't even have primitive arrays) and everything is a dynamic object where you pray the JIT eventually figures out a constant structure and tries to optimize.


Just to nitpick a bit, JS does have primitive arrays using TypedArray.


This is a good point. It's also why you can still optimize JS on microbenchmarks, but for real-world code where you'd want, say, an array of structs as in C#, there's just no JS equivalent other than writing obfuscated stuff around typed arrays.
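For what it's worth, that workaround usually looks something like this (a hedged sketch): parallel typed-array views over one buffer standing in for an array of { x, y, hp } structs.

    // Emulate an array of structs with typed-array views so the data stays
    // contiguous instead of being a million separate heap objects.
    const N = 1_000_000;
    const buf = new ArrayBuffer(N * 20);          // 8 + 8 + 4 bytes per "struct"
    const xs = new Float64Array(buf, 0, N);       // x coordinates
    const ys = new Float64Array(buf, N * 8, N);   // y coordinates
    const hp = new Int32Array(buf, N * 16, N);    // hit points

    function damage(i: number, amount: number): void {
      hp[i] -= amount;                            // field access becomes index math
    }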


Yes, but if you're going to compare (badly, in this case), you probably want to test it in isolation.


> But after removing any 3rd-party libraries and external requests, .NET Core is generally faster.

Is that with reference to "faster than .Net regular" or "faster than node.js"?

You're right either way, but I'm not sure that "it's just faster" is all that relevant. The "just a basic request/response with barely anything in it" case removes anything of interest.

For any normal non-trivial, non-demo setup, the web server is fine. I would look at the performance of data stores and other backends. The performance of "asynchronously hand off to our queuing service" is most likely the overriding factor in a 20 or 30x increase.


Some have already optimised the heck out of their data retrieval and writes.


> Some have already optimised the heck out of their data retrieval and writes

Then they likely don't have much to gain from a framework speedup either.

The fact that Raygun could introduce an "asynchronously hand off to our queuing service" step says quite clearly that "already optimised data retrieval and writes" is not the case we are talking about.


Asynchronously dropping something onto a queue possibly leaves the queue client implementation as the suspect slowing Node down.

But assuming that dropping something onto a queue is incredibly fast, switching how you serve your HTTP requests can increase performance by some multiple, because the request handling would then be the largest constant in the time taken.

Although I agree: if you're at that point, you're probably serving all your requests fine anyway.


Honestly, add another machine and the problem is solved. You'll be able to afford a team of programmers to rewrite your app in any language of your choosing by the time you feel the slowdown.

I'm using .NET core for a production app now. Team of 16. It's working beautifully. The data access is the bottleneck in 99% of what I have ever worked on, not the web framework. That is why Rails is so popular.


> Honestly, add another machine and the problem is solved. You'll be able to afford a team of programmers to rewrite your app in any language of your choosing by the time you feel the slowdown.

I understand your point, but your statement isn't true in all circumstances. It depends on your business model. Performance begins to matter more when the economics demand it. Say you're working on a game that has a free tier, or a service that offers some things for free and some things paid. Every customer hour costs you a certain amount, and you can expect a certain amount of revenue per customer hour on average. With this kind of a balance, a 20x performance difference can mean a fundamental shift in what kinds of business models can be profitable.

So for something like enterprise software, yeah, add another server. But for something like an MMO or 4chan, performance can be make or break. It's all about cost per user hour vs. revenue per user hour.
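A back-of-envelope with made-up numbers shows the kind of flip I mean (every figure here is hypothetical):

    // Hypothetical: a $0.10/hr instance, ~$0.0001 of revenue per free-tier
    // user hour, and a 20x difference in users served per instance.
    const instanceCostPerHour = 0.10;
    const revenuePerUserHour = 0.0001;
    for (const usersPerInstance of [500, 10_000]) {
      const costPerUserHour = instanceCostPerHour / usersPerInstance;
      console.log(`${usersPerInstance} users/instance:`,
        costPerUserHour < revenuePerUserHour ? "profitable" : "under water");
    }
    // 500 -> $0.0002 per user hour (under water); 10,000 -> $0.00001 (profitable)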


The linked benchmark appears to show a 4x speedup that can potentially be attributed to the view engines used. It's really ridiculous to argue for 20x speedups without evidence that the deficit is in the web framework.

So let's take your hypothetical game example... What are your requests doing exactly? Are they returning cached content? If so, the caching policy is more important. If you're running a game, game state is probably changing constantly, so you will need to connect to a database. Are you indexing things correctly, sharding, etc.? Are you aware that Eve Online runs Python code?

You can point to any hypothetical scenario where you get a million users on a free tier, but then you will either have to get some funding or start charging for your services. In the case of the chat app Discord, they wrote their server architecture in Elixir, which runs on the Erlang VM (BEAM), which destroys .NET Core at scale. Does that mean my team should stop using .NET Core? No. I have to consider the knowledge base of the people I currently have as well as familiarity with deployment. The cost of the servers is a lot less important in the real world for 99% of the applications any of us will write. I would even argue that it would be better to get to market faster with a product that scales poorly and to scale it as needed. Facebook was and still is written in PHP.


There's very little persistent storage involved in what MMOs and many other kinds of game servers do. Unlike web apps and many business apps, they are definitely not data bound.

I am aware that much of Eve Online is written in Python. But they are very premium: $15/mo. Would they be able to support their infrastructure for, say, $3 from 1 out of every 10 users once every 3 months?

I'm also aware that Facebook was originally written in PHP. Their business model is different, but they are also world leaders in PHP performance and compiling. There's a reason they invested so much in the performance.


It may also be that it's easier to write in performance issues or bugs unintentionally with Node.


I agree with you. A bold claim with very vague explanations makes it a bit hard to believe. I'd love to have more details on how they pulled that off.


This is the supposedly "more detailed" post. Here is the original story, https://customers.microsoft.com/en-US/story/raygun , which I objected to: https://twitter.com/aliostad/status/849186214045528064


I'm a .NET fan, but it sounds strongly like these guys knew what they were doing on .NET and not so much on node. It's amazing how many of these rewrite stories actually boil down to "and the new version didn't do some incredibly dumb things the old one did".

If you really wanted throughput and large numbers of simultaneous connections, you'd be looking at Erlang, anyway.


The fact that they were only getting 1k reqs/sec with Node gives me concern. It clearly shows something went wrong there very early on at a very fundamental level. By no means is Node the end-all be-all for performance by any measure, but you should definitely be getting much much higher throughput than 1k reqs/sec.

Simply booting up a single core http server should net you around 4-5k requests per second. Spin up an instance per core and you should be _at least_ in the 10k realm.
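The instance-per-core part is just the built-in cluster module (a minimal sketch; isPrimary is the newer name for what was cluster.isMaster on the Node versions around at the time):

    import cluster from "cluster";
    import http from "http";
    import os from "os";

    if (cluster.isPrimary) {
      // Fork one worker per CPU; incoming connections are distributed across them.
      for (let i = 0; i < os.cpus().length; i++) cluster.fork();
    } else {
      http.createServer((req, res) => {
        res.end("ok");
      }).listen(8080);
    }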


>Simply booting up a single core http server should net you around 4-5k requests per second.

There's no such thing as "requests per second" generally. It's requests per second for a specific workload.

So, whether Node can do 4-5k rps with "hello world" doesn't matter much. It's the same engine that needs to also do the further processing for each fuller request.


My understanding is that they just hand off the payload to another queueing mechanism. So it just returns an ACK and is done.
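If that's the shape of it, the handler would look something like this (hypothetical sketch; enqueue() stands in for whatever queue client they actually use):

    import http from "http";

    // Hypothetical stand-in for publishing to RabbitMQ/Kafka/SQS/etc.
    async function enqueue(payload: Buffer): Promise<void> { /* ... */ }

    http.createServer((req, res) => {
      const chunks: Buffer[] = [];
      req.on("data", (c: Buffer) => chunks.push(c));
      req.on("end", async () => {
        await enqueue(Buffer.concat(chunks));
        res.writeHead(202); // ACK; the real processing happens downstream
        res.end();
      });
    }).listen(8080);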


> The fact that they were only getting 1k reqs/sec with Node gives me concern. It clearly shows something went wrong there very early on at a very fundamental level.

Because the program is performing CPU compute for each request. Node is single threaded.

It's a shame that the Node project killed off the multithreaded web workers pull request. It sorely needs that functionality. Pools of node processes are a poor substitute.


Looking at my Prometheus stats right now, I've got a Node TCP server doing 12k concurrent connections with 3-5% CPU and 130 MB memory. At least 4 DB queries are done per request, sometimes 10 queries. Raygun either has some nonobvious stuff going on, or Microsoft paid them to write PR fluff. After 6 years with Node it seems odd for their performance to be so bad. They should dive into more technical details to explain what specifically was made faster.


You've described a classic I/O-bound server application (waiting on the database) that Node handles well. As soon as you introduce serious computation or server-side rendering into the request handling, performance would fall dramatically.


And Raygun, according to what the website says it does, is doing a significant amount of work.


Care to elaborate on why additional node processes are a poor substitute? They are really easy to reason about, and to manage. What are the downsides? Not trolling, really interested in hearing about pros/cons.


One obvious thing is that multiple processes means no sharing of any kind of state or in-memory cache, so you immediately have to go to an external cache like Redis or whatever, with the additional maintenance and minor performance hit.
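In code, that usually ends up looking something like this (a sketch using the node-redis v4 client; getUser and loadUserFromDb are made-up names):

    import { createClient } from "redis";

    const cache = createClient({ url: "redis://localhost:6379" });
    await cache.connect();

    // Stub for the real database lookup.
    async function loadUserFromDb(id: string) {
      return { id };
    }

    async function getUser(id: string) {
      // Every worker process hits the same external cache instead of a
      // per-process in-memory one, at the cost of a network round trip.
      const hit = await cache.get(`user:${id}`);
      if (hit) return JSON.parse(hit);
      const user = await loadUserFromDb(id);
      await cache.set(`user:${id}`, JSON.stringify(user), { EX: 60 });
      return user;
    }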


As the sibling comment points out, your middleware that has a 30k req/sec hello world may slow down to 500 req/sec on a production server once it's competing with everything else on the thread.


Obviously they did something terribly wrong in the first place and now are showing off being all right. Netty would probably be faster. As there are no details at all, I feel free to just guess :-).

I recently found Node much faster then .NET Core for a very specific scenario:

Elasticsearch - Node / ASP.NET Core (Kestrel) - NGINX - Client

where every connection must be TLS. So, Node / ASP.NET Core have to decrypt traffic from Elasticsearch and encrypt to NGINX. For a minimal workload the whole trip took 50 ms with .NET Core and 20 ms with Node. Obviously there's something wrong with the .NET setup - maybe some setting with Kestrel and TLS.
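The Node side of that chain is roughly this shape (illustrative only; certificate paths and the Elasticsearch query are placeholders):

    import https from "https";
    import { readFileSync } from "fs";

    // TLS-terminating server that queries Elasticsearch over TLS and streams
    // the response back toward NGINX.
    const server = https.createServer(
      { key: readFileSync("server.key"), cert: readFileSync("server.crt") },
      (req, res) => {
        https.get("https://localhost:9200/_search", (esRes) => {
          res.writeHead(esRes.statusCode ?? 502, { "Content-Type": "application/json" });
          esRes.pipe(res);
        });
      }
    );
    server.listen(8443);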

Anyway, that's my anecdote. And I wouldn't dare write a blog post about how Node is 2.5 times faster than .NET.


>> Node is a productive environment, and has a huge ecosystem around it, but frankly it hasn’t been designed for performance.

Isn't/wasn't part of the hype behind node that it's fast? Compared to what?


Does anyone have any code they can show which performs the same job on both and comes close to comparing apples to apples? It'd be interesting to me, not for performance comparison, but as a real-world application structure comparison, a kind of more advanced Rosetta Code[1] example.

[1] http://rosettacode.org/wiki/Rosetta_Code


It looks like they ran their last node.js benchmark around Sep 2015. I'm curious if this was still with node.js v0.x, or if they had already adopted the latest version of io.js (3 or maybe 4).


Or node.js v6.x for that matter. Seeing as io.js has merged back into node.


There is no technical information in this article. As a result it reads like Microsoft marketing, which is off-putting for .NET Core, something I'd otherwise be interested in.


> From the questions we received around the specifics of our performance improvements, there seems to be two schools of thought:

> 1. Of course it’s faster, Node is slow

> 2. You must be doing Node wrong, it can be fast

Allow me to offer a third :)

When you rewrite something, it had better be faster! You know a lot more about how it works and how it's used than you knew at the outset. There was a recent post, about going back to Ruby after creating the first version in Clojure, that touches on this point.


Hey! Where could I find this post?



I never trust any benchmark stories from Microsoft ever.


Well, there are plenty of public benchmarks not run by Microsoft that show this same trend: https://www.techempower.com/benchmarks/#section=data-r14&hw=...


The article is an excellent example of sneaky PR practices. Not only does it lack the technical details, it deceives the masses with insane amounts of artificial hype. "How AstroTurf invented astroturfing" would be a better title for it.



