Hacker News
Bing.com now runs on .NET Core (microsoft.com)
375 points by sebazzz 6 months ago | 152 comments

Somehow I actually fell in love with .NET Core 2.0. While their naming is still confusing and it's hard to know what will come next, I found myself amazingly productive with Razor Pages. It's really cool if you don't need an SPA, and you can do everything pretty damn fast. It's like WebForms, but without the Microsoft specifics; instead it relies on good old HTML5 to make an app great.

I was talking with a friend yesterday. He got a position as a manager for a group of Java/Oracle devs. We started talking about how being a manager wasn't all it's cracked up to be. We are coders, and we are really good at what we do.

What makes coding more "fun" is the .NET framework. It has its headaches, but it's a really nice and polished framework for getting things up and running, especially for web development. .NET 4+, C#, and Core are Microsoft's message to the development world that they hear us.

Here's hoping they continue with this trend.

That is one thing I always admired about .NET from afar: there was generally one fairly polished framework that you could reasonably trust. In Javaland I feel like I sometimes have to second-guess everything to make sure I'm using the "best" implementation of something.

That and C# is an ISO standard which is pretty cool.

I'm a C# developer who has been learning React for the past year, and it really is an exhausting job to look for the right JS library for every case. This is my process:

1. Facing a code task.

2. Asking myself, should I reinvent the wheel or should I look for a library?

3. Google for libraries solving the task, open at least 5 npm/github tabs.

4. For every tab, checking that the library is not too big for my needs, is popular, is still being maintained, and doesn't have too many open issues. Closing the tabs that didn't pass this step.

5. Taking a look at the documentation/examples to determine if it can solve my problem. Closing the tabs that didn't pass this step.

6. Testing the library to see if I can make it work.

7. Implementing it in my project.

It's a really different environment, and it has its advantages; it's just that Microsoft has spoiled me a little with their out-of-the-box implementations.

As a dev that's trying to brush up on JS after neglecting it for years, it's really hard to know where to start.

As an unapologetic Java fanboy I can attest to "I have to second-guess everything to make sure I'm using the "best" implementation of something." This is true. The Spring Boot libraries help to alleviate that a bit.

It's a good and a bad thing. It's good because there is a lot of competition.

> That and C# is an ISO standard which is pretty cool.

Microsoft publishes older versions of C# through ECMA, but that documentation always lags the implementations (which are all now owned by Microsoft). The latest ECMA document is for version 5.0 of C#.

For years that was not really true. Need a DI framework? Study all the benchmarks and comparisons for the 10 most common DI frameworks. Need an ORM? Welcome to the NHibernate vs. Entity Framework debate. Logging? Here you go, another 10 choices.

After a year-long pause in .NET development I got back into it recently and I can only agree. As a Linux dev, .NET core is an amazing product and it's incredibly productive compared to JS, Go or Java.

I'm also fairly excited for Blazor so I can write full-stack C#.

When I heard they bought GitHub, my first thought was that I hope they enable this on GitHub Pages. Dynamic GitHub Pages would be amazing, and it would attract a lot of new developers to Razor Pages.

> I found myself amazingly productive with Razor Pages

I'm curious, compared to what?

Actually I'm a Java guy, or to put it differently, a Scala guy, and I mostly use MVC to create my normal web stuff. With Razor Pages, however, I'm so much faster, because it's a little bit simpler to set up. Your model is actually just "code behind", and you can still have a good structure. As a bonus, the Razor view engine is like Twirl (the Scala project) minus some quirks. You can also still easily add TypeScript, etc., or even make an SPA with the Web API and SpaServices (heck, I even use the Webpack middleware). It's just amazing how integrated .NET Core 2 feels. And the easy stuff, like authentication/authorization, is already integrated into the framework. I've never had secure cookie authentication running that quickly.
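To illustrate the "code behind" model: a minimal Razor Page is just a markup file plus a small PageModel class. This sketch is hypothetical (the file names and the greeting are made up, not from any project mentioned in the thread):

```cshtml
@* Pages/Hello.cshtml *@
@page
@model HelloModel
<h1>Hello, @Model.Name!</h1>
```

```csharp
// Pages/Hello.cshtml.cs -- the "code behind" for the page above
using Microsoft.AspNetCore.Mvc.RazorPages;

public class HelloModel : PageModel
{
    public string Name { get; private set; }

    // Handles GET /Hello?name=...
    public void OnGet(string name = "world") => Name = name;
}
```

There's no controller or route table to wire up; the `@page` directive makes the file its own endpoint.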

Actually, when I looked at .NET Core 1.0 I basically thought it was abandonware, because of the new build system, because I wasn't a huge fan of C#, and because lots of stuff just didn't work. But since we needed to migrate our old WebForms application, I thought that before rewriting everything it might be a good idea to look into it, and it somehow clicked. C# is really moving fast, and the recent additions make even me, a Scala programmer, feel more at home.

Plain old MVC.

SPAs too. It depends on the project, of course, but a lot of projects don't actually need a REST API or a client-side SPA with all the bindings and parameters coming and going; a lot of projects will be perfectly fine with server rendering.

Odd statement, considering you can do ASP.NET Core MVC with Razor pages too, which is usually my main approach to ASP.NET Core.

If you want to build an SPA, their boilerplate[0] for it is also excellent and supports several frameworks (Angular, React, React-Redux... unfortunately they deprecated Vue support [1], which was great, but that's easy to bootstrap from the React starter), and they even support server-side prerendering with a sidecar.

[0]: https://github.com/aspnet/JavaScriptServices [1]: https://github.com/aspnet/Announcements/issues/289

How would you compare .NET core and the JavaScript ecosystem (node + frontend frameworks)?

.NET's ecosystem is somewhat lacking compared to Javascript. This is what I love about node.

On the other hand, the standard library that .NET provides is superior.

From my experience, it's possible to deploy a .NET site with just a handful of third-party dependencies (if any).

Well, you can also call JavaScript code and use any Node package from .NET Core.


That's what made my decision to use .NET Core for a web app.

It's also like PHP^: whip stuff up quickly and easily, but keep your eye on the ball or you get hit in the face by spaghetti code.

There's certainly something to be said for an easy way to get running quickly, when you can't be bothered with an SPA.

...but it doesn't scale. :)

(^ The comparison is more apt than you might imagine; Razor Pages is the 'one file per logical page' model for ASP.NET Core, and very, very reminiscent of WebForms and all the legacy that carries.)

(... and don't even get me started on Razor Components, due in .NET Core 3.0, where every UI interaction (i.e. every single browser event) involves a full round trip to the server to rerender the component state.

I'm just saying: Microsoft has done some excellent work with .NET Core and Kestrel, but that doesn't mean every idea they have is a good one.)

You can write spaghetti in every language and every framework.

Btw, Razor Pages doesn't mean that you need to write spaghetti code. I've seen tons of MVC code where the view layer got way too much code.

Most of the time it helps if people put only the bare minimum of code into the view layer (only if/for/etc. where absolutely necessary).

Sure, but... I suggest that it is a trade off in which functionality is exchanged for convention and simplicity...and perhaps not for good reasons, or in a way that encourages good outcomes by default.

If I was under the impression the ASP.NET team was on the ball and agreed with the decision they were making, I'd be more prepared to run with it... but I'm not.

For example, there's an open GitHub issue discussing the concerns people have raised about the ASP.NET team suggesting that Razor Pages are better than MVC.

Read it yourself and you'll be extremely familiar with the various opinions on the matter:


This is, again, as I said before, an example of not everything the ASP.NET team does being a good idea.

Well, most of these concerns are from people who never actually tried to run with Razor Pages, which is really sad.

As I said, my background was basically 5 or more years working in MVC, and even partially working on the framework itself, and I would say that Razor Pages is a good starting point for any application.

Currently I think both Razor Pages AND MVC are important, but it's so easy to use both that there is no problem. If something grows too much you can easily split it into the old traditional MVC paradigm, but actually most of the time I migrate directly to a Web API instead of .NET Core MVC. (At the moment I have 2 MVC controllers, which are basically the same as Web API 2 endpoints (no view/no model).)

Also, Razor Pages is kind of MVVM, which is more common (in the JavaScript world more stuff is MVVM than MVC; not sure why Java/C# can't follow that as well?!) and is more for UI-related frameworks (which Razor Pages of course is), but most people didn't realize that in this GitHub issue.

Exactly this. If you've got crazy logic going on, it should be removed from the view and done in the controller. The only code in your view should aid in displaying model data and defining the layout output. I usually do my best, but of course there are exceptions, especially when using third-party libraries for views.

> That is a 34% improvement

That graph is completely misleading then. I can understand why they don't want to show actual y-axis values, but at least put relative values on that axis. Even "60%" and "100%" would do.

Thanks for the suggestion, I've modified the graphic.

It's still misleading unless you look for the specific numbers...

I like the fact that they are highlighting the individual Pull Requests. It's a good reminder that .NET Core is indeed open to community changes.

Indeed. But it is worth noting that most of those individual pull requests were from Microsoft employees.

It is still awesome that the community can comment, and Microsoft does take community changes regularly for .Net Core, VS Code, etc.

I tried to figure out some actual numbers related to this; see http://mattwarren.org/2017/12/19/Open-Source-.Net-3-years-la... for a write-up.

Not a .NET developer, so apologies for the dumb question, but why did MS release a second open source .NET implementation around the same time as they acquired Xamarin? Do .NET core and Mono compete with one another? Do they serve complementary roles? Why would I choose one over the other?

I'll take a stab at this.

.NET Core and Mono are not exactly analogous. A better comparison historically would have been Mono to .NET Framework (i.e. classic .NET).

.NET Framework is a fairly expansive set of standard libraries bundled with a runtime - it's commonly used and well-supported, and dates back to about 2001, give-or-take a beta or two. There's a lot of current and legacy applications built on this out in the wild.

.NET Core is effectively a do-over for the longer term, in the form of a minimal set of dependencies that imports more of the standard library separately. There are a couple of reasons for doing this: primarily, the parts that have been abstracted out mean that .NET Core runs in a lot more places than the classic framework (including natively on devices like the Raspberry Pi, for example), and it can also be (and is) fully open source (the classic framework is mostly open source these days, but licensing problems meant that not every part of the toolchain could be opened up).

There's also the question of expectations when it comes to community changes. .NET Framework was adopted in many quarters on the basis that it was developed by Microsoft directly, and bureaucratically opening it up to community changes after the fact becomes problematic due to the massive number of stakeholders and their expectations.

I'm sure someone else can add more info but as I understand it this is the gist.

Just to confuse things even more, there is also .Net Standard. (https://blogs.msdn.microsoft.com/dotnet/2016/09/26/introduci...).

.Net Framework (Windows only), .Net Core and Xamarin are all implementations of .Net Standard.

To add even more confusion, ASP.Net Core can run either on .Net Core or .Net Framework.

I know I've over simplified this for myself. But the way that I look at it is like so.

.Net Standard is an "interface": no implementation behind the scenes, just a list of APIs.

.Net Framework, .Net Core, and Xamarin are all "classes", i.e. implementations of .Net Standard.

ASP.NET Core relies only on .NET Standard, which is implemented by both .Net Framework and .Net Core, which means it can run on both.

ASP.NET (Framework) does not rely on .Net Standard but links directly to the implementation in .Net Framework, which means those libraries can't run anywhere else and are limited to .NET Framework only.

I'm no Jon Skeet, or tech guru. So please take this explanation with a grain of salt and if I'm wrong correct me as needed. This is just how I've wrapped my head around it.
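The "interface"/"classes" analogy can be sketched in C# itself. All the type names below are made up for illustration; none of them are real APIs:

```csharp
using System;

// .NET Standard is a versioned list of APIs ("the interface");
// each runtime ("the classes") promises to implement that list.
public interface INetStandard20
{
    string WhoAmI();   // stand-in for the thousands of standardized APIs
}

public class NetFramework : INetStandard20
{
    public string WhoAmI() => ".NET Framework";
}

public class NetCore : INetStandard20
{
    public string WhoAmI() => ".NET Core";
}

// A library compiled against .NET Standard sees only the interface,
// so the same binary runs on any runtime that implements it.
public static class MyNetStandardLibrary
{
    public static string Greet(INetStandard20 runtime)
        => $"Hello from {runtime.WhoAmI()}";
}

public class Program
{
    public static void Main()
    {
        Console.WriteLine(MyNetStandardLibrary.Greet(new NetFramework()));
        Console.WriteLine(MyNetStandardLibrary.Greet(new NetCore()));
    }
}
```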

David Fowler (Microsoft engineer on ASP.NET Core) posted an "analogy in C#" gist a couple years ago that is similar to yours, but in code. It's a couple years old so is a bit out of date at this point, but the basic idea holds.


This is correct. .NET Standard, as the name implies, is the standard that each framework tries to match. If you make any libraries that are .NET Standard compatible, they will run everywhere. They used to have "Portable Class Libraries", but that was a mess. Also, .NET Standard wasn't as good until recently, thankfully. The early days were a mess, since not every framework supported certain things.

That was perfect.

You can't make disruptive changes with the baggage of the current stack.

System.Web itself is over 2 MB, and you don't need everything from it all the time. Cross-platform support was another feature that was hard to do without starting over.

Thank you, that helps!

For server workloads on Windows, Linux, and macOS you'd probably lean toward .NET Core.

For client on Windows, UWP is .NET Core, and WinForms or WPF currently .NET Framework. Though .NET Core 3.0 is meant to bring the full set of Windows UI stacks to .NET Core https://blogs.msdn.microsoft.com/dotnet/2018/08/08/are-your-...

For Tizen you'd also use .NET Core

For client on macOS or Linux you use Mono

For games you'd probably use Unity (Mono based)

For iOS, tvOS, watchOS, Android, Solaris, IBM AIX and i, BSD (FreeBSD, OpenBSD, NetBSD), PlayStation 4, Xbox One, and WebAssembly you'd use Mono.

You can still use Mono for client apps on Windows, and the ASP.NET Core stack is quite happy running on Mono on the server; but they have different focuses.

If you were building a library you'd target the .NET Standard 2.0 "contract" and it would cover all the runtimes.
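Concretely, targeting that contract is a one-line property in the project file. A minimal library csproj might look like this (a sketch based on the standard SDK-style project format):

```xml
<!-- MyLibrary.csproj: build once, run on .NET Framework 4.6.1+,
     .NET Core 2.0+, Mono 5.4+, Xamarin, etc. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```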

> For client on macOS or Linux you use Mono

I’ve been using .NET Core 2.0 and now 2.1 on embedded ARM Linux. Works surprisingly well.

Hm, curious - why do you say wasm and XboxOne would lean toward mono? Doesn't Blazor target .NET Core? I would've assumed .NET Core w/ UWP would be first class citizens on XboxOne? Thanks in advance!

Edit: maybe I was confused about Blazor. Looks like it can work with ASP.NET Core, but the info I'm finding talks about building with mono.

I guess the more interesting question is, do you think Mono will remain important for the places you've mentioned, or will be replaced more and more by .NET Core? Especially given the UI goals with .NET Core 3 (and my own biased belief that .NET Core is the clear unabashed future of .NET).

For the WebAssembly client side, Blazor uses the Mono WebAssembly runtime (though .NET Core on the server), as the Mono compile chain for WebAssembly is the most mature implementation (it also produces the smallest output, from their work on watchOS).

If you are using Blazor as a full desktop app (e.g. Electron) then you can use .NET Core on the client and that can talk to the javascript as a local server; but for running in the browser directly as WebAssembly it uses the Mono (WebAssembly) runtime.

Xbox One can run UWP, and for that you can use .NET Core (via UWP) in a "shared" mode; however, it also has a dedicated "game OS" mode (which most boxed/AAA games use) that takes priority over everything and has full hardware access, but that mode doesn't allow JITing, so you can't use .NET Core in it.

Mono and Unity have mature AOT compilers, so they can run in this mode. (Xamarin also needs AOT for iOS and Android, which don't allow JITing either, and Unity needs it for a lot of the platforms they support. Mono and Unity use different approaches to AOT, though.)

Mono is now sharing C# source code with .NET Core (and it goes both ways); but as far as I'm aware .NET Core doesn't seem to be rushing to fill any of the gaps that Mono serves well (iOS, Android, etc)

They are working on an AOT version of .NET Core for Windows/Linux/macOS (https://github.com/dotnet/corert), but that seems to be more focused on competing with Go's and Rust's AOT single-executable offerings.

For desktop UIs; we'll have to see what's next :)

> For desktop UIs; we'll have to see what's next :)

I've been working on a project to bring GUI to .NET. I think it really is the best option out there.


WASM (Blazor) uses the Mono runtime.

> I guess the more interesting question is, do you think Mono will remain important for the places you've mentioned, or will be replaced more and more by .NET Core? Especially given the UI goals with .NET Core 3 (and my own biased belief that .NET Core is the clear unabashed future of .NET).

I don't know what the future is for Mono. Mono currently has support for platforms that .NET Core doesn't specifically target - iOS, Android, native code through Unity, WASM, etc. It'll probably start to move toward specifically supporting those platforms while continuing to support newer versions of the .NET Standard.

.NET Core/.NET Standard is the future of mainstream .NET. .NET Framework is dead. I've heard it from Microsoft people directly. There probably will never be a 4.9, and there will definitely never be a 5.0.

> .NET Framework is dead

Surely it's not that dramatic?

I'd imagine at the least it would be in maintenance mode for years, although in principle I'm otherwise inclined to believe this. I don't think anyone was really convinced by the early talk of co-existence.

I'm pretty sure there will be updates that implement newer .NET Standard versions, but I doubt it will receive many non-essential upgrades for performance and such in the future.

.NET Framework came first, as the major install, on Windows only. Mono came after, to try to replicate .NET on non-Windows platforms with an open-source approach. Eventually the .NET Framework code was open-sourced (at least read-only in the beginning), which also helped Mono get better.

.NET Core was created to take the aging .NET Framework and rebuild it for faster, easier deployment, a quick release cycle, and cross-platform support on Linux and macOS, without hurting the decades of backwards compatibility.

Mono meanwhile became the base for Xamarin for mobile apps and for Unity as a gaming engine. Even though Microsoft acquired Xamarin, Mono already had so much uptake in both of these scenarios that it didn't make sense to attempt replacing it, so now we have Mono for mobile + games and .NET Core for the rest.

.NET Framework will still be around for a decade but is effectively retired and .NET Core 3.0 will fill in any remaining gaps for Windows apps that still need the full framework today.

My understanding is that Mono is being billed as a client framework (Monogame, Xamarin, Unity, Blazor) while Core is more for the server/systems target. To add to the confusion, ".Net Standard" is a library compilation target that should work everywhere (even the legacy Windows-only .Net Framework). Interestingly, the newly MIT-licensed Xenko game engine compiles to .Net Standard libraries which is how it supports so many platforms (via barebones launcher/shim projects for whichever framework implementation is desired). Edited for clarity, thx.

.Net Standard is what you build libraries against; it's the abstract contract of the runtime.

The .NET Framework, .NET Core, Mono, Unity, etc. runtimes then implement that contract, so a library built against .Net Standard will work on any of the runtimes.

From what I remember from reading a post (or listening to a podcast), .NET Core is "server-centric" and Mono is "client-centric", and they also differ in relative "footprint" size.

This is a really great question, in fact. Normally, people go with what's available. Without connecting the dots.

.Net core replaces Mono.

.NET Core is superior to Mono for server software like ASP.NET web apps, but Mono is more suitable for embedded environments like iOS/Android/PlayStation/Xbox, etc., and is also what's used in Microsoft's new Blazor project for running C# apps in WebAssembly (https://github.com/aspnet/Blazor).

Not according to Miguel it doesn't.

Don't know what his opinion is, but I think .NET Core 3 will be closer to replacing Mono than 2.x was.

I'm a huge fan of C# and ASP.NET, but this transition hasn't gone as smoothly as I hoped.

I tried .NET core a lot pre-1.0 and it seemed really fragile (especially on Mac and Linux, which was my main excitement around it).

As I was a bit nervous about that, I started a new project in 'classic' .NET with MVC 5. It seems, however, that a lot of projects are rapidly migrating to .NET Core and have recently been dropping support for legacy versions.

It does seem to be a giant pain to migrate a legacy asp.net app to asp.net core (very manual). Anyone have any tools/advice on this process?

Porting to 1.0 was a pretty big headache, there was a lot of stuff in full .NET that wasn't in 1.0, and a lot of open source libs hadn't been ported yet.

In 2.0 they had a bit of a rethink about how stripped down Core was going to be and added a heap of stuff in, plus a lot more (most?) big open source libs have been ported. So porting legacy apps is much easier.

However I don't see any point porting a legacy app unless there is something in core you want. Perhaps you'd like to move your app over to running on Linux servers? At my previous company we ported parts of our product so that we could sell cross-platform support without needing our users to install mono.

> Porting to 1.0 was a pretty big headache, there was a lot of stuff in full .NET that wasn't in 1.0

Yeah, this.

For example, originally they weren't going to include SqlDataAdapter or DataTable or any related classes. They really just thought everybody was going to be okay with using Entity Framework for everything and not having a generic, non-ORM database interface.

Most of my coding involves either extremely simple tables, or manipulating tables for third-party applications that have over a thousand tables and where I don't have access to the source code. Most of it is ETL or ETL-like. It's also usually in PowerShell or Python, but some of it is in C#. Building an EF model would take ages for some of these applications, especially when the schema isn't always relationally sound (but it's third party, so I can't change it). It still just blows my mind that they thought it'd be okay to make everybody box and unbox their database into classed objects instead of just letting you manipulate it as a DataRow. As far as I know, they still consider DataTable and the like to be "legacy".
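For what it's worth, the DataRow-style workflow needs no entity classes at all. Here's a minimal in-memory sketch (the table name and columns are invented for illustration) using System.Data, which is available in .NET Core 2.0+:

```csharp
using System;
using System.Data;

public class Program
{
    // Build a small in-memory table the way ETL code often does:
    // no entity classes, just columns and rows discovered at runtime.
    public static DataTable BuildTable()
    {
        var table = new DataTable("Orders");
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Amount", typeof(decimal));
        table.Rows.Add(1, 10.50m);
        table.Rows.Add(2, 99.99m);
        return table;
    }

    public static void Main()
    {
        DataTable orders = BuildTable();

        // Filter with a string expression instead of a typed query --
        // handy when the schema is only known at runtime.
        DataRow[] big = orders.Select("Amount > 50");
        Console.WriteLine($"{big.Length} large order(s)");
    }
}
```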

That's not really correct. Raw SQL access (SqlCommand etc) was always part of .net core. You can't really call DataTable and friends a generic non-ORM data interface - it's an API unique to ADO.net.

I was an early adopter of .NET Core. It definitely had its quirks in the beginning. Recently I gave it another go, and it's really improved a lot; it's production-ready now. I'd say pick it up again.

I remember all of the KRE, KVM, project.json, etc. It was pretty rough, and that's from someone that's been using .net daily since the 1.0 beta many years ago.

They've fixed all of that stuff.

Yeah, I'll definitely use it from day 1 for new greenfield projects, but quite a lot of concepts have changed to make migration trivial.

It's stable now. Just forget the alphas and 1.x ever existed. If you really want to get the hang of it, don't use Visual Studio; use VS Code and the CLI tools:


Rider makes .NET Core amazing on Linux. Well worth the price.

Agreed. F# + Rider + Ubuntu is my daily driver.

Visual Studio is a must for 99% of .net jobs, right? I see a lot of praise and comments that it's so good you can't replace it

Do you think they will drop VS and adopt VS Code as the recommended solution? Since it's locked to Windows, bloated, and there's no 64-bit version.

I'm not sure why he's suggesting VS is bad here. VS is still an insanely good C# environment, core or not

There are some extremely opinionated assumptions underlying your question. VS isn't really bloated, given what it's trying to do, and in recent years it's been rock-solid stable. Visual Studio for Mac exists (the old Xamarin Studio) and is coming along. And the engineering team has made its position on 64-bit clear: they don't believe it will bring the performance gains people like to say it will.

But more importantly, VS and VS Code are two different approaches: IDE vs. text editor. The vast majority of people would say C#, as a compiled language, is best in an IDE. Either way, I'm pretty confident in predicting that there is no way Visual Studio gets "dropped."

VS isn't going to be dropped any time soon, there are so many features and plugins for it that it would be a massive job to replace it.

They have moved more and more features into separate child processes to reduce the memory requirements of the main VS process. VS 2017 uses a lot more memory than 2013 (the previous version I used). My 8GB machine gets slowdowns every now and then due to SQL Server getting swapped out.

.NET core has several compatibility packages you can use.

However, once you have Web Forms in your project there is no way forward but to rewrite the Web Forms pages.

Any links? I don't use webforms, just MVC5 with razor pages.

Assuming it's C#, you should mostly be able to copy the source of the Razor pages and controllers over wholesale.

The startup and identity/authentication is a bit different though.

Why even bother? Mvc5 still works well, and will continue working well for the years to come. Is there a pressing need to migrate?

I’ve decided to wait it out.

Some packages are getting EOLed; e.g. Auth0 has discontinued their System.Web implementation in favour of .NET Core.

If it wasn't for that, I wouldn't be worried at all...

I see, yeah.

Maybe you can convince them to stay the course by citing their own tagline? It says legacy right there.

We provide a universal authentication & authorization platform for web, mobile, and legacy applications.

Dunno. I've recently moved stuff to Visual C++ 1.10 with the NT 3.5 SDK and it's been amazingly fast building stuff.

Who needs "features" anyway?

I wonder if it's running on windows or linux. And whether they're using ASP.NET core or rolled their own framework.

The HTTP response headers say the server is running IIS 10.0, so it's most likely Windows Server 2016.

Not necessarily. IIS could also serve as a reverse proxy/load balancer, with a Kestrel installation behind it. (If I'm not mistaken, the response header is only set on static resource responses.)

Interesting, I thought kestrel was supposed to be way faster than IIS. Wonder why they stuck with IIS after going through all this.

Current: Windows Server 2016 + http.sys + homegrown managed layer replacement for System.Web.

Near future: Windows Server 2019 + http.sys + ASP.NET Core

Medium future: Windows Server 2019 + Kestrel + ASP.NET Core

You are not supposed to expose kestrel to the open internet. It should always run behind a reverse proxy like nginx or, in this case, IIS.
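For context, the usual nginx setup for fronting Kestrel looks roughly like this (the domain, port, and paths are illustrative; this follows the general pattern in Microsoft's hosting guidance):

```nginx
# /etc/nginx/sites-available/myapp -- values are illustrative
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass         http://localhost:5000;   # Kestrel's default port
        proxy_http_version 1.1;
        proxy_set_header   Upgrade $http_upgrade;   # allow WebSockets
        proxy_set_header   Connection keep-alive;
        proxy_set_header   Host $host;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
    }
}
```

The `X-Forwarded-*` headers let the app behind the proxy reconstruct the original client address and scheme.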

Microsoft has changed their guidance. Check out the different verbiage between the 1.x tab and the 2.x tab.


I'm sure they are using Kestrel behind IIS.


Kestrel needs a reverse proxy in front of it for SSL termination etc. https://docs.microsoft.com/en-us/aspnet/core/fundamentals/se...

It doesn't need one for SSL; it needs a reverse proxy if you want to do port sharing (many apps on port 80/443, switched by Host header) or Windows Auth.

You can also build your reverse proxy using Kestrel.

Though IIS is more battle-tested, so they may be using that as the front-line server for such a high-profile service as Bing.

Is it even wise to design a system that uses unencrypted backend traffic? The Snowden revelations demonstrated that intelligence services snoop on it.

Not over an actual network; but localhost or in-process it should be fine? Though the in-process IIS hosting is ASP.NET Core 2.2 as it got bumped from the 2.1 release https://github.com/aspnet/IISIntegration/issues/878

ok but then it is not much of a load balancer.

It's meant for port sharing: multiple apps or subdomains switching on either the Host header or the path, since you can't have multiple processes listening on the same port (80/443) on the same machine.

That's no longer a requirement with the latest release of .net core.

"... we started an effort to make the code portable across .NET implementations, rather than relying on libraries only available on Windows"

It looks like they're planning to move off Windows - or, at least giving themselves the option to.

They say views use razor, so in all likelihood it's asp.net core.

Wow, they got an eye-popping performance improvement from this change.

They've added the concept of immutable views into managed or unmanaged data, aka Spans. This allows huge reduction in allocations in the framework: https://msdn.microsoft.com/en-us/magazine/mt814808.aspx
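As a small illustration of why Spans cut allocations: slicing a string with `AsSpan` yields views into the existing memory instead of new substring objects. A sketch (the CSV-summing example is invented; `int.Parse(ReadOnlySpan<char>)` is a .NET Core 2.1 API):

```csharp
using System;

public class Program
{
    // Sum comma-separated integers without allocating any substrings:
    // each "field" is just a window into the original string's memory.
    public static int SumValues(string csv)
    {
        ReadOnlySpan<char> remaining = csv.AsSpan();
        int sum = 0;
        while (!remaining.IsEmpty)
        {
            int comma = remaining.IndexOf(',');
            ReadOnlySpan<char> field = comma >= 0 ? remaining.Slice(0, comma) : remaining;
            sum += int.Parse(field);   // span-based overload, .NET Core 2.1+
            remaining = comma >= 0 ? remaining.Slice(comma + 1) : ReadOnlySpan<char>.Empty;
        }
        return sum;
    }

    public static void Main()
    {
        Console.WriteLine(SumValues("10,20,12"));  // 42
    }
}
```

The same pattern, applied throughout the framework's hot paths (parsing, formatting, comparisons), is where a lot of the 2.1 gains came from.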

I get the feeling that there has been a lot of low-hanging optimization fruit in .NET that went unaddressed because a) the original compiler was too complicated to change and b) there was no pressure to improve it because it only ran on Windows and there were no real competitors.

a) was addressed by Roslyn, which is apparently much more manageable, and now that .NET is supposed to run everywhere, it has to have competitive performance. Hence the big gains.

I think there's also c) .NET Core is OSS, and has had a lot of contributions from the community for things that MS themselves didn't prioritize.

Yes, this is an excellent point I forgot. With closed-source, you only get to make perf improvements if your manager decides there's not a more important feature you should be working on instead.

Just a small thing, the Roslyn compiler (or any C# compiler) does very few optimisations, see https://blogs.msdn.microsoft.com/ericlippert/2009/06/11/what... for some details.

Almost all of the 'compiler' optimisations are done by the JIT at run-time when it converts the IL code into machine/assembly code

For example, .NET doesn't do escape analysis, though I've read this is less of an issue for .NET than Java because .NET has struct value types instead of everything being a heap-allocated object.


It is, but I wonder what they're measuring on the Y axis. It just says `(actual values omitted)`.

Seems like it's also offset (i.e. the Y axis starts above zero).

>... the final precipitous drop (on June 2) is the deployment of .NET Core 2.1! That is a 34% improvement ...

If so, that graph seems to be deliberately misleading.

It says "The Y axis is the latency (actual values omitted)". I, too, wish they had shared the actual values, though.

Its latency

In what? Bananas per second? No value on the axis means we don't know what _exactly_ is being measured, or have an idea about the overall state. They say 37% improvement but that's largely meaningless. 37% improvement of what?

If latency is 1 minute and they've shaved 22 seconds off (37%), neat. But it's still bad.

If anything, the deliberate omission of scale on that Y axis is really disturbing.

It's in milliseconds. I'm going to update the post and add that note, but omitting values was a business decision, sorry.

The first commit says "Vectorization of string.Equals". It now uses SpanHelpers.SequenceEqual. How is that vectorized when it's just an unrolled loop from what I see? Does vectorization not mean using SIMD instructions? It also means improving data dependencies?


char ref (managed pointer) is cast to byte ref https://github.com/dotnet/coreclr/blob/04d9b557ef8b7c60a1194...

Then it uses SpanHelpers.SequenceEqual for byte which is vectorized


>char is cast to byte

To be clear, char is not cast to byte (which would truncate). A char ref (ie a char pointer) is cast to a byte ref (ie a byte pointer).

Good point :) Updated

Oh thanks! Was looking at the wrong SequenceEqual

Maybe there is an autovectorizer in the compiler that recognizes the shape of that loop and uses SSE to do it 16 bytes at a time instead of 8 at a time?

The CLR/runtime optimizes that code into SIMD instructions, if the hardware allows for it.
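For the curious, here's a simplified, hypothetical sketch of what a vectorized comparison looks like using `System.Numerics.Vector<T>` (the real SpanHelpers.SequenceEqual uses unsafe pointer arithmetic and cleverer tail handling):

```csharp
using System.Numerics;

public static class VectorizedEquals
{
    // Compare Vector<byte>.Count bytes per iteration (16 or 32 on typical
    // hardware) instead of one byte at a time, then finish with a scalar tail.
    public static bool BytesEqual(byte[] a, byte[] b)
    {
        if (a.Length != b.Length) return false;
        int i = 0, width = Vector<byte>.Count;
        for (; i + width <= a.Length; i += width)
            if (new Vector<byte>(a, i) != new Vector<byte>(b, i))
                return false;
        for (; i < a.Length; i++)   // scalar tail for the leftover bytes
            if (a[i] != b[i]) return false;
        return true;
    }
}
```

With hardware support the JIT lowers the `Vector<byte>` comparison to a single SIMD instruction, which is where the speedup comes from.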

Are there resources that describe Bing's web stack/infrastructure? Something à la http://highscalability.com/.

Bing is huge both in codesize and technologies used, but most of it is a flavor of Windows Server 2016 (soon 2019) + http.sys + C# + Razor + TypeScript for Frontend. C#/C++ for middle and lower tiers.


While it's not Bing, this is the canonical example I have seen in response to a lot of "What is at least a very good way to do x in .NET Core?"

Most of the stack is custom.

Honestly, I don't know anybody who uses Bing. Am I wrong in thinking this?

DuckDuckGo, Alexa, and Cortana use it.

It gets about 21% of the search market.

If you know anyone who uses https://duckduckgo.com/ or https://www.searx.me/, then you know some people who use Bing (at least indirectly). Also, some people use Bing (directly) to find porn, apparently...

I use bing all the time. The results are good enough and I get points for amazon vouchers. "image searches" ahem work great too.

Ah, yes, I too use Bing to search for images of a specific nature with some frequency. It's quite likely the best search engine available for the... higher arts.

I do not know if they have done it on purpose, but kudos to the Bing team either way.

I use duckduckgo which is bing with a skin and added features. One of those features is "bang commands" which allows you to use a single site to search thousands of other sites individually. If I want to check if my local Home Depot has the particular deck screws I need, adding !homedepot to my duckduckgo query redirects me to Home Depot's search results page. This is very useful as web sites become heavier and heavier which results in a terrible experience just getting to a search box. Ditto Amazon and most other ecommerce sites. Plus it is nice if I don't get the results I am looking for on duckduckgo to just add !g to the query and get redirected to Google's results.

Anecdotal, but I alternate between Bing and Google, with a general preference for Google.

I've gotten two free meals using Bing rewards.

That's... impressive. I didn't realize the rewards were actually substantial.

I use Bing all the time for TV shows, it's way better than Google in this regard.

I thought this was a really good post[0] on the networking performance potential of .NET.

[0]: https://dzone.com/articles/how-raygun-increased-throughput-b...

Sort of? It's interesting work but I don't know if I'd frame it as "networking performance potential of .NET".

The title is "How Raygun Increased Throughput by 2,000% With .NET Core (Over Node.js)". From the article:

> In terms of EC2, we utilized c3.large nodes for both the Node.js deployment and then for the .NET Core deployment. Both sit as backends behind an NGINX instance and are managed using scaling groups in EC2 sitting behind a standard AWS load balancer (ELB).

They're comparing their Node.js vs C# versions behind ELB and NGINX. It's barely touching the networking part of the stack directly. The gains are almost certainly from a stronger compiler story and a concurrency story.

They say themselves:

> Node is a productive environment and has a huge ecosystem around it, but frankly, it hasn’t been designed for performance.

.NET Framework is Windows-only.

Mono is an open-source port of the .NET Framework to non-Windows platforms (Linux, macOS).

.NET Core is cross-platform (like Java, but without GUI features for now).

.NET Standard is for libraries shared by all 3 above.
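To make that concrete: a library meant to be consumed by all three runtimes targets .NET Standard in its project file. A minimal, hypothetical `.csproj` might look like:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- netstandard2.0 is implemented by .NET Framework 4.6.1+,
         .NET Core 2.0+, and recent Mono, so one build runs on all of them -->
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```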

Definitely interesting... I think this speaks mostly to .NET Core having a direct compile option that works well, more so than to the platform as a whole.

Personally, my first pass at most things these days would be with Node.js simply because of productivity, but it's nice to see more options for performance growth as needed.

Years ago I spent a lot of time trying to convince some of our large bank customers that we could run their stuff on Windows NT instead of SunOS or AUX or whatever by noting that microsoft.com ran on NT. I'm not even sure if it was actually true at the time, now that I think about it.

It would be great if they compare running bing on Linux vs Windows! Is there a hidden gem somewhere?

Barring any bugs, the hope is that the code generated on Windows and Linux for the same architecture will be very similar, modulo calling conventions and ABI.

Then you come down to issues like Linux networking vs. Windows networking, Disk I/O differences which are interesting but from a .NET perspective less so in my opinion.

It would be very interesting to know whether running .NET Core on Linux gives you better performance than on Windows. If you don't pay any perf penalty, then moving to Linux saves you a lot in Windows Server licenses! And if Linux is faster, there's really no reason to stay on Windows Server anymore.

Bing is Microsoft, they likely do not pay for Windows licensing, and would also rather work with the Windows team than switch to Linux.

Of course Microsoft won't. THE outcome of the performance comparison is important for all the other companies using .NET.

They should have renamed .NET to something else. Now we have to deal with the same issues as the Python 2/3 split, but maybe even worse, as it's harder to tell which APIs are available when looking at sample code.

Just switch to .NET Core. "* core" is pretty googlable.

Bing folks, are you running this on Linux? A common problem a few years ago was that the perf of the Linux version of Core sucked compared to Windows. The only way to fix that is through serious dogfooding. I’m wondering to what extent Microsoft is committed to such dogfooding on Linux.

Windows Server 2016. All the improvements in the post that helped us are the same on Linux. I will agree though there's more dogfooding to be done. It's happening slowly but surely.

>> It's happening slowly but surely.

Exciting to hear. .NET is truly a diamond in the rough on the Linux side; its only major problem seems to be a catch-22, in the sense that few people use it there, so the ecosystem is slow to come in.

The TechEmpower benchmarks all run on Linux, and the team is quite focused on performance there: https://www.techempower.com/benchmarks/#section=test&runid=9...

7+ million requests per second on Linux for a single server (maxing out a 10 Gbps network) isn't too shabby ;)

Also they run continuous benchmarks for all platforms in a wider range of tests https://github.com/aspnet/Benchmarks

With the results being public at https://aka.ms/aspnet/benchmarks

Narrow benchmarks aren't interesting. Use in large scale production, however, more or less guarantees stuff will eventually work well.

A few years ago is a long time for .NET Core; it was in its infancy then. It's a mature product these days. I've never been a MSFT-platform dev, but I enjoyed fiddling with .NET Core for a while. The only big annoyance back then was that they were caught between two package manager formats.

That also hamstrings the ecosystem.

The good news is it's no longer the case :)

".NET is applying an update, please return to Bing after reboot"

Bing is a complete mess as a search engine. I tried it 3 or 4 times, and every time the results were far from what I wanted.

I heard Bing is good for porn; that's probably its one strong side, LOL.
