Memory usage of a toy C# server and client with 500K concurrent connections on (github.com)
288 points by pplonski86 27 days ago | 266 comments



C# is a pretty underrated language. I've worked with JS, Java, C++, Perl and I've always preferred to use C# for personal projects.

async/await - C# is the ONLY language which gets this right. JavaScript has it now, but you also need to use libraries which support it, and those can be hard to find. Kotlin has coroutines, but the Java base library doesn't support them, so you can't use them all the way through and create a chain of fully async functions. Once you use async/await it's hard to go back, since the code is as easy to write as synchronous code.
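For illustration, a minimal sketch of that chaining in C# (the method names are made up; Task.Delay stands in for real I/O):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    // Stand-in for a real I/O call (e.g. a network request).
    public static async Task<int> FetchAsync()
    {
        await Task.Delay(10);
        return 21;
    }

    // Async methods compose: awaiting another async method keeps
    // the whole chain asynchronous while reading like sync code.
    public static async Task<int> ProcessAsync()
    {
        int value = await FetchAsync();
        return value * 2;
    }

    static async Task Main()
    {
        Console.WriteLine(await ProcessAsync()); // prints 42
    }
}
```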

Then you have proper support for generics (unlike Java, which has type erasure), LINQ queries, and an amazing IDE in Visual Studio.


I think C# is underrated because it was closed source for so long, and working with a closed source ecosystem is such a pain (and risky!). At least personally, that made C# a complete non-starter for me before. However, I've recently started learning it, and it really is nice. It's much more pleasant to work with than Java.

Hopefully it will see a renaissance when people realise that with ASP.NET Core they can write code that looks broadly similar to Rails/Laravel/Django code, and get much better performance more or less for free. There are definitely some rough edges around the ecosystem though. Entity Framework isn't nearly as nice as Laravel's Eloquent ORM, and it's pretty jarring looking at a promising library, only to realise that it's a commercial offering (you don't get that with PHP!).


Not to mention the recent performance gains from adding Span<T> in .NET Core 2.1.

I've used F# in a few professional projects before and that language is certainly underrated. I used to work in Microsoft's DevDiv and everyone knew it was criminally underfunded. I think if it had OCaml-style modules and Microsoft threw more weight behind it, it could take off big time.


Indeed, it was learning Rust that led me to being interested in learning C#, and Span<T> is basically the equivalent of Rust's &[T] slices, which are super useful for avoiding copies. It's pretty cool that C# has things like these and stack-allocated structs, which let me carry over a lot of the performance optimizations I might use in Rust.
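To illustrate, a tiny sketch of the no-copy slicing that Span<T> (like Rust's &[T]) enables:

```csharp
using System;

class Program
{
    public static int SumMiddle()
    {
        int[] data = { 1, 2, 3, 4, 5, 6 };

        // A Span<T> is a view over existing memory: slicing
        // allocates nothing and copies nothing.
        Span<int> middle = data.AsSpan(1, 4); // { 2, 3, 4, 5 }

        int sum = 0;
        foreach (int x in middle) sum += x;
        return sum;
    }

    static void Main() => Console.WriteLine(SumMiddle()); // prints 14
}
```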


Adding it to dotnet core by default certainly helps, I think. I didn't start learning it until recently, but when I did, having it as part of the tools I use every day was awesome.


Don Syme will never allow OCaml-style modules


Why not?


Don Syme, on omitting OCaml-style modules from F#:

> In addition, there was the question what not to implement. A notable omission from the design was the functorial module system of OCaml. Functors were a key part of Standard ML and a modified form of the feature was included with OCaml, a source of ongoing controversy amongst theoreticians. The author was positively disposed towards functors as a “gold standard” in what parameterization could be in a programming language but was wary of their theoretical complexities. Furthermore, at the time there were relatively few places where functors were used by practicing OCaml programmers. One part of the OCaml module system – nested module definitions – was eventually included in the design of F#. However, functors were perceived to be awkward to implement in a direct way on .NET and it was hard to justify their inclusion in a language design alongside .NET object programming.

The Early History of F# https://fsharp.org/history/hopl-draft-1.pdf


I think it might make interop harder.


If you’re not happy with EF, and don’t mind writing your own SQL, give Dapper a try. I’ve used a variety of SQL ORMs in C# over the last 12 years, but they always seem to fall short for me in some way. I like to be in control and also have nice types, and Dapper is the perfect answer for that.

Yes, you do end up writing some code that a great ORM would just give you for free, but honestly I haven’t felt pain from doing this as it makes me actually think about and set in stone what relationships and models I intend to support in my data layer.
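For anyone who hasn't used it, a rough sketch of what Dapper looks like in practice (the connection string, table, and columns here are hypothetical; Query<T> is Dapper's extension method on IDbConnection):

```csharp
// Sketch only: requires the Dapper NuGet package and a reachable
// SQL Server database with a hypothetical Users table.
using System;
using System.Data.SqlClient;
using Dapper;

public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}

class Example
{
    public static void Run(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            // You write the SQL; Dapper maps rows onto plain C# types.
            var users = conn.Query<User>(
                "SELECT Id, Name FROM Users WHERE Name LIKE @Pattern",
                new { Pattern = "A%" });

            foreach (var u in users)
                Console.WriteLine($"{u.Id}: {u.Name}");
        }
    }
}
```

You stay in control of the query text, but still get typed results and parameterization without hand-written mapping code.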


Or better yet, don’t fall for the monotoolism trap and use both! I regularly use EF for inserting a lot of hierarchy in one transaction, and Dapper for fast view/materialized view reads. It’s an excellent combination.


Sure, and then there’s of course old school ADO.NET for bulk inserts.


One of the main projects I'm working on uses JSON to normalize stored procedure calls in MS-SQL (`method(tokenJSON, inputJSON, out outputJSON, out outerrJSON)`), so it's straight ADO.NET for most calls and responses. I'm not a fan of DB-heavy logic, but it has made the API layer super thin and works pretty well.


I find writing raw SQL to be quite annoying for simple CRUD operations. It's just so easy to make stupid little syntax mistakes when writing SQL. And things like having to update all your queries if you add a column to your database are pretty annoying too...

ORMs like Eloquent (PHP) are super nice, because they let you lean on them completely for simple queries, and then give you layers of opt-outs. For example, inserting a row is just obj->save(). For slightly tricky selects you can use the whereRaw method to interpolate raw SQL into just the where clause of the query, or you can use DB::query() to write a full raw SQL query if you have some queries that are particularly complex.

I guess I could give Dapper a go though. SQL is... fine.


I think you misunderstand Entity Framework if you think it's worse than Eloquent. Eloquent is OK and certainly pretty reasonable for PHP. But EF and LINQ are in another ballpark entirely -- it's extremely easy to use and even allows you to do type-safe queries.


Perhaps! I'm new to .Net and Entity Framework.

But from what I can see, EF is trying to provide an abstraction layer that pretends that I am just working with collections of objects, which I don't like at all because it makes it harder to control which queries actually get executed and when!

On the other hand, Eloquent provides an abstraction for generating queries, but this maps pretty closely to SQL, and it otherwise largely stays out of the way...

The type safety is nice, and of course you don't get that in PHP. But IMO that's more about the language than the library.


EF will only query exactly what you ask for and will execute the query only at the point you actually iterate or request an object. For changes, EF will only hit the database when you call SaveChanges() on the context. It's an abstraction that works with very few "leaks".

Eloquent also has object collections. Both frameworks have similar methods for filtering and querying. Eloquent is much more clunky (because PHP) and the underlying conceptual model is different (ActiveRecord style vs. DataMapper style). I certainly would use Eloquent for PHP projects but I definitely wish PHP could do something like EF.
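A rough sketch of the deferred-execution model described above (the context and entity here are made up for illustration; it requires the Microsoft.EntityFrameworkCore package and a configured database):

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Blog
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
}

class Example
{
    public static void Run(AppDbContext db)
    {
        // Building the query sends nothing to the database yet.
        var query = db.Blogs.Where(b => b.Name.StartsWith("A"));

        // SQL is generated and executed only here, at enumeration.
        foreach (var blog in query)
            blog.Name = blog.Name.ToUpper();

        // Tracked changes are flushed to the database in one go.
        db.SaveChanges();
    }
}
```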


Doctrine is closer (in terms of actually being a Data Mapper) than Eloquent, so it may be more up your alley.


I mean, I get writing boilerplate SQL can be annoying, but honestly if you organize your code well then you end up making changes only in a few places (your repository class and wherever the new column actually gets used). By encapsulating your database in a set of repository classes you can avoid most headaches. Even if you don’t write your own SQL for most things, this is a really valuable pattern to follow in data-centric code of any kind.


Look at Dapper.Contrib - it's an extension lib for Dapper that adds a load of ORM-esque functions for basic CRUDing.


Yeah, also Dapper.Extensions/Mapper/SimpleCRUD etc.

There are a lot of add-ons for it that simplify all the generics for you.


> I find writing raw SQL to be quite annoying for simple CRUD operations. It's just so easy to make stupid little syntax mistakes when writing SQL.

There are ways to mitigate this [1]. You write your queries in .sql files in Visual Studio, so you get intellisense and syntax highlighting, and you can even add these to your automated test suite to ensure they run correctly.

[1] https://github.com/naasking/Dapper.Compose


what is the benefit of "composing" queries via this library instead of just running them separately in a single connection? The "composed" queries are completely opaque to each other and are independent - they are not "composed", just "batched".


> what is the benefit of "composing" queries via this library instead of just running them separately in a single connection?

I think the link is pretty clear: it combines and batches queries while providing a type-safe interface for the returned results. It further provides intellisense and unit testing features that are tedious and error-prone to achieve in other ways.

Of course you can do this all by hand, or manually batch your queries to avoid round trips, but why would you want to?


Dapper is quite a good middle ground.

The flexibility and performance of full SQL, without having to write all the mapping boilerplate by hand.


Can blend the PHP and .NET Core ecosystems if you want: https://www.peachpie.io/



I am a long term Java developer, but C# is looking really, really nice.

The ecosystem still seems wonky, but NuGet probably isn't any weirder than Go's.


I'm not sure about now, but in the past every time I touched Java projects, they just seemed really wonky as well. I see a lot of that carrying over to the "enterprise" side of both ecosystems. But I will say C# was much easier for me to get a grasp of, both with and without tooling.

NuGet, I'm still not sure of either, but at least it mostly works transparently. Though I haven't actually tried publishing anything to it, only consumed. npm seems to have the least resistance imho, for good and bad results alike.


One of the first things you should do when you find a library you're considering is look at the license. This applies to any language and any type of software (commercial, open source, SaaS or not). Even for open source, not all open source licenses are compatible with each other (e.g., you cannot use GPL code in an MIT-licensed project).


My anecdotal feeling is that most popular JavaScript libraries expose promise-returning functions these days. Browsers and Node have supported Promises for about four years, and async/await has been around for about two years. I have not personally had much problem using async/await with third party libraries. And wrapping those that use good old callbacks was pretty easy, all things considered.


I've used C# since 1.2 (not exclusively).

The language is really good - it's one of my favourites. It's almost 2 decades old, but it's evolved naturally and is still a modern language.

Unfortunately, it's the shit around it that lets it down... MSBuild, NuGet, VS - they're full-featured, but very slow and clunky and hard to work with. They're only popular because there's been no viable alternative.

Now, with dotnet core, there's no more reliance on Windows (for both dev and production). I can develop and deploy on my preferred OS, and there's a choice of IDE (Rider and VS Code).


A lot of people in the C# community actually use Paket (F#'s package manager) because NuGet is a complete shit-show. I'm about to make that change myself, because NuGet is truly one of the worst package managers I've ever used (and that is saying something).


I’m an F# developer in a C# shop, so I’ve only used NuGet. Can you enumerate some of the advantages of Paket?


Paraphrasing from the main site (https://fsprojects.github.io/Paket/):

> Paket is a dependency manager for .NET and mono projects, which is designed to work well with NuGet packages and also enables referencing files directly from Git repositories or any HTTP resource. It enables precise and predictable control over what packages the projects within your application reference.

The big change is solution-level dependency management instead of the nuget-default project-level management, so you always have the same versions of dependencies across all projects in a solution. It also uses a lockfile for these versions so that restores are idempotent. It also allows for fetching independent files from HTTP-accessible locations or git repos, which is nice in F# because the language is succinct and enables you to reuse modules without going through the rigamarole of making and publishing nuget packages.
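For reference, a paket.dependencies file looks along these lines (the package and repo names are just examples); the matching paket.lock then pins the exact resolved versions:

```
source https://api.nuget.org/v3/index.json

// solution-wide NuGet dependencies, optionally with version constraints
nuget Newtonsoft.Json
nuget FSharp.Core ~> 4.5

// reference a single file straight from a git repo
github fsprojects/FSharp.TypeProviders.SDK src/ProvidedTypes.fs
```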


I've used .NET on and off since the pre-beta days (privileges of working at an MSFT partner) and have never seen anyone use Paket, other than in F# tweets.


This is different from my experience. I've found that I run into 10 npm issues for every NuGet issue.


I use Rider for a large pre-.NET Core project and it's much faster than VS. It's like having VS with ReSharper, but without the performance hit that ReSharper brings.


Yes, I find Rider to be like ReSharper without VS.


I think it's only underrated in the HN/startup world.


C# owns enterprise development, and makes a good showing in fintech (though still less common than Java/Python/C++).

It is very uncommon in stand-alone software development of all sorts (shrink-wrapped, service development, commercial web apps, etc) for very obvious reasons. That doesn't make it underrated, there just happens to be a lot of other great platforms.


Where I'm at (US Fortune 500 companies) it's all Java, very little .net. Ymmv


I work for Pivotal. We see pretty much 50/50 between Java and .NET across F500.


Go below Fortune 500 (Fortune 10M) and it's almost all .Net.


Yeah. My impression is that C# rules the world of enterprise and fintech software, at least here in Norway. It might be different elsewhere.

It seems like a good language with very good tooling if object-oriented programming (in the Java and C# sense) is your poison of choice. Now with .NET Core and other good initiatives from Microsoft it might be a more relevant language than ever.


Actually, it's a very good language (although arguably getting too featureful/complex for its own good these days) with fairly suboptimal tooling. NuGet is slow and a pain in the neck, MSBuild is a huge, slow, blundering XML-tastic beast of a thing, etc.

Migrating large projects to dot net core piece by piece is fraught with tooling bugs and issues (you pretty much need to migrate everything to SDK-style projects before you do anything else, for example, otherwise you get strange interactions between net461 and netstandard projects around auto-generated binding redirects). There's still DLL hell. Default "copy local" results in crazy n log n file copies in large solutions that totally kills build performance (you can tweak this to use hard links in an effort to improve things, but not within Visual Studio which is where I really want cycle times to be low, or do mad hacks with shared output folders, but that can give you nondeterministic builds).

So it's far from all roses - check out their bug trackers on GitHub and there are a number of surprising issues. There are also a lot of missing things on Linux still (e.g. out-of-the-box support for Kerberos auth in ASP.NET Core/WebAPI on Linux). Quite a number of APIs pretend they are there in netstandard but throw at runtime on Linux machines, and yet other things are annoyingly half-hearted, like mapping some Linux syscall return codes to Windows HRESULTs, but not all of them and not consistently across all APIs.

It's getting there, but the surrounding ecosystem has a lot of catching up to do compared to Java, IMO.

The language is nice, though. F# also.


"C# has suboptimal tooling" is a new one. Maybe if you use vim. Visual Studio is far and away the most comprehensive IDE available.


C# tooling is very sub-par compared to what exists in the Java world, and I'm not just talking about build tooling (package management is a tough problem and Maven/Gradle are not perfect but NuGet is just atrocious) but debugging/deployment tooling as well. It's an artifact of how C# was a closed-source product for so long, the first-party tooling is pretty good for basic stuff but try deploying it outside a Windows environment and the edges become viciously sharp. Mono is a joke compared to OpenJDK, and I can happily deploy my application on any of a half dozen web servers that provide some superset of capabilities of the official server.

I understand that's changing of course, now that Microsoft has open-sourced the core of the language and started pushing to get support onto Linux and other platforms, but they are still making up an almost 2-decade deficit here.

But yeah, Visual Studio beats the pants off NetBeans, and Eclipse, while functional, is very clearly what happens when you let an engineer design a UI: everything is possible and nothing is easy.


They are first converting debugging tools, the rest comes later


No, they are correct, the tooling is bad. Tooling in the sense of setting up a build for a large and complex project, that can resolve dependencies quickly and correctly, where the build is very fast and deterministic, where incremental builds always work correctly, where building multiple configurations is easy and reliable, and so on.


what has good tooling for that?


I believe the state of the art is bazel and similar systems. Bazel supports C++, Python, Java, and Go well that I know of, maybe more too. It's also extensible, and people have rolled their own support for various languages. You can find some custom bazel rules on GitHub.


Bazel is an interesting build system, but it worries me that there are TensorFlow releases from 2018 that won't compile with Bazel clients from 2019 without adding extra flags.


Correction: December 2018, not 2019.


The development tooling is great for small projects and breaks down a little as project size grows, while the automation and deployment tooling is middling-to-bad in my experience. The parent post is right about NuGet and MSBuild both being pretty ponderous compared to the myriad of package management and build solutions for various frameworks in the OSS world, and deployment for wider-scale systems is falling behind the K8s/containerized state of the art as well.


"Comprehensive" is not always a positive trait of an IDE. I much prefer Rider over Visual Studio.

NuGet does have some rough edges, and compile time for even my small .NET Core projects tends to be on the order of seconds, which is surprising for a mature compiler and stripped-down framework.


If they're web projects and you have node_modules (or a similarly huge number of files) in a subfolder, you may need to exclude them in your project configuration. I've found dotnet core builds to be very fast, unless the build is accidentally scanning a few hundred thousand files it doesn't need to. Example at the very top: https://github.com/caseymarquis/QApp/blob/master/App/App.csp...
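Concretely, the usual fix is a one-line property in an SDK-style csproj (a sketch; DefaultItemExcludes is the SDK property that feeds the default file globs):

```xml
<PropertyGroup>
  <!-- Keep the SDK's default globbing from scanning node_modules,
       so builds don't touch those files at all. -->
  <DefaultItemExcludes>$(DefaultItemExcludes);node_modules/**</DefaultItemExcludes>
</PropertyGroup>
```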


I tend to take a .gitignore for C# and merge in the Node baseline, adding test* and .*, roughly speaking [1].

For that matter, anything using node/npm on anything but SSD/NVME is VERY slow (HDD, even enterprise drives)... local builds for me are in a couple seconds for a large complex project, on the servers it's minutes. So, YMMV.

The API layer I'm working against is .NET Core; the local build is under 3 seconds, and building/deploying is 12 seconds on my desktop (kills the existing container, builds in one container, deploys to another and starts) with Docker for Mac, probably faster on Linux... NVMe drive on an i7-4790K, 32GB RAM.

[1] https://gist.github.com/tracker1/8cd6309ecc3e480616f79e83712...


I don’t think it’s the compiler, actually. I’m more inclined to blame the project system. In fact, Visual Studio does some hacks just to get around it in some performance-sensitive areas.


Indeed. It's actually pretty easy to write a custom msbuild logger to instrument (and subsequently chart) all the steps and you discover most of the wall clock time on big solutions with large numbers of projects is not actually compile time. csc itself is fairly competitive with javac. But by Christ, msbuild and nuget restore (which needs to run masses of msbuild machinery these days) are slow.


It's definitely the project system. I once worked on a solution containing ~150 projects. I consolidated all the code down to about 15 projects and the build time of the solution dropped by about 5x. I've been working in Java lately and frankly, I see little difference in compilation speeds between the two.


Rider uses the same project system, so I haven't had a chance to observe any alternative.


I find VS to be a bloated legacy relic that's particularly limited without ReSharper. I much prefer Rider, which is a fraction of the size with more intelligent features that VS has been copying (from R#/Rider) for several years. Rider is leaner, faster, smarter, and more extensible, with higher quality plugins offering better support for developing modern web apps.

IMO the investment in VS Code basically signals VS has become too bloated and complex to innovate on so they're better off creating a new IDE from scratch with better plugin extensibility and ecosystem that's innovating and delivering features significantly faster than VS which comparatively looks like it's stalled.


I don't know why you appear to have been downvoted - that's pretty much bang on IMO. Rider loads very large solutions an order of magnitude faster than Visual Studio and is generally more stable too. I've built tooling to do very large scale refactors at work and VS won't even open many of the synthetic solutions I generate as part of that as they seem to be just too large for it, whereas Rider always gets there eventually.


I haven't really used R# or Rider so cannot comment there. Will say I love VS Code and the progress it's made. Though once they had a file tree and an integrated terminal (bash on win/lin/mac) I've been very happy.


Visual Studio has always been 1st class for developing whatever Microsoft is currently pushing, and the debugger is by far the best I've used.

Anything outside of that is average, if anything.


It all depends on your perspective. Java has a very long history of enterprise-grade tooling, including IDEs. Way back when VS still didn't have all the basic refactorings for C#, IDEA had structural search (i.e. pattern matching search that understands language constructs, types etc). VS has improved, but so did the Java IDEs.

So when you look at VS from the perspective of someone who's used to typical Unix or JS tooling, it's comprehensive. If you look at it from the perspective of an old-time Java developer, it's lagging behind.


ReSharper 1.0 for VS was released in 2004. For perspective, at that time C# didn't support static classes.


Back when ReSharper was first released, it basically advanced VS to the point that was considered essential baseline by most Java developers.

As I recall, at the time, it seems that the Java ecosystem was mostly investing into writing code and declarative markup (usually XML), while Microsoft was most interested in visual designers and other ways to help devs on its platform avoid writing code. Thus, VS was far easier to get started with, but advanced coding was much better in Java IDEs.


It's funny, post 9-11 I was pretty down and out for a year or so post-dot-com etc. I learned C# using the command line compiler and the complete reference book in 2001-2002. No IDE at all, and building with BAT/CMD files.

I don't think the tooling caught up to the ease of not using the built in tooling until VS 2005 for me. Of course by VS2012, it felt like such a bloated slow mess, I don't think I've looked back much until relatively recently. Been doing a lot of Node work, which I tend to think of as very low friction.

These days, I absolutely love VS Code which tends to strike the balance of more than an editor, but not slow and bloated like a typical IDE. Integrated file tree + terminal are the best UX to me.


Microsoft also pushed their own version of markup (WPF + XAML)


You may mean partial classes or generics... it's had static classes since the beginning IIRC.


Source? In the release notes of C# 2.0, one of the main new features is support for static classes.


Oh.. okay, you are right... but I'm pretty sure you could create a class with all static members before that designation.


Did you even read past the first line of my response? <rollseyes>


I've not used C# and .NET Core in any real capacity, so thank you for the corrective comment. I think some newer languages are getting tooling right, like Rust and to some extent Go (though dependency handling has been a sore thumb).

Maybe C# is really only fantastic on Windows with Visual Studio?


MonoDevelop has become very polished; it's kind of been re-branded as Visual Studio for Mac. It's a decent free IDE.

Rider from JetBrains is a fantastic IDE on all platforms.

VSCode is a good editor, when you want to write smaller (single file) C#/F# programs.

LINQPad is Windows-only, but it's a nice companion editor for riffing on small methods/functions or probing LINQ, etc.


Even on Windows I find msbuild to be a huge drag, especially on larger projects and at scale in an enterprise. There are also some questionable design choices for things like nuget behaviour around dependency versions. For example Version="2.0.0" really means "version 2.0.0 or above", including major version updates (which you probably actually don't want by default if people are following semver). Despite that, if your project A depends on library X with Version="1.0.0" and A also depends on project B, which itself depends on library X at version 2, you will get a build warning (or error, if you do warnings as errors, which you should) that A is downgrading the version. And it does (you'll get version 1 in the output folder if it's a warning). Which is arguably wrong. (If I'd specified "[1.0]" as an exact match, I'd expect that.) It's doubtless done in the name of backwards compatibility, but stuff like this still sucks, even if there's a reasonably good reason for it having to.
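To illustrate the version notation being described (the package names are hypothetical):

```xml
<ItemGroup>
  <!-- A bare version is a lower bound: "2.0.0 or above", including 3.x -->
  <PackageReference Include="Some.Library" Version="2.0.0" />

  <!-- Brackets mean an exact match: only 1.0.0 -->
  <PackageReference Include="Other.Library" Version="[1.0.0]" />

  <!-- A bounded range: >= 2.0.0 and < 3.0.0 -->
  <PackageReference Include="Third.Library" Version="[2.0.0,3.0.0)" />
</ItemGroup>
```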


Would you mind sharing how to fix build performance using hard links? Is that similar to setting all projects' output dir to the same physical directory?


Like I say, you can't do this from Visual Studio as the common/csharp targets files have specific checks that stop it working, but if you're just invoking msbuild on the command line then you can set the following properties:

/p:CreateHardLinksForCopyFilesToOutputDirectoryIfPossible=true
/p:CreateHardLinksForCopyAdditionalFilesIfPossible=true
/p:CreateHardLinksForCopyLocalIfPossible=true
/p:CreateHardLinksForPublishFilesIfPossible=true

It will give you a considerable speed-up for very large solutions, especially if there are lots of common project dependencies, as it will make hard links instead of copying entire files. This could save you gigabytes of file copying in very large solutions. Google for "hard link" if you don't get what I mean by that.

Note that dotnet core works differently and doesn't do copy local at every stage up the dependency tree when you build, instead copying dependencies directly into your top level application project only when you run the publish target, which is obviously cheaper than either approach.


Nice... thanks for posting this, will try to remember next time I touch the build server(s).


Also, setting all projects' output folders to the same directory is a pretty bad idea. You're presumably doing parallel builds (/m). If so, and two projects depend on the same library but at different versions, which version will go into your output folder? Depends which project happens to finish building last. That's non-deterministic. Boo. What if the versions aren't even API-compatible? (e.g. Some of your test projects are still on NUnit 2?)

Using hard links gives you most of the speed up, but without the attendant issues.


There are startups using it. We're just not on the MS-bashing bandwagon you see here on HN.


I don't get the impression that C# is bashed on HN, rather the opposite


They said "MS bashing", not C# bashing. In other words, using anything MS was once considered uncool. There's still a number of people who are waiting to catch MS pulling an Embrace... stunt.

I've not seen any thread on HN bashing MS without reason. However, the only guys who claim to use C# either work with big companies or freelance.


MS bashing was only based on the "M$" epithet, because it was closed source.

Some context has shifted; the part where Microsoft took over GitHub has shown a sentiment shift too (not from everyone, of course).


I'm not specifically anti-MS myself and am generally supportive of their OSS moves. I think it aligns well with their SaaS initiatives (Office365, Azure, etc) and think that it will continue.

Beyond that, MS does deserve the skepticism it gets, and I'm not a fan of the "M$" references or similar. Leadership at MS has changed and it does show. I do wish they'd stop doing sleazy things with Windows though. I've been using more Linux and Mac as a result.


Perhaps also in the Open Source world (for obvious reasons), which might reduce people's exposure to it if they haven't used it at work. But agreed that it's more popular in other kinds of business.


Any obvious reasons currently, or just historic ones?

i.e. it's been fully open source for 4 years https://mattwarren.org/2018/12/04/Open-Source-.Net-4-years-l... and is under the .NET Foundation rather than Microsoft https://dotnetfoundation.org/


One reason might be that the tools have telemetry (i.e., automatically gathering and sending data about your system and your use of it to Microsoft) enabled by default[1], which is unexpected and off-putting to people in the free software world. It seems to reflect a very different attitude to some fundamental cultural values.

According to the comments on the issue tracker, it was supposed to be OK because it was anonymized – but oops, there was a bug, so it wasn't totally anonymized. But it's OK because you can disable it if you happen to know about it – but oops, the disabling mechanism had a bug.

Things that are important to many free software users are not important to Microsoft. I was excited about .NET Core until I became aware of this stark misalignment of goals and priorities.

[1] https://github.com/dotnet/cli/issues/3093


Mainly historical ones, I think. It's been open source for 4 years, but for most of that time there was a parallel closed-source implementation which complicated the ecosystem a fair bit.

I'd say it's only really in this last year that it's really started escaping its legacy and started really making sense as a non-microsoft stack.


It's open source, but does it comply with the four freedoms to make it FOSS? Why does mono still exist then?

inb4 "but EEE was only the 90s!"


Mono still exists for the same reason .NET 4.x still receives updates, amongst other things. It also has better mobile and cross-platform support for targets outside of the server realm.


Also, don't forget that C# has had async/await since 2012, 5 years before it was standardized in JavaScript. The only older mainstream (for its time) language to have coroutines is Turbo Pascal 7 in 1992, which was... also designed by Anders Hejlsberg.


I don't remember coroutines in Turbo Pascal 7, and can't find anything about them in the reference books. Wikipedia suggests the uThreads unit by Wil Barath (from 1995?) but it wasn't "natively" included with TP7, right?

http://computer-programming-forum.com/29-pascal/c1ccf8167920...


Are you not mixing it up with Concurrent Pascal?

I don't remember Turbo Pascal ever having anything similar to coroutines, and I used all versions up to Turbo Pascal for Windows.


Async/await is not a coroutine implementation; it's syntactic sugar over Task and the task pool that behind the scenes uses normal threads. Edit: actually it tries to use a coroutine-like implementation, otherwise it falls back to the thread pool.


It does not "fall back" to the thread pool. Whenever a thread pool is used, it is because you requested it, e.g. by calling Task.Run().

The async/await implementation will never fall back to thread pool by itself. Tasks are "futures", and in that sense a thread/task pool may be used in async/await when you need to run tasks in parallel. But when a thread pool is used is always under the control of the programmer.

Task is a more basic concept than Thread. Tasks are about asynchronous execution, threads about parallel execution. Parallel execution is inherently asynchronous, but asynchronous execution is not necessarily parallel.

Indeed, that is the whole idea behind async/await: enable the asynchronous model without the overhead of multiple threads.


That is what I thought until I wrote the benchmark(s) in the post. I was surprised that even though I never called Task.Run, things were being run on 4-10 background threads.

It does use a threadpool scheduler by default for a console app. Yes, I could override that if I wanted to.


I believe that there is still only one thread executing your code. Now, depending on the SynchronizationContext, that thread may change. If your I/O request completes on another thread, the default sync context for console apps just continues on that thread (this avoids a context switch and is thus more efficient).

For UI threads (WPF, WinForms) it is essential that the code continues on the original thread. Thus the sync context used in WPF/WinForms will post the continuation back to the original thread once it becomes available (through the big message loop).

For ASP.NET threads, requests are processed from a thread pool. The ASP.NET synch context IIRC will schedule continuation on any ASP.NET managed thread.

So yes, you may see your code (esp. in console apps) executing on another thread after an async call, but that does not mean that .NET schedules your tasks on a thread pool. There is still only a single thread of execution at any one time, until you explicitly use a thread pool (e.g. Task.Run)


Yeah, I should clarify what I was saying. I wasn't calling Task.Run(), but I was doing:

    for (var i = 0; i < 1000000; i++) { 
        var unused_task_var = RunTaskAsync();
    }
    await RunMainMonitoringTaskAsync();
And I purposefully wasn't calling await on "unused_task_var". And this was doing what I wanted, running all those tasks on multiple background threads in the pool, as long as the RunTaskAsync method itself yielded once early on in its body (to return control back to the for loop).

tl;dr If you call an async method and don't await on it, it will be parallelized - for a console app, at least.

Honestly, I don't find the MS docs on this to be that great. Not 100% sure I'm even doing it the right way here. Everybody says use Task.Run, but that is for CPU-bound tasks. I want to run a ton of IO-waiting tasks.

edit: looks like I'm doing it right. See "Async Composition" @ https://blog.stephencleary.com/2012/02/async-and-await.html
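
For comparison, a rough Python asyncio sketch of the same composition pattern (all names here are made up): start the tasks without awaiting each one so they run concurrently, then await once at the end to collect them.

```python
import asyncio

async def run_task_async(i):
    # yield early, like the C# method yielding control back to the for loop
    await asyncio.sleep(0)
    return i

async def main():
    # start the tasks without awaiting them individually (fire-and-forget)
    tasks = [asyncio.create_task(run_task_async(i)) for i in range(1000)]
    # a single await at the end collects all the results
    results = await asyncio.gather(*tasks)
    return sum(results)

print(asyncio.run(main()))  # prints 499500
```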


Nod, IIRC if you read through the specifications it is definitely threads managed by a default pool as the default implementation. Depending on your needs, trying to prematurely optimize can really distort how threads are used. I've seen this in a few prior projects, but forget some of the details.

In general it works pretty well until you need more than the defaults offer, then it becomes almost an exercise in frustration.


This is a good article which tries to explain how .NET works under the hood. https://blog.stephencleary.com/2013/11/there-is-no-thread.ht...


They use whatever the current dispatcher is. The default dispatcher is the one that uses a thread pool, because that's the only thing you can do in the absence of an event loop. But the functions themselves are compiled the same, into a state machine with callbacks. That compiled code knows nothing about threads (or event loops, for that matter). It just uses the dispatcher to schedule continuations.
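
A minimal Python sketch of those semantics (asyncio futures, not the actual C# compiler output): awaiting amounts to registering a continuation callback that the dispatcher, here the event loop, invokes when the future completes.

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    results = []
    # semantically, "await fut" registers a continuation like this one;
    # the dispatcher (the event loop here) invokes it when fut completes
    fut.add_done_callback(lambda f: results.append(f.result()))
    loop.call_soon(fut.set_result, 42)
    await fut  # suspend until the future is resolved
    return results

print(asyncio.run(main()))  # prints [42]
```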


I don't get this whole C# async/await hype. I've come to see async/await more as a hindrance than as a good thing. In C#, if you're not careful your codebase gets littered with tasks everywhere, even in code that has nothing to do with concurrency originally.

Async/await makes concurrency bubble up. This is a great article that explains this issue: http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...

It's also why I tried Golang. Once you use goroutines it's hard to go back, since the code is just plain synchronous code.

(I use JavaScript with async/await in my day job)


The problem with all the other solutions is that they end up being language-specific. That article you've linked to lists the various languages that chose that route: Go, Lua, Ruby. Now try invoking a Lua coroutine from Go, and have it interop properly with goroutines while running.

The nice thing about async/await is that it's just a wrapper around futures, which are themselves a composable wrapper around callbacks (semantically; the implementation can be more efficient, of course). So any language that can express function callbacks as first-class values can represent a future, and any language that can represent a future can interop with the async/await world. Which is why you can have JS code doing await on C# code doing await on C++ code in a Win10 UWP application, and it all works. And you can take existing libraries and frameworks that use explicit callbacks (often written in C), and wrap that into futures such that it's all usable with await.
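
A minimal Python sketch of that wrapping step (`legacy_fetch` is a hypothetical stand-in for a callback-style API, e.g. a C library binding): the callback's only job is to resolve a future, and from then on the result composes with await.

```python
import asyncio

def legacy_fetch(url, callback):
    # stand-in for a callback-based API (hypothetical)
    callback("data from " + url)

async def fetch(url):
    loop = asyncio.get_running_loop()
    future = loop.create_future()
    # adapt the callback into a future: completing the callback resolves it
    legacy_fetch(url, future.set_result)
    return await future

print(asyncio.run(fetch("example.com")))  # prints: data from example.com
```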


This is 100% correct. I've never seen this reasoning presented so well before. You're absolutely right: there are languages out there that target both the web and the backend on the server, and they can use async/await for both targets.


This is one of those things that sound amazing and powerful, but without a concrete example I have no feeling for how revolutionary it is.


For example, WinRT uses futures for asynchrony, and so does Python 3. As a result, it's possible to represent asynchronous WinRT APIs as asynchronous Python methods, such that you can write idiomatic Python code (with await).


> In C# if you're not careful your codebase gets littered with tasks everywhere, even in code that has nothing to do with concurrency originally.

This is spot-on. The only language I know that got this all right is Haskell, where most of your code doesn't need to think about async operations at all (either the runtime has you covered, or the function takes a callback as a parameter instead of any await magic), and when you do need them, the `async` library gives you all you need.


A common misconception: Haskell does not "auto-async" anything.

Laziness is not Async.

You explicitly mark which expressions get turned into async expressions in Haskell.


I never stated anything remotely close to that :) What I've meant are:

- either Haskell gives you an API which is done "right", without any Async types. For example, the `bracket` function

- or the runtime gives you lightweight threads by default, so you can just `fork myComputation` and it'll run in a lightweight thread which "just works"

- or finally, in rare situations of interleaving different computations with complex dependencies, you just use the `async` library, which is not a special language construct but a library that lets you "await" and all that


That is a good post and a valid criticism.

I'd say the more important concern with C# is to be careful that you're not accidentally calling a "really blocking" function from an async method, or you will lose scalability. The lesser issue is the mundane task of changing all your function signatures in the chain to async.

I too would use Go if it didn't regress in every single area for me, other than concurrency, and make me sad.


What if your function signature needs to change and it's in public API used by thousands of call sites? The viral nature of it IMO is by far the biggest issue. That and much harder debugging / worse stack trace readability.


I love C#, but I also don't find async and await in JS to be too bad.

It's easy enough to wrap a non async library with promises so I can use it with async and await.

I've sometimes done the same thing with non-async libraries in C# by wrapping them with Tasks.


Wrapping a sync method in a Task causes a thread to block on the sync method's completion and blocking threads is much of what async/await is intended to alleviate.


I'd be inclined to believe he means wrapping an asynchronous method (with callbacks) in a Task.


Wrapping non-async code with a Task doesn't do anything. It's only useful if you need to pass a Task around instead, or need to move that work to a different thread. Otherwise you might as well block in the method you're already executing instead of adding more overhead.


You're absolutely right! It doesn't make sense to do this most of the time. I wrote my original message in a rush and left out a lot of context.

I usually only do this when I'm pretty sure I'll have to farm out the functionality to a separate service in the very near future. I just start with an async facade around the library, and then swap in an async wrapper around the web service when the time comes. Then I can just drop it in without needing to make any changes aside from changing the interface binding in the DI container initialization.

I suppose this might violate YAGNI. But I really don't do it often at all. I only do it when I'm following the principle of PSIGNIS (Pretty Sure I'm Going to Need It Soon). Maybe it's better to just use the library synchronously and then refactor when it's time to move the functionality to a separate service. I don't think the amount of dev time spent is hugely different either way.


I think they meant wrapping it so that it runs in a background thread, e.g. with Task.Run.


Have you used Golang or Elixir? These languages get it very right.


They get something else right, not async/await. That something can often be used for the same purpose, but it's a different tool.


How is this not a distinction without a difference?


I can drive in a nail with a hammer or a boot, but that doesn't make hammers and boots the same thing. There are many different ways to accomplish any task.


I don’t think your analogy is very good. Boots are useful besides driving nails. What is async/await useful for except concurrency?


I have no experience with Go, but boots being useful at tasks other than driving nails was pretty clearly exactly their point.


Right, so what's the thing that async/await is good at other than concurrency?


Golang is pretty interesting, given how it's architected. But you have the concept of channels added on top of async/await which you need to keep in mind.


> Once you use async/await it's hard to go back, since the code is as easy to write as synchronous code.

Wait until you try goroutines :)


I don't think anyone would enjoy going back to goroutines (threads) after async/await. Multithreaded code is actually much harder to write and deal with. Go is an inappropriate example here.




Goroutines are probably closer to Fibers which D also has:

https://en.wikipedia.org/wiki/Fiber_(computer_science)


They used to be closer to fibers years ago, when they were running on a single thread by default. Now they behave exactly like normal threads with the same problems. And fibers are threads anyway.


Care to share a reference to them no longer being lightweight?


They still start with tiny stacks, if that's what you are asking. I believe it's somewhere around 2 KB nowadays.


goroutines are anything but threads.

Here's a 40min talk explaining how Go's scheduler works:

https://youtu.be/YHRO5WQGh0k


No, they are threads. It doesn't matter if they start with tiny growing stacks and whether blocking syscall wrappers call into the scheduler. Those are just implementation details.


I mean, sure, but they have all the same problems as threads for most / application-level developers. They're indeed quite different when you get into calling out to C or other kinds of FFI, but relatively few developers need to do so directly. For everyone else, they come with very nearly exactly the same semantics as threads.


I have used both and it's hard to argue that goroutines and channels are as easy to use as async/await.


On the surface, perhaps, but when you factor in the gotchas of both approaches and the debugging difficulty, I much prefer goroutines. In particular, I've never witnessed someone DOS a fleet of servers in Go by scheduling a task that was making a sync call under the covers, by doing an intensive computation in the event loop, or by scheduling tens of thousands of tiny events. We also have a lot of bugs involving accidentally putting the task/coroutine into a variable instead of the result of the task because the programmer forgot the "await" keyword, but that's a dynamic typing problem.


Goroutines are not threads. Async await behind the scenes uses threads.



In that case and in some others it doesn't, but in general it does: https://stackoverflow.com/questions/17661428/async-stay-on-t... Probably it makes sense to say that async/await can try to use a continuation-passing style or run on a different thread, because there is no actual guarantee that it will use one or the other.


After I used Go channels (https://go101.org/article/channel-use-cases.html), I feel async/await is much like a rigid toy. :)


Can use System.Threading.Channels https://docs.microsoft.com/en-us/dotnet/api/system.threading... if you like that programming model


Do those channels support async? Since C# is built around async/await for scaling to large numbers of connections / tasks, they wouldn't be very useful if they didn't support async, but I'm not familiar enough with modern C# to figure out if they do or not.


Yes. It's a library that is fully async. We process millions of records/second using these.


Do you happen to have any public information on this approach? It sounds interesting, but I'm not up on the C# world like I used to be.


This is the best overview article so far: https://ndportmann.com/system-threading-channels/


Thank you.


Yeah, you have a triple for different use cases; the sync call is `TryRead` so it's non-blocking; `ReadAsync` for async reading; and `WaitToReadAsync` to wait for data to be available without reading (likely then followed by the non-blocking `TryRead` in usage).


Does it support select? Almost half of concurrent programming use cases in Go are select-related.


Yeah, at a guess you'd combine the multiple `WaitToReadAsync` calls with `await Task.WhenAny`; then use `TryRead` to run through the bunch of channels in a non-blocking way.

Could even use the return value of `await Task.WhenAny` to tell you the specific one to check. Shouldn't be a very hard construct to build to make it more select-like.
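
As a rough analogue of that WhenAny-style select, sketched in Python asyncio rather than C# (queues stand in for channels, and all names here are made up): wait on reads from several channels and act on whichever completes first.

```python
import asyncio

async def produce(queue, value, delay):
    await asyncio.sleep(delay)
    await queue.put(value)

async def select_first():
    fast, slow = asyncio.Queue(), asyncio.Queue()
    asyncio.create_task(produce(fast, "fast", 0.01))
    asyncio.create_task(produce(slow, "slow", 0.5))
    reads = [asyncio.create_task(fast.get()), asyncio.create_task(slow.get())]
    # select-like: wait until any channel read completes (like Task.WhenAny)
    done, pending = await asyncio.wait(reads, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()  # drop the reads that lost the race
    return done.pop().result()

print(asyncio.run(select_first()))  # prints: fast
```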


Good. It looks much like reflect.Select (https://golang.org/pkg/reflect/#Select) in Go.

It has the benefit that any number of tasks can be selected on, but the efficiency is not as good as hard-coded select blocks.

But it looks like C#'s Task.WhenAny has far fewer use-case variants than Go's select block.


aka "threading without thread handles". no thanks.


It works out okay for the most part, but fair point.


The thing about Go is that you're always writing preemptively threaded code, with all of its issues.

The holy grail is cooperative multitasking and always-on, opt-out async [0] if you ask me.

[0] https://gitlab.com/sifoo/snigl#tasks


I guess I think of that as a feature, but I also write threadsafe code by default. It’s not an issue compared to our async python code. Someone calling a library that makes sync calls under the hood can block the event loop until the load balancer kills it. Same thing with CPU intensive tasks or just a lot of tasks on the event loop. Never mind the problems of async code in a dynamic language.


> Never mind the problems of async code in a dynamic language.

Such as?


Forgetting the “await” keyword and trying to use a variable which now actually holds a future/coroutine as though it were the awaited result. We write tests to cover those cases.
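
A sketch of that failure mode in Python (all names made up): the un-awaited call silently hands back a coroutine object, and nothing catches the mistake before runtime.

```python
import asyncio

async def get_status():
    return 200

async def main():
    pending = get_status()        # forgot "await": this is a coroutine object
    is_ok_wrong = pending == 200  # False: comparing a coroutine to an int
    pending.close()               # silence the "never awaited" warning
    status = await get_status()   # the intended version
    return is_ok_wrong, status == 200

print(asyncio.run(main()))  # prints (False, True)
```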


An interesting point. But this could be mitigated in a fairly obvious way by requiring an equivalent of "await" that is basically an identity transform for future-returning function calls (i.e. if a call is returning a future, you must always either explicitly indicate that you're awaiting it, or else explicitly indicate that you want the raw future - a naked call is an error).


That wouldn't solve the problem at all; at best I'm getting a marginally clearer error message. It doesn't help me (the programmer) remember (before runtime) that my expression is evaluating to an async function and not a sync function. Whether I get the runtime error "invalid call to async function" or "future has no property 'status_code'" is of little consequence.


Don’t underestimate erasure. Theorems for free is a useful thing. Granted, C# generics allow for some useful metaprogramming not possible in Java. But I’d rather take a language with no runtime reflection at all and properly staged metaprogramming instead.


> Don’t underestimate erasure. Theorems for free is a useful thing.

Indeed, phantom types are wildly underappreciated. Long ago, I created a library [1] that ensures the type safety of any runtime-generated programs you create using it. It does this by typing the various stacks that are in play at any given time, but every new generic type on the CLR ends up creating new type descriptors in global data structures (which aren't GC'd), so complex programs with deep stacks end up generating a lot of static, uncollectable data that just hangs around forever.

[1] http://higherlogics.blogspot.com/2008/11/embedded-stack-lang...


Being forced to pass the type as a parameter in a generic Java method is pretty annoying... and it has nothing to do with metaprogramming; a simple generic factory method or even an “IfNotNull” generic function would need it.


But that’s type inference.

Haven’t touched Java since version 7, I think. But I was pretty convinced they both have comparable type inference capabilities when it comes to parametric polymorphism.

The big difference between C# and java is the reification of the parameter values, inferred or not.

And the reification method chosen for C# gives you some extra metaprogramming capabilities, especially when combined with reflection.

My argument was that this can be provided at design time instead (inference too) while retaining the extra safety of having theorems for free.


TypeScript supports async/await pretty well (after transpilation of course).


I'm not sure why people (were) downvoting the parent comment.

Typescript (and normal Javascript) both support async/await very well as long as you use Babel to compile your code.

In fact, you don't even need to use Babel to compile your code because a large portion of browsers natively support async/await: https://caniuse.com/#search=async

I'm not sure why people say you need libraries that support async/await. In JS/TS, async/await is built into the language itself, and most libraries utilize the Promise API, which means they also support async/await (since async/await is built on top of promises).


And the TypeScript compiler is now capable of generating the polyfills too, so you don't even need Babel for that: https://bit.ly/2UlW9hs

Example taken from here: https://mariusschulz.com/blog/typescript-2-1-async-await-for...


The golden touch of Anders Hejlsberg :)


I’ve always liked C#. But official support outside Windows is only very recent.

And .NET is still very much Microsoft’s baby. Unlike an independent language like C or JS or Rust, MS could pull the rug out at any time.

Yes, Mono. But Mono is an unofficial port of a moving target. They have no control over the direction of .NET.



The board hasn’t even been elected yet!

Maybe MS will in future cede all control over the language. Maybe in future 99% of code contributions won’t come from Microsoft employees.

But even then it would take years for the language to become more widely adopted outside Windows.


> MS could pull the rug out at any time.

What do you mean? C# is open source. And here is the location: https://github.com/dotnet/csharplang


A considerable portion of Mono's original developers & current maintainers are on Microsoft's payroll and have been since the acquisition of Xamarin years ago. Because large portions of .NET and C# stewardship moved over to the .NET foundation, community members have a formal role in the development of both as well.

So in practice, 'pulling the rug out from under' Mono (or .NET Core) users would be pulling the rug out from under paying customers.


Well, surely Microsoft wouldn't randomly pull the rug out from under a small subset of its own customers!

Unrelated, anyone know how I can get my Windows 10 Mobile phone to sync Zune songs to my Windows Media Center? My Xbox360 stopped working smoothly after the last few forced updates


It depends on which circles one moves in.

Since I have mostly been around enterprises, I can say it is pretty much appreciated.

These are however the kind of customers that will happily have WebSphere, SAP, cluster management with AD, SQL Server OLAP and so on.

There won't be any projects being published on github and getting software into the projects always requires some kind of change request.

Not the kind of shop many HNers like to talk about.

Regarding Java, as someone that works on both platforms, I still look forward to Valhalla coming to fruition.


Those shops that you talk about are typically large banks and/or insurers where change request processes are put in place for a reason. An error can cost millions of $s.


Not only, and I do agree with those change requests.

Which is why I am usually against the "I rewrote X in Y", unless it has a good business value story going for it.


Nonsense. F# has a better async story. Javascript is also good.


After dealing with F#'s Async<T> in production, I'm not sure I'd prefer it to the TPL. There are plenty of performance and efficiency problems and the expressiveness is also a double edged sword (we found lots of programmers had issues getting a good intuition around evaluation vs definition time and even very experienced F# devs would get surprised).

Part of it is not necessarily the design F# took but that the TPL is incompatible enough that the mixing of TPL-centric .Net libraries becomes a problem. It might be wise for F# to support the TPL via computation expressions with something like `task { ... }` (there are a few implementations out there but a compiler supported state machine generator would be more ideal).


If I may, I'd like to invite you or someone on your team to read a couple of chapters of John Reppy's Concurrent Programming in ML and try Vesa's excellent delivery in https://github.com/Hopac/Hopac.

I'm not clear if you absolutely have to use TPL but CML is likely just a better model for tackling concurrency and Hopac is orders of magnitude more efficient than F#'s async.


There's a `task {}` computation expression in the FSharpx.Extras library, which makes it much more pleasant to work with TPL libraries, and removes the performance issues associated with using Async.AwaitTask etc.


Yes. That one works pretty well, though it still lags a bit behind the efficient state machines that C# generates which can also help avoid a lot of allocations and indirection in deep task chains. One step at a time though, it'd be great to see F# get a refresh around the TPL (even though it had Async first, it'd be better to align with the platform in the long run).


This is the most current one https://github.com/rspeele/TaskBuilder.fs that Giraffe also uses.


The developers I've encountered that use F#'s Async get confused at first mainly because they are used to C#'s Tasks being started upon creation. I personally think F#'s model is conceptually better, and I've seen more surprises with the C# one from devs I've worked with (e.g. which SynchronizationContext is set and why does it work in my test but not in the app, why is there no trampolining and I'm deadlocking here, why do I need to pass around CancellationTokens everywhere, etc.).

There are benefits to the F# async model (delayed execution), such as implicit cancellation token passing from the root task (which is much harder to implement if the tasks are already started before you finish composing the root), that Asyncs are re-runnable, and that they can run on whatever context you want more easily at the composed root level. It also uses a more generic feature of the language (which usually comes with a slight performance overhead).

Neither F#'s Asyncs nor Tasks are the best for CPU parallelism IMO anyway; my impression is that they are for using up idle IO time across a number of threads more efficiently. An article which compares the two, despite being biased a little towards F#, rightly or wrongly: http://tomasp.net/blog/async-csharp-differences.aspx/
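
The cold/hot distinction has a rough analogue in Python, sketched here as an illustration (this is not F# or C# itself): a bare coroutine is inert and re-creatable like F#'s Async, while a created Task starts running immediately like a C# Task.

```python
import asyncio

async def work():
    await asyncio.sleep(0)
    return 1

async def main():
    # "cold", like F#'s Async: nothing runs until awaited, and the
    # definition can be re-run simply by calling it again
    a = await work()
    b = await work()
    # "hot", like a C# Task: scheduled to run as soon as it is created
    hot = asyncio.create_task(work())
    c = await hot
    return a + b + c

print(asyncio.run(main()))  # prints 3
```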


C# guy here. I played with F# a bit, and kind of fell in love with it really quickly - but I find the async syntax really confusing compared to C#.


I disagree on JS libraries having a lack of support... I'd say that most current libraries have Promise support (which works for async/await). You even have async generator support, which is pretty damned cool for an alternate I/O interface that's imho cleaner to work with.

Other than that, I do agree, I really like C# a lot and am glad for the effort that's been put into making sure Core works cross-platform. I still shudder every time I have to work in VS (not Code) though.


? In JS you can promisify a cb-style lib, and you can await any function that returns a promise, so why do you have to use libs that support it?


That's a bold claim about C# being the only language to get async await right. It may be the one to invent it, but there are many languages that implement this feature successfully.

Something I'm curious about, how does C# handle parallelizing async code? Is it hard? Easy? How do you do it?


I think C# creates a state machine for every async/await method that it compiles, which keeps track of where it is after every await and invokes the next piece of code based on that. Also, I thought async/await came in F# first? According to Wikipedia async/await came in 2010 (F# 2.0) and was then added in C# 5.0 two years later.


async/await is available in java via kilim (i'm a maintainer) and quasar (though that project is on hold while the author works on project loom)

https://github.com/kilim/kilim


I haven’t used C# but I’ve found Scala’s monadic futures to be pretty great. Unfortunately they aren’t referentially transparent though. Supposedly scalaz’s tasks are better in this regard.


LINQ can be slow. And the language is way too verbose imo.


C# and typescript if you wanna stay in nodejs. Microsoft knows how to develop a platform.


> async/await - C# is the ONLY language which gets this right

Have you tried Python's asyncio?


> Javascript has it now but you need to be use libraries which support it too

This applies just as much to Python asyncio, so I'm not sure it's a good suggestion from that perspective. You can use an executor to wrap things that don't support asyncio, but that doesn't really qualify as good asyncio/await IMO.
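
For illustration, that executor workaround as a minimal sketch (`blocking_io` is a made-up stand-in for a sync-only library call): the sync work runs on a thread-pool thread so the event loop stays free, but the wrapped library itself gains nothing.

```python
import asyncio
import time

def blocking_io():
    time.sleep(0.05)  # stands in for a sync-only library call
    return "done"

async def main():
    loop = asyncio.get_running_loop()
    # run the blocking call on the default thread-pool executor so the
    # event loop is not blocked while it waits
    return await loop.run_in_executor(None, blocking_io)

print(asyncio.run(main()))  # prints: done
```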


Yes, although it also applies to C#... so it doesn't actually make much sense as a criticism. Although I guess maybe there was a point in time where Node libraries hadn't all transitioned to Promises yet.


I still see a fair amount of node libraries that either haven't converted to promises, or their documentation makes it unclear if a function with a callback has a promise based alternative.


Comparing to what? It may be underrated compared to Java. But compared to many other languages, it is not underrated at all. I mean, there are many languages which are more underrated than C#.


What do you think of c++ coroutines?


Underrated? Do you live under a rock by chance?


We just upgraded from .NET 4.6 to .NET Core; without touching any logic we are getting a 30% performance improvement. .NET Core is a game changer within the .NET community.


I think you could expect similar gains going to .NET 4.7.2 though, since most optimizations done for Core have been implemented and enabled for .NET too.


What do you attribute the gains to?


A few days ago I was checking the former computer language shootout for another language and saw that C# is beating Java on all the benchmarks. This surprised me because I always had remembered Java being slightly ahead.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

Can someone more knowledgeable give me some perspective on how representative these benchmarks are? At this point would it be wrong to say that in general the C# and .net environment is more performant than Java? If so, will there be any changes to Java to bridge any gap?


I've seen multiple reports of large performance improvements (like ~30%) when moving from the old .Net to .Net Core (the newer, open source version). I imagine this would be more than enough of a difference to overtake Java.

C# has a number of features (like `Span<T>` and value types) which Java doesn't currently have, which enable programs to run more efficiently, which might explain the difference. IIRC there are plans to add value types to Java, but they're not ready yet.


More is coming in .NET Core 3.0 as it gets access to all the SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX, AVX2 and FMA CPU intrinsics

https://github.com/dotnet/corefx/tree/master/src/Common/src/...


Well, I would say that it highly depends on your workload. .NET's biggest performance win is lower-level memory management, like structs and Spans. If you can use those to avoid allocations and improve memory locality in your critical path, you'll end up beating the JVM.

For allocation-heavy workloads that can't use .NET's lower-level memory tools, I think the JVM is still a bit ahead with its better GC and profiling HotSpot JIT compiler.


that benchmark is terrible (see the github for details). for k-nucleotides, the test that showed the biggest delta (c# vs java), i was able to improve the java perf by 35%. but the test harness is so poorly done that i didn't bother running it against the other impls. the reality is that good java developers just don't care about this sort of thing (which may say something unfortunate about me) and the benchmark is bad

https://github.com/nqzero/k-nucleotide


What a terrible attempt at criticism — `nqzero` says it's terrible so it must be terrible!

`nqzero` says "the test harness is difficult to configure and use" — but does not show any example of what is supposed to be difficult about using the bencher script.

`nqzero` says "the tests are not representative of common programming tasks" — but does not show any example common programming tasks and does not show the tests are not representative.

`nqzero` says "there's no attempt to account for JIT warmup, and many of the tasks are too short to ever warm up" — but admits that "it is plausible" warmup costs are amortised and comparison will show miniscule difference.

`nqzero` says "maintainers are opinionated in terms of what code they'll allow, effectively choosing the winners" — but (again) does not show any example.

`nqzero` says "doesn't appear to allow for jvm options to be included" — but seems not to have looked.

`nqzero` says "the test cpu is from 2007 and is not necessarily representative of current cpus" — That at-least is true!


> able to improve the java perf by 35%

How much more memory did you use?

> i didn't bother running it

https://salsa.debian.org/benchmarksgame-team/benchmarksgame/...


.NET team member here. We haven't invested any effort into this particular benchmark. We're happy to invest in benchmarks that have well-defined (and enforced) rules on entries. Our understanding is that the "game" part of "benchmarkgame" is appropriately descriptive. We have put a lot of effort into the TechEmpower benchmarks.

In terms of .NET performance, we've made significant improvements [1] with each release. If there was a gap between C# and other languages before, it is likely that it is much smaller now due to our efforts.

We have a balanced view on performance. We are investing in application throughput, latency, memory usage and disk footprint. Most .NET users care about several of those things. We also hear from users who care disproportionately about just one of them. This motivates us to care about all of these things, although throughput and memory usage are where the biggest gains are.

I have this (not very good) joke that many .NET developers until recently would not have been able to accurately describe what structs were (as opposed to classes). To a large degree this was because we used them in very narrow ways in the products. That has changed a lot in the last few years. ValueTuple, ValueTask and Span are all good examples. The culture of the .NET team has changed, and so has the .NET community around us. This will result in higher performance code more generally within typical .NET applications (as opposed to some theoretical outcome). If this happens, and I do believe it will, .NET will be forever changed in a pretty radical (or at least "rad") way.

[1] https://blogs.msdn.microsoft.com/dotnet/2018/04/18/performan...


I think the .net implementation benchmarked before was using mono, that's why Java was ahead before.


Historically, Java has had better JIT compilers. One of the reasons is that Java started as a bytecode interpreter, and JIT was later bolted on for hot paths - this makes it possible to let the JIT run for longer (since you can interpret bytecode in the meantime). .NET was JIT-only from the get-go, though, and its JIT compiler has to be "fast enough" that cold calls aren't atrociously slow, which restricts optimizations.

On the other hand, C# has features like structs, non-virtual methods (by default at that), and reified generics, that make for faster code without any JIT magic.

And, depending on the exact code being tested, one or the other factor can easily dominate.
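The reified-generics point is easy to see at runtime. A minimal illustrative snippet (not from the thread): the closed type `List<int>` survives compilation as a real runtime type whose backing array holds unboxed 32-bit ints, whereas Java erases `List<Integer>` to a raw `List` of boxed objects.

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // No type erasure: the closed generic type is a distinct runtime
        // type, and the list's backing array stores unboxed ints.
        var xs = new List<int> { 1, 2, 3 };
        Console.WriteLine(xs.GetType());
        // Prints: System.Collections.Generic.List`1[System.Int32]
    }
}
```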


I always liked C#. It's probably the nicest language to work with. Worked with golang recently a bit and it's like going back to the 90s or 80s. C# was just a little late with all the open sourcing; I hope folks besides enterprise devs see the value too - most devs I've spoken to that don't like it haven't actually even touched C#.


I see the same - lots of devs with opinions of M$ and "isn't ASP.NET that thing with forms and callbacks?". Too bad for them!


I'm familiar with both Java and .NET worlds and for a recent product had to pick one. I ended up picking Kotlin (JVM) only because the team I'm working with are more proficient in Java but I must admit, C# looked incredibly tempting technologically.

Java won't go away any time soon but C#/.NET seems to be under better and more determined leadership.


Have used C# for almost 5 years. Developed for backend and frontend both! Love the language, love TPL. Kotlin, in my opinion, brings a better coroutine implementation. I love the way I can intercept contexts and arrange them, letting me have Go-ish selects, and the structured concurrency. I think Kotlin might take it a level higher.


Very nice. I had looked into using tasks for TCP servers written in dotnet core C# recently. I find it mildly annoying that some of the async socket IO in C# is not easily awaitable, but after working around that I found it fairly easy. All in all, I see a good future for dotnet core. I can recommend the Jetbrains Rider IDE to go with it.


I think the pattern is to give the socket to a `NetworkStream` constructor, then it has all the regular async methods on it.
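A minimal end-to-end sketch of that pattern (the echo server and the "ping" payload are made up for illustration): wrap the accepted `Socket` in a `NetworkStream` and you get the regular awaitable `ReadAsync`/`WriteAsync` methods on both ends.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        // Server: accept one raw Socket, wrap it in a NetworkStream,
        // and echo back whatever arrives using the async Stream methods.
        var serverTask = Task.Run(async () =>
        {
            Socket socket = await listener.AcceptSocketAsync();
            using (var stream = new NetworkStream(socket, ownsSocket: true))
            {
                var buf = new byte[1024];
                int n = await stream.ReadAsync(buf, 0, buf.Length);
                await stream.WriteAsync(buf, 0, n);
            }
        });

        // Client: same awaitable stream API on the other end.
        using (var client = new TcpClient())
        {
            await client.ConnectAsync(IPAddress.Loopback, port);
            NetworkStream stream = client.GetStream();
            byte[] msg = Encoding.ASCII.GetBytes("ping");
            await stream.WriteAsync(msg, 0, msg.Length);

            var reply = new byte[1024];
            int n = await stream.ReadAsync(reply, 0, reply.Length);
            Console.WriteLine(Encoding.ASCII.GetString(reply, 0, n));
        }
        await serverTask;
        listener.Stop();
    }
}
```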


> some of the async socket IO in C# is not easily awaitable

Do any examples come to mind? I thought Microsoft did a pretty thorough job of adding async support to nearly all of the BCL in .NET 4.5. Not saying there aren't gaps, I'm just curious. Sockets' non-blocking model is an excellent candidate for async/await-ifying.


There were plenty of gaps really. Async'ification is still ongoing, but I don't think we'll ever see it in classic .NET - Core is where it's at these days.


The methods on Socket don't have async counterparts. As others have pointed out, you can use NetworkStream. I must have missed the memo on that :)


Search for System.IO.Pipelines. Big stuff is coming there.


I moved from C# to Java world - similar but very different. Java has way more libraries and frameworks which is great, but it almost makes the complexity worse as there are n different Javas.

The biggest thing is memory management. I never had trouble with GC in dotnet, Java locks up with full GCs WTF.


Obviously C/C++/Rust with handwritten state machines could make this way smaller, e.g. under 100 bytes. But at some point there are diminishing returns, and also developer productivity, bugs, and security are big tradeoffs.

Looking at this another way: you need >20x more memory to achieve the same level of performance as native code in C#. Suddenly the benchmark doesn't look that great anymore.


> Looking at this another way: you need >20x more memory to achieve the same level of performance as native code in C#.

It's more like, you need 10x more memory with 100x more productivity than C/C++.

If you don't care about productivity, however, you can actually write C# as you'd write C, and with a very similar perf profile. You can put everything into structs to avoid heap allocations, for example. You could use stackalloc, which basically gives you fixed-size non-bounds-checked stack-allocated arrays, exactly as in C. You can have pointers that GC knows nothing about, and you can even do arithmetic on them like in C. You can even have type punning unions. And then you can put an API on top of all that which makes it usable with async/await - and write the rest of your code in high-level C#.
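A sketch of what that low-level style looks like (illustrative only; it needs to be compiled with unsafe code enabled, e.g. `csc /unsafe` or `<AllowUnsafeBlocks>` in the project file):

```csharp
using System;
using System.Runtime.InteropServices;

// A type-punning "union": both fields share offset 0, as in a C union.
[StructLayout(LayoutKind.Explicit)]
struct FloatBits
{
    [FieldOffset(0)] public float F;
    [FieldOffset(0)] public uint Bits;
}

class Program
{
    static unsafe void Main()
    {
        // Fixed-size, stack-allocated, non-bounds-checked buffer that the
        // GC never sees - like a C local array.
        int* buf = stackalloc int[8];
        for (int i = 0; i < 8; i++) buf[i] = i * i;

        // Raw pointer arithmetic the GC knows nothing about.
        int* p = buf + 3;
        Console.WriteLine(*p); // 9

        // Reinterpret the bits of a float without any copy or boxing.
        var u = new FloatBits { F = 1.0f };
        Console.WriteLine(u.Bits.ToString("X")); // 3F800000
    }
}
```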


20x on a toy problem isn't bad, if some portion of the increased memory usage is fixed (in other words, doesn't scale up with the complexity of the app).


True, on the other hand if C#'s memory usage is reasonable enough that your program isn't memory bound, then it might start looking better again.


It still will be amortized by your application's data. For example, reading and buffering actual messages. The benchmark literally just has a one byte "buffer" per connection.

I threw this benchmark together in a couple of hours pretty easily. The most annoying part was just measuring the RAM :-)

But yes, everything is a tradeoff. If you want safety, you pay more - either in RAM, or in cognitive load.


That thing is not using TCP...


Correct, it's hard to use TCP for 500K connections in a quick test when there are only 64K source ports, which suffer from TIME_WAIT issues as a bonus. It uses Unix stream sockets.


Can you distribute your connections across 127.0.0.1, 127.0.0.2, 127.0.0.3, etc.? You've got the whole class A reserved as the loopback which should give you plenty of ports to work with.


64K ports is not a limit on the number of simultaneous TCP connections; in the Linux kernel, connections are identified by the (srcip, srcport, dstip, dstport) tuple.
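In practice you can also bind the client side to a different loopback source address before connecting, as the sibling comment suggests. A sketch (the second loopback address works on Linux, which routes all of 127/8 locally; the server here exists only to have something to connect to):

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class Program
{
    static void Main()
    {
        // A throwaway local server on an ephemeral port.
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        // Bind the client to a *different* loopback source address;
        // port 0 asks the kernel for an ephemeral port. Each distinct
        // (srcip, srcport, dstip, dstport) tuple is its own connection.
        var client = new Socket(AddressFamily.InterNetwork,
                                SocketType.Stream, ProtocolType.Tcp);
        client.Bind(new IPEndPoint(IPAddress.Parse("127.0.0.2"), 0));
        client.Connect(new IPEndPoint(IPAddress.Loopback, port));

        Socket accepted = listener.AcceptSocket();
        Console.WriteLine(((IPEndPoint)accepted.RemoteEndPoint).Address);
        client.Close();
        listener.Stop();
    }
}
```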


It says that in the doc.


C# is such an expressive language.


Which makes it tempting to write code nobody understands but the author.


Can somebody enlighten me whether the official Microsoft csc is available for Linux? Last time I checked there was Mono with some old C# standard, and there was the .NET Core suite... for FreeBSD only (due to licensing?)


This must have been some time ago. The .NET runtime / C# compiler you're thinking of is Rotor, a version of the runtime for educational use released back in 2006. Mono has followed the official implementation quite closely for at least the past decade.

The new official C# compiler - Roslyn - was released in 2014. It's open source and cross-platform. It was followed up the same year by .NET Core, which is the new primary distribution of .NET with support for Windows, Linux, macOS and a handful of BSD derivatives.

Sources:

https://github.com/dotnet/roslyn https://github.com/dotnet/coreclr https://github.com/dotnet/corefx

Binaries:

https://dotnet.microsoft.com/download


I know nothing of C#, but there are a lot of comparisons to Java in this thread. Does C# have anything comparable to the JVM? That is, does anything compile to C# bytecode, if there is such a thing?


Microsoft don't like to admit it, but C# and the CLR are their direct response to losing a lawsuit to Sun that made them stop developing their own versions of Java and the JVM.

So Microsoft got the luxury of being able to go back to the drawing board and try to "do Java right". At least with the knowledge gained from developing Java in the 90s.


Plus they hired a veteran language designer to lead the effort -- Anders Hejlsberg. Compared to Java they were unapologetic in rapidly iterating the language and much more aggressive about borrowing ideas from research.

Arguably it came full circle in Java 8, which was a big update to the language.


Yes, since the get-go .NET has targeted a VM called the CLR (the Common Language Runtime). The biggest difference between the JVM and the CLR (initially) was that Microsoft designed the CLR to be multi-language, whereas the JVM was written "just for Java".


Yes. The CLR (Common Language Runtime) is like the JVM and supports multiple languages, hence the name.

C# compiles to Common Intermediate Language (CIL), the CLR's bytecode, which the runtime then JIT-compiles to native code.
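For a flavour of what CIL looks like, a trivial C# method such as `static int Add(int a, int b) => a + b;` compiles to roughly this (simplified; a real ildasm dump includes extra attributes like `.maxstack`):

```
.method static int32 Add(int32 a, int32 b) cil managed
{
    ldarg.0   // push a onto the evaluation stack
    ldarg.1   // push b
    add       // pop both, push a + b
    ret       // return the top of the stack
}
```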


TIL MSIL is now called CIL. What the hell!


It has been called CIL for close to a decade now.


The first edition of ECMA-335 was published in December 2001, so make that now almost two!


Yep, that was the most surprising thing to me. In my circles people have just continued calling it MSIL all this time and I’ve never questioned it or needed to research anything MSIL related so I guess it slipped past me all this time. Hence the “what the hell!”


MSIL is Microsoft's implementation of the open standard CIL, which has been open for over a decade, and is how Mono even works.


That explains a lot.
