
Go vs. C#: Compiler, Runtime, Type System, Modules - binarynate
https://medium.com/servicetitan-engineering/go-vs-c-part-3-compiler-runtime-type-system-modules-and-everything-else-faa423dddb34
======
throwaway894345
> C# is ~ 4.5x faster on a single-threaded burst allocation test.

It certainly makes sense for Go to trade off allocation performance for
latency--Go employs value types and stack allocation more
frequently/idiomatically than C# and as a consequence it enjoys low, constant-
time STW pauses. It would actually be pretty interesting if C# made the same
tradeoff--it might make C# use value types more frequently/idiomatically as
well. IIRC, this would probably drive less inheritance and more composition
since I believe value types can't be inherited. You might end up with C# code
that looks more like Go?
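To make that concrete, here's a rough sketch (names and types are mine, not from the article) of the composition-over-inheritance, value-type style that Go pushes you toward:

```go
package main

import "fmt"

// Point is a plain value type: it lives on the stack (or inline inside a
// containing struct) unless escape analysis decides otherwise.
type Point struct{ X, Y int }

// Circle composes Point by embedding a field rather than inheriting;
// Go has no type inheritance at all, and C# value types can't be inherited,
// so heavy use of value types nudges both languages toward composition.
type Circle struct {
	Center Point
	Radius int
}

func (c Circle) Contains(p Point) bool {
	dx, dy := p.X-c.Center.X, p.Y-c.Center.Y
	return dx*dx+dy*dy <= c.Radius*c.Radius
}

func main() {
	c := Circle{Center: Point{0, 0}, Radius: 5}
	fmt.Println(c.Contains(Point{3, 4})) // 9 + 16 = 25 <= 25 → true
}
```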

------
coder543
Since this article seems to be written from the point of view of someone who
primarily knows C#, I'll provide some counterpoints from the Go side, which
I'm familiar with. I have some C# experience, but it's a few years out of
date.

> The comparison there was done before against .NET Core 2.1, and I plan to
> share the update for the most current state (.NET Core 3.1 vs Go 1.13.6)
> next week. But preliminarily, the distance became even larger:

For starters, "the current state" is Go 1.14.4. Updating one and not the other
seems unkind.

Go 1.14 also introduced this change:

"The page allocator is more efficient and incurs significantly less lock
contention at high values of GOMAXPROCS. This is most noticeable as lower
latency and higher throughput for large allocations being done in parallel and
at a high rate."

Whether it _improved_ single-threaded burst allocation performance or not, I
don't know, but it seems likely that such an invasive change would have
affected it one way or another.

I also don't find it very compelling to talk about single-threaded burst
allocation performance... aren't most allocations spread across multiple
threads, rather than done in huge bursts of gigabytes all at once?

I think that Go also stack allocates things more frequently than C#, which
makes allocation throughput generally less interesting.
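Escape analysis is what decides this, and you can ask the compiler to show its work with `go build -gcflags=-m`. A toy example of my own (the comments describe what that flag typically reports on current compilers):

```go
package main

import "fmt"

type point struct{ x, y int }

// Stays on the stack: the value is returned by copy, so nothing outlives
// the call and no heap allocation is needed.
func byValue() point { return point{1, 2} }

// Escapes to the heap: the returned pointer outlives the call, so the
// compiler must heap-allocate (typically reported by -gcflags=-m as
// "escapes to heap").
func byPointer() *point { return &point{1, 2} }

func main() {
	a := byValue()
	b := byPointer()
	fmt.Println(a.x, b.y)
}
```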

> Isn’t this per-call “extra” (16 bytes on call stack + two comparisons) a bit
> too expensive?

No? I'm not sure why you think it is. Comparisons against local values are
_extremely_ cheap on modern processors. Values on the call stack see no
contention from other threads, so the relevant cache lines stay hot, and
branches on local values, especially in a function you're calling repeatedly,
are easy for the branch predictor to guess, so the pipeline rarely gets
flushed by a misprediction. I wouldn't go so far as to say prediction and
pipelining make such comparisons free... but they come really close, from
everything I've seen over the years.

> btw, pay attention to an unusual “prologue” of every call that checks for
> the potential need for stack expansion. That’s the price Go pays for
> goroutines — most of other static languages don’t do this extra check per
> every call.

Again, these kinds of checks are rather inexpensive. The cost of growing the
stack is amortized over the lifetime of the goroutine, so unless you are
constantly creating very short lived goroutines that need large stacks, there
really isn't much cost to this. Years ago, Go used to have split stacks, and
_those_ were expensive, especially if a hot path was straddling one of the
splits between stacks.
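To illustrate the amortization (my own toy example): goroutine stacks start at a few kilobytes and are grown by copying to a larger contiguous stack, and because each growth roughly doubles the size, the number of growth events is logarithmic in the final depth, so the per-call prologue check almost never takes the slow path:

```go
package main

import "fmt"

// deep recurses n frames; every call runs the prologue check the article
// mentions, and the runtime grows this goroutine's stack as needed. The
// stack is copied only O(log n) times, so the cost is amortized across
// the goroutine's lifetime.
func deep(n int) int {
	if n == 0 {
		return 0
	}
	var pad [128]byte // make each frame take noticeable stack space
	pad[n%128] = 1
	return int(pad[n%128]) + deep(n-1)
}

func main() {
	done := make(chan int)
	go func() { done <- deep(100_000) }() // far beyond the initial stack
	fmt.Println(<-done)
}
```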

> There is seemingly no way to make e.g. a map to rely on your own equality
> comparer (sometimes you need this), and honestly, I don’t know how to
> implement a workaround assuming equality is always structural — e.g. even if
> you start substituting the keys with your own wrappers, wrappers still can’t
> override their own equality/hashing, so…

Depending on exactly what you're looking for, there are some easy solutions.
You can have the "key" be embedded into the larger struct, or you can have a
function that returns a key struct. This is an example of using embedding for
this purpose, which I think would be a reasonable solution:
[https://play.golang.org/p/mkHyLZiAYOa](https://play.golang.org/p/mkHyLZiAYOa)
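For readers who don't want to click through, the idea looks roughly like this (my reconstruction, not the exact playground code): put the fields that should participate in equality into their own comparable struct, embed it, and use that struct as the map key.

```go
package main

import "fmt"

// UserKey holds only the fields that should participate in equality;
// a comparable struct like this can be used directly as a map key.
type UserKey struct {
	Region string
	ID     int
}

// User embeds UserKey, so the "key part" is defined in exactly one place.
type User struct {
	UserKey
	Name string // not part of equality
}

func main() {
	users := map[UserKey]User{}
	u := User{UserKey: UserKey{Region: "eu", ID: 42}, Name: "Ada"}
	users[u.UserKey] = u

	// Look up by building just the key; Name plays no role in comparison.
	got, ok := users[UserKey{Region: "eu", ID: 42}]
	fmt.Println(ok, got.Name) // true Ada
}
```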

The problem arises if you're hoping to later get the key back and have access
to the other fields of the struct. In my opinion, that's an antipattern: if
you want the map to hold fields that aren't part of the key, store them in
the value. I've never once (in any programming language) wanted to store an
entire struct in the key part of a map while only using some of its fields
for comparison purposes.

There's no great quote to pull from the article, but there's a strong
implication that Go has to compare every field in order to determine
equality... but this simply isn't true. Go only has to do that to determine
inequality. If you have two pointers, Go can first check whether the pointers
are equal; if they are, its job is done. Otherwise, it dives into the fields
to see whether those are equal or not.

Unless, that is, you want "reference equality" to mean cheaply comparing the
pointers _without_ ever falling back to structural equality... but why would
you want that? You can certainly hack something together with `unsafe`, but I
don't recommend worrying about it until you have a benchmark pointing at a
specific problematic comparison (which I've also never seen). Reference
equality that doesn't fall back to structural (or some other form of)
equality is a gigantic footgun that I've seen trip people up in numerous
languages, and I would rather not have it as a language feature. A lot of
people end up accidentally comparing two different pointers that contain the
same structural value, and they can't figure out why the two values aren't
being treated as equal.
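Concretely, Go spells out both behaviors in the syntax (my own toy example): `==` on the pointers is an identity comparison, while `==` on the dereferenced values is a field-by-field structural comparison.

```go
package main

import "fmt"

type point struct{ x, y int }

func main() {
	p := &point{1, 2}
	q := &point{1, 2}

	// Comparing the pointers themselves is identity comparison: two
	// separate allocations are never ==, even with equal contents.
	fmt.Println(p == q) // false

	// Dereferencing compares the struct values field by field.
	fmt.Println(*p == *q) // true
}
```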

There are a lot of other statements throughout this article that I find
questionable, but I feel like I've already written too long a comment at this
point. Yes, "language I know" is a lot easier to like than "language I don't
really know", but the conclusion was even more one-sided than I expected,
given the level of effort the author seemed to be putting into the article.
The absence of generics and sum types is a real issue with Go... though C#
doesn't have sum types either, so that's a real problem there too.
Almost everything else mentioned in the "C# features missing in Go" section is
just completely unnecessary. More is not always better, especially when it
comes to language features.

When I google "c# web server", the first thing that pops up is a tutorial for
how to write your own from the ground up with sockets. When I do the same
thing with Go, I get a practical tutorial for how to use the standard library
web server to build an interactive wiki application.

Go's standard library is a huge component of the language's success.
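For a flavor of it, a complete HTTP service needs nothing outside the standard library. A sketch of my own (route and handler names are made up), using `httptest` to stand in for `http.ListenAndServe(":8080", nil)` so the example can exercise itself and exit:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// greeting is the pure logic, kept separate from HTTP plumbing.
func greeting(path string) string {
	return "Hello, " + strings.TrimPrefix(path, "/") + "!"
}

// hello is an ordinary function wired into the standard library's request
// router; no third-party framework is involved.
func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, greeting(r.URL.Path))
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", hello)
	srv := httptest.NewServer(mux) // a real server on a random local port
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/gopher")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Print(string(body)) // Hello, gopher!
}
```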

Being able to make an easy-to-deploy binary just by typing "go build" is huge,
and the compile times are great. Sure, .NET supports some form of AOT
compilation, but it's not the default, and when I google ".net core aot", the
first thing that pops up is the CoreRT repo that says "This is an experimental
project. We have no plans to productize it in its current form" which is not
super encouraging.

There are a lot of things to like about Go, but it is _different_ from many
other popular languages, so it's easy for it to make people feel
uncomfortable. I'm eager for Go to have generics, because that really is my
biggest complaint about it.

If you want a cohesive, awesome type system, Rust is built on a decade or two
of research that didn't exist when C# was designed, so C# has to live with a
lot of baggage as it expands to include other, newer ideas. Go has avoided
most baggage simply by avoiding most features, for better or worse.

~~~
sdflhasjd
> Being able to make an easy-to-deploy binary just by typing "go build" is
> huge, and the compile times are great. Sure, .NET supports some form of AOT
> compilation, but it's not the default, and when I google ".net core aot",
> the first thing that pops up is the CoreRT repo that says "This is an
> experimental project. We have no plans to productize it in its current form"
> which is not super encouraging.

While not the same thing as AOT compilation, `dotnet publish
-p:PublishSingleFile=true` achieves the goal of having a self-contained
executable.

~~~
coder543
Definitely good to know.

Not a critical feature these days with CI being as good as it is, but can you
run that command from macOS or Windows and get a Linux binary?

~~~
unsignedint
Yes, you can cross-compile to any supported target using the -r flag followed
by a RID (e.g. win-x64).

[https://docs.microsoft.com/en-us/dotnet/core/rid-catalog](https://docs.microsoft.com/en-us/dotnet/core/rid-catalog)

