The comparisons to Python are a bit dated now. All of the examples can be implemented in Python using typing extensions [0] and statically checked before runtime using mypy [1].
One thing I like about Go is that you can read a decade-old blog and still find it somewhat relevant.
Python was the first language I made money with. However, these days, I struggle to read and make sense of type-ridden, generic-filled, Pydantic-infested Python code.
> Python was the first language I made money with. However, these days, I struggle to read and make sense of type-ridden, generic-filled, Pydantic-infested Python code.
I'm in the same boat. Python was great when it was a snake. I liked additions like the "with" statement - they were very pythonic.
I think it's good when the language evolves, although the direction Python took feels more like grafting - oh people like cats, so let's graft some fur onto the snake; people like bats so let's attach wings. The creature no longer resembles a snake, or any other animal for that matter.
I still think Python 3.0 was the right thing to do - get rid of "old style" classes, default to Unicode strings, be strict about mixing Unicode with bytes, etc. While there, I wish we got rid of the __init__(self, ...) crap - repeating yourself three times was absurd. The language was getting better up until 3.4 or so, and it slowly started going downhill from there, async being the inflection point.
I don't think you can fix it anymore. Python 4.0 will never happen, at least not the way 3.0 did.
Go has its problems too. "if err != nil" is awful - stack unwinding exists, but it's awkward and "bad style". Tuples exist, but not as first-class objects. Generics dropped way too late (though at least I'm happy we went through so many proposals, finally settling on something actually reasonable). Past mistakes cannot be easily undone, so I'm happy it's taking a more conservative approach.
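To make the contrast concrete, a minimal sketch (parse and parseOrPanic are made-up names):

    package main

    import (
        "fmt"
        "strconv"
    )

    // the idiomatic style: every call site repeats the err != nil dance
    func parse(s string) (int, error) {
        v, err := strconv.Atoi(s)
        if err != nil {
            return 0, fmt.Errorf("parsing %q: %w", s, err)
        }
        return v, nil
    }

    // stack unwinding exists via panic/recover, but using it for
    // ordinary errors is considered bad style
    func parseOrPanic(s string) int {
        v, err := strconv.Atoi(s)
        if err != nil {
            panic(err)
        }
        return v
    }

    func main() {
        defer func() {
            if r := recover(); r != nil {
                fmt.Println("recovered:", r)
            }
        }()
        fmt.Println(parse("42"))
        fmt.Println(parseOrPanic("not a number"))
    }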
Loved Python when I got into it circa 2011. Didn't have prior programming experience, minus basic BASIC and HTML. It was simple enough, and the stdlib included enough things to get me going, but it had enough complexity to intrigue me to dive deeper (list comprehensions, bytes vs strings, inheritance vs composition), and I think I learned a ton about programming thanks to it.
But these days when I see modern Python, it looks "uncomfortable". It has so many features, and so many ways of doing things that just figuring that out feels like a massive time sink.
I write server-side software with Go now. I feel like with Go I can just sit down and start solving problems. That applies even to sitting down and diving into a 10 year old codebase.
However, how would real first-class tuples be an improvement in Go? Alef had them, and allowed various manipulations, as well as returning them and passing them to functions.
I note that they are present in Hare, but not in Odin. The latter has the Go-inspired multiple return values but (AFAICS) no tuples, though it does add tagged unions.
Generally I'd not want to store a tuple, preferring a struct with named fields.
So the only uses I can think of are those temporary ones for multiple return values and assignments, which are already covered.
My issue with Go's implicit tuples is similar to the pre-1.18 generics built into the language. We've had a generic append since before 1.0, but it was "magical": you couldn't write your own generic append for e.g. a custom container. We later managed to add generics in a way that didn't make append, make, etc. seem out of place; but make remains a function with an optional, typed second parameter - IIRC the only one of its kind.
The implicit tuples seem just as magical. You can have func f() (int, error), but a := f() is an error. It's arguably better than Lua (which ignores the second value), but arguably loses to Python (which returns a proper, first-class tuple).
It's similar with destructuring. You can have g() struct{int; error}, but not i, err := g() or struct{i, err} := g(). You can have f() (int, error), but again not a := f(). You can have h(int, error) with h(f()), but that's a hardcoded special case, and somewhat unintuitive, since it violates x := f(); h(x) - which would however hold in case of returning a struct. Go is just less composable, full of arbitrary exceptions and edge cases.
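A concrete sketch of those edge cases (f and h are placeholders):

    package main

    import "fmt"

    func f() (int, error) { return 1, nil }

    func h(i int, err error) { fmt.Println(i, err) }

    func main() {
        i, err := f() // fine: multiple assignment
        fmt.Println(i, err)

        // a := f() // compile error: multiple-value f() in single-value context

        h(f()) // fine, but only as a hardcoded special case: the result
        // list must exactly match h's parameter list

        // x := f(); h(x) // does not compile: the "tuple" cannot be held
    }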
Sorry, but your response seems to be about consistency, or "purity" at some level.
While what Go has may be inconsistent, what functional impact does that have?
I can't see a need for 'de-structuring' as such, absent tuples. Even if Go had real first-class tuple types, like Alef did, what would one do with them? As I indicated, I'd not want to store them (other than holding them in locals) prior to use.
As I recall, Alef did support de-structuring with tuples, as well as re-structuring. One could assign either way between an unnamed tuple and an 'aggr' (its name for a struct).
So at most I'd want to break them apart, which the return value thing gives.
Hence if I were creating Go 2.0, I can't see why I'd want to add first-class tuples, but I could see a use for adding tagged unions.
> Sorry, but your response seems to be about consistency, or "purity" at some level.
> While what Go has may be inconsistent, what functional impact does that have?
Same reasons why Go fixed C's inside-out type declarations, function pointer syntax, errno, headers, macros, signal handling, and UB - all the things that technically had no "functional" impact but still directly contributed to consistency, ergonomics, clarity, ease of comprehension, and (either by proxy or directly) correctness.
> I can't see a need for 'de-structuring' as such, absent tuples.
Isn't your playground example of a, b = b, a destructuring a tuple in action? It's basically the same syntax/mechanism as Python's destructuring assignment, which has existed since before Go (except Python's was always more powerful).
It's almost like you can do everything you want with a tuple in Go, except for actually holding it in your hand.
> Even if it had real first class tuple types, like Alef did, what would one do with them?
Similar things you'd do with a function without a name - work directly with the data at hand, without having to do the extra round trip to the attic to declare its name or shape.
> Hence if I was creating Go 2.0, I can't see why I'd want to add first class tuples, but could see a use for adding tagged unions.
That would probably break Go. I liked Chris Siebenmann's take on the subject.
Meanwhile tagged unions bring you virtually all the way to ADTs, where pattern matching (generalised destructuring) is basically a must.
(By the way, Python stumbled really badly when it added pattern matching without even having proper structs. It's almost comical, given def __init__(self, ...), which should've been gone as part of the 3.0 break-the-world.)
> Similar things you'd do with a function without a name - work directly with the data at hand, without having to do the extra round trip to the attic to declare its name or shape.
Note that in Alef, tuples are essentially a dual of an aggr, but with unnamed fields. So one always has to declare its 'shape' (explicitly, or implicitly via inference), in terms of the number and types of its members.
So one could declare:

    tuple (int, byte *, int) t;

and then manipulate t. One could also have a function return a tuple, as in:

    byte *str;
    int value;
    (nil, str, value) = something(7);
However the tuple 'shape' is always statically determined. Is that in your view satisfactory, or not?
Or do you desire something where the tuple is an entirely dynamic type, sort of akin to syntax sugar on top of '[]interface{}'? More akin to the sort of dynamic thing which Python offers?
Such that one could have a program where each call to a given tuple-returning function may yield a different number of elements, potentially of different types within it. So that, if the function's return value depended upon input data, one could not determine the full set of tuples which may be returned?
Since func() (A, B) is different from func() (A, B, C), I'd expect first-class tuples to do the same. At which point there would be a lot of similarity between (A, B) and struct{A, B}. Perhaps they should be equivalent? From which "struct{a A, b B} = f()", or even "switch x.(type) { case struct{a A, b B}: ...}", or "case (a A, b B): ..." could follow. It might be too much Rust influence, but it might be a good influence, given there's already a lot of interest in unions/enums/ADTs (but see "start with goals").
The counter-arguments are that "type A struct{}" and "type B struct{}" are different types, and that anonymous structs are seldom found in the wild (likely due to their verbosity), but perhaps this is a chicken-and-egg problem? Go already does local type inference, because "var mypackage.VeryLongThing = mypackage.NewVeryLongThing()" is stupidly repetitive. But there's always a fine balance between code being terse and readable (I will never wrap my head around APL).
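To illustrate the verbosity point, a minimal sketch of what already works today:

    package main

    import "fmt"

    // legal today, but the shape has to be spelled out twice
    func g() struct {
        I   int
        Err error
    } {
        return struct {
            I   int
            Err error
        }{42, nil}
    }

    func main() {
        r := g() // local inference works; r has the anonymous struct type
        fmt.Println(r.I, r.Err)
    }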
Please share your musings about async. I'm just getting into it right now, and I wonder if my misgivings are a result of my age or a fundamental issue with the Python implementation of the concept.
Seriously, any and all insight and advice appreciated. Thanks.
Any Go function can take a channel, return a channel, or use a channel (and spawn goroutines) internally, unbeknownst to the caller/callee. It may lead to some bad design decisions (code that must not be async may never be), but it won't get in your way when you least need a refactor.
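A minimal sketch of that, with a made-up producer function:

    package main

    import "fmt"

    // producer spawns a goroutine internally; the caller just sees an
    // ordinary function returning a receive-only channel
    func producer(items []string) <-chan string {
        out := make(chan string)
        go func() {
            defer close(out)
            for _, it := range items {
                out <- it
            }
        }()
        return out
    }

    func main() {
        for s := range producer([]string{"a", "b", "c"}) {
            fmt.Println(s)
        }
    }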
If you're having problems with async in languages it isn't your fault.
Spinning off a thread for a single small purpose and synchronizing on the result seems fine in theory, but it is a very small piece of the larger puzzle of concurrency. It usually gets people into trouble once they realize they need more, because anything more complicated than that one use case becomes very tricky.
tl;dr Async/await is what the Python community has settled on, so I don’t think there’s any point in resisting it now, but the fact that it happened is a historical curiosity from my point of view.
-----
I’m not the person you replied to, and I actually don’t have anything against async/await as a pattern where it is needed, but I know that prior to Python getting async/await, there were some moderately popular green threading / coroutine libraries like gevent and eventlet. You would (mostly) just write normal, synchronous Python, and then blocking calls would be intercepted by the runtime and allow other coroutines to take their turn. This felt Pythonic to me at the time, because most code would work in both sync and async environments. You didn’t have to write a separate async version, and you didn’t really have to update old libraries.
The other pre-async/await approach was taken by Tornado and Twisted... basically a form of callback hell. I don't think anyone liked this, but it might have been popular because it worked... and unofficial green threading implementations like gevent/eventlet sometimes broke in interesting ways. (I think officially incorporating a gevent/eventlet-style solution into Python would have overcome most of the issues... but, that's just my speculation.)
I’ve never used Python professionally outside of some short scripts, but it was one of the first languages I used seriously for hobby stuff back during the early days of the Python 3 transition. I never fully understood why Python chose to switch to async/await. Promises can be useful for structured concurrency patterns, but as someone who has been writing Go professionally for a number of years… I just don’t think most code should need to be async-aware.
For a language like Rust, I think async/await makes perfect sense. Rust cannot afford to impose a runtime on everyone, and async/await can be implemented in a very low level, efficient way that gives the developer as much control as they need. This kind of ultra-low-level optimization stuff just isn’t relevant to Python… so, as an outsider, I almost wonder how (in Python) async/await isn’t just a clunkier coroutine system.
If I were to try to rebut my own comment, I would say that async/await was probably chosen because "explicit is better than implicit", and green threading might have been too implicit for the Python community's tastes.
There are a few truths in this. I’ve been involved in a few large projects written in Python and have seen both ups and downs.
Green threads required extensive monkey patching, and debugging those programs was incredibly hard. Instagram moved from them to async/await and wrote a blog post about it, iirc.
But I agree that Python’s async/await implementation is a bit too low-level and could use better abstractions. A lot of the hate Python async gets is due to the `asyncio` library. It’s a shame that the default library is full of deprecated and gotcha-ridden APIs. Trio attempted to fix these, but adoption has been low.
The community settled on async for the same reason I love Go despite all its faults. It’s flawed, but you can build successful systems with it. Lots of companies still write new services in async Python instead of Go because, as big as the Go community is, Python’s is absolutely ginormous.
Plus, LLMs brought more new people to Python than most other languages, and it’s easier to find Python developers and teach them async than to hire Gophers.
> Bob Nystrom worked on Dart at Google and wrote the amazing Crafting Interpreters book. Rob Pike referenced this article in his 2023 talk, Go: What We Got Right, What We Got Wrong, while discussing the trade-offs between CSP and coroutine-driven concurrency.
I know, which is the exact reason the reply states "uncharacteristically". And yet, it fails to acknowledge existing terminology and tradeoffs and just states at the end, in a couple of paragraphs, how goroutines shall deliver us all from evil (which is demonstrably far from the truth). Surely we should expect better from one of Dart's authors than forgetting to mention all the caveats that virtual threading via stackful coroutines comes with?
And this completely ignores that Go applications end up having to reimplement .NET's task system anyway, and that goroutines are poorly suited for "fine-grained" concurrency. Even the UX of goroutines is bad. Want to return a value when one exits? Be sure not to screw up synchronizing and passing the data by hand. One must imagine Gophers writing all that boilerplate happy. I suppose it's a preferred pastime to learning the internals and ceasing to post the same tired shibboleths.
.NET is out of the question when candidates flat-out refuse to work with GoF-style OO. No one claims to be excited about building modern distributed systems in Microsoft’s Java.
That said, the claim that goroutines are “poorly suited for fine-grained concurrency” is misleading. Define fine-grained.
Goroutines have near-zero startup cost and can scale to millions, whereas .NET’s async-await is still backed by a thread pool with scheduling overhead. The real issue isn’t execution but synchronization, and Go provides both message passing (chan) and low-level synchronization (sync.Mutex, atomic). If you’re writing workloads where every nanosecond counts, you’re probably using specialized primitives anyway—not .NET Tasks.
As for UX—want a return value? Use a channel. Need to sync? Use sync.WaitGroup. Need structured cancellation? context.Context. The alternative in .NET involves juggling Task.WhenAny(), ConfigureAwait(false), and state machine overhead—as if keeping all that junk in your head doesn’t come at a cost.
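To make that concrete, a minimal sketch of those three idioms together (work is a stand-in for any cancellable computation):

    package main

    import (
        "context"
        "fmt"
        "sync"
        "time"
    )

    func work(ctx context.Context) (int, error) {
        select {
        case <-time.After(10 * time.Millisecond):
            return 42, nil
        case <-ctx.Done(): // structured cancellation: context.Context
            return 0, ctx.Err()
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        result := make(chan int, 1) // want a return value? use a channel
        var wg sync.WaitGroup       // need to sync? use a WaitGroup
        wg.Add(1)
        go func() {
            defer wg.Done()
            if v, err := work(ctx); err == nil {
                result <- v
            }
        }()
        wg.Wait()
        close(result)
        fmt.Println(<-result) // 42
    }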
Goroutine orchestration isn’t “boilerplate”—it just gives you explicit control instead of forcing you into .NET’s awkward, OO-ridden async/await scaffolding.
> .NET is out of the question when candidates flat-out refuse to work with GoF-style OO.
If you think that's all there is to .NET (besides your account, I have not encountered anyone seriously using the term in a long time), then I'm sorry you had to experience the specific codebase or team environment that made you think this way. Pattern abuse and "GoF" is a Java-ism and comes from not having a sufficiently expressive language, together with a particular culture of application architecture. However, please stop repeating the same phrase in every message and actually look into what differentiates C#, Go, Java, etc. It could be a pleasant and illuminating learning experience.
And unlike Go, C# is a multi-paradigm language which is effective at functional and systems programming :)
> Define fine-grained.
Take a transport, like an HTTP client, and send two/three/n requests concurrently. Or fire off two computations that complete relatively quickly but still benefit from multi-threading.
> Goroutines have near-zero startup cost and can scale to millions
> As for UX—want a return value? Use a channel. Need to sync? Use sync.WaitGroup. Need structured cancellation? context.Context. The alternative in .NET involves juggling Task.WhenAny(), ConfigureAwait(false), and state machine overhead—as if keeping all that junk in your head doesn’t come at a cost.
> Goroutine orchestration isn’t “boilerplate”—it just gives you explicit control instead of forcing you into .NET’s awkward, OO-ridden async/await scaffolding.
Do you actually have any experience with this, or are you simply repeating something that you think is a problem? Context propagation issues are well known, and the resulting UX is commonly complained about as unwieldy in Go. CancellationToken, or even the implicit cancellation propagation in F#, are far superior and long-solved problems. I guess, in the land of Go, if Go does not provide it, it must be impossible.
And why would anyone need to bother with explicit synchronization anyway? That's what 'await' is for. Have more than one task? Just fire them off, and 'await' when you do need a result. No Task.WhenAny/All etc. necessary. These are for more complicated logic, or for mapping a sequence/collection of tasks onto results. Very handy and terse. Turns a 25-line mess in Go into a sleek one-liner.
The context problem is really annoying when using Go. Many functions unnecessarily add a context parameter just so they can cancel a task. They made such nice syntax sugar for channels - why not for context too?
Never used F#. I looked it up for thread cancellation, and it is syntactic sugar similar to defer in Go. Kind of like how an await return is a wrapper around an f(x) callback.
It had been years since I wrote Go code, until last week. My daily drivers have become C#, Python, and PowerShell. It took a day to fall back in and write a tool while learning a new GUI framework, and to find the strengths and weaknesses of the GUI.
Personally, I find Go more pleasurable to work with than Python and C#. In reality, though, the project requirements dictate the languages that may be used. This is the reason why Go was chosen over the others listed, and unlisted ones: C, C++, Swift, Rust, ...
Which would be a better experience for developing for iOS: Swift, C#, or Go? If it were the company's main solution or product, Swift, even though I would need to learn it.
>Take a transport, like an HTTP client, send two/three/n requests concurrently
That is business logic. The back end may be embedded, and this could become a DoS attack.
.NET implemented the code to manage task pools so you don't have to. N async calls are really a set of Z handlers, with syntax sugar.
>Context propagation issues are known
Doesn't a context need to exist for two entities to communicate? Can a system exist where no context is shared?
This is the adapter pattern in GoF, which can be reduced to function g controlling the interactions between functions a and b, where function a or b may act upon the agreed-upon return type, callback type, or memory type. The address of the return, callback, and memory must be shared, either within g() or between a() and b() - from CPU register up to system memory address.
>And why would anyone need to bother with explicit synchronization anway?
Dependent on business logic, including which 3rd-party libraries must be used for the solution. The software might be architected around Reactive objects, which mimic async/await with more well-defined and shared behavior. A custom managed thread may be needed for more accurate time limits, or an event loop is needed.
>sleek one-liner.
Be warned, one-liners may seem useful, until they need to be debugged. There is a great difference between the two C# Reactive objects:

    #if DEBUG
    // Set ID_WITH_ISSUES_OR_TESTING to the proper ID when developing a new user
    // experience or interface. Use it for debugging a single-issue bug that only
    // happens with the same ID, or the same line of 3rd-party product tie-ins.
    if (x.Id == ID_WITH_ISSUES_OR_TESTING)
    {
        var i = DateTime.UtcNow; // breakpoint anchor; UtcNow is a property, not a method
    }
    #endif
> That is business logic. Back end maybe embedded and this could become a DOS attack.
It is not. There is nothing in lightweight concurrency that makes it "business logic"-specific.
> Be warned, one liners may seem useful, until they need to be debugged.
This has nothing to do with reactive patterns below. Task.WhenAll debugs just fine, and if one or multiple tasks throw, you get a nice AggregateException which can be further inspected with specific stack traces and such.
> Dependent on business logic. Including which 3rd party libraries must be used for the solution. Software might be architecture for using Reactive objects which mimic async / await with more well defined and shared behavior. A custom managed thread maybe needed for more accurate time limits or an event loop is needed.
Again: this is a reply to the comment about how, in order to yield a value out of a goroutine, you need to manually fashion a place to store it, then create a WaitGroup or a Cond/Mutex, or maybe a channel, and then manually wait on it. Something you get with a single 'await' keyword in C#. It has nothing to do with business logic, spawning an OS thread, or using a reactive framework.
This is a more elaborate version of how I feel about Python these days. Python is still a much nicer language than something like JS, but it hasn’t been very successful at saying no to things.
I like type hints, but it’s easy to go overboard with them. Pydantic and FastAPI are great. The problem is that typenauts and academics coming from other languages are trying to bring every feature under the sun to Python. The core team hasn’t been able to fight this barrage of feature requests.
The same is true for Go. I regularly see Rust/Haskell folks talking about how things could be better if Go had xyz feature. While it’s true that Go would probably have benefited from a little more expressiveness, how much more? Where do you stop?
I like Go because it’s not Rust or Zig. I mostly write server software, and Go is far more productive in that space. The Go team understands this and is much more protective about scope creep. Keep your type theory off my lawn and let me make money in peace, please.
More specifically, you can do interfaces with Protocol [0]. The main problem with it seems to be that as soon as you use a single Protocol, mypy becomes extremely slow.
A quirk of Go is that I can cast a `[]string` to `interface{}`, but I cannot cast `[]string` to `[]interface{}`. This blog post is my go-to explanation for why the second is not possible but the first is.
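A minimal sketch of the quirk, with the usual explicit-loop workaround:

    package main

    import "fmt"

    func main() {
        s := []string{"a", "b"}

        var one interface{} = s // fine: the whole slice boxes as a single value
        _ = one

        // var many []interface{} = s // compile error: []string and
        // []interface{} have different memory layouts, so Go makes the
        // O(n) conversion explicit:
        many := make([]interface{}, len(s))
        for i, v := range s {
            many[i] = v
        }
        fmt.Println(many)
    }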
Yeah, I have plenty of criticisms of Go, but this is something that it got right.
For those who aren't familiar with the issue: in Java you can assign an array of a subclass to a variable declared as an array of the superclass, which leads to issues if you actually try to mutate it. Imagine Cat and Dog both inherit from Animal; assigning a Dog[] to an Animal[] is totally valid, but then setting one of the elements to a Cat will throw an exception.
I don't know if I disagree with you or not, but having not used a language with inheritance in a while, I don't really miss it at all. Even if it doesn't cause problems on its own, it just doesn't feel like it provides much that can't be provided via some other mechanism.
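For instance, Go's struct embedding gives the reuse without the subtyping - a hypothetical Logger/Server sketch:

    package main

    import "fmt"

    type Logger struct{ prefix string }

    func (l Logger) Log(msg string) { fmt.Println(l.prefix, msg) }

    // Server reuses Logger by composition: Log is promoted onto Server,
    // but a Server is not assignable to a Logger - there is no subtyping
    type Server struct {
        Logger
        addr string
    }

    func main() {
        s := Server{Logger: Logger{prefix: "[server]"}, addr: ":8080"}
        s.Log("listening on " + s.addr)
    }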
For what it's worth, I'm not sure this is even something I'd consider a "notorious" problem with OOP because the solution is really simple and wouldn't break anything else if done correctly from day one: just don't allow using arrays of subtypes in a place where an array of its supertype is expected. This was already known at the time Java was designed, and I'm pretty sure the people who designed Java knew it too; they just didn't pick the right way to handle it in my opinion. There's nothing inherent about OO that makes this bug that much harder to deal with than any other language because it can happen in any paradigm with subtyping (as demonstrated by the fact that it could have been present in Go if they did allow casting arrays of interfaces in that way).
The mutability is a red herring here. It depends on what methods your object has and where the generic type resides within the signature. It just so happens that an immutable array has only one relevant "method":
    get :: Int -> T
Here, T is on the right/return side, therefore an immutable array type would be covariant. But a mutable array also has a
    set :: Int, T -> void
which has T on the left/parameter side and therefore requires contravariance (and you cannot have both, therefore you get invariance).
But the issue is not with mutability, since you could just as well have something like
    setImmutable :: Int, T -> Array(T)
and you still wouldn't be able to make Array(T) covariant.
And for the record, it is only raw arrays in Java that have this issue, generic types like List<T> are invariant (of course you can explicitly ask for a List<? super T> if you want to only call "setter" type methods on it, or List<? extends T> for getters, aka methods where T appears only on return side).
This setImmutable would return a new instance of Array<Animal> which would not be Array<Cat> like the original, and would contain any mix of cats and dogs.
So immutability resolves the covariance problem, I think?
My main point was that mutability and covariance are two orthogonal concepts that just happen to overlap here due to function signature shape. But it seems that you are right, I cannot think of any expression that would type-check with Array<Animal> but not with Array<Cat>. ChatGPT was very unhelpful in trying to produce a counterexample, since it probably trained on too many entry level stackoverflow posts about this issue.
Interface inheritance is OK (as long as you pay attention to these issues) but implementation inheritance is a huge mess because it's effectively antithetical to modularity. Your entire class hierarchy must be understood/evolved as a whole if you use implementation inheritance, that's the only way to avoid running into severe problems with its semantics as the code changes over time. Of course that's not exactly the most common approach given that class hierarchies in many real-world codebases are large and it's just not practical to survey them as a whole.
Is that really a "quirk" of specifically Go? I am pretty sure Java (the implementation of which defined a lot of how numerous languages handle this kind of thing) has the same behavior: you can cast String[] to Object, but not Object[].
Note that array covariance is largely considered a design mistake, and if .NET were to be redone, it wouldn't have it (along with a bunch of smaller things that are artifacts of generics being introduced in C# 2.0 rather than 1.0 - and, well, the problem is much worse in Go, heh).
Other data structures like List<T>, Span<T>, etc. do away with covariance. There's an upcast for ReadOnlySpan<T> but only because it's zero-cost and does not introduce the issues stemming from covariance.
Luckily, you almost never see someone use array covariance beyond occasional object[] upcasts (and the compiler is also good at reasoning whether to insert covariance checks or not).
.NET is still called COM+ in .NET's source code, they initially envisioned .NET as a successor to COM. They wrote about COM+ in 1997 in an issue of Microsoft Systems Journal (they talked about GC, runtime type info, etc.) It makes me believe the idea for .NET (COM+) existed before the lawsuit. It's possible that after the lawsuit, the COM+ and J++ teams were merged to reposition it as mainly a Java alternative.
That was the Ext-VOS project that I made a reference to.
By the original goals, WinRT was supposed to replace .NET; after Sinofsky and his followers took over Windows development post-Vista, COM has been the main API.
"Turning to the past to power Windows’ future: An in-depth look at WinRT"
I never heard about "Ext-VOS". Tried googling it, and all mentions of it are from you ("pjmlp"), on Reddit, Github, etc. :) Where can I read more about it?
OK, so from what I've read, "VOS" was the older name for CTS (Common Type System), not the name of the whole project. The project was named COM+, COM Object Runtime, and Project Lightning at different times. "Ext-VOS" ("extensions to VOS") is a document that primarily proposes generics, which were only added in C# 2.0.
There was mostly no reason to cause unnecessary code churn by renaming the pre-existing complus-named variables inside dotnet/runtime. External-facing features and documentation never reference it. For compatibility reasons it still recognizes env. variables prefixed with COMPlus_ alongside DOTNET_, though.
Yes, and 25 years later Java is still relevant enough that Microsoft has yet again become a Java vendor with their own distribution and made the key contribution for Windows ARM support; the VSCode Java experience is still ahead of C# DevKit, thanks to collaboration with Red Hat, and doesn't require an additional licence.
At the same time, the JVM has embraced the polyglot ecosystem the CLR was supposed to be, while C# seems to have sucked the life out of all those language implementations demoed at the launch back in 2001, with the .NET SDK being offered on computer magazine CDs.
Which is kind of interesting, how things have changed 25 years later.
At least we finally have cross-platform support (ignoring the Mono and DotGNU efforts), and good AOT instead of NGEN, which should have been there in .NET 1.0.
If you have to reach for popularity as an argument in favour of technology X or platform Y, perhaps it's not a technical point you want to make?
> VSCode Java experience is still ahead of C# DevKit thanks collaboration with Red-Hat, and doesn't require an additional licence.
I'm not sure if you're intentionally attempting to make inflammatory replies or something in what I said rubbed you the wrong way.
(for other readers - this marks me posting for 20th time here that DevKit is an optional product and thousands of developers are happily coding in C# in VS Code and VSCodium, Neovim and Emacs without ever running into it)
Unfortunately, the best technology doesn't always win out, and the turns these stories take are nonetheless quite interesting.
I am stating facts. Do you want links to .NET team interviews where they assert it is a business decision that VSCode is never going to achieve feature parity with VS for .NET?
VSCode for Java doesn't have such an artificial constraint, hence the better tooling, mainly implemented by Red Hat.
C# DevKit being optional doesn't change the fact that specific features are only available when users opt into using it, with a corresponding Visual Studio license.
And yes, this irritates me, because I feel it is a disservice to the .NET community how Linux and macOS developers are kind of 2nd class citizens - not because of the .NET team themselves, but because of higher-up Microsoft management.
No, I don't. The developer tooling teams which ship Visual Studio and DevKit are almost completely separate from the teams overseeing the development of .NET itself (including the team responsible for the SDK). And, honestly, I could not care less, because it does not affect me or my colleagues in any material way, nor does it affect the actual Linux or macOS users.
And also, have you tried writing Java in VS Code? I have, and it is overall worse (read: less stable) than just using IntelliJ IDEA, in a way that isn't an issue when doing the same in C#/Go/Rust/TS.
Ah, so it doesn't affect you that graphical profiling tools are only available in VS, that management thought for a brief second about making hot reload VS-only (saved by .NET community uproar and folks like Hanselman jumping into the discussion?), or that the MAUI designer was thrown into the garbage can, and what is now available is pretty much WIP?
Lucky one I guess.
I have used it. Its stability issues are orthogonal to feature parity - it is the whole of Eclipse running headless, which is the point: features.
C# inherited this from Java, which added it because it was designed without generics and therefore decided that the additional expressiveness provided by enforced array covariance was worth the tradeoff of having an unsound type system.
Huh, fair enough!! What's then funny in the context of this thread is that, while I am sure I knew that 20-25 years ago, it feels so wrong that it would support that -- and more like a "quirk" that it ever would -- that it surprises me (though it certainly doesn't shock me: most things have many quirks).
It's not just the O(n); you must think of interface{} as a pair (concreteType, pointerToMutableData). It's a can of worms - allocating two word-sized objects for each rune is just the surface layer; strings in particular are immutable.
I think Go does the right thing by making this allocation and assignment explicit; you may be a little less surprised by how the program actually behaves. https://go.dev/play/p/PzuBpM66VX2
And decoding subsequences of UTF-8 code units (bytes) into Unicode scalars (runes), which also forces a copy.
Why did Go pick the same syntax for cheap conversions and expensive ones, though? I'd expect this to be a standard function, not a type conversion.
Don't all conversions require copying the value at the level of the language semantics? You can cast a float to an int, but you can't assign an integer value to a float variable (leaving aside whatever optimizations the compiler might make). Casting a float to an int is only cheap because a float is a fixed size. So it seems unsurprising to me that casting a larger value results in a proportionally larger copy.
In all of these cases, it's just a type-cast which is zero-cost, and I think that's what makes it feel surprising that casting between strings/runes specifically incurs more significant computation than any other cast.
All of those are copies and hence O(n) in the size of the value copied. This is hidden somewhat because your examples use constants and literals. (As numeric constants aren’t typed in Go the first example arguably doesn’t involve a copy, but the rest do.)
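A quick sketch of those copies in action:

    package main

    import "fmt"

    func main() {
        s := "héllo"

        b := []byte(s) // O(n) copy: strings are immutable, byte slices are not
        b[0] = 'H'
        fmt.Println(s, string(b)) // héllo Héllo - the original is untouched

        r := []rune(s)              // O(n) decode of UTF-8 bytes into runes
        fmt.Println(len(s), len(r)) // 6 5 - bytes vs runes
    }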
Yeah, I was really surprised by how much of a performance difference it made when calling variadic functions that accept `args ...interface{}`. Literally every single call is going to allocate for each parameter when passing structs without a pointer.
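A rough way to observe it, assuming a stand-in sink function (testing.AllocsPerRun also works outside tests):

    package main

    import (
        "fmt"
        "testing"
    )

    type point struct{ x, y int }

    var hold []interface{}

    // storing args in a global forces the arguments to escape to the heap
    func sink(args ...interface{}) { hold = args }

    func main() {
        p := point{1, 2}
        allocs := testing.AllocsPerRun(1000, func() {
            sink(p, p) // the backing array, plus one box per struct value
        })
        fmt.Println("allocations per call:", allocs) // expect ~3 here
    }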
You have to manually collect and then apply the profile, though (very few teams do this). A proper solution is whole-program optimization (at least like the one done by .NET's NativeAOT). But, typical of Go's design philosophy, a stop-gap solution was chosen instead.
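For reference, the manual loop looks roughly like this (assuming Go 1.21+, where -pgo=auto is the default):

    # collect a CPU profile in production (e.g. via net/http/pprof), then:
    $ cp cpu.pprof ./default.pgo   # drop it in the main package directory
    $ go build ./...               # -pgo=auto picks up default.pgo automatically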
[0] https://docs.python.org/3/library/typing.html (from Python 3.5 - released Sep 2015)
[1] https://github.com/python/mypy (v0.1 released Sep 2009)