Hacker News
More powerful Go execution traces (go.dev)
376 points by nalgeon 10 months ago | 148 comments



What about error stack traces? I find it crazy that you're supposed to grep error strings and pray they're all different.


Errors are different in every stdlib or 3rd party package I've ever used.

The ecosystem as a whole decided that errors should be useful values, properly bubbled up - and that your application should not be so hard to reason about that you need a stack trace to know where in the deep recesses of your code something went wrong.

I love stack traces in Java, but I don't miss them at all in Go.


Sounds a bit apologetic. I don't think that the ecosystem decided. The language creators decided, and the ecosystem couldn't do much about it.


Go's development process is fairly open and democratic though; if enough people flag it up, and if a case can be made for stack traces, it can be added.

As it stands though, the consensus is that it doesn't add enough value, it's too expensive (cost/benefit), errors are normal program flow, and 3rd party libraries can add the functionality if one decides they need/want it in their application.


I say ecosystem to mean the majority of packages. That's the ecosystem.


"your application should not be so hard to reason about that you need a stack trace to know where in the deep recesses of your code something went wrong."

Curious how Go is better than other languages in this regard.


That quote isn't about Go specifically though, but about general application design. It seems to be a subtle snipe that languages with stacktraces like Java, C# are too complicated or have too many layers ("lasagna code"), making them more difficult to debug.


We wrap all errors with a stack.Wrap function that adds a stacktrace to the error. This allows us to add stacktraces to logs and error-reporting tools like Sentry. Huge time saver.


You can do it yourself if you want using `runtime.Stack` and a custom error type


Embarrassingly, I’ve been writing Go for a while but never really thought about it. Now that it’s been mentioned I’m curious why this isn’t baked in by default for errors. Does anyone know?


You’re supposed to prepend the context to an error before you return it, so the final error reads like a top-level stack trace: “handling request: fooing the bar: barring the baz: connecting to DB: timeout.”


Well, how would that work?

Errors are just values. They don't have any special meaning nor is there any special ceremony to create them. A panic must start from calling panic(); there's no function or special statement to create or return an error.

It might be possible to augment every `error`-typed return such that the concrete type is replaced by some synthetic type with a stack trace included. However, this would only work because `error` is an interface, so any function returning a concrete-typed error wouldn't be eligible for this; it would also add a lot of runtime overhead (including, at the very least, a check for nil on every return); and it would cause issues with code that still does == comparisons on errors.

On the whole, I think error-wrapping solves the problem of tracing an error well enough as the language exists today. If errors are going to start having some magic added to them, I think the entirety of error-handling deserves a rethink (which may be due, to be fair).


> I’ve been writing Go for a while but never really thought about it.

Don't feel bad, I've tried to do this in some places, but I'm not sure it's worth it. It adds a ton of boilerplate to Go's already verbose error handling, since you need to wrap every error that gets returned from libraries.


Creating stack traces is expensive.


You only do it when there is a non-nil err, and being able to have a stacktrace is worth more than whatever it costs in CPU.


Good error-wrapping discipline works better than stack traces, because not only do you have a trace of the calling context, you also have the values that caused the problem. Granted, a stack trace "always works", but it will never have the values of important variables.


Now you mention it, a stack trace with function calls and all their arguments would be really powerful. But also expensive; ideally it would have zero overhead, or only incur the cost if you actually look at it / want to debug it.


One of the major optimizations is to pass function arguments in registers instead of on the stack. These registers might preserve their original values by the time you unwind the stack, but in many cases, probably won't. Preserving those values would lead to fewer available registers and/or more frequent register saving, which would create at least some overhead, even if you don't ever inspect it.

There are probably lots of situations where it's worth it, though.


We do both and it works great


That's still under the assumption that errors are exceptional, rare, and require a stacktrace to debug. But most errors are not exceptional and do not need debugging, like idk, a row not found error in a database.

A crash, sure, that might need a stacktrace to debug. But that's already in place.


The Flight Recording pun did not go unnoticed.

https://docs.oracle.com/javacomponents/jmc-5-4/jfr-runtime-g...


Really cool work. I'm still slightly paranoid about the overhead of leaving tracing enabled all the time but if it really is 1-2% that'd be totally worth it in many cases.

More awesome work from the Go team. Thanks!


> but if it really is 1-2% that'd be totally worth it in many cases

As one of the people who worked on the optimizations mentioned in the article, I'm probably biased, but I think you can expect those claims to hold outside of artificially pathological workloads :).

We're using execution tracing in our very large fleet of Go services at Datadog, and so far I've not seen any of our real workloads exceed 1% overhead from execution tracing.

In fact, we're confident enough that we built our goroutine timeline feature on top of execution traces, and it's enabled for all of our profiling customers by default nowadays as well [1].

[1] https://blog.felixge.de/debug-go-request-latency-with-datado...


I love datadog but you guys really give splunk a run for their money in the invoicing department :/


lol


Go has a race detector that you can leave on for a while in like a dev environment and it’ll flag race conditions. There was documented overhead and at some point even a memory leak but they spent months looking into it and eventually plugged the leak and the saga is now part of their official docs. It’s really interesting to see that kind of stuff laid bare so I would trust that this feature will at least run reasonably well and if it doesn’t, they’ll likely fix it: https://go.dev/doc/articles/race_detector https://go.dev/issue/26813


The race detector is an invaluable complement to a language like Go that lacks compile time safety. It meaningfully increases the confidence in code and is anecdotally both good at finding races while also giving straightforward error information to plug them. It won’t find everything though, since it’s runtime analysis.

In general, Go tooling makes up for a lot of the intrinsic issues with the language, and even shines as best in class in some cases (like coverage, and perhaps tracing too). All out of the box, and improving with every release.


Only if we leave Smalltalk, Erlang, Java and .NET ecosystems out of the picture.


Ah yes, the notoriously popular and actively contributed Smalltalk ecosystem.


Doesn't change the fact that it was there first, with a rich standard library, and that it was one of the ecosystems that drove the design of rich developer tooling, decades before Go was even an idea.


No one claims Go was "first"; I don't get why you're so obsessed by that. The previous poster just said "The race detector is an invaluable complement". And it is. There was no comment at all about any other language; it just says "this is a nice thing about Go".

If you don't like Go then fine, no problem. Everyone dislikes some things. No hard feelings. But why comment here? What's the value of just stinking up threads like this with all these bitterly salty comments that barely engage with what's being said? Your comments alone make up 13% of this entire thread, and more if we count all the pointless "discussions" you started. It's profoundly unpleasant behaviour.


The Go community is quite deep into the idea that it was first with fast compilers, co-routines, slices, static compilation, and rich standard libraries.

The commenter said more than that.

"In general, Go tooling makes up for a lot of the intrinsic issues with the language, and even shines as best in class in some cases (like coverage, and perhaps tracing too). All out of the box, and improving with every release."

Best in class?!?


> The cost of race detection varies by program, but for a typical program, memory usage may increase by 5-10x and execution time by 2-20x.

That's "let's run the race detector and get some coffee" overhead, not "let's leave it on in dev" overhead.

Still cool that they have it available!


The main value of the race detector is enabling it for tests (go test -race), and then writing sufficiently exhaustive unit tests to cover all code paths.

I do think most gophers, instead of tests, use a combination of prayer and automatically restarting crashed processes when they inevitably panic from a race, which seems to work better than you'd expect!


I am always running everything in race mode when developing locally and have caught stuff before.


I've started development of something similar for Clojure's core.async library, which implements Go-like channels. This is something I cobbled together to trace the origin of a subtle sporadic bug that would happen once every 10,000 runs in a program that makes heavy use of channels. I'm not really working actively on it, so it only covers a subset of the functions in core.async but I keep on expanding it, adding debugging support for functions every now and then when I need it.

It represents the flow of data between channels/threads as a sequence diagram using PlantUML. Today I implemented core.async/onto-chan! (it spins a thread that will take items from a sequence and put them onto a channel). Here's what it looks like:

https://pasteboard.co/VhvDroREOvOQ.png

It's especially useful in Clojure, as the experience with channels is not as polished as in Go (or so I heard): when a put or a take hangs, your only recourse is to sit and squint at your code. This tool will color them red, so you can immediately spot what's wrong. It also allowed me to spot a channel spaghetti plate I had not anticipated and wouldn't have noticed otherwise.

For now I've taken a "maximalist" approach which includes inspecting buffers, so it incurs a heavy runtime penalty that's only ok at dev-time (+ it keeps track of the whole trace, so no flight recording in sight for now).

In the future I'd like to give it a proper interface (it's just an svg file with hoverable elements for now), maybe using constraint-based graph layout to lay out the sequence diagram using cola.js/adaptagrams, or the sequence diagram from Kiel University's KIELER project once it's integrated into the Eclipse Layout Kernel. Thesis from the developer of this module:

https://web.archive.org/web/20220519080528id_/https://macau....


This is really amazing! I look forward to giving it a try. Kudos to the Go team and contributors.

Say what you will about the language itself, the Go stdlib is one of the most useful and pleasant standard libraries I've ever worked with. It gets better with every release, and great care is taken to make it featureful, performant and well documented. These days for most general things I rarely have the need to reach for an external package.


> I rarely have the need to reach for an external package

Agree 100%! And it gets better as time goes on.

For example the recent `log/slog` introduction to the stdlib kills the need for 99.9999% of people to use third-party logging, because structured logging is now in the stdlib.

Same with the new http mux. Many people will now be able to migrate over to stdlib because of the richer functionality, and only the small community of outliers who are doing stuff like regex mux parsing will need the third-party libs, but with time no doubt regex mux will make its way to stdlib too.


I remember hearing from people in the early days of Go that it was only nice because it hadn't aged. It didn't have the "barnacles" that Python and Java had acquired, but that it surely would after a decade. At the time, Python and Java were about 15 years old (since their 1.0 releases). Well, it's been 12 years and Go is still clean and minimalist. Maybe all of the cruft comes in years 12-15? :shrug:


Go did acquire some of that though; a few packages are deprecated, some have different ways of doing the same thing (e.g. the addition of context and fs packages added new paradigms, and later netip and new JSON package added "urllib2"-like "v2" packages), and some things weren't really a good idea to start with.

I do think the situation is better than in Python and overall not bad after 12 years, but it's not as clean and minimalist as it could be.


I think they're saving up a list of things to clean up for a possible Go V2, but at the same time they're in no rush because the things they could strip out aren't painful enough yet.


Go has many barnacles. Off the top of my head:

- context.Context is two different barnacles. The first is the thing itself; do I "need" context here? Probably should add it just to be safe. The second is the myriad of older libraries that don't use it or have tacked it on with a duplicate set of functions/methods. Library authors seem to not understand the point of major versions or else are afraid to pull that trigger.

- http.Client is a barnacle. First, there's the fact that you have to close the response.Body even if you don't use it. Then there's the difficulty of adding middleware or high-level configuration to it. Then there's the fact that they broke their own rules for context.Context and instead of taking it as a parameter in the methods, it's stored as part of a struct. You can work around many of these problems for your own code, but not when using third-party libraries.

- The entire log package is a barnacle. Thankfully we now have log/slog but log existed for so long that lots of third-party libraries use it. Even if the library authors knew to avoid the footguns that are Fatal/Fatalf/Fatalln and Panic/Panicf/Panicln, context and level support are spotty at best.

- The (crypto/rand).Read and (math/rand).Read debacle. Really, the entire math/rand package is a barnacle. Thankfully this has also been addressed with math/rand/v2 but the old one will live on for compatibility.


> - The entire log package is a barnacle. Thankfully we now have log/slog but log existed for so long that lots of third-party libraries use it. Even if the library authors knew to avoid the footguns that are Fatal/Fatalf/Fatalln and Panic/Panicf/Panicln, context and level support are spotty at best.

Heh. I wrote some code that called log.Fatalf with an error message, and then later added a verbose switch when running the program (kind of an `if !verbose { log.SetOutput(io.Discard) }` type of thing).

Sure enough, when I ran it with some input that led it to hit the log.Fatalf, the program exited, but no error message was written out. Cue quite some head-scratching!


The usage of the term execution trace really threw me off. I am more familiar with that term meaning an instruction execution trace, or maybe just a function trace, so it made no sense why stack unwinding would be a bottleneck. Turns out it is actually a goroutine event trace (with a stack trace at the time of each event) [1]. I guess Go does not have a code execution trace package?

[1] https://pkg.go.dev/runtime/trace


Go doesn't have an instruction (execution) or function call tracer. Go's tracer is primarily tracing scheduler events. So maybe the term scheduler tracer should have been used?

Anyway, using the term "execution tracer" in Go goes back to the initial design doc from 2014: https://docs.google.com/document/u/1/d/1FP5apqzBgr7ahCCgFO-y...


Tracing is a way to see what's happening in your code, which Go provides:

https://www.infoq.com/articles/debugging-go-programs-pprof-t...

Many other languages use the same technique; see Python with: https://github.com/bloomberg/memray


I was talking about execution tracing, specifically, not tracing as a general concept. Tracing as a general concept just means a non-sampled time-series event stream and as commonly used these days generally also includes "stacking" to distinguish it from generic event streams or logging.

When you add an additional term like "execution" you are specializing the term to mean an event stream of "execution". In areas I am familiar with, that would normally mean a trace of the complete execution of the program down to the instruction level, so you can trace the precise "execution" of the program through the code.

What is described here would, in the terminology I am familiar with, be more like... a thread status and system event trace, just applied to goroutines and the Go runtime, respectively, I guess? It does also include the stack trace at the time of the event, so it does have more data than that, but that is qualitatively different from an instruction execution trace that allows you to trace the exact sequence of execution of your program.


I've sometimes heard that the JVM has best-in-class tooling for server troubleshooting. How does Go compare to it now?


Go is one of the best languages in terms of tooling; it has just one binary that contains everything. I don't know any other language that includes all of that out of the box:

- testing

- benchmark

- profiling

- cross compilation (works from any platform, e.g. compiling a Windows .exe from your Raspberry Pi)

- some linting

- documentation

- package mgmt

- bug report

- code generation

- etc...

Java is probably more advanced in some fields (like tracing/profiling) but it lacks others.


> single binary

> cross compilation ( that work from any platform, like compiling a windows .exe from your raspberry pi for example )

This, I think, is one of Go's best selling points.


There is really nothing in Go that the Java ecosystem lacks, in its 30 years of existence.

The only thing one could plausibly argue Go does better is value types, but even that requires careful coding so that escape analysis is triggered, and in that sense it is only a matter of using a JVM implementation like GraalVM, OpenJ9, Android ART, PTC, Aicas, or Azul.


Golang also has some of the worst tooling, because everything is based on what comes built in, and because those aren't specialized projects, they're very limited in capability and configuration.

Coming from ts, tooling like tsconfig has a lot of options, but sensible defaults can be set with a single flag like strict mode. If some org has some specialized needs, they can dive into the configuration and get what they need.

With golang, not only would it be a lot for any single team to offer all those features at a decent level of polish, the golang culture in particular is very, very resistant to small bits of comfort because of dogma like "worse is better". It's kind of similar to Haskell's "avoid success at all costs".


This is a classic example of a 'contrarian' take for what feels like the sake of it. TS/JS tooling is a total and utter disaster at this point.

The commonJS/module transition is a nightmare. The fact that something like 'prisma' exists - a c-written 'query engine' that turns prisma js into sql.. wut?

This ecosystem is on a highway to hell literally.. I really hope Bun works out, because I do like Typescript, I do like programming in it - but I'm absolutely done with spending hours upon hours figuring out configs in tsconfig, jest, package.json eslintrc, prettier, vstest and whatever the next 'new' abstraction is. In Go I can just focus on the code and forget about the rest


I can’t believe we’re seriously comparing Go's polished tooling with the bloatware and half-baked crap in js/ts land, especially when it comes to package management and such.

Also, there’s a lot of Go tooling that doesn’t come from the Go team itself, because the Go standard library exposes first-class code introspection utilities; go vet is an example of this.


Why spend your time coding when you could be fiddling with configuration files all day? I love re-learning how to make package.json, tsconfig, esbuild, eslint, prettier, mocha and webpack play nice every time I start a new project.


Missing peer dependency. Have you tried nuking your node_modules?


No I'm busy upgrading webpack and rewriting all my tests from jest to vitest


I've been involved in the Go community for almost 10 years, and I've never once heard anybody say "worse is better". Comparing Go's tooling to Typescript feels like a farce, especially since you've neglected to mention the horror that is npm.


TSConfig having a lot of options isn't a selling point though; you shouldn't need any of those. That said, the flags and strict mode are intended for when you have an existing "sloppy" JS codebase or mindset.

You shouldn't need anything (except strict mode) in a new TS project.

That said, TS is quickly becoming the antithesis of what Go tries to be; every release for the past few years has been full of features where I'm like, "I will never need or use this". Some conveniences have been improved on - like better type inference - but that will mainly allow for cleaning up workarounds.

"some orgs with specialised needs" are going to be legacy projects, either older TS versions that didn't have the features that it does now, or JS codebases. This isn't generally an issue with Go projects, most of which are greenfield. That said, there are some X-to-Go conversion projects that produce less than ideal code, like usage of the `interface{}` type, which at this point is a code smell.


TS tooling excels at preventing you from getting work done


Which tools are you talking about?


No?


For good reason, the JVM has a ton more knobs that need adjusting. You can't just run Java code. The JVM has a lot of tricks you have to customize for based on your workload.

For years, until 1.19, the Go GC had only one tuning parameter (GOGC).


I don't see the connection you're making between knobs that adjust runtime behavior and tooling. As an aside, "you can't just run java code" is a bit hyperbolic, plenty of people "just run" java apps and rely on the default ergonomics. The modern JVM also offers more automated options, such as ZGC which is explicitly self tuning.


With Go, you never spend hours/days debugging broken builds due to poorly documented gradle plugins. As an example :)

You really, truly, just run.


That is the problem right there, using Gradle without having learned why no one uses Ant any longer.

As for Go, good luck doing "just run" when a code repo breaks all those URLs hardcoded in source code.


That’s not a real issue:

* the go module proxy ensures repos are never deleted, so everything continues to work

* changing to a new dep is as easy as either a replace directive or just find and replace the old url


It requires work to keep things working, exactly the same thing.


The module proxy is used by default and requires no work. I don’t think what you’re saying makes much sense.


So there is some magic pixie dust that will fix url relocations for the metadata used by the proxy, without having anyone touch its configuration?


I think I’m missing something, because I’m pretty sure you understand the go module proxy (having seen you around here before) but I really don’t understand what problem you’re talking about.

If a module author deletes or relocates their module, the old module is not deleted or renamed from the module proxy. It is kept around forever. Code that depends on it to not break does not break.

If they relocated and you want to update to the new location, perhaps for bug fixes, then you do have to do a bit extra work (a find and replace, or a module level replace directive) but it’s a rare event and generally a minor effort, in my opinion, so I don’t think this is a significant flaw.

For most users most of the time they don’t need to think about this at all.


> good luck doing just run when a code repo breaks all those URL hardcoded in source code

You're on a tear in this thread being wrong about how Go works, but I'm really curious what extremely specific series of events you're imagining would have to happen to lead to this outcome. If I use a dependency, it gets saved in the module proxy, I can also vendor it. You would have to, as a maintainer, deliberately try to screw over anybody using your library to accomplish what you describe.


Not when one git clones an existing project, only to discover the hardcoded imports are no longer valid.

Being a maintainer has nothing to do with some kind of gentlemen's code of conduct.


> You can't just run Java code. The JVM has a lot of tricks you have to customize for based on your workload.

This sounds like something you would hear 10 years ago in relation to the CMS garbage collector. Since Java 9, G1 has been the default gc for multi core workloads and is self-tuning. The CMS gc was removed 4 years ago. If you do need to tune G1, the primary knob is target pause time. While other knobs exist, you should think carefully before using them.

We run all of our workloads with vanilla settings.


Note that Go needs those knobs just as much as the JVM does, at least some of them. They just didn't want to expose them.


Which knobs does go need?


Fine-grained control over various GC phases and decisions (such as the level of parallelism, max pause times). Until Go 1.19, it was even missing a max memory limit, and even now it only has a soft memory limit.

Additionally, of course it would be nice to have more GC options, such as choosing between a throughput-oriented GC design and a latency-oriented one, having the option of a compacting GC of some kind to avoid fragmentation, or even having a real time GC option.

Go has chosen a very old-fashioned GC design with very few tuning parameters possible, but even so it only exposes a very basic form of control.


I agree with most of your points but how is it "old-fashioned"? To me, that means things like reference counting or long stop-the-world pauses, neither of which are true of Go.


I would rather just write code and trust the existing GC than mess around with knobs all day. I suppose there are < 1% of projects that may see some benefit in messing with the GC.


I think Go is catching up, but it's still significantly behind. For example, Go memory profiles are much, much worse than Java's - they aren't even integrated with the GC to show the ownership graph (they can only show where each object was allocated, instead of which other objects are holding a reference to it). The CPU profiling parts seemed more up to par. This tracing thing is nice; I'm not as familiar with this area of Java. Also, I don't think Java has a built-in race detector (except perhaps for the detection of concurrent writes and iterations in collections?).

Also, the OpenJDK JVM supports live debugging and code hotswap, going so far as to de-optimize code you're debugging on the fly to make it readable. Go doesn't support live code reload even in local builds.


One of the weakest areas is analyzing heap dumps.

The current format has very limited tooling; "go tool" has some extremely rudimentary visualization. There is gotraceui [1], which is much better, though you need to use Go trace regions to get much useful context.

There's a proposal to support Perfetto [2], but I don't know if anything has come of it.

[1] https://gotraceui.dev/

[2] https://perfetto.dev/


I really like Go's tooling, and while I'm fluent in Java I've never been on a full-time Java team. I've remarked to Java friends in the past that I think Go has best-in-class tooling and had my ass handed to me, in detail and at length. By many accounts, Java is the gold standard for tooling in language ecosystems. It's a compliment to Go that you'd even consider the comparison. :)


Java's tooling is indeed very good but comes with a huge downside of being bound to an IDE.


Go's standard library is a shining example of what all standard libraries should strive for. Yet, we still have some languages whose developers refuse to include even a basic http API in their standard libraries in an age where even embedded systems have started to speak http. Imagine if the same had happened with TCP and UDP...

Here's to the continued success of Go and other sanely-designed languages.


Can you please not take HN threads into programming language flamewar? We're trying to avoid that kind of thing here - see https://news.ycombinator.com/newsguidelines.html.

We detached this subthread from https://news.ycombinator.com/item?id=39710822.


I think it’s a bit extreme to characterise it as sanity vs insanity. I think you need at least one of a large stdlib or good dependency management.

I love Go, used it since the 1.0 release and have used it at work for years. No complaints about the language. But for the longest time Go didn’t have quality dependency management and pulling in dependencies was annoying. Building your programs with the large, high quality stdlib was the path of least resistance.

Since Go 1.0 (2012) most new languages have coalesced on good dependency management. Rust, for example, copied Ruby’s bundler idioms since before 1.0 (2015). People who needed good quality libraries were able to pull them in with minimal hassle. That’s why they didn’t need a large standard library to succeed. I’ve written more about the tradeoff here - https://blog.nindalf.com/posts/rust-stdlib/

To you, I’d suggest using less charged language than "sanity". In this case there’s a good technical case for both ways, and it’s not productive or nice to imply that people who choose a different path from you are insane.


> To you, I’d suggest using less charged language like sanity.

To the grandparent, I'd suggest keeping the language as it is. Status quo in the current programming ecosystem is often insane, and when it is, we should call it out.

> it’s not productive or nice to imply that people who choose a different path from you are insane.

In a general sense, I agree. However, when it comes to programming, insanity is so widely accepted as normality that I think it would be counterproductive to blunt the language used to describe it.

As an example of insanity for those who may (justifiably) think I'm talking out of my bottom - in NodeJS ecosystem the standard library is so bad that it is considered acceptable to use dependencies for even the simplest tasks, which caused the `leftpad` incident which broke countless programs.


Leftpad is not appropriate here because Node.js didn't add any additional String methods anyway. Therefore it could have been prevented if JS proper had enough String methods beforehand, regardless of attitude towards dependencies. Grandparent's points in comparison are more about the scope of standard libraries. (And in terms of convenient String routines, Go is still a bit inferior to Python!)


> Therefore it could have been prevented if JS proper had enough String methods beforehand, regardless of attitude towards dependencies. Grandparent's points in comparison are more about the scope of standard libraries.

Leftpad could just as well have been a part of some standard library, like...

> in terms of convenient String routines, Go is still a bit inferior to Python

... Go strings library :) https://pkg.go.dev/strings


Maybe I should've been more clear about "scope".

Standard libraries can pick their scope of functionalities and depth (or completeness) for each functionality. Nowadays every programming language is expected to come with a good string support, which is about the scope. But there are a lot of string operations you can support. PHP for example has `soundex` and `metaphone` functions for computing a phonetic comparison key. Should other languages support them as well? I don't think so, and that's about the depth or completeness because you can never support 100% of use cases with standard libraries alone. Ideally you want to cover (say) ~90% of use cases with a minimal number of routines.

Leftpad was clearly due to the lack of depth in JavaScript and Node.js standard libraries. JavaScript now has `String.prototype.padStart`, and an apparent name difference suggests a good reason that some standard library may want to avoid it: internationalization is complex. A common use case is to make it aligned by padding space characters, but that obviously breaks for many non-Latin scripts [1]. And yet many people tried to use it, so `left-pad` was born with a suboptimal interface, and we know the rest.

HTTP support is different. I totally agree that HTTP is something you want to support in a sorta native fashion, but a standard library is not the only way to do that. In fact it is not a good place to do that because it is generally slower to change (Go is a notable exception but only because its core team is very well funded and strongly minded). Python did support HTTP in its standard library for a long time, but it doesn't support HTTP/2 or HTTP/3 at all and Requests or urllib3 are now the de facto standard in Python HTTP handling. Modern languages try to balance this issue by having a non-standard but directly curated set of libraries. Rust `regex` is a good example, which may be surprising given that even C++ has native regex support. But by not being a part of the standard library, `regex` was able to move much faster and leverage a vast array of other Rust libraries, and it is now one of the best regex libraries across all languages. That's what nindalf wanted to say by different "ways".

[1] For example, `"한글".padStart(5)` will give you `" 한글"` but its visual width is larger than 5 "normal" characters. This is not merely a matter of fonts and Unicode has a dedicated database for the character width ("East Asian Width"). Some characters are still ambiguous even in this situation (e.g. ↑), and the correct answer with respect to internationalization would be: don't, use a markup language or (for terminals) a cursor movement instead.


I think there's a distinction between Rust and Go in that sense. Go has a clear objective: be a good (if not the best) language for writing web services (that's why Google developed it in the first place), which is why its stdlib is so rich for this kind of application. Rust, on the other hand, was developed to be the systems language for the next 50 years; hence the difference in stdlib.

The only downside of Rust is that you have to pull more than 100 dependencies to build a simple hello-world server (Tokio + Axum or Actix). Then you need a database driver, most likely something that depends on SQLx (not sure how many dependencies that adds), but one of my projects got to nearly 500 transitive dependencies with only ~20 direct dependencies.


Rust dependency management is the best parts of Go with the worst parts of NPM.


You hit the nail on the head with Go's initial purpose. That not only explains the high quality of the std lib in terms of HTTP, but also the lower quality in other areas.

By the way: You still need a DB driver with Go. They just provide an interface, which is cumbersome and error prone to use. That's why the community package sqlx exists.

To create a hello world in Rust, one just needs to add two crates, which in turn pull in many other crates. So yes, too many total deps for my taste, but an Axum server is much more featureful than a server built with Go's std lib.


Go's objective was "C but with GC and an async runtime". It got adopted for web services by chance, as a better alternative to Node.js or Python in some aspects, but zero values end up being a huge footgun for serialization, which is the bread and butter of web stuff.


sanity - Soundness of judgment or reason.

This doesn't sound extreme. The statement, "designed with sound judgement," doesn't carry the implications that you're defending against.


It's funny that your comment here applauds Go but your blog post praises Rust.


It is possible for more than one language to be good. Both languages succeeded with their approach.


“But how can you support two opposing sport teams, that doesn't make sense”

Too many people on HN when discussing programming languages, unfortunately…


> Yet, we still have some languages who's developers refuse to include even a basic http API in their standard libraries in an age where even embedded systems have started to speak http.

There’s also the philosophy that the language core should be as minimal as possible. I’m not 100% sold on it, but there’s definitely a valid argument to be made in favor of it.


My take as well. It's not black or white. At times I wish Rust's std lib was greater (by the way, it's not small either), but other times I'm blown away by the quality of some community crates.


As I wrote in another comment, it depends on the goal of the language. Go was built for writing web services; that's why it has most, if not all, of what you need to write this type of app.


> Go was built for writing web services

Yes, I very much agree. The comment I was replying to however was referring to “other languages”.


I really like that Go has a great std lib, but it comes at the cost of quality, or at least featurefulness. Some examples:

- The csv package. Compare Go's csv package with Rust csv crate. The package in Go's std lib is close to useless compared to the Rust community crate.

- Logging. No tracing, limited providers, just recently added structured logging.

- The xml package is very slow and cumbersome to use compared to modern element-tree-like implementations. For larger docs you probably need to resort to a community package.

- If I'm not mistaken, Java has a built-in HTTP server, whose usage is probably very low. Instead people use Jetty / Netty / Tomcat (? my Java days are long past)

To repeat myself: I personally like the std lib approach of Go. I disagree with any narrow view on this topic.


Python (1991), Java (1995), .NET (2001), Smalltalk (1972), Common Lisp (1984), Ruby (1995), Perl (1993).

Go is following not leading.

To this day ISO C and ISO C++ still don't include TCP and UDP in their standard libraries; that comes from POSIX, the UNIX APIs that made it into neither ISO C nor ISO C++.


I have deep respect for Python and like working in it, but Python's standard library has only one of the attributes listed in the comment you're responding to. The Go standard library differs from Python's in a variety of ways, and, to the original commenter's point, it is far more common to replace portions of the Python standard library with third-party packages than it is with Go's standard library.


I love the "yes, but" kind of arguments that show up when people get shown they aren't exactly right.


They're really not comparable standard libraries. For the most part, the "batteries included" features in the Go standard library are the idiomatically accepted versions of those features in the community, which is not something you can say about Python's standard library. It's a major difference. Using something other than net/http would be a code smell in Go code, but using urllib would arguably be a code smell in Python.


[flagged]


No, people can have opinions about things that are neither glowing nor scatological.


Good for them.


I take it you're using urllib2 to make HTTP requests in Python?


When the standard library does the job, there is no need to look elsewhere, unless forced to by third-party library dependencies.


Go is not quite the same but definitely on a similar level of abstraction to C, C++, Rust, etc. If that statement makes you raise an eyebrow, I'd suggest that while Go requires more runtime than C or Rust, it does still give you:

- Direct memory access through `unsafe`. Python kinda does through `ctypes`?

- Tightly integrated assembly language; just pop Go-flavored assembly into your package and you can link directly to it.

- Statically-compiled code with AOT; no bytecode, no interpreters, no JITs.

Therefore what really sets Go apart is that it gives you all of these rich standard library capabilities in a relatively lower level language.

Of course I kind of understand why Rust and C++ don't put e.g. a PNG decoder in the standard library, I think this is somewhat an issue of mentality; those are things that are firmly the job of libraries. But honestly, I wish they did. It's not like the existence of things in the standard library prevents anyone from writing their own versions (It doesn't stop people from doing so in Go, after all), but when done well it provides a ton of value. I think nobody would claim that Go's flags library is the best CLI flag parsing library available. In fact, it's basically just... fine. But, on the other hand, it's certainly good enough for the vast majority of small utilities, meaning that you can just use it whenever you need something like that, and that's amazing. I would love to have that in Rust and C++.

And after experiencing OpenSSL yet again just recently, I can say with certainty that I'd love Go's crypto and TLS stack just about everywhere else, too. (Or at least something comparable, and in fairness, the rustls API looks totally fine.)


> Direct memory access through `unsafe`. Python kinda does through `ctypes`?

Already available in ESPOL and NEWP (1961), Modula-2 (1978), Ada (1983), Oberon (1987), Modula-3 (1988), Oberon-2 (1991), C# (2001), D (2001) and plenty others I won't bother to list.

> Tightly integrated assembly language; just pop Go-flavored assembly into your package and you can link directly to it.

Almost every compiler toolchain has similar capabilities.

> Statically-compiled code with AOT; no bytecode, no interpreters, no JITs.

Like most compiled languages since FORTRAN.


> Statically-compiled code with AOT; no bytecode, no interpreters, no JITs

Except in reality it does not work; you can't easily create a single binary out of most C/C++ projects.

You're always going to fight with make / GCC / LLVM and other awful tools, with errors that no one understands. It doesn't matter whether the underlying tool / language is supposed to support it; what matters is whether a developer can make it work effortlessly.

In Go you download any repo, type `go build .`, and it just works. I can download a multi-million-line repo like Kubernetes and it's going to work.


Depends on the C compiler one decides to use.

If you believe that about Kubernetes, you're in for a surprise regarding reproducible container builds.


Huh? I think you misinterpreted what I meant to suggest that those features individually were unique or unusual. I was only using them to demonstrate that Go is on a similar level of abstraction to the underlying machine as C, C++ and Rust.

> Like most compiled languages since FORTRAN.

Yes. But you didn't list "compiled languages since FORTRAN", you listed:

> Python (1991), Java (1995), .NET (2001), Smalltalk (1972), Common Lisp (1984), Ruby (1995), Perl (1993).


First of all, I never claimed to have written an exhaustive list of compiled languages since the dawn of computing; rather, it was a reply to

"Go's standard library is a shining example of what all standard libraries should strive for. Yet, we still have some languages who's developers refuse to include even a basic http API in their standard libraries in an age where even embedded systems have started to speak http. Imagine if if the same had happened with TCP and UDP...

Here's to the continued success of Go and other sanely-designed languages."

You then moved the goal posts by talking about stuff that wasn't in that comment.

As such I am also allowed to move my goal, mentioning that

"Already available in ESPOL and NEWP (1961), Modula-2 (1978), Ada (1983), Oberon (1987), Modula-3 (1988), Oberon-2 (1991), C# (2001), D (2001) and plenty others I won't bother to list."

Are all languages that compile to native code.

"Ah but what about C#?!?" It has had NGEN since day one, the Mono/Xamarin toolchain has supported AOT for ages, Windows 8 Store Apps used the MDIL toolchain from Singularity, replaced by .NET Native for Windows 10 store apps, Unity compiles to native via its IL2CPP toolchain, and nowadays we have Native AOT as well.

And I will add that Java has had native AOT compilers since around 2000, even if only available as commercial products: Excelsior JET, Aicas, Aonix, WebSphere Real Time, PTC. It has the unsafe package as well, even if it never enjoyed officially supported status (nowadays replaced with Panama for exactly the same unsafe kind of stuff and low-level system access).


I do not understand how you and some other people seem to see a claim to be first in every feature a language ever says it has. Who is claiming primacy or uniqueness for almost any feature in any language ever? When has a Go designer ever claimed to be the first ones to implement a feature?

Programming languages have been largely just shuffling around features other languages have for the last 50 years now, and I can only go back that far because when you get back to the very first languages, they're unique and first by default. Even when a language is first to do something, it's generally only the first for a feature or two, because how would anyone even make a programming language that was almost entirely made out of new things anymore? Even if someone produced it, who would or could use it?

You seem to spend a lot of time upset about claims nobody is actually making.


It is the way people insist on writing such arguments.


I don't know what your goal in this discussion is. I don't think anyone is claiming Go invented having a nice standard library, nor is anyone claiming that Go invented compilers or anything weird like that. I think you misunderstood the entire discussion point.

On a similar note, iPhone did not invent cameras, MP3 players, cellular broadband modems, touchscreens, slide to unlock, or applications.


I think if the Rust team had the capacity they might have considered adding—and maintaining—more stdlib functionality. I never asked for details, but I'd guess that core Rust enjoys only a fraction of the funding Go and .NET have enjoyed. It's not purely a merits-based decision, I think.

Regarding C++, it's based on a standard, so the situation is a bit different. You have a variety of implementations. Imposing anything beyond the typical use cases the compiler implementations have catered to would put an insurmountable burden on implementation coherence. Therefore, I believe it's more reasonable to keep application-level logic in the realm of community-maintained libraries. In addition, C++ is huge syntactically and the stdlib is immense, but it's more focused on algorithmic tasks than on quickly building mini-servers.

Besides, the Go community has a myriad of reinvented wheels, ranging from logging through caching to maps, and until recently HTTP server libraries. The logging story, for example, only recently led to a discovery of the patterns desired by the community, mature enough to bring structured logging into Go's stdlib. Similarly for error handling. Robust, settled approaches that have become de facto standards make total sense to include.

.NET is different again, having a center of gravity with Microsoft and the .NET Foundation, with typically one preferred library for a given task, contrary to Java and Go. Centralized and decentralized, the classical dichotomy.


I think the Rust team would still keep a very small stdlib even if they had more funding. Here's a talk[0] where they state the aim of staying relevant for the next 40 years; over that time span there will be a lot of change, and to remain backward compatible you cannot have features in your stdlib that will become irrelevant or change in the next 5, 10, 15, 20 years.

[0] https://www.youtube.com/watch?v=A3AdN7U24iU


That’s a great point indeed. In Java and C++, there have been quite a few deprecations over the years and decades. In the Rust community, there’s unfortunately quite a lot of interesting but abandoned crates.


What changes are set to be made to the file format of PNGs that would prevent it from being relevant in the standard library 40 years from now?


[flagged]


I agree with everything besides the single language.

Something like D or Nim would also count, although I do grant they feel like toy ecosystems when comparing with .NET.


> it isn't, I know how a good one looks like)

Good. Just don't tell others lest they point out many shortcomings in your choice.


Well, would you like to?


What about Julia? It has all the low-level features you would want, and its structs are C-compatible.


Unfortunately, I know very little about Julia aside from hearing good things about its application in the scientific domain and its interesting options for accessing SIMD, which are similar to C#'s SIMD API. Someone else here could perhaps provide better context.


> Go is an extreme parody of C mindset adapted to kindergarten level of risk

There's nothing of substance actually said here so it's hard to refute or support it very much.

> so that Google could tackle its internal dev culture issues.

Only one sentence in and this sounds like a vague reference to that thing Rob Pike said one single time 10 years ago or whatever, but I don't really care why Go was supposedly created or what it was for over 10 years ago, I care about what it is today. I think you can do better than this.

> There is only a single high-level language with GC today that exposes appropriate low-level primitives and it is C#. In C#, you can declare C-style struct and pass it directly to C code across interop without any marshalling whatsoever. You cannot do this in the way everyone uses Go.

I dunno what you mean exactly, you certainly can declare a C-style struct in Go. You can declare it using C, or you can pass a Go-defined struct into a C function, it follows roughly the same alignment rules, I do this all the time when coding against Win32 APIs. It's in fact better than it used to be because there is now a way to pin memory so that it is safe to pass Go pointers directly to C code, using `runtime.Pinner`, but certainly there was no limitation to using structs the C way before. There aren't many guarantees about memory layout, but there wasn't in C either.

> I'm also amused by how "AOT" is one of the biggest demonstration of "X Y issue" in practice where developers have little understanding of how each platform can have drastically different ways of achieving "lean fast to start applications". What's unfortunate in this is they make Go's AOT as a selling point, where it's just its deployment model, at which it's not even the best nowadays

OK... but for context... I was literally only trying to demonstrate that Go is a relatively low-level programming language in terms of how close you are to machine code, putting it in a similar camp to languages like C and Rust, rather than something like Python. It is not an argument in favor of a given way of packaging or executing software. You can of course make programming language implementations with different tradeoffs, but I don't think it's accidental that scripting languages, compiled bytecode languages and compiled machine code languages tend to have different attitudes, I'm of a mind that they are mostly the way they are because of the characteristics that would lead someone to reach for one versus the others.

> you will be much better served by forgoing Go in favour of Rust or C#

I don't think there's any reason to disagree that Rust is a great choice of programming language. Rust has numerous amazing advantages versus most other programming languages, offering an excellent set of guarantees. However, it pays a lot of language cost to get there. I'm not saying, for example, that Go is good at some use cases and Rust is good at others; I'm saying that Go and Rust overlap in what they are good at, and each has areas where its tradeoffs are nicer than the other's. Rust handily has the advantage for writing concurrent code, which is ironic given Go's name and how much its concurrency model was emphasized early on. But Rust is a very, very complex programming language; sometimes what it takes great pains to guarantee is something that you absolutely, 100% want to have, at any cost, and sometimes it's just not. When dealing with embarrassingly parallel programs like network services with shared-nothing architectures, Go shines; it's pretty good at these. When every CPU cycle and byte counts, Rust wins against Go every single time, because it offers greater control and vastly more powerful compile-time metaprogramming. (Go offers essentially none, since even generics mostly compile down to interfaces.) Does the complexity always matter? Well, maybe not, but you'll notice it in compile times.

> please do not tell me about how good Go's standard library is - it isn't, I know how a good one looks like

This is a shallow dismissal that shouldn't really convince anyone, since it doesn't even attempt to paint the picture, so I will reply with a list of the packages in the Go standard library that I think are pretty great and would like to have in other programming languages.

- Literally all of `crypto`, including `crypto/tls`. To be completely fair, I wouldn't argue they are perfect (`tls.Config` is a little weird) but they are very clean and reasonable implementations of cryptographic routines including some nice optimized assembler code across many architectures for plenty of them. The actual interfaces are pretty good and help to avoid some basic pitfalls. It is relatively easy to e.g. create and sign an X509 certificate, versus OpenSSL.

- `regexp` - Neither the fastest nor the most fully-featured regular expression engine. On the other hand, it gives you re2 behavior - the same featureset, and the same runtime behavior, specifically that it scales linearly in time with input size. That makes it a rather good default choice, since it is significantly less likely to wind up as a surprise footgun that way, versus say PCRE.

- `image/png`, `image/jpeg`, etc. They're neither the best nor the worst PNG or JPEG encoders/decoders. But they're PNG and JPEG decoders, in the standard library, and the speed they perform at it seems acceptable, and they're memory safe. For doing any serious amount of image processing, I'd rather shell out to libvips, but there are a lot of use cases where it's tremendously nice that there's basic codecs like this in the standard library. Same for the compression algorithms in `compress` and the archive implementations in `archive`.

- The `go` package. Like most of the other packages, there's nothing terribly astonishing about this package. However, it does give you enough Go compiler guts to go ahead and parse and work with Go code. This is very useful because inevitably with large codebases you are going to want the ability to write accurate static analysis tools, make code that can do large tree-wide refactors, and other such tasks. For C++, you probably have little choice other than to use Clang's AST. For Rust, the closest I'm aware of is the `syn` crate, but I might be out of date here. These options generally require a great deal of effort to do even rather simple things, making it rather hard to get to the point of break-even for them, and I think that's not great. Some of this is definitely due to language complexity, but that is indeed a cost you have to pay multiple times. So it better be worth it!


The regexp being "not the fastest" might be an understatement: https://github.com/BurntSushi/rebar?tab=readme-ov-file#summa...

Nothing that Go has is uniquely better or even competitive. Both C# and Java have far more extensive and optimized standard libraries. What Java has working against it is the lack of structs, monomorphized generics and a SIMD API to ensure all standard library bits stay as performant as they are in .NET, but nonetheless the areas that have no use for those are optimized to a comparable degree. And you can be sure they offer sufficiently good support on all major platforms: Linux, macOS, Windows and, as a major Java/Kotlin advantage, Android. Go, meanwhile, treats this by pretending the only platform that exists is UNIX/POSIX.

There are other things, like Go's blessed way of slicing strings being problematic because it does nothing to prevent you from tearing code points so you have a lot of code that might be silently corrupting text data.

And for interpreting ASTs, this exists in both Java and C# with different strategies. The C# approach is a bit more complex but extremely powerful, with build-time source generators that have full access to the AST and can generate new code, fill out existing partial members, or intercept calls, without requiring any external scripting. A good example of that is generating gRPC client and server code from a .proto file by simply executing `dotnet add package Grpc.Tools` and adding a .proto reference to the .csproj.

Go may seem like a good language after coming from scripting nightmares; it may even seem like a good systems programming language that is easy to use. I promise, it is not. To call intrinsics in C#, I just call them, because that's what it offers; in Go, I have to write asm helpers manually.

If you look at what modern C#, Kotlin and, again, Rust provide in their respective areas, you will soon realize that Go does not have anything unique or better to offer, except perhaps the culture of minimalism, which other ecosystems have also been advocating for at least the last 5 years.

p.s.: I forgot to mention the reason interop was brought up - in C# it can be zero cost or nearly zero cost because it was considered at the language and platform's inception. It is one of the reasons C# will never use green threads and will eventually move to an upgraded runtime-handled task system (mind you, the current state-machine implementation works well, but can be improved). The cost of FFI in Go (i.e. Cgo) is dramatic. This is not acceptable for a language positioning itself as a systems programming one.


> The regexp being "not the fastest" might be an understatement

Good benchmarks, but what you are seeing are the consequences of the fact that Go's `regexp` engine isn't very optimized; it implements a fairly naive NFA-based regular expression engine. This is perfectly reasonable for simple use cases like input validation or parsing simple string formats, where it's plenty fast. Throw a complex regex at it and you can see performance drop severely compared to heavily optimized engines. But for simpler patterns it performs reasonably well and executes in one pass, as you'd hope - and, unlike a backtracking engine, it never blows up exponentially.

Of course if you were writing something like grep where the throughput is directly tied to regexp execution, yes, you would most certainly want to pick an optimized regexp engine, though it would likely come at the cost of needing a JIT and other complexity.

The existence of `regexp` in the standard library, though, certainly doesn't stop you from picking a more optimal library, any less than it does for JSON parsing for example.

> Nothing that Go has is uniquely better or even competitive. Both C# and Java have way more extensive and optimized standard libraries.

I think that it is very nice when standard libraries contain "optimal" implementations of things, but in most cases it's not the most important concern. Having sufficient implementations of things is much more important. And in that regard, Java is far from the worst, but it's also far from the best. For the longest time, Apache Commons was treated as a de facto standard library for Java programs, as I'm sure you've experienced. Most of the stuff in Apache Commons is stuff you can also find built into Go.

> What Java has working against it is the lack of structs, monomorhpized generics and SIMD API to ensure all standard library bits stay as performant as they are in .NET, but nonetheless the areas that have no use of those are optimized to a comparable degree.

Honestly, Java's performance was never that bad, it is/was specific things that really caused it to suck, like objects, reflection, the GC. Some of it has been improved greatly, in part thanks to ZGC and other innovations, but Hotspot was always pretty good at running tight loops and computations at a respectable speed.

What makes Go nice is that it just doesn't need as much optimization to begin with. It's funny to compare Go code to highly complex and optimized code all of the time, but this happens mainly because it still often competes in the same class despite being kind of dumb and simple by comparison, for a myriad of reasons. The Go GC is probably technically not as optimal as the latest and greatest Java technology, but it doesn't really matter too much because Go manages to do a better job at preventing objects from escaping to the heap in the first place. Java is recently getting on this train, too, but it's got a long road ahead, as Java code and interfaces will need to be adjusted to try to minimize unnecessary escaping to fully exploit this. A lot of common Java patterns don't make this easy.

> And you can be sure they offer sufficiently good support on all major platforms: Linux, macOS, Windows and, what is a major Java/Kotlin's advantage, Android. Something that Go treats with pretending like the only platforms that exist is UNIX/POSIX.

What is wrong with Go's support of Windows or Android? I use Go for Windows programming all the time. I was even working on writing a Win32 language projection for it, but I lost a bit of the code and haven't had time to pick it back up. Go ends up being nice to use with native Windows APIs especially since you can dynamically link to libraries without needing CGo.

Meanwhile, Java programs seem to have a harder time dealing with platform interoperability than Go programs. For example, ANTLR4 still seems to have issues handling paths with backslashes: they work, but the relative path calculations that are done are wrong, resulting in different behavior/output. I chose this as an example because it's a really popular and not particularly new Java program, but it's also the latest iteration, showing that Java platform interoperability issues are nothing new. It's, of course, not Java's fault that Java programs may contain bugs; but it is Go's work that has made it less likely for Go programs to make this mistake by designing the standard library to make path manipulation straight-forward whether you want to deal with OS-specific paths (`path/filepath`) or slash-only paths used in URIs (`path`).

> And for interpreting AST, this exists in both Java and C# with different strategies. C# approach is a bit more complex but extremely powerful with build-time source generators that have full access to AST generate new code, fill out existing partial members or intercept calls, and does not require any external scripting. A good example of that is generating gRPC clients and servers code from .proto file by simply executing `dotnet add package Grpc.Tools` and adding .proto reference to .csproj.

gRPC code generation is reading from protobuf descriptors, not modifying existing C# code. It's cool that C# has a good interface for printing C# code, but it's kind of not what I was getting at with that.

And of course, it's obviously still possible to parse the grammar into an AST in Java or C# or C++ or ... but the advantage for Go is that yeah, the grammar is simple and has a lot fewer productions and a lot less syntax than pretty much all of those. That means that mere mortals can write their own refactoring tools, and often do.

With Go's syntax being so dumb, it's possible to just generate proper Go code with text templates, which is what a surprising amount of Go code gen does. It's not especially fragile because the syntax has rather few surprises and once you know them it's not very challenging to follow the rules completely using only simple string operations. (You can also, of course, fill out an AST and write it too, using `go/printer`.)

> Go may seem like a good language after coming from scripting nightmares, it may even seem like a good systems programming language that is easy to use. I promise, it is not. In order to call intrinsics, in C#, I just call them because that's what it offers, in Go, I have to write asm helpers manually.

> If you look at what modern C#, Kotlin and, again, Rust provide in their respective areas, you will soon realize that Go does not have anything to offer that is unique or better, except perhaps the culture of minimalism, which other ecosystems are also advocating for in at least last 5 years.

I disagree. I am not particularly new to programming, and I have come to the conclusion that Go is a great programming language for productivity, as it strikes a nice balance in a lot of aspects of programming language design. I also am not trying to suggest that there are no virtues of C# or Rust, either, just that it's especially weird how much of a hate-boner Go has. It's not like liking it is particularly popular anymore, trust me, as a person that likes Go I would know, so you'd think the irrational hatred of Go would end. Alas, it continues, to the point where there's somehow an argument here that unlike basically every other programming language that has ever existed, somehow Go is the one with zero merits at all. That's pretty much what it feels like I'm arguing against right now, FWIW.

... This comment was too long, so I'm going to need to reply to the rest in a different post ...


> p.s.: I forgot to mention the reason interop was brought up - in C# it can be zero cost or nearly zero cost because it is something that was considered at the language and platform inception. It is one of the reasons C# will never use green threads and eventually will move over to an upgraded runtime-handled task system (mind you, the current day state machine implementation works well, but can be improved). The cost of FFI in Go, unless you use Cgo, is dramatic. This is something that is not acceptable for a language positioning itself as a systems programming one.

The cost of FFI in Go is a cgo callgate regardless of whether you use Cgo; I can point to the actual implementation if need be. The cost of going from Go execution to C execution is ~tens of nanoseconds, so it's not particularly easy to measure; I would guess it's on the order of hundreds of CPU cycles. I've stepped through it in IDA from time to time, and it's not a ton of instructions. Of course, "not a ton" is a shit load more than "literally zero", but it's worth talking about, I think.

There is a potential for greater cost: with the way the Go runtime scheduler works, if the call doesn't return quickly enough, the thread has to be marked "lost" and another is spawned to take over goroutine execution. This detection happens rather quickly, but it isn't free, since it results in an OS thread spawning. But this behavior is actually very important; more on this in a moment...

Meanwhile, C# async uses the old async/await mechanism. This is nice, of course, and it works very well, but it has the problem that execution never yields until you await, and if you DO call a C function and it blocks, unlike Go, another thread does not spawn; that thread is just blocked on C execution until it's over. That was my experience playing with async in .NET 7, and I don't think it can change: either you have zero-cost C FFI or you have usermode scheduling. You can't really get both, because the latter requires breaking the ABI.

I would be happy to talk more because I am honestly pretty disappointed that there's not really a better way to do what Go tries to do. I'd love to have the advantages of Go's usermode scheduling with preemption and integrated GC sequences with somehow-zero-overhead C calls, but it simply can't be done, it's literally not possible. You can take other tradeoffs, but they lose some of the most important advantages Goroutines have over most other greenthread implementations. Google and Microsoft have both produced papers researching this. Microsoft's paper on fibers basically comes to the conclusion that you literally shouldn't bother with usermode scheduling because it's not worth the trouble:

https://www.open-std.org/JTC1/SC22/WG21/docs/papers/2018/p13...

However, their conclusion about the cgo callgate taking around 130ns does not match what I've seen. But just to be sure, I searched for a random benchmark and found this one:

https://shane.ai/posts/cgo-performance-in-go1.21/

    BenchmarkCgoCall        28711474         38.93 ns/op
    BenchmarkCgoCall-2      60680826         20.30 ns/op
    BenchmarkCgoCall-4      100000000         10.46 ns/op
    BenchmarkCgoCall-8      198091461          6.134 ns/op
    BenchmarkCgoCall-16     248427465          4.949 ns/op
    BenchmarkCgoCall-32     256506208          4.328 ns/op
Of course, these may be fairly optimal conditions (maybe it depends on the state of the goroutine stack leading up to the call), but I think it's fair to say that "less than 50ns per op" is not an unreasonable amount of time for the cgocall to take, as long as we accept that "0ns" is strictly not an option for what Go wants to achieve. With Go you don't have to care whether something blocks or not; everything blocks and nothing is ever blocked. That's not something that can be accomplished without some runtime cost. The runtime cost it actually takes is very nearly zero, but the runtime cost of integrating that with something that doesn't eat that cost is unfortunately higher, and that's where the CGo problem lies.

(I admit that a substantial portion of this problem is actually around the stack pivoting, but if you squint hard enough you can see that this is also inextricably woven into how Goroutines manage to accomplish what they do.)


Wow, that's a long post. While I read it, I wanted to note that .NET deals with blocked threads by having the thread pool scale the worker thread count dynamically through a hill-climbing algorithm, which works to reduce the time work items wait unhandled in their respective queues (the thread count can be 1, or 200 or more, depending on what you do; 200 is clearly a degenerate case, but that's what you get if you manage to abuse it badly enough to make it act like good old thread-per-request). It also has out-of-hill-climbing blocked-thread detection (for things like Thread.Sleep) to cope with blocked workers faster. It is all around a very resilient implementation.

As for the cost of FFI in .NET, regular p/invokes that don't require marshalling (most of the time it's just UTF-16<->UTF-8) cost approximately 1.5-4ns. The cost can be further reduced by

- Suppressing GC frame transition (safe for most sub-1ms calls)

- Generating direct P/Invokes when publishing as an AOT binary (they are bound at startup, but a dynamically linked dependency referenced this way needs to be available)

- Static linking. Yes, .NET's AOT binaries can be statically linked, and it is done by the system linker, which makes the call a direct jump that costs as much as a similar call in C. .NET can also produce statically linkable libraries which can be linked into C/C++/Rust binaries (although it can be tricky)

On AST - you are not parsing C# yourself, you are using the same facilities utilized by Roslyn. You can do quite a few tricks; I'm working on a UTF-8 string library (which, naturally, outperforms the Go implementation :P) and it uses the new interceptors API to fold UTF-16->UTF-8 literal conversions at build time. My skill is way lower than that of engineers working with it in more advanced settings, and yet I was able to use it easily; it is very convenient despite the learning curve.

On Go hate - it's simple. It reached, quite some time ago, the critical adoption rate where it will be made to work in the domains it is applied to regardless of its merits (hello, the post on HN describing the woes of a company investing millions in tooling to undo the damage done by bolting in nils and their unsoundness so tightly). It has serious hype and marketing behind it, because other languages are either perceived as Java-kind-of-uncool, or are not noticed, or are bundled, again, with Java, like C#. And developers, who have a rift where their knowledge of asynchronous and concurrent programming should be, stop reading at "async/await means no thread blocky" and never learn to appreciate the power and flexibility a task/future-based system gives (and how much less ceremony it needs compared to channels or manually scheduled threads, green or not).

Just look at https://madnight.github.io/githut/#/. Go has won, it pays well, it gets "interesting and novel projects" - it does not need your help. Hating it is correct, because it is both more popular and worse (sometimes catastrophically so) at what other languages do.


Surprisingly, I think we're actually mostly in agreement here, so there's not much to reply to. I think the only real takeaway is that we don't agree on the conclusions to draw.

> On Go hate - it's simple. It reached, quite some time ago, the critical adoption rate where it will be made to work in the domains it is applied to regardless of its merits (hello, the post on HN describing the woes of a company investing millions in tooling to undo the damage done by bolting in nils and their unsoundness so tightly). It has serious hype and marketing behind it, because other languages are either perceived as Java-kind-of-uncool, or are not noticed, or are bundled, again, with Java, like C#. And developers, who have a rift where their knowledge of asynchronous and concurrent programming should be, stop reading at "async/await means no thread blocky" and never learn to appreciate the power and flexibility a task/future-based system gives (and how much less ceremony it needs compared to channels or manually scheduled threads, green or not).

I agree that bolting nil checking onto Go is pretty much an admission that the language design has issues. That said, of course it does. You can't eat your cake and have it too, and the Go designers chose to keep the cake more often than not. To properly avoid nil, the Go language would probably have needed to adopt something like sum types and pattern matching. To be honest, that may have been better, but it doesn't come at literally no language complexity cost, and the way Go is incredibly careful about that is a major part of what makes it uniquely appealing to begin with.

Meanwhile while Go gets nil checkers, JavaScript gets TypeScript, which I think really puts into perspective how relatively minor the problems Go has actually are.

> Just look at https://madnight.github.io/githut/#/. Go has won, it pays well, it gets "interesting and novel projects" - it does not need your help. Hating it is correct, because it is both more popular and worse (sometimes catastrophically so) at what other languages do.

I gotta say, I basically despise this mentality. It reads something along the lines of, "How come Go gets all of the success and attention when other programming languages deserve it more?" To me that just sounds immature. I never thought this way when Go was relatively niche. People certainly use Python, JavaScript, and C++ in cases where they are far from the best tool for the job, but despite all of those languages being vastly more popular than Go, none of them enjoy the reputation of being talked about as the only programming language in history with no redeeming qualities.

People generally use Go (or whatever their favorite programming language is) for things because they know it and feel productive in it, not to spite C# proponents by choosing Go in a use case that C# might do better, or anything like that.

But if you want to think this way, then I can't stop you. I can only hope that some day it is apparent how this is not a very rational or productive approach to programming language debates.

Unfortunately, even though I suspect it plays no small part, I can't really assume that Go's popularity feeds any particular person's hatred of it, because flat out, that would feel like a bad-faith assumption to make...


[flagged]


There's nothing about the Go compiler that enforces any case standard. I can't imagine a more trivial hill to die on.


Nitpicking, but Go is one of the only languages I’m aware of where identifier case is semantically relevant (to determine visibility). Given that fact it’s hard to say “nothing” enforces a case standard.


Haskell enforces that types and type constructors start with Uppercase, and that everything else starts with lowercase.


Adding to that, the compiler doesn't enforce using "mixed caps", as the documentation calls it, but it's strongly recommended (https://go.dev/doc/effective_go#mixed-caps)


> I can't imagine a more trivial hill to die on.

I can't imagine why this attitude pervades Hacker News. I can't imagine a more trivial reason than yours to make a reply.

I guess some people see this as a place to discover differences, and others see it as a place to enforce blind conformity to the current fad.


I was just thinking today that the case style is the only thing I continue to find objectionable on a day-to-day basis after a decade of writing/reading Go.

I've fantasized about the idea of a future `go fmt` version rewriting all the code, which seems possible, if somewhat impractical.



That is a bizarre thing to care so much about.


That's actually PascalCase, which I like better than camelCase and snake_case.


PascalCase for public variables, camelCase for private variables. Just to keep you on your toes.


and snake_case for db/sql.


kebab-case might be feeling left out of the party.


> I love go, but I hate CamelCase so much that I often find myself using a thesaurus to find better names for my structs and functions

what does one have to do with the other? also yeah I agree, CamelCase is awful. for a while I was doing:

    Camel_Case
but it just didn't fit with anything, so now I use:

    CamelCase // just the normal style, for public identifiers
    snake_case // for private identifiers
but, I do not respect acronyms, as keeping them hurts readability I think:

    HtmlHello
    HelloHtml



