Coming from Swift, it's pretty refreshing to see a language that decided to settle on the language itself pretty early and focus on improving all the other aspects of the dev experience.
Last time I ran my Go test suite on the CI I thought it had a bug, because it was so fast compared to what I'm used to with Swift.
I do still have a weird feeling of getting back to the stone age whenever I get a nil pointer panic, though. I wish the Go devs could figure out a way of fixing those last quirks without impacting the rest too much.
> I do still have a weird feeling of getting back to the stone age whenever I get a nil pointer panic, though. I wish the Go devs could figure out a way of fixing those last quirks without impacting the rest too much.
Unfortunately fixing the "billion dollar mistake" as a retrofit is pretty much impossible without breaking backward compatibility. I see what they were trying for with nil and zero values, but IMO they should have introduced a "result" and "option" special type (or probably more Go-like would be separating nullable and non-nullable types via '?' like Dart did). If they did it like Dart, they could intro the concept without breaking backwards compatibility, wait X years until mostly adopted everywhere, and THEN break backwards compatibility by making it mandatory.
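For what it's worth, something like that "option" special type can at least be sketched with today's generics, though without language support nothing forces a caller to check the boolean. This is just a sketch; `Option`, `Some`, `None`, and `Get` are hypothetical names, not a real stdlib API:

```go
package main

import "fmt"

// A sketch of an "option" type using Go generics. The names here
// (Option, Some, None, Get) are made up for illustration.
type Option[T any] struct {
	value T
	ok    bool
}

func Some[T any](v T) Option[T] { return Option[T]{value: v, ok: true} }
func None[T any]() Option[T]    { return Option[T]{} }

// Get nudges the caller to check the boolean before touching the value,
// which is the check that plain nil pointers never force on you.
func (o Option[T]) Get() (T, bool) { return o.value, o.ok }

func main() {
	name := Some("gopher")
	if v, ok := name.Get(); ok {
		fmt.Println(v)
	}
	if _, ok := None[string]().Get(); !ok {
		fmt.Println("no value")
	}
}
```

The difference from a real language-level feature is exactly the point made above: the compiler never complains if you ignore the `ok`.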
They should have introduced sum types. Or in C terms: tagged unions that are checked by the compiler.
They did the opposite when they designed the language, with their 'multiple return values', which are like tuples but only allowed in one special position in the language. But as you already suggested, tuples are the exact opposite of the "result" type you'd want here.
I would have much preferred sum types to generics. I can live with rewriting min() or using for loops over iterators or writing my own linked list; it’s a lot harder to safely (and performantly) work with data that can have a fixed set of shapes.
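For reference, the usual workaround today for "data with a fixed set of shapes" is a sealed interface plus a type switch. A sketch with made-up names; note the compiler cannot verify the switch is exhaustive, which is the safety gap being discussed:

```go
package main

import "fmt"

// Emulating a sum type: a "sealed" interface with an unexported method,
// plus one struct per variant. Only types in this package can implement it.
type Shape interface{ isShape() }

type Circle struct{ R float64 }
type Rect struct{ W, H float64 }

func (Circle) isShape() {}
func (Rect) isShape()   {}

func Area(s Shape) float64 {
	switch v := s.(type) {
	case Circle:
		return 3.14159 * v.R * v.R
	case Rect:
		return v.W * v.H
	default:
		// A new variant added later lands here at runtime;
		// a real sum type would make this a compile error.
		panic("unhandled shape")
	}
}

func main() {
	fmt.Println(Area(Rect{W: 2, H: 3})) // prints 6
}
```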
You know how Go had generics for a long time, but only for the language designers, who got to use them in functions like `make` or data structures like `map` or `slice`? There were no generics for users of the language.
Similarly, Go has tuples, but only for the language designers who get to use them for multiple return values from a function. No tuples to be used elsewhere where users of the language might find them useful.
Now, Go also has sum types. But, yet again, only for the language designers: they get to stick `nil` as one of the constructors (to use Haskell terminology) in their favourite data types, but you as a language user don't get to make new sum types.
Another example is operator overloading: the designers get to overload e.g. the arithmetic operators, but you don't get to do that.
In contrast, C++, for all its other faults, at least makes an attempt to hand you many of the tools the language designers get. (Though I like Rust's and Haskell's approach here more.)
No, the language designers use the same type system as everyone else. Yes, Go had a fixed set of generic data structures, but those were available to everyone.
“Nil” does not a sum type make; it’s just the zero value for a reference type and the type-checker doesn’t check that you are handling nil and non-nil cases as a type-checker would do for sum types (if Go’s pointers are sum types, then integers and floats in any language are sum types as well).
Go has never had operator overloading for the language designers or anyone else. The operators that the language designers use behave the same way as when users use them.
I have a lot of respect for Rust, and I was a C++ developer in a past life, but most of the software I write these days cares a lot more about developer velocity than about eking out every last bit of performance or correctness, and most of the time Go is the ideal candidate for those sorts of tradeoffs.
I don't know that you can just add compile time nil checking to go and still retain everything else.
It seems to me - although I could be entirely wrong - that the mindset that produces this kind of idea ("nil pointers are the devil!") is incapable of producing the kind of experience that Go produces with its toolchain (trivial cross-platform compilation, etc.).
Again I can't prove it, but it just seems that way.
For instance, if you believe nil pointers are the devil and you should statically enforce checking them at compile time, it seems only natural that you would also think the following other things are very important to enforce at compile time:
- Read-only references
- Move semantics
- And all the related baggage
What does this have to do with compile times and cross compilation? Maybe not much, but time and attention are limited resources, and when you focus on concepts typically associated with "nil pointers are evil", you take away precious resources that could otherwise have been dedicated to building the kind of developer experience that Go provides.
I really don’t understand why you got downvoted. That’s also my explanation of why they did it this way (which i don’t agree with, but they built a PL, i didn’t).
Zig is a good counterexample to my argument, but there are things that Go does that Zig does not, namely, a runtime to manage:
- Goroutines
- Garbage collection
I'm not saying these are good or desirable per se.
The point is a bit more general: you can't just accumulate features without this accumulation having other effects on the language and its toolchain and causing degradations in other ways.
My speculation: the authors didn't know / understand algebraic data types.
Also, to make algebraic data types useful, you really want parametric polymorphism. But yet again, the authors of Go weren't familiar with this. The only vaguely related technique they knew about was C++ templates, and they (reasonably!) decided that they didn't want C++ template hell in their language.
That last part about templates is the least speculative of the bunch: I read some of the discussion they had about generics, and they explicitly mentioned templates (and how complicated they are) and pretty much mentioned nothing else for how to design or implement generics.
Go recently got some generics, partially thanks to some help from Phil Wadler who's otherwise more known for his work in functional programming.
You can use sum types without pattern matching and vice versa, but you are absolutely right that they synergize well.
I don't know enough about Golang: do you know whether it's possible to add pattern matching with destructuring as a fairly shallow syntactic sugar?
Generics were a much bigger change to the underlying language (and so would be Rust-like lifetimes, or even immutability); but pattern matching seems like something that should be relatively easy to add with only local repercussions?
Unfortunately, it lacks them for the same reason it lacks many other modern features introduced by CLU and Standard ML in the mid-70s: the language designers don't want to overburden Go developers with PhD-level concepts.
Algebraic data types (more technically sum types) are very much not “PhD level concepts”, despite the name. They’re just what C developers would call “tagged unions”.
I sympathise with the idea, but I do think they drew the line in the sand too close. A Go that had generics and sum types from the start would be my near-perfect GC'd language.
Not having generics was never a fixed decision. The FAQ said since day one that they "may well be added at some point" and that "The topic remains open", so there was no "walking back".
By the way, not having ADTs is not a fixed decision either.
You seem to mistake the fact that the Go team is in no rush to add things to the language for a general rejection of those things.
It doesn't matter. What does matter is that Go 1.0 shipped without generics. That single decision immutably affected the entire language. Now that generics have been retrofitted, the issues are clear as day:
- Awkward transition period between a stdlib with and without generics: [1]
- Completely different APIs for built-in data structures (slices, maps) and generic ones
- Lack of obvious follow-up features that would have been there at 1.0 if generics were added, e.g. iterators
They took the time to do it properly with input from experts on type systems (e.g. Phil Wadler). The result is probably much better than what we'd have if the Go developers had quickly thrown together an implementation of generics 10-15 years ago. For example, the resulting type system is known to be sound.
Where did you get the information that the Go team never wanted generics? Even with all the hype around generics, the stats showed 50% of Go developers weren't interested in them.
>Where did you get the information that the Go team never wanted Generics
By them acting as if they never wanted generics: not having generics from day zero, delaying their implementation for a decade with BS excuses, pretending they were some kind of insurmountable problem...
They were literally pressured into adding them, after years of resistance, when they recognized the mess they'd made.
I watched this talk back when it came out, and I just rewatched the parts you linked. Nowhere in this talk does he say that he isn't into generics. He said that he was not yet satisfied with the design drafts that existed at the time, and that he would like to bring in experts. Which he did, when he asked Phil Wadler to join, which led to the current design. The talk is actually proof that he was and is open to generics.
"If we can implement these and learn about it a lot of what becomes important will clarify and something will come out of it, maybe something wonderful."
Again, you make up some warped interpretation in your head.
And over that period, not a single person put forward a viable and fully worked-up proposal for how generics should work in Go. It's almost as if programming languages aren't developed by anonymous people complaining on the internet.
>And over that period, not a single person put forward a viable and fully worked-up proposal for how generics should work in Go.
That was the official excuse (while each and every proposal coming in was shot down), just to end up with a subpar, half-thought generics implementation, full of sui generis and NIH details.
It's not rocket science; there are hundreds of languages with generics, including languages with many orders of magnitude more adoption than Go has.
Which proposal would you rather had been adopted instead?
It's strange to describe the current implementation as "half-thought". A lot of work was done to make sure it was correct: https://arxiv.org/pdf/2005.11710.pdf It's probably one of the most carefully thought-through generics implementations in a mainstream programming language.
>It's not rocket science; there are hundreds of languages with generics, including languages with many orders of magnitude more adoption than Go has.
It's easy to add generics but not so easy to get it right (see e.g. Java's soundness issues, the total mess of C++ templates). Rust's generics also have some dark corners (e.g. https://github.com/rust-lang/rust/issues/84857).
There’s nothing stopping language maintainers from implementing a feature. If you really hate the slow and thoughtful journey, then the language isn’t your “ideal programming language”.
Also, generics mostly makes libraries more convenient, not average user code. It also reduces bugs where interface{} and runtime type checking would otherwise be used.
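A small illustration of that point: the pre-generics interface{} style pushes type errors to runtime, while the generic version catches them at compile time. `firstOld` and `First` are made-up helpers for the comparison:

```go
package main

import "fmt"

// Pre-generics style: the library takes interface{} and the caller
// must type-assert, so mistakes only surface at runtime.
func firstOld(xs []interface{}) interface{} { return xs[0] }

// Generic style: the element type is checked at compile time,
// and no assertion is needed at the call site.
func First[T any](xs []T) T { return xs[0] }

func main() {
	n := firstOld([]interface{}{1, 2}).(int) // runtime assertion
	m := First([]int{1, 2})                  // statically typed
	fmt.Println(n, m)
}
```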
If you compare the amount of attention, language bashing, dedication, and sweat being put into it, I would expect the survey to show at least 75% adoption.
I am not sure this is true. Every single go project I have seen at work has pkg/ and internal/
If anything, I wish people would make main.go a bit longer so I can see the main bones of the application, but people always like to just do a := app(conf); a.run()
The design philosophy is what makes a programming language, any language maintainers could one day decide to have any of these, when you begin to understand “why this is language existed” you begin to understand its purpose
That’s also my question. ADT seems really to fit well with the no-class no-inheritance design that go took. I don’t see how it would affect the language in any major way. But then, i’m not an expert.
A massive source of compilation pain is that Rust just generally generates a lot of code: it always prefers specialization over dynamic dispatch unless the source specifies dynamic dispatch. Couple this with a culture of believing dynamic dispatch is always worse, plus lots of codegen for generics, and you get one hell of a task for your linker. I wouldn't call this an optimization, but rather an almost cultural design decision.
It’s more correct to say that it always prefers zero-cost abstractions. Defaulting to dynamic dispatch would go very much against this, even if in some cases it’s not a bad idea.
Most of the code is cold. Spending compile time on that probably isn’t worth it. As such it’s not a zero-cost abstraction (yes, yes, “zero-cost” normally refers to runtime, but still: developer time is important too, and that’s where the trade-off is).
In most cases static dispatch is faster. In many cases dynamic dispatch can be just as fast (speculation by the CPU negates the cost), and in some cases it can be faster (better code density). This zero-cost abstraction principle from C++ is actively harmful, because there’s almost no such thing: performance is such a subtle thing that what’s zero-cost in one scenario has a non-zero cost in others.

Also, spending compile time on the “dark matter” of code that’s rarely executed is probably not the best way to spend the developer time budget. I want CI to generate fully optimal code for production in release builds. For running tests in presubmit, I probably want a little bit less optimization. For local development I want it to be just fast enough that compile times are quick and I can iterate quickly (except for the cases where I’m tuning performance, in which case I have no choice but to spend max compilation time).
I agree, but what you are asking is impossible in the general case. The compiler can’t know what is hot and cold without PGO, and doing dynamic dispatch in development to speed up compilation seems iffy.
Surely a better approach is to use incremental compilation, and only do this expensive stuff potentially only once?
However I do agree with your overall point - dev compilation in rust is too slow.
One of my favorite go-to examples for this is "Producing Wrong Data Without Doing Anything Obviously Wrong!" 4.2 where they demonstrate a reproducible >30% performance difference just by "misaligning" stacks with an unused environment variable. That's in the context of unexpected measurement bias, but you won't convince me piles of cold code can't affect memory layout less than that.
Yes, but said REPL just wraps user-entered expressions into a function body and then compiles them to a shared object via cargo; thus the language is interpreted via the slow compiler anyway.
I think the main problem with rust, is that the things they decided to settle on early on (aka the borrow checker) are the ones at the root of all the difficulties. They then tried to tame those issues little by little by improving the language ergonomics.
This is pretty different from Swift, for example, which had a sane basis very early on but decided to expand the language by adding feature after feature, moving it in a lot of different directions and turning it into an ugly beast (I still like it though, but I’m not sure for how long).
Learning to work with Rust's borrow checker leaves one with the sinking realization that every C program also has a borrow checker: The very fallible programmer.
While there are other benefits to the borrow checker, the main one is safe memory management and GC languages get you that much while allowing for faster development.
Can't disagree with you at all, even though I want to.
I love Rust and I hate having to emulate enums in Golang but the speed of development becomes more and more a deal-breaker the more time I spend with Rust.
I might just settle on OCaml, if at all possible. Or, since this is real life and there are no simple final solutions, I'll likely just become a master of all three.
But really, as much as I appreciate Rust for being super strict, I also don't think strict lifetime management is as important for many tasks (though it's a life-saver for some).
Yeah, I 100% agree with hating emulating enums in Go (particularly because there is no safe way to emulate exhaustiveness checks). Personally I haven’t found Ocaml to be any more productive (inadequate libraries, poor tooling, constant futzing with multiple std libraries), but maybe I didn’t commit to it for long enough?
That said, I’m starting to do a bit more systems development so I’m excited to dabble a bit more in Rust.
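For anyone unfamiliar with the enum pain point above, this is roughly what the usual iota-based emulation looks like, and why the missing exhaustiveness check bites. `Color` and `name` are illustrative:

```go
package main

import "fmt"

// Go "enums" are conventionally iota constants. Nothing stops
// out-of-range values, and switches are not checked for exhaustiveness.
type Color int

const (
	Red Color = iota
	Green
	Blue
)

func name(c Color) string {
	switch c {
	case Red:
		return "red"
	case Green:
		return "green"
	// Blue was forgotten: this still compiles and silently
	// falls through to the default case.
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(name(Blue))      // "unknown" - the bug goes unnoticed
	fmt.Println(name(Color(42))) // arbitrary ints convert freely
}
```

Some third-party linters (e.g. exhaustive) can flag the missing case, but the compiler itself never will.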
GC is about memory disposal. IMHO the main purpose of the borrow checker is to ensure proper concurrent memory access while the memory is still in use.
I don’t think the main purpose is concurrent memory access considering the enormous share of memory managed by the borrow checker which is never accessed in a concurrent context. Whichever purpose is the “main” purpose hardly matters anyway—the fact of the matter is that Rust makes memory management quite burdensome even though it makes it easier to ensure certain correctness properties.
At this point in history it should be more or less apparent that "here's a gun you can shoot your foot with, just don't do it lol" is not a viable strategy. It's the strategy of the C devs whose memory safety bugs and buffer overflow mistakes we still find to this day, in a number of popular and very widely used programs.
...Nah. We shouldn't go back (and I am not saying you claim so, I am kind of just developing the thought here). Let's have memory safety and algebraic data types and maybe pluggable GC runtimes (if lifetime management proves too difficult which it has at times; not every task is perfectly suited for Rust's borrow checker) and async, and any other goodies that help us solve real problems, and let's not dream of a simpler life.
That "simpler life" has failed. We should acknowledge that and move on and work with the reality in front of us.
Rust's lifetime management and async runtime(s) can be maddeningly difficult but after being on all parts of the spectrum -- from bash scripts and JS Wild West projects to Rust -- I confidently claim that the more strictness the better.
...Though I wouldn't refuse Rust having compile times like those of Golang and OCaml, can't deny it.
> At this point in history it should be more or less apparent that "here's a gun you can shoot your foot with, just don't do it lol" is not a viable strategy.
However in the case of Haskell, manually mucking around with mutable aliases, references and pointers is culturally similar to using unsafe in Rust. By default in Haskell you are using immutable variables. And if you have multiple threads and want to share mutable state, you typically use software transactional memory.
> ...Though I wouldn't refuse Rust having compile times like those of Golang and OCaml, can't deny it.
Well, Golang has quick compile times partially because it makes the human do half of the compiler's job. OCaml is indeed something more worth aspiring to.
As a compromise, I found that `cargo check` is much quicker than a build and does most of what I need when developing: most of the time, I don't actually care about the resulting binary, I just want the compiler to tell me quickly whether I introduced any errors it can detect.
Unfortunately you can't start simple and add lifetimes-as-types into your language, because that's at the very core of it. You'd kill all backwards compatibility with it. There's likely no gradual path to Rust from non-Rust.
My guess is this is what Swift has planned. They’ve been slowly adding hidden features inside the compiler to detect more and more problematic memory-sharing patterns. My hunch is that the two languages will eventually converge: Rust slowly improving the ergonomics, and Swift slowly adding more safety.
> It’s important to understand that unsafe doesn’t turn off the borrow checker or disable any other of Rust’s safety checks: if you use a reference in unsafe code, it will still be checked.
Going from the other side would be really hard to design for. The whole standard library in Rust relies on the lifetimes being there. If you started without them, you'd end up with things which can't be implemented safely anymore. Even with lifetimes available, some traits took ages to stabilise and rule out edge cases. You'd have two standard libraries, and the compiler would have to know how to make them interact. Or maybe they couldn't interact, and you'd have to remember that some hash maps have the get/set you started with and others need `.entry(...).and_modify(...)`. That sounds terrible for usability.
Pretty much the only time I see a nil pointer panic in Go is when a junior dev doesn't initialize a map, which is easy to diagnose and fix (and should probably get a linter). It's a non-issue as far as I'm concerned.
Most Go types have a useful default value. A nil slice is a zero-length slice. A default bytes.Buffer is an empty buffer. The default of a struct is all fields at their defaults. That’s all great.
It's because appending to a slice returns another slice, whereas assigning a key to a map doesn't return another map. That means that there's no sensible way to assign a key of a nil map. You could make adding a key to a map work like appending a value to a slice, or you could add an extra level of indirection to the underlying map type (make it a pointer to a struct instead of a struct). Neither of those alternatives seems particularly attractive, though.
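A minimal demonstration of that asymmetry:

```go
package main

import "fmt"

// A nil slice is usable because append returns a (possibly newly
// allocated) slice, so the caller's variable gets rebound. Map
// assignment mutates in place and returns nothing, so a nil map
// must be allocated with make before any key can be set.
func main() {
	var s []int
	s = append(s, 1) // fine: append allocates and returns a new slice

	var m map[string]int
	// m["a"] = 1    // would panic: assignment to entry in nil map
	m = make(map[string]int)
	m["a"] = 1 // ok after make

	fmt.Println(s, m)
}
```

(Reading from a nil map is fine, by the way; only writes panic.)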
> (You might wonder why I didn't just … I needed to run on … with … in the first place. The answer is that I was distracted by the flow of circumstances. First I tried … so I tried to …, then I had … same problem, and by that point my mental focus was on 'make the compiler toolchain work'.)
While intimately aware of the phenomenon in a technical context and something I constantly check myself on, I’ve never heard it described quite like that. ‘distracted by the flow of circumstances.’ So relevant to many parts of one’s life, it’s a nice turn of phrase, yet also complete BS.
I learned a lot from this article and discussion here!
Interesting. On a project I currently work on, we use a statically compiled binary, and I thought the DNS resolver was the only thing that needed to be switched over to the internal DNS resolver of the Go SDK.
But as we noticed, once you have something like PAM plugins and are using sssd to get users from your LDAP, the static Go binary cannot resolve the user IDs.
So there is more to it, and we have to decide whether we implement a user lookup natively in the Go binary or rely on libc and however the actual Linux system is configured to log in users.
Also, I have not tested this hypothesis, but with all the troubles I have seen with a Linux VPN client pushing its resolvers into the Linux client, and with how the systemd resolver and other kinds of resolvers mess up your DNS, I wonder whether the Go internal DNS resolver implementation picks up on this mess.
Anyway, for the Go compiler here this might not be such a big problem.
DNS is pretty straightforward though, because systemd-resolved runs the stub resolver. So if your /etc/resolv.conf points to it (as it usually will), then everything will "just work" fine, since it's all DNS requests.
What we really need is something similar for users and groups, when you get down to it: at the end of the day we're really just asking the system to tell us some UIDs and GIDs, or what names to assign to such things.
Except using a socket involves a different process, so you may get a different result than you expect (given namespaces).
I'm not that familiar with Plan 9 (which is where I think most of the namespacing concepts came from): does it allow direct syscalls, or is Linux the only system where you can make syscalls directly?
On solaris, freebsd, and so on, the operating system ships the kernel and libc versions together, libc makes syscalls, and everyone else makes C ABI calls to libc.
The libc ABI is in fact the only interface the kernel, on solaris bsd etc, gives you to do something like "open a file" or "make a network call".
On those operating systems, the kernel syscalls may change between versions, so if you statically link libc, or if you make syscalls directly, your program may break when the OS version is upgraded. Every program is meant to use the libc ABI, which they won't break, and then internally the OS may refactor libc+the kernel syscall interface together without worrying about breaking anything.
macOS is similar to this.
Linux is unusual in that the libc and kernel implementation are maintained separately, and both keep their own stability promises, so on linux it is viable to make a pure go implementation which makes syscalls directly. Which is exactly what go has done, and why this static toolchain is possible.
Most programming languages don't bother with the linux syscall ABI because they'll have to use libc for other unixes as well (like solaris etc), so they might as well do it for linux too.
This is the correct answer. Don't use syscalls unless you know what you're doing, obviously. Take a look at how musl and glibc POSIX calls are implemented, some have nuances you have to be aware of, and in others libc might do hidden stuff behind the curtain.
Example of a small nuance that can ruin your day/week: the size of syscall(2) arguments is fixed, but for some syscalls the arguments must be smaller. Hence you have to pack and align the values properly in the registers. This example is from the syscall(2) manpage.
On Linux with glibc, static linking breaks the DNS resolver client: gethostbyname(3) and friends. As a workaround, you can link to musl libc, but that is only for software written in C, it does not work with C++. FreeBSD is better in this regard (its libc allows static linking without breaking the DNS client, even for C++ software).

However, on both OSes static linking is incompatible with dlopen(3) and dlsym(3), i.e. your application will be unable to load any .so files, so if you link statically to libc then you must statically link every other library too.

This is unlike Windows, where the OS kernel API DLLs (ntdll.dll, kernel32.dll) are separate from the libc DLL (msvcrt.dll for MSVC, cw3220.dll for Borland C++, mingw-something-something.dll for MinGW; IIRC there was something similar for Watcom, etc.), so libcs from different compilers may be loaded simultaneously into the address space of a process (e.g. in an application with third-party plugins compiled against a different libc, or a different version of the same vendor's libc). Also, on Windows the OS kernel API DLLs (ntdll.dll, kernel32.dll) are always loaded into the address space of your process, even if you link to libc statically. So your application can always call LoadLibrary(), TlsAlloc(), etc., and it is possible to write libc-less applications in plain C.
> but that is only for software written in C, it does not work with C++.
I have a pretty complex C++ command line tool which works just fine with MUSL and results in a distro-agnostic executable (https://github.com/floooh/sokol-tools). What potential problems should I be aware of?
Even in C there can be issues. The nokogiri Ruby gem builds (or used to build) libxml and libxslt (which are pure C) with patches that make an effort to remove a couple of GNUisms.
For C++ we were faced with some issues, so the process we ended up with is:
- build musl, install it in some location
- inject a few GCC libs and linux headers required for C runtime to have the above location be a proper sysroot for clang to use
- build LLVM libc++ and a few libs (e.g libunwind) as static libs against that sysroot using clang, and inject them into the sysroot
- build whatever C++ final product we want against the sysroot using clang, statically linking libc++ in
- for a dynamic lib, remove the "needed" dynamic reference to libc.so in the ELF. Also, hide all symbols from libc++ and load with bind-local, so that when loaded the shared lib prefers its internal symbols (which would make it crash if it jumped to another libc++) and does not pollute the symbol namespace with its internal ones (which would make another lib crash if it jumped to the internal libc++)
- for an executable binary instead of a lib, dynamic references may instead need to be altered so that it works for both
It all hinges on musl being a subset of glibc, which is not entirely true either (see the musl website for differences in behaviour, which may or may not matter depending on the piece of software)
Not all OSes have a libc per se, only those of UNIX/POSIX origin.
Secondly, on most UNIX environments, libc isn't just the ISO C standard library but the full OS public API. Static linking to it is possible (GNU libc being the exception, due to how it is designed); however, it limits what the binary will be able to call across OS versions, if at all, e.g. when the actual implementation semantics change.
> static linking to it is possible (GNU libc being the exception due to how it is designed)
Apple does not ship a statically linkable libSystem (I don’t think they ever have), and openbsd is trying to move away from it to tie up origin verification.
Is DNS lookup provided by the kernel on macOS? I'm pretty sure this would all be done in userspace anyway.
There is no "native" way to do this unless you consider libc (or libSystem) "native". There is no kernel interface to do DNS lookups, just standard userspace tooling.
It's not difficult, and it was done like that for some time, but the macOS system call interface is not guaranteed to be stable (more like guaranteed to be unstable), so the correct thing on macOS is to go through libc (libSystem), which Go now does since 1.12.
For DNS specifically, this landed in Go 1.20: here is the change set [1].
A different change [2] had a similar effect of using the system facilities for certificate validation in 1.18.
In general, using the custom stuff “works” (until it doesn’t!) but results in janky programs that don’t behave with respect to split tunnel DNS and so forth - very common with cross compiled programs until recently.
It sounds like it's pretty much the final step; the toolchain will be fully reproducible on 1.21: https://go.dev/cl/454836
The CL description describes the changes, then:
"Combined, these four changes (along with Go 1.20's removal of
installed pkg/*.a files and conversion of macOS net away from cgo)
make the output of make.bash fully reproducible, even when
cross-compiling: a released macOS toolchain built on Linux or Windows
will contain exactly the same bits as a released macOS toolchain
built on macOS.
The word "released" in the previous sentence is important.
For the build IDs in the binaries to work out the same on
both systems, a VERSION file must exist to provide a consistent
compiler build ID (instead of using a content hash of the binary)."
We build our client executables in a Debian Squeeze Docker container exactly because of those glibc versioning issues. This way, they run on all currently supported Linux distributions, even on RHEL 6.
Speaking of portability and golang, this sounds promising: https://github.com/tetratelabs/wazero Can anyone tell me if this is close to working as well as wasmtime, which uses CGO?
What the article describes is a compiled Go binary that will run on any* Linux system without using any form of libc, dynamically or statically.
So it's not just relevant to musl or musl-based distributions (perhaps especially because musl is almost always linked statically anyway, so the distribution doesn't really matter in that case either).
I guess they parse /etc/resolv.conf (via the Go standard library) and just ignore /etc/nsswitch.conf (which is going to work OK in 99.999% of cases). For the remaining 0.001% of cases, there is a non-default option to link to the system libc dynamically.
I would have predicted that an app using the network on my computer would make use of my computer’s network settings, since all other apps do. How would it know the correct resolver to use on my network otherwise?
The answer is in another thread, go only checks resolv.conf. Other settings get ignored. That means if you see unexpected behavior on one system you only have to check resolv.conf, not all the other network settings.
Wow, so the best language for multi-platform Linux development could be Golang. Does it have enough libraries to be a serious alternative? What do you think?
I have a project to rebuild a complete Alpine to make static binaries. glibc does not like static linking for political reasons; the feature to make static binaries is broken, and undocumented on purpose. Long live musl!
GNU may have a political opposition to static linking, but there are major practical issues too. For features like nsswitch, the libc has to dlopen shared libraries, which themselves link against libc. Static linking means you'll end up with (at least) 2 different libc versions in the address space at the same time, operating on the same data structures. GNU (and some other libc implementors) try to make that work as well as possible, but it's necessarily an unholy mess.
28 years of using Linux, and I never used nsswitch.conf. I learned about its existence when a colleague of mine asked a question about it during a recruiting interview of a candidate.
I would expect anyone doing sysadmin work on Linux in any sort of centrally managed network would have exposure to it. It is certainly relevant any time you are messing with NIS/LDAP configs and troubleshooting.
I wouldn't expect your typical developer, even one doing C app development, to have interacted with nsswitch unless their local name resolution was broken somehow and they were trying to fix their own problem.
Came up in my first few years of using Linux, as a kid, at home, to get NetBIOS and Avahi name resolution working so my device could access Windows network shares and other things by name. No fancy enterprise use case required; just being able to ping a computer on the LAN by name involves nsswitch.conf!
Not until someone does the work of transitioning all uses of dlopen in glibc to dbus communication with a service. I don't think that's currently a plan, and building a dbus client into glibc doesn't seem very plausible.
I didn't believe you that it was broken, but you're right; very disappointing. For anyone interested, the bug for it being broken is at [1] (reported mid-2021).
The build failure is easy to fix, so I created a repo at [2] which builds a program against a glibc with static nss. I verified with strace that it does indeed check nsswitch.conf and try and load dynamic libraries (I'd at least submit my patch [3] for the build failure but I find mailing lists to be a hassle)
All this said, I wouldn't call it undocumented - it's documented in the `configure --help` itself as well as the online version [4], and it has an FAQ entry [5].