Is binary dependency management just not an option ever?
I have a friend who works for Google, and supposedly they have proprietary build infrastructure that offloads C++ compilation onto a cluster. I sort of wish Google open-sourced that, as I believe it effectively does a form of binary dependency management.
Yes, I know Go, like C++, can target lots of platforms (and thus needs many binaries built), but an organization only needs the major ones. And yes, I know the language statically compiles to a single binary, but that doesn't mean things can't be precompiled.
Go these days seems to be used mainly for microservices or small utilities, so you don't need, and shouldn't have, a gigantic code base for such things. I can understand monolithic UI apps having large code bases, but that's clearly not what Go is being used for these days.
There are other languages that compile to native code and seem to compile fairly fast (OCaml and Rust, in my experience, though I haven't worked with huge code bases in them).
Is compilation speed really an issue given the use cases for Go?
Yes, I find compilation speed to be one of the most important things, but it is not a selling point for Go. The Go compiler is not very fast, and speed is not an acceptable excuse for the lack of parametric polymorphism. OCaml has not just parametric polymorphism but many other basic type system features, and yet ocamlopt is both 5-10 times faster than the Go compiler and still produces faster binaries.
> Does Bazel require a build cluster?
Google's in-house flavor of Bazel does use build clusters, so Bazel has hooks in the code base for plugging in a remote build cache or a remote execution system.
The open source Bazel code runs build operations locally. We believe that this is fast enough for most of our users.
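To make that concrete: in open source Bazel, those hooks are exposed as ordinary command-line flags, so no build cluster is required to share compiled outputs. A minimal sketch (the cache endpoint below is hypothetical):

```
# .bazelrc — point builds at a shared cache instead of rebuilding everything
build --disk_cache=~/.cache/bazel-disk               # reuse action outputs across local workspaces
build --remote_cache=grpc://cache.example.com:9092   # hypothetical shared cache endpoint
```

With a remote cache configured, developers and CI reuse each other's compiled outputs, which is effectively the binary dependency management asked about above.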