go get (-d) -u ./... && go mod tidy
go get -u
git config --global url.git@HOSTNAME:.insteadOf https://HOSTNAME/
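Taken together, those commands are the usual "dance" for private modules. A minimal sketch, with a placeholder GitLab host (GOPRIVATE keeps the go tool from sending those paths to the public proxy/checksum DB):

    # rewrite HTTPS clone URLs for the private host to SSH
    git config --global url."git@gitlab.example.com:".insteadOf "https://gitlab.example.com/"
    # don't route these module paths through proxy.golang.org / sum.golang.org
    go env -w GOPRIVATE='gitlab.example.com/*'
    # then the usual update
    go get -u ./... && go mod tidy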
- We don't need this dance in our Dockerfiles / build scripts / dev machines anymore. We have GOPROXY / GONOSUMDB baked into our build image, and developers configure the same proxy locally via `go env -w` (see the sketch just after this list).
- We pull all packages over the proxy, so builds are faster / use less bandwidth / still mostly work when GitHub goes down.
- Supply-chain attacks get harder; the credentials the proxy holds are more limited than any individual developer's GitLab tokens / credentials, and compromising the proxy is going to be harder than compromising a single developer's laptop.
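A hedged sketch of that per-developer configuration (the proxy URL and host are made up):

    # point the go tool at the internal module proxy, falling back to direct fetches
    go env -w GOPROXY=https://goproxy.internal.example.com,direct
    # skip checksum-database verification for internal module paths
    go env -w GONOSUMDB='gitlab.example.com/*'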
machine <your GitLab domain> (e.g. gitlab.com)
login <your GitLab id>
password <your GitLab personal access token>
For the access token, you can also leverage GitLab's CI Job Token.
What we do is an "echo $(netrc contents with $CI_JOB_TOKEN) > ~/.netrc" in the pre-script in CI.
(https://stackoverflow.com/a/61257782 and https://docs.gitlab.com/ee/security/token_overview.html)
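Roughly, that pre-script amounts to something like this (the GitLab host is a placeholder; gitlab-ci-token is the login GitLab pairs with $CI_JOB_TOKEN for HTTPS auth):

    # in the job's before_script: write a .netrc so git/go can authenticate over HTTPS
    echo "machine gitlab.example.com login gitlab-ci-token password ${CI_JOB_TOKEN}" > ~/.netrc
    chmod 600 ~/.netrc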
But for other stuff (namely private npm packages), (relatively) modern versions of Docker support build-time-only secrets: https://docs.docker.com/develop/develop-images/build_enhance.... Pass the secret as part of the docker build command and then access it inside RUN --mount=... stages.
Don't know how that'll work out for a larger project but it works well enough in this small API I wrote.
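For reference, a minimal sketch of that setup (the image tag, secret id, and paths are arbitrary):

    # syntax=docker/dockerfile:1
    FROM golang:1.16
    WORKDIR /src
    COPY go.mod go.sum ./
    # the secret is mounted only for this RUN step and never written into a layer
    RUN --mount=type=secret,id=netrc,target=/root/.netrc go mod download
    COPY . .
    RUN go build ./...

    # build with BuildKit, passing the secret from the host
    DOCKER_BUILDKIT=1 docker build --secret id=netrc,src=$HOME/.netrc .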
 - https://blog.bramp.net/post/2017/10/02/vanity-go-import-path...
See xyzc.dev/go/ppgen and inspect the source for an example.
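To illustrate the mechanism (the backing repo URL below is made up), the vanity domain just serves an HTML page with a go-import meta tag pointing at the real repository:

    curl 'https://xyzc.dev/go/ppgen?go-get=1'
    # the response contains something along the lines of:
    #   <meta name="go-import" content="xyzc.dev/go/ppgen git https://example.com/some-user/ppgen">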
Also, Go module paths depend on DNS, so any domain name change for your repo results in code (import path) changes.
Modules are not for monorepos and internal components; modules are for third-party dependencies that need to be pulled in at compatible versions, and for general multirepo work.
This is trivial to do with any other module system I've used (Maven, NuGet, Conan, pip, cargo), but it is extraordinarily brittle with Go.
I don't understand? This should literally just be an import statement. If your Git repository is anchored at importpath "source.example.com/git", and your code lives under "app/service/backend", and wants to import "lib/db/mysql", it just does so via `import "source.example.com/git/lib/db/mysql"`. No need to use Go modules at all. Or if you do need to use Go modules, just have one module for the entire repository.
The only reason I can see this not work is if you have multiple Go modules in a single repository, or even worse have module import paths not corresponding to paths within the monorepo. But in that case the fix is easy: just don't do that?
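Concretely, the layout that "just works" is a single module rooted at the top of the repo (paths reused from the example above):

    go.mod                       # module source.example.com/git
    app/service/backend/main.go  # import "source.example.com/git/lib/db/mysql"
    lib/db/mysql/mysql.go        # package mysql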
Modules serve a purpose inside a (large) organization just as they do between organizations, and tying modules to your source control choices is generally a bad idea.
The whole point of a monorepo is that everything is using the most recent version of every dependency, always. If something doesn't, then you broke the build. Versioning does not exist, dependency hell does not exist, everything works together. Dependencies are simply imported, and atomic commits upgrading both client and server are possible.
If you need to use an old API for a dependency, then that is a new dependency that is added to the monorepo as a separate project that you now have to maintain.
There are plenty of downsides too, but these are the advantages and the reasons why Google had no incentive to design a good module system for Go v1.0.
One of the main reasons to use a monorepo is to work at a single HEAD, letting you do large scale refactors within single/linked commits, without ever having to fork any code due to breakage (as you change both sides of a potentially breaking contract in the same change). I would argue that if you're attempting to do multiple code versions in a monorepo, you're doing monorepos wrong. Or at least, you're not doing them Google-style, and that's really what GOPATH and Go has been engineered to work well with.
In that case, yes, you will likely hit an impedance mismatch with Go OOTB. But I'm also quite sure it's possible to write custom tooling around the Go compiler to work with this setup (i.e. use `go tool compile` and set things up manually for compilation, without using `go build`, which is quite opinionated).
However, if you have a monorepo with multiple code versions per subpath, what you're really doing is multirepo disguised as a technically-a-monorepo-I-guess-I-mean-it's-a-single-repo-that-makes-it-a-monorepo, with none of the benefits of either approach.
This ensures you have a common place to look for code, and more importantly, it ensures you have a unique history for all of your code - what often happens is that ProjectA has some internal library LibA; after a while, ProjectB needs functionality similar to LibA, so now you move LibA from /ProjectA to /Libs/LibA, and can easily re-use it across ProjectA and ProjectB. But in older branches of ProjectA (e.g. a hotfix branch for an already released version) it is still a sub-folder, and you can keep moving bug fixes from here to there. If you split all of this into different repos, you can't really carry over history, at least not with ease, and definitely not while both versions live together.
However, code artifacts should be versioned separately from all of this - you create builds all the time, and they, by default, use fixed versions of both external and internal libraries, as there is no reason to spend time updating a library that already does all you need - especially not while it is work-in-progress. So you have a build system that creates and consumes artifacts, and that tracks dependencies between artifacts.
That is an anti-pattern in my experience, as it's a vicious cycle that ends up with stale builds. Library authors, unburdened by the requirement to always have a working library, will start to ship breaking changes ('you should've pinned it!', they will say, not even thinking about making the change non-breaking or at least having a grace period). This in turn makes library users pin to some 'stable' versions early, and these pins never get updated. That in turn makes library authors care even less about not breaking master, and it escalates into a tangled ball of mutual version pins with no easy way to untangle it. The purest form of dependency hell.

That also has negative effects on long-term maintenance: say a library introduces a security fix, or some other change that needs to be rolled out ASAP to all users. Who is responsible for updating the dependencies then, or backporting the fixes? The library authors, who now have to backport fixes to all possible forks, with an urgent change blocked on chasing down all library users and convincing them to pull in the backport? Or application authors, who might not know anything about the library codebase? And what if there is some other transitive dependency in the chain that is also pinned at an ancient version? This gets very, very hairy very quickly, and tends toward a combinatorial explosion of complexity as new edges appear in the multiversioned dependency graph.
Contrast this to the radically different approach of never breaking HEAD. On every commit, CI works out which build targets that particular change affects, and re-runs tests for _all_ of them. Change your own code? That's fine, your affected tests run. Change a library? That's also fine, but tests of all code that depends on it now run too, and you had better not have introduced a bug, or that change simply will not be allowed to merge. That's it.

No pinning, no releases for any library; you just keep getting code changes merged and make sure they pass tests. No special process for backports, no special coordination of version bumps, no declaring stability contracts for every library, no mentally juggling a complex tree of versioned dependencies and the backports involved when trying to get a fix landed. Whatever code you push must work, and that's it. It is then the job of release teams/automation to qualify a version of the software at a given monorepo revision, and if anything breaks unexpectedly - just get it fixed in the code and cut a new release.
Granted, this requires tooling around your monorepo: proper CI, a fast build system that can re-run only affected targets, a proper code review system that lets you work on large changes in chunks, and then merge them at once onto HEAD. But that's something that's generally unavoidable with monorepos.
tl;dr: Choosing to 'not spend time updating a library that already does all you need' is just kicking the can down the road: whatever library you use, you must be able to move to a newer version of it, and it had better be early in the process, when things can still be changed, rather than when things are on fire.
With normal build/package management systems, nothing stops you from setting up your CI/CD system to pick up the latest from master in 20 repos, and nothing stops you from configuring Maven to pick up dependencies from different moments in history from various places in your monorepo.
Also, this whole 'build everything from HEAD' can't work when shipping versioned software to customers that demand patches and security updates, unless you have absolutely perfect test coverage AND absolutely perfect backwards compatibility, up to the internal API level. This would be like shipping the latest kernel to fix a security problem on a 5-year-old system running 2.something.
"/v2", git tags but not branches, and the blast radius of trying to use a forked dependency: Golang's awful module system is top of my list for reasons to recommend away from golang. The design is full of weird kludges and your choices are zero documentation or overwhelming documentation that obscures what you really need.
This is considering the fact that nixpkgs has Go module support.
You can even use Nix's recursive, lazy nature to declare dependencies between libraries, too, in just as lightweight a way as go mod.
However, this is a lost cause at this point. Go is removing GOPATH support in the next release (Go 1.17). See: https://blog.golang.org/go116-module-changes