Go Modules Cheat Sheet (encore.dev)
138 points by eandre 9 days ago | 50 comments





I have also written an article, out of frustration at finding simple information on the topic, i.e. how to upgrade Go deps [0] (which is not immediately easy to find in the extensive Go wiki [1]), as well as a general go mod how-to tutorial [2].

[0] https://golang.cafe/blog/how-to-upgrade-golang-dependencies....

[1] https://github.com/golang/go/wiki/Modules#how-to-upgrade-and...

[2] https://www.youtube.com/watch?v=AJxKKmzRTUY


Why did you decide not to contribute to the wiki? (Curious, not a blaming tone.)

The wiki already has enough information; in my opinion it's structured and packed in a way that makes things difficult to find. I think if the Modules wiki could be split into multiple search-engine-indexable files, the info would be slightly easier to find.

Ahhhh.... relief. The Go docs are vague on day-to-day tasks like this. Also, how were those nifty arrows done?


I stand very happily corrected! Thank you very much!



Arguably updating packages could be a bit more obvious than

  go get (-d) -u ./... && go mod tidy

In 99% of cases you only need

    go get -u
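
For completeness, a rough sketch of the usual upgrade flows (example.com/foo is just a placeholder module path):

    go get -u ./...                # upgrade deps of all packages in the module
    go get -u=patch ./...          # patch-level upgrades only
    go get example.com/foo@v1.2.3  # move one dependency to a specific version
    go mod tidy                    # prune requirements that are no longer used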

Thanks! I used Illustrator to create it and then cleaned it up by hand.

Question: Why is Go so integrated(?) with GitHub? I mean, why can't I (or can I?) go get from my own personal GitLab server?

You can. If you have a `.git` in your import path, it should just work OOTB, using Git as expected. Otherwise, the code hosting service has to implement support for returning a special meta tag when serving a ?go-get=1 query - and GitLab already does that, IIRC.

https://golang.org/cmd/go/#hdr-Remote_import_paths
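
To illustrate the mechanism, roughly what the go command fetches for a custom domain (host and repo path here are made up):

    $ curl 'https://git.example.com/me/mylib?go-get=1'
    <html><head>
    <meta name="go-import" content="git.example.com/me/mylib git https://git.example.com/me/mylib.git">
    </head></html>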


For a truly private server, you'll need two needles of that mountainous haystack: set GOPRIVATE and tell git to use ssh first:

    git config --global url.git@HOSTNAME:.insteadOf https://HOSTNAME/
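
For completeness, the GOPRIVATE half is a one-liner too; a quick sketch (HOSTNAME stands in for the private host, and glob patterns are allowed):

    # skip the public module proxy and checksum database for these import paths
    go env -w GOPRIVATE='HOSTNAME/*'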

Is there not a better solution for this? We have this baked into our docker builds and it irks me that we have to copy personal credentials into a docker build so we can use git to pull modules and build. Is everyone actually doing this, or do you setup read-only tokens per private module, or anything else?

We set up an internal Athens[0] caching proxy with credentials to pull from our GitLab. So far it seems to be win/win/win

- We don't need this dance in our Dockerfiles / build scripts / dev machines anymore. We have GOPROXY / GONOSUMDB baked into our build image, and developers configure the same proxy (via `go env -w`) locally; see the sketch below this list.

- We pull all packages over the proxy, so builds are faster / use less bandwidth / still mostly work when GitHub goes down.

- Supply-chain attacks (SCAs) get more difficult; the credentials the proxy has are more limited than any individual developer's GitLab tokens / credentials, and owning the proxy is going to be harder than owning a single developer's laptop.

[0] https://docs.gomods.io/
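
The `go env -w` configuration mentioned above is roughly this (hostnames and module globs here are placeholders for our internal hosts):

    # route module downloads through the internal Athens instance, falling back to direct
    go env -w GOPROXY=https://athens.internal.example.com,direct
    # don't check private modules against the public checksum database
    go env -w GONOSUMDB='gitlab.internal.example.com/*'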


At least for GitLab, you can write a .netrc in the user's home dir like so:

    machine <your GitLab domain, e.g. gitlab.com>
    login <your GitLab id>
    password <your GitLab personal access token>

For the access token, you can also leverage GitLab's CI Job Token.

What we do is an "echo $(netrc contents with $CI_JOB_TOKEN) > ~/.netrc" in the pre-script in CI.

(https://stackoverflow.com/a/61257782 and https://docs.gitlab.com/ee/security/token_overview.html)
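
As a rough sketch of that pre-script in .gitlab-ci.yml (the host is a placeholder; as far as I know, gitlab-ci-token is the username GitLab expects alongside $CI_JOB_TOKEN):

    before_script:
      - echo "machine gitlab.example.com login gitlab-ci-token password ${CI_JOB_TOKEN}" > ~/.netrc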


If you do take this approach, writing your access token in plaintext into a well-known file in your home directory seems like a strictly worse idea than the (also more broadly-compatible) Git insteadof rule to pull via SSH and your local agent.

Yes, though it should be noted that the access token is only valid for the duration of the Job.

For Go, I also take the vendor approach.

But for other stuff (namely private npm packages), (relatively) modern versions of Docker support build-time-only secrets: https://docs.docker.com/develop/develop-images/build_enhance.... Pass the secret as part of the docker build command and then access it inside RUN --mount=... stages.
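
A hedged sketch of that for Go module downloads (the netrc secret id and file layout are made up; needs BuildKit and Dockerfile syntax 1.2 or newer):

    # syntax=docker/dockerfile:1.2
    FROM golang:1.16 AS build
    WORKDIR /src
    COPY go.mod go.sum ./
    # the secret is mounted only for this step and never written into an image layer
    RUN --mount=type=secret,id=netrc,target=/root/.netrc go mod download
    COPY . .
    RUN CGO_ENABLED=0 go build -o /out/app .

Built with something like `DOCKER_BUILDKIT=1 docker build --secret id=netrc,src=$HOME/.netrc .`, so the credentials come from the host at build time only.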


What I've done on a recent project is to vendor the dependencies before the build (these are excluded from git; you could cache them if you're using a CI/CD system like Circle), copy everything into the build container, build with -mod=vendor, and then copy the resulting binary into a new container.

Don't know how that'll work out for a larger project but it works well enough in this small API I wrote.
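
A minimal sketch of that flow, assuming `go mod vendor` has already run (image tags and paths are illustrative):

    FROM golang:1.16 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -mod=vendor -o /out/app .

    FROM gcr.io/distroless/static
    COPY --from=build /out/app /app
    ENTRYPOINT ["/app"]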


Thank you for the link. I have tried in the past to find this exact information and for some reason it just never came up. User error, I suppose.

As others have already pointed out, you don't need to use GitHub. Some people call this approach vanity imports [0]. In my experience 'vanity imports' gives better results when googling stuff about custom import paths.

[0] - https://blog.bramp.net/post/2017/10/02/vanity-go-import-path...


All you need to do is have a webpage with a meta tag whose name is "go-import" and whose content attribute has the form "import-prefix source-control-method repository-URL".

See xyzc.dev/go/ppgen and inspect the source for an example.


I think you can? The URLs in package names are just URLs. So if you have a URL to a git repo, it'll work.

Okay, interesting. I tried looking that up, but I never once found an example and, well, I simply never thought to try it.

Thank you.


It isn't, but it is limited to a few source control solutions. If you use something other than Git, Bazaar, SVN, Mercurial, or Fossil, then you're not going to be able to publish Go modules.

Also, Go modules are dependent on DNS, so any domain name change for your repo will have to result in code (import path) changes.


A change in your repo does not need to result in an import path change if you set up vanity URLs for your import path. For instance, all my go libraries use the import path of gomod.garykim.dev even though the repos are actually hosted on GitHub. That means if I ever change to self-hosting, for example, I just need to update some of the meta tags on the gomod.garykim.dev website to change the repo location.

That still means that if you lose your domain name (gomod.garykim.dev) for whatever reason, you have to modify your code.

You can use a replace directive to handle cases like this without updating the code.

https://golang.org/ref/mod#go-mod-file-replace
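
For example, in go.mod (module paths are hypothetical):

    // the import path stays old.example.com/somelib in the code,
    // but the source is fetched from the new location (or a local checkout)
    replace old.example.com/somelib => new.example.com/somelib v1.4.1
    // replace old.example.com/somelib => ../somelib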


Only if you're the final consumer of that module - if you publish it so others use your module, the replace directive will be ignored and they will get a failed build (or have to add their own replace directive for your sub-dependency).

Yes, but this is roughly the same as any packaging system - the ones that don't use DNS as a default namespace require you to add explicit package repositories, which is no more or less onerous than the replace directive.

I've found explaining multi-module repos w/ go.mod files in subdirs and how that affects using a pseudo-version vs a real version to be the most challenging part. That could be a good addition.
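
For the record, a compact sketch of the conventions involved (repo and module paths below are made up): a module whose go.mod lives in subdirectory tools/ has to be tagged with a directory-prefixed tag, and untagged commits are referenced via pseudo-versions instead.

    # tag a release of the nested module at tools/ in repo example.com/monorepo
    git tag tools/v1.2.0 && git push --tags

    # an untagged commit of that module shows up in consumers' go.mod as a pseudo-version, e.g.
    # require example.com/monorepo/tools v1.2.1-0.20210301123456-abcdef123456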

I'm always surprised just how difficult it is to have Go code in a monorepo given Google's famous use of one. I guess they just don't use modules internally?

It works great if you don't use modules internally. Either go with plain GOPATH, or use something like Bazel/rules_go/Gazelle.

Modules are not for monorepos and internal components, modules are for third party dependencies that need to be pulled in at compatible versions, and for general multirepo work.


"third-party" dependencies can easily exist between teams in the same company sharing the same mono-repo. It's perhaps not as common with git, but other version control systems are much more often used as monorepos, often for building multiple unrelated projects, with some common tooling libraries used as modules.

This is trivial to do with any other module system I've used (Maven, NuGet, Conan, pip, cargo), but it is extraordinarily brittle with Go.


> "third-party" dependencies can easily exist between teams in the same company sharing the same mono-repo. [...] This is trivial to do with any other module system I've used (Maven, Nuget, Konan, pip, cargo), but it is extraordinarily brittle with Go.

I don't understand? This should literally just be an import statement. If your Git repository is anchored at importpath "source.example.com/git", and your code lives under "app/service/backend", and wants to import "lib/db/mysql", it just does so via `import "source.example.com/git/lib/db/mysql"`. No need to use Go modules at all. Or if you do need to use Go modules, just have one module for the entire repository.

The only reason I can see this not work is if you have multiple Go modules in a single repository, or even worse have module import paths not corresponding to paths within the monorepo. But in that case the fix is easy: just don't do that?
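
Concretely, "one module for the entire repository" just means a single go.mod at the repo root (same hypothetical paths as above); every package in the repo then imports through that prefix with no extra module wiring:

    module source.example.com/git

    go 1.16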


What if I want to use version 1.4.1 in one project, but 1.0.3 in another project?

Modules serve a purpose inside a (large) organization just as between organizations, and tying modules to your source control choices is a generally bad idea.


Then you're not using a monorepo.

The whole point of a monorepo is that everything is using the most recent version of every dependency, always. If they don't, then you broke the build. Versioning does not exist, dependency hell does not exist, everything works together. Dependencies are simply imported and atomic commits upgrading both client and server are possible. If you need to use an old API for a dependency, then that is a new dependency that is added to the monorepo as a separate project that you now have to maintain.

There are plenty of downsides too, but these are the advantages and the reasons why Google had no incentive to design a good module system for Go v1.0.


This is such a strange take. It's always simpler to have a single repo rather than multiple repos, especially if using something like SVN or P4 where you can easily clone or branch a single directory. So regardless of how you develop and how easily you can use latest vs known-safe builds, there may just not be any down-sides to a monorepo, depending on your VCS.

> What if I want to use version 1.4.1 in one project, but 1.0.3 in another project?

One of the main reasons to use a monorepo is to work at a single HEAD, letting you do large scale refactors within single/linked commits, without ever having to fork any code due to breakage (as you change both sides of a potentially breaking contract in the same change). I would argue that if you're attempting to do multiple code versions in a monorepo, you're doing monorepos wrong. Or at least, you're not doing them Google-style, and that's really what GOPATH and Go has been engineered to work well with.

In that case, yes, you will likely hit an impedance mismatch with Go OOTB. But I'm also quite sure it's possible to write custom tooling around the Go compiler to work with this setup (ie. use `go tool compile` and set things up manually for compilation, without using `go build` which is quite opinionated).

However, if you have a monorepo with multiple code versions per subpath, what you're really doing is multirepo disguised as a technically-a-monorepo-I-guess-I-mean-it's-a-single-repo-that-makes-it-a-monorepo, with none of the benefits of either approach.


Having a monorepo is the default, especially for centralized VCS. You start with one repo, and never stop adding to it. Different projects live in different directories of the monorepo, and they each maintain their own branches etc.

This ensures you have a common place to look for code, and more importantly, it ensures you have a unique history for all of your code - what often happens is that ProjectA has some internal library LibA; after a while, ProjectB needs functionality similar to LibA, so now you move LibA from /ProjectA to /Libs/LibA, and can easily re-use it across ProjectA and ProjectB. But in older branches of ProjectA (e.g. a hotfix branch for an already released version) it is still a sub-folder, and you can keep moving bug fixes from here to there. If you split all of this into different repos, you can't really carry over history, at least not with ease, and definitely not while both versions live together.

However, code artifacts should be versioned separately from all of this - you create builds all the time, and they, by default, use fixed versions of both external and internal libraries, as there is no reason to spend time updating a library that already does all you need - especially not while it is work-in-progress. So you have a build system that creates and consumes artifacts, and that tracks dependencies between artifacts.


> However, code artifacts should be versioned separately from all of this - you create builds all the time, and they, by default, use fixed versions of both external and internal libraries, as there is no reason to spend time updating a library that already does all you need - especially not while it is work-in-progress.

That is an anti-pattern from my experience, as it's a vicious cycle that ends up with stale builds. Library authors, unburdened by the requirement to always have a working library will start to ship breaking changes ('you should've pinned it!', they will say, not even thinking about possibly making the change non-breaking or at least having a grace period). This in turn makes library users pin to some 'stable' versions early, and these pins will never get updated. That in turn makes library authors care even less about not breaking master, and this escalates into a tangled ball of mutual version pins and no easy way to untangle it. The purest form of dependency hell. That in turn has negative effects on long-term maintenance: for example, if a library introduces a security fix, or some other change that needs to be rolled out ASAP to all users. Who is responsible for updating the dependencies then, or backporting the fixes? The library authors, who now have to backport fixes to all possible forks, with an urgent change blocked by having to chase down all library users and convincing them to pull in the backport? Or application authors, who might not know anything about the library codebase? What if there is some other transitive dependency in the chain, that is also pinned at an ancient version? This quickly gets very, very hairy, and tends to be a combinatorial explosion of complexity as new edges of the multiversioned dependency graph appear.

Contrast this to the radically different approach of never breaking HEAD. On every commit, CI picks up what build targets did that particular change influence, and re-runs tests for _all_ of them. Change your own code? That's fine, your affected tests run. Change a library? That's also fine, but tests of all code that depends on it now also run, and you better not have introduced a bug, or that change will simply not be allowed to be merged. That's it. No pinning, no releases for any library, just continue to get code changes merged, and make sure they pass tests. No special process for backports, no special coordination of version bumps, not having to declare stability contracts for every library, no having to mentally juggle a complex tree of versioned dependencies and the backports involved when trying to get a fix landed. Whatever code you push must work and that's it. It is then the job of release teams/automation to qualify a version of the software at a given monorepo version, and if anything breaks unexpectedly - just get it fixed in the code and cut a new release.

Granted, this requires tooling around your monorepo: proper CI, a fast build system that can re-run only affected targets, a proper code review system that lets you work on large changes in chunks, and then merge them at once onto HEAD. But that's something that's generally unavoidable with monorepos.

tl;dr: Choosing to 'not spend time updating a library that already does all you need' is just kicking the can down the road, whatever library you use you must be able to use a newer version of, and it better be early in the process when things can still be changed, rather than when things are on fire.


The thing is, the advantages of depending on HEAD vs. versioning dependencies have little to do with monorepo vs multiple repos. That is more a concern that depends on how your VCS is managed and how your build system interacts with your VCS, outside of Go tooling.

With normal build/package management systems, nothing stops you from setting up your CI/CD system to pick up the latest from master in 20 repos, and nothing stops you from configuring maven to pick up dependencies from different moments in history from various places in your monorepo.

Also, this whole 'build everything from head' can't work when shipping versioned software to customers that demand patches and security updates, unless you have absolutely perfect test coverage AND absolutely perfect backwards compatibility, up to the internal API level. This would be like shipping the latest kernel to fix a security problem on a 5 year old system running 2.something.


I figured GOPATH was designed around the monorepos and they've been trying to get rid of it using the module system.

"/v2", git tags but not branches, and the blast radius of trying to use a forked dependency: Golang's awful module system is top of my list for reasons to recommend away from golang. The design is full of weird kludges and your choices are zero documentation or overwhelming documentation that obscures what you really need.


We use bazel. It's weird how different it is from the Go cli tooling.

The original GOPATH approach to dependency management is probably a better fit for monorepos.

That's only true if everyone uses the latest version of the code. If you want to have an internal package ecosystem, with stable versions of packages, feature branches etc, then GOPATH is an even worse kludge.

I want to be on the new wave of Go modules, but a Nix-managed GOPATH works just as well if not better, especially if you use niv to manage the hashes & versioning.

This is considering the fact that nixpkgs has Go module support.

You can even use Nix's recursive, lazy nature to declare dependencies between libraries, too, in a way that's just as lightweight as go mod.
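
For reference, nixpkgs' Go module support is the buildGoModule helper; a heavily hedged sketch (names are placeholders, and the vendor hash has to be filled in with the value Nix reports on first build):

    pkgs.buildGoModule {
      pname = "myservice";                 # placeholder
      version = "0.1.0";
      src = ./.;
      vendorSha256 = pkgs.lib.fakeSha256;  # replace with the real vendor hash
    }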


I agree that GOPATH makes integration with other package managers much easier.

However, this is a lost cause at this point. Go is removing GOPATH support in the next release (Go 1.17). See: https://blog.golang.org/go116-module-changes


That's a real shame!

Missing: `go mod download -json`, for when you want to print the checksums (e.g. when adding an external repo for Bazel).
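
For reference, a rough sketch of the output shape (module path and directory are made up; the h1: sums are elided):

    $ go mod download -json
    {
        "Path": "example.com/somelib",
        "Version": "v1.4.1",
        "Dir": "/home/user/go/pkg/mod/example.com/somelib@v1.4.1",
        "Sum": "h1:<elided>",
        "GoModSum": "h1:<elided>"
    }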


