I see a lot of comments in here claiming that the v2+ rule exists to allow importing multiple major versions of a module. That's not it at all. As the blog post (which was easily Googled by me and explains things quite well IMO) states: “If an old package and a new package have the same import path, the new package must be backwards compatible with the old package.”
They’re trying not to break compatibility until a downstream developer explicitly opts in to that breakage. The simplest way to do that is to give the module a new name. And the simplest way to do that is append the major version to the name.
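To illustrate (with a hypothetical module path, example.com/mylib), the "new name" is literally the old import path with the major version appended:

    import "example.com/mylib"    // v1: existing consumers keep compiling unchanged
    import "example.com/mylib/v2" // v2: consumers opt in to the breakage explicitly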
That's weird. How, then, do other languages manage to solve importing packages/modules cleanly without requiring that the name of the package include the version number, while still allowing developers to specify exactly which version of the module they want to use?
Surely someone else has solved this problem before, like Rust's Cargo, or Java's Maven, or JS's NPM, or Python's PIP, Ruby's Gems, or...
Unlike in every single one of those other languages, this Golang decision means that you can't pin to a specific minor version, or a specific patch version (in the sense of MAJOR.MINOR.PATCH).
It's very in-character for the Golang team though: they value simplicity-for-themselves no matter what the complexity-for-you cost is.
A lot of languages still haven't. A problem here is that this can't be solved by the package manager alone but needs support in the module loader too (often built into the language).
Python's PIP is getting a proper dependency solver, but there can still only be one package with a given name in an environment. So if package A needs numpy>=2 and package B needs numpy<2 there is no solution.
If you release a new major version of your package and you expect this will be a problem for people, you have to use a different package name (e.g. beautifulsoup4, jinja2). That is if the name for the next version isn't getting squatted on pypi.org.
That's a fair question and something I should have clarified in my comment (though I have a feeling it is likely addressed in the blog post I linked to).
It gets into the deeper motivation behind not breaking backwards compatibility within the same-named module. There are two bad extremes here:
1. Never update dependency versions for fear of breakage. This leaves you open to security vulnerabilities (that are easy to exploit because they are often thoroughly documented in the fixed version's changelog / code). And/or you're stuck with other bugs / missing features the upstream maintainer has already addressed.
2. Always just updating all deps to latest and hoping things don't break, maybe running the test suite and doing a "smoke test" build and run if you're lucky. Often a user becomes the first to find a broken corner case that you didn't think to check but they rely on.
The approach outlined by Rich Hickey in that Spec-ulation talk I linked to allows you to be (relatively) confident that a package with the same name will always be backwards-compatible and you can track the latest release and find a happy middle ground between those two extremes.
Go's approach is one of the few (only?) first-class implementations of this idea in a language's package system (in Clojure, perhaps ironically, this is merely a suggested convention). The Go modules system has its fair share of confusing idiosyncrasies, but this is one of my favorite features of it and I hope they stick to their guns.
It seems like this approach (including the major version number in the package name) has been practiced at the distro level for a long time. E.g. SDL 1.2 vs 2.0 have distinct package names on every distro I'm aware of.
It's short-sighted to think that Go package management is "simple"; a lot of thought and consideration was put into it, along with improvements over other languages (even recent ones like Rust).
"A lot of thought and consideration" was put into the ten previous approaches they've attempted to solve this problem.
I'm starting to think that the reason they've tried so hard to avoid solving interesting problems in the language is that every time they've tried they've made something worse than every other alternative that existed in the problem space.
Other languages don't allow you to import multiple versions of a package. Allowing this opens a can of worms, mostly to do with global state. The two versions of the module can still be competing for the same global resources, say lock files, or registering themselves somewhere, or just standard library stuff like the flag package. Unless developers actually test all their major versions running together, you are just crossing your fingers and hoping. It was the same problems we had with vendoring, and one of the reasons the 'libraries should not vendor' recommendation is made.
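To make that concrete, here's a minimal sketch (the module paths in the comments are hypothetical) of the collision you get when two major versions both register the same global flag:

    package main

    import "flag"

    // Imagine example.com/lib (v1) registers a global flag at startup:
    func registerV1() { flag.Bool("verbose", false, "verbose output (v1)") }

    // ...and example.com/lib/v2 registers the very same name:
    func registerV2() { flag.Bool("verbose", false, "verbose output (v2)") }

    func main() {
        registerV1()
        registerV2() // panics: "flag redefined: verbose"
    }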
You can shade a dependency, which can, with the requisite budget of frustration, allow for importing multiple versions of a library in the same program.
So why can't the module developer choose to change the url if they want to introduce breaking changes?
Why is this built into the module system and triggered by a v2? I don't think there is a compelling reason to do so, and there are several compelling reasons not to do so.
I think I prefer the status quo where there is very strong pressure to remain backwards compatible if your module is used by a few people. This leads to far less churn in the ecosystem and more stable modules.
The article assumes that a major version number is used only for breaking changes, but many software packages use major versions for major feature releases (as indeed Go 2 seems planned to be, if/when it's released). I'm not clear why they should be forced to adopt urls ending in /v2 as well.
I guess you are getting into Go design philosophy. These questions are similar to "why does Go make unused variables/imports a hard compiler error instead of letting a linter just issue a warning?" The answers to these are unfortunately unsatisfactory, no matter how logical.
The reasonable thing is to use a language that aligns with your philosophy.
I've used Go for almost a decade and am pretty happy with it, including the rule you mention. It's fine to disagree with the decisions made; the disagreement may or may not be heeded, but if no one ever disagreed, the language would be poorer for it IMO.
A module developer _can_ change the URL if they want to introduce a breaking change. Absolutely no problem here. But they do not _have_ to: adding the version as v2, v3, ... to the module name works too. Nobody is "forced to adopt urls ending in /v2". Go modules work pretty well and are very flexible, but they seem to differ too much from what people are used to, and nobody really seems to read all the available documentation and the descriptions of why it works that way and not how language X does it.
As I understand it, importers are forced to use urls ending in /v2 for imports, and to find out which major version a project is on when they import it; please do correct me if wrong.
I'd rather this was a choice made by package maintainers individually rather than the go tooling. Most packages simply don't need that as they strive to be backwards compatible and never introduce large breaking changes. Should they always be relegated to versions like 1.9343.234234, because the go tool requires it?
Honestly, yes. According to semver, a major version change is for when you make breaking api changes. If a project is backwards compatible, it wouldn’t need to increment the major version.
I think this is a reasonable enough approach. You see it in Debian for dependencies so I'm okay with it for Go. It outlines clearly what your dependencies are.
Re: "They need at the very least a section of the documentation that lays it out clearly and in layman's terms."
As a Go outsider who's trying to get his feet dipped in the lake, trying to use modules has felt... uneasy? I tried to look up the official resources on the matter and found the official blog posts to be lacking concrete and concise examples on the usage; documentation feels like it's written for people who work on the project and already digested the unit tests as examples.
As for the case of v2, similarly messy and undocumented. I also appreciate author's call for action on warning messages in tooling.
Wasn't the unified tooling supposed to be a selling point for Go? It just doesn't feel good to use.
I have found this to be almost universal when it comes to reading / writing documentation, not just Google.
I have come to the understanding that to write good documentation, you should write it as soon as you have learned the material, or at least try to explain it in a way similar to how you learned it (which is not easy at all). I feel the issue stems from abstraction. Once you become somewhat of an expert in a topic, or develop a deeper understanding of it, you automatically abstract away the information that led you to that understanding in the first place.
Otherwise you end up reading or writing documentation with a bunch of assumptions for knowledge and understanding, which are not only not stated but may not be available to the person trying to get to grips with the technology. A bit of a digression from the post, but I don't feel this is just a Google thing.
My point is that it's possible to do better, and we shouldn't settle for "that's the universal status quo, nothing can be done about it". Something _can_ be done. And we _should_ be demanding more, especially from tech giants like Google.
I absolutely agree. There is no reason to settle for anything less. Google certainly has the resources to fill the gap; recognising it would be the first step. Looking at the Go documentation, it seems they have feedback mechanisms for the features of Go, but no means of providing feedback on the documentation itself. Rust, however, has both GitHub contributions and a Discord channel specifically for documentation discussions. I would assume this contributes to the lack of satisfactory documentation for Go.
> Google certainly has the resources to fill the gap
Quite possibly they don't have the resources.
The eng culture today is dominated by visible impact. Documentation is great for everyone, every Googler knows that, but its impact just cannot be measured.
I wrote some of the most spectacular docs in my previous Google teams. Everyone loved them when they saw them. And no one mentioned them in any formal setting. I know the measurement rules well enough that I didn't bother wasting my time promoting them.
For me, I care about the feelings of the users so much that I personally feel rewarded anyway, but for Google as a whole there can never be enough resources for documentation, by the design of the engineering system.
Lots of very relevant details aren't in the Android developer documentation, but are instead scattered around Stack Overflow, G+ (when it existed), Twitter, Medium, or developers' personal blogs.
Apparently the team keeps forgetting that Android has its own documentation website.
People who write code know it too well to write good documentation. You need someone else to come in and write it. This is expensive, and many/most companies decide not to hire that person, so you get documentation written by people who know the code too well: they skip many parts as obvious that are not obvious, while going into great detail about esoteric parts that are only rarely used.
I documented an API a couple years back and then got to see questions from some engineers at another company who were using the tool.
I did a good job of documenting how to call each function, but they were actually struggling with how dynamic linking worked in C. At the time, I was surprised and questioned their ability to problem solve.
Looking back now, it's common for engineers to jump between skills and work in unfamiliar places. Adding a simple Makefile example would have helped them immensely, and may have helped others as well.
I disagree that it needs to be someone else, but you need some empathy, and the ability to observe your users struggling to understand how to improve your documentation. You can't write it for yourself and expect it to be helpful to everyone.
> People who write code know it too well to write good documentation.
I doubt this.
Like Einstein said, if you understand something well enough, you will have no problem breaking it down into simple, easy-to-digest information.
The idea that someone knows something so well that they are unable to write good docs misses the crux of the problem: the writer failed to understand their audience.
And when someone is educated to be conscious of their audience, I see no reason why the one who knows the system best would be in any way hindered by their knowledge.
Erlang's creator Joe Armstrong once said it like this (quoting from memory) "There's a moment when you go from not getting it to getting it, and you have about 24 hours when you remember both before and after, and that's when you have to write things down. After that you won't remember how it was not getting it"
Thanks for sharing this with me. I didn't know this was a widely known phenomenon. I'll take a look at the link to help avoid having these blind spots myself, thanks again!
I've seen projects with much worse documentation than Go. And I think it's unfair to blame the quality of the documentation on godoc, which is actually a pretty powerful documentation tool; the fact that it's included with the language is a great advantage. As with other aspects of Go, they may have taken the "keep it simple" principle a bit too far here too (e.g. godoc could detect and highlight mentions of a method's parameter names in a doc comment), but what's there is very solid. Of course, as with any documentation tool, you have to actually use it, by writing good doc comments (and examples, additional documentation files, etc.), to make it useful, but that's not the tool's fault...
Yeah, OCaml's docs are FAR worse as they're often just type signatures. I'm also not a fan of Oracle's Java docs as they do explain what each method does, but often in a rather opaque way. Where Elixir and Kotlin get things right, is they tend to also give quick examples of how one might use any given method.
A big problem with this documentation is that it's not with the rest of the docs and isn't found in search on golang.org.
The docs provide enough to make modules work, but they are far from excellent.
These docs also highlight things about how the Go team thinks. For example, if a project is versioned at v2 or later, they recommend incrementing the major version when adopting modules. Instead of modules being support tooling for the app, adopting them is designed and thought of as being as significant as a major version change.
Arguably another feature that could use some shouting from the rooftops. I see it fairly rarely from public packages, and it's something a lot of public packages really ought to have. Little example snippets for specific functions are great, but round-trip, top-to-bottom examples of using your code base are very useful.
go doc is one of my favorite programming tools. You can get information about any function from the command-line, without having to open a browser or manually grep for what you're looking for. How does strings.Index work? "go doc strings.Index"
    func Index(s, substr string) int
        Index returns the index of the first instance of substr in s, or -1 if
        substr is not present in s.
It even tells you what import statement to use to get the version of the code that you're reading the documentation about. It isn't perfect, but it's my favorite documentation tool and would be my number one reason against switching to another language. (For example, I would kill for this in Typescript.)
Last time I checked, it was not possible to use godoc to render the docs to static files; it just started a local web server. That made it super hard to publish docs internally.
I think Gophers borrowed the idea of generating docs from header/source comments from Java. Java itself spec'd it in 1996. Where Gosling and friends got the idea, I don't know. It may have precedents in Sun's other languages or possibly an earlier one.
Some relatively minor differences:
In Go, documentation is externalized iff top-level code (a function, type, var definition or declaration) immediately follows a comment.
In Java, comments have two forms; the form opening with two asterisks is externalized.
In Go, comments (IIRC) support some very basic text styling of the generated doc.
In Java, externalized comments can use a basic set of HTML-like markup, including comment-level hyperlinks to the javadocs of referenced elements.
Having used both languages rather extensively: Go's approach lends itself to CLI usage. Java provides richer markup and hyperlinks via javadoc and is much better for someone who wants to explore the API via documentation.
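For example, the Go convention in practice looks like this (a tiny sketch; the package and function names are made up):

    // Package greet provides toy greeting helpers.
    package greet

    // Hello returns a friendly greeting for name. godoc externalizes this
    // comment because it immediately precedes a top-level declaration.
    func Hello(name string) string {
        return "Hello, " + name + "!"
    }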
On page 309 the `Metaclass Protocol` is defined. This class has a property “comment” [with value semantic of] ‘commentString’. So possibly they got it from Smalltalk. Don’t know.
It's not only that the documentation is quite bad. It's also the rest of go.
Here's what it spews when you specify a dependency incorrectly.
Dependency:

    github.com/robfig/cron v3.0.1

go's handling of this:

    require github.com/robfig/cron: version "v3.0.1" invalid:
    module contains a go.mod file, so major version must be
    compatible: should be v0 or v1, not v3
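(For reference, the fix the error is gesturing at, as a sketch: once a project tags v2.0.0 or later, its module path must carry the major version, so the require line and the imports would presumably become:

    require github.com/robfig/cron/v3 v3.0.1

    import "github.com/robfig/cron/v3"

)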
I've come to judge a language's quality of documentation almost solely on its documentation around local and module imports. If they don't provide clear documentation on that subject, which is IMO paramount to getting past simple scripts, then the rest of their higher-level documentation is probably lacking too. I've been upgrading some personal Go projects to a more modern folder layout (pkg, internal, etc. vs. a src folder) and I really found the documentation lacking. Rust, for example, is so-so on imports, but I've found far better external examples for imports in Rust than for Go.
One of the takeaways from this article was, "there needs to be more documentation", and I think I can speak to that:
First, thanks for the feedback. We also want there to be a loooot more documentation, of all kinds.
To that end, several folks on the Go team and many community members have been working on Go module documentation. We have published several blog posts, rewritten "How to write Go code" https://golang.org/doc/code.html, have been writing loads of reference material (follow in https://github.com/golang/go/issues/33637), have several tutorials on the way, are thinking and talking about a "cheatsheet", are building more tooling, and more.
If you have ideas for how to improve modules, module documentation, or just want to chat about modules, please feel free to post ideas at github.com/golang/go/issues or come chat at gophers.slack.com#modules.
I noticed that a lot of the documentation seems to combine "how this works" with "why this works the way it does"; the why is great when you're interested in diving deeper, but it's frustrating when all you're interested in is the how.
For example, the linked blog post spends a lot of time talking about diamond dependencies and other package managers. This is just noise that gets in the way when you're simply trying to figure out how it works.
If you did want to combine both in a single reference doc, I would move the why out into separate, skippable sections.
When I first was learning Go, I was really impressed by how easy it was to understand the language just by reading the spec. I've found the opposite to be true for Go modules. (Which also, as near as I can tell, doesn't have a spec, but just various scattered blog posts, for various different iterations of the idea.)
Googlers tend to explain the "why" of their decisions.
I'd suggest just taking the advice and deciding the best course of action. The "why" part is generally only meaningful to the decision maker and not something people care about or have enough context to appreciate.
That way a lot of potential misunderstanding is avoided.
As an everyday user of Go perplexed by this making it into the mainline, I'd like to second the request to make this feature optional. More documentation would be nice, but I'd prefer the default to change.
The assumptions in that v2 go modules article about the meaning of major semantic versions do not jibe with the way the majority of software in use today uses version numbers. Version numbers are most often used to denote new features, which may or may not include breaking changes large or small, and small breaking changes are tolerated all the time, often in minor versions. This assertion in particular seems wrong to me for most software in use today:
> By definition, a new major version of a package is not backwards compatible with the previous version.
Semver is very clear on what a minor vs a major change means.
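For a released version 1.2.3, the rules are mechanical:

    1.2.3 -> 2.0.0  backwards-incompatible API change
    1.2.3 -> 1.3.0  backwards-compatible functionality added
    1.2.3 -> 1.2.4  backwards-compatible bug fix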
> the majority of software in use today uses version numbers - they are most often used to denote new features, which may or may not have breaking changes large or small, and small breaking changes are tolerated all the time, often in minor versions
We're getting into opinion here. Let's be clear: semver very strictly, objectively disagrees with this approach. In general, this approach of "what's a few breaking changes in a minor release amongst friends" leads to terrible user experiences.
Go modules takes the cost of churn, which in some languages gets externalized to all users, and places it on the module author instead. That is far more scalable and results in much happier users, even though module authors sometimes have to be more careful or do more work.
Thanks for working on the docs and engaging here, I know it can sometimes be a thankless task.
I don't think it's a matter of opinion that the vast majority of software in common use does not use strict semantic versioning, most likely including the web browser and operating system you are using to read this comment, popular packages like kubernetes, and the Go project itself in the mooted 2.0 with no breaking changes. It is highly desirable to avoid significant breakage, even to the point of ignoring strict semver and avoiding it across major version changes! So I'm not arguing for encouraging packages to break, but rather the reverse, I prefer the status quo pre go mod where packages are assumed not to break importers, though sometimes small breakage happens and/or is acceptable.
Most packages use a weaker version of semver than the one you describe, which is still useful, so I'm not clear why the go tools have to impose the very strong version which is not commonly used. The difficulties introduced seem to outweigh any benefit to me.
Because the versions of the web browser and the kernel more or less don't matter: they care way more about backwards compatibility than the "vast majority of software" does.
The kernel doesn't break user space. Web browsers' APIs generally remain backwards compatible (how long did it take to remove Flash? and you can still install it if you want!)
You mention "major semantic versions", but semver itself explicitly says that the point of major version changes is backwards incompatibility.
> This assertion in particular seems wrong to me for most software in use today: By definition, a new major version of a package is not backwards compatible with the previous version.
It is true for any package manager using semver, cargo, npm, pub, etc.
In practice, that is not true for many of the packages on those managers or even for the mooted Go 2.0. Major versions are often used for major feature releases or changes of direction, which may or may not break importers, minor versions sometimes break things. There is more chance of breakage in a major version but it's not a given. And that's ok.
The meaning of versions is a negotiation between producer and consumer, not a fixed rigid set of rules as strict semver would have you believe. In practice the definitions are more fluid, something like major: big changes, may be breakage, minor: minor changes, should be no or minimal breakage, patch: small fix, no breakage.
Putting versions in the import path is not something any of the popular package managers do AFAIK, and they certainly don't force you to do that, nor do they force you to use strict semantic versioning.
> There is more chance of breakage in a major version but it's not a given. And that's ok.
I think you are looking at this from the perspective of the package consumer, but versioning is controlled by the package maintainer, and it's their notion of "breaking" that determines the versioning story.
Yes, many users of a package will in practice not be broken by a "breaking change". I could, for example, depend on your package but not actually call a single function in it. You could do whatever you want to the package without breaking me.
But the package maintainer does not have awareness of all of the actual users of their code. So to them, a "breaking change" means "could this change break some users". If the answer is yes, it is a breaking change.
> not a fixed rigid set of rules as strict semver would have you believe.
Semver is a guideline, so it is naturally somewhat idealistic. Yes, there are certainly edge cases where even a trivial change could technically break some users. (For example, users often inadvertently rely on the timing characteristics of APIs, which means any performance change, for better or worse, could break them.)
But, in general, if you're a package maintainer, your definition of "breaking change" means "is it possible that there exists a reasonable user of my API that will be broken by this?", not "is there actually some real code that is broken?" Package maintainers sort of live as if their code is being called by the quantum superposition of all possible API users and have to evolve their APIs accordingly. Package consumers live in a fully-collapsed wave function where they only care about their specific concrete code and the specific concrete package they use.
> Putting versions in the import path is not something any of the popular package managers do AFAIK, and they certainly don't force you to do that,
That's correct. Go is the odd one out.
> nor do they force you to use strict semantic versioning.
The package manager itself doesn't necessarily care whether package maintainers strictly follow semantic versioning. The version resolution algorithm usually presumes packages do. But if they don't, the package manager doesn't care.
Instead, this is a relationship between package consumers and maintainers. If a consumer assumes the package maintainer follows semver but the maintainer does not, the consumers are gonna have a bad time when they accidentally get a version of the package that breaks them. This is acutely painful when this happens deep inside some transitive dependency where none of the humans involved are aware of each other.
When consumers have a bad time, they tend to let package maintainers know, so there is fairly strong social pressure to version your packages in a way that lets consumers reliably know what kinds of version ranges are safe to use. Semver just enshrines a consensus agreement on how to do that.
> The package manager itself doesn't necessarily care whether package maintainers strictly follow semantic versioning. The version resolution algorithm usually presumes packages do. But if they don't, the package manager doesn't care.
This was the core point I was trying to make: other package managers correctly leave this negotiation over how much breakage is acceptable to producers and consumers. They do not impose strict semver but a looser one. Importers choose how strictly they track changes in their requirements file (go.mod or similar), while producers choose how strict they are going to be with their semver (strict semver is almost never used in its pure form, for good reasons; versions communicate more than breaking changes).
The result of Go imposing strict semver on both parties will IMO be far more breaking changes, because it explicitly encourages them and forces importers to always choose a major version. It's a change of culture from previous Go releases and will have a significant impact on behaviour.
We'll also end up with a bifurcated ecosystem: producers who don't like the change staying on v1, and others breaking their packages all the time, leaving frustrated consumers behind on older versions without bug fixes.
> Some projects like GORM sidestep the issue entirely by tagging their 2.0 releases as far-flung 1.x releases. That's something of a solution, but smells terrible.
Amusingly, the Google protobuf team did a similar thing for the Go protobuf bindings. The new, backwards-incompatible `protobuf` module started with version 1.20, while the old module is at 1.4.
If you've followed this, you know I've omitted one detail: they actually changed the name of the module as well, from `github.com/golang/protobuf` to `google.golang.org/protobuf`. Because, of course, Golang module names are actually URLs, not identifiers like in every other package manager out there.
Kubernetes, arguably the largest open source project using go, tags 1.17 as 0.1.17.
I don't know how anyone in this thread wants to continue arguing for the merits of the current v2 scheme. It clearly did not catch on in the community, and the status quo of hoping a go get will smooth out all the rough edges of your dependencies is terrible, almost as terrible as dep was (you certainly couldn't use consistent versions of the kubernetes components without hardcoded override blocks).
> What's the difference between `google.golang.org/protobuf` and `org.golang.google.protobuf`?
There isn't one, because that's also a (mangled) URL, namely "http://protobuf.google.golang.org/", and should be rejected for the same reasons. An identifier would be "protobuf"; note the lack of any reference to a DNS domain.
Do that and soon you're going to have to introduce namespaces, and you're back to something that looks like urls.
People use these longer import identifiers because they avoid polluting the global namespace, which should be reserved for very important packages (in the case of Go, the standard library).
> Do that and soon you're going to have to introduce namespaces
Everyone says this but there is strong evidence from npm and other package ecosystems that you can go surprisingly far with a single flat global namespace.
I think there is a paranoia among software developers about potential name collisions. But in reality, name collisions are rare and easily avoided. There are 308,915,776 unique six-letter identifiers. Go to eight and you have 208,827,064,576 names to choose from.
> There are 308,915,776 unique six-letter identifiers. Go to eight and you have 208,827,064,576 names to choose from.
To be scrupulously fair, no one wants to name their package "udjwhc"; the problem is more "protobuf" (by google) vs "protobuf" (by someone other than google), both implementing similar but subtly different interfaces for doing the thing they're named after.
Of course, npm did eventually introduce a second level of namespacing, in the form of scopes (such as @ansible or @vue). Unpaid package hosting is still limited to one level.
The problem with that is that you're essentially using a hierarchy to attach some piece of metadata to each package. The question then is what piece of metadata is both so valuable to be worth enshrining in the name and yet so unchanging that if its value changes, you're OK with every user of that package having to migrate to the new identifier.
I have yet to see any particular bit of metadata that fits into that set. Using "owner" like you do in your example here is a common choice, but also very frequently changes. If you look around, you'll find lots of examples where a package moved to a new owner and caused all sorts of problems because now all the old name references no longer work.
You could try to pick some essentially descriptive property of the package itself (maybe, say, "Net" if it has to do with networking) like CPAN does, but it's not clear that doing so adds any real value beyond satisfying our human desire to put things in neat little cubbyholes.
I do have some sympathy with your views on this. Package identifiers don't have to live in a hierarchy or include origin/owner. That they do in so many systems indicates people find this information useful. I do disagree that owner changes frequently - for the vast majority of packages that's not the case.
Personally I find knowing the owner sometimes useful, and quite like the use of urls in go imports as it also lets me look up the source. I've never found it a huge problem (and I have moved packages from one place to another a few times).
In the go ecosystem, packages are imported per file. Go is designed for large systems to evolve over time. You need to be able to refer to v1 and v2 at the same time.
Thus the "/v2" requirement.
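For instance (hypothetical module example.com/lib), a single file can refer to both majors under distinct identifiers:

    import (
        lib   "example.com/lib"    // major version 1, unversioned path
        libv2 "example.com/lib/v2" // major version 2, /v2 path
    )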
Yes, it is different. Stop complaining, do it, and it just works. Here is the tutorial.
1. Modify the module path in go.mod to include the "/v2" element.
2. Commit.
3. Tag as "v2.0.0"
4. Push tag.
I'm not seeing what is too confusing about releasing.
To consume it, import the new path in the packages where you want it. Done.
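Concretely, for a hypothetical module example.com/mylib, the whole release is one line in go.mod:

    module example.com/mylib/v2

    go 1.14

followed by the commit and tag:

    git commit -am "mylib v2"
    git tag v2.0.0
    git push origin v2.0.0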
Being able to import multiple versions without conflict has been extremely helpful when I've needed new features in one version but didn't want to update all the other call sites at the same time.
I really don't get why people don't like it, other than that it is different from what they expect elsewhere.
There's no guarantee of backwards compatibility without limiting forward functionality. The trade-off can't be escaped.
If you can make do without any persistent state, or global or externally visible resources (network connections, configuration files, etc.), then you're OK. But that's a limitation.
Modules that rely on global state for anything other than memory pooling or what have you should be avoided. It’s a lot more testable and clean to return a high level data structure that contains any of that state you would have held before globally in the module and have that be the “context” or just the parent data structure to any others that are spawned.
Global state makes things like parallel unit tests that all use another module impossible, as well as changing something like "MaxConcurrency" in a way that is synchronized across goroutines that might already be calling into the third-party module.
Of course. I'm not advocating global state. It's just that most real systems have things like file systems, config files, databases, listening network sockets, etc. These are inherently non-local.
As I pointed out elsewhere in this thread, I think this is an inaccurate statement: numerous other mainstream languages have ecosystems with libraries that provide backwards compatibility, because they have real package management systems that allow you to target specific module versions.
Speaking from experience working on large golang projects, it's just a baseless claim repeated by people over and over. In practice, golang is worse than languages and systems like Java and C#.
> For what it's worth, the official Go Blog tried to clear the situation up some, and posted a write-up about it.
> blog.golang.org/v2-go-modules
> Even as a seasoned developer, I found this write-up somewhat impenetrable. For how important and unusual of a requirement this is, the write-up is not accessible enough.
Somewhere in the comments someone mentioned that "the actual info" on modules is here: https://github.com/golang/go/wiki/Modules which is slightly better but for some reason isn't in the official docs, and also basically brushes aside things like actually specifying v2+ modules in go.mod.
And as the article shows with links to examples, people forget to specify v2+ modules (yes, even Google forgets this), or just don't do the whole v2+ thing.
I'm still really not clear on why the Go team thought this was a good idea.
It makes no sense to me to try to impose this rule on the ecosystem - the costs (confusion, multiple urls, multiple ways of doing it) vastly outweigh the benefits (multiple versions in use at once), and if a particular project wishes to use this rule, they could do so, without imposing it on the thousands of packages which might want to use higher numbers to indicate large feature releases or are already using higher numbers.
In theory a larger number in semantic versioning indicates breaking changes; in practice small breaking changes are made all the time. It's a matter of degree and of the context of a given project, and people also use version numbers for marketing or to indicate lots of new features. A good case in point is Go 2.0, which may well be backwards compatible with Go 1.0. That makes this assumption in their blog false:
> By definition, a new major version of a package is not backwards compatible with the previous version.[0]
That's not the only, or even the main, reason projects use higher version numbers.
I imagine lots of people will keep their version forever in the 1.x range to work around this rule. I too believe Go should drop this requirement before it is too late.
There is a difference between software (an executable) and a library. Semver is overrated, but at least it is common and well understood, with clear semantics. And no, Go 2 will probably just be Go v1.23.
I knew about this rule and I still got bitten by it last week when I was trying to upgrade a package. It's one of the many things I hate about go modules.
As I understand it the reason for this design decision was to allow a project to use multiple versions of the same dependency at once. To me this seems like a pretty rare edge case. They burdened the 99% of users with weird V2 syntax so that 1% of users could handle this edge case (which could have been handled by other means).
It's not really an edge case, it's sort of to the core of how modules work, and allows a whole slew of issues to be elided. See https://research.swtch.com/vgo-import for more background and examples.
> They burdened the 99% of users with weird V2 syntax so that 1% of users could handle this edge case (which could have been handled by other means).
Really the only burden lands on package authors who are making breaking changes. And really, the path provided is so much easier than actually making breaking changes for your users. Everyone wins.
> Really the only burden lands on package authors who are making breaking changes. And really, the path provided is so much easier than actually making breaking changes for your users. Everyone wins.
The burden falls on every module developer who wants to use a version number above 1.x. That is not congruent with "developers who make breaking changes", as you seem to assume.
There's an assumption here that v2 means breaking changes, one that not even Go 2 itself may adhere to (as it may well be more of a 'new features' version increment than a 'breaking changes' one).
> The burden falls on every module developer who wants to use a version number above 1.x.
And this is fine, as it gives module developers an opportunity to make breaking changes, which they have wanted for quite some time. It makes things much clearer for the module user, as opposed to Java, where one does not know which version will blow up the code: was it 2.11, or maybe 2.13.2, etc.
> There's an assumption here about v2 meaning breaking changes that not even Go 2 itself may adhere to
If anything, Go is trying to be even more conservative, in that they are avoiding breaking changes in 2.0, as opposed to rashly making breaking changes in 1.x. I see no problem with that.
Linking multiple versions is rare until you allow it. NodeJS uses a quadratic linking model. It's why Hello World examples frequently require downloading thousands of dependencies from the Internet. So it sounds like Go is opening Pandora's Box. There will be anarchy.
A consequence of the complexity is that some devs stay below v2 forever because it’s easier.
Backwards incompatible changes pile up in v0.x’s.
It’s then really difficult for consumers to install two versions of the same package, which can happen with diamond dependencies.
Go has a bigger tooling problem. Build systems are hard, and Go has decided it doesn't want to be hard so it's ignored anything other than the "happy path". I wish other Go dependency tools still existed.
I'm a huge fan of Go, and not a fan of dependencies at all, so I tend to vendor everything.
Go mod just looks like a total mess from the outside. To be honest I (like most gophers I've talked to) never really understood why dep wasn't adopted as the official solution for this.
I'm resisting converting my projects to modules until there's a clear and definite advantage to it. So far, none.
I tried dep for a project of mine and could never even get it to the point of doing anything useful. Conversely go mod took about three minutes to migrate to and was super straightforward.
I am very happy that they did not adopt dep, it was in no way ready (and I don't think it was going to significantly improve given more time).
True, there were problems with it, it wasn't perfect by any means.
I guess the main complaint at the time was that it appeared to be "not invented here" syndrome. The community was converging on dep, and then the Go team decided to do something different, for reasons that weren't completely clear. It kinda rammed home the reality that Go is not a community project, it's a Google product controlled by a small team. That definitely has benefits and good points, but it doesn't match everyone's expectations.
> I'm resisting converting my projects to modules until there's a clear and definite advantage to it. So far, none.
Go modules are not going to go away so what benefits will waiting as long as possible give you?
I'm not a big fan but it's fine in day to day life and the most common issue for me is that some indirect dependency is having issues which is usually solved with some "replace" lines in the go mod file. Not ideal but also not too annoying.
Author of the post here. Really neat to see something I posted last night on HN the very next morning.
With a night to sleep on it, I think requiring `/v0` and `/v1` would have been the better option, that way you'd hit the issue right away rather than years into your project. Teach things early. I've updated the post with a note of this idea.
I think that's likely not something they could add at this point however.
I don't even mean to be critical of the v2-ing itself, I just think it's entirely non-obvious.
What I don't understand is why they chose to put the major version as part of the path part of the uri (/), as opposed to some other separator, e.g. fragment (#). Especially since it's optional. You can always convert "example.com/foo#v2" into the "/v2" if needed, but if you see "foo/v8" is that a repo "foo" version major v8, a subfolder of "foo" named "v8" or a repo "v8" in the project "foo"?
It's traditional, at least with web browsers, not to send the fragment to the server. A query param would work, but implies your routes don't change much between major versions.
Import paths are used to figure out where to fetch the source code from. To simplify a bit, if you see `example.com/foo/bar`, it works because there's an appropriate meta tag at https://example.com/foo/bar?go-get=1 . The import paths are pretty directly used as URLs.
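e.g., the page served at that URL carries a go-import meta tag mapping the import path prefix to a repository (repo URL here hypothetical):

    <meta name="go-import" content="example.com/foo git https://example.com/git/foo">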
"I have seen many very large projects including Google owned projects get it wrong."
As an ex-Googler, I can tell you that the statement above applies more broadly across _many_ developer-facing Google products (most notably for me, Android frameworks and libraries). Google's own apps don't follow the guidance and recommendations we gave to external Android devs.
I wonder if there is something cultural about Google that prevents it from coordinating changes and improvements even within the org itself.
I don't understand what's so hard about making it easy, if one just uses source repo tags (which in go module land you should be using):
1) your go.mod should include the version
2) the module itself always assumes it's the version in its go.mod file
3) to release a version you just tag it based on the version in the go.mod file (with later semantic versions being considered the best and what go will use by default)
So module authors don't have to do anything special besides including the version in the go.mod file (and keeping it up to date) and tagging releases appropriately, in a way that keeps to Go's compatibility promise.
4) when you import a module, you specify what version you want (with a default assumption of v1), so your go.mod would include either the module or module/v2...
5) your imports in actual .go files would not have to include the /v2 as they would know based on the go.mod file.
You as the end user of the module would follow the v0/v1 tags for as long as they exist, but would have to modify your go.mod to use v2 if you want to switch.
The point of having separate imports for separate versions is to allow for use of (for example) both v1 and v2 of a module within the same codebase, if required. Your idea would force a single version, which means that would no longer be possible.
OK, reasonable, though it would seem that one could do both: be easy for those that don't want to include multiple versions, and accept the extra complexity for those that need them.
> Modules must be semantically versioned according to semver, usually in the form v(major).(minor).(patch), such as v0.1.0, v1.2.3, or v1.5.0-rc.1. The leading v is required. If using Git, tag released commits with their versions
2-3 years ago I was working on a large project in Go with many dependencies. We moved in the span of 12 months from go get, to glide, to dep, and when vgo was announced I realized I was on a pointless treadmill, despite community assurances that the current package manager is "blessed" and will be maintained. I then decided that Go is actually quite unstable and is not yet ready for enterprise projects, and recommend against it despite the fact that I really like and agree with the design philosophy of the language.
It would have been best to have skipped glide and dep. Sorry.
Go the language is stable, and Go modules is stable and is the future. Everything, including new features, is based on Go modules. That isn't changing.
Unlike most ecosystems, just because a problem is obviously important doesn’t mean the Go team will solve it in a hurry. They don’t like to change their minds once they commit (exactly the property you’re looking for!). So Go is a very stable, sometimes a bit irritatingly stable :), core surrounded by continuous experiments. Once rsc or equivalent has blessed something, you can rely on it being that way for a long time. Before that, hedge your bets.
This is why I never use anything but "1" as the first number in my go modules. The second number is the major (breaking changes) version field, and the third number is the minor field (non breaking changes).
These are just the kinds of things you put up with when writing go code and need to work around bad designs of the language and ecosystem.
I think people are too quick to declare v1. All of my modules are (deliberately!) v0 because v1 means "I commit to never breaking this API." Most Go modules should be v0.year.minor, eg v0.20.5 for the fifth release this year. You should only be using v1 if you have a lot of users, a set of core maintainers, etc. etc.
Users of a v0 module know they cannot rely on a stable API, which means they have to carefully read the diff for every single upgrade, saving no work over explicitly picking a major version.
I feel semver combines two separate ideas into the meaning of v1. By labeling a package v1 you're:
- communicating that the package is stable and will be maintained
- communicating that breaking changes will now increment the major version (which says nothing about stability, backported bug fixes, or any maintenance)
I wish they were not combined. Incrementing major versions frequently on early stage projects would make them work better with package managers, but without the expectation of maintenance from the community.
Not incrementing major versions makes it hard for package managers to make sensible decisions.
Tag your version v2.0.1, import it as .../v2 in project - done. You don't even have to read docs or blog posts to understand it, just look at the source code of a new project, notice the ../vN part, see the corresponding tag in repo and do the math.
I don't see any big problem. There is a blog post on the Golang website explaining everything. (My only complaint is that there is no cheatsheet on golang.org for this.) Also, it's actually quite nice that it's possible to keep all maintained, and possibly incompatible, versions on the same branch without the overhead of a third party acting as package curator/distributor.
I wonder if they should've just required v1 or v0 when using Go modules†, so that v2 and above didn't seem like a red-headed step child / surprising / non-obvious by comparison.
The basic idea of having separate import paths for non-compatible versions of a project seems, in fact, really good; an elegant solution to a tricky problem. It certainly seems much nicer than secretly using multiple versions of a package behind your back (especially when you consider that if this happens and a package uses mutable internal state, you could actually get broken behavior).
I guess for me the takeaway is in aggregate, people will be maximally lazy. You need to make the correct thing be the easiest thing.
(†Also release a patch to older versions of Go to just ignore v1/v0, and error for above, so you can have source compatible w/ both module and non-module Go.)
The whole premise that the "communication of this rule has been weak" is rather weak. It has been hammered in throughout the development process of modules, and is part of the official documentation.
If the author has missed the numerous blog posts throughout the development, and hasn't read the official documentation, well, I'm not sure we can fault someone else for that.
Luckily, it appears that the community has picked up the memo somewhere, as there are quite a few v2+ modules around.
It has gotten much better but in our project we got bitten by this. I gave a talk about it at GoSF last year if looking for some Go v2 modules schadenfreude :)
https://www.youtube.com/watch?v=8xAaZDSDWOc
This is where the "technical writer" profession shines. I know companies whose technical writing teams do an excellent job there, but if we leave it to busy coders, this is what we get in the end.
Modules have been in discussion for years. The Modules HowTo has been around from day one, and it explains this very clearly. Modules adoption rolled out over a couple of releases, which gave all the pointers they could.
Any minimally maintained project uses modules already. It is a must for any Go programmer to be skilled at managing modules, because otherwise things just break. And yet it seems some people have lived under rocks.
People don't have time to RTFM, but apparently lots of time to complain and write about things they should have known with minimal due diligence around the tech they use. shrug
> I brought the issue up at my local Go meetup, and no one had ever heard about the rule. They were very skeptical of me.
This resonates with my experience writing Golang (plus Bazel) this year. Anything that isn’t working, you must have just messed up and did something wrong. It can’t be that something is actually broken here.
Even in this comment thread, “stop complaining and just do it.”
This kind of dismissive behaviour has been in the Go community since the beginning, unfortunately. It's the product of Rob Pike's behaviour and his rejection of "complexity". But you can't just dismiss complexity by deeming it irrelevant; it's there.
Ian Lance Taylor is probably the only person on the Go team with a more pragmatic approach to things. The rest is a bunch of deaf people led by the blind, plus a few vocal gatekeepers outside Google who will publicly humiliate you if you stray from the current orthodoxy.
It's absurd that golang didn't take established solutions such as Java's packaging system, and had to re-invent the wheel in a completely suboptimal way.
The problem is that the official solution, as mentioned in the article, seems really bad... you just have to copy the whole project into the `v2` folder, then do the same for a future `v3`, and so on. It kind of goes against normal practices of using version control tools for this kind of thing, where each version would be on a different branch, not in a folder.
Not to mention having duplicate files like this can make finding files in your editor a bit confusing as well.
Though I like how Go solved dependency management and dared to go on a different path than almost every other package manager, this particular solution seems to go a bit too far?!
Copying into a versioned folder is probably how large monorepos work. If so, then it is another example of Google's influence on Go, even though the core team denies it.
I only use branches for this. Copying the codebase into a subdirectory seems like a kluge. It baffles me that they recommend that solution; it seems like too much backward compatibility.
> kind of goes against normal practices of using version control tools for this kind of thing, where each version would be in a different branch, not folder.
I mean, it goes against how git works. SVN does the branch-in-a-folder thing, so it's not anathema to "version control".
That said, as for branch-in-a-folder, "thanks, I hate it".
What a mess, starting with GOPATH. This is far worse than the lack of generics, which has had much more airtime. For all the historical credit Pike and Thompson have, I don't know what they're doing here; it's making them look like buffoons. Maybe they're not involved with this. I've mostly abandoned Go because of this ongoing mess of red flags. Is there an end goal? The "we're watching the community" line sounds like a weak excuse.
Here is the blog post referenced above: https://blog.golang.org/v2-go-modules
For more background on this principle, I recommend Rich Hickey’s Clojure/conj keynote “Spec-ulation” from 2016: http://blog.ezyang.com/2016/12/thoughts-about-spec-ulation-r...