They’re trying not to break compatibility until a downstream developer explicitly opts in to that breakage. The simplest way to do that is to give the module a new name. And the simplest way to do that is append the major version to the name.
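Concretely, in Go modules the major version becomes part of the module name itself. A minimal sketch of what that looks like in a go.mod file (the module path here is hypothetical):

```go
// go.mod for the v2 line of a hypothetical library.
// The "/v2" suffix makes this a different module name than v1,
// so a build can depend on both majors at once without conflict.
module example.com/mylib/v2

go 1.15
```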
Here is the post: https://blog.golang.org/v2-go-modules
For more background on this principle, I recommend Rich Hickey’s Clojure/conj keynote “Spec-ulation” from 2016: http://blog.ezyang.com/2016/12/thoughts-about-spec-ulation-r...
Surely someone else has solved this problem before, like Rust's Cargo, or Java's Maven, or JS's NPM, or Python's PIP, Ruby's Gems, or...
Unlike in every single one of those other languages, this Golang decision means that you can't pin to a specific minor version, or a specific patch version (in the sense of MAJOR.MINOR.PATCH).
It's very in-character for the Golang team though: they value simplicity-for-themselves no matter what the complexity-for-you cost is.
Python's PIP is getting a proper dependency solver, but there can still only be one package with a given name in an environment. So if package A needs numpy>=2 and package B needs numpy<2 there is no solution.
If you release a new major version of your package and you expect this will be a problem for people, you have to use a different package name (e.g. beautifulsoup4, jinja2). That is if the name for the next version isn't getting squatted on pypi.org.
It gets into the deeper motivation behind not breaking backwards compatibility within the same-named module. There are two bad extremes here:
1. Never update dependency versions for fear of breakage. This leaves you open to security vulnerabilities (that are easy to exploit because they are often thoroughly documented in the fixed version's changelog / code). And/or you're stuck with other bugs / missing features the upstream maintainer has already addressed.
2. Always just updating all deps to latest and hoping things don't break, maybe running the test suite and doing a "smoke test" build and run if you're lucky. Often a user becomes the first to find a broken corner case that you didn't think to check but they rely on.
The approach outlined by Rich Hickey in that Spec-ulation talk I linked to allows you to be (relatively) confident that a package with the same name will always be backwards-compatible and you can track the latest release and find a happy middle ground between those two extremes.
Go's approach is one of the few (only?) first-class implementations of this idea in a language's package system (in Clojure, perhaps ironically, this is merely a suggested convention). The Go modules system has its fair share of confusing idiosyncrasies, but this is one of my favorite features of it and I hope they stick to their guns.
Good read: https://research.swtch.com/vgo-principles which answers some of your questions.
I'm starting to think that the reason they've tried so hard to avoid solving interesting problems in the language is that every time they've tried they've made something worse than every other alternative that existed in the problem space.
Yes, it is possible to pin to a minor version.
The version is specified in the `go.mod` file.
Look at this [go.mod example](https://github.com/ukiahsmith/modbin1/blob/main/go.mod).
The upstream `modlib1` library at v4 has tagged versions for v4.0.0 and v4.1.0. The go.mod file pins the version to the _older_ v4.0.0.
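For illustration, that pin looks roughly like this in the consumer's go.mod (a sketch based on the linked example; the go tool will not silently move past the listed version):

```go
module example.com/myapp

go 1.15

// Pins the older v4.0.0 even though v4.1.0 exists upstream.
require github.com/ukiahsmith/modlib1/v4 v4.0.0
```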
Why is this built into the module system and triggered by a v2? I don't think there is a compelling reason to do so, and there are several compelling reasons not to do so.
I think I prefer the status quo where there is very strong pressure to remain backwards compatible if your module is used by a few people. This leads to far less churn in the ecosystem and more stable modules.
The article assumes that a major version number is used only for breaking changes, but many software packages use major version numbers for major feature releases (as indeed Go 2 seems to plan to, when and if it is released). I'm not clear why they should be forced to adopt URLs ending in /v2 as well.
The reasonable thing is to use a language that aligns with your philosophy.
I'd rather this was a choice made by package maintainers individually rather than the go tooling. Most packages simply don't need that as they strive to be backwards compatible and never introduce large breaking changes. Should they always be relegated to versions like 1.9343.234234, because the go tool requires it?
I think this is a reasonable enough approach. You see it in Debian for dependencies so I'm okay with it for Go. It outlines clearly what your dependencies are.
As a Go outsider who's trying to dip his feet in the lake, trying to use modules has felt... uneasy? I tried to look up the official resources on the matter and found the official blog posts lacking in concrete, concise usage examples; the documentation feels like it's written for people who work on the project and have already digested the unit tests as examples.
As for the case of v2, it's similarly messy and undocumented. I also appreciate the author's call to action on warning messages in the tooling.
Wasn't the unified tooling supposed to be a selling point for Go? It just doesn't feel good to use.
I have come to the understanding that to write good documentation, you should write it as soon as you have learned it, or at least try to explain it in a way similar to how you learned it (which is not easy at all). I feel the issue stems from abstraction. Once you become somewhat of an expert in a topic or develop a deeper understanding of it, you automatically abstract away the information that led you to that understanding in the first place.
Otherwise you end up reading or writing documentation with a bunch of assumptions for knowledge and understanding, which are not only not stated but may not be available to the person trying to get to grips with the technology. A bit of a digression from the post, but I don't feel this is just a Google thing.
My point is that it's possible/ we shouldn't settle for "that's the universal status quo, nothing can be done about it". Something _can_ be done. And we _should_ be demanding more - especially so from tech giants like Google.
Quite possibly they don't have the resources.
The eng culture today is dominated by visible impact. Documentation is great for everyone, and every Googler knows that, but its impact just cannot be measured.
I wrote some of the most spectacular docs in my previous Google teams. Everyone loved them when they saw them. And no one mentioned them in any formal scenario. And I know the measurement rules well enough that I didn't bother wasting my time promoting them.
For me, I care about users' feelings so much that I personally feel rewarded, but for Google as a whole there can never be enough resources for documentation, by the design of the engineering system.
Lots of very relevant details aren't in the Android developer documentation; they are instead scattered around Stack Overflow, G+ (when it existed), Twitter, Medium, or developers' personal blogs.
Apparently the team keeps forgetting that Android has its own documentation website.
I did a good job of documenting how to call each function, but they were actually struggling with how dynamic linking worked in C. At the time, I was surprised and questioned their ability to problem solve.
Looking back now, it's common for engineers to jump between skills and work in unfamiliar places. Adding a simple Makefile example would have helped them immensely, and may have helped others as well.
I disagree that it needs to be someone else, but you need some empathy, and the ability to observe your users struggling to understand how to improve your documentation. You can't write it for yourself and expect it to be helpful to everyone.
I doubt this.
As Einstein said, if you understand something well enough, you will have no problem breaking it down into simple, easy-to-digest information.
The idea that someone knows something so well that they are unable to write good docs misses the crux of the problem: the writer here failed to understand his audience.
And when someone is educated to be conscious of their audience, I see no reason why the person who knows the system best would be in any way hindered by his or her knowledge.
The docs provide enough to make modules work, but they are far from excellent.
These docs also highlight how the Go team thinks. For example, if a project is versioned at v2 or later, they recommend incrementing the major version of your code when adopting modules. Instead of modules being mere support tooling for the app, adopting them is treated as being as significant as a major version change.
The real documentation is here: https://golang.org/cmd/go/#hdr-Modules__module_versions__and...
and is identical to the output of `go help modules`
```
func Index(s, substr string) int
    Index returns the index of the first instance of substr in s, or -1 if
    substr is not present in s.
```
I think Gophers borrowed the idea of generating docs from header/source comments from Java. Java itself spec'd it in 1996. Where Gosling and friends got the idea, I don't know. It may have precedents in Sun's other languages, or possibly earlier.
Some relatively minor differences:
In Go, documentation is externalized IFF top level code (function, type, var defs and declarations) immediately follows a comment line.
In Java comments have two forms. The comment form using two asterisks is externalized.
In Go comments (iirc) support some very basic text styling (of the generated doc).
In Java externalized comments can use a basic set of markup ala HTML. This includes comment level hyperlinks to javadocs of referenced elements.
Having used both languages rather extensively, I'd say Go's approach lends itself to CLI usage. Java provides richer markup and hyperlinks via javadoc and is much better for someone who wants to explore the API via documentation.
On page 309 the `Metaclass Protocol` is defined. This class has a property “comment” [with value semantic of] ‘commentString’. So possibly they got it from Smalltalk. Don’t know.
```smalltalk
sum: xValue with: yValue
    "sums two values"
    ^ xValue + yValue
```
Several Lisp variants also have a similar approach.
This product probably deserves its own hn post:
There's the important bit. Code can be wrong, but it gets less out of date than its comments.
Here's what it spews when you specify a dependency incorrectly:
```
require github.com/robfig/cron: version "v3.0.1" invalid:
    module contains a go.mod file, so major version must be
    compatible: should be v0 or v1, not v3
```
I solved it by looking through this issue that comes at the top of Google Search for it: https://github.com/golang/go/issues/35732
I'm one of the co-authors of https://blog.golang.org/v2-go-modules.
One of the takeaways from this article was, "there needs to be more documentation", and I think I can speak to that:
First, thanks for the feedback. We also want there to be a loooot more documentation, of all kinds.
To that end, several folks on the Go team and many community members have been working on Go module documentation. We have published several blog posts, rewritten "How to write Go code" https://golang.org/doc/code.html, have been writing loads of reference material (follow in https://github.com/golang/go/issues/33637), have several tutorials on the way, are thinking and talking about a "cheatsheet", are building more tooling, and more.
If you have ideas for how to improve modules, module documentation, or just want to chat about modules, please feel free to post ideas at github.com/golang/go/issues or come chat at gophers.slack.com#modules.
For example, the linked blog post spends a lot of time talking about diamond dependencies and other package managers. This is just noise that gets in the way when you're trying to figure out how it works.
If you did want to combine both in a single reference doc, I would move the why out into separate, skippable sections.
When I first was learning Go, I was really impressed by how easy it was to understand the language just by reading the spec. I've found the opposite to be true for Go modules. (Which also, as near as I can tell, doesn't have a spec, but just various scattered blog posts, for various different iterations of the idea.)
I believe the tutorials that are in the work take more of the "short and sweet" approach, which should help with this.
I'd suggest just taking the advice and deciding the best course of action. The why part is generally only meaningful to the decision maker, and not something people care about or have enough context to appreciate.
This way a lot of potential misunderstanding was avoided.
The assumptions in that v2 go modules article around the meaning of major semantic versions do not jibe with the way the majority of software in use today uses version numbers - they are most often used to denote new features, which may or may not have breaking changes large or small, and small breaking changes are tolerated all the time, often in minor versions. This assertion in particular seems wrong to me for most software in use today:
By definition, a new major version of a package is not backwards compatible with the previous version.
> the majority of software in use today uses version numbers - they are most often used to denote new features, which may or may not have breaking changes large or small, and small breaking changes are tolerated all the time, often in minor versions
We're getting into opinion here. Let's be clear: semver very strictly, objectively disagrees with this approach. In general, this approach of "what's a few breaking changes in a minor release amongst friends" leads to terrible user experiences.
Go modules takes the cost of churn, which in some languages gets externalized to all users, and places it on the module author instead. That is far more scalable and results in much happier users, even though module authors sometimes have to be more careful or do more work.
I don't think it's a matter of opinion that the vast majority of software in common use does not use strict semantic versioning, most likely including the web browser and operating system you are using to read this comment, popular packages like kubernetes, and the Go project itself in the mooted 2.0 with no breaking changes. It is highly desirable to avoid significant breakage, even to the point of ignoring strict semver and avoiding it across major version changes! So I'm not arguing for encouraging packages to break, but rather the reverse, I prefer the status quo pre go mod where packages are assumed not to break importers, though sometimes small breakage happens and/or is acceptable.
Most packages use a weaker version of semver than the one you describe, which is still useful, so I'm not clear why the go tools have to impose the very strong version which is not commonly used. The difficulties introduced seem to outweigh any benefit to me.
The kernel doesn't break user space. Web browsers' api generally remains backwards compatible (how long did it take to remove flash - and you can still install it if you want!)
It worked fine.
> This assertion in particular seems wrong to me for most software in use today: By definition, a new major version of a package is not backwards compatible with the previous version.
It is true for any package manager using semver, cargo, npm, pub, etc.
The meaning of versions is a negotiation between producer and consumer, not a fixed rigid set of rules as strict semver would have you believe. In practice the definitions are more fluid, something like major: big changes, may be breakage, minor: minor changes, should be no or minimal breakage, patch: small fix, no breakage.
Putting versions in the import path is not something any of the popular package managers do AFAIK, and they certainly don't force you to do that, nor do they force you to use strict semantic versioning.
I think you are looking at this from the perspective of the package consumer, but versioning is controlled by the package maintainer, and it's their notion of "breaking" that determines the versioning story.
Yes, many users of a package will in practice not be broken by a "breaking change". I could, for example, depend on your package but not actually call a single function in it. You could do whatever you want to the package without breaking me.
But the package maintainer does not have awareness of all of the actual users of their code. So to them, a "breaking change" means "could this change break some users". If the answer is yes, it is a breaking change.
> not a fixed rigid set of rules as strict semver would have you believe.
Semver is a guideline, so it is naturally somewhat idealistic. Yes, there are certainly edge cases where even a trivial change could technically break some users. (For example, users often inadvertently rely on timing characteristics of APIs, which means any performance change, for better or worse, could break them.)
But, in general, if you're a package maintainer, your definition of "breaking change" means "is it possible that there exists a reasonable user of my API that will be broken by this?", not "is there actually some real code that is broken?" Package maintainers sort of live as if their code is being called by the quantum superposition of all possible API users and have to evolve their APIs accordingly. Package consumers live in a fully-collapsed wave function where they only care about their specific concrete code and the specific concrete package they use.
> Putting versions in the import path is not something any of the popular package managers do AFAIK, and they certainly don't force you to do that,
That's correct. Go is the odd one out.
> nor do they force you to use strict semantic versioning.
The package manager itself doesn't necessarily care whether package maintainers strictly follow semantic versioning. The version resolution algorithm usually presumes packages do, but if they don't, the package manager doesn't care.
Instead, this is a relationship between package consumers and maintainers. If a consumer assumes the package maintainer follows semver but the maintainer does not, the consumers are gonna have a bad time when they accidentally get a version of the package that breaks them. This is acutely painful when this happens deep inside some transitive dependency where none of the humans involved are aware of each other.
When consumers have a bad time, they tend to let package maintainers know, so there is fairly strong social pressure to version your packages in a way that lets consumers reliably know what kinds of version ranges are safe to use. Semver is just one enshrined consensus agreement on how to do that.
This was the core point I was trying to make - other package managers correctly leave this negotiation on how much breakage is acceptable to producers and consumers, they do not impose strict semver but a looser one, and importers choose how strict they want to be on tracking changes in their requirements file (go.mod or similar), while producers choose how strict they are going to be with their semver (strict semver is almost never used in its pure form for good reasons, versions communicate more than breaking changes).
The result of this change to go imposing strict semver on both parties will be IMO far more breaking changes, because it explicitly encourages them and forces importers to always choose a major version. It's a change of culture from previous go releases and will have significant impact on behaviour.
We'll also end up with a bifurcated ecosystem with producers who don't like the change staying on v1 and others breaking their packages all the time and leaving frustrated consumers behind on older versions without bug fixes.
Amusingly, the Google protobuf team did a similar thing for the Go protobuf bindings. The new, backwards-incompatible `protobuf` module started with version 1.20, while the old module is at 1.4.
If you've followed this, you know I've omitted one detail - they actually changed the name of the module as well, from `github.com/golang/protobuf` to `google.golang.org/protobuf`. Because, of course, Golang modules names are actually URLs, not identifiers like in every other package manager out there.
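Because the two names are distinct module paths, a single build can require both side by side. A sketch of what that looks like in a consumer's go.mod (the exact versions shown are illustrative):

```go
module example.com/myapp

go 1.15

require (
	github.com/golang/protobuf v1.4.0     // old API
	google.golang.org/protobuf v1.20.0    // new API, distinct module path
)
```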
I don't know how anyone in this thread wants to continue arguing for the merits of the current v2 scheme. It clearly did not catch on in the community and the status quo of hoping a go get will smoothen out all rough edges of your dependencies is terrible, almost as terrible as dep was (you certainly couldn't use consistent versions of the kubernetes components without hardcoded override blocks).
They don't have to be URLs, though. What's the difference between `google.golang.org/protobuf` and `org.golang.google.protobuf`?
There isn't one, because that's also a (mangled) URL, namely "http://protobuf.google.golang.org/", and should be rejected for the same reasons. An identifier would be "protobuf"; note the lack of any reference to a DNS domain.
People use these longer import identifiers because they avoid polluting the global namespace, which should be reserved for very important packages (in the case of Go, the standard library).
Everyone says this but there is strong evidence from npm and other package ecosystems that you can go surprisingly far with a single flat global namespace.
I think there is a paranoia among software developers about potential name collisions. But in reality, name collisions are rare and easily avoided. There are 308,915,776 unique six-letter identifiers. Go to eight and you have 208,827,064,576 names to choose from.
To be scrupulously fair, no one wants to name their package "udjwhc"; the problem is more "protobuf" (by google) vs "protobuf" (by someone other than google), both implementing similar but subtly different interfaces for doing the thing they're named after.
I have yet to see any particular bit of metadata that fits into that set. Using "owner" like you do in your example here is a common choice, but also very frequently changes. If you look around, you'll find lots of examples where a package moved to a new owner and caused all sorts of problems because now all the old name references no longer work.
You could try to pick some essentially descriptive property of the package itself (maybe "Net" if it has to do with networking, as CPAN does), but it's not clear that doing so adds any real value beyond satisfying our human desire to put things in neat little cubbyholes.
Personally I find knowing the owner sometimes useful, and quite like the use of urls in go imports as it also lets me look up the source. I've never found it a huge problem (and I have moved packages from one place to another a few times).
As far as I know, `go mod` will simply connect to https://$MODULE_NAME and retrieve metadata and contents from there.
It makes no sense to me to try to impose this rule on the ecosystem - the costs (confusion, multiple urls, multiple ways of doing it) vastly outweigh the benefits (multiple versions in use at once), and if a particular project wishes to use this rule, they could do so, without imposing it on the thousands of packages which might want to use higher numbers to indicate large feature releases or are already using higher numbers.
In theory a larger number in semantic versioning indicates breaking changes, in practice small breaking changes are made all the time, it's a matter of degree and the context of a given project, and people also use version numbers for marketing or indicating lots of new features - a good case in point being Go 2.0 which may well be backwards compatible to Go 1.0. That makes this assumption in their blog false:
> By definition, a new major version of a package is not backwards compatible with the previous version.
That's not the only, or even the main, reason projects use higher version numbers.
I imagine lots of people will keep their version forever in the 1.x range to work around this rule. I too believe Go should drop this requirement before it is too late.
Thus the "/v2" requirement.
Yes, it is different. Stop complaining, do it, and it just works. Here is the tutorial.
1. Modify the go.mod module path to include the "/v2" element.
2. Commit the change.
3. Tag as "v2.0.0".
4. Push the tag.
I'm not seeing what is too confusing about releasing.
To consume, on packages that you want it, import the new package. Done.
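A minimal sketch of both sides (hypothetical module paths throughout):

```go
// Author's go.mod after the path change:
//     module example.com/mylib/v2
//
// Consumer's go.mod after opting in:
//     require example.com/mylib/v2 v2.0.0
//
// Consumer's .go files import the suffixed path, and can even
// keep the old major alongside the new one during a migration:
import (
	mylibv1 "example.com/mylib/somepkg"
	mylibv2 "example.com/mylib/v2/somepkg"
)
```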
Being able to import multiple versions without conflict has been extremely helpful when I've needed new features in one version, but didn't want to update (at the same time) all the other places.
I really don't get why people don't like it, other than that it is different from what they expect elsewhere.
To the degree that a module is stateful, designed to be used as a framework, and have long-term persistence, it doesn't.
OK for calculations; less than OK for a network server or data store.
If you can make do without any persistent state, or global or externally visible resources (network connections, configuration files, etc.), then you're OK. But that's a limitation.
Global state makes things impossible, like parallel unit tests that all use another module, or changing something like "MaxConcurrency" in a way that is synchronized across goroutines that might already be calling into the third-party module.
If this was the goal, it was certainly not made clear in any prominent place.
Right. And all this is described in accessible form where, exactly?
Also, here's the very helpful error that go spits at you: https://news.ycombinator.com/item?id=24432511
> For what it's worth, the official Go Blog tried to clear the situation up some, and posted a write-up about it.
> Even as a seasoned developer, I found this write-up somewhat impenetrable. For how important and unusual of a requirement this is, the write-up is not accessible enough.
Somewhere in the comments someone mentioned that "the actual info" on modules is here: https://github.com/golang/go/wiki/Modules which is slightly better but for some reason isn't in the official docs, and also basically brushes aside things like actually specifying v2+ modules in go.mod.
And as the article shows with links to examples, people forget to specify v2+ modules (yes, even Google forgets this), or just don't do the whole v2+ thing.
As I understand it the reason for this design decision was to allow a project to use multiple versions of the same dependency at once. To me this seems like a pretty rare edge case. They burdened the 99% of users with weird V2 syntax so that 1% of users could handle this edge case (which could have been handled by other means).
> They burdened the 99% of users with weird V2 syntax so that 1% of users could handle this edge case (which could have been handled by other means).
Really the only burden lands on package authors who are making breaking changes. And really, the path provided is so much easier than actually making breaking changes for your users. Everyone wins.
The burden falls on every module developer who wants to use a version number above 1.x. That group is not congruent with developers who make breaking changes, as you seem to assume.
There's an assumption here about v2 meaning breaking changes that not even Go 2 itself may adhere to (as it may well be more of a 'new features' version number than a 'breaking changes' version number increment).
And this is fine, as it gives module developers an opportunity to make the breaking changes they have wanted for quite some time. It is also much clearer for module users, as opposed to Java, where one does not know which version will blow up the code: was it 2.11, or maybe 2.13.2, etc.?
> There's an assumption here about v2 meaning breaking changes that not even Go 2 itself may adhere to
If anything, Go is being even more conservative: they are avoiding breaking changes in 2.0, as opposed to rashly making breaking changes in 1.x. I see no problem with that.
Go mod just looks like a total mess from the outside. To be honest I (like most gophers I've talked to) never really understood why dep wasn't adopted as the official solution for this.
I'm resisting converting my projects to modules until there's a clear and definite advantage to it. So far, none.
I am very happy that they did not adopt dep, it was in no way ready (and I don't think it was going to significantly improve given more time).
I guess the main complaint at the time was that it appeared to be "not invented here" syndrome. The community was converging on dep, and then the Go team decided to do something different, for reasons that weren't completely clear. It kinda rammed home the reality that Go is not a community project, it's a Google product controlled by a small team. That definitely has benefits and good points, but it doesn't match everyone's expectations.
Go modules are not going to go away so what benefits will waiting as long as possible give you?
I'm not a big fan but it's fine in day to day life and the most common issue for me is that some indirect dependency is having issues which is usually solved with some "replace" lines in the go mod file. Not ideal but also not too annoying.
I know I'll have to do it one day. Hopefully that day will be after they've brought more sanity to it.
Ah, TIL, thanks!
I meant more "why must the import syntax use specifically this scheme to denote versions". I guess backwards compat?
As an ex-Googler I can tell you that this above statement can be more broadly applied across _many_ developer facing Google products (most notably for me, Android frameworks and libraries). Google's apps don't use the guidance and recommendations we gave to external Android devs.
I wonder if there is something cultural about Google not even being able to coordinate changes and improvements within the org itself.
1) your go.mod should include the version
2) the module itself always assumes it's the version declared in its go.mod file
3) to release a version you just tag it based on the version in the go.mod file (with later semantic versions being considered the best and what go will use by default)
so module authors don't have to do anything special besides including the version in the go.mod file (and keeping it up to date) and tagging releases appropriately, in a way that keeps to Go's compatibility promise.
4) when you import a module, you specify which version you want (with a default assumption of v1), so your go.mod would include either the module path or module-path/v2...
5) your imports in actual .go files would not have to include the /v2 as they would know based on the go.mod file.
you as the end user of the module will follow v0/v1 tag for as long as it exists, but will have to modify your go.mod to use v2 if you want to switch.
does my conception not make sense?
I've never been able to find docs on what exact format those should have. One of the many problems with Go modules.
(I find this reference exceedingly helpful, and wish it were somewhere more official than the `golang/go` wiki, but c'est la vie)
The Go the language is stable, and Go modules is stable and is the future. Everything, including new features, is based on Go modules. That isn't changing.
With a night to sleep on it, I think requiring `/v0` and `/v1` would have been the better option; that way you'd hit the issue right away rather than years into your project. Teach things early. I've updated the post with a note of this idea.
I think that's likely not something they could add at this point however.
I don't even mean to be critical of the v2-ing itself, I just think it's entirely non-obvious.
These are just the kinds of things you put up with when writing go code and need to work around bad designs of the language and ecosystem.
They know they cannot rely on a stable API, which means they have to carefully read the diff for every single upgrade, saving no work over explicitly picking a major version.
By labeling as v1 you're:
- communicating the package is stable and will be maintained
- communicating that breaking changes will now increment the major version (saying nothing about stability, backported bug fixes, or any maintenance)
I wish they were not combined. Incrementing major versions frequently on early stage projects would make them work better with package managers, but without the expectation of maintenance from the community.
Not incrementing major versions makes it hard for package managers to make sensible decisions.
Tag your version v2.0.1, import it as .../v2 in project - done. You don't even have to read docs or blog posts to understand it, just look at the source code of a new project, notice the ../vN part, see the corresponding tag in repo and do the math.
The basic idea of having separate import paths for non-compatible versions of a project seems, in fact, really good; an elegant solution to a tricky problem. It certainly seems much nicer than secretly using multiple versions of a package behind your back (especially when you consider that if this happens and a package uses mutable internal state, you could actually get broken behavior).
I guess for me the takeaway is in aggregate, people will be maximally lazy. You need to make the correct thing be the easiest thing.
(†Also release a patch to older versions of Go to just ignore v1/v0, and error for above, so you can have source compatible w/ both module and non-module Go.)
If the author has missed the numerous blog posts throughout the development, and hasn't read the official documentation, well, I'm not sure we can fault someone else for that.
Luckily, it appears that the community has picked up the memo somewhere, as there are quite a few v2+ modules around.
As is the usual case with these write-ups
Any minimally maintained project uses modules already. It is a must for any Go programmer to be skilled at managing modules because otherwise things just break. And yet it seems some people have lived under rocks.
People don't have time to RTFM, but apparently lots of time to complain and write about things they should have known with minimal due diligence around the tech they use. shrug
This resonates with my experience writing Golang (plus Bazel) this year. Anything that isn't working, you must have just messed up and done something wrong. It can't be that something is actually broken here.
Even in this comment thread, “stop complaining and just do it.”
Ian Lance Taylor is probably the only person on the Go team with a more pragmatic approach to things. The rest are the deaf led by the blind, plus a few vocal gatekeepers outside Google who will publicly humiliate you if you stray from the current orthodoxy.
The problem is that the official solution, as mentioned in the article, seems really bad... you just have to copy the whole project into the `v2` folder, then do the same for a future `v3`, and so on... it kind of goes against normal practice of using version control tools for this kind of thing, where each version would be on a different branch, not in a folder.
Not to mention having duplicate files like this can make finding files in your editor a bit confusing as well.
Though I like how Go solved dependency management and dared to go on a different path than almost every other package manager, this particular solution seems to go a bit too far?!
I mean, it goes against how git works. SVN does the branch-in-a-folder thing, so it's not anathema to "version control".
That said, as for branch-in-a-folder, "thanks, I hate it".