Go’s Major Versioning Sucks – From a Fanboy (qvault.io)
167 points by lanecwagner 6 days ago | hide | past | favorite | 163 comments

I'm the author of the other recent post that made it to the front page about the trouble with Go's v2+ modules.

I disagree with much of this post.

> Go makes updating major versions so cumbersome that in the majority of cases, we have opted to just increment minor versions when we should increment major versions.

I disagree with this, wholeheartedly. I think Go's module system making modules separate packages on major versions is actually great once you know how it works. The problem is simply that it's non-obvious and not well understood. I think they could have made it more obvious by requiring it on v1, for starters.

Beyond that you should absolutely be tagging major versions for every breaking change, and it should be something a developer has to stop and think about. Not something to be taken lightly, EVEN with regard to internal libraries.

You can learn how to code things preemptively so they are flexible for additive changes and require fewer breaking changes. It's almost always worth the effort in the long run.

To my post's point, the problem is just that people don't know how it works, and its design makes it something you learn late into a project, when you're tagging v2, rather than something you hit early.
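For anyone who hasn't hit it yet, the mechanics are small but easy to miss. A sketch, with a hypothetical module path: from v2 onward the major version lives in the module path itself, and v1 and v2 are distinct modules that can coexist in one build.

```go
// go.mod of the library, after tagging v2.0.0:
//
//	module example.com/mylib/v2
//
// A consumer then imports each major version at its own path:
package consumer

import (
	mylib "example.com/mylib"      // still resolves to the latest v1.x.y
	mylibv2 "example.com/mylib/v2" // fetched with: go get example.com/mylib/v2@latest
)
```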

FWIW, my post, for anyone who didn't catch it:


No, that is not the problem, people are not just holding it wrong.

The problem is a mismatch between the Go team's perception of semver usage (or perhaps how they think it should be used) and its real-world usage. Some examples:

Major versions are used to communicate more than breaking changes.

Project identity doesn't change when a project changes; a new major version is not a new package.

Hardly any projects use strict semver in the real world, and other package managers allow a more flexible, looser semver.

Just what is a breaking change anyway? Real-world packages make small security changes which break compatibility, sometimes minor changes do too, and that's ok if breakage is very limited in scope.

In the current Go modules, new major versions are not easily discoverable or announced; new importers will need to consult the docs or know about go get @latest to find the latest version. This could be fixed with tooling if the problem is recognised. There are possible solutions if they want to keep /v2 paths, but we shouldn't pretend there are no problems caused by this change. It makes it easier to produce breaking changes and harder to increment the major version just for new features (as most projects, from Linux to Rails, do).

Personally I think it condones breaking changes and will lead to more of them, not fewer - if you're forced to change import paths at higher versions anyway, why not just delete that API you don't like? Within Google, say, there are significant downsides to this and pressure to fix your own breakage; outside, not so much, including in packages people are required to use, like API client packages for popular services.

Significant breaking changes introduce real pain for consumers and in an open ecosystem should be avoided at all costs.

Okay, here's a real world example. Three months ago, stretchr/testify was version 1.3 and supported Go 1.10. Yesterday, we tried to build some project, and suddenly tests don't run because, you see, now testify is version 1.6 and they support only Go 1.13 and up. That's a bloody breaking change, okay? And it happens all the time with Go packages: they're constantly in the process of dropping support for older Go versions without bumping major versions.

That's a good example of the difficulties here.

If they used go mod and didn't see it as a breaking change, you'd be in the same situation. I think many people would debate whether this is breaking or deserves a major version - I think most would say no if that was the only change.

If they used go mod and did see this as a breaking change, they'd increment the major version and you'd never notice. The unfortunate consequence is that most of their consumers would continue happily using version 1.3, multiple major versions behind and missing critical security fixes (most projects backport fixes for, say, two major versions) - and they'd probably end up with users spread across major versions from 1.x to 21.x, if this is your definition of a significant breaking change.

In theory, under strong semver, every possible breaking change would be a new major version and every bit of software more than a few years old would be on version 945.5.1. In practice, in the real world, we never see that sort of versioning; people use major versions for something very different: significant changes in API surface (whether breaking or not), and sometimes significant breaking changes. It's a signal rather than a hard, objective rule.

None of this is insurmountable of course, but it does need to be attended to. At present go mod seems to assume strong semver and the result would be IMO a proliferation of breaking changes and outdated software being used (as opposed to current Go which almost forces everyone to be on HEAD and to try to avoid breaking others).

Dropping support for a language version is a breaking change. You literally can't compile the dependency itself. If that is not a breaking change by your definition, then nothing is. And I don't know if they use go mod or not because, as you probably know, Go 1.10 doesn't support go mod; modules were only introduced in Go 1.11.

I don't argue for strict semver - you can never be sure a supposedly minor change won't actually break things - but some changes are guaranteed to break things. Why not at least mark those with a version bump?

The impact varies by project and the consumers of that project, so this is really difficult to have absolute rules about and this is a good illustration of well-meaning people disagreeing about the impact of changes.

I'm pretty sure a huge number of Go projects no longer work with earlier Go versions, it's just a question of how far back you go. If you're back at Go 1.10 I'd recommend freezing your dependencies and putting them in your own repo, it's the only way to be sure. Otherwise this is likely to happen to you again as people aren't great about supporting all versions of Go 1.x - it's quite a support burden.

I would note that even the Go language itself has dropped architectures and support for bootstrapping with older Go versions without bumping the major version. I think that's fine. Of course that does break strict semver (oops), but nobody cares because nobody actually expects strict semver in real software. What's important is the impact - it's sometimes ok to remove features if nobody is using them and I've never seen a project use a major version for that. I've also never seen a project bump a major version for a breaking bug fix.


Do they have an announced support policy for versions? If they have a policy that says they support the three most recent stable Go releases, I wouldn't count dropping support for a version of Go outside that window as a breaking change - they aren't breaking anything for you, because your version of Go was already unsupported. Whether dropping support for a language version (regardless of whether the code continues to build with it or not) requires a major version change is probably a matter of personal opinion. I would tend to say no, but I can see valid arguments for it (mainly because semver doesn't actually give us enough bits to communicate granular info).

What would be ideal is if we had a means of communicating more information about releases of code and their relationship than semver provides. I.e. I can publish a repo and add some kind of "language-support.json" document that specifies I support the 3 latest major releases of Go, and have the package manager figure out whether my version of Go is supported. Other ideas for metadata would be the ability to add labels to releases, and have the option to filter/prioritize upgrades based on those.

I would love if package managers supported labels as part of the metadata, and I could get a summary of all the labels between my current version and the new version. So on an upgrade, I could get a label diff for the package versions like "security:cve-1234 feature:oauth2-support bugfix:stale-kafka-messages". Those are cherry picked things that make nice labels, not everything makes sense in those messages. But sometimes it feels like we just do global updates and make sure everything builds, just to keep the tech debt low. I have no idea what actually changed in the package, and as long as it builds and tests pass I don't have to know. That's because we have so many dependencies, and it would take ages to read the release notes for all of the versions of all of the dependencies between our current version and the new one. Labels provide a means to communicate a succinct version of what changed; succinct enough for someone to read while they're waiting on tests to run.
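Sketching the two ideas together - the filename, keys, and label format below are all invented; no package manager reads anything like this today:

```json
{
  "language": "go",
  "supportPolicy": "latest-3-major-releases",
  "testedWith": ["1.13", "1.14", "1.15"],
  "releaseLabels": {
    "v1.6.0": ["security:cve-1234", "feature:oauth2-support"],
    "v1.5.2": ["bugfix:stale-kafka-messages"]
  }
}
```

A tool could then warn on (or refuse) an upgrade whose metadata says your toolchain is outside the support window, and print the label diff between your pinned version and the candidate.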

I think it's unreasonable to expect go mod to support any compiler version other than what is officially supported for Go - the two most recent releases. You are right about your case, but your use case of continuing to use an old version of Go is not something they support.

And is Go so hard to upgrade for your application?

Go the language is really best in class when it comes to keeping backwards compatibility.

Well, in this particular case it was hard. The used version of one of our internal libraries didn't support Go 1.13, so we bumped its version, and that required some more dependencies brought in and updated, and then it turned out that the container with Go 1.13 was misconfigured so it literally had no access to that internal library at all (something with Gitlab deploy keys...), so in a hurry we tried to vendor this internal library wholesale, and then there were some problems with gomod-ed libs and those without go.mod, and some linters had to be explicitly told to skip the 'vendor' directory (wtf), and...

And at this point, I just stopped digging down that rabbit hole and instead added

    TESTIFY := github.com/stretchr/testify

    mkdir -p "$(GOPATH)/src/$(TESTIFY)" ; git clone --depth 1 --branch v1.3.0 https://$(TESTIFY).git "$(GOPATH)/src/$(TESTIFY)"
before 'go install' in the build script, and ran it in the container with Go 1.10. This change took 5 minutes to make and worked without any problems.

Isn't one of the reasons we use semver in the first place so that, after you make some small change to the application you're working on, you don't suddenly find yourself having to update two-thirds of your environment just to compile it?

But this is still a breaking change though. If I build my app today I should be able to build it again tomorrow without any issue.

> Hardly any projects use strict semver in the real world. Most other package managers don't enforce it.

I think strict semver is reasonably possible for things that are libraries rather than products in their own right.

But you're right, people want major versions to indicate something significant and new (and occasionally major versions have a contractual significance as well - requiring customers to pay an upgrade fee), rather than small breaking changes. It's even sometimes possible to create a new library that has huge changes in the intended way you use it, its conceptual model and masses of new functionality, but it maintains backwards compatibility - it's perfectly understandable that the package owner wants to indicate this with a major version bump.

The other issue I've come across is that what constitutes a breaking change is much more subjective than many people realise. Any change is a breaking change if someone is reading into your library and tweaking bytes at specific locations. Of course, for most libraries they shouldn't be doing that, which means that if someone is using your library wrong, you don't worry about breaking them. But the question of whether someone is using your library wrong is pretty subjective. At the extreme, it could be taken to mean relying on any behaviour not explicitly documented as intended. Ultimately, it comes down to the judgement of the package maintainer, and that doesn't always match the judgement of the user.

Having version numbers that mean something to machines is very useful when it works, though. Perhaps we should just separate those from human-targeted versions rather than go full sentimental versioning: http://sentimentalversioning.org/ Maybe something like <human version>.<machine-readable api version>.<unique incrementing build number> could work.
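A toy reading of that scheme (entirely hypothetical - the type, field names, and rule below are invented for illustration): only the middle, machine-readable component feeds dependency resolution, so the human-facing part is free to be pure marketing.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// hybridVersion splits a "<human>.<api>.<build>" string as proposed
// above. Only api would be compared by tooling; human is marketing,
// build is provenance.
type hybridVersion struct {
	human string
	api   int
	build int
}

func parseHybrid(s string) (hybridVersion, error) {
	parts := strings.Split(s, ".")
	if len(parts) != 3 {
		return hybridVersion{}, fmt.Errorf("want 3 components, got %d", len(parts))
	}
	api, err := strconv.Atoi(parts[1])
	if err != nil {
		return hybridVersion{}, err
	}
	build, err := strconv.Atoi(parts[2])
	if err != nil {
		return hybridVersion{}, err
	}
	return hybridVersion{parts[0], api, build}, nil
}

// compatible: the same machine-readable API version means a safe swap,
// whatever the human-facing component did.
func compatible(a, b hybridVersion) bool { return a.api == b.api }

func main() {
	v1, _ := parseHybrid("2019edition.4.1021")
	v2, _ := parseHybrid("2020edition.4.1188")
	fmt.Println(compatible(v1, v2)) // true: big marketing bump, same API
}
```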

I think because of the many problems listed and the significant one you add (Just what is a breaking change? It turns out pretty much any change can be breaking for some user), the tooling needs to recognise that strict semver is unsound and a looser version is required. This isn't just a question of pesky humans with their marketing versions vs unerring machines - there are complexities in the problem space which shouldn't be ignored.

Different producers/projects have different expectations for strictness in semver, and that's ok, they can use it differently in a negotiation with their consumers. Also different consumers have different requirements, and most tools provide a way for them to specify that on a per-import basis (upgrade only minor of these pls unattended, upgrade nothing on this one, only breaking changes on that one).

In short, loose semver is a feature, not a bug.

I think you conflate marketing with technical change.

You would like to announce a new major version with fanfare, bumping a version number.

A programmer wants to be sure that his piece of software continues working if there has only been a minor version bump of a library he is using.

Did you read Russ Cox's reasoning on vgo, versioning, and the module system in general? It reads as very plausible and understandable.

Just because so many people interpret semver in their own way instead of what it really means shouldn't prevent the Go core team from trying to do it right.

> I think you conflate marketing with technical change.

There are multiple reasons given above why strict semantic versioning is not used in real-world software, that is a reductive summary of one of them.

Yes, I've read those articles a while back when they were written. In a very real sense, a consumer can never be sure that software keeps running properly if a library changes in any way without extensive checks. Real-world loose semantic versioning is a promise, not a proof, and it works better that way.

I don't personally think the current go proposal is terrible or sucks, but I do think it could be improved, and it'll be improved by listening to how people use versions.

>Project identity doesn't change if a project changes, it is not a new package.

It is a new package. From a practical perspective, once you break backwards compatibility the new major version is its own independent thing. If you pretend it's the same thing, then you run into the situation where two libraries need v1 and v2 and therefore can't coexist. This is going to destroy your entire ecosystem. Replacing a few strings is nothing in comparison.

I'm not sure which ecosystem you are referring to (Go's or the package's own), but it seems like there are plenty of existing ecosystems chugging along even though they do, in fact, treat the two versions as the same thing.

Not saying it's right or wrong, but clearly it is not a death sentence.

But it is cumbersome.

Linux distros have the same issue: two rpm or .deb packages cannot have the same name, yet distros still manage to support having two versions of a package/library installed (under different names).

If it's a shared library and the maintainers don't get the SONAME versioning right for backwards-incompatible changes, it's even more cumbersome, and it incurs more work for everyone consuming that library.

It is a new package, from one perspective.

That perspective isn't appropriate to use outside of a narrow scope. In particular it's not appropriate to use at an ecosystem level, it's too surprising.

>Hardly any projects use strict semver in the real world

Part of the appeal of Go is simplicity: get some things down in a minimal yet complete way, then do them consistently, on the strength that it's a net win - less complexity to learn that takes you further in most cases. This isn't the only game in town, but it is a good game. C++ is different: there are 21 million ways of doing things, but you only pay for what you use. That's fine too if you know what you're doing.

Back to Go: the fact that the real world is wishy-washy here - sometimes in the mood for strictness, sometimes not, preferences shifting from day to day - is a sometimes depressing, informal part of DevOps. It's not the Go way.

So I don't think Go's standpoint - that you must change something explicitly to ask for a breaking change - is out of line. It can't be that tough an ask to mean what you say and say what you mean when we label code with a semantic version.

Both this post and the other Go v2-problem post fail to get at the crux of the problem. In Go there are two names:

- the path we import in code

- the actual Git URL (or vendor ref), down to the tag/commit ID in go.mod, that the previous item ultimately resolves to

The module name in code is ambiguous. It's desirable to some that code never changes with respect to the imported module name. If so, then we must turn our attention to what's in go.mod. Here the ask is for hints that a breaking version is available - and it'll be up to the developer to use it or ignore it, changing go.mod as they see fit.

If not, then you've got to change the code to reference the breaking version if that's the desire - realizing it's still ambiguous, because it doesn't resolve down to a commit ID or tag, so that leaves go.mod on the table for change too.

Nobody disputes that code must change to reflect breaking changes if the breaking change is pulled in via go.mod. The questions are: did we ask? Where? Did we know there's a breaking change, and can't we get some help knowing there's one, since Go hits Git anyway?

At its crux, then, there are two names. Leaving the code unchanged and changing go.mod with some tool help is better, IMHO.

> Just what is a breaking change anyway?

It's just an API kind of thing. If a function, constant, var, or type was available and is not anymore, it's a breaking change. It's an objective measure; there is nothing left to subjectivity.

Now, of course a bug fix is going to "break" things if your code relied on the bogus behavior. An updated algorithm can also break things (I maintain a SAT solver, whenever I update the underlying algorithm, even if it's way faster on average, in some limited cases it can make one very specific problem way slower to solve, which can have bad consequences). But as it has no consequence on the API part of things, it's not a major change in semver's meaning.

There are far more extensive possibilities than what you export - almost every change can be a breaking change, often without the producer knowing about it. This is why nobody uses strict semver in practice.


The parent commenter is correct in the sense that API changes can be, and should be, automatically checked for semver. You're just saying that doing so still does not guarantee there are no breaking changes because, besides API changes, there are also semantic changes, and those are difficult, perhaps impossible, to check automatically.

Or are they? Check this out: https://www.unisonweb.org/docs/tour#-the-big-technical-idea

Thanks for the link, a Merkle tree for code sounds interesting.

Are we talking about semver in general or semver the way it is (supposed to be) used in go mod?

Well both, because there is a disconnect between the two.

Strict semver (as proposed by go mod): any breaking change means a new major version, and a 'new package' at a new import URL. This means older users are left behind unless they explicitly upgrade, and producers are encouraged to make breaking changes because it is easy for them. At present there are no measures to alert consumers that they should upgrade or to communicate what the changes are. This doesn't reflect current practice even in the Go project itself (still at 1.x despite small breaking changes/deprecations).

Loose semver (as used in the real world): package identity is constant, minor breakage happens at every patch level, and the level of breakage is negotiated between producers and consumers at the package level. Semver is used to signal change (major: big changes, possible breakage; minor: small changes, less breakage; patch: tiny changes, possible breakage in that area). Note that major versions are usually used for big changes, not just breaking ones, and big changes can be just as painful for consumers (e.g. "in v3 we're introducing a new API for payments; start using it - the old one is still there but will go away in v3.1"). Usually package management systems provide mechanisms to help that negotiation (automatic upgrades within minor/patch on the assumption breakage is minimal).

There are good reasons for the current go mod defaults (simple dependency resolution, simple migration between versions), but they do ignore real-world usage IMO and will lead to a bit of pain without some further work to resolve those contradictions.
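The minor/patch negotiation described above boils down to a small rule. A sketch of the caret-style default many package managers apply - the type and function here are a toy, not any real tool's API: upgrades within the current major are applied unattended, anything else needs explicit opt-in.

```go
package main

import "fmt"

// version is a toy parsed MAJOR.MINOR.PATCH triple.
type version struct{ major, minor, patch int }

// autoUpgrade reports whether a package manager following the
// loose-semver convention would apply the upgrade unattended:
// same major version, and the candidate is strictly newer.
func autoUpgrade(cur, next version) bool {
	if next.major != cur.major {
		return false // possible breakage: require explicit opt-in
	}
	if next.minor != cur.minor {
		return next.minor > cur.minor
	}
	return next.patch > cur.patch
}

func main() {
	fmt.Println(autoUpgrade(version{1, 3, 0}, version{1, 6, 0})) // true: minor bump, applied unattended
	fmt.Println(autoUpgrade(version{1, 6, 0}, version{2, 0, 0})) // false: major bump, consumer must opt in
}
```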

> It's an objective measure, there is nothing left to subjectivity.

A convenient tip: in the real world this is almost never true, so when you find yourself insisting that it is, stop and think.

I'd be glad to be presented a real-world counter-example in a (non-toy) go library.

To add, making the version part of the namespace is the right thing to do, also because it avoids conflicts with transitive dependencies. Changing the namespace means that the same project can depend on multiple different major versions of the same library.

And that's fine, because when you break compatibility, you're actually not talking about the same library. And this makes upstream developers think twice about breaking compatibility. Accretion of features should be preferable to breakage.

It's what Rich Hickey talks about in his Spec-ulation talk: https://www.youtube.com/watch?v=oyLBGkS5ICk

It's not like putting the version into the namespace is necessary for this, though. Rust's Cargo will let you use multiple versions of a library too, without extra namespacing.

...and needs a package manager that can solve NP-complete dependency hell: https://research.swtch.com/version-sat

I think Go's package management is really well designed, and it actually relies on semver semantically.

> Accretion of features should be preferable to breakage.

I think most people, particularly those using Go, would agree with this.

It does not follow that we should make breaking changes easier or routine, nor that we should force people to use strict semver (which is not widely used for good reasons).

I see why they've done that as it simplifies assumptions but prefer the way other package managers handle this where it is left to producers and consumers to negotiate how strict they want to be.

> Changing the namespace means that the same project can depend on multiple different major versions of the same library.

Do you actually want that? Do you actually know that the multiple versions of the dependency you happen to be using do not conflict? If the dependency is in different namespaces their locks and other globals are also in different namespaces.

Yes, I do, because this isn't just about my source-code, but about the dependencies of dependencies.

The two approaches to this are to force dependencies to resolve to a single version (can be painful and require humans) or to duplicate code (can cause bugs).

There are disadvantages to both and in particular duplicating dependencies should be seen as a short-term fix for a transition, not something you do routinely. If you do it routinely you'd see bugs due to shared locks, state, or changed data structures - it breaks assumptions about pkg globals for example, an important part of the language used extensively in the stdlib.

>There are disadvantages to both and in particular duplicating dependencies should be seen as a short-term fix for a transition, not something you do routinely.

If you don't duplicate versions then you will have to wait for every single library in existence to update to the latest major version. This can take decades and still fail. Just take a look at Python 3.

Or even conflicts arising from competing use of state space in some other app. For example, if your database has a limited number of connections it can accept, then you absolutely don't want different dependencies handling connection pooling by themselves.

So yes, you want that. But how about the next question? The dependencies of your dependencies may conflict with different versions of themselves. Because nobody thought 'what happens if there is another version of this code running in a different namespace'. And nobody tested it, so you are trusting to luck rather than any sort of rigor. Thankfully, the conflicts when they happen are usually fairly obvious (two copies of the connection pool built into your database driver, or a crash on startup because the run-once initialization ends up being run twice). But it could be more subtle, like multiple versions of the dependency that handles logging not sharing a mutex and your logs end up interleaved or corrupt under load.

Yes! This is exactly why people do shading in Java!

I'm not sure it's just an education issue; it should be an end to end solution.

If people aren't using it, it's weird.

That smells of trouble to me... either it should be good enough that people want to use it, or not using it should be painful enough that people grit their teeth and do the right thing because it's less effort in the long term (e.g. you cannot use modules at all unless you do it right).

If there is no obvious downside to doing the 'wrong' thing, and it's less effort, and people are doing the 'wrong thing', in practice, in the wild...

...well, I'm not sure this has really worked out ideally.

Certainly, "great" is not how I would describe it.

I think most of the frustration I have seen comes from people expecting symmetry and uniformity in the vendor/package management solution. This is the same way someone looking at an architecture diagram expects clean straight lines and aligned boundaries.

That said, the Go team should have made it clear right in the title (of all their blog posts) that they are attempting a unique, non-intuitive solution to the hard problem of transitioning from the 'no versions' world of existing packages to semver, with an explicit desire to force people to avoid new major versions if they can (just like the old days). There is nothing wrong with weird solutions. We don't have to follow Rust or Ruby or npm. Their situation is different.

Of course, then the protobuf or grpc team (I forget which) went and created a new "v2" without the "/v2" suffix, and that pissed a few people off. Of course, that was still within the reasoning Russ explained (it was a reimplementation, and the domain part of the URL already changed, so there was no need to add /v2), and Russ probably doesn't control that particular package's team. But that is life! People still expect perfection from open-source projects.

What I dislike about Go's solution, and somewhat about semver in general, is that no one makes major version changes anymore, even when they break backwards compatibility.

However, if I use semver internally, like it's supposed to be used, and make regular major changes (let's say once a month), my code repo becomes a mess. After two years, I'll have folders for v1 up to v24. And the fact that I have to cp -R/grep|sed means that people will be less likely to properly signal that a backwards-incompatible change has occurred.

Disregarding uniformity, for the reasons above I don't think their design is a good one. It treats major version changes as a rare event.

If you're introducing breaking changes once per month, then your library really isn't ready.

Tell people you have an unstable library. Breaking changes _will_ happen. Then, when you've used your library in production for a while and are reasonably sure it won't have to change, tell people it's stable. If you make breaking changes after that, then start with a /v2 suffix.

For some reason, semver has made people stop going through the alpha, beta and rc stages of a release, even though the point of those stages is to discover breaking changes before you're committed to maintaining the version for the time to come.

Major version changes should be a rare event. If it's not, you're really just shipping around a (pre)alpha library, and should be explicit about that.

> If you're introducing breaking changes once per month, then your library really isn't ready.

The whole software industry moved away from waterfall to agile because it's unreasonable to assume that no new requirements will emerge. I would disagree, for example, that Python 3 is not ready just because they've introduced backward-incompatible changes in some minor releases.

> The whole software industry moved away from waterfall to agile because it's unreasonable to assume that no requirements will emerge.

There's also an insidious side effect: people tend to think of the more common feature-oriented semver rather than strict semver, and they associate versions with progress and effort. Under strict semver you could easily get to version 50.0.0 just sorting out finishing touches on the API if you don't have good communication with the customer. Some enterprising middle manager will start asking why you've spent 50 major versions' worth of effort on this library or app.

As a result of those effects and the unequal application of semver, I usually encourage people at work to choose other versioning schemes. Without strictly enforcing semver, there's really not a huge advantage to semver over a plain incrementing version. Incrementing versions also have the nifty feature of making it trivial to see how many versions behind you are. Or some people go with timestamped versions, which tells you how old your version is.

I mean, Python is basically the prime example of how to poorly manage version upgrades, with its failed 2 => 3 transition.

Can you provide a single example that supports your thesis that migration from Python 2 to 3 was poorly managed semver-wise?

If anything, Python is chastised for having maintained two major versions in parallel without giving any compelling reason to force major rewrites on the whole ecosystem.

They were just desperate to do a better job of screwing up than Perl with its 5 => 6 update...


As we can see from the Perl ecosystem having completely moved over to the extent that Perl 5 is now out of support while Python is still mired in the update.

That's funny, because I see the opposite. Perl 6 was a failure that had to be spun off into a separate language, and people stayed with Perl 5; work on Perl 7 started recently, but as far as I know Perl 5 is still the version in use - recently FreeBSD forced me to update Perl to version 5.32.

Regarding Python, I no longer see 2.7 in the wild (though I have no doubt it survives in some places); all projects I'm working on are in Python 3. AWS Lambda supports 3.8 (the latest stable version), Ubuntu comes with 3 installed by default, and FreeBSD set 3.7 as the default, deprecated 2.7, and will remove it completely by the end of this year.

Many Python 3 packages were released as Python-3-only, and packages that previously supported 2 dropped their support. Running 2.7 today is a liability - not so much because it is EOL and there are no security fixes anymore, but because packages are dropping support (which is IMO a bigger deal).

Yes my comment was ironic.

ah, feeling dumb now :)

> The whole software industry moved away from waterfall to agile because it's unreasonable to assume that no requirements will emerge

That's completely unrelated to library versioning. The software industry that I work at absolutely relies on library authors to diligently introduce breaking changes only in rare occasions, and preferably using semver to the extent possible.

If you can't guarantee your library will be kept up-to-date with bug and security fixes, without me having to re-write my application on minor library updates because you decided to change your API to make it prettier, then I most certainly won't be using your library (indeed, I have removed dependencies from projects I worked on for this reason).

>re-write my application on minor library updates

This is the tension of semver - a minor library update might include a breaking change, but that doesn't mean you have to rewrite your application. For example, let's say I have a library with a single function that returned an int32, silently corrupting your data, and it now has to return an int64.

The right thing to do would be to issue a major upgrade where the function returns an int64. If you never used this function you wouldn't have to rewrite your application. Major updates in semver only mean backwards compatibility is broken; it doesn't mean it's a whole new API.

To me, the Go approach takes your approach to major upgrades - reserve them for when you do something like rewriting the API. That, to me, isn't a good assumption to bake in; I'd like to change a single function signature without having to copy my entire library to a new folder.

> To me, the Go approach takes your approach to major upgrades - reserve them for when you do something like rewriting the API. That, to me, isn't a good assumption to bake in; I'd like to change a single function signature without having to copy my entire library to a new folder.

Then make a new function within the same folder that works in a new way. If necessary, mark the old function as deprecated. An exception can be made if the function never worked to begin with, but that should really have been caught by tests or people using a pre-release version of the library.

> That's completely unrelated to library versioning.

It really is not. The initial assertion was that stumbling on breaking changes once per month was somehow a sign that a library isn't ready. This assertion flies in the face of the reality of software engineering. If your whole process is designed to allow requirements to emerge then you will have to deal with emergent requirements that you need to accommodate, and in moderately sized projects that always involves changing behavior.

> It really is not.

You seem to be assuming we use libraries to implement the quickly changing business requirements? If you do that, then sure, you'll have a hard time using versioned libraries at all. But you don't need to do that. Doing that is just a pain.

Have a single code base with separate modules by all means, but don't pretend the modules are versioned separately and independent.

In other words: don't make them libraries unless you MUST. A library, by definition, is a re-usable software component. Something can only be re-usable when it's stable (if it keeps changing, it's by definition not re-usable as consumers would also be expected to keep changing, and that's not the case for everything as you seem to assume). Things like base64, encryption algorithms, data structures... even spec-backed things like OAuth and HTTP libraries don't just change every month.

I think GP was talking about internal software. Since business requirements are prone to change a lot, don’t blame the developers that the software needs to change.

I think the issue is about having to cross a module boundary, which is really about monorepo vs multirepo. You have much more freedom to make breaking changes when you don't need to cross a (repo/module) boundary.

I think the Go module system encourages you to avoid adding unnecessary boundaries. As someone else pointed out already, one of the problems is that you don't find out that boundaries are bad until you hit v2, and then you discover that you have this chunk of technical debt (unnecessary repo boundaries that slow down development) that you didn't know about.

Where I work we have several internal libraries. When we crossed the point where one library was used in more than 3 services, we stopped doing breaking changes because it was simply too time-consuming to upgrade to the next major version of the package.

Imagine spending a significant portion of your day migrating all your internal apps to the latest major version of an internal package; that's what we faced once a month.

We stopped doing breaking changes after a while. Instead we're effectively doing versioning at the module level. If module A could really use a fixup, we make an entirely new module A.V2, and start using that in new code. Things that don't need the new functionality can keep using the old API, which still works, and upgrade whenever we have some extra time (we never have extra time) or we really _really_ need to.

We've also adopted a policy that nothing gets added to our internal packages before they've been used in an application for a certain amount of time. Code duplication between applications is fine if we don't know exactly how the API should work, that way you have flexibility to change the code to fit a specific use case for a specific application.

If that common functionality really has to be common _straight_ away (which we surprisingly haven't hit yet) we could place it in a module A.Unstable to make certain everyone knows that it can break at any point.

In either case, it's just as important to avoid breaking changes in internal software as everywhere else. Breaking changes are not free, except for the library authors.

You don't get to define what "ready" means for me. I get to define that, as the author of the package.

Go doesn't require a folder for each version. You can simply change go.mod to declare the module as github.com/nemothekid/foolib/v2 or whatever.

That's one of the points of confusion — first that the go.mod module declaration doesn't need to exactly match the module location, and secondly, following this, that using a folder instead is a fallback option.

Go is famous for favouring simplicity in its design, but this is a rare instance where it's not made things simple enough.
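Concretely, the no-folder option is a one-line change in the module's go.mod (reusing the hypothetical path from the comment above); tagging a commit v2.0.0 then publishes it as the v2 module:

```
module github.com/nemothekid/foolib/v2

go 1.15
```

Consumers then import github.com/nemothekid/foolib/v2/... even though no v2 directory exists anywhere in the repository.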

If you write something for internal use and can enforce version upgrades could you not just keep the last, say, 3 versions in the repo HEAD and delete folders v1-v20? Between go.mod and go.sum, historical dependencies will still resolve fine because they point at the old commit. If you know you will never release another patch for a particular major version I think you could just drop the directory from the repo.

For a designed library, major version changes should be a rare event.

For a collection of code that someone calls a library, all bets are off.

"For a designed library, major version changes should be a rare event."

If you properly honor semver, for any non-static library, major version changes will either be a modestly-frequent event, or you're really straightjacketing yourself with your first v1, or you're a genius who got everything right the first time and never have to make any changes.

Or, most likely, you're not honoring semver properly and kindasorta squint at your changes until they don't seem important anymore and then ship an incremental anyhow.

Go may make it somewhat more annoying than the average language to ship a new major version in some ways, but libraries either not rev'ing their major version number when they technically should have and mostly expecting to get away with it, or libraries bending themselves slowly into pretzels to stay on the same major version according to semver when they really ought to just make a new major version, are things I see in many languages.

> If you properly honor semver, for any non-static library, major version changes will either be a modestly-frequent event, or you're really straightjacketing yourself with your first v1, or you're a genius who got everything right the first time and never have to make any changes.

IMO, someone tackling a problem for the first time shouldn't be writing public libraries. At the very least, people shouldn't be using it.

Quality, stable libraries tend to be written by people who already have at least 1 or 2 iterations under their belt. Maybe not complete iterations, and rarely public, but enough experience with the nuts & bolts of the problem, including the problem of balancing concerns between caller & callee, to have a good idea about what the proper interfaces should look like to ensure long-term stability. If someone doesn't have a strong sense that they could support the v1 interface in perpetuity, then maybe don't create a separate project. Not only does it save everybody else from the pain, it often saves the author the pain of constant churn, presuming the author even uses it in more than a handful of their own projects.

For many of my own open source projects, I often have years of abandoned false starts lying around, or various just-so solutions manually copied and hacked into various other projects. Only when I feel like things have finally congealed do I then bother to write a dedicated library, whether private or public, and migrate everything to it. One way to know things are looking good is when you realize you're no longer fiddling with the API, or maybe just vacillating between two equally decent alternatives.

> you're really straightjacketing yourself with your first v1, or you're a genius who got everything right the first time and never have to make any changes

In my head, that's what v0 is for. Many people try to jump the v1 gun well before they've allowed the real world to impose itself on the unwarranted assumptions that should have been shaken out in v0.

Rarely is not never.

"For a well architect house, fires should occur very rarely. As such we should not install smoke alarms."

If it can happen it will happen, and whatever you're designing should be built with that in mind. To say nothing of an entire language's package management semantics.

There is a big gap between "breaking changes every month" and every couple of years.

That doesn't really matter. For a large enough code base, "a breaking change every couple of years" for each library in your source tree could easily be a breaking change every day.

When the surface you're looking at is "all open source code supporting this language" are you confident saying that there will only be 1 breaking change every 2 years for your tools?

Imagine you're making a GUI. I don't think there's any GUI lib in the mobile OS space that hasn't added or deprecated a major feature in the last 2 years. Android is very good at this and there's still large deprecation events (see Jetpack).

Last month I picked up a Clojure project I wrote three or four years ago and upgraded it to all its latest dependencies.

No breaking changes.

The HTML-based applications I've written going back the last six years all still work on modern browsers.

The few apps I've written for Android didn't work across all Android versions even when in active development. Android really is the best example of how miserable development can be when backwards compatibility (or really, just compatibility) isn't taken seriously.

What open source project is a designed library?

This will just pollute the API interface for the sake of maintaining backwards compatibility.

Qt hasn't broken binary compatibility since the release of Qt 5.0 in 2012. The last one before that was the release of Qt 4.0 in 2005. It's a very impressive standard they set out for themselves (and met). https://wiki.qt.io/Qt-Version-Compatibility

>There is nothing wrong with weird solutions.

I don't want to go as far as argumentum ad antiquitatem but if you break KISS and the principle of least astonishment it should probably be worth the hassle.

>Their situation is different.

Is it really, though?

No-version is incredibly dumb for the simple fact that vulnerabilities exist.

If you make it hard for tools like Snyk to work with your language I'm not going to recommend it to anyone.

I actually like Golang in specific circumstances (backend simple micro services, anything docker, and utility binaries) but they picked the versioning scheme as the, "This is the dumb hill we're going to die on," that every language has in some fashion (ex: no case statements in python, perl 6 coming soon, it's 2001 and we like it that way for php, java... just, all of it).

I mean isn’t no-version effectively just date versioning so tools like that just say “any version after $date” is patched?

Kind of off topic but that is how I name the cloned git repositories. foo-git says nothing. foo-xxxxxxx says nothing, really. foo-20200916 at least tells me when I cloned the repository, and if I really wanna know which was the last commit, I can just git log it (usually, well, always a git repository).

Only if the date is in the name.

I'm one of the people responsible for the choice of versioning for the Go protobuf module. (Sorry!)

The original Go protobuf implementation is in the module "github.com/golang/protobuf".

A couple years back, we decided that it was time to do a major overhaul of the protobuf implementation. Protobufs were one of the first major pieces of Go code outside the standard library (the second post ever on the Go blog is about it), and there were a number of design decisions that made sense at the time but which we wanted to go back and revisit. Doing all of this was not going to be possible while preserving API compatibility.

Given that, we needed a new module path. That's the core of Go modules and API changes: You change the module path when you make an incompatible change. You can do this by bumping the major version of the module (since the module path includes the version), or by changing the module path entirely. (There's another option, which I think doesn't get enough attention: You can just make the change, and force your users to deal with it. If you're okay with breaking your users when they upgrade, nothing stops you from doing this. We take backwards compatibility very seriously, so that wasn't an option for us.)

Given that we needed a new module path anyway, we took advantage of the opportunity to move to a path that isn't specific to a particular code hosting provider: "google.golang.org/protobuf". GitHub is great, we love GitHub, but we'd much rather have a provider-agnostic name for the module.

So we have a new module path. The question then was: What version do we tag the new module as when we release it? Remember that there has never been any version of "google.golang.org/protobuf" at this point; it's a completely new module.

One option was to tag it as v1.0.0. But that might be confusing, because this is "version 2" of the protobuf implementation, even if it's under a different module path. Also, if someone complains about a problem with "v1.2.0" and doesn't mention which module they're using, we won't have any way to tell which one they're talking about.

Another option was to tag it as v2.0.0. But that might be confusing, because there's never been a v1 of this package. Also, this would likely cause unnecessary confusion with "proto2" and "proto3" (versions of the protocol buffer language).

We waffled for quite a while, and eventually settled on v1.20.0, which avoids the missing v1 weirdness but also ensures that there is no overlap in versions between the two modules.

That confused people. Alas. Perhaps every option would have led to confusion.

In retrospect, perhaps we should have gone with v2.0.0 (and a module name of "google.golang.org/protobuf/v2"). Too late now; oh, well. If this was the worst decision we made in the new implementation (and so far, it looks like it may have been), I'll be quite happy.

The versioning model says you have to rename your package for breaking changes. The v2 convention is for when you want to mostly keep the old name. Protobuf package got renamed because they didn't like the old name and wanted to change it more significantly.

Why do people still say that you can’t expect quality from open source projects when there are multi-million to multi-billion dollar organizations behind them?

I don't know why people think that Google having multiple billions should mean they are willing to spend billions on everything they do. That is not sustainable. Every project has a value to that company, and any sane company will only spend at a level that matches that value.

They are paying for, say 100 engineers (just making up this number). Isn't that enough?

Because those projects are a small fraction of all OSS.

Perfection is different from quality.

Because dollars are not correlated with quality.

I think for a package versioning system to work, the ecosystem has to adhere to its principles. This is true for any package system.

I think this post is an example of a package not adhering to its principles; a major revision implies breaking changes, so it doesn't make sense to have a tool "automatically update" things for you, because there's no guarantee that it'll work.

I feel like the issue is that this post doesn't totally understand how the package versioning is meant to work in the first place; if you're changing your struct names and it has no external effects, I don't see why you'd /want/ to semver, because you're not exactly getting much value from semver'ing other than the vanity of pretty numbers. Maybe I'm missing something?

I'm curious whether there's an ideal package system that would work for this particular project, but I don't think one exists.

Exactly this.

The whole point of semver is to address the issue of backward-incompatible changes breaking downstream dependencies (well, among important other things).

What Go is doing is forcing you to use Semver the way it is intended to be used; from the semver specification:


Given a version number MAJOR.MINOR.PATCH, increment the:

    MAJOR version when you make incompatible API changes,
    MINOR version when you add functionality in a backwards compatible manner, and
    PATCH version when you make backwards compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.


If you don't want to adhere to this, just cut PATCH versions and MINOR versions.

It's an absolutely terrible idea though.

As annoying as it may seem, there's a lot of benefits to using semver the way it is intended to: it forces you, the developer, to think a little more about how you design your code.

The small amount of productivity you lose getting used to doing the proper thing (i.e. cut a MAJOR version, or spend more time designing your code so that it is more forward-thinking) will pay off in the long term.

The tiny amount of productivity you gain by not doing the right thing will come back to haunt you and make you wish you had been more rigorous.

His startup might be small right now, but if it becomes successful, their number of internal users might explode overnight and they'll suddenly wish they had used semver properly all along.

Personally, there is no "too small for semver". Feels like every time I've done the lazy thing and failed to bump a major, even if it's a small package Foo written only by me, used in Bar, also only by me, there comes a time a few weeks later of digging through to figure out what broke.

When you are crunching for a deadline, the last thing you want is "oops the tool automatically bumped all the major versions on you" when you didn't explicitly intend to. Having a dedicated flag would be nicer than grep/sed though.

How does semver handle the case where I have v2.x.y and I want to change the API but I need to experiment for a few releases?

I want to get to a v3.x.y but I need a bunch of pre-3 major versions first. The Linux kernel used to do this with odd numbers for experimental versions and even numbers for stable versions.

Go is not forcing you to use semver the way it's meant to be used. Go is adding constraints to package naming and versioning above and beyond what semver requires.

You're confusing "it should be easy to stay in the lines" with "it should be very cumbersome to draw outside the lines".

It's trivial to bump versions in countless package managers, but it's frustrating in Go. Telling this dev that he's holding it wrong seems like undue pushback. Why should this be cumbersome? I don't think accidentally taking major versions is a problem in most package managers.

It's bad to design the language to make it convenient for newbies in a way that makes it painful forever once you've written a nontrivial amount of code. Put newbie hacks in a playground where they belong.

The one criticism that seems valid to me is that the tooling should tell you if there is a new major version available (while not updating for you)

I am not a Go user, but if it is truly the case that you get no notification when a new major version is available, that could be annoying.

> The one criticism that seems valid to me is that the tooling should tell you if there is a new major version available (while not updating for you)

Only if the older major version is deprecated. I do not want to be annoyed with the existence of new major versions unless I have to. My code already works with the older version.

The older major versions are likely not getting bug fixes, and might not get security patches if they are old enough.

The major reason to have separate import paths is to support multiple major versions of the same module, which is important to not end up stuck in situations where package A wants dep D1 and package B wants dep D2, and you can't fix it.

Of course, you could handle this w/ more complexity in the package manager/build system, but having separate import paths makes it very clear, and is a very "Go"ish solution.

Since you're only doing this when you've got breaking changes coming in anyway, i.e., you've already got to fix the code, is it really such a big deal to update the import paths while you're at it? It's like a 2 second grep/replace...

That said, having Go tell you about major version upgrades available, and also an option to fix the code to use the upgraded version's import path, are good ideas.
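The separate import paths make the coexistence explicit in the source itself. An illustrative fragment (the module paths are hypothetical, so this won't compile unless such modules are actually published):

```go
package migrate

import (
	foo "github.com/example/foolib"      // v1 of the module: old API
	foov2 "github.com/example/foolib/v2" // v2: the breaking release
)

// Both major versions link into one binary, so a large codebase can
// move call sites over to foov2 incrementally rather than all at once.
func Old() string { return foo.Greeting() }
func New() string { return foov2.Greeting() }
```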

> Since you're only doing this when you've got breaking changes coming in anyway, i.e., you've already got to fix the code

Just because the module broke compat doesn't mean you were using the things that broke though. Maybe they dropped or changed functions that you didn't use.

It's interesting that your comment and xyzzy_plugh's comment make opposite points!

But you're right, in some cases the breaking changes won't break you. In any case, I do think it's worth having a dead-easy way to fix the imports, rather than having to fuss about with grep.

(And what if you could actually ship rules to transform code to the upgraded version? That would be cool...)

Then you don’t have breaking changes, though?

Breaking (and naming) is a property of the whole package, not of the subset one client uses.

> also an option to fix the code to use the upgraded versions import path, are good ideas.

In theory this sounds good but in practice major versions can be drastically incompatible, and I wouldn't expect this to work very well at all in practice.

One of the excellent features Go's use of semantic versioning enables is for lower major versions to be updated to use the higher major version implementations, as a sort of backwards-compatible shim. In this case, no tools needed.

I'll never understand HN's love for arbitrary ill-informed blogposts about topics that have already been debated at length by experts on the mailing lists.

Writing about a complaint in a blog post and saying "Tweet me" is a tacit admission that the author isn't willing to face criticism from experts and just wants to get attention from the casual uninformed masses.

That's kind of an elitist viewpoint, though. I imagine the "experts on the mailing lists" are largely contributors, and really only represent a certain class of users of the language at large.

I'm not a Go user at all, but it reminds me a bit of like back when Gnome3 took a direction completely different than what I loved about Gnome2 (Apple worship, largely), which eventually lead to projects like MATE and Cinnamon.

The go-nuts mailing list is open. Anyone can (and many do) write in to discuss concerns.

You don't have to defer completely to the authority. But if you want to be treated as one, you should be willing to engage and consult with them.

Not sure. One is for end users; the other is for software engineers. Or maybe we have reached a point where people can also call data integrity or ACID compliance an elitist view, since it is often difficult to do.

So basically this is a complaint about having to manually upgrade packages in the manifest because his small company makes lots of regular, breaking changes?

If that is the case, do you have to follow the rules of semver? (edit: that is indeed what they do) Your users are all internal; it's not like they are going to rat you out to the police for agreeing to do something different internally. It's just a couple of numbers...

I don't do go but, as described, the version system seems fair and reasonable on paper, for a system designed to avoid suddenly breaking code that depends on it.

Not defending the import versioning strategy, but I found it helpful to read Russ Cox’s writeup on the topic: https://research.swtch.com/vgo-import

Helpful? Yes, sort of. Should you have to read that to understand how the module versioning system works? Absolutely not.

The Go module system is a debacle. It's not intuitive and it's badly broken right now.

The underlying problem is hard. I’m not aware of any packaging system that isn’t a mess.

Not making apologetics here, I just don’t know what a good package system looks like.

Examples of excellent package systems:

- Haskell: Stack with stackage (mainly because of stackage)

- Rust: Cargo and crates

They are both above and beyond the rest of the space.

Having used both of these... what makes them "excellent"? What makes Cargo and crates different?

I'm not convinced that the stackage approach is scalable to larger ecosystems... Haskell has a fraction of Go's popularity.

You don't have to read it. It Just Works. If you want bugfixes and enhancements, you upgrade the package.

If you want to break your existing code, import a different package.

If Haskell followed this principle, cabal hell wouldn't exist. If Windows followed this system, DLL hell wouldn't exist.

Cabal hell hasn't existed for quite a while. See https://cabal.readthedocs.io/en/3.4/nix-local-build-overview...

I actually think the parent comment is sort of correct. If everyone was good (perfect?) about making sure that new versions of a package would work with code written against any old version of that package, then there would not have been cabal hell in really any of its forms.

Of course, if this increased the number of forks, it would increase the times we need to convert between types provided by "different" libraries - which may or may not be a problem, but isn't "a Cabal problem".

How it would actually work out in practice would depend a lot on how it shaped behavior; it sounds like srtjstjsj believes it went well for go, and I don't know enough to weigh in there in either direction.

I have no particular opinion on the main point of the comment.

I just wanted to point out that the big problems with dependencies in the Haskell ecosystem, sometimes called "cabal hell", have not existed for at least the last few years (since the introduction of v2-style builds).

Yeah, worth surfacing!

Nix doesn't solve the problem of incompatible versions in one binary. Nix only solves the problem of finding compatible versions if they exist. Cabal hell is more than one problem.

Nix-style local builds (although the name is kinda bad) don't really have anything to do with Nix; they're only the (admittedly bad) name for the new v2-* style commands (which now, at least as of cabal- and higher, are the default).

Correct. I am personally tracking this issue and I hope to do something to improve the situation


As far as I've seen, Microsoft kinda learned its lesson.

looks at watch

20 years ago.

Makes you wonder about the Go team: like, where were they exactly between 1985 and 2005?

They were non-existent, and when they did exist they didn't have a billion dollars in funding to build everything out before launch.

> If you want bugfixes and enhancements, you upgrade the package. If you want to break your existing code, import a different package.


It does kind of suck. But it’s better than npm... or rubygems... or the python ecosystem... hmm, what language’s package management system doesn’t suck?

yeah no. rubygems/python/npm are state of the art compared to go. it’s painful to watch people doing mental gymnastics to justify bad, disconnected-from-reality decisions that were made.

it’s pretty sad really if you think about it. that there are legitimate use cases where go is really really well suited for the problem.

my vote is on someone coming up with a sane approach at some point and people slowly adopting it.

Python package mgmt is awful: between the virtualenv you need, pip installing different things on different OSes, the executable sometimes not even being in your path... I don't recall how many times I had to reinstall awscli because it was broken, and on top of that you add all the problems with Python 2/3 (pip3, etc.), pip installing things as root or in your home folder...

Of all the package managers I've had to use, Python's is by far the worst.

Every time I have something in Python to use I pray that pip will install it correctly which is roughly 50% of the cases.

Yeah, the current state of Python is the worst. Especially in machine learning, it's common that a project only supports one of conda or pip installation, which are often but not always compatible. The idea of storing all dependencies in "requirements.txt" is a good one but has not been standardized across the Python community. IMO all the 2 -> 3 upgrade difficulties have slowed down the development of the Python tooling ecosystem.

this has not been my experience. what you describe usually happens for poorly packaged things.

also, there is a difference between installing packages and environment management

I take it you’ve never used npm

I take it you've never used npm for a large project that's spanned multiple years of different developers while supporting multiple versions of your application while also staying up to date with dependencies.


JAPH. :)

There’s one I like. It’s in this language from Google.

I like Maven. Fight me.

Won’t fight you - I agree. Every packaging system has stupid faults, but I generally have fewer headaches with Maven than the rest.

Say what you want about Maven the build system (I’m not going to die on that hill, even though I also prefer Maven to virtually every other build tool I’ve worked with) - but Maven Central and Maven the Package Manager are solid.

My day job is on a modular monolith that uses mvn. I'm no expert, but I've set up some integration tests, set up a build pipeline from Jenkins, configured some plugins on various steps of the lifecycle, etc., and I guess it's OK. XML takes a lot of space though, and it gets ridiculous on big code bases.

I've been working on an android app and use gradle with the kotlin dsl. I haven't really done any programming with the build steps but this is pretty sexy and doesn't need three open/close tags for every plugin.

  dependencies {
    implementation(fileTree(mapOf("dir" to "libs", "include" to listOf("*.jar"))))
  }

Gradle used to use the Groovy DSL, but with Kotlin the syntax is more consistent and auto-complete is actually useful.
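To make the verbosity point concrete, a single dependency in Maven's pom.xml (coordinates made up for illustration):

```xml
<dependencies>
  <dependency>
    <groupId>com.example</groupId>
    <artifactId>somelib</artifactId>
    <version>1.2.3</version>
  </dependency>
</dependencies>
```

versus the one-liner `implementation("com.example:somelib:1.2.3")` in the Kotlin DSL.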

I've done a ton of work with both Maven and Gradle and I feel like eventually everyone just ends up back to wanting Maven.

The problem with Gradle is that its flexibility is... dangerous, and without some constraint it can quickly lead to a lot of pain.

level 35 POM boss. residing in nexus!

Me too.


The author doesn't seem to understand the idea behind this, which I very much agree with: that a new major version should be handled as a new module. You are breaking the contract with your existing users, and upgrading will be difficult for them.

Russ Cox explains the concept in this talk from 9:20: https://youtu.be/F8nrpe0XWRg?t=560
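Concretely, cutting a breaking release means declaring a new module path; the /v2 suffix is what makes it a separate module to the toolchain. A sketch with a hypothetical module path:

```go
// go.mod for the v2 series, placed either at the repo root
// on a v2 branch or in a /v2 subdirectory:
module github.com/you/mylib/v2

go 1.16
```

Consumers then opt in explicitly by importing github.com/you/mylib/v2/... — v1 users are untouched.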

You are breaking the contract with _some of_ your existing users; upgrading will be difficult for _some of_ them.

I regularly update across major versions of packages in other languages and don't touch a thing other than the package version. I appreciate the version bump so I can do my due diligence to research the changes, but I just don't use the entire API of libraries in most cases so I'm not always affected by changes.

HN seems to be beginning a transition I recognize from proggit: thoughtful discussion by experts devolving into superficial critiques for the social media masses.

Perhaps I’m yelling at clouds, but it’s disappointing to find an article that opens with “fanboy” and closes with “suck it” making a debut on the first page of HN.

I interpreted those phrases as just a humorous writing style.

You're always going to have trouble if you try to "enforce" semantic versioning, because it's a social commitment by project maintainers, not a theorem that can't ever possibly exist about "breaking changes" (which has a fuzzy, social definition).

The best option for projects is probably to loudly declare that you are not using semantic versioning, and then use it anyway, on the good-faith basis that is the only actually possible way to use it. Declaring that you don't use it will head off trouble from people who expect it to be an impossible magic constraint, while using it will help people make sense of your version numbers.

100% agree. Semver is a _social_ construct, not a _technical_ one. There is semver the spec as originally designed, and semver as it is currently observed. Semver has become something other than what it was originally intended to be, likely because the idea of "breaking changes" ends up being far more encompassing than people wanted to admit [1]. To ignore that reality is a futile fight.

[1] - https://www.youtube.com/watch?v=tISy7EJQPzI

It feels like there are some inconsistencies in this article:

* we're using projects internally and don't want the hassle of rewriting import paths when making backwards-incompatible changes

* we don't want to just give up on semver and publish breaking changes on the v0/1 numbering scheme

I feel like you have to pick one. If it's the first, then it's actually a positive thing that explicit import path rewrites are needed in my mind at least.

I have some experience developing multi-version modules (https://github.com/GrigoriyMikhalkin/sqlboiler-paginate), and I quite disagree with this article.

> new copy of the entire codebase

What I have done in my module, and what I think is a common approach, is to move the shared code into a common submodule, which is then used by all the versioned modules.

> Allow package maintainers to specify the major version simply by updating git tags

Wouldn't that cause problems if developers needed to maintain multiple versions in parallel?

> command has no way to automatically update to a new major version

To be honest, I don't see why there should be an automatic update to a new major version, since a new major version usually means breaking changes to the API.
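That matches the toolchain's behavior today: within a major, updates are routine, while crossing a major is an explicit opt-in via the new import path. A sketch with a hypothetical module path:

```shell
# Stays within the current major: picks up new minor/patch releases only.
go get -u github.com/you/mylib

# Moving to the next major is deliberate: you ask for the new module path.
go get github.com/you/mylib/v2
```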

This seems to be an overall Google engineering problem.

Go's not alone in having breaking changes between minor versions and "what worked yesterday no longer works" problems. V8 is very similar: a very large breaking change can occur from one minor version to another.

Even without a loose "agreement" like semver, it feels like there is no thought given to "outside the bubble".

Another problem is that if a branch is used for the new version (as hashicorp's hcl/v2 does, for example), then the master or main branch is no longer the active development branch, and developers have to know to look at a different branch. Not to mention that GitHub search only works on the default branch.

> Package-Side Problems

> I also find it problematic because it breaks (in my mind) one of the most useful things about module names – they reflect the file path.

Sorry, this is incorrect: https://golang.org/ref/mod#module-path. A module path describes where to find a module, starting with its _repository root_ (github.com, golang.org, etc.) and then the subdirectory within the repository that the module is defined in (if not the root).

So, if a module lives in golang.org/username/reponame/go.mod, its module path is likely golang.org/username/reponame. If a module lives in golang.org/username/reponame/dirname/go.mod, its module path is likely golang.org/username/reponame/dirname. (and so on with a /v2 folder, etc)

I mention this because it appears that OP's major gripe in "package side problems" is that the /v2 dir "breaks" the (mis)conception that a module path describes _only_ the repo root.

(see also: multi module repositories)

> In other words, we should only increment major versions when making breaking changes

No, you can increment major versions whenever you want (though it's painful to your users). But, you _should_ increment a major version when you make a breaking change.

> I think a simple console warning would have been a better solution than forcing a cumbersome updating strategy on the community.

Could you elaborate on how a console warning solves the problem of users becoming broken when module authors make incompatible changes within a major version?

> Another problem on the client-side is that we don’t only need to update go.mod, but we actually need to grep through our codebase and change each import statement to point to the new major version:

What if you need to use v2 and v4 of golang.org/foo/bar? How would you import them both without one having a /v2 suffix and the other a /v4 suffix? Are you proposing that users should only be able to use one major version of a library at a time?
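For reference, semantic import versioning is what lets both majors coexist in one build. rsc.io/quote, the example module from the Go blog, really does publish a v1 and a v3 series:

```go
package main

import (
	"fmt"

	quote "rsc.io/quote"      // resolved against the v1.x tags
	quotev3 "rsc.io/quote/v3" // a distinct module, resolved against v3.x tags
)

func main() {
	fmt.Println(quote.Hello())
	fmt.Println(quotev3.HelloV3())
}
```

(Building this needs a `go mod init` plus network access to fetch both modules; without the path suffix there'd be no way to name the two majors separately.)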

(I assume you are talking about a user upgrading to a new major, not the package author bumping to a new major. If the latter, a grep and replace is quite approachable and is shown in the blog you linked :) )
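That replace step can be scripted in a couple of lines. A sketch against a throwaway file (the module path github.com/you/mylib is hypothetical):

```shell
# Set up a toy file that self-imports the old module path.
mkdir -p /tmp/v2demo
cat > /tmp/v2demo/main.go <<'EOF'
package main

import "github.com/you/mylib/internal/util"

var _ = util.X
EOF

# Rewrite every self-referencing import to the /v2 path.
# (-i.bak keeps a backup and works with both GNU and BSD sed.)
sed -i.bak 's|github.com/you/mylib|github.com/you/mylib/v2|g' /tmp/v2demo/main.go
grep 'github.com/you/mylib' /tmp/v2demo/main.go
# prints: import "github.com/you/mylib/v2/internal/util"
```

In a real repo you'd run the sed over `$(git ls-files '*.go')` and also update the module line with `go mod edit -module github.com/you/mylib/v2`.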

> Go makes updating major versions so cumbersome that in the majority of cases, we have opted to just increment minor versions when we should increment major versions.

I'm reminded of when the "unused imports not allowed" rule was lambasted, and then goimports was released and the conversation was snuffed out. This situation feels analogous.

You praise the toolchain and the good decisions in modules, but then hang your thesis against it on "it's cumbersome". That's a valid concern, but it's likely that a tool that makes major version upgrades easier will resolve your issue. A wholly different design certainly is not needed.

Check out https://godoc.org/golang.org/x/exp/cmd/gorelease for one tool that's under development aimed at helping version bumps. It sounds like you also need something that will create the v2 branch/directory, change all the self-referencing import paths in that branch/directory, and change the go.mod path. That sounds like an easy tool to write - I expect something like that should come out quite quickly if it doesn't already exist.


Side note: In my opinion, dependency management is a rat's nest of choices that seem good at the outset and end up with terrible consequences later on. Go modules make super well-thought-out decisions with a very, very simple design, built to last for a long time without regrets. Sometimes the right answer is to work around a small problem with some tooling or documentation rather than go in a totally different direction that will have large, sad consequences later.

That is: all choices have downsides, but it's good to choose the best choice whose downsides can largely be tackled with easy solutions like tools and docs. Decisions like "we should build a SAT solver" have sad, sad, sad consequences that can't be tool'd and doc'd away, for example.

Well, misreading the title as "God's" and clicking the link left me feeling somewhat disappointed.

Go's versioning sucks?

Welcome to the Disillusioned Fanboy Club, there's a bunch of Perl guys who eventually gave up and moved to Python waiting at the bar with some war stories for you... (I'm the one alternating beers and bourbons...)

Wait, don't we have versions already? They're written in go.mod, matched with git tags or specific commits, and committed to go.sum, aren't they?

Why does that have to change if you move from v1 to v2?

Because v2 requires you change your import paths.

I like how even the programming languages of today suffer the same technical treadmill anxiety as the frameworks and libraries they are used to author.

I guess I'm still unaware of these issues since I'm still using glide.

Can this stop being posted? The same user posted it 12h ago.

Hey dang, you may want to look into this.

OP seems to have a history of repeatedly spamming their company's posts https://news.ycombinator.com/submitted?id=lanecwagner

Might be worth shooting them an email, in case dang doesn’t see this message

If at first you don't succeed, try, try again...

Damn that sucks. I thought HN had better anti-spam methods.

I mean, technically it's a company, but really it's just my personal blog. I read the HN rules and they say you can post again after ~X hours if it didn't get off of "new" (which it didn't).

If there's an article I wrote that I think will do well, I usually give it 2 or 3 chances.
