
A Proposal for Package Versioning in Go - ArmandGrillet
https://blog.golang.org/versioning-proposal
======
pdeuchler
How many iterations of this do we have to go through? Go has lacked mature
dependency management since its inception, and this constant thrash is really
starting to make things difficult.

For all practical purposes, dependency management for programming languages is
a solved problem, with many open source examples. The only explanation for
this constant re-invention that I can come up with (and that's shared among
others I know) is that Google doesn't care about or need a dependency tool
because of their monorepo. Which is rather frustrating, since we get features
(cough aliases cough) forced down our throats when Google suddenly finds a
need for them.

~~~
merb
+1

I actually also liked dep with the vendoring approach; it is just better to CI
a repo with vendored dependencies (no more broken builds because of
internet/proxy or GitHub problems). And in Go it was/is even simple to upgrade
them. Sadly, Go is slowly moving away from that part.

The new approach is bad, because they still pull sources from GitHub, which
makes their approach unsuitable.

E: damn iPhone autocorrect

~~~
wink
> Dep was released in January 2017.

I know Go is not a very old language, but all the Go projects I was involved
in were started before that, and I haven't written any Go in like 5 months.
Still, this is too new for _all_ Go developers to have looked at it in depth
(I certainly didn't get around to trying it), and yet there's something new
here.

We're all joking about a new JavaScript framework being released every week,
but dependency management in Go feels a little like that.

I'm not a huge fan of Go anymore after my last experience (writing some web
stuff, with CRUD) after having been a huge fan (writing monitoring checks and
non-HTTP daemons), and I have no immediate plans - so I've no stake here, but
I can really only hope this annoying part (which least-bad dependency
management system will I use?) is finally over.

~~~
lobster_johnson
We've been using Dep for about 6 months. It's definitely production ready. The
interface may change a bit in the future, and of course now Dep may be
scrapped altogether, but right now it's very usable. We haven't encountered
any bugs.

------
nemo1618
I'm excited to see Go trying something new here. Something just feels clunky
about the current "manifest+lockfile" approach. There are a couple of ideas in
vgo that I really like, e.g. "incompatible versions must have different import
paths." I think rsc is justified when he compares these conventions to upper-
casing exported identifiers in Go packages: for a while it seemed weird, but
now it feels natural and obvious. In other words, Go has a history of breaking
from conventional approaches, with surprisingly consistent positive outcomes.
Some bikeshedding is inevitable if this proposal is accepted, but I hope that
the main ideas are preserved.
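As a concrete illustration of "incompatible versions must have different import paths" (the module path here is invented for the example):

```go
// Before a breaking change, clients import the unversioned path:
import "example.com/foo"

// After the breaking v2 release, the major version becomes part of
// the import path, so v1 and v2 can coexist in a single build:
import "example.com/foo/v2"
```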

~~~
wwwigham
> I'm excited to see Go trying something new here.

"Semantic import versioning" is equivalent to versioned APIs in the web world,
no? Though I have never seen anyone utilize the same concepts for language-
level packages; it certainly maps well!

~~~
Shoothe
For the record Roy Fielding is _against_ using version numbers in URLs:
[https://www.infoq.com/articles/roy-fielding-on-
versioning/](https://www.infoq.com/articles/roy-fielding-on-versioning/)

~~~
sitkack
Roy Fielding isn't a deity. The version would be encoded in the 'Accept'
content-type header. There is little distinction: one is in-band (in the path)
and one is out-of-band (in the header), but the information is still
represented.

~~~
mstade
I don’t think you have this right. First, both the URL and the headers are in-
band. Out-of-band would be you knowing that the version is v2.3.7, because you
just know. In-band is anything in the request and response – headers and all –
anything else is an assumption and as such out-of-band.

Moreover, if I understand Roy’s argument correctly, he’s arguing against
versioning URLs because they are meant to be opaque identifiers that don’t
convey any universal meaning. If you know the second segment of a URL path to
be a version, it’s because you have out-of-band knowledge that it is, not
because there’s anything in the request to say it is.

If, however, you have a header that tells what version you wish, then that is
in-band information that allows you to negotiate the content appropriately
with the server. It also means you don’t have to change URLs whenever you make
a change to your API, breaking or not. Of course, the server and client need
to understand this header (to my knowledge there’s no Accept-Version header),
and if it’s non-standard then it can be argued that it’s just as bad as
versioned URLs. Perhaps, but unless you get very granular with your versions
(and most don’t, let’s just be honest) you’re still at the mercy of what the
server chooses to present you at any given time. In fact, REST gives you no
guarantees about what you’ll receive at any given point: you may just as well
be given a picture of a cat. REST says you should be able to gracefully deal
with this. Most clients (that aren’t web browsers or curl) don’t.
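For instance, media-type versioning puts the negotiated version in a standard header rather than the URL (the host and vendor media type here are made up):

```
GET /orders/42 HTTP/1.1
Host: api.example.com
Accept: application/vnd.example.v2+json
```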

~~~
justincormack
No! Headers are much worse. There is nothing in an API response that tells you
in a standard way how to set headers, but there is something that tells you
how to follow links (URLs). Accept-Version is a non-standard header, not
defined in the HTTP spec, so that is definitely not RESTful: how do you know
whether the site supports it, or what the version numbers are? You have to
hard-code it in the client!

~~~
mstade
Arguable. At least an unknown header is likely to just be ignored, whereas a
URL that’s changed to update the version will at best cause 404s, unless you
keep the old URLs around. If those URLs are still recognized, but routed to
the later version anyway then that’s just as bad as using an unrecognized (and
likely ignored) header.

In any case, what you’re saying is the point I was trying to make in the
latter part of my post. However a typo (or rather, a missing word: “no”)
unfortunately changed my meaning entirely. :o(

It should’ve read:

> (to my knowledge there’s _no_ Accept-Version header)

Oops. My apologies for the confusion.

------
mfer
_Minimal Version Selection_

This is the piece that is very different from other dependency managers and is
worth people looking at.

Instead of trying to get the latest allowed version, it's going to get the
oldest version that satisfies the stated requirements. If you want to update
to newer versions of transitive dependencies (dependencies of dependencies)
that your direct dependencies have not updated for you, you'll need to start
tracking those yourself.
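Under vgo, "tracking those yourself" means listing the transitive module explicitly in your own go.mod so that its minimum version is raised above what your direct dependencies ask for. A sketch, with module paths and versions invented:

```
module "example.com/app"

require (
	"example.com/direct" v1.2.0
	// Normally implied by example.com/direct, but listed here
	// explicitly to force a newer minimum than it requires.
	"example.com/transitive" v1.5.3
)
```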

There's an issue touching on this at
[https://github.com/golang/go/issues/24500](https://github.com/golang/go/issues/24500)

Other package managers use the opposite of minimal version selection. Many of
them even have a "don't make me think" command to update the whole dependency
tree (e.g., `npm update`).

What do folks think of the implications of MVS, especially for transitive
dependency management?

~~~
stouset
> What do folks think of the implications of MVS, especially for transitive
> dependency management?

I can't think of more than a handful of times over the past twelve years
(primarily Ruby and Rust) where a newer package broke an existing one, and the
majority of those times the problem was a new major version that wasn't
appropriately accounted for (e.g., the author should have depended upon '~>
2.x.x' instead of '>= 2.x.x'). I'd much rather have _that_ problem addressed.
Assume semver, and make the easiest way of specifying a dependency cap it to a
major version.
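In Ruby's Bundler, for example, the difference between those two constraints is one character in the Gemfile (the gem name is invented):

```ruby
# Pessimistic constraint: allows 2.x upgrades, caps below 3.0.
gem "somegem", "~> 2.1"

# Unbounded: will happily pull in a breaking 3.0 release.
gem "somegem", ">= 2.1"
```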

On the other hand, I _regularly_ deal with the consequences of upgrading
infrequently — when you do upgrade, it's a nightmare. As a rule of thumb, the
more frequently you update your dependencies, the less net pain you
experience. Upgrading once a week is relatively painless. Upgrading once a
year (or even less frequently) is a nightmare due to the sheer volume of
changes coming in at once.

Also, having transitioned to the security side of things over the past few
years, encouraging stale dependencies just means that security patches will
never get incorporated. A stable project that releases mostly security fixes
and few feature changes will — in practice — never see its minimum version
bumped in projects that depend upon it.

My prediction is that this will just result in security patches not being
applied and the problem of upgrading dependencies will be made generally
worse. I will be happy, though, if I am proven wrong.

~~~
adrianmonk
According to my reading of it, there _is_ an update-all operation. It has just
been separated out in an attempt to give the user control over initiating it.

But I agree that procrastinating on updates creates problems. (Then again, the
entire point of version pinning is to allow you the choice to defer, but it's
still a problem.)

I think it might be worth exploring reporting as a solution. Everyone knows
that old versions are a problem, but what if I had tools to tell me how bad or
good the situation is for my project at this moment? And what if they ran by
default either periodically or as part of my build or both?

Examples of stuff it could tell me:

* Am I one minor version behind on this one library and it doesn't matter?

* Or am I a major version behind on this other library and I'm using code that isn't even supported or maintained?

* Are there security fixes that I haven't taken?

* Are there libraries that have security fixes but no release is available yet?

* How about a list of libraries that I'm not on the latest minor version of AND the latest version has been available for more than 2 weeks? (Maybe I don't want to fall behind but I don't want to be a guinea pig either.)

* Or a list of libraries that I'm not on the latest major version of and a newer major version has been available for 6 months?

Since this is important, it would be great to have real visibility into it.
Right now, on every build I've ever worked on, this is just something that
people track in their heads and assume they have a good handle on. Doing
regular releases and always taking the latest version of everything helps
somewhat, but sometimes a release gets canceled. Or maybe there's a system
that isn't being regularly worked on and doesn't have regular releases, yet
its dependencies are being updated; it is behind, but by how much?

------
teabee89
To all the skeptics, please try it out and provide constructive feedback.
Experiment with the corner cases you think won't work. Judging just by the
blogpost won't help anyone.

Personally, I'm very excited and impressed by the ability of the Go team to
innovate by diving deep and understanding every aspect of the problem at hand
and the existing solutions, instead of just blindly adopting whatever already
exists.

~~~
bigdubs
The lack of support for private repos is still a dealbreaker for both `dep`
and `vgo`.

~~~
JepZ
How is it different from using 'go get'?

~~~
Groxx
`go get` has so many other blockers for enterprise use that "support for
mirrors" barely registers. But yes: it's no different, `go get`'s lack of
mirror support is also a blocker for tons of businesses.

------
fwip
My initial reaction to Minimal Version Selection (MVS) is concern that
developers won't get security updates and bug-fix patches applied in their
dependencies.

But dependency software has been trending toward the use of lock files for a
while now - and without explicit developer intent, those won't get bugfixes
either.

I think I'm mostly concerned about how this affects transitive dependencies.
My package Foo depends on Bar_1.3.0, which depends on Baz_3.2.4. If Baz gets a
security update to Baz_3.2.5, either I need to add an explicit dependency
"Baz_3.2.5" to Foo, or wait for Bar to release 1.3.1 that depends on
Baz_3.2.5.

If go adds tooling to identify and make these transitive dependency upgrades
as easy as "npm update", then I will be a little bit less uneasy.

~~~
jerf
"But dependency software has been trending toward the use of lock files for a
while now - and without explicit developer intent, those won't get bugfixes
either."

You know, I wonder if there's something here that a next-generation language
can get in on: some sort of help to provide to the developer who says, "OK,
I'd like to upgrade this package for people, could you please help me ensure
that I'm not going to break anybody in the process?"

Possibly this line of thought terminates in very richly dependently-typed
languages, which is a bit of a utopia. But perhaps there's something in
between? Or something that can be added to an existing language like Rust?

I'm not even initially certain what that would look like. A version-aware
programming environment in which one can sensibly say "Yes, for 1.1 I upgraded
the unit test but please run the 1.0 unit tests against the 1.1 code"?

It seems like this is a growing problem and there's probably an opportunity of
some sort here.

~~~
jasonwatkinspdx
> It seems like this is a growing problem and there's probably an opportunity
> of some sort here.

I think there's an opportunity even within existing languages: more shared CI
infrastructure. Imagine if project authors had some easy way of running their
downstream consuming projects' test suites as they develop.

~~~
zbentley
It's not all the way to what you're proposing, but CPAN has a very nice means
of testing packages for system compatibility. By default, users installing new
packages run the tests for those packages. Most CPAN clients can be configured
to report those test results back to central locations. This isn't the same as
running the tests _for the consuming project_ , but it demonstrates the
feasibility of "crowdsourcing" tests for system/language-runtime-
version/other-package-version incompatibilities.

------
Groxx
I'm honestly confused about why "this package manager is a SAT solver" is
being trotted out as a bad thing. Repeatedly. Having used multiple such
package managers in the past: the runtime is _utterly_ dominated by time-to-
download or even simply disk access, not time-to-compute. Compute time has
uniformly been far below human-visible times - e.g. a few hundred dependencies
resolves in `dep` in around 100ms (for the solver) on my machine.

SAT is not a problem _at all_. Yes, you can construct a worst-case scenario
for it that will chew up a ton of CPU. In practice it simply doesn't happen,
and trying to defend against it is both a waste of effort and leads to
crippled decision-making.

~~~
secure
This is not my experience at all. Doing an “apt dist-upgrade” after not
upgrading for a week or two regularly takes _minutes_ on my i7-8700K to
resolve dependencies.

~~~
djhaskin987
Apt doesn't use a SAT solver; it only removes conflicts if it finds them:
[https://aptitude.alioth.debian.org/doc/en/ch02s03s02.html](https://aptitude.alioth.debian.org/doc/en/ch02s03s02.html)

------
room271
Yes, Rich Hickey (amongst others I am sure) has talked about precisely this
model of backwards compatibility - i.e. if you break the contract you have to
rename the thing.

This feels far better than the current model used in most languages. If you've
ever had struggles creating an uberjar you know this pain.

~~~
stephen
Agreed, after years of JVM dependency hell (which happens in any language), I
think "never make breaking changes" should be non-negotiable for publicly-
published code.

E.g. repo managers like Maven central/etc. should use binary API analysis to
reject any jar upload that has breaking changes.

My only hesitation is that, AFAIK, semantic import versioning has never been
tried at scale, so having to constantly bump imports from "com.foo.v1" to
"com.foo.v2", and deal with "app1 wants to pass com.foo.v1 objects to app2,
but it expects com.foo.v2 objects" might introduce more pain than expected.

Granted, right now app1/app2 are blithely passing around "com.foo" objects
that may/may not be compatible, but if it's an 80/20 thing, or 99/1 thing, and
most of the time you get lucky and it works, perhaps that's good enough.

But would be great to have go be the first community to try this at scale and
see how it goes. I like it.

------
twic
Okay, so I'm reading the minimal version selection algorithm article [1], and
there's either a flaw, or something I don't get.

Here's a modified version of the example graph, written as pidgin DOT [2]:

        A -> B1.2
        A -> C1.2
        B1.2 -> D1.3
        C1.2 -> D1.4
        D1.3 -> X1.1
        D1.4 -> Y1.1

The key thing here is that D1.3 depended on X1.1, but the new D1.4 depends on
Y1.1 instead. I guess X is old and busted, and Y is the new hotness.

What is the build list for A?

Russ says:

 _The rough build list for M is also just the list of all modules reachable in
the requirement graph starting at M and following arrows._

And:

 _Simplify the rough build list to produce the final build list, by keeping
only the newest version of any listed module._

The list of all modules reachable from A is B1.2, C1.2, D1.3, D1.4, X1.1, and
Y1.1, so that's the rough build list. The list of the newest versions of each
module is B1.2, C1.2, D1.4, X1.1, and Y1.1, so that's the build list.

The build list contains X1.1, even though it is not needed.

Really?

[1] [https://research.swtch.com/vgo-mvs](https://research.swtch.com/vgo-mvs)

[2]
[https://www.graphviz.org/doc/info/lang.html](https://www.graphviz.org/doc/info/lang.html)

~~~
StefanKarpinski
I believe that implicit in this scheme is that you cannot remove dependencies.
It seems possible to add new dependencies, however. Changing a dependency can
be seen as removing one and adding another, but you can only do the adding,
not the removing. This limitation could probably be patched up by resolving a
set of versions and then noticing that X1.1 is unreachable and discarding it.
The trouble with that fix would be if X1.1 forced some other package to an
unnecessarily high version even though it ends up being discarded.

------
twhb
Why not just put the full version number in the import path?

Both the manifest (dependency section) and lock files become unnecessary. It’s
DRY.
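Concretely, under this proposal an import might look something like the following (the path and syntax are invented to illustrate the idea):

```go
// Hypothetical: the exact version is part of the import path itself,
// so no separate manifest or lock file is needed.
import "example.com/foo/v1.2.3"
```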

Dependencies are specified where they’re used, improving componentization.

Upgrading and forgetting to update one location is easily fixable: tooling
already scans Go code for a list of imports; modify this to warn on version
differences, or even to update them.

Git history gets “cluttered”, but shouldn’t it be? The behavior of values in a
file is changing. This constitutes a change of requirements on the file’s
code, or at least needs a moment’s review to decide the code needn’t change.
Seeing that change in history would make tracking down any bugs it causes
easier. Besides, we’re talking more files changed in a single edit, not more
edits.

Semantic versioning is a qualitative description, not a guarantee. Due to
edge-case use or human error, every minor or patch update may be a breaking
change. It would be better to have a layer of human interpretation between
semantic versions and code changes, rather than tooling that assumes them to
always be correct.

~~~
skybrian
It doesn't scale. If you have a->b->c and _b_ version-bumps every time _c_
does, then this also means _a_ has to version-bump. So basically changing one
file means everything that depends on it, even indirectly, has to change.

Furthermore, if you have diamond dependencies, _a_ might end up version-
bumping more than once.

Regular edits would get lost in a sea of version-bumps.

~~~
twhb
If C changes and B doesn't reflect the change in its own behavior, then
there's no need for a new release of B. Just update it and group it in with
the next normal release.

If C changes and B's behavior does reflect it, then A does need to know about
it.

The "sea" is limited to times that direct dependencies change their behavior,
which is what it already is.

~~~
skybrian
You're assuming that B does regular releases. But if B is "done" it might not
happen for quite a while.

Suppose C has a security patch, and B doesn't have any new functionality to
release? Does it keep using the version without the security patch forever,
because that's what's listed in the import statement? Or does someone have to
update it?

~~~
twhb
I include “security patch” in “behavior”, subject to both conditions above.

If B isn’t otherwise updated at a similar time, then yes, I advocate
dependency update-only releases. I think this is as it should be. A project
should be able to know what code it runs, specifically.

And yes, if a project is abandoned then security patches don’t get magically
applied. Again, as it should be. The fact that you have an unmaintained
dependency is itself a problem. You can’t just auto-apply security patches and
expect things to keep working, ask any Linux distro. What needs to happen is a
fork (or dropping the dependency). Tools warning you encourages this; silently
auto-applying patches encourages everybody to separately do their own fixes
and workarounds.

~~~
skybrian
I don't think it's reasonable to require each library owner to do regular
updates, whether or not they have any changes to make themselves. This is
unnecessary busywork. Library maintainers should only be responsible for
fixing their own bugs, not responding to everyone else's bugs.

------
JepZ
I am very happy that Russ Cox et al. are still brave enough to propose new
ideas when they see something not working as intended.

------
solidsnack9000
The explicit `v2` in import paths is a case where Go has abandoned
conventional abstraction and information hiding in an area where maybe there
didn't need to be any in the first place.

------
TheDong
Sam Boyer had a take on this which is worth reading, considering he is the
current developer of dep: [https://sdboyer.io/blog/vgo-and-
dep/](https://sdboyer.io/blog/vgo-and-dep/)

~~~
i2cpsi
To preface this: I have not looked into the internal details of vgo, so it
could be the world's most interesting solution to this problem. But I feel
like this is a punch in the gut for a lot of people who spent tons of time
looking at ways to solve dependencies in Go. From his post, it looks like Russ
never really looked into dep closely, even as he gave it his "blessing" during
GopherCon. It seems like whenever an issue gets brought up in a proposal (not
always, but it seems like a lot), it gets shot down by the "higher-ups". Only
when it gets their attention, or becomes a big deal to someone (Cloudflare),
does it become important. Look, don't get me wrong: vgo could be a great
technical solution, but disregarding the work of those who actually cared
about this issue before you did is a big mistake. It will alienate people in
the community who care about improving the ecosystem for those outside of
Google. I want to thank all of those involved in dep; it's a great tool that
solved my dependency problems! Without you, we would not be having this
discussion about vgo and a better approach to dependency management in Go.

~~~
dilap
It's tough, but sometimes you need to throw away work. It's better in the long
run. I think Russ has been doing a pretty good job of giving recognition and
credit to the dep folks for their exploratory work.

------
the_mitsuhiko
I want to know what they consider the limitations of the cargo/rust approach.
It has been really nice to use as a user.

~~~
strkek
To me, the one thing I'd like to change about Cargo is the required initial
clone of a 100MB+ git repository before you can even install something.

IIRC this was because of a libgit2 issue preventing them from doing a shallow
clone, though, so there's no way around it for now.

Disclaimer: I use both Go and Rust on a daily basis and think both are nice in
their own way.

~~~
the_mitsuhiko
That’s more of an implementation detail, though. The design does not demand
that. I assume that was no concern initially, when the ecosystem was tiny.

~~~
ngrilly
I agree, but it's not a detail in practice. My feeling is that Rust is very
attentive to theoretical details, and Go to practical details. Of course it's
an over-simplification, and both approaches are pertinent and complementary
;-)

~~~
kibwen
That's the exact opposite of what the grandparent is talking about, though.
Cargo was the one saying "this works for now, we'll fix libgit2 later", which
is firmly in the practical camp. vgo is the one saying "we can't emulate other
package managers because SAT solvers are slow", which ignores that in practice
they're not, valuing strictly theoretical considerations instead (and, in
practice, Cargo doesn't even use a SAT solver anyway, so they didn't do their
homework).

~~~
ngrilly
I meant that Go usually focuses more on solving practical issues than
theoretical issues. But I have to agree that it is the exact opposite in the
example I replied to ;-)

Yes, Cargo doesn't use a SAT solver, but Cargo's source code acknowledges that
"solving a constraint graph is an NP-hard problem" and uses a "nice heuristic
to make sure we get roughly the best answer most of the time". [1]

It's not just a theoretical consideration. It can create real problems. See,
for example, "Abort crate resolution if too many candidates have been tried"
at [https://github.com/rust-lang/cargo/issues/4066](https://github.com/rust-
lang/cargo/issues/4066). I'm not saying it's a big issue, but it's something
to consider in the design space, and this is why the Go team is considering
other options.

[1] [https://github.com/rust-
lang/cargo/blob/master/src/cargo/cor...](https://github.com/rust-
lang/cargo/blob/master/src/cargo/core/resolver/mod.rs)

~~~
kibwen
Again, in practice, this has not created real problems, which Russ Cox seems
to fail to appreciate. I've been using Cargo for years, working with other
programmers for years (including programmers using large Rust codebases in
production at large companies), and teaching programmers new to Rust (both
online and off) for even longer. The number of times I have had crate
resolution abort, or found the heuristic-chosen dependencies undesirable, or
seen any other person ever complain about either of the former: zero. My
sample size is not small.

I respect Russ Cox's decision to favor different considerations for Go's
versioning story. The approach of constraining to minimal versions is not bad,
merely different (especially since the -u flag exists). But the framing of
this as solving some problem with existing package managers is simply
mistaken, as Russ would know if he had used these tools in practice, rather
than instinctively reeling at the theoretical implications.

------
munificent
_> It is of course possible to build systems that use semantic versioning
without semantic import versioning, but only by giving up either partial code
upgrades or import uniqueness. Cargo allows partial code upgrades by giving up
import uniqueness: a given import path can have different meanings in
different parts of a large build. Dep ensures import uniqueness by giving up
partial code upgrades: all packages involved in a large build must find a
single agreed-upon version of a given dependency, raising the possibility that
large programs will be unbuildable. ... Semantic import versioning lets us
avoid the choice and keep both instead._

It's a useful exercise to be critical of existing systems and see if there are
opportunities to improve that they missed. That's how progress happens.

At the same time, there is a common fallacy (especially among very smart
people) that this might be an instance of. Even if it isn't and the Go folks
came up with something brilliant, I think it's worth talking about because it
occurs a lot elsewhere. I'll call it the "Missing Third Feature".

Cox's claim boils down to. "X gives you A but not B. Y gives you B but not A.
I've just come up with Z which gives you both A and B, so it's superior to
both."

In many cases, though, the reality is. "X gives you A _and C_ but not B. Y
gives you B _and C_ but not A. I've just come up with Z which gives you A and
B, _but not C_ (because I'm likely not even aware that C exists)."

I could be wrong, but if I had to guess, C is that vgo has no ability to
express compatibility across multiple major versions.

Let's say my package foo depends on bar. The maintainers of bar add a
parameter to a function in it, as well as fixing a number of other bugs. That
signature change is a breaking change, so they rev bar to v2. My package foo
does not call that function. In order to get the bug fixes, though, I have to
change all of my imports to use bar/v2. Anyone using foo must then either fix
all of their imports of bar to v2, or end up with two versions of bar in their
application (which might cause weird behavior if values from one bar end up
flowing to another).

The key problem is that from a package's own perspective, "breaking change" is
defined conservatively — any change that _could possibly_ break even a single
user is a breaking change and necessitates a major version bump. From a
package _consumer 's_ perspective, many nominally breaking changes do not
actually break that particular consumer. Package managers like Cargo let you
express that gracefully. My package foo can say, "I depend on bar and work
with bar >1.2.3 <3.0.0" if I know that foo isn't impacted by any of the
potential breakages in bar 2.0.0 or 3.0.0. That in turn lets my package be
used in a greater number of version contexts without causing undue pain to
_my_ consumers.
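In Cargo, for example, that multi-major compatibility claim is a one-line version requirement (the crate names are invented):

```toml
[dependencies]
# foo declares it works with bar across the 1.x and 2.x major versions.
bar = ">= 1.2.3, < 3.0.0"
```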

This may not turn out to be a big deal. It's hard to tell. But my general
inclination when I'm designing a system is that if my idea seems unilaterally
better than the competition in all axes, then I strongly suspect those systems
have a feature that I am oblivious to and that mine lacks and I try to figure
out what it is they know that I don't.

~~~
yiyus
> "Missing Third Feature"

Very good observation. I also see this relatively often, but had never seen
such a clear explanation of the phenomenon and it is great to have a name for
it.

> C is that vgo has no ability to express compatibility across multiple major
> versions

This is true. However, I think (hope) it will be a non-issue. Some tool may
easily take charge of modifying import paths and go.mod files from v1 to v3
(to follow your example). This should not be, in practice, more difficult than
adding the <3.0.0 constraint in the lock file.

On the other hand, if you are importing from your foo package some other
package baz that also uses bar, and baz requires bar v1 because it is using
that function you did not care about, you may have problems using a lock file.
Either you will import the wrong version, breaking the build, or you will have
two packages with the same import path and different major versions.

It is hard to predict how all this will work in practice. Although I do not
think this particular example will be a problem, I agree it is important to
keep our eyes open for that "Missing Third Feature".

~~~
munificent
_> Some tool may easily take charge of modifying import paths and go.mod files
from v1 to v3 (to follow your example)._

No, in my example foo works with _all_ of versions 1, 2, and 3 of bar. vgo has
no way to express that. You have to pick a single major version. That's a drag
because it means anyone who wants to use foo now has to adopt this
artificially narrow constraint. Maybe they want to use bar 3.0.0 for other
reasons but I put bar/v2 in my imports in foo.

It means there are many valid package constellations that _would_ work, and
where package maintainers _know_ they would work, but the package manager
isn't smart enough to understand them.

 _> if you are importing from your foo package some other package baz that
also uses bar, and baz requires bar v1 because it is using that function you
did not care about, you may have problems using a lock file. Either you will
import the wrong version, breaking the build, or you will have two packages
with the same import path and different major versions._

In practice what this means is that the version solver picks another version
of baz until it finds a set of versions where everything is happy. This is
hard (NP-complete) in theory, but in practice it works out really well. It
gently incentivizes package maintainers to check and make sure that their
packages work with the latest versions of their dependencies, which in turn
keeps the ecosystem healthy and moving forward without being forcibly
_dragged_ forward.
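To make that concrete, here is a toy brute-force sketch of what such a solver does (the package names, candidate versions, and constraints are entirely made up): try the newest baz first, and backtrack to an older baz if no available bar satisfies its requirement.

```go
package main

import "fmt"

// constraint says which major versions of a package are acceptable.
type constraint struct {
	pkg      string
	min, max int // allowed major versions, inclusive
}

// Candidate versions for each package, newest first.
var versions = map[string][]int{
	"bar": {3, 2, 1},
	"baz": {2, 1},
}

// What each version of baz requires of bar:
// baz v2 works with bar v2-v3, baz v1 only with bar v1.
var bazNeedsBar = map[int]constraint{
	2: {"bar", 2, 3},
	1: {"bar", 1, 1},
}

// solve tries newer versions first and falls back until a
// compatible (bar, baz) pair is found.
func solve() (bar, baz int, ok bool) {
	for _, bz := range versions["baz"] {
		need := bazNeedsBar[bz]
		for _, br := range versions["bar"] {
			if br >= need.min && br <= need.max {
				return br, bz, true
			}
		}
	}
	return 0, 0, false
}

func main() {
	bar, baz, ok := solve()
	fmt.Println(bar, baz, ok) // picks the newest compatible pair: 3 2 true
}
```

Real solvers handle arbitrary graphs of such constraints (hence NP-completeness), but the backtracking idea is the same.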

~~~
yiyus
Ah, yes. Sorry, I misunderstood your example, but it is clear now. This is
indeed a potential issue and it may turn into a real problem.

------
colek42
We have just moved to using dep, yes, it exposed some issues in our dependency
tree, but it works now and I don't think our organization will be looking to
move anytime soon.

------
bishop_mandible
I feel like this is the package management equivalent to the shift from
geocentrism to heliocentrism. Yes, you can approximately predict the movements
of the planets with the geocentric model and lots of epicycles (Bundler/npm
model), but it suddenly makes so much more sense and it's so much simpler if
you assume the sun as the center of the solar system.

------
nstart
Having just built my first project in Go, I'd be happy if some form of
standardization is introduced. Everything I read was mostly along the lines of
"well there's this community standard process we use that's not official but
it might as well be". But when I started installing packages, it was clear
that this lack of focus on package versioning meant that people were simply
pulling packages from master or the latest release on GitHub, depending on
which tool you used.

Examples of how I had to deal with packages:

Yaml parsing - Used gopkg.in/yaml.v2 . What does that mean? I don't know. v2
could be one git hash today, and another tomorrow.

Slack SDK - Used nlopes/slack. dep ensure -add dumps me onto the latest
release on GitHub. That was fine, although I didn't realise that was what had
happened. The latest release is 12 commits behind master at the time of this
writing.

A CLI framework - Used urfave/cli. Same thing with dep ensure -add. Except I
realised that the last release on GitHub was eons ago. August 2017 IIRC. The
code I needed in particular had landed in October 2017 and since then there
had been many more commits to master. I just added a constraint to my
Gopkg.toml file to lock the pulling to master.
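For anyone curious, the two shapes of Gopkg.toml constraint I ended up mixing look roughly like this (the version and the package choices here are just illustrative):

```toml
# Follow tagged releases (what `dep ensure -add` gives you by default)
[[constraint]]
  name = "github.com/nlopes/slack"
  version = "0.2.0"

# Track master directly when the last tag is too old
[[constraint]]
  name = "github.com/urfave/cli"
  branch = "master"
```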

The point is that in all 3 of these examples, each person had an entirely
different way of doing things. All 3 libraries are mature in terms of
functionality, and the people behind them are experienced Go devs. I'm not
entirely sure where the "Independently, semantic versioning has become the de
facto standard for describing software versions in many language communities,
including the Go community." statement comes from (the "including the Go
community" bit), because that was not my experience.

If this tool introduces some clarity and helps push the community towards
developing and releasing more sensibly I'll be really happy. Otherwise I
foresee that instead of using semantic versioning, I'll be locking to
different git commit hashes to choose which API version I want.

~~~
favadi
> Yaml parsing - Used gopkg.in/yaml.v2 . What does that mean? I don't know. v2
> could be one git hash today, and another tomorrow.

I'm sure that you can use a fixed git hash if you want.

> Slack SDK - Used nlopes/slack. dep ensure -add dumps me onto the latest
> release on GitHub. That was fine, although I didn't realise that was what had
> happened. The latest release is 12 commits behind master at the time of this
> writing.

Do you expect the author to tag a release every time he pushes to master? And
you can use the master branch if you want.

> A CLI framework - Used urfave/cli. Same thing with dep ensure -add. Except I
> realised that the last release on GitHub was eons ago. August 2017 IIRC. The
> code I needed in particular had landed in October 2017 and since then there
> had been many more commits to master. I just added a constraint to my
> Gopkg.toml file to lock the pulling to master.

It is barely 6 months old! How often do you expect the author to release a new
version?

~~~
nstart
I'm not faulting anyone, or any of the projects. I'm pointing out the fact
that standardization of package version management from the package
maintainer's side is something the Go community hasn't agreed on.

All your replies describe workarounds. Use a git hash. Use the master branch.
I did those. The point is that even if we haven't figured out package
management within the software industry, we do have more mature practices than
this.

Do I expect the author to tag a release every time they push to master? Kind
of, actually. It depends on what's going into master. If it's typo fixes, sure,
that can wait. If it's major bug fixes, that probably should be a release. If
it's brand new features added, then yes! It has to be a release!

Right now, people are treating each commit to master as a new release.

I'm not angry or railing against anything. I'm an outsider who went through a
lot of strange things in my first project in Go. I think this line from
nlopes/slack README highlights the general package management methods:

"v0.2.0 - Feb 10, 2018

Release adds a bunch of functionality and improvements, mainly to give people
a recent version to vendor against."

All I'm saying is that it'll be great if the community does figure this out
and settles on a single agreed-upon practice.

\---

One last example:

stretchr/testify is a library I considered using for testing. Their README
encourages people to depend on the master branch. And even this PR
acknowledges the issues that has caused:
[https://github.com/stretchr/testify/pull/274](https://github.com/stretchr/testify/pull/274)
.

I agree that there are workable workarounds. But that's exactly what they are.
Workarounds.

------
jrochkind1
> exemplified by Rust’s Cargo, with tagged semantic versions, a manifest, a
> lock file, and a SAT solver to decide which versions to use.

I think Cargo got this from Ruby's Bundler, the first modern tool to do all
this? Why does Cargo get the credit! haha.

~~~
swsieber
I think it's because Cargo was the blessed solution baked into the paradigm
from the beginning. Rust gets credit for a lot of things it shouldn't (well,
not credit for being the first), because it pulls many well-done things
together nicely. It might not be novel in a lot of areas, but it seems like
it's got a lot of polish.

Or, perhaps it's brought up as example because Rust occupies a space a lot
more similar to Go than Ruby does.

------
snarfy
I was recently tasked with learning Go.

Overall I liked it, but the lack of versioning on packages was the deal
breaker for using it for any big project. The bigger the project, the more
dependencies, and the more things that can go wrong upstream.

------
deafcalculus
Why won't the simple approach of not allowing packages to break API
compatibility work? Fixed some bugs or added new functions to a library foo?
Keep calling it foo. Made an incompatible change? Call it foo2.

At least, this encourages backward compatibility. Incrementing the version
number is an easier decision to make than explicitly creating a whole new
thing. One of the awesome features of Go is how stable the language and
standard libs are. Why not expect the same of the community?

------
mappu
MVS requires import path compatibility. For semver-major changes the
/v2/-style workaround is OK, except when a package is still v0.

What happens in vgo if MY_APP depends on A and B, which both depend on
different, incompatible v0 tags of C?

I understand that A and B can declare which v0 they want in go.mod, but it
can't be solved for MY_APP.

This unfortunately isn't hypothetical...
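A rough sketch of the situation, with hypothetical module paths:

```
// go.mod of module A
require example.com/C v0.1.0 // A only compiles against C's v0.1 API

// go.mod of module B
require example.com/C v0.3.0 // v0.3 changed the API, which v0 permits

// MY_APP requires both A and B. Since both import "example.com/C"
// (there is no /v2-style suffix for v0), MVS selects the maximum,
// v0.3.0, and A breaks; you cannot build against both v0 tags at once.
```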

------
cdelsolar
I literally just switched to dep.

~~~
tcheard
dep is still the officially recommended way to go for now. Russ Cox has
mentioned in his blog posts on vgo that they will be working hard to ease the
transition from dep to the built in versioning tools.

~~~
ospider
Which means at some point we have to switch from dep to vgo. Although the
transition could be smooth, the methodology behind it has changed, and that
will not be a smooth transition for our mindset.

------
ramses0
Much love to the author for clearly thinking this through. As Git was to DVCS,
and SemVer to individual package versioning, I feel this proposal is "the one
true way" as a starting point by which all future progress will be judged.

~~~
gkya
> As GIT was to DVCS...

?

Git did not create a novel concept. For one, it was written nearly
contemporaneously with Mercurial, which is a DVCS too and uses a very similar
if not identical model. SemVer is a similar story. It's not as if, before it,
people were using /dev/random to number their releases.

I really like the fact that Go is catching up in this area, but I fail to see
the revolution everybody's talking about here.

~~~
geodel
I think 99% of comments are "wtf, they should have copied package management
xyz 5 years ago."

~~~
mseepgood
Which is how you don't find better solutions.

------
erikb
What I don't understand is why Go is not simply copying something from another
community. And yes it will be incompatible with the "old way", but what is the
"old way" with such a young language anyways. But then at least you don't need
to learn the basics again that have been solved a hundred times already. Just
copy&paste and then move on to other topics.

And while you are at it, also add real exception handling instead of asking
people to reraise errors after each and every function call. Instead, teach
people to actually handle most of the exceptions from underlying libraries,
maybe reraising a more context-appropriate one at the current abstraction
level.

~~~
crispinb
> What I don't understand is why Go is not simply copying something from
> another community

The proposal explains this pretty clearly, using Cargo as an example, and
listing what the Go team sees as its shortcomings (at least for golang). Is
there an alternative extant approach that you think they've missed?

> And while you are at it, also add real exception handling

They're not going to do this. There are philosophical disagreements between
advocates of different approaches here, and Go has very clearly picked its
side. [https://blog.golang.org/errors-are-
values](https://blog.golang.org/errors-are-values) is a good exposition of the
case. If it doesn't convince (or at least mollify) you, you probably should
either suck it up, or avoid Go where you can.

~~~
erikb
> The proposal explains this pretty clearly, using Cargo as an example, and
> listing what the Go team sees as its shortcomings (at least for golang). Is
> there an alternative extant approach that you think they've missed?

That IS what I claim is wrong. There is no reason to argue about it. Yes, each
existing solution has a problem. But so will their final solution. The
difference is that an existing solution doesn't need much time to invent; it's
already invented. And you also already know what its shortcomings will be, so
you can handle them as well. The problem is having the arrogance to think you
can come up with something better given another 10 years. Unlikely, and too
huge an investment.

> There are philosophical disagreements between advocates of different
> approaches here, and Go has very clearly picked its side.

What is this discussion anyways? When I see Go tools reraising errors from
their underlying libraries, that is a clear, community-wide bug. The user
shouldn't care about what library you use, and certainly shouldn't have to
know your choice of library's ins and outs.

Also, it's very clear that if a one-line function call requires at least 3
more lines of code, it is more spammy than Java.

Yes, there are philosophical differences, but go didn't choose any of them but
instead did something that is worse.

In some regards it's the same as with package management. The choice was made
to solve a problem that had been solved a thousand times already, in the hope
that by investing another 10 years of developer time they would come up with
something better. Well, as the statistics suggest, they failed, badly.

Would you invest 10 years of a whole community to invent a machine that keeps
your milk cool? Probably not. You would probably buy a fridge and be done with
it.

> you probably should either suck it up

Where does this arrogance come from? Are you 15 or something? A grown-up
person is confident when they have achieved something, not when they have
failed at reinventing the wheel, which many people told them is a waste of
time.

~~~
crispinb
You seem terribly emotional on this relatively trivial topic. I'll leave you
to it.

------
Keats
It says it is reproducible but depends on mutable git repositories? In an
earlier post they mentioned proxy servers to provide some kind of cache; will
there be an official one?

~~~
strkek
Well, you'll always have to trust some party to remain immutable.

A git repository can mutate, but so can a central registry or a proxy. Once a
version is cached in a registry or a proxy, nothing stops people with enough
privileges from modifying that cached version.

So really the only thing that changes is just who you trust.

~~~
Keats
crates.io is immutable: you can't edit/delete published versions. And npm
finally is as well.

> So really the only thing that changes is just who you trust.

With an immutable central registry, I only need to trust the organization
running it. Without it, my build depends on the maintainers of all my
dependencies (including transitive ones). In the case of Go that would be
trusting Google versus trusting an unknown number of random persons.

Even recently in Go there was this event:
[https://www.reddit.com/r/golang/comments/7vv9zz/popular_lib_...](https://www.reddit.com/r/golang/comments/7vv9zz/popular_lib_gobindata_removed_from_github_or_why/)
where someone deleted their account and someone else signed up and created
another repository with the same name. I believe Minimal Version Selection
will ensure that no one gets automatically updated to a new version of
potentially bad code there, though.

~~~
zbentley
If you want an artifact repository for Go, several exist with varying levels
of immutability/reliability/etc.

However, Go also has an answer to the situation of "I want dependencies
directly from the community without a caching/authorizing intermediary, but
don't want to be a victim of someone rewriting their history on
GitHub/account-squatting/whatever", and that answer is vendoring plus checking
diffs on package upgrade. That's often less convenient than using a trustable
caching registry, but it's comparing apples and oranges: a cache like that
requires a centralized solution at present, though usable distributed ones
might come about at some point; vendoring just requires some of your disk
space and a git clone.

------
solatic
I don't get this. The foundation upon which semantic versioning is built is
that the entire semantic version is handled outside the code itself, which
allows different packages to communicate how they adhere to a common standard
compared to _each other_. For instance, if I depend on A at 1.0.2, and I
depend on B at 2.3.0, even if A and B have nothing to do with one another, as
a consumer of A and B, I have a common contract with both of them, which makes
it easier to resolve my dependency requirements. In this case, it doesn't
matter if B's author never de-facto issues patch releases, because my
dependency resolver tool treats them the same.

If I break that foundational assumption by putting the major version directly
in the code, i.e. "semantic import versioning", what benefit do I have to
track minor and patch separately? Why not just replace them both with a
timestamp?

As a library packager, I fit one of the following situations:

* I don't bother with minor or patch versions at all, every release is denoted as possibly breaking. Timestamps mean nothing to me since every major release is effectively a timestamp anyway.

* I don't bother with patch versions, only minor versions. Same as before, each minor version is effectively a timestamp anyway.

* I don't bother with minor versions, only patch versions. Also effectively a timestamp.

* I bother with both minor and patch versions, but after I increase a minor version number, I never issue new patches for old minor versions, e.g. 1.0.0 -> 1.0.1 -> 1.0.2 -> 1.1.0 -> 1.1.1 -> 1.2.0. This is effectively a linear release pattern and therefore can be replaced with timestamps (if your software depends on 1.0.2, and the library packager issues 1.1.0, which is not supposed to break compatibility with 1.0.2, then why wouldn't you upgrade?).

* I bother with both minor and patch versions, and I'm committed to releasing patches for old minor versions for a certain amount of time, and therefore my linear release history conflicts with timestamps, i.e. if I release in the order 1.0.0 -> 1.0.1 -> 1.1.0 -> 1.0.2, then despite 1.0.2 having a later timestamp than 1.1.0, consumers of 1.1.0 should not "upgrade" to 1.0.2 since it may break compatibility with features introduced in 1.1.0. Implicit in this versioning practice isn't just the willingness to patch old versions of the software, but the recognition that at least a subset of your users desire to reduce risk as much as possible by avoiding even seemingly non-breaking changes while still taking patch changes. Is that even rational? Statically-linked binaries (unlike dynamically linked binaries, where patch version updates can be introduced into an environment with a pre-existing binary, and unlike networked APIs, where rolling back minor versions can impact other systems using that API, not to mention the effects on backing datastores...) in the end either will or will not compile and pass a test suite, so should dependency management for Go even bother to support this use case?

Why not have "semantic import versioning" with different major versions
supporting different import paths, and then keep track of the timestamp of
each dependency as it was used, with some "go dependencies update" command
checking for new timestamped versions (remember, none of which should
introduce breaking changes)? Keep track of which timestamped versions are used
in version control, and allow builds without updating the timestamps. This way
a) you get reproducible builds, b) it's easy to check whether updates will
(inadvertently, through a mistake of the library packager) break your build,
and to roll back if necessary, and c) it takes advantage of Go's property that
it creates statically-linked binaries to ensure that such builds are safe in
spite of the reduction in resolution vis-a-vis melding minor and patch
versions into one.

Dependency resolution is then a trivial matter of compiling a list of all
major versions semantically imported in the codebase, throwing them in a file
matched with what was at some point the latest timestamp, and optionally
checking for and updating the timestamps in the dependency resolution file. If
no code references a given major version anymore, remove it and its timestamp
from the dependency resolution file.

------
msoad
Isn't yarn locking system a proof that semantic versioning has failed?

~~~
zbentley
Not at all. Fingerprinted lockfile systems (yarn or otherwise) provide
separate benefits from semver:

\- The potential for deterministic builds. Saying "my system will work after
update because all of the updates are minor versions" is a big benefit
provided by semver, but saying "it will work after update because the bits of
the built files are the same" is a lot more assurance. "No Trespassing" signs
haven't "failed" because door locks and fences exist; it's just different
levels of effort/assurance.

\- Security/package verification.

\- A speed advantage in dependency fetching. Using a lockfile, you no longer
have to ask the registry for something that matches your semver pattern, or
make multiple round trips. For a single package that advantage is negligible,
but across multiple packages with shared dependencies, having a lockfile
improves the ability to do parallel package downloads and gives your system
knowledge of which dependencies are shared much earlier in the process.

\- Auditability/visibility. If you check your lockfile in and someone else is
having problems reproducing a bug, being able to diff lockfiles makes it very
easy to see if your environments differ. Since the full tree is serialized in
a predictable place, it's easy to diagnose situations like "you're using a
different version of package X, which is causing weird behavior because it's
depended on by a package we use". That's not semver's fault; people put '>'
without '<' in semver strings all the time, and that's no more a failing of
semver than misspellings are a failing of your text editor. Lockfiles also make it
easy to identify the opposite scenario: "our installations are identical; must
be something else on your system that's causing the problem--maybe check your
environment/kernel rev?"

Edits: the soul of wit.

------
jasonwatkinspdx
Really glad to see the change of course on this issue.

------
Communitivity
I just finished reading the actual proposal. I am very happy to see this
addressed in Go. Every time I've pitched Go the versioning has been a sticking
point, despite the many great features of Go.

A potential problem I see with this proposal though is that when I verify
everything works, and is secure, with version 2 of a package I am
automatically upgraded to version 2.1 because the import path is
'my/thing/v2/sub/package' which will grab 2.1 when it's available, if I am
understanding it right. At that point I am using a dependency I have not run
through my additional checks (e.g., static analysis), without knowing it. This
is complicated further by the proposal's suggestion that only the major
version be allowed in semantic import paths. How then can I fix the version to
exactly what I want, with exactly the amount of flexibility I want?

I am not sure, but I think that this issue is not completely addressed by the
concept of a 'high-fidelity' build, as discussed in the proposal. If the
minimally compatible version based on dependencies is 2.1, but I want to use
2.2 then the high-fidelity build won't do as it would be stuck at 2.1.

It is significantly more complex to handle, but I'd much rather see something
along the lines of 'my/thing/$v/2/1#sub.package'. This also lends itself to
things like 'my/thing/$v/latest#sub.package',
'my/thing/$v/2/latest#sub.package', and 'my/thing/$v/2/0#sub.package'.

A less complex approach might be to mandate major, minor, and patch version
numbers in the semantic import path, but allow 'x' to be used where the author
just wants the latest. Examples would then be: 'my/thing/v2.1.1/sub/package'.
This also lends itself to things like 'my/thing/v2.1.x/sub/package',
'my/thing/v2.x.x/sub/package', and 'my/thing/v2.0.x/sub/package'

I'll leave you with what I think is a relevant quote from Cool URIs Don't
Change, by Tim Berners-Lee.

"It is the duty of a Webmaster to allocate URIs which you will be able to
stand by in 2 years, in 20 years, in 200 years. This needs thought, and
organization, and commitment.

URIs change when there is some information in them which changes. It is
critical how you design them. (What, design a URI? I have to design URIs? Yes,
you have to think about it.). Designing mostly means leaving information out.

The creation date of the document - the date the URI is issued - is one thing
which will not change. It is very useful for separating requests which use a
new system from those which use an old system. That is one thing with which it
is good to start a URI. If a document is in any way dated, even though it will
be of interest for generations, then the date is a good starter.

The only exception is a page which is deliberately a "latest" page for, for
example, the whole organization or a large part of it.

[http://www.pathfinder.com/money/moneydaily/latest/](http://www.pathfinder.com/money/moneydaily/latest/)

is the latest "Money daily" column in "Money" magazine. The main reason for
not needing the date in this URI is that there is no reason for the
persistence of the URI to outlast the magazine. The concept of "today's Money"
vanishes if Money goes out of production. If you want to link to the content,
you would link to it where it appears separately in the archives as
[http://www.pathfinder.com/money/moneydaily/1998/981212.money...](http://www.pathfinder.com/money/moneydaily/1998/981212.moneyonline.html)

\-- Cool URIs Don't Change, by Tim Berners-Lee

~~~
nicpottier
Not sure you are understanding the proposal completely. You are never
automatically upgraded to a version, semantic or otherwise. The only time vgo
"chooses" a version is when you first bring it into a project with `vgo get`,
then it assumes you want to start with the latest major/minor/patch version.

From then on, you will be "pinned" to that version unless you explicitly
choose to upgrade it via an update to your go.mod or a `vgo get -u`.
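In go.mod terms (the module and dependency paths here are made up), the pin is just the require line:

```
module my/app

// Stays at v1.4.2 until you edit this line or run `vgo get -u`.
require github.com/some/dep v1.4.2
```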

As to putting even MORE in the import paths, I think the idea fits in with Tim
Berners-Lee, just that non-major versions are like "edits" to the URLs rather
than fundamental changes to the content of a page. I.e., you don't rename the
URL in your case when you fix a spelling mistake, the same as not "renaming"
your imports when you are just bringing in a patch release.

~~~
Communitivity
Ah, ok. Thank you for clarifying. In that case there is nothing to not look
forward to, and hopefully the proposal will be fully adopted soon.

------
billytetrud
How has go not solved this when it's had npm as an example for 6 years?

~~~
ovao
It’s explained in the post. To summarize: the Go team felt it would be best if
users developed the tooling to solve the problem rather than having an
official solution. This resulted in too many solutions, which is itself a non-
solution.

Pretty clearly this was a mistake by the Go team, and the blog post says as
much.

------
nine_k
In short: they follow Rust's approach. Great!

Update: no, I'm wrong.

~~~
TheDong
That is inaccurate. In short, Russ Cox has been bashing Cargo's approach of
preferring newer versions, with lockfiles and a SAT solver (or really an
approximation of one).

Apparently making the dependency manager's code simpler is a goal worth
removing developers' ability to be expressive and removing automatic security
updates.

~~~
tcheard
> Apparently making the dependency-manager's code simpler is a goal

That isn't the goal at all. The goal of Minimal Version Selection is to ensure
that the version selected is closest to that which was tested by the app and
its dependencies.

> worth removing developer's ability to be expressive and removing security
> updates.

And that isn't true either. It allows the developers of both the app and its
dependencies to express the version they have tested against, and still allows
the developer to receive security updates by explicitly increasing the
version, letting them test that the update doesn't create problems while
doing so.

It allows for version pinning without the need for a lock file.
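A toy sketch of the idea (the module names are invented): each module states the *minimum* version of each dependency it was built against, and for each dependency the build uses the maximum of those stated minimums, i.e. the oldest version that satisfies everyone.

```go
package main

import "fmt"

// requirement is one module's stated minimum for a dependency.
type requirement struct {
	module  string
	version int // simplified to a single number for the sketch
}

// mvs takes the requirement lists of the app and all its dependencies
// and selects, per module, the maximum of the stated minimums.
func mvs(requirements [][]requirement) map[string]int {
	selected := map[string]int{}
	for _, reqs := range requirements {
		for _, r := range reqs {
			if r.version > selected[r.module] {
				selected[r.module] = r.version
			}
		}
	}
	return selected
}

func main() {
	// The app needs lib >= 2; one dependency needs lib >= 4 and util >= 1.
	got := mvs([][]requirement{
		{{"lib", 2}},
		{{"lib", 4}, {"util", 1}},
	})
	fmt.Println(got["lib"], got["util"]) // 4 1
}
```

Note what this leaves out: no solver search, no "prefer newest" — newer versions of lib are simply never considered unless someone raises their stated minimum, which is exactly the pinning-without-a-lockfile behavior described above.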

