Hacker News
An Analysis of vgo (sdboyer.io)
184 points by bpineau 68 days ago | 68 comments

I think the current approach of announcing the death of `dep` before even having a stable alternative (`vgo`) is deeply flawed. With `dep` the ecosystem finally had something most of the developers agreed on, despite whatever shortcomings it had. The model was working. They could have adopted dep and improved on it. The packaging story for Golang has gone from worse to OK to worse again.

After all this progress, we still don't have a stable alternative and no clear date as to when it might be ready or even worse, the community is still not sure vgo will be the final one. Golang should be about pragmatic, practical decisions. The packaging situation is just the opposite.

I don't disagree — dep has been a huge improvement on Go's package management story. Until dep was stable, we were using Glide, which is extremely buggy.

That said, if you ignore the doubts and arguments about the sanity of its dependency resolution algorithm, I think vgo's introduction of modules is its real contribution.

Go packages have been problematic from the start, because their design naively conflates a bunch of concepts that most developers keep separate: file location, file structure, and source repository. That is, when you do 'import "github.com/foo/bar"', there's a whole bunch of conventions at play that dictate where it can be fetched from and where it should live in the file system. Aside from the fact that the $GOPATH convention is maddeningly annoying to many developers, the design has several issues, such as that an entire git repository has to be fetched in order to use a nested package, or that a package can have multiple import paths. But getting rid of $GOPATH is in itself a huge deal.
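To make the conflation concrete, here is a rough sketch (using a hypothetical package) of everything that single import line implies under the classic convention:

```
import "github.com/foo/bar/baz"
  fetched from : https://github.com/foo/bar   (the whole repository,
                                               even if you only need baz)
  must live at : $GOPATH/src/github.com/foo/bar/baz
```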

I'm personally less concerned with the new MVS algorithm (though I personally prefer the more traditional lock file approach such as that used by dep, Cargo, Bundler, NPM, etc.), as long as I can get modules. While getting modules retrofitted into dep without all the other stuff in the proposal would be lovely, I don't know if the module design is too dependent (heh) on the rest of vgo's versioning thesis to be separated out.

All of this feels like a classic case of "the perfect is the enemy of the good". Package and dependency management in Go is honestly a total mess today, and with vgo challenging all current assumptions about how to approach package management, it brings complete paralysis to the simple question of "how should I organize my dependencies right now?"

As much as I loathe JavaScript and Ruby these days as languages, using npm or bundler simply feels sane despite all of their warts. Yes, `left-pad`. Yes, they do have their issues. But seriously, they just work, and all open source packages comply with their specifications. And they can and do evolve gradually, as yarn shows.

It's just so funny how carefully the Go community seems to tread about not breaking their language, with all these Go 2 proposals and general resistance to new features — and then completely ignores this spirit to "move fast and break things" when it comes to package management.

> using npm or bundler simply feels sane

I feel the opposite. Lock files are awesome when you're developing a package, but after you release it, you are at the mercy of all your dependencies not to break semver. Each "npm install" will ignore any lock file your package was using during development.

It would be awesome if I could distribute a package-lock.json file with my npm package and have yarn/npm use it when resolving packages on fresh installations.

Yeah, this would mean that there may be duplicated libraries in your node_modules tree, each with a slightly different version, even if semver-compatible. However, people break semver too often anyway.

I want to deploy a package to npm and be sure that it will never break, ever.

Can't you do this right now by specifying exact dependency versions in package.json? Lock files should be for applications, not libraries.

Yes, but "npm install" will by default add a semver-compat package version. The vast majority of packages out there use what "npm install" gives you by default.

Furthermore, even if I use exact versions in my package.json, that doesn't stop my referenced packages from internally referencing semver-compat versions.

At the very least, you should reference exact versions at the top level. It is better than nothing.
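For what it's worth, the difference is just the version string in package.json; the package names and versions below are made up for illustration:

```json
{
  "dependencies": {
    "left-pad": "1.3.0",
    "other-dep": "^2.1.0"
  }
}
```

"1.3.0" pins that exact version, while "^2.1.0" (the range form a plain `npm install` writes by default) allows any semver-compatible 2.x release; `npm install --save-exact` writes the pinned form instead.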

> Each "npm install" will ignore any lock file your package was using during development.

That's true for npm, but not for bundler (luckily).

This is essentially what shrinkwrap does. Apparently it has issues, but it's designed to be the lock file for your package.json.

WRT modules, that seems like very confusing terminology. It seems to me that vgo calls "modules" what everyone else calls "packages", and the fact that "module" is a really overloaded term does not help.

Go already has "packages", so it's not something that can be used for what vgo calls modules.

A Go package is similar to a Java or Python package; it's just a namespace. A vgo module can be thought of as a collection of packages that have a version number and a single canonical name.
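A rough sketch of that distinction, using made-up names: one module, carrying a version, containing several packages.

```
github.com/alice/toolkit          <- the vgo module (versioned, e.g. v1.2.0)
├── go.mod                        <- declares: module github.com/alice/toolkit
├── strutil/                      <- a package: import "github.com/alice/toolkit/strutil"
└── netutil/                      <- another package in the same module
```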

I don't find the terms particularly confusing, given that Go doesn't already have anything called a module. It would really only be confusing if you're deep into another language that has "packages" and "modules". I personally don't find it difficult to context switch like that.

AFAIK for many languages a module is a unit of source code, e.g. a .py file for Python, and many modules constitute a package. And many times C/C++ are criticised for not having "modules" and for people having to #include header files which need to be maintained as a redundant module interface for the .o files. In Haskell every file generally starts with "module ModName where", and again, many modules make up a package.

Certainly it's not that big of a confusion, but still will probably be one in discussions, especially for polyglot programmers.

> They could have adopted dep and improved on it.

Not really because:

- The scope is different. dep is a package management tool. vgo is more than that. vgo intends to fully replace the go command. This is how we deprecate GOPATH.

- The underlying principles are different enough to make it difficult to "improve" on dep. It looked easier to start from scratch.


"Go += Package Versioning", Russ Cox, https://research.swtch.com/vgo-intro

"FAQ: Why is the proposal not “use Dep”?", Russ Cox, https://github.com/golang/go/issues/24301#issuecomment-37122...

dep was always written with the intention that it would be merged into or replace the go tool. And it's still not settled that the underlying principles of vgo are valid -- which this article begins to unpack.

Yes! Speaking as a sysadmin who writes Go tools from time to time along with a lot of Bash, Ruby, etc, I'm no expert in the land of Go. But designing my code to be shareable, and figuring out how to integrate it with other source code I deal with every day, is a mess. dep had seemed a very promising direction, and so I'm really frustrated to see it's been deprecated and we are still seemingly years out from a solution which may not even work well in the end. Quite honestly the situation is actively holding back Go, as it's very hard to convince developers used to lock files and well defined packaging to build anything in a language with no official story on this, even after a decade. Go is rapidly losing the momentum it had for a while now that Rust is starting to become a more stable target and given Go's inability to address package versioning.

From what I read `dep` isn't deprecated, and is right now the recommended way to track dependencies. I used `dep` in a project I started a bit ago (around the time of the announcement of `vgo`) and it's been working out just great.

That said I agree it's super frustrating, and the vgo announcement just had zero finesse. This isn't a language of experimenters hacking away, there are large production systems that run on solely Go code. Upending the main tool (`go` itself) and the versioning strategy is not to be taken lightly.

> Go is rapidly losing the momentum it had for a while now that Rust is starting to become a more stable target and given Go’s inability to address package versioning.

Here I disagree. Rust is fine for small utilities right now, but if you start building larger apps, developing with futures, tokio, and some of the other async libraries is a nightmare. The APIs are moving really quickly (`futures` hit 0.2, `tokio` is on 0.1 so I don't blame them really) and the ecosystem is still extremely immature. I'm excited for Rust, but it's not yet at the point that I want to build a complex application.

Though this is well written and I'm sure Sam is trying to set a tone of respect and appreciation of what is good in vgo so as to start on the right foot, he is also "burying the lead" in that he never gets to the point.

I think Russ did an unusually good job of explaining the ideas around vgo, grounded by using real world examples. He set a high bar there, but hopefully Sam adopts some of the same methods of very concrete examples to explain his criticisms.

As it stands this is an interesting piece of reading, but it is too high level and abstract in its criticism to be effective. Looking forward to future installments that will hopefully make Sam's criticisms clear.

From what I can tell, one piece of criticism is the inability of vgo to define incompatibilities. That is, if you know certain versions won't work, you should be able to say "never allow that version in your dependency graph". I'd love to hear more examples of this in the real world. Since vgo dependencies are basically always "pinned", this doesn't seem like that big a deal to me, but maybe I'm missing something. Perhaps this is something that occurs when two of your dependencies have different dependencies on a third package, and one of those works only with dep3@v1.2 and the other only works with dep3@v1.3, or somesuch.

Maybe there are ways around that, or maybe that's not the actual problem (again, hope Sam explains more in the future). I think there is a really interesting element to MVS that will actually drive Go culture, though (and I'm a big believer in the importance of the culture of a language), which I think may minimize the problems that Sam is talking about, for the betterment of the community.

As others have noted, I hope that Sam comes out with the rest of this series quickly so that the Go community can commit to a direction sooner rather than later and move on to actually building this new ecosystem.

One interesting thing the article doesn't seem to cover at all is what it takes to push a community in a particular direction.

It sort of asserts various things work or don't in the real world, which is strange to me, as a lot of that behavior depends on what incentives/etc exist and these are very emergent systems that change.

It doesn't seem to consider very heavily whether vgo will be successful in changing the way people operate at all (despite this kind of thing happening all the time). Instead it asserts what the future will be based on the current state of the world and random assertions about how people work, and asserts vgo won't change that in various ways. It's all very minimal actual explanation wrapped in a lot of language. It's also all random opinion, simply stated in a matter-of-fact way, with no real data cited anywhere to back it up. The only thing coming close to presenting a data-backed argument then says "we'll look at this in a future part of the series".

> he is also "burying the lead" in that he never gets to the point.

He does say this is the first of a series of articles, so I think this is an uncharitable criticism. Dependency management is a very big topic and talking about it with sophistication — which you need to do to actually solve the problem well — takes a lot of words.

> I think Russ did an unusually good job of explaining the ideas around vgo, grounded by using real world examples.

Sure, it's always easier to understand something when you talk about it in terms of small carefully-crafted examples. But with something like the emergent effects of dependency management tool UX choices on the health of a software ecosystem, you lose a lot when you limit your discussion to "here's what the algorithm does on these three toy packages".

Back when I used to manage libraries that were required by, and required, other libraries, I only ever really "knew" when a particular version of a dependency didn't work. Do you ever really know that something works? No, there may always be a bug, some way things interact poorly. From a maintainer's point of view there are things like:

1. Probably this dependency version works, because I have no reason to believe otherwise

2. This dependency existed at the time I released a version of my library

3. This dependency existed, and I used it during testing and development

4. This dependency is co-maintained, so future versions of the dependency are likely to be compatible per semantic versioning

5. My library means nothing to this dependency, and they may or may not break it based on their whims

6. We don't really talk, but if there's an issue whoever you tell first may well fix it

7. I will release dot-releases to make sure future versions of the dependency are supported

8. I have specifically confirmed certain versions of my library and the dependency are incompatible

Most of these assertions are social and incomplete. Assuming a version, once released, is never changed or retracted, then one of the few firm guarantees you can declare is incompatibility. Declaring compatibility doesn't make it so, it only indicates a desire (and it's not even clear whose desire).

On top of that, compatibility should really be more like metadata, not included in the package. Compatibility and incompatibility are a discovery process (along with security).

In practice it's fine though, because it's ultimately the integrator (the person making some actual application) who has the responsibility to ensure everything works together.

> he is also "burying the lead"

The expression is burying the lede, not lead.

Hah, thanks, learn something every day! https://www.merriam-webster.com/words-at-play/bury-the-lede-...

The basic upshot of all this is that vgo's attempt to avoid NP-completeness in the dependency resolution algorithm is solving a problem that doesn't matter in practice, and is simultaneously creating downsides that do matter in practice.

This is consistent with what I've seen with systems like Cargo. If I had to make a list of top 10 issues I run into with Cargo, theoretical scalability of the core dependency resolution algorithm wouldn't make the cut.

On the other hand, git became successful by being fast and by not solving certain of the hard theoretical problems that specialists were obsessed with solving. It's hard to know beforehand what the healthy compromise might be.

git may have become successful, but that doesn't mean it's right. I'd argue that most developers still don't really understand git. I know seasoned, senior developers who struggle when git gets into an unfamiliar state, or who are completely mystified as to how it actually works. And don't forget that git spent years improving its initially horrific UI to make it easier and more palatable to people who aren't kernel developers.

If you compare git to rival tools such as Mercurial and Fossil or even Darcs, it's pretty evident that git didn't really win on technical merit, user friendliness or indeed hardly any other metric. It was fast, and it had the dubious advantage of being better than a lot of crappy alternatives at the time, and of being the product of a famous developer. But if technical merit was how we chose software, we might all be using Mercurial today. It's quite possible we'd be better off, too.

git's punting of "certain of the hard theoretical problems" led to major shortcomings in its design. For example, git does not track branching histories. Once you merge your changes back into a branch and delete the origin branch, the path that your commits took is lost. Another example: since git's data model works entirely on repository snapshots (there are no "patches" or logical operations, only snapshots), it doesn't know about file renames. Some commands, like log and diff, do understand how to detect renames (but not out of the box!), but it's fundamentally a fuzzy-matching operation that can only work perfectly if you separate your rename commits from modification commits, which is rarely feasible.

Don't get me wrong, git has overall been a force for good, but I think it's a pretty terrible example in this particular case. In fact, I'm not sure I can think of many cases where the "worse is better" philosophy was ultimately a good idea. In every instance that comes to mind (MySQL, MongoDB, PHP), the fast-and-loose attitude ultimately came back to hurt everyone, and years were spent paying for those mistakes, plugging one hole at a time.

The original decision by the Go team to build package management into the language ("go get") was made without really understanding the problem, and this decision has been haunting us pretty much since day one, with developers having to chase unofficial package management tools of the week in order to get their work done. I really want it to be done right, even if it takes a bit longer. It's a problem worth solving well.

Git isn’t ideal, but it is right. It’s right because it’s solving source code control for a huge percentage of software projects. Something better could be even more right, and adoption would be the measure.

> git may have become successful, but that doesn't mean it's right.

Git's adoption was driven by GitHub, which won by making public repos free and private repos paid.

If bitbucket had that business model instead of the inverse we’d all think of git as that weird bad hg that Linus forces the Linux devs to use.

Git is a classic example of other factors than the software driving use.

Great, then you should be able to do even better, right?

Seriously give it a try

First we need to power up to Linus skill level, then we can apply the same community influence spells.

Git would be dead in the water if it wasn't for being written by Linus and being a requirement to interact with Linux development.

> Git isn’t ideal, but it is right. It’s right because it’s solving source code control for a huge percentage of software projects.

Your statement is only true to the degree that it's a tautology: Git's popularity demonstrates that it's right and the definition of "right" is "popular".

But if you want to use "right" to mean anything else, maybe to say something about its technical merits independent of sociological factors like the network effect, first-mover advantage, high-profile early adopters, etc., then your sentence doesn't add any information.

Personally, I think Git's user experience is an unremitting shitshow, the kind of disaster that makes one reconsider Hanlon's Razor.

The technical capabilities buried under that UX are pretty nice, though you could probably discard 1/3 of them without impacting any noticeable fraction of users.

The performance is excellent, and it's very easy for developers to underrate how much that affects user satisfaction.

And it had the good fortune to win on almost all of the sociological factors that largely determine product success.

I get the general thrust, but this argument can be used for any incumbent. `git` is popular because _github_ works. Had github's founders stumbled upon Mercurial first, we would all be in a moderately better place. But yes, at the end of the day, Git Is Good Enough (TM).

You compare vgo to git, but I think your comparison is backward: in regard to not solving "hard theoretical problems", git is more like current package managers. They don't care about the hypothetical NP-hardness of the problem. vgo is an attempt to work around this hard theoretical problem, which doesn't even matter in practice.

This rings as a wrong interpretation of history.

When git became successful, there were scant few other options with a comparable feature-set regardless of speed.

Its distributed nature, ability to easily merge and rebase sets of changes, etc, were all wonderful solutions to real problems.

I'm unconvinced that its success was because it solved fewer problems than the state of the art, but rather that it solved more. This is a stark contrast to vgo which is intentionally solving fewer problems than other modern dependency management tools.

In addition, git had to be performant enough to handle a huge repo (namely the Linux kernel, its original use case). Beyond that, I think there's no evidence it intentionally cut features or complexity to be faster.

In the case of vgo, it cuts additional user-facing features in order to be faster, but there's zero evidence that the speed matters, that there are any Go projects in the world that cannot be solved quickly enough with an approximation of a SAT solver.

My company was using Darcs at the time git was gaining popularity (mid-2008). GitHub wasn't around yet. At the time, git felt like a big step backwards technologically. It had a terrible UI, and the whole thing felt like something put together with tape and paperclips. Meanwhile, Darcs was (and still is) pure magic.

At the time, the choice of git wasn't that obvious. There were several popular, quite attractive contenders, including Mercurial, Monotone, Bazaar and GNU arch (and its forks), and it wasn't obvious who would win. But then Github arrived and changed everything, and it felt like everyone was suddenly caught up in a historic momentum whether they liked it or not.

We've had lots of those historic moments, some of them more slow-moving than others. Mac and NeXT being outmaneuvered by Windows; Lisp, Dylan and Smalltalk being relegated to the dustbin through the rise of more popular, worse languages; and so on. Not all of these developments are terrible (I love that Nginx prevailed over Apache and that Rails took over from Java app servers), but it's ridiculous how often the paperclip solutions win over the more thoughtful ones.

Mercurial was pretty close to contemporaneous. And when git was first launched it was extremely bare-bones and even more incredibly hard to use than it is now. Darcs, Monotone, and Arch were slower and I think predated git. Darcs in particular blew the doors off git as far as ease of use given the problem domain.

Git won almost entirely due to two factors.

1. The linux kernel used it so that brought some prestige.

2. Github made git hosting easy and free for a lot of people and had the cultural cachet to drive adoption.

Git has a very solid underpinning. But its user interface and lack of guard rails have always made it painful. People endure the pain because if they don't, they won't be able to use the tool that their industry has chosen. But some of us wish a different choice had been made in the DVCS arena.

It's also important to remember just how far Git was ahead of most competitors in performance. I was using monotone quite a bit around this time, and while its UI was frankly not much better than git's, it at least worked acceptably quickly. Darcs was mostly defined in the public eye by the exponential merge issue. Hg was fine, for some definition of fine, but it was enough slower than git to be annoying -- maybe not deal-breaking, but noticeable on pretty much any task.

I remember installing cogito to act as a front-end for git because very early on it seemed obvious that (a) git was going to win, and (b) it was going to win in spite of its UI, which was saying something.

Darcs was easy to use? Are you referring to the same darcs whose manual tried to explain concepts by making analogies to the simpler and better-understood theory of quantum mechanics?

> This is a stark contrast to vgo which is intentionally solving fewer problems than other modern dependency management tools.

The thing is, modern dependency management tools for development suck. They are complex and slow and terribly unreliable.

A fresh, unconventional approach is needed, and vgo may be it.

Back when DVCS was just becoming a thing, I started with darcs and rather liked its model. I soon discovered that darcs had a nasty problem of quietly accepting large binary files that it could not manage -- you only discovered that the file was too large when you needed a revert or something similar. This led me to give up on darcs, and I switched to git. I could have gone to Mercurial, but it appeared too sane and not as exotic.

(From memory) there were arch (larch, arx, ...) and monotone and others.

They were glacially slow and difficult to use. But they had very roughly the same feature set; in particular the distributed nature.

Darcs was one of the well-known examples that could be brandished.

That's the same hand. Unless I'm misunderstanding, pcwalton said that solving the hard theoretical problem doesn't even rate on his list.

> On the other hand, git became successful by being fast

In a totally different domain, which involves computing diffs for and managing gigabytes of source code....

I'm curious to hear what problems you've had with Cargo. I've found it to be incredibly reliable and easy to use compared to most other similar tools in other languages like Java, Python, C++ (if it's fair to pick on C++ here...)

My pet peeve with cargo is lack of binary dependencies.

Sure, you can get around it by having a common workspace where all build artifacts are dumped, but you still have to go for a coffee while dependencies like gtkrs-pango get compiled.

> If there are two algorithms that satisfy the same requirements, and only one is NP-complete, you pick the other one.

Problems can be NP-complete. Algorithms cannot - they can be exponential time (and if your choice is between exponential and double exponential, you pick the former!)

In practice, and especially for SAT solving, as far as I understand the state of the art is that the general case is NP-hard but we have tools that work surprisingly well in those cases that reality tends to produce. In some more academic cases, we even have heuristic solvers that are not even guaranteed to terminate, but usually do so within a couple of seconds, so you can get away with the "beginner's solution to the halting problem".

> In practice, and especially for SAT solving, as far as I understand the state of the art is that the general case is NP-hard but we have tools that work surprisingly well in those cases that reality tends to produce.

You're correct. Modern version solvers need fairly sophisticated algorithms, but it's not rocket science. It's mostly a non-problem for most users of most package managers most of the time.

The thing that worries me about this line of thinking is that as far as I know, we don't know much about why SAT solvers tend to work well in practice. So if I ever happen to hit a problem that the solver cannot solve, all we can say is "well bad luck". If the algorithm fails, I won't have any way to know why. This strikes me as a bad situation to be in.

Could anyone distill this down for the casual reader? The author has a fairly laborious style and seems to be writing for an audience that is intimately familiar with the internals of dep and vgo.

The article is about vgo, a proposal (with working prototype) to build package management into Go itself.

vgo bundles several improvements, a major one being the introduction of "modules", which are versioned collections of packages. Modules, among other features, will let us decouple the import path from the file system structure and source repository, making the awkward and much-maligned $GOPATH finally obsolete.

vgo also introduces a new dependency resolution algorithm, minimal version selection (MVS), which is different from traditional dependency resolution algorithms in that it will at any time choose the oldest allowed version that satisfies the minimum version constraints specified in the list of imported packages, where other systems will choose the newest. As MVS does not require (as with dep) a complex boolean SAT solver with potentially pathological, NP-complete cases, it is much simpler and faster to compute. However, it also has downsides. (Edited, thanks to pa7ch for correction.)

The author likes pretty much everything about vgo except the MVS algorithm, which has been deeply controversial since the proposal was first published. The author goes into explanations of why he thinks MVS is a bad fit, but this is a complicated topic, so you really have to read both the vgo proposal and this rebuttal to understand what it's all about.

The article was written by Sam Boyer, one of the designers/authors of dep. While he has been courteous about it, there are obviously emotions at play; during its development, dep seemed to have the blessing of the official Go team, and this proposal came out of the blue and probably felt like an ambush.

Minor correction: MVS requires modules to specify the minimum version to use of a module. The algorithm will then select the newest among those based on the idea that semver is forward compatible among minor versions. The name minimal version selection throws people off.

Other algorithms would select the newest release within a major version even if no module has ever tested it and it was released 5 seconds ago.

When authors do make incompatible changes and MVS selects a broken dependency, as expected, there are manual escape hatches. More complicated NP-complete algorithms would have more of an automated answer here, by allowing dependencies to have boolean constraints (greater than X, less than Y, not Z, etc.) on versions, so that there is a collaborative summation of constraints to work around authors violating semver, or bugs. To summarize poorly: it seems Sam Boyer regrets that dep didn't do what vgo does by default, but believes that a SAT solver could kinda do MVS for the happy path while still solving against the introduction of boolean constraints.

Personally, I'd like to let vgo play out a bit, I'm not convinced boolean constraints are something I really want imposed on me by my dependencies.

More info for those who are looking for more context: https://research.swtch.com/vgo https://github.com/golang/go/issues/24301

Whether you agree or disagree, it's novel research in this field and excellent technical writing, worth reading in its entirety over a cup of coffee.

Thanks for the correction; I worded that unclearly. My point is that with MVS, the solver will favour the oldest package that is allowed by the semver version constraint; given transitive dependencies on multiple packages, it will conservatively pick the oldest that satisfies all the constraints. You can bump the version you want by updating the version constraint, but unlike other tools (e.g. Cargo [1]), it will always favour the oldest allowed version.

[1] https://research.swtch.com/cargo-newest.html

Well, sort of. It's more like it has implicit versioning based on a lock file.

In other systems like Cargo and NPM you have a "lock" file which specifies the exact current version of dependencies to use, e.g. if you say you depend on "foo >=1.0" and when you build your project it actually downloads "foo 1.4", this will be written to the lock file. When foo 1.5 is released, it won't be automatically upgraded even though you said it should be fine.

vgo is exactly the same. When you first add foo, it will implicitly (via semver) assume you meant "foo >=1.0" and download the latest version - foo 1.5. That version will be written to the lock file (go.mod) and when foo 1.6 is released it won't be automatically used. Exactly the same as NPM and Cargo.

And just like NPM and Cargo you can trivially update to the latest compatible versions of your dependencies with `vgo get -u` (like `cargo update`).

It's not that different. The main differences are you can't specify maximum versions, and different major versions are treated as different packages so you can easily have them both.
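To make the go.mod comparison above concrete, here's a rough sketch of what such a file might look like (the module path, package, and versions here are made up, and vgo's go.mod syntax was still in flux at the time of this thread):

```
module example.com/myapp

require (
    // Recorded when foo was first added at its then-latest release,
    // v1.5.0. MVS will keep using v1.5.0 even after v1.6.0 is
    // published, until you run `vgo get -u` (or edit this line).
    github.com/foo/foo v1.5.0
)
```

In that sense go.mod plays the role of both manifest and lock file: the stated versions are minimums, and because MVS never silently selects anything newer, the build stays reproducible without a separate lock file.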

MVS has another quirk, which is that there are no version ranges. Only minimum versions.[1]

It relies completely on the community adopting "Semantic Import Versioning", which is basically "v2 is any breaking API change, which must live alongside v1 because it's actually a totally different import". So it blends some of the pros and cons of dep/cargo-like package management with those of npm's.

It's... interesting. And I love that it's not an easy answer either way - it's a significant alternative, and it's bringing up lots of new(ish) ideas and approaches that have kinda fallen on the side. That said, I've had concerns about vgo from the beginning, primarily around the "does this make sense in an ecosystem". Package managers are not just "this algorithm is efficient", they have major code-culture effects (and the ecosystem can support or destroy the package manager's goals). This article is hitting most of my concerns squarely on the head, in quite a bit more detail than I've managed to figure out.

Also, https://medium.com/@sdboyer/so-you-want-to-write-a-package-m... is very good. sdboyer has been thinking about this stuff for quite a while - emotions may be there, but he has a LOT of experience backing it up.

[1]: To work off / echo pa7ch's comment here: it picks the highest minimum when there are conflicts in transitive dependencies. So if you use A (which needs C v1.1+) and B (which needs C v1.3+) you get C v1.3, even though v1.4 is available. You can upgrade to v1.4, but only by putting "C at v1.4" in your top-level dependencies list (which the tool automates for you) to override those values, and it's assumed safe because it's still importable, therefore still v1-compliant.
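The "highest minimum" rule in the footnote above is simple enough to sketch in a few lines. This is an illustrative toy, not vgo's actual implementation or API - real MVS walks the module graph and compares full semver strings, but the core selection rule is just a per-dependency maximum over stated minimums:

```go
package main

import "fmt"

// buildList is a toy sketch of MVS's core rule: every module states a
// *minimum* version for each dependency, and the build selects, per
// dependency, the maximum of all stated minimums -- never the newest
// release available on the registry.
func buildList(minimums map[string][]int) map[string]int {
	selected := make(map[string]int)
	for dep, mins := range minimums {
		best := 0
		for _, v := range mins {
			if v > best {
				best = v
			}
		}
		selected[dep] = best
	}
	return selected
}

func main() {
	// A requires C >= v1.1, B requires C >= v1.3; C v1.4 exists
	// but is ignored because nobody asked for it.
	reqs := map[string][]int{"C": {1, 3}} // minor versions only, for brevity
	fmt.Println(buildList(reqs)["C"])     // prints 3, i.e. C v1.3
}
```

Upgrading to v1.4 then just means adding your own minimum of 4 at the top level, which the max-of-minimums rule picks up automatically.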

For an introduction to vgo and to see the other side of argument, watch the presentation from Russ Cox: https://www.youtube.com/watch?v=F8nrpe0XWRg

I was sceptical, but now I'm kind of sold on the idea after watching the presentation.

Anyone have a bullet-pointed summary of the "crux" of what the author dislikes about MVS?

I find MVS to be a very natural and simple solution to versioning - I feel like it's much closer to how people actually manage their dependencies in practice.

After an admittedly low-effort skim of this article, most of the arguments seem to be that the author doesn't think it "feels right". Is there something more concrete hidden in here?

It's a long post, but I don't think he actually gives the reasons; rather, he intends to write several additional posts that will.

Honestly, Russ Cox published his first post about vgo on February 20 (https://research.swtch.com/vgo). It's been almost 3 months since then. It's not fair to ask the community to wait any longer. If Sam Boyer has good reasons to oppose vgo, then he should publish them now.

Russ waited over a year and a half before engaging meaningfully with the community on package management, and waited another nine months before coming up with this greenfield proposal. There's no reason to rush things now.

> There's no reason to rush things now.

Golang is hurting right now because of this issue. In fact this and the lack of a canonical GUI approach are the only two caveats I have when suggesting the use of Go.

I'm not saying to rush, but the constant churn in this space needs to end as soon as it can. For a language with as much of a focus on simplicity and stability as Go, it's embarrassing that this has been an issue for so long.

I have been using Go professionally[0] starting sometime between 1.1 and 1.4, and in that time I have used something like 4-5 different ways of handling dependencies in my repositories, based on when those projects were started and/or last overhauled. Each time I changed my approach I was following the current best practices, or so I believed. It's madness, and it has to end sometime.

[0]: Started playing with it in 2009, in fun projects I'll use whatever works and/or is fun and/or gets the job done soonest. In professional projects I'm much slower to adopt new tools.

> Golang is hurting right now because of this issue.

Go was hurting _three years ago_ because of this issue. The pain was tractable and critical. The community made a thorough and good-faith attempt to end the churn with `dep` -- which was summarily dismissed, ignored in part and whole.

I don't see why `vgo` should get the fast track now, when the core team has been dragging their feet on the issue for years.

> The community made a thorough and good-faith attempt to end the churn with `dep` -- which was summarily dismissed, ignored in part and whole.

I've used `dep` on a few projects and I don't mind it. If that's what we use that's fine with me.

> I don't see why `vgo` should get the fast track now, when the core team has been dragging their feet on the issue for years.

I think you misunderstand my point. I'm not throwing my weight behind `vgo`, I'm throwing my weight behind making a long term decision of any kind.

I get that it's important to take your time and get things right. That way you can avoid starting down one path and wasting everyone's time, then switching and declaring that everyone follow you down this one, and so on, and so on...

Oh wait - that happened anyway.

> There's no reason to rush things now.

I disagree. I want things to move forward as soon as possible. I've been waiting so long for a dependency management solution and for GOPATH deprecation!

Russ came with his "greenfield" proposal because he was not convinced that dep could become the long term solution to dependency management in Go.

Sam and the whole dep team have been working on package management for more than a year now. If they see important issues with vgo, I think they should be able to explain them succinctly in a post, at least to convince other gophers to hold on.

i think one fundamental problem is that for most ecosystems, versioning is almost entirely arbitrary, manual and dissociated from language semantics.

the elm package manager seems to take a step in the right direction where changing a function signature will force a version bump if you want to publish the update. (iirc)

are there other languages/ecosystems that do something interesting with versioning?

We're working on similar tooling for Rust: https://github.com/rust-lang-nursery/rust-semverver
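The elm-style check mentioned above can be sketched quite simply. This is a hypothetical toy (the function names and signature encoding are made up, and real tools like elm's publish step or rust-semverver inspect actual type information): diff the exported API of two releases and derive the minimum required semver bump.

```go
package main

import "fmt"

// requiredBump compares the exported API of two releases, each modelled
// as a map from function name to a string encoding of its signature.
// A removed or changed signature is breaking (major); additions alone
// are minor; an identical API needs at most a patch bump.
func requiredBump(before, after map[string]string) string {
	for name, sig := range before {
		newSig, ok := after[name]
		if !ok || newSig != sig {
			return "major" // removed or changed signature: breaking
		}
	}
	if len(after) > len(before) {
		return "minor" // only additions
	}
	return "patch"
}

func main() {
	v1 := map[string]string{"Parse": "func(string) (AST, error)"}
	v2 := map[string]string{"Parse": "func([]byte) (AST, error)"}
	fmt.Println(requiredBump(v1, v2)) // prints "major"
}
```

A real checker would also have to consider type definitions, struct fields, and behavioural changes that don't show up in signatures, which is where this mechanical approach stops helping and semver goes back to being a human judgement call.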

This badly needs a TL;DR. If there's something seriously wrong with vgo's dependency versioning then it should be possible to explain it in less than 100k words.

> If there's something seriously wrong with vgo's dependency versioning then it should be possible to explain it in less than 100k words.

That is a very lazy argument.

It's perfectly reasonable to claim something is flawed for numerous complex reasons. It's equally reasonable to claim that a complex flaw may be difficult to explain concisely.

Most importantly though, I don't think Sam is claiming that there's an obvious fatal flaw in vgo, but rather that it is not the ideal choice given the problem space.

To make that more subtle argument, it is necessary to fully define the alternatives and problem space, which is what a lot of the words are attempting to do.

The tl;dr would be along the lines of "vgo knowingly makes a set of tradeoffs which I think are worse than the tradeoffs that are possible with another hypothetical option, which I am also proposing"... but that tl;dr is not really a very useful one, and attempting to condense a more meaningful one will lose a bit too much nuance I think.

The post could stand to have an introduction that lays out its arguments at a high level, as is hammered into students at every level of English class.

I am afraid it is because a lot of expectations were built up that the dep author would reveal some rather serious flaws with vgo. But since he has nothing explicit, many thousands of words are required to cover that up.
