A Proposal for Package Versioning in Go (golang.org)
448 points by ArmandGrillet on March 26, 2018 | 322 comments



How many iterations of this do we have to go through? Go hasn't had mature dependency management since its inception and this constant thrash is really starting to make things difficult.

For all practical purposes dependency management for programming languages is a solved problem, with many open source examples. The only explanation for this constant re-invention that I can come up with (and that's shared among others I know) is that Google doesn't care/need a dependency tool because of their mono repo. Which is rather frustrating since we get features (cough aliases cough) forced down our throats when Google suddenly finds a need for them.


Actually I think the process shows a strength of the Go community. The initial lack of a dependency management tool certainly had to do with the practices at Google - as the Go creators were Google employees, Google's processes certainly shaped their needs.

But I am very happy that they did not pick a random management system just because they "had to have one". We can see enough bad examples in the wild. They left this aspect out of the language standard to tackle the challenge later, with proper consideration.

As far as I understand the details, Russ Cox has presented a very thoroughly worked-out spec, and it seems to solve a lot of problems that other dependency management tools have. It is also completely compatible with Go so far, and fully optional.


>> But I am very happy that they did not pick a random management system just because they "had to have one"

But they have, multiple times? First it was just giving people links to godep when people complained in GitHub issues. Then it was a tacit "we like this" about gps and glide. Then it was a pretty-much-official blessing of dep. Now it's a super-official-now-we-mean-it implementation of vgo. I recognize this as a pattern because I do it all the time: software engineers constantly rewrite things from the ground up when they don't care about actually solving the problem and just want to play around with cool new ideas.

As far as the vgo spec goes, I'm not against it per se... I like most of the general conclusions, but none of it is new. We've had these ideas in the Go world for a while. Most of them have even been implemented! Why can't we just slowly transition an existing tool? Or maybe implement a common library that individual tools can use to solve the problem in their own way? Oh wait, we already did that and it got abandoned. Until we see a tool with a stable API that solves the 80% use case and lasts for over a year, I'm not going to get excited about the new flashy thing.


> But they have, multiple times? First it was just giving people links to godep when people complained in GitHub issues. Then it was a tacit "we like this" about gps and glide. Then it was a pretty-much-official blessing of dep. Now it's a super-official-now-we-mean-it implementation of vgo. I recognize this as a pattern because I do it all the time: software engineers constantly rewrite things from the ground up when they don't care about actually solving the problem and just want to play around with cool new ideas.

No. Golang hasn't had an official/pretty-much-official dependency management system. The community recommends some tools over others, but that's about it.

With dep it is a bit different - the creators were Google engineers and they wanted it to become the official tool, with integration as a go subcommand etc. - but that was never signed off by Pike or any other lead; it was only confirmed that it would be a possibility if the tool performs well.

With vgo it's similar to dep - a proof of concept to see _if_ it works nicely.

Regarding a library multiple tools can use: yes, that would be a nice thing. However, when designing such a tool _and_ a library at the same time, you will run into bigger problems. Generally you'll want a library designed by someone who has already solved the given problem in at least one way. That brings enough experience to make the correct architectural decisions early on, as well as a good, usable API.

I'm happy that we don't just get half-assed solutions just to have one. I have seen enough projects go to shit because a major step in the project was reconsidered a few times before management put a lock on it because it went through 'enough' revisions.


>"With dep it is a bit different - the creators were google engineeers and they wanted it to become the official tool, with integration as a go subcommand etc.pp. - but that was never signed off by Pike or any other lead, only confirmed that it would be a possibility if the tools performs well."

I'm frustrated. I started using Go during version 0.8 and have been using it since, with no dependency mgmt besides my own forks because I hated everything I tried. Then dep came along, I gave it a few months to mature, tried it, and it worked great. It solves my common use cases near perfectly.

What this blog post and proposal lack, IMHO, is a clear case for why dep isn't "good enough". All I see is handwaving:

"Early on, we talked about Dep simply becoming go dep, serving as the prototype of go command integration. However, the more I examined the details of the Bundler/Cargo/Dep approach and what they would mean for Go, especially built into the go command, a few of the details seemed less and less a good fit."

So "a few of the details seemed less and less a good fit" is a statement that is impossible to refute because it is too scare on any details.


This.

You want feedback now, on this new proposal?

Why bother?

It's patently obvious that our feedback on dep has been totally ignored.

What indication is there anyone will pay attention to any feedback now?

I can't be bothered any more.

I just want one solution to be picked, blessed, implemented and then not changed five minutes/months later.


There are many pages here that explain in detail the problems with the existing solutions. Read all of them. https://research.swtch.com/vgo

If you read the minimal version selection algorithm, it's clear that the proposed solution solves the problem in a novel way.
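
To make that concrete, here's a toy sketch of the core idea in Go (module names and the requirement table are made up; the real algorithm in the proposal also handles upgrades, downgrades and exclusions):

    package main

    import "fmt"

    // module is a specific version of a module, e.g. {"B", "v1.2.0"}.
    type module struct{ Path, Version string }

    // reqs lists the minimum versions each module version requires.
    var reqs = map[module][]module{
        {"main", ""}:    {{"A", "v1.1.0"}, {"B", "v1.2.0"}},
        {"A", "v1.1.0"}: {{"C", "v1.3.0"}},
        {"B", "v1.2.0"}: {{"C", "v1.4.0"}},
    }

    // buildList walks the whole requirement graph and keeps, per module path,
    // the greatest of the minimum versions anyone asked for. No SAT solver,
    // no upper bounds: the answer is the oldest set that satisfies everyone.
    func buildList(root module) map[string]string {
        chosen := map[string]string{}
        seen := map[module]bool{}
        var visit func(module)
        visit = func(m module) {
            if seen[m] {
                return
            }
            seen[m] = true
            for _, r := range reqs[m] {
                if r.Version > chosen[r.Path] { // toy comparison; real MVS orders semver properly
                    chosen[r.Path] = r.Version
                }
                visit(r)
            }
        }
        visit(root)
        return chosen
    }

    func main() {
        fmt.Println(buildList(module{"main", ""}))
        // A ends up at v1.1.0, B at v1.2.0, C at v1.4.0: the newest version
        // somebody *requires*, not the newest version that exists upstream.
    }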



In my eyes, it would be a warning sign if the new proposal contained too many completely new ideas. That would mean that they have not been tested in practice. That is why we needed all those projects which implemented versioning so far. But getting to an officially blessed, many-years-supported proposal, starting with a clean sheet but based on all the lessons learned, is a good approach.


Welcome to IT as a career.

Having started in the 90s, I have been through these same conversations regarding Adobe, Oracle (oh lawd Oracle), MS, Dell, and Apple to a lesser extent.

“Vendor the industry currently relies on heavily is changing things or yanking the rug again?”

The free market works by big players yanking every one else around. Guh.


The big players aren't yanking anyone around. They take responsibility by announcing things like alpha periods, beta periods, release candidates, Long Term Stability branches, etc. And enterprise developers take advantage of these, ensuring that they only code production applications against stable/LTS libraries and APIs.

It's SMB/"indie" developers who should be taking the responsibility here, I think, for relying on these officially-unstable packages in their own production systems, and especially for encouraging others to rely on such. Yes, sometimes, it's the only way to be competitive/have a https://en.wikipedia.org/wiki/Unique_selling_proposition. But that doesn't mean that it isn't their responsibility to cope with these changes. They're taking on that risk-of-change by using those unstable libraries or APIs.

Heck, this is half of the point of software Venture Capital: being able to underwrite the risk of relying on an unstable platform/ecosystem, so that you can have enough runway to recover and still finish your product if "the ground moves under you", and therefore can "safely" rely on these unstable technologies.

(And one not-oft-talked-about property of bootstrapped startups is that they have to underwrite that risk themselves. If a bootstrapped startup isn't just gambling with its founders' money, it has to behave more like an enterprise development shop, building on only stable foundations.)

Theoretically, you could move this underwriting role into the companies themselves, such that the companies themselves would monetarily cushion the blow of changes in their unstable APIs. But they'd need a pretty big interest in the involved consumers to do so. In such a world, you likely wouldn't have separate startup businesses using the BigCorps' unstable APIs; rather, you'd have a set of BigCorp incubators with large shares in most new startups. Maybe that hypothetical world would have higher https://en.wikipedia.org/wiki/Gross_world_product than our own! But I bet there'd be fewer startups, because starting a startup would be a lot more like a regular BigCorp job.


My experience with IBM products like WebSphere tells a different story. They were slow and buggy, never slow but solid.


There's no comparing proprietary products with open source projects. The Go project is a beautiful body of work and there's no call for comparing it to proprietary crapware.


>Actually I think the process shows a strength of the Go community. The initial lack of a dependency management tool certainly had to do with the practices at Google - as the Go creators were Google employees, Google's processes certainly shaped their needs.

Do you think that's a community process? I always have the impression from Go developments that all major decisions are "my way or the highway" affairs of 2-3 of the initial designers. Possibly the same ones writing those vague "exploratory" posts and dismissals of common proposals, that voidlogic captures well in his comment below.


It's far from a solved problem.

Many ecosystems haven't gotten past the "single dependency file and lock file" system where an entire project has to use a single version of each dependency. There are people splitting their project into a thousand "microservices" and making them interoperate over HTTP instead of regular function calls so that each part of their project can have its own dependencies.


Yes, it's kinda funny. I mean, package managers are probably the ones with the most solid experience in dependency management, and when I look at them I can see many different concepts at work there. Some, like for example pacman, are bold and are not afraid to break a system during an upgrade (as it is very unlikely that the upgrade will fail), and others put an enormous amount of effort into ensuring an always-functional system at any time (especially on source-based distributions, e.g. paludis).

And while I have been using development snapshots for years on my system and didn't have more bugs to cope with compared to other systems, I can understand that developers in corporate contexts need to be able to specify dependencies with versions. So I appreciate that proposal very much and hope it will lead to a better solution for everybody involved, by providing a way to use versioned dependencies without introducing too much overhead for those who don't need them.


Pacman, apt-get, et al. also have something most language-specific package managers don't: maintainers. Maintainers backport security fixes and are careful not to break binary compatibility. A random $lang-specific package is unlikely to have either; if you expect a bug fix you have to upgrade to the latest version. Developers who don't want to provide this sort of stability typically hate distro package managers.

No package manager can fix a cultural problem.


If you rely on dependencies that are not maintained you are already in trouble. Being the author/maintainer of a package comes with a responsibility. NPM already showed us what happens if you just let people do their thing.


With package managers I worry more about how I distribute my dependencies than anything else. Golang's vendor/ directory concept was a revelation for me - just ship your dependencies alongside your code.

Meanwhile, I've heard people spend ages complaining that this is totally wrong...while overseeing the setup of local Artifactory instances and wrangling proxying to do the exact same damn thing.


>Many ecosystems haven't gotten past the "single dependency file and lock file" system where an entire project has to use a single version of each dependency.

Perhaps they shouldn't go past that.


also so that a crash doesn’t stop everything like a BSOD, also so that you can use the language most appropriate for the task at hand because assuming a single language is good for everything is just wrong, also to be able to incrementally migrate things to new PAAS/build systems/etc in the future so you don’t do big bang... and plenty more

multiple library versions is one of the lowest priorities when doing microservices, I'd say

this is not to say that people don’t significantly over use micro services, but there are plenty of hard problems they solve


No kidding. I'm so impressed with most of Go, but this is just silly. I feel like Rust started out with solid dependency management in place; at least it's been there from the early days. I work a bit with Node and Ruby too. Both have solid dependency management in place. I mean, there are warts, but generally the problem is solved. Actually, now that I think about it, Yehuda Katz has had a hand in working out dependency management for all of those technologies: Rust, Ruby and Node. Maybe the Go guys should bring in Yehuda to help out. If nothing else, he could probably help them get out of the paralysis-of-analysis loop they seem to be in here.


Rust is good but not great. Being able to import two major versions of a lib into the same compilation unit (am I understanding this correctly?) is a huge win. Rust can have transitive deps that only differ by version number, but they can't come in contact with each other (don't cross the versions).

Reread the post, the authors make some great, mature realizations that I hope other languages will listen to. I don't use the language, but I use languages that Go has affected and this is a great thing.


> Rust is good but not great. Being able to import two major versions of a lib into the same compilation unit (am I understanding this correctly?) is a huge win.

The one time I needed that I found a pretty good workaround: you make two crates `helper_old` and `helper_new` and have each depend on a different version of that crate. Then you just pub re-export the crate each one depends on.

That way your other crate can now depend on two different versions of that dependency.


This is also the standard approach in the java world, where it's known as "shading."


"Shading" obviosuly comes from the plugin of the same name. But I think of shading to refer more to the practice of repacking classes into a single jar, than the renaming of package name spaces. As far as the standard approach to supporting muliple library versions in one application, i think OSGI is probably the closest thing to a workable solution.


Genius! I trust you will be authoring a new book "RustOps in Practice".


seems like that’d be a decent compiler trick


> import two major versions of a lib into the same compilation unit

When does this situation come up? What is a compilation unit in this case?


It comes up with apis that deal with data storage or IPC concerns.

You might need a handler for both versions if you have customers that use the old api and ones that use the new one. Or different departments of the same customer.

Or if you dodge that bullet but have to write a custom migration tool to get data from the old version and cook it to go into the new one. Half of the code is read only and the other write mostly but they still have to coexist if you do anything more complex than a backup/restore script.


If the newer version of some library has some breaking changes, you can migrate bit by bit instead of doing it all in one go.


If this is the only motivation for multiple dependency versions in the same compilation unit (crate), I'm not convinced. You would be trading off the simplicity of each crate specifying a range of acceptable dependency versions for specifying `N` ranges of acceptable dependencies, and requiring one of each. Much better to enforce one version per compilation unit, and if you want to get complex with your dependencies, then break your project into multiple compilation units.


That needs language support too; some languages simply do not support it. I believe this includes Go, due to how linking works. This could be fixed, but it would be hard - maybe a 2.0 thing - and it does make package management harder. And in a monorepo it doesn't happen, so Google doesn't care.


> I feel like Rust started out with solid dependency management in place, at least it's been there from the early days.

Rust had actually been through three failed package manager projects before they decided to bring in the domain experts at Tilde. Nice that at least that bit of churn has been forgotten - but yeah, it became obvious over that time just how non-trivial the problem is. Cargo makes it seem easy though ;)


> How many iterations of this do we have to go through?

Many, I think, based on what the author of dep says (https://sdboyer.io/blog/vgo-and-dep/). I don't get why rsc would just throw away the code and work that was the culmination of years of community work. Isn't rsc largely responsible for creating this problem in the first place? I'm doubtful he's going to fix the dependency problem from scratch. There's a certain pigheadedness to a lot of golang decisions, and I'm afraid this is another one of them...


I definitely see this with Go decisions too, but I think that same "pigheadedness" has also been a driving factor behind a lot of what makes Go so effective to work with. They've thrown out an awful lot of stuff, which sometimes makes me bristle but I usually have to admit they were right.

I think they threw out a bit too much, but that's only a problem if the gaps don't get filled or the compromises aren't adequately explained. What's left is a very pragmatic selection of extremely useful tools that can be used to cut a tonne of working code very, very quickly. I can't say I always like it, but I benefit tremendously from it.


I really like Go overall, especially for systems work. But there's various quirks that hurt my day to day productivity, and I think they only exist due to stubbornness.

For example, why do unused variables and imports, a style issue, cause a compiler error? This is a constant pain when commenting out lines of code during debugging or prototyping. On the other hand, the compiler doesn't care if I ignore or forget to check error values - which is almost always incorrect behavior.
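
A tiny illustration of the asymmetry (file name made up; the comments paraphrase the compiler messages):

    package main

    import (
        "fmt" // compile error: imported and not used
        "os"
    )

    func main() {
        x := 42             // compile error: x declared but not used
        os.Open("nope.txt") // accepted: the (*File, error) results are silently dropped
    }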


> For example, why do unused variables and imports, a style issue, cause a compiler error?

Here's why - https://golang.org/doc/faq#unused_variables_and_imports


Yeah for sure, it's definitely got its warts. I suspect the import thing was far easier to code as an error than to compute whether they need to be pruned. Gotta love those go build times though! With vim-go and goimports, it's handled automatically for me on save so I don't find that an issue any more. Unused variables, on the other hand...


Sometimes thrashing is good. Consider the pile of crap that is NuGet in the .NET world. Sure there's Paket, but it has nowhere near the adoption/support of the Microsoft blessed NuGet package manager.

At least the Golang team can cherry pick the best ideas & lessons-learned from these community-driven package managers.


I've been hearing this argument for about 4 years now. Pointing out that .NET has terrible dependency management does nothing for me when Google's main languages (Python, Java, C++) all have solved dependency management for their own use cases. If the Golang team has had enough time to cherry pick the best ideas and lessons learned to build a language they've had enough time to do so for dependency management.

At the very least they could have continued their hard-core hands-off approach of allowing the community to solve it. But instead they half-assed it and started capriciously anointing chosen solutions, and honestly we're probably in a worse spot than we were before dep came along.

Plus, let's be honest here, there's no excuse for constantly changing APIs and binaries. If this were truly about getting the best ideas and lessons learned, we'd just see refactors of existing tools with slow migrations to new concepts, but instead we've seen multiple complete rewrites, despite multiple efforts to build common libraries that should prevent such events!


>when Google's main languages (Python, Java, C++) all have solved dependency management for their own use cases.

Huh? Of those three, I would only consider Java "solved", and both Maven & Ant are far from the panacea of package management.

virtualenv is a community project (much like dep), is pretty new in the grand scheme of things, and isn't completely standardized (some projects use tox, some list out requirements.txt (rarely version-pinned), and others vendor).

As for C++, almost everyone does something a bit different so I don't see how that is at all relatable.


> virtualenv is a community project (much like dep)

virtualenv was a community project, but based on it, venv was created and has been part of the official Python distribution since Python 3.3, released in 2012.

https://docs.python.org/3/library/venv.html


>virtualenv is a community project (much like dep)

It doesn't stop there either: I've seen Python projects using nothing, virtualenv, venv, pip-tools, and pipenv.

I finally feel like pipenv is the true solution, and it seems very new. All that said, Python is quite old relative to Go.


Python is slowly catching up. For example in JS/Elm/Rust you have a clean approach with:

* a central package registry

* a file to describe dependencies

* a command line tool to install, build and publish packages

In Python things are more fragmented. You have:

* a central package registry

* a file to store some of your dependencies (Pipfile)

* a command line tool to install packages (Pipenv)

* a file called setup.py in which you need to specify your dependencies again (in another format). You can execute this file to build/publish packages.

I think it would allow for a much nicer user experience if pipenv/Pipfile would handle packaging/building/publishing as well.


With all the bad press npm has had, I would not put it in the clean approach bag.


For java, I would look at gradle, not maven.

Of course, Gradle hooks into the whole Maven ecosystem, but that is one of its advantages. Everybody in Java land understands Maven packages, but how you generate them doesn't matter.


Gradle degraded the dependency experience significantly from maven. Maven had a deterministic, sensible (and selectable) way of dealing with conflicts - gradle just defaults to "largest version wins". Similarly, gradle chucked the whole idea of being able to manage common sets of dependencies across multiple projects easily (the maven super pom). I mean, it's all programmable in gradle, but it pretty much ignored the excellent work maven had done except to use its repos.


Android builds have renewed my love for Maven.

If it wasn't for Android, I wouldn't bother 1 second to learn Gradle.

But I guess Groovy needs some project to keep it alive, now that no one remembers the days when JSF beans would be written in Groovy or JUGs were holding Grails talks month after month.


Don’t worry, Gradle is moving to kotlin for build scripts already. Groovy will soon be a (failed) thing of the past.


Seems like every other site is converting its Gradle builds into Bazel.


Python has several widely-used dependency management systems.

C++ has no widely-used dependency management systems.

Both languages are doing fine.


Because of the lack of dependency management C++ suffers from "mega dependencies" that include the entire kitchen sink and more.


Which C++ projects have "mega dependencies"? There is no technical reason to have those (or any other kind of dependency management problems, for that matter). I'm not sure what it's like on Windows, but most other OSes support all of the major C++ dependencies in their default package managers (or things like MacPorts on Mac OS) and it's not difficult to install anything else manually.


I think he/she means something like boost or Qt.

On Windows, Microsoft is investing in vcpkg as a package manager.

https://github.com/Microsoft/vcpkg


Interestingly, Debian/Ubuntu has standalone packages for most of the boost libs that need to be compiled.


You don't think Python also has thrash? I'm a huge fan of pyenv as it brings in manifest+lockfile+segregated install paths. But those three things are relatively new to Python and very new to get unified.


environment namespacing != dependency management

Regardless of your setup at the end of the day you're using pip to solve your dependency graph, and have been for over a decade.


I mistyped. I meant pipenv (https://docs.pipenv.org/) not pyenv. Pipenv does use pip but is a significant, recent upgrade to package management and is now the recommended tool.


There are only 24 hours in a day, and the Go team has limited resources. They focused on other issues in the last few years. Dependency management is the focus now.


> If the Golang team has had enough time to cherry pick the best ideas and lessons learned to build a language

Except where have they done that? They've had 40 years and their solution to the C binary function error code return problem was... to add a special way to return an error code. There have been superior solutions to this for 20 years at least.

How long have we known about the value of generics?

To me, the fact that they can't figure out how to solve this isn't remotely suprising.


> If the Golang team has had enough time to cherry pick the best ideas and lessons learned to build a language they've had enough time to do so for dependency management.

Most of Go's features are those that proved their value over 40 years. There are no dependency management schemes that have that kind of reputation. Good ideas take time and misfeatures are expensive.


CSP, the central premise of Golang's concurrency model, doesn't meet that bar.

If they are willing to throw out that requirement for something as fundamental, why should dependency management be held to so high a standard?


Not sure what you're referring to. CSP turns 40 this year, and it's not the central premise of Go's concurrency. Whatever the original intent, very little Go is written in true CSP fashion. Of course, even if I were wrong about these points (and I'm not), it wouldn't invalidate my original point as you suggest: just because most of Go's features are very old doesn't mean Go is forbidden from making an innovative move or two--for example, its scheduler is unprecedented (at least as far as I'm aware).


Something being formulated exactly 40 years ago in academia doesn't count as proving its value over 40 years, especially given that broad support for the concept is new, being supported by Golang and, at best, the marginally popular Clojure. The Golang FAQ specifically says that CSP is the basis of its concurrency.

Combine those two facts and I think it's fair to say Golang is willing to base things (central ones, even) on untried ideas, which directly contradicts your stated position, which was precisely that no dependency management scheme lives up to the precedent of the other accepted features of the language.

This is trivially proven incorrect by looking at other languages' dependency management schemes, which have much more broadly proven their worth.


> doesn’t count as proving its value over 40 years

The Go designers used CSP over the course of many decades via Newsqueak, Alef, Limbo and Plan 9 C's libthread.


As only a Go user and not a contributor, that describes more the paradigm I've observed for how Golang chooses the features to add.

Did the principals enjoy a feature while using it on Plan 9? Likely to be included. A feature having broad industry or academic backing, or a long history? Immaterial.


Newsqueak, Alef, Limbo and Plan 9 C failed spectacularly in the market; nothing to brag about as proven technologies.

In fact, as nice as Go might be, if it had the AT&T Research Labs stamp instead of Google's, it would have been just as successful.


> Combine those 2 facts & I think it’s fair to say golang is willing to base things (central ones even) on untried ideas, which directly contradicts your stated position

It absolutely doesn't contradict my stated position. I was clear about that in my previous post.

CSP inspired Go's features (in that sense alone it is "the basis for Go's concurrency" as alluded to by the Go FAQ), but as I mentioned, it's not integral to Go in any way, and very few programs are modeled in CSP fashion.


What are some of your specific complaints about NuGet? I've never disliked it that much. For the most part it has "just worked" for me.


In large projects, NuGet takes FOREVER to resolve package dependencies - I mean 4+ minutes on a new i7 Thinkpad w/16gb RAM/1TB ssd.

Heaven forbid you want to upgrade a package that exists in all 130 projects in the solution (not my call to have that many projects) - you may as well take a long lunch. I will try to make that the last task of the day so I can let it run for as long as it needs to.

The VS UI for NuGet is terribly buggy.


Wow! 4+ minutes.

Try to build a C++ or Android project and then go for lunch instead.

NuGET is great.


That's 4 minutes to resolve one NuGet package's dependencies - not 4 minutes for a build.


Really? I never had it take that long.


Usually it doesn't take that long - but it is still generally, unacceptably slow IMO. Last week I was working on fixing duplicate (version-mismatched) package references in our csproj files in the branch/solution, and I couldn't believe it took 4 minutes to resolve a single reference (after fixing the issue in this particular project!)


I see, I guess I have been lucky then.


NuGet / VS has no qualms with having two references to the same package but different versions in the project file (i.e. Newtonsoft.Json.dll 9.0 and 10.0). The build will likely fail though, and you'll get no visual feedback that there are two refs in the VS NuGet UI. How did we get in that state to begin with? I suspect through bad project file merges (or possibly NuGet UI bugs, can't say for sure.)


> How did we get in that state to begin with? I suspect through bad project file merges (or possibly NuGet UI bugs, can't say for sure.)

Most likely someone modified the packages for an individual project and not the solution as a whole. Always manage packages at the solution level and you're much less likely to have these issues, unless you have multiple solutions...


That sounds great when you have a small team; with 30+ devs it's hard to police.


* no local dependencies^

* 'unable to resolve x, arbitrarily picked 1.4.3'

* no lock file

* nuspec file configuration hell

* nuget v2 and v3 feeds arbitrarily going down

* installing a specific nuget version on anything other than Windows

Basically, it 'just works' for simple scenarios, it's just not very good for anything else.

If you want a comprehensive guide to why it's not awesome, look at the Paket website; they cover the issues quite clearly.

^ you can actually use local dependencies, but it's irritating and poorly supported (it still uses the global system cache, forcing a cache flush to update).


When you install a package, nuget will install old dependencies of that package.


> Go hasn't had mature dependency management since its inception and this constant thrash is really starting to make things difficult.

What constant thrash? In the 6 years since Go 1.0 the only dependency management related change I'm aware of is the addition of `vendor/`.

Update: sounds like "thrash" refers to the variety of community vendoring tools. I guess of the few I've used they all felt fairly similar/interchangeable, so it didn't feel like thrashing changes.


The constant thrash caused by the lack of a standard package management tool—even a de facto standard. Just look at the various package managers that have come and gone over the years:

1. https://github.com/tools/godep

2. https://github.com/kardianos/govendor

3. https://github.com/robfig/glock

4. https://github.com/rogpeppe/govers

5. https://github.com/Masterminds/glide

6. https://github.com/golang/dep

I'm sure I'm forgetting a few. And just when it seemed the Go team might be ready to standardize around Dep (6), they threw this wrench in the works.

I'm of two minds about vgo. I think it has some interesting ideas, but Dep works today, is widely adopted, and has nearly all the features you'd expect from a modern package manager.


> The constant thrash caused by the lack of a standard package management tool

There is a standard, and has been one from the early days. The catch is that the standard is not loved by all, so others have created competing package management systems to fit their own needs.


What is it?


go get.


Does that finally work with private repos?


It always has. You have to edit your gitconfig. On mobile, so I can't paste mine. A Google search should get you there. It did for me like 5 years ago.
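
(Back at a keyboard: the usual trick is a URL rewrite in ~/.gitconfig so that go get fetches over SSH with your key instead of anonymous HTTPS - GitHub shown here, adjust host/paths for your setup:)

    [url "ssh://git@github.com/"]
        insteadOf = https://github.com/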


So having to reset my git config, either globally impacting everything I do, or manually for every project I use go get in...

Got it.


Between "go get", Godep, Govendor, Glide, gb, Dep, and this — and I'm actually omitting a bunch of other, less popular utilities [1] — there's certainly been churn.

There's definitely a feeling among Go developers that this problem should have been solved much earlier, and that the Go team's years-long refusal to address it caused the proliferation of mediocre tools (looking at you, Glide) that in many ways made the whole situation worse.

When Dep came along, a lot of people breathed a sigh of relief, because we finally have something okay, and we can go back to being productive instead of fighting dependency management problems all day.

[1] https://github.com/golang/go/wiki/PackageManagementTools


> In the 6 years since Go 1.0 the only dependency management related change I'm aware of is the addition of `vendor/`.

This. Sure, new tools have appeared, but there's literally nothing wrong if you decide to just stick with godep or whatever you happen to be using.


There have been many packages for dependency management, which is not "official" thrash, but thrash nonetheless. Granted, mostly it's been thrash over the last few years. Especially bad since `go dep` was hailed as the king, and then recently abandoned entirely.


The blog post mentions some of this. It claims: "For a long time, we believed that the problem of package versioning would be best solved by an add-on tool, and we encouraged people to create one. The Go community created many tools with different approaches."

These many tools were the thrash, including godep, gb, glide, dep, and more.

Each of these expressed versions with different manifests; many of them could import other tools' formats, but not all. All in all, it has been a mess.

Sure, the go team has not officially done much, but that's because their stance has been borderline negligent on the topic.


I agree that the thrashing is a nuisance, but the 'reinvention' has nothing to do with Google except that Go didn't ship with a package manager by default because it wasn't useful to Google for the monorepo reason you cited. Regarding "features forced down our throats", I think that has only happened with aliases, and the community pushed back to the effect that the feature addition was halted until the community had time to properly review it. Mostly I think that was a one-off problem.

As for "it's a solved problem", is it? Until perhaps the last year, pip, npm, and most other package managers have been plagued with serious issues. I'm not fond of the situation in Go, but I think it's not unreasonable to try something different and less binding than a package manager until a solution emerges which solves the problems faced by other package managers.


+1

I actually also liked dep with the vendoring approach; it is just better to CI a repo with vendored dependencies (no more broken builds because of internet/proxy or GitHub problems). And in Go it was/is even simple to upgrade them. Sadly Go is slowly moving away from that part.

The new approach is bad, because they still pull sources from GitHub; this makes their approach unsuitable.

E: damn iPhone autocorrect


> Dep was released in January 2017.

I know Go is not a very old language, but all the Go projects I was involved in were started before that, and I haven't written any Go in like 5 months, but still - this is too new for all the Go developers to have looked at it in depth (I certainly didn't come around to trying it), and yet there's something new here.

We're all joking about a new JavaScript framework being released every week, but dependency management in Go feels a little similar.

I'm not a huge fan of Go anymore after my last experience (writing some web stuff, with CRUD) after having been a huge fan (writing monitoring checks and non-HTTP daemons), and I have no immediate plans - so I've no stake here, but I can really only hope this annoying part (which least-bad dependency management system will I use?) is finally over.


We've been using Dep for about 6 months. It's definitely production ready. The interface may change a bit in the future, and of course now Dep may be scrapped altogether, but right now it's very usable. We haven't encountered any bugs.


Agreed. Dep had its warts (20 minutes for a very simple dep ensure...), but it solved the 80% use case in a way that was least objectionable for the majority. The fact that they can't (won't?) even refactor dep to suit their new ideas or try and revitalize the gps project just tells me that they don't really care about solving this for end users and just want to have fun writing a fancy dependency graph solver. Which is cool and all, but at least give me a stable API first.


> just want to have fun writing a fancy dependency graph solver

There is no graph solver, that's the whole point.


Other package managers, including the ones mentioned in the OP, don't have SAT solvers either. This isn't a new feature of dependency resolution algorithms.


The proposal explicitly allows for a "proxy" so you don't have to pull from GitHub et al:

> Define a URL schema for fetching Go modules from proxies, used both for installing modules using custom domain names and also when the $GOPROXY environment variable is set. The latter allows companies and individuals to send all module download requests through a proxy for security, availability, or other reasons.


The major problem I have with the proxy is that now I have yet another system to keep track of, update, secure, backup and maintain.

At least with vendoring I keep all of my code pinned in one portable repo and all I really need is dep and git. If I clone or backup the repo I don't have to think about cloning the state of the proxy also to get my code to compile.

I don't see that any serious business would be very enthusiastic about adding a code proxy as yet another cog in their development pipeline. One of the main appeals of Go is that it reduces many pain points of a development environment. As a counterexample, see e.g. the insane requirements of the JavaScript web development ecosystem.

I also have some reservations about getting developers to adhere to a versioning repo layout standard - that /path/to/v2/ proposal and semantic versioning - absent any automated tools to enforce it (see cats, herding of). How many of the many Go github repos follow the recommended cmd/pkg code layout [0]? Not many. My cynical notion is that anything that's not automated out of the box is going to quickly run into the inevitable proclivity of humans to be messy.
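
(For reference, the convention the proposal asks for is that major version 2 and up carry the version in the import path itself, which is also what lets two majors coexist in one build - module name below is made up:)

    import (
        toml   "github.com/example/toml"    // v1.x: import path unchanged
        tomlv2 "github.com/example/toml/v2" // v2.x: major version in the path
    )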

Having said all this, I do think the discussion is worthwhile. My hope is that rather than completely switching dependency management systems, the discussion identifies the things which are still painful with dep and fixes them. Let's face it, the vast majority of projects out there really don't need anything more complicated than dep, so I would hate to see it abandoned.

[0] https://github.com/golang-standards/project-layout


You don't have to "pull sources from GitHub". You can host your code as a .zip file on any static web server. Look at under "Download Protocol" in this article: https://research.swtch.com/vgo-module
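
If I remember the article right, the download protocol boils down to a handful of GET endpoints that any static file server or dumb proxy can answer (module path below is made up):

    GET <base>/github.com/example/mod/@v/list         # known versions, one per line
    GET <base>/github.com/example/mod/@v/v1.2.0.info  # JSON metadata for one version
    GET <base>/github.com/example/mod/@v/v1.2.0.mod   # that version's go.mod
    GET <base>/github.com/example/mod/@v/v1.2.0.zip   # that version's source archive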

About vendoring, I think Russ Cox is aware of the issue. Look at this discussion on his website: https://research.swtch.com/vgo-module#comment-3771676619


I haven't read the spec, only the blog post from last month, but AFAIR nothing in vgo prohibits the use of a cache in front of GitHub that makes CI independent of GitHub.com availability (unless, of course, you're bumping your deps).


You're right, nothing in the spec prohibits "vendoring" the .zip modules in a directory in your project. It's purely an implementation issue. vgo already provides GOPROXY, but a built-in "vendoring" mechanism would be easier for most people. Here is one I commented on Russ' blog: https://research.swtch.com/vgo-module#comment-3772245118


I thought they were adding support for vendoring? Was it abandoned again?


I think GP is referring to the fact that dep may not remain the blessed solution for long; recently, Russ Cox posted a number of articles suggesting looking into alternatives https://research.swtch.com/vgo


> For all practical purposes dependency management for programming languages is a solved problem

Hey Phil! It's been a long time. C'mon man, when we worked together, are you telling me you never ran into the never-resolving dependency map with berkshelf and chef? I know I did many times. Locks and a manifest did not solve that. It sounds like the proposed solution should prevent that.


> For all practical purposes dependency management for programming languages is a solved problem

It's obviously not solved satisfactorily.


Yes, it is. Everything can be improved, but I see very few complaints about Cargo, and, most importantly, no complaints that would be solved by minimal version selection. (Most complaints I see are about the lack of package namespacing.)


One of the ways I'm hopeful that cargo can improve in this area is through the work being done on rust-semverver. If crates.io can either determine semver compliance or, perhaps, deny releases that are not properly semantically versioned, it would give cargo more license to select a single version to satisfy diamond dependency graphs. Those diamond dependency graphs are one of the ways that package management is famously very much unsolved.


I agree that Cargo is awesome and I would be happy to see Go follow suit, but I interpret "package management is solved" to mean "the industry has standardized on a single scheme". There does seem to be convergence in this space, but it's a relatively new phenomenon.


Yes, the industry has absolutely standardized on a single scheme: manifest plus lock as separate declarations, SAT solver, single dependency version per compilation unit.


That may be what things are converging on but in my opinion, any system that imposes a "single dependency version per compilation unit" policy is not a "real" dependency management solution. That approach creates a tremendous amount of friction because it pretty much ensures dependency hell if there are transitive dependencies with separate release cycles (and there very much should be able to be).

Edit: I just realized that we may have a different definition of "compilation unit". If you mean "single program" then my point stands. However, if you just meant some sectioned-off part of the program, then I think the drawbacks of that are far less. Basically, I think that it is useful to be able to have access to multiple versions of a datatype (e.g. if you want to write a converter from old -> new) but that use case is far more fringe.


As a sysadmin I object to the notion that old to new conversions are a "fringe" use-case. Pretty much all my coding (I'm a "devops" sysadmin-type) is exclusively "I need to wrangle all versions of thing 1 into a type of thing 2 or between several versions of thing 1".

If backwards compatibility is important, then letting me use multiple versions side-by-side in the same modules easily is damn important.


To be clear, Cargo fits the latter definition; you can't depend directly on two different versions of a package, but you can transitively.


If that's the case, it's a pretty recent development. Python and JS didn't have this until very recently. Yeah, Go is a bit late to the party, but hardly worth the dramatic criticism in this thread.


No, it's hardly a standard. It's just a common approach. The article presents another approach which differs on the three points you mention: manifest plus lock as separate declarations, SAT solver, single dependency version per compilation unit. And I think what Russ proposes is a really interesting tradeoff in the design space.


It is the _de facto_ standard and has been for many years. It is what's expected by the vast majority of software developers.


> I see very few complaints

If you live in a filter bubble (survivorship bias).


Only because the golang core team seem to insist on reinventing every wheel either without considering prior art, or at least without considering why prior art went with a particular approach.

I swear I should put together a list of these instances.


> Only because the golang core team seem to insist on reinventing every wheel either without considering prior art, or at least without considering why prior art went with a particular approach.

The popular criticism is that Go isn't reinventing enough; that it's stuck in the 70's and it's not sufficiently innovative.


I think it's two sides of the same coin.

I've heard go described as a language that doesn't try to solve any hard problems. Let's set aside for a moment whether or not that's a positive or a negative. The bigger problem is that they also seem to completely ignore everyone else's existing solutions to hard problems too, which leads to the criticism that it's stuck in the 70's.

Sure, a lot of the existing solutions to some of these problems aren't perfect or involve tradeoffs that don't work for everyone. In general though, more recent solutions have built upon the lessons of previous approaches and now we've got languages and tooling that get a hell of a lot right, straight out of the box (I'm looking at you Rust / Cargo).

But go isn't doing this. They appear to be trying to come up with de novo solutions to problems from the perspective of the 70's. Whereas everyone else is "standing on the shoulders of giants", they insist on discarding the lessons we've learned over the past fifty years and running one by one into all of the same problems that led us to the solutions we have today. As an observer from the sidelines it's painful to watch them relearn these same lessons the hard way.


I largely agree, and still for everything that Rust and company get right and for everything that Go gets wrong and for how much optimism I approach Rust and company with, I'm still far more productive in Go than any other language (even the ones with which I have much more experience). The only way I can explain this is that programming language design is something of an art form, and the integration of features and tools is far more important than the set of features and tools.


Rust's current package manager is the third attempt, the one that worked. The async saga is also approaching its third attempt before we have it working. For the module system there was a big backlash over simplifying it for new users, so the likely effect is something that is understood only by the few developers who have spent significant time on it.


That's a really good perspective. Thanks for sharing.


I think that's an inaccurate/misleading framing: the typical criticism is that it has seemed to ignore innovations from elsewhere, there's no suggestion that it needs to reinvent them. For instance, people consider generics a solved problem as there's numerous schemes invented by other languages, and critics think one of them should be able to work for Go.


Yes, but Go has specificities that make it difficult to directly use the schemes of other languages: Go is AOT-compiled (we can't use C# generics), Go favors compilation speed (we can't use C++ templates), Go lets the developer control memory layout with arrays and structs (we can't use OCaml generics), etc. Of course, Go should get inspiration from other languages, but adding generics is not as easy as simply transposing another language's implementation. There are multiple detailed proposals in the bug tracker that have failed for multiple reasons.


There are also CLU, Ada, Modula-2, Modula-3, Eiffel, Sather, BETA as possible inspirations for implementing generics, all the way back to 1976 (CLU).

As for C#, people keep forgetting .NET always had a JIT/AOT compiler since version 1.0.

Just because not everyone bothers to sign their applications and call NGEN at installation time, it doesn't mean it isn't there.

Also Mono always supported AOT compilation, it is the way Xamarin works on iOS and the deployment story on Windows Store since it was introduced on Windows 8.


I remember Mono with AOT compilation had some limitations regarding generics in the past, but maybe this has been solved since then.


Yes Mono has some limitations, but it isn't 100% like the iOS Xamarin compiler.

http://www.mono-project.com/docs/advanced/aot/

In any case, a limited form of generics is better than not having any at all.

They don't need to provide a turing complete implementation of generics, even a basic one like CLU had would already be an improvement.

Using go generate feels like the old days, writing generic code in C and C++ with pre-processor tricks.


I agree. I hope the Go team will tackle this issue, when they are done with dependency management.


Swift has essentially all the same constraints, and fretting about compilation speed with the C++ approach is hard to stomach given that one of the widely-used replacements is equivalent: manually copy-pasting code, instead of letting the compiler do it.

In any case, the truth of the complaint about generics isn't actually relevant here: the rebuttal is to "people complain about Go not reinventing enough", and they don't, as they (truthfully or otherwise) think it is ignoring already researched and widely-implemented solutions to the problem and complain about this (that is, the complaint is "uninventing", not "not enough reinventing").


I didn't mention Swift and I should have. I agree it's close enough to Go to provide useful inspiration.

But even Swift implementation of generics has its issues and tradeoffs. Look for example at this thread about "Compile-time generic specialization" in Swift: https://lists.swift.org/pipermail/swift-evolution/Week-of-Mo...

Repeating that Go ignores "already researched and widely-implemented solutions" is getting old, really. Would you say that OCaml doesn't have shared-memory multicore parallelism because it ignores already researched and widely-implemented solutions? Would you say that Haskell's garbage collector causes long runtime pauses because it ignores already researched and widely-implemented solutions? The truth is that solving these kinds of things is a lot more complex than just copy-pasting the code from another language.


I'm... intimately familiar with generics in Swift. :) Like all systems it has trade-offs, but the decision it makes line up with the decisions you said Go wanted to also make.

In any case, that specific concern doesn't apply to Go, since it doesn't have overloading.

> Repeating that Go ignores "already researched and widely-implemented solutions" is getting old, really

Sure, but that still isn't the point of this discussion: it's not what people should be doing, or the underlying truth, it's what people are doing in practice and they are complaining about the Go team seemingly ignoring the past 40 years of programming language development, not complaining that they're not inventive enough.


You're right about the issue I mentioned which is irrelevant since there is no overloading in Go. Thanks for pointing it out.

I didn't know you worked on Rust and now on Swift :-)

What strategy does Swift use to compile a generic function or method? Does it generate only one polymorphic version of the code, or multiple specialized versions for each expected parameter types (often called code specialization or monomorphization)?


One polymorphic version, like OCaml or Haskell, not specializing like C++ or Rust.

The compiler can and does specialize functions as an optimization, but that's not necessary nor part of the semantic model.


I always thought that generating one polymorphic version would be a better fit for Go than generating multiple specialized versions.

It's interesting that Swift adopted this approach, and it confirms what you said earlier about Swift having essentially the same constraints as Go and being a good source of inspiration for Go.

What's the status of Swift on Linux, to write web services and connect to databases?


There's lots of testing for Swift on Linux (e.g. every pull request runs tests on macOS and Linux), and Apple recently announced https://github.com/apple/swift-nio which seems to have been very well received, e.g. the Vapor framework is completely switching to it for Vapor 3.


Thanks for the link. Does Swift provide something similar to goroutines?


>OCaml doesn't have shared memory multicore parallelism because it ignores already researched and widely-implemented solutions?

OCaml people definitely do not ignore them; they are carefully investigating existing code and elaborating a modern and efficient solution [1] (unlike Go, which is too opinionated).

[1] https://github.com/ocamllabs/ocaml-multicore/wiki/Memory-mod...


I'm aware of the ongoing work to enable multicore parallelism in OCaml. It's the reason why I mentioned OCaml in my comment. My comment was rhetorical. My apologies if it was not clear.

That said, I don't see the link between the work done on OCaml multicore parallelism and Go "being too opinionated"...

If I follow you, Go people, unlike OCaml people, are "too opinionated", and don't "carefully investigate existing code" and don't "elaborate modern and efficient code". Do you realize how arrogant (and a bit ridiculous) this sounds?


Is the Swift scheme the same as Rust's, which is the same as Java's, which is similar to C++'s? If not, then it seems to me language developers decided to use the approach that suited their purpose.


No, they're all different, which is my point: there's a variety of schemes that all have well-understood pluses and minuses, meaning new languages don't have to do the exploration themselves.


> The popular criticism is that Go isn't reinventing enough; that it's stuck in the 70's and it's not sufficiently innovative.

That's popular, but unfounded, in my opinion.

I find it ironic that some HN commenters are criticizing the Go team for not being stuck in the past, when they are precisely trying to innovate and question the status quo on package management.


> Only because the golang core team seem to insist on reinventing every wheel either without considering prior art, or at least without considering why prior art went with a particular approach.

Actually it's pretty clear that they did their homework.


If you count as “their homework” the dozens of iterations of this we’ve been through.

First, it was literally argued that `go get` just grabbing `master` of all of your dependencies was good enough, because if you ever needed to release something backwards-incompatible, you should create a new repo. I wish I were making this up.

Then we just vendored everything and never updated our dependencies, and that too was good enough.

Then GoPM, go dep, and a deluge of other tools that I don’t care to track down or sort into their appropriate order.

And now this, which decides to buck the trends that everyone else follows yet again and attempts to satisfy dependency trees by choosing the minimal compatible versions instead of the maximal ones, with the completely foreseeable consequences that:

  * security patches won’t be applied
  * issue trackers will be filled with bug reports that are closed with messages saying “this was fixed months ago, update”, and
  * updating dependencies will become *harder*, since this is a straightforward result of doing it less frequently


If you can't go get -u all on your project, then at the very least you have done something 'wrong' or your upstreams have done something wrong.

First, the upstream behavior: not supporting the published interface (which was published, and should therefore keep working forever) means they need to deprecate that interface and release another one with the incompatible changes. However, the PROPER way of handling this is to add new versions of the exposed public methods, interfaces, and constants under new names; only if such a major change is required that you can't support the old interfaces should you, at THAT point, release at least under a new inclusion path, if not a new dependency name.

Things a developer might have done wrong locally: A) include an 'immature' upstream repository that behaves like the above. B) use non-public components or other unsafe practices to go beyond the interface contract that was exposed.


I guess using any dependency, directly or transitively, that declares itself pre-1.0 in the go or semver meaning of the word, is thus right out.

At that point, almost all of the go ecosystem can't be used.


Do you have proof that the Go team has not looked at any other dependency management approaches or considered ones from other languages?


Obviously I don’t, but the available evidence — to me, anyway — points to this being a cultural reality in the golang ecosystem.

I left the language years ago, but the steady stream of articles of the form “Why go doesn’t need $x” followed by innumerable comments, articles, and blog posts by people struggling with the lack of $x, followed by the go core team (due to the completely foreseeable consequences of not having $x) begrudgingly adopting $y, which is supposed to (but doesn’t actually) obviate the need for $x… I’ve reached my own conclusions, and I’m finding more and more people who are leaving the ecosystem with those same conclusions.

I believe that Go excellently solves problems that Google has. I don’t believe it solves many problems that most other users have, even though it might seem like it at the outset.


Wait until they finally cave in to generics being one of the top requests on Go surveys and bring something to Go following that same design process.


What are the "benefits" of this approach?

To reiterate a different post of mine that answers this question (implicitly: what are the benefits of NOT having library versions?):

    * The only version to test against is HEAD
    * No fossilization of security or stability issues
    * Public libraries must support all Uppercased (exposed) declarations; that is the library interface.
    * If the build breaks in an odd way: update go, then: go get -u all
    * Simplicity, there's only one supported version and only one version to go get and develop against.
    * Also, why would building against an old version _ever_ be necessary?


Have you read the post? It provides answers to your claim that "dependency management is a solved problem".


> The only explanation for this constant re-invention that I can come up with (and that's shared among others I know) is that Google doesn't care/need a dependency tool because of their mono repo.

That's a pretty good argument for using a monorepo, don't you think? A single organisation's code should all be in a single repo, which means that a single version of a dependency is used within the organisation, and that updating to a new version of that dependency is a single, self-contained project.


We use a monorepo at our place of work and it’s great, so far. We also use govendor to manage external dependencies in the monorepo. We are rigorous about keeping packages up to date so have never had to run v1 and v2 of a package at the same time.


> forced down our throats when Google suddenly finds a need for them

You can say that about Go itself. And there are already other/better alternatives. So why use Go?


A language is valued by its programmer bench, tooling, available libraries, as well as the language itself. Honest question - what is better than go right now?


You'd have to specify the use case of interest. I don't think there's anything that uniformly dominates Go, but that's not terribly interesting, since there are so many dimensions of interest for a programming language that there aren't any languages in use that uniformly dominate some other language in use. (You can get uniform domination if you include unused languages or esolangs, but who cares.) But there are certainly many use cases for which Go is not the best choice, not in the top 3-5, or straight-up not a viable choice.


C, C++, C#, Java, Python, Lua, Lisp, Ada...

Go has its good sides, but it's just a tool. It's not the best, because tools aren't supposed to be the best. Tools are supposed to be useful for what they're made for.


I have a hard time using Java to write micro-services. Even though lots of packages, such as Undertow, support non-blocking I/O, a lot of the rest of the stack and the programmers that use it don’t know how to do NIO. What happens then is that the SRE/Ops team needs to configure large thread pools, which consume lots of RAM, just because multiplexing I/O is so difficult.

In comparison I can just start goroutines and Elixir processes for fast and simple I/O parallelism almost without thinking about it.
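
As a minimal sketch of what that looks like in Go (the URLs here are placeholders), each request gets its own goroutine and the runtime multiplexes them onto a small number of OS threads, so no thread-pool sizing is needed:

    package main

    import (
        "fmt"
        "net/http"
        "sync"
    )

    func main() {
        urls := []string{"https://example.com/a", "https://example.com/b"}
        var wg sync.WaitGroup
        for _, u := range urls {
            wg.Add(1)
            go func(u string) { // one goroutine per request
                defer wg.Done()
                resp, err := http.Get(u)
                if err != nil {
                    fmt.Println(u, err)
                    return
                }
                resp.Body.Close()
                fmt.Println(u, resp.Status)
            }(u)
        }
        wg.Wait()
    }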


But it has had mature dependency management tools for a long time. They were similar to cargo and npm.


"Go hasn't had mature dependency management since its inception"

go dep is perfectly fine for most use cases.


Agreed, but compared to Go itself, dep is incredibly recent, still in alpha, not widely used yet, and only the latest in a series of incredibly many homegrown approaches that created lots of chaos and churn. And when everybody thought the rollercoaster was finally over, vgo came around the corner. It’s a great proposal, no doubt, but it’s fair to say dependency management and generics were the two largest unsolved, daily encountered problems with Go in real life. Let’s just hope the community has an epiphany about the latter one as well soon...


I'm ok for now putting everything in one GOPATH as if I were google.

> How many iterations of this do we have to go through?

Fewer iterations than Mr. Cox has gone through. Isn't it great?


I'm excited to see Go trying something new here. Something just feels clunky about the current "manifest+lockfile" approach. There are a couple of ideas in vgo that I really like, e.g. "incompatible versions must have different import paths." I think rsc is justified when he compares these conventions to upper-casing exported identifiers in Go packages: for a while it seemed weird, but now it feels natural and obvious. In other words, Go has a history of breaking from conventional approaches, with surprisingly consistent positive outcomes. Some bikeshedding is inevitable if this proposal is accepted, but I hope that the main ideas are preserved.


> I'm excited to see Go trying something new here.

"Semantic import versioning" is equivalent to versioned APIs in the web world, no? Though I have never seen anyone utilize the same concepts for language-level packages; it certainly maps well!


For the record Roy Fielding is against using version numbers in URLs: https://www.infoq.com/articles/roy-fielding-on-versioning/


> For the record Roy Fielding is against using version numbers in URLs: https://www.infoq.com/articles/roy-fielding-on-versioning/

If Roy Fielding wanted developers to fully grasp whatever he was talking about with REST, he should have written a normative spec, not a dissertation; he didn't.


Roy Fielding isn't a deity. The version would be encoded in the 'Accept' content type header. There is little distinction: one is in-band (in the path) and one is out-of-band (in the header), but the information is still represented.


I don’t think you have this right. First, both the URL and the headers are in-band. Out-of-band would be you knowing that the version is v2.3.7, because you just know. In-band is anything in the request and response – headers and all – anything else is an assumption and as such out-of-band.

Moreover, if I understand Roy’s argument correctly, he’s arguing against versioning URLs, because they are meant to be opaque identifiers that don’t convey any universal meaning. If you know the second segment of a URL path to be a version, it’s because you have out-of-band knowledge that it is, not because there’s anything in the request to say it is.

If however you have a header that tells what version you wish then that is in-band information that allows you to negotiate the content appropriately with the server. It also means you don’t have to change URLs whenever you make a change to your API, breaking or not. Of course the server and client need to understand this header (to my knowledge there’s Accept-Version header) and if it’s non-standard then it can be argued that it’s just as bad as versioned URLs. Perhaps, but unless you get very granular with your versions (and most don’t, let’s just be honest) you’re still at the mercy of what the server chooses to present you at any given time. In fact, REST gives you no guarantees about what you’ll receive at any given point – you may just as well be given a picture of a cat. REST says you should be able to gracefully deal with this. Most clients (that aren’t web browsers or curl) don’t.


No! Headers are much worse. There is nothing in an API response that tells you in a standard way how to set headers, but there is something that tells you how to follow links (URLs). Accept-Version is a non-standard header, not defined in the HTTP spec, so it is definitely not RESTful - how do you know if the site supports it or what the version numbers are? You have to hard-code it in the client!


Arguable. At least an unknown header is likely to just be ignored, whereas a URL that’s changed to update the version will at best cause 404s, unless you keep the old URLs around. If those URLs are still recognized, but routed to the later version anyway then that’s just as bad as using an unrecognized (and likely ignored) header.

In any case, what you’re saying is the point I was trying to make in the latter part of my post. However a typo (or rather, a missing word: “no”) unfortunately changed my meaning entirely. :o(

It should’ve read:

> (to my knowledge there’s no Accept-Version header)

Oops. My apologies for the confusion.


> I don’t think you have this right.

That is probably true, I don't disagree with anything you are saying. To me the most in-band representation is everything passed in the URL with no user control over headers.


Using "in-band" here doesn't really follow from a technical understanding of the term; the TCP connection is the channel, and anything sent over it (such as an entire HTTP request/response, including headers) is "in-band".

Out-of-band would be a phone call, or perhaps an email - an entirely alternate method of communication.

Transparency to a user (via the client, i.e. browser) isn't really relevant, from a communications standpoint, to whether or not data is considered "out-of-band". Given the subject (APIs), you're not likely to be browsing to these anyway.


Well, I think from Roy's point of view REST is only HATEOAS. So URLs are irrelevant; you can put version numbers in there if you want, but no client should rely on them.

What you should be versioning is your media types (e.g. HTML4 vs HTML5).


He is not a deity, of course, he's "just" the one that invented REST. I guess what you mean is that he's not right about everything he says.

Well, what he exactly says is that URLs should not include versioning, because URLs are the interface names and REST states that interface names should not be versioned, as that implies a breaking change in the API. It's just the wrong place for versioning in the REST way.

But he is not against versioning: as you say, you can use the Accept header. You could also use a custom header if you wish, but the canonical way would be the Accept header.
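
As a rough sketch of that idea in Go (the vendor media type "application/vnd.example.v2+json" is just an assumed convention, not anything standard), the server negotiates the representation from the Accept header instead of baking the version into the URL:

    package main

    import (
        "net/http"
        "strings"
    )

    func widgets(w http.ResponseWriter, r *http.Request) {
        // The URL stays stable; the requested version rides in the Accept header.
        if strings.Contains(r.Header.Get("Accept"), "application/vnd.example.v2+json") {
            w.Write([]byte(`{"version": 2}`))
            return
        }
        w.Write([]byte(`{"version": 1}`))
    }

    func main() {
        http.HandleFunc("/widgets", widgets)
        http.ListenAndServe(":8080", nil)
    }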


I believe that Roy is nerd-sniping us; I capitulate. OK, Roy was wrong: versions in the URL don't break things, they make them stronger. Naming is hard: do we republish a semantically related but different artifact and change the name rather than the varying part? Use ETags instead? Change the domain?

I totally understand asking for the version (via Accept) during a GET request, if I agreed on that content type in advance; if I haven't, I need to communicate both the URL and the content type along with the version to the clients. We don't have a common container for (url, content, version), so things get messy. In a package manager, what is the equivalent of a GET request with an Accept header?

    import foo.bar v1.2.3
    import foo.baz v2.1.3

    new_thing = ::v2:foo.baz.new_thing(1)
    old_thing = ::v1:foo.bar.old_thing(2)

Joe Armstrong has a great post on modules and versioning http://lambda-the-ultimate.org/node/5079

I am strongly in favor of immutable code; I think we should be able to import all versions of a library, referenceable by commit hash.
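
The vgo proposal gets part of the way there via semantic import versioning: incompatible major versions live at different import paths, so two of them can be linked into the same build. A rough sketch, with a made-up module example.com/foo and made-up functions:

    package main

    import (
        "fmt"

        foo "example.com/foo"      // v1: major versions 0 and 1 keep the bare path
        foov2 "example.com/foo/v2" // v2: the incompatible release gets a /v2 path
    )

    func main() {
        fmt.Println(foo.OldThing(2), foov2.NewThing(1))
    }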


Interesting idea. I usually code in languages that have library packaging and dependency management systems. Recently I have gone back to coding C on Linux and I quite miss them now. We have pushed version management into the dependency or build management tools, but it would be awesome to be able to add that metadata in code so those tools could also automate the tasks. BTW, wouldn't it be import foo.bar in both? It would be more logical to keep the name of the lib and import it twice, but with different version metadata.

Thanks for the link as well! I would love to see that experimented somewhere and see if it works.


Specifically as it applies to REST, though, and HATEOAS. I don't think it has much bearing on package versioning in Go...


Minimal Version Selection

This is the piece that is very different from other dependency managers and is worth people looking at.

Instead of trying to get the latest version it's going to try to get the oldest. If you want to update to newer versions of transitive dependencies (dependencies of dependencies) than your dependencies have specified, you'll need to start tracking those yourself.

There's an issue touching on this at https://github.com/golang/go/issues/24500

Other package managers use the opposite of minimal version selection. Many of them even have a don't-make-me-think command to update the whole dependency tree (e.g., `npm update`).

What do folks think of the implications of MVS, especially for transitive dependency management?


> What do folks think of the implications of MVS, especially for transitive dependency management?

I can't think of more than a handful of times over the past twelve years (primarily Ruby and Rust) where a newer package broke an existing one, and the majority of those times the problem was a new major version that wasn't appropriately accounted for (e.g., the author should have depended upon '~> 2.x.x' instead of '>= 2.x.x'). I'd much rather have that problem addressed. Assume semver, and make the easiest way of specifying a dependency cap it to a major version.

On the other hand, I regularly deal with the consequences of upgrading infrequently — when you do upgrade, it's a nightmare. As a rule of thumb, the more frequently you update your dependencies, the less net pain you experience. Upgrading once a week is relatively painless. Upgrading once a year (or even less frequently) is a nightmare due to the sheer volume of changes coming in at once.

Also, having transitioned to the security side of things over the past few years, encouraging stale dependencies just means that security patches will never get incorporated. A stable project that releases mostly security fixes and few feature changes will — in practice — never see its minimum version bumped in projects that depend upon it.

My prediction is that this will just result in security patches not being applied and the problem of upgrading dependencies will be made generally worse. I will be happy, though, if I am proven wrong.


According to my reading of it, there is an update-all operation. It has just been separated out in an attempt to give the user control over initiating it.

But I agree that procrastinating on updates creates problems. (At some point, the entire point of version pinning is to allow you the choice to defer, but it's still a problem.)

I think it might be worth exploring reporting as a solution. Everyone knows that old versions are a problem, but what if I had tools to tell me how bad or good the situation is for my project at this moment? And what if they ran by default either periodically or as part of my build or both?

Examples of stuff it could tell me:

* Am I one minor version behind on this one library and it doesn't matter?

* Or am I a major version behind on this other library and I'm using code that isn't even supported or maintained?

* Are there security fixes that I haven't taken?

* Are there libraries that have security fixes but no release is available yet?

* How about a list of libraries that I'm not on the latest minor version of AND the latest version has been available for more than 2 weeks? (Maybe I don't want to fall behind but I don't want to be a guinea pig either.)

* Or a list of libraries that I'm not on the latest major version of and a newer major version has been available for 6 months?

Since this is important, it would be great to have real visibility into it. Right now, every build I've ever done, this is just something that people track in their heads and just assume they have a good handle on. Doing regular releases and always taking the latest version of everything helps somewhat, but sometimes a release gets canceled. Or maybe there's a system that isn't being regularly worked on and doesn't have regular releases, yet dependencies are being updated, it is behind, but by how much?


> Instead of trying to get the latest version it's going to try to get the oldest.

No, minimal refers to the number of dependency changes between upgrades, not the version numbers of the dependencies. Dependency conflicts initially resolve to the higher of the two versions.

https://research.swtch.com/vgo-mvs section " Algorithm 1: Construct Build List"

In the example, a dependency is upgraded and no longer needs one of the shared transitive dependencies. Instead of downgrading it to the version declared by the unchanged dependency, it keeps the higher version that you were already using. When upgrading, transitive dependencies are never downgraded, only added (or removed).

https://research.swtch.com/vgo-mvs section " Algorithm 3. Upgrade One Module"
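
As a toy illustration of Algorithm 1 (this is not the vgo code, and plain string comparison stands in for real semantic version comparison), the build list is just the set of modules reachable from the root, keeping only the newest requested version of each:

    package main

    import "fmt"

    type modVer struct{ path, version string }

    // Hypothetical requirement graph: A needs B 1.2 and C 1.2; B needs D 1.3; C needs D 1.4.
    var requirements = map[modVer][]modVer{
        {"A", "1.0"}: {{"B", "1.2"}, {"C", "1.2"}},
        {"B", "1.2"}: {{"D", "1.3"}},
        {"C", "1.2"}: {{"D", "1.4"}},
    }

    func buildList(root modVer) []modVer {
        newest := map[string]string{root.path: root.version}
        seen := map[modVer]bool{}
        var visit func(m modVer)
        visit = func(m modVer) {
            if seen[m] {
                return
            }
            seen[m] = true
            if m.version > newest[m.path] { // keep only the newest requested version per module
                newest[m.path] = m.version
            }
            for _, dep := range requirements[m] {
                visit(dep)
            }
        }
        visit(root)
        var list []modVer
        for p, v := range newest {
            list = append(list, modVer{p, v})
        }
        return list
    }

    func main() {
        fmt.Println(buildList(modVer{"A", "1.0"})) // A 1.0, B 1.2, C 1.2, D 1.4 (order varies)
    }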


Let's say you have the following picture...

App

--> Dependency A --> Dependency C (at version 1.2.3)

--> Dependency B --> Dependency C (at version 1.3.4)

The latest release of Dependency C is 1.4.5. Which version would be used after running `vgo get -u`?

As you note it would use the newer version of those specified (version 1.3.4). The newer releases after those explicitly specified are not used.


I believe (having tried the tour) that you will get v1.4.5 at the App level, if you do vgo get -u.

If you just do plain vgo get, you will get 1.3.4, because of the minimum version approach.


The MVS upside sounds like the dependencies of dependencies won't need to change. However, the downside is: how would you get security patches into older packages? They would always fetch the older dependency and never get upgrades. There's a reason why npm install was made the way it was, though it went too far in the other direction by always getting the latest dependency.


As Russ points out, the current approach of pulling the latest leads to a situation where package maintainers can be quite sloppy. For example, a package may have a stated dependency A@1.1.2. But because everyone is pulling the latest, it may turn out that the package actually no longer works with that version of A.

The vgo approach will encourage package maintainers to update their dependency versions to what they have actually tested with, which is very useful information for consumers of the package.


> Instead of trying to get the latest version it's going to try to get the oldest.

Err, no. For each library it uses the newest version that is required by any module in the build (including transitive dependencies).

See https://research.swtch.com/vgo-mvs

Especially the line: Simplify the rough build list to produce the final build list, by keeping only the newest version of any listed module.

Different major versions are considered different libraries essentially.

To be honest I thought this was all obvious and I'm sure this approach has been recommended by Go people in the past.

It has another nice advantage over Rust's system - you can have pre-releases of major versions. Rust doesn't really have a way to do that except for releases before version 1.0.

In other words when you want to do some incompatible changes and release version 2 of your library there's no good way to put test versions of it on Crates.io.


> It has another nice advantage over Rust's system - you can have pre-releases of major versions. Rust doesn't really have a way to do that except for releases before version 1.0.

Semver has a specific way to indicate that a feature is a pre-release, and people use it. The second post on /r/rust is advertising a pre-release of the next major version of rand: https://crates.io/crates/rand/0.5.0-pre.0

(Yes, this is before 1.0, but 1.5.0-pre.0 would work just fine too!)


Do you have a reference for that? It isn't mentioned here: https://doc.rust-lang.org/cargo/reference/manifest.html#the-...

I found this bug report and it seems like it has some issues: https://github.com/rust-lang/cargo/issues/2222


Reference: I maintain the semver library. I’m on my phone, so I can’t link you to the docs right now. I’ll add a link tomorrow.


So, looking back, we don't explicitly say that we have these things, we just say that we use semver versions, and the semver spec specifies them. I've been meaning to re-vamp the cargo docs, maybe this is worth explicitly calling out. Thanks!


Ah cool, sorry I wouldn't have asked for a reference if I'd noticed your username!


It's all good! <3


> Other package managers use the opposite of minimal version selection. Many of them even have a don't-make-me-think command to update the whole dependency tree (e.g., `npm update`).

vgo does have that too: vgo get -u


That will work for direct dependencies but not dependencies of dependencies. See https://github.com/golang/go/issues/24500


Correct, it will get the newest version that is actually tested against your dependencies, which is by design.

Why would you want to pull a version that is newer than the author of the library you depend on has actually tested with? You can of course force this in vgo, but having the default use the versions specified by the authors makes a whole lot more sense than just using the newest.


> Why would you want to pull a version that is newer than the author of the library you depend on has actually tested with?

1. To install a security update or bug fix update you need in a transitive dependency that the author of the dependency you're using hasn't updated to.

2. To use the same workflows across all my dependency management tools (npm, cargo, composer, bundler, and the rest of the lot follow the same patterns, and vgo goes against the patterns used by the others)

There are two reasons.


You can do 1), you just have to make that an explicit dependency of your own. It just won't grab the newest by default for sub-dependencies. So yes, you can pull the latest.

2) seems like a bit of a silly reason, if we all wanted to make everything work the same all the time we wouldn't make much progress or try anything new. Whether vgo's approach is correct or not we don't know yet, but saying that it isn't familiar isn't a good enough reason to not try it out.

To me, vgo matches what we already do in our Python projects with a lot of dependencies: pin everything in a freeze and upgrade on a schedule when we need to. We have seen far too many failures doing it any other way (i.e., using the "latest" of everything, which often either breaks semantic versioning and actually breaks things, or introduces subtle bugs that didn't exist before).


Using something like pip-compile, or literally `pip freeze > requirements.txt`?


The latter yes.


3. To apply bugfixes that affect you, but weren't exposed by your direct dependency's tests.


I believe that vgo get -u works for both direct and indirect dependencies.

See the vgo tour : https://research.swtch.com/vgo-tour

The indirect dependency (rsc.io/sampler - which is a dependency of rsc.io/quote), is also upgraded to the latest version v1.99.99 when vgo get -u is done.


To all the skeptics, please try it out and provide constructive feedback. Experiment with the corner cases you think won't work. Judging just by the blogpost won't help anyone.

Personally, I'm very excited and impressed by the ability of the Go team to innovate by diving deep and understanding every aspect of the problem at hand and the existing solutions, instead of just blindly adopting whatever already exists.


The lack of support for private repos is still a dealbreaker for both `dep` and `vgo`.


I wanted to find out, so I looked at vgo's source code, and found that vgo does indeed have some solution to this using Github Access Tokens and ~/.netrc: https://github.com/golang/vgo/blob/b6ca6ae975e2b066c002388a8...
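
For reference, the standard ~/.netrc format that the linked code appears to read looks like this (the host and placeholder credentials are just an example; check the vgo source above for what it actually honors):

    machine github.com
        login YOUR_GITHUB_USERNAME
        password YOUR_PERSONAL_ACCESS_TOKEN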


How is it different from using 'go get'?


`go get` has so many other blockers for enterprise use that "support for mirrors" barely registers. But yes: it's no different, `go get`'s lack of mirror support is also a blocker for tons of businesses.


govendor will use local versions of deps anyway, if you have them in your GOPATH


My initial reaction to Minimal Version Selection (MVS) is concern that developers won't get security updates and bug-fix patches applied in their dependencies.

But dependency software has been trending toward the use of lock files for a while now - and without explicit developer intent, those won't get bugfixes either.

I think I'm mostly concerned about how this affects transitive dependencies. My package Foo depends on Bar_1.3.0, which depends on Baz_3.2.4. If Baz gets a security update to Baz_3.2.5, either I need to add an explicit dependency "Baz_3.2.5" to Foo, or wait for Bar to release 1.3.1 that depends on Baz_3.2.5.

If go adds tooling to identify and make these transitive dependency upgrades as easy as "npm update", then I will be a little bit less uneasy.
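
For concreteness, that explicit override would live in Foo's own go.mod, roughly like this (module names are made up, and the exact go.mod syntax in the vgo prototype may differ slightly):

    module example.com/foo

    require (
        example.com/bar v1.3.0 // direct dependency, which itself asks for baz v3.2.4
        example.com/baz v3.2.5 // transitive dependency listed explicitly to pick up the security fix
    )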


I don't think libraries have to do this, just applications. As a library owner you're not responsible for making sure downstream gets all the latest security patches. They can do it themselves at will, by running "vgo get -u".

https://research.swtch.com/vgo-tour


The example there dealt with direct dependencies, but what about transitive ones (dependencies of dependencies)? If you have a module that asks for an old version of a dependency, what will upgrade that? Minimal version selection will stick with the old transitive versions. That's where the concern comes in.


I think the -u flag might already be transitive? Haven't tried it, though.


Exactly. MVS may sound weird at first, but in practice it works identically to having a lockfile (which is becoming the standard these days), just simpler.


"But dependency software has been trending toward the use of lock files for a while now - and without explicit developer intent, those won't get bugfixes either."

You know, I wonder if there's something here that a next-generation language can't get in on, some sort of help to provide to the developer who says "OK, I'd like to upgrade this package for people, could you please help me ensure that I'm not going to break anybody in the process?"

Possibly this line of thought terminates in very richly dependently-typed languages, which is a bit of a utopia. But perhaps there's something in between? Or something that can be added to an existing language like Rust?

I'm not even initially certain what that would look like. A version-aware programming environment in which one can sensibly say "Yes, for 1.1 I upgraded the unit test but please run the 1.0 unit tests against the 1.1 code"?

It seems like this is a growing problem and there's probably an opportunity of some sort here.


> You know, I wonder if there's something here that a next-generation language can't get in on, some sort of help to provide to the developer who says "OK, I'd like to upgrade this package for people, could you please help me ensure that I'm not going to break anybody in the process?"

Russ has proposed a "go release" command that is intended to help with that process. It's probably simple right now, but has lots of room to grow in that direction.

See: https://research.swtch.com/vgo-cmd


> It seems like this is a growing problem and there's probably an opportunity of some sort here.

I think there's an opportunity even within existing languages: more shared CI infrastructure. Imagine if project authors had some easy way of running their downstream consuming project's test suites as they develop?


It's not all the way to what you're proposing, but CPAN has a very nice means of testing packages for system compatibility. By default, users installing new packages run the tests for those packages. Most CPAN clients can be configured to report those test results back to central locations. This isn't the same as running the tests for the consuming project, but it demonstrates the feasibility of "crowdsourcing" tests for system/language-runtime-version/other-package-version incompatibilities.


I'm honestly confused about why "this package manager is a SAT solver" is being trotted out as a bad thing. Repeatedly. Having used multiple such package managers in the past: the runtime is utterly dominated by time-to-download or even simply disk access, not time-to-compute. Compute time has uniformly been far below human-visible times - e.g. a few hundred dependencies resolve in `dep` in around 100ms (for the solver) on my machine.

SAT is not a problem at all. Yes, you can construct a worst-case scenario for it that will chew up a ton of CPU. In practice it simply doesn't happen, and trying to defend against it is both a waste of effort and leads to crippled decision-making.


I think it has more to do with the complexity. Minimum version selection is trivial to implement and understand. The processing time is a side-note.


MVS is less complex to code, definitely agreed. But given how well studied and broadly implemented SAT is, I don't really think that's a useful distinction to its end goal - be a useful build system. Chucking the whole transitive-version-respecting thing is even simpler[1] but it's terrible and Go shouldn't do that just because it's simple.

And for "understandability" in a conceptual sense, personally SAT has always felt simple and predictable to me - "find something within all bounds or err" is something we do all the time by hand.

[1] e.g. pip essentially ignores version constraints if something's already installed or some other lib already mentioned it. it's absolutely terrible and causes many problems in anything long-lived.


A non-SAT system is understandable in the sense that you can, as a human, look at it and understand what the solution should be. That's kind of cool.

It also seems within the realm of the imaginable that having a scheme easy enough to implement that you're not falling back on writing a heuristic backtracking search might result in more tools being written.

Obviously, you are right that being NP-complete is clearly not a showstopper or a huuuge problem, in practice. Still, seems nice to avoid if you can!

By the way, that's interesting to hear about pip. I do my best to avoid python, but I inevitably end up reluctantly wanting to use some tool written in it, and inevitably it breaks. What a fucking shit-ass ecosystem.


Re Python: pretty much :| My personal favorite is that, because of this behavior, you can install a single top-level package into a fresh virtualenv and get invalid transitive dependencies. Both in theory and in practice.

If you ever get back into Pythonlandia, do check out pip-compile - it's a properly sane package manager, following the normal SAT solving path. Major lifesaver, 100% recommended.

I enjoy the language well enough - it's readable and expressive. But it's so terrible for building a business on top of, and that's largely due to the ecosystem.


This is not my experience at all. Doing an “apt dist-upgrade” after not upgrading for a week or two regularly takes _minutes_ on my i7-8700K to resolve dependencies.


Apt doesn't use a SAT solver; it only removes conflicts if it finds them: https://aptitude.alioth.debian.org/doc/en/ch02s03s02.html


I've never seen this. Do you use only packages from your distro?


Yes, Rich Hickey (amongst others I am sure) has talked about precisely this model of backwards compatibility - i.e. if you break the contract you have to rename the thing.

This feels far better than the current model used in most languages. If you've ever had struggles creating an uberjar you know this pain.


Agreed, after years of JVM dependency hell (which happens in any language), I think "never make breaking changes" should be non-negotiable for publicly-published code.

E.g. repo managers like Maven central/etc. should use binary API analysis to reject any jar upload that has breaking changes.

My only hesitation is that, AFAIK, semantic import versioning has never been tried at scale, so having to constantly bump imports from "com.foo.v1" to "com.foo.v2", and deal with "app1 wants to pass com.foo.v1 objects to app2, but it expects com.foo.v2 objects" might introduce more pain than expected.

Granted, right now app1/app2 are blithely passing around "com.foo" objects that may/may not be compatible, but if it's an 80/20 thing, or 99/1 thing, and most of the time you get lucky and it works, perhaps that's good enough.

But it would be great to have Go be the first community to try this at scale and see how it goes. I like it.
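
In Go terms, that friction would look something like this (the example.com/foo module and its Thing type are made up): values of the v1 type and the v2 type are distinct, so code that straddles both major versions has to convert explicitly.

    package main

    import (
        foo "example.com/foo"      // hypothetical v1 module
        foov2 "example.com/foo/v2" // hypothetical v2 module
    )

    func useV2(t foov2.Thing) {}

    func main() {
        t1 := foo.Thing{Name: "x"}
        // useV2(t1)                       // compile error: foo.Thing is not foov2.Thing
        useV2(foov2.Thing{Name: t1.Name}) // an explicit copy/conversion is required
    }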


Russ actually mentions Rich Hickey's keynote talk, Spec-ulation (https://www.youtube.com/watch?v=oyLBGkS5ICk) in one of his blog posts about vgo


Well, the article is suggesting to put the major version number in the name, so that “change the name” and “bump the major version number” are equivalent actions, which is a little more subtle than just a plain rename.


Okay, so I'm reading the minimal version selection algorithm article [1], and there's either a flaw, or something I don't get.

Here's a modified version of the example graph, written as pidgin DOT [2]:

    A -> B1.2
    A -> C1.2
    B1.2 -> D1.3
    C1.2 -> D1.4
    D1.3 -> X1.1
    D1.4 -> Y1.1
The key thing here is that D1.3 depended on X1.1, but the new D1.4 depends on Y1.1 instead. I guess X is old and busted, and Y is the new hotness.

What is the build list for A?

Russ says:

The rough build list for M is also just the list of all modules reachable in the requirement graph starting at M and following arrows.

And:

Simplify the rough build list to produce the final build list, by keeping only the newest version of any listed module.

The list of all modules reachable from A is B1.2, C1.2, D1.3, D1.4, X1.1, and Y1.1, so that's the rough build list. The list of the newest versions of each module is B1.2, C1.2, D1.4, X1.1, and Y1.1, so that's the build list.

The build list contains X1.1, even though it is not needed.

Really?

[1] https://research.swtch.com/vgo-mvs

[2] https://www.graphviz.org/doc/info/lang.html


I believe that implicit in this scheme is that you cannot remove dependencies. It seems possible to add new dependencies, however. Changing dependencies can be seen as removing one and adding the other, but you can only do one, not the other. This limitation could probably be patched up by resolving a set of versions and then noticing that X1.1 is unreachable and discarding it. The trouble with that fix would be if X1.1 forces some other package to an unnecessarily high version even though it ends up being discarded.


Why not just put the full version number in the import path?

Both the manifest (dependency section) and lock files become unnecessary. It’s DRY.

Dependencies are specified where they’re used, improving componentization.

Upgrading and forgetting to update one location is easily fixable: tooling already scans go code for a list of imports, modify this to warn on version differences. Or even to update them.

Git history gets “cluttered”, but shouldn’t it be? The behavior of values in a file is changing. This constitutes a change of requirements on the file’s code, or at least needs a moment’s review to decide the code needn’t change. Seeing that change in history would make tracking down any bugs it causes easier. Besides, we’re talking more files changed in a single edit, not more edits.

Semantic versioning is a qualitative description, not a guarantee. Due to edge-case use or human error, every minor or patch update may be a breaking change. It would be better to have a layer of human interpretation between semantic versions and code changes, rather than tooling that assumes them to always be correct.


A published library /is/ an interface specification.

Fixing bugs or extending the interface (with additional and possibly corrected parameters) is one thing.

If the interface specification ever needs to have backward incompatible changes, the /library/ needs to be renamed (or at least have a different leading import path).

If something doesn't work, development's answer is always going to be "test it with the latest version" (at least the latest shipped, if not the 'git HEAD').


It doesn't scale. If you have a->b->c and b version-bumps every time c does, then this also means a has to version-bump. So basically changing one file means everything that depends on it, even indirectly, has to change.

Furthermore, if you have diamond dependencies, a might end up version-bumping more than once.

Regular edits would get lost in a sea of version-bumps.


If C changes and B doesn't reflect the change in its own behavior, then there's no need for a new release of B. Just update it and group it in with the next normal release.

If C changes and B's behavior does reflect it, then A does need to know about it.

The "sea" is limited to times that direct dependencies change their behavior, which is what it already is.


You're assuming that B does regular releases. But if B is "done" it might not happen for quite a while.

Suppose C has a security patch, and B doesn't have any new functionality to release? Does it keep using the version without the security patch forever, because that's what's listed in the import statement? Or does someone have to update it?


I include “security patch” in “behavior”, subject to both conditions above.

If B isn’t otherwise updated at a similar time, then yes, I advocate dependency update-only releases. I think this is as it should be. A project should be able to know what code it runs, specifically.

And yes, if a project is abandoned then security patches don’t get magically applied. Again, as it should be. The fact that you have an unmaintained dependency is itself a problem. You can’t just auto-apply security patches and expect things to keep working, ask any Linux distro. What needs to happen is a fork (or dropping the dependency). Tools warning you encourages this; silently auto-applying patches encourages everybody to separately do their own fixes and workarounds.


I don't think it's reasonable to require each library owner to do regular updates, whether or not they have any changes to make themselves. This is unnecessary busywork. Library maintainers should only be responsible for fixing their own bugs, not responding to everyone else's bugs.


I like the idea too. Something along the lines of import "fmt#1.2.3". The problem is that you do repeat yourself: you have to sync the version across any files that use it.


It's not repetition if it sometimes varies, as the article discusses.

It's certainly a common case that they're all the same, and we can handle that by augmenting go, which already scans the project for import lines, to warn on different versions, or even update them in the import lines.


I am very happy that Russ Cox et al. are still brave enough to propose new ideas when they see something not working as intended.


The explicit `v2` in import paths is a case where Go has abandoned conventional abstraction and information hiding in an area where maybe there didn't need to be any in the first place.


Sam Boyer had a take on this which is worth reading, considering he is the current developer of dep: https://sdboyer.io/blog/vgo-and-dep/


To preface this: I have not looked into the internal details of vgo, so it could be the world's most interesting solution to this problem. I feel like this is a punch in the gut for a lot of people who spent tons of time looking at ways to solve dependencies in Go. From his post, it looks like Russ never really looked into dep closely, even as he gave it his "blessing" during GopherCon. It seems like whenever an issue gets brought up in a proposal (not always, but it seems like a lot of the time) it gets shot down by the "higher ups". Not until it gets their attention, or it becomes a big deal to someone (Cloudflare), is it important. Look, don't get me wrong, vgo could be a great technical solution, but disregarding the work of those who actually cared about this issue before you did is a big mistake. It will alienate people in the community who care about improving the ecosystem for those outside of Google. I want to thank all of those involved in dep, it's a great tool that solved my dependency problems! Without you, we would not be having this discussion about vgo and a better way to do dependency management in Go.


It's tough, but sometimes you need to throw away work. It's better ultimately in the long run. I think Russ has been doing a pretty good job of giving recognition and credit to the dep folks for their exploratory work.


Saying that dep and prior art have been ignored or not "looked into" is unfair. Russ is very clear about where dep and similar tools fall short of a perfect solution. Not only that, but Google has contracted Sam to help with vgo, since he presumably has the most Go experience in the matter.

I'm sure it's disheartening for people who worked on dep. I've had a great experience with it too, especially after the .4.1 release.

While not everyone agrees, one of my favorite things about Go is that every decision is well thought out in terms of its effects on the entire ecosystem. Just because some other language or toolset does something isn't a good enough reason to force it into Go. I trust the overlords, they've been good to me so far. :)


I want to know what they consider the limitations of the cargo/rust approach. It has been really nice to use as a user.


To me, the one thing I'd like to change about Cargo is that required initial clone of a 100MB+ git repository before you can even install something.

IIRC this was because of a libgit2 issue preventing them from doing a shallow clone, so there's no way around it for now.

Disclaimer: I use both Go and Rust on a daily basis and think both are nice in their own way.


That’s more of an implementation detail though. The design does not demand that. I assume that was no concern initially, when the ecosystem was tiny.


I agree, but it's not a detail in practice. My feeling is that Rust is very attentive to theoretical details, and Go to practical details. Of course it's an over-simplification, and both approaches are pertinent and complementary ;-)


That's the exact opposite of what the grandparent is talking about, though. Cargo was the one saying "this works for now, we'll fix libgit2 later", which is firmly in the practical camp. vgo is the one saying "we can't emulate other package managers because SAT solvers are slow", which ignores that in practice they're not, valuing strictly theoretical considerations instead (and, in practice, Cargo doesn't even use a SAT solver anyway, so they didn't do their homework).


I meant that Go usually focuses more on solving practical issues than theoretical issues. But I have to agree that it is the exact opposite in the example I replied to ;-)

Yes, Cargo doesn't use a SAT solver, but Cargo's source code acknowledges that "solving a constraint graph is an NP-hard problem" and uses a "nice heuristic to make sure we get roughly the best answer most of the time". [1]

It's not just a theoretical consideration. It can create real problems. See for example "Abort crate resolution if too many candidates have been tried" at https://github.com/rust-lang/cargo/issues/4066. I'm not saying it's a big issue, but it's something to consider in the design space, and this is why the Go team is considering other options.

[1] https://github.com/rust-lang/cargo/blob/master/src/cargo/cor...


Again, in practice, this has not created real problems, which Russ Cox seems to fail to appreciate. I've been using Cargo for years, working with other programmers for years (including programmers using large Rust codebases in production at large companies), and teaching programmers new to Rust (both online and off) for even longer. The number of times I have had crate resolution abort, or found the heuristic-chosen dependencies undesirable, or seen any other person ever complain about either of the former: zero. My sample size is not small.

I respect Russ Cox's decision to favor different considerations for Go's versioning story. The approach of constraining to minimal versions is not bad, merely different (especially since the -u flag exists). But the framing of this as solving some problem with existing package managers is simply mistaken, as Russ would know if he had used these tools in practice, rather than instinctively reeling at the theoretical implications.


> in practice, Cargo doesn't even use a SAT solver anyway, so they didn't do their homework

It's not fair to accuse people of not doing their homework when they actually are...

Russ Cox published "Version SAT" in December 2016. [1] The article specifically mentions "Rust's Cargo" which "uses a basic backtracking solver".

[1] https://research.swtch.com/version-sat


Rust might have theoretical problems but it doesn’t matter in practice. That’s all that matters to the Rust community.


See https://news.ycombinator.com/item?id=16423049 for a discussion between rsc and Rust developers.


Quoting the article:

> […] the import uniqueness rule: different packages must have different import paths. […] Cargo allows partial code upgrades by giving up import uniqueness.

> The constraints of Cargo and Dep make version selection equivalent to solving Boolean satisfiability, meaning it can be very expensive to determine whether a valid version configuration even exists.

> eliminates the need for separate lock and manifest files.

Note that Cargo is seen as a "gold standard" approach here, upon which rsc is trying to improve.

