For all practical purposes, dependency management for programming languages is a solved problem, with many open source examples. The only explanation for this constant re-invention that I can come up with (and that's shared among others I know) is that Google doesn't care about/need a dependency tool because of their monorepo. Which is rather frustrating, since we get features (cough aliases cough) forced down our throats when Google suddenly finds a need for them.
But I am very happy that they did not pick a random management system just because they "had to have one". We can see enough bad examples in the wild. They left this aspect out of the language standard to tackle the challenge later, with proper consideration.
As far as I understand the details, Russ Cox has presented a very thoroughly worked-out spec, and it seems to solve a lot of problems that other dependency management tools have. It is also completely compatible with Go so far, and fully optional.
But they have, multiple times? First it was just giving people links to godep when people complained in GitHub issues. Then it was a tacit "we like this" about gps and glide. Then it was a pretty-much-official blessing of dep. Now it's a super-official-now-we-mean-it implementation of vgo. I recognize this as a pattern because I do it all the time: software engineers constantly rewrite things from the ground up when they don't care about actually solving the problem and just want to play around with cool new ideas.
As for the vgo spec, I'm not against it per se... I like most of the general conclusions, but none of it is new. We've had these ideas in the Go world for a while. Most of them have even been implemented! Why can't we just slowly transition an existing tool? Or maybe implement a common library that individual tools can use to solve the problem in their own way? Oh wait, we already did that, and it got abandoned. Until we see a tool with a stable API that solves the 80% use case and lasts for over a year, I'm not going to get excited about the new flashy thing.
No. Golang hasn't had an official/pretty-much-official dependency management system. The community recommends some tools over others, but that's about it.
With dep it is a bit different - the creators were Google engineers and they wanted it to become the official tool, with integration as a go subcommand etc. - but that was never signed off by Pike or any other lead; it was only confirmed that it would be a possibility if the tool performs well.
With vgo it's similar to dep - a proof of concept to see _if_ it works nicely.
Regarding a library multiple tools can use: yes, that would be a nice thing. However, when designing such a tool _and_ a library at the same time, you will run into bigger problems. Generally you'll want a library designed by someone who has already solved the given problem in at least one way. That brings enough experience to make the correct architectural decisions early on, as well as a good, usable API.
I'm happy that we don't get half-assed solutions just to have one. I have seen enough projects go to shit because a major step in the project was reconsidered a few times before management put a lock on it because it had gone through 'enough' revisions.
I'm frustrated. I started using Go around version 0.8 and have been using it since, with no dependency management besides my own forks, because I hated everything I tried. Then dep comes along, I give it a few months to mature, try it, and it works great. It solves my common use cases near perfectly.
What this blog post and proposal lacks IMHO is a clear case for why dep isn't "good enough". All I see is handwaving:
"Early on, we talked about Dep simply becoming go dep, serving as the prototype of go command integration. However, the more I examined the details of the Bundler/Cargo/Dep approach and what they would mean for Go, especially built into the go command, a few of the details seemed less and less a good fit."
So "a few of the details seemed less and less a good fit" is a statement that is impossible to refute because it is too scare on any details.
You want feedback now, on this new proposal?
It's patently obvious that our feedback on dep has been totally ignored.
What indication is there anyone will pay attention to any feedback now?
I can't be bothered any more.
I just want one solution to be picked, blessed, implemented and then not changed five minutes/months later.
If you read the minimal version selection algorithm, it's clear that the proposed solution solves the problem in a novel way.
Having started in the 90s, I have been through these same conversations regarding Adobe, Oracle (oh lawd Oracle), MS, Dell, and Apple to a lesser extent.
“Vendor the industry currently relies on heavily is changing things or yanking the rug again?”
The free market works by big players yanking every one else around. Guh.
It's SMB/"indie" developers who should be taking the responsibility here, I think, for relying on these officially-unstable packages in their own production systems, and especially for encouraging others to rely on such. Yes, sometimes, it's the only way to be competitive/have a https://en.wikipedia.org/wiki/Unique_selling_proposition. But that doesn't mean that it isn't their responsibility to cope with these changes. They're taking on that risk-of-change by using those unstable libraries or APIs.
Heck, this is half of the point of software Venture Capital: being able to underwrite the risk of relying on an unstable platform/ecosystem, so that you can have enough runway to recover and still finish your product if "the ground moves under you", and therefore can "safely" rely on these unstable technologies.
(And one not-oft-talked-about property of bootstrapped startups is that they have to underwrite that risk themselves. If a bootstrapped startup isn't just gambling with its founders' money, it has to behave more like an enterprise development shop, building on only stable foundations.)
Theoretically, you could move this underwriting role into the companies themselves, such that the companies themselves would monetarily cushion the blow of changes in their unstable APIs. But they'd need a pretty big interest in the involved consumers to do so. In such a world, you likely wouldn't have separate startup businesses using the BigCorps' unstable APIs; rather, you'd have a set of BigCorp incubators with large shares in most new startups. Maybe that hypothetical world would have higher https://en.wikipedia.org/wiki/Gross_world_product than our own! But I bet there'd be fewer startups, because starting a startup would be a lot more like a regular BigCorp job.
Do you think that's a community process? I always have the impression from Go developments that all major decisions are "my way or the highway" affairs of 2-3 of the initial designers. Possibly the same ones writing those vague "exploratory" posts and dismissals of common proposals, that voidlogic captures well in his comment below.
Many ecosystems haven't gotten past the "single dependency file and lock file" system where an entire project has to use a single version of each dependency. There are people splitting their project into a thousand "microservices" and making them interoperate over HTTP instead of regular function calls so that each part of their project can have its own dependencies.
And while I have been using development snapshots for years with my system and didn't have more bugs to cope with compared to other systems, I can understand that developers in corporate contexts need to be able to specify dependencies with versions. So I appreciate that proposal very much and hope it will lead to a better solution for everybody involved, by providing a way to use versioned dependencies without introducing too much overhead for those who don't need them.
No package manager can fix a cultural problem.
Meanwhile, I've heard people spend ages complaining that this is totally wrong... while overseeing the setup of local Artifactory instances and wrangling proxying to do the exact same damn thing.
Perhaps they shouldn't go past that.
Multiple library versions are one of the lowest priorities when doing microservices, I'd say.
This is not to say that people don't significantly overuse microservices, but there are plenty of hard problems they solve.
Reread the post; the authors make some great, mature realizations that I hope other languages will listen to. I don't use the language, but I use languages that Go has affected, and this is a great thing.
The one time I needed that, I found a pretty good workaround: you make two crates, `helper_old` and `helper_new`, and have each depend on a different version of that crate. Then each just pub re-exports the crate it depends on.
That way your other crate can now depend on two different versions of that dependency.
When does this situation come up? What is a compilation unit in this case?
You might need a handler for both versions if you have customers that use the old api and ones that use the new one. Or different departments of the same customer.
Or if you dodge that bullet but have to write a custom migration tool to get data from the old version and cook it to go into the new one. Half of the code is read-only and the other write-mostly, but they still have to coexist if you do anything more complex than a backup/restore script.
Rust had actually been through three failed package manager projects before they decided to bring in the domain experts at Tilde. Nice that at least that bit of churn has been forgotten - but yeah, it became obvious over that time just how non-trivial the problem is. Cargo makes it seem easy though ;)
Many, I think, based on what the author of dep says (https://sdboyer.io/blog/vgo-and-dep/). I don't get why rsc would just throw away the code and work that was the culmination of years of community work. Isn't rsc largely responsible for creating this problem in the first place? I'm doubtful he's going to fix the dependency problem from scratch. There's a certain pigheadedness to a lot of golang decisions, and I'm afraid this is another one of them...
I think they threw out a bit too much, but that's only a problem if the gaps don't get filled or the compromises aren't adequately explained. What's left is a very pragmatic selection of extremely useful tools that can be used to cut a tonne of working code very, very quickly. I can't say I always like it, but I benefit tremendously from it.
For example, why do unused variables and imports, a style issue, cause a compiler error? This is a constant pain when commenting out lines of code during debugging or prototyping. On the other hand, the compiler doesn't care if I ignore or forget to check error values - which is almost always incorrect behavior.
Here's why - https://golang.org/doc/faq#unused_variables_and_imports
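For anyone who hasn't hit it, a minimal illustration of the asymmetry (hypothetical compute function):

  package main

  import "strconv"

  func compute() int { return 42 }

  func main() {
      x := compute() // if x were never used, this would not compile:
      _ = x          // "x declared and not used" - blank assignment is the usual workaround

      n, _ := strconv.Atoi("oops") // but silently discarding the error compiles fine
      _ = n
  }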
At least the Golang team can cherry pick the best ideas & lessons-learned from these community-driven package managers.
At the very least they could have continued their hardcore hands-off approach of letting the community solve it. But instead they half-assed it and started capriciously anointing chosen solutions, and honestly we're probably in a worse spot than we were before dep came along.
Plus, let's be honest here, there's no excuse for constantly changing APIs and binaries. If this were truly about getting the best ideas and lessons learned, we'd just see refactors of existing tools with slow migrations to new concepts, but instead we've seen multiple complete rewrites, despite multiple efforts to build common libraries that should prevent such events!
Huh? Of those three, I would only consider Java "solved", and both Maven & Ant are far from the panacea of package management.
virtualenv is a community project (much like dep), is pretty new in the grand scheme of things, and isn't completely standardized (some projects use tox, some list out a requirements.txt (rarely version-pinned), and others vendor).
As for C++, almost everyone does something a bit different so I don't see how that is at all relatable.
virtualenv was a community project, but based on it venv was created, which has been part of the official Python distribution since Python 3.3, released in 2012.
It doesn't stop there either: I've seen Python projects using nothing, virtualenv, venv, pip-tools, and pipenv.
I finally feel like pipenv is the true solution, and it seems very new. All that said, Python is quite old relative to Go.
* a central package registry
* a file to describe dependencies
* a command line tool to install, build and publish packages
In Python things are more fragmented. You have:
* a file to store some of your dependencies (Pipfile)
* a command line tool to install packages (Pipenv)
* a file called setup.py in which you need to specify your dependencies again (in another format). You can execute this file to build/publish packages.
I think it would allow for a much nicer user experience if pipenv/Pipfile handled packaging/building/publishing as well.
Of course, Gradle hooks into the whole Maven ecosystem, but that is one of its advantages. Everybody in Java land understands Maven packages; how you generate them doesn't matter.
If it wasn't for Android, I wouldn't bother for one second to learn Gradle.
But I guess Groovy needs some project to keep it alive, now that no one remembers the days when JSF beans would be written in Groovy or JUGs were holding Grails talks month after month.
C++ has no widely-used dependency management systems.
Both languages are doing fine.
On Windows, Microsoft is investing in vcpkg as a package manager.
Regardless of your setup at the end of the day you're using pip to solve your dependency graph, and have been for over a decade.
Except where have they done that? They've had 40 years and their solution to the C binary function error code return problem was... to add a special way to return an error code. There have been superior solutions to this for 20 years at least.
How long have we known about the value of generics?
To me, the fact that they can't figure out how to solve this isn't remotely surprising.
Most of Go's features are those that proved their value over 40 years. There are no dependency management schemes that have that kind of reputation. Good ideas take time and misfeatures are expensive.
If they are willing to throw out that requirement for something as fundamental, why should dependency management be held to so high a standard?
Combine those two facts and I think it's fair to say Golang is willing to base things (central ones, even) on untried ideas, which directly contradicts your stated position, which was precisely that no dependency management scheme lives up to the precedent of the other accepted features of the language.
This is trivially proven incorrect via looking at other language dependency management schemes that have much more broadly proven their worth.
The Go designers used CSP over the course of many decades via Newsqueak, Alef, Limbo and Plan 9 C's libthread.
Did the principals enjoy a feature while using it on Plan 9? Likely to be included. A feature having broad industry or academic backing, or a long history? Immaterial.
In fact, as nice as Go might be, if it had the AT&T Research Labs stamp instead of Google's, it would have been just as successful.
It absolutely doesn't contradict my stated position. I was clear about that in my previous post.
CSP inspired Go's features (in that sense alone it is "the basis for Go's concurrency", as alluded to by the Go FAQ), but as I mentioned, it's not integral to Go in any way, and very few programs are modeled in CSP fashion.
Heaven forbid you want to upgrade a package that exists in all 130 projects in the solution (not my call to have that many projects) - you may as well take a long lunch. I will try to make that the last task of the day so I can let it run for as long as it needs to.
The VS UI for NuGet is terribly buggy.
Try to build a C++ or Android project and then go for lunch instead.
NuGet is great.
Most likely someone modified the packages for an individual project and not the solution as a whole. Always manage packages at the solution level and you're much less likely to have these issues, unless you have multiple solutions...
Basically, it 'just works' for simple scenarios; it's just not very good for anything else.
If you want a comprehensive guide to why it's not awesome, look at the Paket website; they cover the issues quite clearly.
^ you can actually use local dependencies, but it's irritating and poorly supported (still uses the global system cache, forcing a cache flush to update).
What constant thrash? In the 6 years since Go 1.0 the only dependency management related change I'm aware of is the addition of `vendor/`.
Update: sounds like "thrash" refers to the variety of community vendoring tools. I guess of the few I've used they all felt fairly similar/interchangeable, so it didn't feel like thrashing changes.
I'm sure I'm forgetting a few. And just when it seemed the Go team might be ready to standardize around Dep (6), they threw this wrench in the works.
I'm of two minds about vgo. I think it has some interesting ideas, but Dep works today, is widely adopted, and has nearly all the features you'd expect from a modern package manager.
There is a standard, and has been one from the early days. The catch is that the standard is not loved by all, so others have created competing package management systems to fit their own needs.
There's definitely a feeling among Go developers that this problem should have been solved much earlier, and that the Go team's years-long refusal to address it caused the proliferation of mediocre tools (looking at you, Glide) that in many ways made the whole situation worse.
When Dep came along, a lot of people breathed a sigh of relief, because we finally have something okay, and we can go back to being productive instead of fighting dependency management problems all day.
This. Sure, new tools have appeared, but there's literally nothing wrong if you decide to just stick with godep or whatever you happen to be using.
These many tools were the thrash, including godep, gb, glide, dep, and more.
Each of these expressed versions with different manifests; many of them could import other tools' formats, but not all. All in all, it has been a mess.
Sure, the Go team has not officially done much, but that's because their stance on the topic has been borderline negligent.
As for "it's a solved problem", is it? Until perhaps the last year, pip, npm, and most other package managers have been plagued with serious issues. I'm not fond of the situation in Go, but I think it's not unreasonable to try something different and less binding than a package manager until a solution emerges which solves the problems faced by other package managers.
I actually also liked dep with the vendoring approach; it is just better to CI a repo with vendored dependencies (no more broken builds because of internet/proxy or GitHub problems). And in Go it was/is even simple to upgrade them. Sadly Go is slowly moving away from that part.
The new approach is bad, because they still pull sources from GitHub, which makes it unsuitable.
E: damn iPhone autocorrect
I know Go is not a very old language, but all the Go projects I was involved in were started before that, and I haven't written any Go in like 5 months. Still, this is too new for all Go developers to have looked at it in depth (I certainly didn't get around to trying it), and yet there's something new here.
I'm not a huge fan of Go anymore after my last experience (writing some web stuff, with CRUD) after having been a huge fan (writing monitoring checks and non-HTTP daemons), and I have no immediate plans - so I've no stake here, but I can really only hope this annoying part (which least-bad dependency management system will I use?) is finally over.
There is no graph solver; that's the whole point.
> Define a URL schema for fetching Go modules from proxies, used both for installing modules using custom domain names and also when the $GOPROXY environment variable is set. The latter allows companies and individuals to send all module download requests through a proxy for security, availability, or other reasons.
At least with vendoring I keep all of my code pinned in one portable repo and all I really need is dep and git. If I clone or backup the repo I don't have to think about cloning the state of the proxy also to get my code to compile.
I also have some reservations about getting developers to adhere to a versioning repo layout standard - that /path/to/v2/ proposal and semantic versioning - absent any automated tools to enforce it (see cats, herding of). How many of the many Go GitHub repos follow the recommended cmd/pkg code layout? Not many. My cynical notion is that anything that's not automated out of the box is going to quickly run into the inevitable proclivity of humans to be messy.
Having said all this, I do think the discussion is worthwhile. My hope is that rather than completely switching dependency management systems, the discussion identifies the things which are still painful with dep and fixes them. Let's face it, the vast majority of projects out there really don't need anything more complicated than dep, so I would hate to see it abandoned.
About vendoring, I think Russ Cox is aware of the issue. Look at this discussion on his website: https://research.swtch.com/vgo-module#comment-3771676619
Hey Phil! It's been a long time. C'mon man, when we worked together, are you telling me you never ran into the never-resolving dependency map with berkshelf and chef? I know I did many times. Locks and a manifest did not solve that. It sounds like the proposed solution should prevent that.
It's obviously not solved satisfactorily.
Edit: I just realized that we may have a different definition of "compilation unit". If you mean "single program" then my point stands. However, if you just meant some sectioned-off part of the program, then I think the drawbacks of that are far less. Basically, I think that it is useful to be able to have access to multiple versions of a datatype (e.g. if you want to write a converter from old -> new) but that use case is far more fringe.
If backwards compatibility is important, then letting me use multiple versions side-by-side in the same modules easily is damn important.
If you live in a filter bubble (survivorship bias).
I swear I should put together a list of these instances.
The popular criticism is that Go isn't reinventing enough; that it's stuck in the 70's and it's not sufficiently innovative.
I've heard go described as a language that doesn't try to solve any hard problems. Let's set aside for a moment whether or not that's a positive or a negative. The bigger problem is that they also seem to completely ignore everyone else's existing solutions to hard problems too, which leads to the criticism that it's stuck in the 70's.
Sure, a lot of the existing solutions to some of these problems aren't perfect or involve tradeoffs that don't work for everyone. In general though, more recent solutions have built upon the lessons of previous approaches and now we've got languages and tooling that get a hell of a lot right, straight out of the box (I'm looking at you Rust / Cargo).
But go isn't doing this. They appear to be trying to come up with de novo solutions to problems from the perspective of the 70's. Whereas everyone else is "standing on the shoulders of giants", they insist on discarding the lessons we've learned over the past fifty years and running one by one into all of the same problems that led us to the solutions we have today. As an observer from the sidelines it's painful to watch them relearn these same lessons the hard way.
As for C#, people keep forgetting .NET has always had a JIT/AOT compiler, since version 1.0.
Just because not everyone bothers to sign their applications and call NGEN at installation time, it doesn't mean it isn't there.
Also Mono always supported AOT compilation, it is the way Xamarin works on iOS and the deployment story on Windows Store since it was introduced on Windows 8.
In any case, a limited form of generics is better than not having any at all.
They don't need to provide a turing complete implementation of generics, even a basic one like CLU had would already be an improvement.
Using go generate feels like the old days, writing generic code in C and C++ with pre-processor tricks.
In any case, the truth of the complaint about generics isn't actually relevant here: the rebuttal is to "people complain about Go not reinventing enough", and they don't, as they (truthfully or otherwise) think it is ignoring already researched and widely-implemented solutions to the problem and complain about this (that is, the complaint is "uninventing", not "not enough reinventing").
But even Swift implementation of generics has its issues and tradeoffs. Look for example at this thread about "Compile-time generic specialization" in Swift: https://lists.swift.org/pipermail/swift-evolution/Week-of-Mo...
Repeating that Go ignores "already researched and widely-implemented solutions" is getting old, really. Would you say that OCaml doesn't have shared memory multicore parallelism because it ignores already researched and widely-implemented solutions??? Would you say that Haskell garbage collector causes long runtime pauses because it ignores already researched and widely-implemented solutions??? The truth is that solving that kind of things is a lot more complex than just copy-paste the code from another language.
In any case, that specific concern doesn't apply to Go, since it doesn't have overloading.
> Repeating that Go ignores "already researched and widely-implemented solutions" is getting old, really
Sure, but that still isn't the point of this discussion: it's not what people should be doing, or the underlying truth, it's what people are doing in practice and they are complaining about the Go team seemingly ignoring the past 40 years of programming language development, not complaining that they're not inventive enough.
I didn't know you worked on Rust and now on Swift :-)
What strategy does Swift use to compile a generic function or method? Does it generate only one polymorphic version of the code, or multiple specialized versions for each expected parameter type (often called code specialization or monomorphization)?
The compiler can and does specialize functions as an optimization, but that's not necessary nor part of the semantic model.
It's interesting that Swift adopted this approach; it confirms what you said earlier about Swift having essentially the same constraints as Go and being a good source of inspiration for Go.
What's the status of Swift on Linux, to write web services and connect to databases?
OCaml people definitely do not ignore them; they are carefully investigating existing code and elaborating a modern and efficient solution (unlike Go, which is too opinionated).
That said, I don't see the link between the work done on OCaml multicore parallelism and Go "being too opinionated"...
If I follow you, Go people, unlike OCaml people, are "too opinionated", and don't "carefully investigate existing code" and don't "elaborate modern and efficient code". Do you realize how arrogant (and a bit ridiculous) this sounds?
That's popular, but unfounded, in my opinion.
I find it ironic that some HN commenters are criticizing the Go team for being stuck in the past when they are precisely trying to innovate and question the status quo on package management.
Actually it's pretty clear that they did their homework.
First, it was literally argued that `go get` just grabbing `master` of all of your dependencies was good enough, because if you ever need to release something backwards-incompatible, you should create a new repo. I wish I were making this up.
Then we just vendored everything and never updated our dependencies, and that too was good enough.
Then GoPM, go dep, and a deluge of other tools that I don’t care to track down or sort into their appropriate order.
And now this, which decides to buck the trends that everyone else follows yet again and attempts to satisfy dependency trees by choosing the minimal compatible versions instead of the maximal ones, with the completely foreseeable consequences that:
* security patches won’t be applied
* issue trackers will be filled with bug reports that are closed with messages saying “this was fixed months ago, update”, and
* updating dependencies will become *harder*, since this is a straightforward result of doing it less frequently
First, the upstream behavior: not supporting the published interface (as it was published, it should work forever) means they need to deprecate that interface and release another with the incompatible changes. However, the PROPER way of handling this is to add new versions of exposed public methods and interfaces/constants under new names, and only if such a major change is required that you can't support the old interfaces, at THAT point, release at least under a new inclusion path if not a new dependency name.
Things a developer might have done wrong locally: A) include an 'immature' upstream repository that behaves like the above, or B) use non-public components or other unsafe practices to go beyond the interface contract that was exposed.
At that point, almost all of the go ecosystem can't be used.
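A minimal sketch of the additive evolution described above, with hypothetical package and function names:

  package textutil

  import "strings"

  // Clean trims ASCII spaces. It is published, so its contract must never change.
  func Clean(s string) string {
      return strings.Trim(s, " ")
  }

  // CleanUnicode trims all Unicode whitespace. The changed behavior ships under a
  // new name instead of silently altering Clean for existing callers.
  func CleanUnicode(s string) string {
      return strings.TrimSpace(s)
  }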
I left the language years ago, but the steady stream of articles of the form “Why go doesn’t need $x” followed by innumerable comments, articles, and blog posts by people struggling with the lack of $x, followed by the go core team (due to the completely foreseeable consequences of not having $x) begrudgingly adopting $y, which is supposed to (but doesn’t actually) obviate the need for $x… I’ve reached my own conclusions, and I’m finding more and more people who are leaving the ecosystem with those same conclusions.
I believe that Go excellently solves problems that Google has. I don’t believe it solves many problems that most other users have, even though it might seem like it at the outset.
To reiterate a different post of mine in answer to this question (implicitly: what are the benefits of NOT having library versions):
* The only version to test against is HEAD
* No fossilization of security or stability issues
* Public libraries must support all Uppercased (exposed) declarations; that is the library interface.
* If the build breaks in an odd way: update go, then: go get -u all
* Simplicity, there's only one supported version and only one version to go get and develop against.
* Also, why would building against an old version _ever_ be necessary?
That's a pretty good argument for using a monorepo, don't you think? A single organisation's code should all be in a single repo, which means that a single version of a dependency is used within the organisation, and that updating to a new version of that dependency is a single, self-contained project.
You can say that about Go itself. And there are already other/better alternatives. So why use Go?
Go has its good sides, but it's just a tool. It's not the best, because tools aren't supposed to be the best. Tools are supposed to be useful for what they're made for.
In comparison I can just start goroutines and Elixir processes for fast and simple I/O parallelism almost without thinking about it.
go dep is perfectly fine for most use cases.
> How many iterations of this do we have to go through?
Fewer iterations than Mr. Cox has gone through. Isn't it great?
"Semantic import versioning" is equivalent to versioned APIs in the web world, no? Though I have never seen anyone utilize the same concepts for language-level packages; it certainly maps well!
If Roy Fielding wanted developers to fully grasp whatever he was talking about with REST, he should have written a normative spec, not a dissertation. He didn't.
Moreover, if I understand Roy’s argument correctly, he’s arguing against versioning URLs, because they are meant to be opaque identifiers that don’t convey any universal meaning. If you know the second segment of a URL path to be a version, it’s because you have out-of-band knowledge that it is, not because there’s anything in the request to say it is.
If however you have a header that tells what version you wish, then that is in-band information that allows you to negotiate the content appropriately with the server. It also means you don’t have to change URLs whenever you make a change to your API, breaking or not. Of course the server and client need to understand this header (to my knowledge there’s Accept-Version header) and if it’s non-standard then it can be argued that it’s just as bad as versioned URLs. Perhaps, but unless you get very granular with your versions (and most don’t, let’s just be honest) you’re still at the mercy of what the server chooses to present you at any given time. In fact, REST gives you no guarantees about what you’ll receive at any given point – you may just as well be given a picture of a cat. REST says you should be able to gracefully deal with this. Most clients (that aren’t web browsers or curl) don’t.
In any case, what you’re saying is the point I was trying to make in the latter part of my post. However a typo (or rather, a missing word: “no”) unfortunately changed my meaning entirely. :o(
It should’ve read:
> (to my knowledge there’s no Accept-Version header)
Oops. My apologies for the confusion.
That is probably true, I don't disagree with anything you are saying. To me the most in-band representation is everything passed in the URL with no user control over headers.
Out-of-band would be a phone call, or perhaps an email - an entirely alternate method of communication.
Transparency to a user (via the client, i.e. browser) isn't really relevant, from a communications standpoint, to whether or not data is considered "out-of-band". Given the subject (APIs), you're not likely to be browsing to these anyway.
What you should be versioning is your media types (e.g. HTML4 vs HTML5).
Well, what he exactly says is that URLs should not include versioning, because URLs are the interface names, and REST states that interface names should not be versioned, as that implies a breaking change in the API. It's just the wrong place for versioning in the REST way.
But he is not against versioning; as you say, you can use the Accept header. You could also use a custom header if you like, but the canonical way would be the Accept header.
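For illustration, a client-side sketch of that negotiation in Go (hypothetical endpoint and vendor media type, not a standard one):

  package main

  import (
      "fmt"
      "log"
      "net/http"
  )

  func main() {
      // The desired API version rides in the Accept media type, not in the URL.
      req, err := http.NewRequest("GET", "https://api.example.com/things/42", nil)
      if err != nil {
          log.Fatal(err)
      }
      req.Header.Set("Accept", "application/vnd.example.v2+json")

      resp, err := http.DefaultClient.Do(req)
      if err != nil {
          log.Fatal(err)
      }
      defer resp.Body.Close()
      fmt.Println("server chose:", resp.Header.Get("Content-Type"))
  }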
I totally understand asking for the version (Accept) during a GET request, if I agreed on that content type in advance. If I haven't, I need to communicate both the URL and the content type, along with the version, to the clients. We don't have a common container for (URL, content type, version); things are getting messy. In a package manager, what is the equivalent of a GET request with an Accept header?
  import foo.bar v1.2.3
  import foo.baz v2.1.3

  new_thing = ::v2:foo.baz.new_thing(1)
  old_thing = ::v1:foo.bar.old_thing(2)
I am strongly in favor of immutable code; I think we should be able to import all versions of a library, referenceable by commit hash.
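For what it's worth, the proposal's semantic import versioning gets part of the way there: two major versions of one module can coexist in a single build because they have distinct import paths. A sketch with hypothetical module paths and functions:

  package main

  import (
      "fmt"

      foo "example.com/foo"     // resolves to some v1.x.x
      foo2 "example.com/foo/v2" // different import path, so a different module: v2.x.x
  )

  func main() {
      // Both major versions are linked into the same binary.
      fmt.Println(foo.Greeting(), foo2.Greeting())
  }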
Thanks for the link as well! I would love to see that experimented somewhere and see if it works.
This is the piece that is very different from other dependency managers and is worth people looking at.
Instead of trying to get the latest allowed version, it's going to get the oldest (minimal) version that satisfies the requirements. If you want to update to newer versions of transitive dependencies (dependencies of dependencies) that your dependencies have not updated for you, you'll need to start tracking those yourself.
There's an issue touching on this at https://github.com/golang/go/issues/24500
Other package managers use the opposite of minimal version selection. Many of them even have a don't-make-me-think command to update the whole dependency tree (e.g., `npm update`).
What do folks think of the implications of MVS, especially for transitive dependency management?
I can't think of more than a handful of times over the past twelve years (primarily Ruby and Rust) where a newer package broke an existing one, and the majority of those times the problem was a new major version that wasn't appropriately accounted for (e.g., the author should have depended upon '~> 2.x.x' instead of '>= 2.x.x'). I'd much rather have that problem addressed. Assume semver, and make the easiest way of specifying a dependency cap it to a major version.
On the other hand, I regularly deal with the consequences of upgrading infrequently — when you do upgrade, it's a nightmare. As a rule of thumb, the more frequently you update your dependencies, the less net pain you experience. Upgrading once a week is relatively painless. Upgrading once a year (or even less frequently) is a nightmare due to the sheer volume of changes coming in at once.
Also, having transitioned to the security side of things over the past few years, encouraging stale dependencies just means that security patches will never get incorporated. A stable project that releases mostly security fixes and few feature changes will — in practice — never see its minimum version bumped in projects that depend upon it.
My prediction is that this will just result in security patches not being applied and the problem of upgrading dependencies will be made generally worse. I will be happy, though, if I am proven wrong.
But I agree that procrastinating on updates creates problems. (At some point, the entire point of version pinning is to allow you the choice to defer, but it's still a problem.)
I think it might be worth exploring reporting as a solution. Everyone knows that old versions are a problem, but what if I had tools to tell me how bad or good the situation is for my project at this moment? And what if they ran by default either periodically or as part of my build or both?
Examples of stuff it could tell me:
* Am I one minor version behind on this one library and it doesn't matter?
* Or am I a major version behind on this other library and I'm using code that isn't even supported or maintained?
* Are there security fixes that I haven't taken?
* Are there libraries that have security fixes but no release is available yet?
* How about a list of libraries that I'm not on the latest minor version of AND the latest version has been available for more than 2 weeks? (Maybe I don't want to fall behind but I don't want to be a guinea pig either.)
* Or a list of libraries that I'm not on the latest major version of and a newer major version has been available for 6 months?
Since this is important, it would be great to have real visibility into it. Right now, in every build I've ever done, this is just something that people track in their heads and assume they have a good handle on. Doing regular releases and always taking the latest version of everything helps somewhat, but sometimes a release gets canceled. Or maybe there's a system that isn't being regularly worked on and doesn't have regular releases, yet its dependencies keep being updated; it is behind, but by how much?
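To sketch the idea, here's a toy version of such a report in Go, with hard-coded data standing in for the registry/proxy queries a real tool would make:

  package main

  import (
      "fmt"
      "strings"
  )

  type dep struct {
      path, current, latest string
      securityFix           bool // a security fix exists after `current`
  }

  func major(v string) string { return strings.SplitN(v, ".", 2)[0] }

  func main() {
      deps := []dep{ // pretend these came from the registry
          {"example.com/a", "v1.2.0", "v1.3.0", false},
          {"example.com/b", "v1.0.0", "v2.1.0", false},
          {"example.com/c", "v1.4.1", "v1.4.3", true},
      }
      for _, d := range deps {
          switch {
          case d.securityFix:
              fmt.Printf("%s: SECURITY fix unapplied (%s -> %s)\n", d.path, d.current, d.latest)
          case major(d.current) != major(d.latest):
              fmt.Printf("%s: a major version behind (%s -> %s)\n", d.path, d.current, d.latest)
          case d.current != d.latest:
              fmt.Printf("%s: minor/patch behind (%s -> %s)\n", d.path, d.current, d.latest)
          }
      }
  }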
No, minimal refers to the number of dependency changes between upgrades, not the version numbers of the dependencies. Dependency conflicts initially resolve to the higher of the two versions.
https://research.swtch.com/vgo-mvs section "
Algorithm 1: Construct Build List"
In the example, a dependency is upgraded and no longer needs one of the shared transitive dependencies. Instead of downgrading it to the version declared by the unchanged dependency, it keeps the higher version that you were already using. When upgrading, transitive dependencies are never downgraded, only added (or removed).
https://research.swtch.com/vgo-mvs section "
Algorithm 3. Upgrade One Module"
  --> Dependency A --> Dependency C (at version 1.2.3)
  --> Dependency B --> Dependency C (at version 1.3.4)
The latest release of Dependency C is 1.4.5. Which version would be used after running `vgo get -u`?
As you note it would use the newer version of those specified (version 1.3.4). The newer releases after those explicitly specified are not used.
If you just do plain vgo get, you will get 1.3.4, because of the minimum version approach.
The vgo approach will encourage package authors to update their dependency versions to what they have actually tested with, which is very useful information for consumers of the package.
Err, no. For each library it uses the newest version of the library that is used by any of its dependents (including transitive dependents).
Especially the line: Simplify the rough build list to produce the final build list, by keeping only the newest version of any listed module.
Different major versions are considered different libraries essentially.
To be honest I thought this was all obvious and I'm sure this approach has been recommended by Go people in the past.
It has another nice advantage over Rust's system - you can have pre-releases of major versions. Rust doesn't really have a way to do that except for releases before version 1.0.
In other words, when you want to make some incompatible changes and release version 2 of your library, there's no good way to put test versions of it on crates.io.
Semver has a specific way to indicate that a version is a pre-release, and people use it. The second post on /r/rust is advertising a pre-release of the next major version of rand: https://crates.io/crates/rand/0.5.0-pre.0
(Yes, this is before 1.0, but 1.5.0-pre.0 would work just fine too!)
I found this bug report and it seems like it has some issues: https://github.com/rust-lang/cargo/issues/2222
vgo does have that too: vgo get -u
Why would you want to pull a version that is newer than the author of the library you depend on has actually tested with? You can of course force this in vgo, but having the default use the versions specified by the authors makes a whole lot more sense than just using the newest.
There are two reasons:
1. To install a security update or bug fix you need in a transitive dependency that the author of the dependency you're using hasn't updated to.
2. To use the same workflows across all my dependency management tools (npm, cargo, composer, bundler, and the rest of the lot follow the same patterns, and vgo goes against the patterns used by the others).
2) seems like a bit of a silly reason, if we all wanted to make everything work the same all the time we wouldn't make much progress or try anything new. Whether vgo's approach is correct or not we don't know yet, but saying that it isn't familiar isn't a good enough reason to not try it out.
To me, vgo matches what we already do in our Python projects with a lot of dependencies: pin everything in a freeze and upgrade on a schedule when we need to. We have seen far too many failures doing it any other way (i.e., using the "latest" of everything, which often either breaks semantic versioning and actually breaks things, or introduces subtle bugs that didn't exist before).
See the vgo tour : https://research.swtch.com/vgo-tour
The indirect dependency (rsc.io/sampler - which is a dependency of rsc.io/quote), is also upgraded to the latest version v1.99.99 when vgo get -u is done.
Personally, I'm very excited and impressed by the ability of the Go team to innovate by diving deep and understanding every aspect of the problem at hand and the existing solutions, instead of just blindly adopting whatever already exists.
But dependency software has been trending toward the use of lock files for a while now - and without explicit developer intent, those won't get bugfixes either.
I think I'm mostly concerned about how this affects transitive dependencies. My package Foo depends on Bar_1.3.0, which depends on Baz_3.2.4. If Baz gets a security update to Baz_3.2.5, either I need to add an explicit dependency "Baz_3.2.5" to Foo, or wait for Bar to release 1.3.1 that depends on Baz_3.2.5.
If go adds tooling to identify and make these transitive dependency upgrades as easy as "npm update", then I will be a little bit less uneasy.
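In the meantime, the manual fix under vgo would presumably be an explicit requirement in Foo's go.mod raising the minimum above what Bar states. A rough sketch with hypothetical paths (note a v3.x.x version implies a /v3 import path under semantic import versioning):

  module example.com/foo

  require (
      example.com/bar v1.3.0
      example.com/baz/v3 v3.2.5 // explicit: raises the minimum above Bar's stated v3.2.4
  )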
You know, I wonder if there's something here that a next-generation language can't get in on, some sort of help to provide to the developer who says "OK, I'd like to upgrade this package for people, could you please help me ensure that I'm not going to break anybody in the process?"
Possibly this line of thought terminates in very richly dependently-typed languages, which is a bit of a utopia. But perhaps there's something in between? Or something that can be added to an existing language like Rust?
I'm not even initially certain what that would look like. A version-aware programming environment in which one can sensibly say "Yes, for 1.1 I upgraded the unit test but please run the 1.0 unit tests against the 1.1 code"?
It seems like this is a growing problem and there's probably an opportunity of some sort here.
Russ has proposed a "go release" command that is intended to help with that process. It's probably simple right now, but has lots of room to grow in that direction.
I think there's an opportunity even within existing languages: more shared CI infrastructure. Imagine if project authors had some easy way of running their downstream consumers' test suites as they develop?
SAT is not a problem at all. Yes, you can construct a worst-case scenario for it that will chew up a ton of CPU. In practice it simply doesn't happen, and trying to defend against it is both a waste of effort and leads to crippled decision-making.
And for "understandability" in a conceptual sense, personally SAT has always felt simple and predictable to me - "find something within all bounds or err" is something we do all the time by hand.
E.g., pip essentially ignores version constraints if something's already installed or some other lib already mentioned it. It's absolutely terrible and causes many problems in anything long-lived.
It also seems within the realm of imaginable that having an easy-enough-to-implement scheme that you're not falling back on writing heuristic backtracking search might result in more tools being written.
Obviously, you are right that being NP-complete is clearly not a showstopper or a huuuge problem, in practice. Still, seems nice to avoid if you can!
By the way, that's interesting to hear about pip. I do my best to avoid python, but I inevitably end up reluctantly wanting to use some tool written in it, and inevitably it breaks. What a fucking shit-ass ecosystem.
If you ever get back into Pythonlandia, do check out pip-compile - it's a properly sane package manager, following the normal SAT solving path. Major lifesaver, 100% recommended.
I enjoy the language well enough - it's readable and expressive. But it's so terrible for building a business on top of, and that's largely due to the ecosystem.
This feels far better than the current model used in most languages. If you've ever had struggles creating an uberjar you know this pain.
E.g. repo managers like Maven central/etc. should use binary API analysis to reject any jar upload that has breaking changes.
My only hesitation is that, AFAIK, semantic import versioning has never been tried at scale, so having to constantly bump imports from "com.foo.v1" to "com.foo.v2", and deal with "app1 wants to pass com.foo.v1 objects to app2, but it expects com.foo.v2 objects" might introduce more pain than expected.
Granted, right now app1/app2 are blithely passing around "com.foo" objects that may/may not be compatible, but if it's an 80/20 thing, or 99/1 thing, and most of the time you get lucky and it works, perhaps that's good enough.
But would be great to have go be the first community to try this at scale and see how it goes. I like it.
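To make the app1/app2 concern concrete, a sketch with hypothetical module paths and types; the two major versions are distinct Go types, so a mismatch fails at compile time rather than sometimes getting lucky at runtime:

  package main

  import (
      foo "example.com/foo"     // v1 defines type Widget
      foo2 "example.com/foo/v2" // v2 defines its own, incompatible Widget
  )

  func process(w foo2.Widget) {}

  func main() {
      var w foo.Widget
      // process(w) // would not compile: foo.Widget is not foo2.Widget
      process(foo2.Widget{}) // callers must construct or convert to the v2 type
      _ = w
  }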
Here's a modified version of the example graph, written as pidgin DOT:
  A -> B1.2
  A -> C1.2
  B1.2 -> D1.3
  C1.2 -> D1.4
  D1.3 -> X1.1
  D1.4 -> Y1.1
What is the build list for A?
The rough build list for M is also just the list of all modules reachable in the requirement graph starting at M and following arrows.
Simplify the rough build list to produce the final build list, by keeping only the newest version of any listed module.
The list of all modules reachable from A is B1.2, C1.2, D1.3, D1.4, X1.1, and Y1.1, so that's the rough build list. The list of the newest versions of each module is B1.2, C1.2, D1.4, X1.1, and Y1.1, so that's the build list.
The build list contains X1.1, even though it is not needed.
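For the curious, a toy sketch (not vgo's actual code) of those two steps applied to the graph above:

  package main

  import (
      "fmt"
      "sort"
  )

  // Names like "D1.3" mean module "D" at version 1.3.
  var requires = map[string][]string{
      "A":    {"B1.2", "C1.2"},
      "B1.2": {"D1.3"},
      "C1.2": {"D1.4"},
      "D1.3": {"X1.1"},
      "D1.4": {"Y1.1"},
  }

  func main() {
      // Rough build list: every module reachable from A.
      seen := map[string]bool{}
      var walk func(n string)
      walk = func(n string) {
          for _, dep := range requires[n] {
              if !seen[dep] {
                  seen[dep] = true
                  walk(dep)
              }
          }
      }
      walk("A")

      // Final build list: keep only the newest version of each module.
      // (Lexical comparison is fine for these toy version strings.)
      newest := map[string]string{}
      for n := range seen {
          name, ver := n[:1], n[1:]
          if ver > newest[name] {
              newest[name] = ver
          }
      }
      var final []string
      for name, ver := range newest {
          final = append(final, name+ver)
      }
      sort.Strings(final)
      // Prints [B1.2 C1.2 D1.4 X1.1 Y1.1]: X1.1 stays even though only D1.3 wanted it.
      fmt.Println(final)
  }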
Both the manifest (dependency section) and lock files become unnecessary. It’s DRY.
Dependencies are specified where they’re used, improving componentization.
Upgrading and forgetting to update one location is easily fixable: tooling already scans Go code for a list of imports; modify this to warn on version differences, or even to update them.
Git history gets “cluttered”, but shouldn’t it be? The behavior of values in a file is changing. This constitutes a change of requirements on the file’s code, or at least needs a moment’s review to decide the code needn’t change. Seeing that change in history would make tracking down any bugs it causes easier. Besides, we’re talking more files changed in a single edit, not more edits.
Semantic versioning is a qualitative description, not a guarantee. Due to edge-case use or human error, every minor or patch update may be a breaking change. It would be better to have a layer of human interpretation between semantic versions and code changes, rather than tooling that assumes them to always be correct.
Fixing bugs or extending the interface (with additional and possibly corrected parameters) is one thing.
If the interface specification ever needs to have backward incompatible changes, the /library/ needs to be renamed (or at least have a different leading import path).
If something doesn't work, development's answer is always going to be "test it with the latest version" (at least the latest shipped, if not git HEAD).
Furthermore, if you have diamond dependencies, A might end up version-bumping more than once.
Regular edits would get lost in a sea of version-bumps.
If C changes and B's behavior does reflect it, then A does need to know about it.
The "sea" is limited to times that direct dependencies change their behavior, which is what it already is.
Suppose C has a security patch, and B doesn't have any new functionality to release? Does it keep using the version without the security patch forever, because that's what's listed in the import statement? Or does someone have to update it?
If B isn’t otherwise updated at a similar time, then yes, I advocate dependency update-only releases. I think this is as it should be. A project should be able to know what code it runs, specifically.
And yes, if a project is abandoned then security patches don’t get magically applied. Again, as it should be. The fact that you have an unmaintained dependency is itself a problem. You can’t just auto-apply security patches and expect things to keep working, ask any Linux distro. What needs to happen is a fork (or dropping the dependency). Tools warning you encourages this; silently auto-applying patches encourages everybody to separately do their own fixes and workarounds.
It's certainly a common case that they're all the same, and we can handle that by augmenting go, which already scans the project for import lines, to warn on different versions, or even update them in the import lines.
I'm sure it's disheartening for people who worked on dep. I've had a great experience with it too, especially after the .4.1 release.
While not everyone agrees, one of my favorite things about Go is that every decision is well thought out in terms of its effects on the entire ecosystem. Just because some other language or toolset does something isn't a good enough reason to force it into Go. I trust the overlords, they've been good to me so far. :)
IIRC this was because of a libgit2 issue preventing them from doing a shallow clone, though, so there's no way around it for now.
Disclaimer: I use both Go and Rust on a daily basis and think both are nice in their own way.
Yes, Cargo doesn't use a SAT solver, but Cargo's source code acknowledges that "solving a constraint graph is an NP-hard problem" and uses a "nice heuristic to make sure we get roughly the best answer most of the time".
It's not just a theoretical consideration; it can create real problems. See for example "Abort crate resolution if too many candidates have been tried" at https://github.com/rust-lang/cargo/issues/4066. I'm not saying it's a big issue, but it's something to consider in the design space, and this is why the Go team is considering other options.
I respect Russ Cox's decision to favor different considerations for Go's versioning story. The approach of constraining to minimal versions is not bad, merely different (especially since the -u flag exists). But the framing of this as solving some problem with existing package managers is simply mistaken, as Russ would know if he had used these tools in practice, rather than instinctively reeling at the theoretical implications.
It's not fair to accuse people of not doing their homework when they actually are...
Russ Cox published "Version SAT" in December 2016.  The article specifically mentions "Rust's Cargo" which "uses a basic backtracking solver".
> […] the import uniqueness rule: different packages must have different import paths. […] Cargo allows partial code upgrades by giving up import uniqueness.
> The constraints of Cargo and Dep make version selection equivalent to solving Boolean satisfiability, meaning it can be very expensive to determine whether a valid version configuration even exists.
> eliminates the need for separate lock and manifest files.
Note that Cargo is seen as a "gold standard" approach here, upon which rsc is trying to improve.