However, I am also in the unusual position that I had to design and implement a package manager for my own little language recently. My original hunch was to just create a Cargo clone, but after reading Russ Cox's writings, I ended up cloning vgo (with a few simplifications). The implementation was wonderfully simple - everything just fit together. The algorithms are trivial. Operationally, it is easy to understand what the package manager is doing (even more so than for vgo, as vgo has to deal with various Go idiosyncrasies and backwards compatibility).
This is in stark contrast to my experience with other package managers, which are temperamental beasts at the best of times. When they work, everything's groovy, but their error modes are often incomprehensible. I think vgo's approach of restricting expressivity and streamlining processes is the right one. But since nobody has used such a package manager before, it remains to be seen whether it works in practice.
For my own experiment, I do have one data point: I showed people the (rather brief) documentation for my vgo-inspired package manager, and they all felt it was very simple to follow and easy to understand what it did, and how.
I think it's less that Go has significantly different needs, but it's more that people overestimate what their actual needs are.
I think you are right. And I doubt it hurts that SAT solving is a fun problem!
My main package management experience has been with Haskell, which has used the cabal tool for many years. Cabal was a traditional solver-based tool (with the added pain of a global mutable package database, although that is going away), and it frequently broke down in confusing ways. "Cabal hell" was a widely used term. A few years ago, another tool arrived on the scene, Stack, which used the same package format as cabal, but snapshotted the central package list (Hackage) by gathering subsets of packages and versions that were guaranteed to work together (or at least not to have conflicting bounds). It works well, and although it does in principle result in a major loss in flexibility, it's rarely something I miss. Importantly, the improvement in reliability was nothing short of astounding. That certainly helped convince me that flexibility may not be a needed feature for a (language) package manager.
(There are all sorts of socio-political stack/cabal conflicts in the Haskell community now, but I'm not sure they are founded in technical issues.)
* Semantic Import Versioning. A package is identified by some name. I think vgo calls it an "import path", but I have also seen "package path" used. Every version of that package must remain compatible with previous versions. You may not break compatibility (i.e. increment the major semver number) without also renaming the package. There is some syntactic sugar that makes it clear that a version 2 is closely related to the original version 1, but from the point of view of vgo, the two versions are completely distinct packages. This also means a program may depend on several major versions of the same package (which Russ claims is necessary when doing gradual migrations of large programs).
* Minimal Version Selection. In contrast to pretty much every other package manager, vgo tries to install the oldest version of a package that still satisfies all lower bounds specified anywhere in the dependency tree. Upper bounds are not supported. This not only makes the solving algorithm trivial (which probably doesn't matter much), but also gives you reproducible builds without using a lockfile (which is nice), and a very simple operational model for what the package manager is doing (which I think is crucial).
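To make "the algorithms are trivial" concrete, here is a minimal, hypothetical sketch of Minimal Version Selection in Go. Module names and integer versions are invented for illustration; this is not vgo's actual code, just the core idea: walk the requirement graph from the root, keeping the maximum of the minimum versions requested for each module.

```go
package main

import "fmt"

// modVersion identifies one version of one module. Real vgo compares
// semver strings; plain integers keep the sketch short.
type modVersion struct {
	name    string
	version int
}

// requirements maps each module@version to the minimum versions it requires
// (i.e. the contents of its hypothetical go.mod file).
var requirements = map[modVersion][]modVersion{
	{"app", 1}: {{"a", 1}, {"b", 2}},
	{"a", 1}:   {{"b", 1}, {"c", 3}},
	{"b", 1}:   {},
	{"b", 2}:   {{"c", 1}},
	{"c", 1}:   {},
	{"c", 3}:   {},
}

// minimalVersionSelection picks, for every module reachable from root,
// the maximum of all minimum versions requested anywhere in the tree.
func minimalVersionSelection(root modVersion) map[string]int {
	selected := map[string]int{}
	var visit func(mv modVersion)
	visit = func(mv modVersion) {
		for _, dep := range requirements[mv] {
			if dep.version > selected[dep.name] {
				selected[dep.name] = dep.version
				visit(dep) // re-walk with the newer requirement list
			}
		}
	}
	visit(root)
	return selected
}

func main() {
	fmt.Println(minimalVersionSelection(modVersion{"app", 1}))
	// b resolves to 2 (app's own minimum) and c to 3 (a's minimum),
	// even though b@2 only asked for c@1.
}
```

Note that the result is fully determined by the requirement lists: no lockfile, no solver, and every selected version appears verbatim in someone's requirements.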
All package managers suck and all package managers break, but I think when vgo breaks, it will be more obvious what is wrong. Time will tell whether it works in practice!
(And I'm not even a Go programmer; I just like the thoughtfulness that goes into the tooling.)
> vgo tries to install the oldest version of a package that still satisfies any lower bounds specified anywhere in the dependency tree
Another benefit of this that may be a little less obvious (it wasn't obvious to me) is that every package-version selected by MVS in the entire dependency tree is explicitly specified somewhere in the dependency tree. This means that every selected package-version was specifically tested with at least one other package (assuming, reasonably, that libraries are tested with the versions they specify). You don't get this with SAT; in fact, it's possible for SAT to select a set of packages where no two package-versions were ever tested with each other anywhere else before.
I feel that this property, combined with upgrades requiring affirmative action by the root package maintainer (by editing go.mod or having a tool do it), will make vgo-managed packages much more reliable over time.
I find the claims that vgo removes the lockfile disingenuous. Yes it's technically correct in that there's no lockfile, but it's incorrect in that you've basically turned the file that declares dependencies into the equivalent of a lockfile.
With something like cargo, I can `cargo update` and it will fetch new versions of my dependencies and update my lockfile. Net result, I have one file to commit changes to (the lockfile), with zero manual edits.
With vgo, updating a dependency requires manually editing the dependency list to specify the new version. Net result, I have one file to commit changes to (the dependency list), but it required manual editing.
In both scenarios, the one file that I have to commit changes to specifies the version of the package that will be used. Though that's not actually strictly true with vgo; there it specifies the minimum version that will be used, but that's not necessarily the actual version if another dependency requires a later version. Not a big deal, but it does mean there's no single file I can inspect to find out what the resolved package version is.
So that difference isn't actually all that different.
The "known good set" of packages as used by buildout for any given plone version was not really easy to work with.
Which points to things like:
Which aren't that much worse/better than a gemfile.lock for a rails project, I guess.
Now, I guess the better answer is to have smaller projects, with different interfaces (e.g. don't give all modules/parts access to your zodb object database, have some json/rest interfaces, etc.).
But apparently there are still many "big systems" being built.
That's really interesting. Obviously this would lead to fewer regressions, but it seems opposed to the "security problems fixed for free" scenario you hear about with other package dependency schemes. Were we all just imagining that scenario? Is there another way we should be handling vulnerabilities, e.g. by "repudiating" old versions rather than simply releasing new ones? Then you could have something like "Minimal Unrepudiated Version Selection".
You can't have it both ways.
You either get reproducible builds (by default with vgo, by adding lock files in other systems) or you get silent upgrades that potentially fix security issues (but also potentially introduce security issues).
You also mis-characterize the position of vgo's creator. He doesn't think that lock files are a huge issue per se.
The difference is in the default behavior of the system: vgo by default picks predictable, consistent versions of dependencies. Those versions don't change if dependencies release new versions.
In fact, it's not just the default behavior but the only behavior.
Other systems allow specifying complex rules for which version of a dependency to pick, and they all allow for a scenario where you run the same algorithm over the same rules but pick a different version because in the meantime some dependency has released a new version.
It's such a big problem that all those systems end up introducing lock files, which is a tacit admission that what vgo does by default is a good idea.
vgo doesn't need lock files to get the benefit of lock files, which is a nice cherry on top but the real advancement is in changing the default behavior of how resolving versions of dependencies work.
That is the great thing about lock files. You don't have to check them in. If you want reproducible builds, you do check them in. If you don't, you don't. The user, not the package manager, gets to make this choice on a case-by-case basis.
> The difference is in the default behavior of the system: vgo by default picks predictable, consistent versions of dependencies. Those versions don't change if dependencies release new versions.
And that right there is the problem. You don't get the latest version unless you explicitly ask for it. This makes the user do what a package manager is perfectly capable of doing. It thrusts the problem onto the user instead of applying a well-known solution that was causing problems for nobody.
If you are writing an application, don't you want your transitive dependencies to get security fixes without having to trawl through the tree?
And how does it guarantee that this actually results in anything that works, if upper bounds are not supported?
This is somewhat misleading. Most package managers use a human-edited manifest containing constraints and a machine-edited lockfile, containing the chosen versions. You periodically (or after changing dependencies to require newer versions) run a tool that fetches the newest versions satisfying your constraints and writes out a lock-file, which you then commit and use at install-time to get reproducibility.
Go modules use a single file that serves as both and is both human- and machine-edited. You periodically (or after changing dependencies to require newer versions) run a tool that fetches the newest versions satisfying your constraints and updates the manifest with them. The manifest is then (also) used at install-time with minimum version selection to get reproducibility.
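For illustration, a single go.mod file (module path and versions here are hypothetical examples) serves as both the human-edited manifest and the machine-updated record of chosen versions:

```
module example.com/myapp

require (
    github.com/pkg/errors v0.8.0
    golang.org/x/text v0.3.0
)
```

The `require` lines are lower bounds that MVS resolves against, and the upgrade tooling rewrites them in place rather than emitting a separate lockfile.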
What that means is that in practice, Go modules never install older versions than the equivalent in other package managers would. The maintainer specifies constraints and uses a tool to decide what version will actually be used at build time. In both cases, that tool will choose the newest versions available.
What is correct (and contentious) is that Go modules have no concept of upper bounds for dependencies. So the complaint is that, as a library author, you cannot prevent one of your reverse dependencies from choosing a newer version. It remains to be seen how much of a problem that will be in practice.
So, if anything, the problem is that as opposed to other package managers, Go modules sometimes choose too new versions, from the perspective of some people. It never chooses too old versions.
This alone would make npm 100x less painful to use.
Admittedly, the "update all packages to latest compatible version" flag described above sounds very nice. There is approximately zero chance the npm developers would ever accept such a PR, but there would be nothing wrong with a utility that accomplished the same thing.
Lots of mentions in Peter's post about things the "dep committee" may or may not have agreed with. Isn't this the same appeal to authority he is throwing at Russ? When did the "dep committee" become the gatekeepers of Go dependency discussions and solutions? Looks like a self-elected shadow government, except it didn't have a "commit bit". Someone should have burst their balloon earlier, that is the only fault here. Russ, you are at fault for leading these people on.
Go is better off with Russ's module work and I personally don't care if Sam and Peter are disgruntled.
Note also that he doesn't dispute that Russ tried to bring them into the fold with his concerns and direction but they rejected his arguments. That's a fight they should have known they were going to lose.
Maintainers / BDFLs get the final say, that's just how it is. Just because there is a large community does not mean the community gets the final say, and just because you put in a lot of work does not mean you get the final say. The reason BDFLs are in the position of making that decision is because they have proven through the accumulation of their previous decisions that they get it right more than they get it wrong. And that's the very reason there is a community around them at all.
If you walk into ANY project, public or private, open sourced or closed, and ignore the lead's objections when you go off in a direction they don't agree with, then yes, you are very likely in for some wasted work. That's just how the world works.
On another note, I hope this is the last we hear about all this; please, let's move on.
I only became aware of this by going to a meetup and seeing a talk. I just want something that's workable and not broken.
The reason BDFLs are in the position of making that decision is because they have proven through the accumulation of their previous decisions that they get it right more than they get it wrong. And that's the very reason there is a community around them at all.
This is the same reason given for totalitarian rule generally. The problem is that the dictator has enough power to make a big mistake, committing the entire community's resources. In this position, if you can avoid making a decision, you should. Toyota doesn't simply make such decisions by fiat. Instead, they have a competition.
I mean in a way didn't the `dep` folks do exactly the same thing to the `glide` and other existing solutions?
Was the competition squashed?
Note also that the risk you propose for bad decisions is mitigated for Open Source. If that decision really turns out to be so bad then it will be forked and a wiser leader(s) will have their shot at decision making.
`dep` isn't squashed! You are free to continue using it, it's just that they likely won't find much audience anymore.
No, that doesn't work in practice. The network effects are so insanely dominant in open source projects that forks are nowhere near an efficient market.
The market doesn't have to be anywhere near efficient. It just has to allow any escape whatsoever from a death-march/death spiral.
Also this seems like a misuse of the term "survivorship bias". You claimed upthread that forks don't work because of network effects. The response was "some forks work". That is a direct refutation, no matter what the percentages are. Besides, you're misunderstanding the varied purposes of forks. I have several forks right now, not because I hate the original maintainers but because it was convenient to change a small thing for my own purposes. Whether or not upstream eventually agrees with me, my fork "works" perfectly well.
Citation needed. In my corner of the community this is a very contentious issue.
The golang dependency management story has been a disaster for years. Nothing about the modules or vgo story has seemed to solve that.
In fact, it's Russ who seems to be going against the norms of the community, choosing cutting-edge and untried technologies over well-understood and tested ones. If he was going to get edgy with his leadership decisions, couldn't he have done it somewhere more valuable, like cleaning up the type system?
The only curious element here is that a (the?) judge also had the winning entry. Russ claims he got buy-in from the other core members and so far no one has contradicted that.
AND...vgo is better than dep. Let's not forget that. Peter can hem and haw about politics but in the end the best horse won.
Go developers want the tools to be as good as possible, 99% of them don't know anyone who contributes to Go by name so this drama is irrelevant to them.
Nobody has worked with minimum version selection and compared it to the traditional approach enough to know. I have serious reservations as to how it will work in practice.
Facts not in evidence.
You're comparing very different things. Programming languages and empires/nations/companies aren't the same thing. Rails has stayed on the course DHH wants it and hasn't become a mess mainly because DHH remains the BDFL. Go could benefit from similar structure. We're now about to see how Python will do post-Guido.
* and DHH happens to make good decisions frequently enough.
For every successful BDFL example, history is littered with a hundred dead languages and frameworks designed by a single visionary leader who made the wrong decisions.
E.g. there's the namesakes for Rails and Ruby, i.e. Grails and Groovy. Virtually no-one's upgraded to Grails version 3 since it came out 3 yrs ago, or started new projects in Grails version 2. The Grails 2 plugin ecosystem is as good as dead. It only has its "single visionary leader" listed for the 3 contact persons (owner, admin, tech) in the grails.org DNS registration.
As for Apache Groovy, it's hanging on as the build language for Gradle and Android Studio, but doesn't seem to have any other significant use besides its original use case of glue code and testing harnesses. Groovy's problem is that its creator, who had successfully added closure functionality to a clone of Beanshell, left the project after 3 yrs, and the "despot" at Codehaus who subsequently claimed the title Project Manager was someone who didn't have the aptitude for many programming tasks.
Fortunately Perl5 still does the Pumpking thing which is effectively a rotating BDFL.
I would say that this hasn't been the case for the past 10 years at least. Patrick Michaud has been Perl 6 pumpking since then, to make room for Jonathan Worthington about 2 years ago. Hardly a revolving door and hardly clowns.
(It's not all the pumpkings' fault, either. I think Parrot would still have been doomed if its leadership were constantly flawless, just because it was designed before Perl 6's object system was.)
> Russ failed to convince the committee that it was necessary, and we therefore didn’t outright agree to modifying dep to leverage/enforce it.
> The community of contributors to dep were ready, willing, able, and eager to evolve it to satisfy whatever was necessary for go command integration.
If you're willing to say: "Sorry BDFL, you didn't convince us this was necessary so we're not doing it." Then you're not willing to do whatever is necessary, BDFL objections are necessities. Peter and Sam just found out what happens when an unstoppable force meets a quite movable object.
I agree with the sentiment that Russ's only mistake here was leading them on for too long. I see this as an argument in favor of Linus' style of BDFLing, in this case he would have cussed them out over their design's flaws and told them in no uncertain terms that it was never getting merged and probably that they were stupid. It would have hurt more at the time, but also probably would have saved them a ton of time and, in the long run, might have even hurt less.
I remember being excited when the “dep committee” was formed, because (if I remember correctly) it was set up with the explicit blessing of the Go Team, with a Go Team Member on it to facilitate two-way communication.
Your characterizations are both inaccurate and ugly.
I think it may be blunt, but it seems true to me. Sam calling the integration of Go modules a sad milestone does seem disgruntled.
The dep folks have decided not to take the high road, which is fine, but others are free to call this out.
My impression is that the dep folks understand that too. The problem is that there is no consensus on what was learned from it.
The dep folks seem to have come away convinced that a SAT-solver approach is the better approach. rsc is clearly convinced of the opposite.
Everyone knows it is ultimately rsc's call, so I don't think talking about the power dynamics is very interesting. What I am more interested in is whether or not it's the right call. A good faith interpretation is that the dep folks aren't sad that their solution lost, it's that what they believe is a better solution lost.
Satisfiability problems of this kind appear in a ridiculous number of fields and applications
(and not just by reduction).
The vast majority of them, in practice, are approximated rather than exactly solved.
Most of the ones that are exactly solved are in software verification, model checking, etc. Areas where having an exact answer is very critical.
Outside of that, much like you see in MVS, they approximate or use heuristics. And it's fine. You don't notice or care.
The idea that "package management" is one of those areas that absolutely must be exactly solved to generate acceptable results seems to me to be ... probably wrong.
There are much more critical and harder things we've been approximating for years and everyone is just fine with it.
(i.e. not clamoring for faster exact solvers).
Thus I have trouble saying a SAT solver is a better solution.
It certainly would be a more "standard" one in this particular instance, but that's mostly irrelevant.
It's also a very complex one that often fails in interesting ways in both this, and other, domains.
The logical conclusion of your statement is that minimal version selection is the wrong approach! Minimal version selection is an "exact" solution, in contrast to the traditional one. You would only arrive at MVS if you considered the problem of selecting precise dependencies to be so important that it's worth making it the user's problem instead of having the tool solve it. The philosophy of the traditional package management solution is that it's best to have the tool do the right thing--select most recent versions that satisfy constraints--so that the user is free to worry about more important things.
> There are much more critical and harder things we've been approximating for years and everyone is just fine with it.
Yes! That's why MVS, and by extension vgo, is oriented around solving a non-problem!
> It's also a very complex one that often fails in interesting ways in both this, and other, domains.
I cannot name one example of a single time SAT solving has failed in Cargo.
"Minimal version selection is an "exact" solution, in contrast to the traditional one. "
It's not an exact solution to SAT, it's an exact solution to a simpler problem than SAT (2-SAT). A problem that admits linear time solutions, even.
That is in fact, what a lot of approximations actually are - reduction of the problem to a simpler problem + exact solving of the simpler problem.
Some are heuristic non-optimal solvers of course, but some are not.
Certainly you realize the complexity and other differences between "an exact solver for SAT" and "an approximation of a SAT problem as a 2-SAT problem + an exact solver for 2-SAT"
I can write a linear time 2-SAT solver in about 100 lines of code and prove its correctness. It's even a nice, standard, strongly-connected-component based solver.
Here's a random one:
So if I have an SCC finder implemented somewhere, it's like 20 lines of code.
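For the curious, here is a hedged sketch of such an SCC-based 2-SAT solver, written in Go for consistency with the thread (this is the standard textbook construction, not any particular package manager's code): build the implication graph, find strongly connected components with Tarjan's algorithm, and declare the formula unsatisfiable exactly when some variable shares a component with its own negation.

```go
package main

import "fmt"

// twoSAT decides satisfiability of a 2-CNF formula over n boolean variables.
// Literals: +v means variable v is true, -v means v is false (v is 1-based).
type twoSAT struct {
	n   int
	adj [][]int
}

func newTwoSAT(n int) *twoSAT { return &twoSAT{n: n, adj: make([][]int, 2*n)} }

// node maps a literal to its graph node: v true -> 2(v-1), v false -> 2(v-1)+1.
func (s *twoSAT) node(lit int) int {
	v := lit
	if v < 0 {
		v = -v
	}
	idx := 2 * (v - 1)
	if lit < 0 {
		idx++
	}
	return idx
}

// addClause adds (a OR b) via the implications (!a -> b) and (!b -> a).
func (s *twoSAT) addClause(a, b int) {
	s.adj[s.node(-a)] = append(s.adj[s.node(-a)], s.node(b))
	s.adj[s.node(-b)] = append(s.adj[s.node(-b)], s.node(a))
}

// solve runs Tarjan's SCC algorithm. Unsatisfiable iff a variable and its
// negation land in the same component; otherwise a literal is true when its
// component comes later in topological order (smaller Tarjan component id).
func (s *twoSAT) solve() ([]bool, bool) {
	n2 := 2 * s.n
	index, low, comp := make([]int, n2), make([]int, n2), make([]int, n2)
	onStack := make([]bool, n2)
	for i := range index {
		index[i], comp[i] = -1, -1
	}
	var stack []int
	counter, nComp := 0, 0
	var dfs func(v int)
	dfs = func(v int) {
		index[v], low[v] = counter, counter
		counter++
		stack = append(stack, v)
		onStack[v] = true
		for _, w := range s.adj[v] {
			if index[w] == -1 {
				dfs(w)
				if low[w] < low[v] {
					low[v] = low[w]
				}
			} else if onStack[w] && index[w] < low[v] {
				low[v] = index[w]
			}
		}
		if low[v] == index[v] { // v is the root of a component; pop it
			for {
				w := stack[len(stack)-1]
				stack = stack[:len(stack)-1]
				onStack[w] = false
				comp[w] = nComp
				if w == v {
					break
				}
			}
			nComp++
		}
	}
	for v := 0; v < n2; v++ {
		if index[v] == -1 {
			dfs(v)
		}
	}
	assignment := make([]bool, s.n)
	for v := 0; v < s.n; v++ {
		if comp[2*v] == comp[2*v+1] {
			return nil, false
		}
		assignment[v] = comp[2*v] < comp[2*v+1]
	}
	return assignment, true
}

func main() {
	// (x1 OR x2) AND (!x1 OR x2) AND (!x2 OR x1) forces x1 = x2 = true.
	s := newTwoSAT(2)
	s.addClause(1, 2)
	s.addClause(-1, 2)
	s.addClause(-2, 1)
	fmt.Println(s.solve())
}
```

Everything here runs in time linear in the number of clauses, which is the contrast being drawn with general SAT.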
Past this, your argument about "taking the user's time" is so general you could apply it to literally any problem in any domain. You can just plug in whatever domain you like and whatever solution you happen to like into this argument.
Here it's backed by no data - you have surfaced zero evidence of your premise - "that it is taking user time". This entire thread in fact has exactly no evidence that it's taking any appreciable amount of user time, so it definitely fails as an argument.
(in fact, the only evidence presented in this thread is that the algorithm simply works on existing packages)
If you actually have such evidence, great - I'm 100% sure that the Go folks would love to see it!
Because right now the main time spent, in fact, seems to be people arguing in threads like these.
I'm saying that the theoretical complexity of the core dependency resolution algorithm is irrelevant in practice. Therefore, removing useful features to reduce SAT to 2-SAT is not a good trade. I, and everyone else who has worked with Cargo, keep saying this, but nobody listens. :(
> Here it's backed by no data - you have surfaced zero evidence of your premise - "that it is taking user time". This entire thread in fact has exactly no evidence that it's taking any appreciable amount of user time, so it definitely fails as an argument.
Minimum version selection makes it the user's problem to fetch the newest version of dependencies. That is the entire premise of minimum version selection. If you want to upgrade your versions, you have to use "go get -u". That command blindly updates all minor versions of packages. The problem arises when you have some packages that did not follow the semver rules (or are on 0.x) and you need to hold them back to avoid breaking your build. That is when the more fine-grained version control that systems like Cargo support becomes essential. Vgo has unfortunately decided to omit that support in favor of some theoretical benefits that make no difference in practice.
> If you actually have such evidence, great, i'm 100% sure that go folks would love to see it!
The Go team has the evidence in that every other package manager uses maximal version selection instead of minimal version selection, because of the problems with minimal version selection.
I strongly suspect that the problems in minimal version selection will become apparent over the years as people hit the limitations, at which point it will become apparent that Go made a mistake, but it will be difficult to fix. In particular, I think that, several years down the line, there's a good chance that running "go get -u" in large software projects is going to result in a broken build, because people are imperfect and don't perfectly follow semver. So people just won't upgrade their packages very often.
You've correctly identified the problem with vgo.
The package managers for many successful languages and distributions use lockfiles and constraint solvers. Not only is that empirical evidence that it works technically, it is evidence that it works socially — users are able to understand and work with it, and the package ecosystems for those languages have evolved with those rules in place.
Empirical data from Go's own package ecosystem is useful too, but you can only learn so much about package management from a corpus that does not have sophisticated package management. The ecosystem has already learned to work within the restrictions so you'll mostly see packages that confirm the system's own biases.
It's like encountering passers-by on a bike trail and concluding that the only vehicles users need are bikes.
I'm not saying vgo isn't better. But it's an unproven approach where lockfiles and constraint solving are proven, multiple times over. The burden of proof lies on vgo.
The empirical data from Go's package ecosystem is drawn from a corpus with sophisticated package management: dep. The argument is that dep is unnecessarily powerful and that a simpler approach will suffice. The evidence supports that argument. Note that this is not an argument about Cargo, or Bundler, or any thing else. Right now, the ecosystem is using dep, and there is evidence that it can be done simpler.
To stick with your analogy, I think it's fair to conclude that the only vehicles users need on bike trails are bikes.
Additionally, I have done an analysis of two Rust projects that have been brought up in my discussions on this issue: specifically, LALRPOP and exa. In both cases, throughout the entire history of the project (hundreds of changes over 4-5 years), Cargo only had to select the largest semver-compatible version. Again, I would love to find examples of projects where this strategy was not sufficient.
There is one complication: in Cargo, a ^ constraint (the default kind) on a v0 dependency is only good up to the minor version. In other words, ^0.1.0 means >=0.1.0 and <0.2.0, whereas ^1.0.0 means >=1.0.0 and <2.0.0. Selecting the largest semver-compatible version is meant in this way because of the community norms around breakage in v0. In an MVS world, any breaking change is a major version bump, and would have the same properties, but with different version strings.
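The caret rule described above can be sketched as a small helper, written here in Go for consistency with the thread (the function name is hypothetical; Cargo additionally treats ^0.0.z as allowing only that exact patch version, which the sketch includes):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// caretUpperBound returns the exclusive upper bound implied by Cargo's
// default caret requirement for a well-formed "x.y.z" version string:
// ^1.2.3 allows <2.0.0, ^0.1.2 allows <0.2.0, and ^0.0.3 allows <0.0.4.
func caretUpperBound(version string) string {
	parts := strings.Split(version, ".")
	major, _ := strconv.Atoi(parts[0])
	minor, _ := strconv.Atoi(parts[1])
	patch, _ := strconv.Atoi(parts[2])
	switch {
	case major > 0: // compatibility window is the whole major series
		return fmt.Sprintf("%d.0.0", major+1)
	case minor > 0: // v0: only the minor series is considered compatible
		return fmt.Sprintf("0.%d.0", minor+1)
	default: // 0.0.z: nothing beyond this exact patch is assumed compatible
		return fmt.Sprintf("0.0.%d", patch+1)
	}
}

func main() {
	fmt.Println(caretUpperBound("1.0.0")) // 2.0.0
	fmt.Println(caretUpperBound("0.1.0")) // 0.2.0
	fmt.Println(caretUpperBound("0.0.3")) // 0.0.4
}
```

This is the asymmetry the comment is pointing at: "largest semver-compatible version" means different window sizes depending on whether the dependency has reached 1.0.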
One of the main benefits of semantic versioning is that you can upgrade packages to new minor and patchlevel versions without breaking backwards compatibility, thus allowing you to easily pick up bugfixes by simply updating your dependencies. Of course, you should still test after updating, as packages could introduce new bugs, but on the whole updating minor and patchlevel versions is far more likely to fix bugs than it is to introduce them.

But MVS discards this benefit and says you cannot get bugfixes unless you're willing to manually edit your dependency list to declare that you want the newer package. The net result is that packages that use vgo are likely to end up stuck on old versions of dependencies. This is especially true for indirect dependencies.

If I publish a library, my incentive is to declare the oldest version of my own dependencies that I'm compatible with, in order to give my upstream user the most control over dependency versions. But if my library's client doesn't know about my own dependencies, then this means my library's dependencies will almost certainly resolve to a really old version, and my library's client won't even know about it and so won't be in a position to request the newer, less-buggy package version. Which then means I have an incentive to instead constantly update my library to list the newest versions of my dependencies, which then forces my library's client to upgrade those dependencies even if my library's client would prefer to be conservative and not upgrade those dependencies simply because they need to upgrade my library.
I think using a SAT solver for package installation arose out of dealing with much more complex requirements than are likely to arise in a Go project. The Smart package installer used heuristics to find a solution depending upon the operation:
Poetry, Cocoapods, and Dart are all apparently using this SAT solver:
FWIW, I wrote a package installer that worked with RPMs, Solaris packages and AIX packages about 15 years ago and ended up with a minimal version selection similar to vgo. I wasn't a genius or anything... I just wasn't aware of SAT solvers at the time and it was the simplest thing that worked.
The notion that this committee speaks for the community seems a weak one to me, and I've seen my view mirrored by many of my peers. It's good they attempted such research, but it seems like confirmation bias, as dep just seemed like a rewrite of glide with similar fundamentals and an improved user experience. This appeal to authority by the committee to represent the Go community seems unproductive and unnecessarily divisive/misleading.
I wasn't thrilled when I saw Sam elected to lead the implementation of the "official experiment", because I wasn't a fan of Glide at all. There was no community vote to oversee this; just one maintainer of one flavor of go pkg management was declared the expert and rewrote the existing Glide solution with some lessons learned. Many other maintainers (experts) of other go pkg management solutions favor vgo.
I've navigated many thorny dependency problems in Go before, and never have I been convinced that the solution was NP-complete version constraints. MVS, good tooling, and finally SIV are enough to make this miles better than previous attempts in this space. Also, hooray for GOPATH elimination and project-based workflows; I've always loved GB by Dave Cheney.
If that is the case, why wasn't the solution developed outside of the Go team's ambit already? After all, the Go team said multiple times that they don't need a module system, as Google does not use one.
Even this very article contradicts your assertion about how it was done.
"If I had asked people what they wanted, they would have said faster horses."
(or yet another Bundler/npm/Cargo clone.)
Would you really expect those people to be excited about the outcome?
This feels like an overly personal attack.
- Henry Ford (apocryphal)
If Go had been designed by community vote from the beginning, it would almost certainly have generics... and operator overloading, and exceptions, and 50 exposed GC knobs, and macros -- and a SAT-based dependency resolver, of course.
I trust Russ, Rob, and the rest to get it right, and they've proven that my trust is justified over and over again.
I can't read that statement without thinking of Perl, and the nightmare of developing in a team of Perl devs. We don't need ten thousand ways of doing things, and not everyone needs their way to be represented. I'm very happy to have Russ enforce one "right" way of doing things, and vgo seems to fit that nicely. One less thing for me and my fellow engineers using Go to argue about.
It's odd to call Russ disingenuous for saying his concerns were ignored, when the dep committee did effectively ignore them. Adding "effectively" as a qualifier here is important, because it grants that there may have been a deliberation process and that those concerns were not dismissed outright. But the output of a deliberation process must be either for or against, and when a concern is deliberated against, it is effectively ignored. It has never made sense to me when someone says something along the lines of "we've heard your concerns and have taken them into consideration" and yet does not heed those concerns. Obviously you haven't taken them into consideration, and therefore they were effectively ignored.
In this case it seems to me that the burden of proof is on the dep committee to demonstrate that the alleged showstoppers are in fact not stopping the show. You don't get to call a committee meeting, decide to ignore showstoppers by claiming that you don't believe them to be so, and then expect nothing but smooth sailing and cooperation from there on. That makes no sense at all. All this rhetoric about community involvement and "working with us" falls on its face when the implicit terms of engagement are not adhered to.
However, what happened was that he told the dep people "I will build a dependency tool myself to understand the problem details better", which implies "after I understand more, I will come back to you and discuss further", not "after I build the tool myself, I will just integrate my version into the official toolchain". In that respect, Russ did not follow through on his own words, or at least gave the wrong impression. Maybe this outcome should have been expected based on Russ's personality or how the Go team has operated in the past, but many programmers are not that politically aware.
It is not the technical result, that vgo kills dep, that saddens the dep people. It is the form of communication: vgo stabbed dep in the back by emerging suddenly out of Russ's pocket and essentially claiming immediate victory through the power of the Go core team. Once the vgo posts were out and the vgo proposal was posted (and it is not really bad technically), there was no effective way left for the rest of the community to reject it.
Russ could have kept his word by going back to the dep people, telling them what he learned from writing vgo, and then moving forward with integrating vgo if a consensus could not be reached. Eventually, Russ might have gotten the same technical result as today anyway (though perhaps with larger communication overhead).
What the dep people are essentially saying here is that, if Russ really wanted, they could also have implemented exactly vgo out of dep, but they never got the chance.
Russ said he was going to go and build a tool to understand the problem better. In no way does that imply that any lessons he learns from the implementation must be communicated back to the dep committee. But for the sake of argument, let's say that did happen. Then what? Does Russ practically force the dep committee to implement vgo out of the remains of dep? Why would he do that when he just spent a month or two implementing vgo from scratch? Why bother going back to a problematic group that doesn't believe their core issues are showstoppers? If that difference of perspective exists and they're not willing to debate that, to say nothing of the fact that they feel that they should be on the receiving end of the burden of proof, what more can be gained from them from Russ's perspective?
That said, the dep people also made a similar mistake in not clearly confirming with the Go core team that they had a chance to eventually integrate dep into the official toolchain. They wanted dep to be the thing, and they really thought they were on the right track. Poor folks.
If Russ had managed to influence the "problematic" group to do what he wants, rather than doing it himself, he could have given those people a meaningful place in the Go community. That is the gain. Whether that is worth it (compared with discarding two months of Russ's own work) depends on your perspective, and I am not sure what Russ thought or thinks. Either way, it will affect how the community grows in the future.
So this goes back to the root issue of how the Go core team views its Google-external language community (or, even more fundamentally, whether they are rewarded/recognized by Google for making the community happy). Specifically, do they want to carry the responsibility of delivering messages to the community clearly, at least to a certain extent? Being opinionated is fine, and firing users (e.g. users who think generics are a must) is fine, sometimes even preferred, but hurting people who were willing to follow and contribute might not be the best way to go.
In my eyes, Russ is too nice here, and this committee seems like it would've been difficult to work with at an intellectual level, anyhow, judging by this type of griping.
Maybe Russ should hire Linus to deal with situations like this :-)) /s
Was anything major ever decided based on a "community consensus" in Go?
What I've read about, and this case is no exception, is that it's always "a bunch of long-time Go team members". Aside from making libraries, conferences, and such, the community might as well not exist.
Certainly nothing like the PEP process, or something like Swift (which, despite being Apple's project, goes out of its way to engage the community on the feature roadmap) or Rust.
>We did all of this because we wanted to be an exemplar of how the community could step up and solve a problem that was being ignored by the core team. I can’t think of anything else we could have done to be better than we were. But the end result of this effort was no different than if we had done none of it at all: the core team ultimately didn’t engage with us meaningfully on the body of work we’d contributed, and instead insisted on doing the work themselves, as an essentially greenfield project.
Well, could they have had any other impression about the prospects of such an external effort? The author adds an even more bitter remark later on, and then retracts it, saying: "Upon reflection, I think this may be too strongly stated. There are good examples of large contributions that originated outside of the core team, including support for different architectures, like WASM".
But adding a different architecture is not a decision that changes the language's syntax, standard library, tooling, direction, or semantics. It can even be dropped at any time, and no other architecture would even care. It's like proving how "open" an OS is to third parties by pointing out that anybody is welcome to contribute a hardware driver for it.
Good that you mention that. Not too long ago, the solutions from the Swift Server working group were discarded and Apple simply developed swift-nio in-house. I read that the server group was blindsided by this development and is now more or less disbanded.
"I've seen a surprising amount of people thinking that the server-side Swift community or the Swift Server working group was somehow blind-sided by SwiftNIO. That couldn't be farther from the truth. We had known about SwiftNIO before the first line of Vapor 3 code was even written."
And in any case, even if the server group was blindsided, it's not representative of a general tendency, as there are very active feature discussions, e.g. in https://forums.swift.org/c/evolution, with the community and public members involved and shaping changes.
I don't know of examples of anything getting into Swift just by force of the community, without core-team approval. And that seems like the right thing to me, whether for Swift, Go, or Rust.
One thing Swift does better is the clear rejection of pitches that are not right for the language, which makes sense, as communication is always better with everything Apple.
It's not about that though. Of course the core team should approve. It's when the core team only approves its own things, which often are designed decidedly not the way that the community wants, that's the problem.
It's also an example of a proposal process which is PEP-like.
That said, changes like this are not decided by the community but by the core team. Go is an opinionated language, and far more ideas have been rejected than accepted. For example, Ian Lance Taylor's five generics proposals: https://github.com/golang/proposal/blob/master/design/15292-....
Russ has asserted this from day one, and has brought several examples out in evidence, but has simply not convinced the committee that it’s true. Calling it a serious problem, let alone a showstopper, is a significant overstatement. The strongest claim that can be made on this point is that it’s a matter of opinion.
OK, that's enough for me. "BDFL" is clearly correct; the random critics are totally unwilling to consider the proposition. More language developers are realizing that hierarchical imports with directory-name conventions are a good, practical thing that just works. Multiple versions come for free with this arrangement. If your tool can't handle that, your tool sucks.
This response is a long one, of which the ending deserves special note:
> I hope this story serves as a warning to others: if you’re interested in making substantial contributions to the Go project, no amount of independent due diligence can compensate for a design that doesn’t originate from the core team.
> Maybe some of you are reading this and thinking to yourself, “well, no duh, Peter, that should be obvious.” Maybe so. My mistake.
It personally made me think of this issue / PR in the Go issue tracker, https://github.com/golang/go/issues/20654, about support for constant-time arithmetic with bigints. Currently, operations on bigints in Go may leak secret data via timing channels, because calculations with different values take predictably different amounts of time. The maintainers have chosen to modify only the P256 curve (but not P224, P384, or P521) to work in constant time.
The author of the issue has written an extensive (somewhat strawman) PR that puts the constant-time code inside the current bigint implementation. That's a choice, made for seemingly good reasons laid out in the issue, and the author admits it could also live in a separate package. But the rest of the conversation is stalled in indecision about how the core team should proceed, without referring back to any of the arguments and reasons the original author put forth.
The Go crypto libraries have a domain-specialist owner: Filippo Valsorda. If you have questions about constant-time math operations in Go, he's a smart and nice guy; you should maybe ping him.
Go's crypto library is imperfect, but it is the best native crypto library provided by any modern language. (ring is excellent too, but as I understand it, it is not part of the Rust standard library.)
I think the similarity is as follows:
- there is a desire to fix something that feels like a gap (in this case limited support of constant-time math in standard library)
- there is a change proposal coming from the community
- there are arguments why it should be included and some discussion if it belongs in a separate library or not
- very little or no input from core team
- the whole process is stalled waiting for core team to participate
Now we just need someone on the core team to decide they'd rather research via implementation than actively engage the community, end up liking their own solution more, and we'll have dep all over again.
I don't think the similarity is all that apt.
If the core team is the limiting factor in the PR, and if there is a need for this, then why not just make the separate package and be done with it? If it is popular and core wants to merge it in at a different time, then great.
> - very little or no input from core team
I simply don't see this being true. Apart from OP (who is expected to comment a lot, because it is his proposal), the thread has comments from Robert Griesemer, Russ Cox, Austin Clements, Adam Langley and many others.
> - the whole process is stalled waiting for core team to participate
Again, not true. See Ian's and Russ' last comment. It is actually waiting on further information.
All that being said, instead of complaining that there is no response from the core team on the thread, maybe you can contribute by answering the questions Russ and Ian have raised, and ping others as needed.
It still made me think of it, because in that thread there is a domain expert who lays out careful reasoning for the decision to place constant-time arithmetic inside big/int. Amid some other discussion, the main hangup is the placement of the code, and no core maintainer is signalling a go-ahead in any direction. The issue author could of course continue to work on the existing strawman proposal, but that risks being abandoned if the core maintainers change their minds.
Meanwhile, on March 19, 2018, a core maintainer asks whether "anybody want to fill out this proposal to see what changes would be required in math/big?", despite the issue itself having an extensive PR with a test suite that reveals most of the necessary changes. (Either that, or it refers to the Aug 17, 2017 comment, but that seems to be exactly bford's proposal with a different public API.)
Once a design is set, implementing it is "just work". I'm not trying to undervalue that, but the important part is investigating the design first. The package management committee went with the tried-and-tested design, while Russ was trying to understand the true requirements. Both have their merits, and time will tell whether Russ is right here. The thing to keep in mind is that investigative work is inherently "wasteful".
Calling something “open” a hundred times a day does not remove one bit of corporate control.