A response about dep and vgo (bourgon.org)
149 points by Zariel on July 27, 2018 | 133 comments



Vgo has certainly been a disaster from a communication and community point of view. From an outsider's perspective, it looks like a case of maintainer arrogance, and it is hard to believe that Go really has needs that are significantly different from those of e.g. Rust or Java.

However, I am also in the unusual position that I had to design and implement a package manager for my own little language recently. My original hunch was to just create a Cargo clone, but after reading Russ Cox's writings, I ended up cloning vgo (with a few simplifications). The implementation was wonderfully simple - everything just fit together. The algorithms are trivial. Operationally, it is easy to understand what the package manager is doing (even more so than for vgo, as vgo has to deal with various Go idiosyncrasies and backwards compatibility).

This is in stark contrast to my experience with other package managers, which are temperamental beasts at the best of times. When they work, everything's groovy, but their failure modes are often incomprehensible. I think vgo's approach of restricting expressivity and streamlining processes is the right one. But since nobody has used such a package manager before, it remains to be seen whether it works in practice.

For my own experiment, I do have one data point: I showed people the (rather brief) documentation for my vgo-inspired package manager, and they all felt it was very simple to follow and easy to understand what it did, and how.


For what it's worth, I analyzed as many Gopkg.{lock,toml} files as I could find in the wild, and found that vgo's algorithms would satisfy all of them. I've done a similar analysis for a smaller number of Rust projects using Cargo and had the same findings. I've found it hard to find projects in the wild that actually do any non-trivial version selection. I think this is some good evidence that it will work out in practice.

I think it's less that Go has significantly different needs, and more that people overestimate what their actual needs are.

https://github.com/zeebo/dep-analysis


> I think it's less that Go has significantly different needs, and more that people overestimate what their actual needs are.

I think you are right. And I doubt it hurts that SAT solving is a fun problem!

My main package management experience has been with Haskell, which has used the cabal tool for many years. Cabal was a traditional solver-based tool (with the added pain of a global mutable package database, although that is going away), and it frequently broke down in confusing ways. "Cabal hell" was a widely used term. A few years ago, another tool arrived on the scene, Stack, which used the same package format as cabal, but snapshotted the central package list (Hackage) by gathering subsets of packages and versions that were guaranteed to work together (or at least did not have conflicting bounds). It works well[0], and although it does in principle entail a major loss in flexibility, that's rarely something I miss. Importantly, the improvement in reliability was nothing short of astounding. That certainly helped convince me that flexibility may not be a needed feature for a (language) package manager.

[0]: There are all sorts of socio-political stack/cabal conflicts in the Haskell community now, but I'm not sure they are founded in technical issues.


The problem with minimal version selection isn't that it can't support nontrivial version selection but rather that it doesn't automatically select the newest versions of packages.


It's a feature, not a problem. It selects the only known-working solution. You're still free to update to newer versions.


I've never seen Cargo errors be incomprehensible. Solving versions just isn't a problem in practice. Minimal version selection optimizes for convenience of the package manager implementer at the expense of the users. To me, that's the wrong trade.


Could you contrast Cargo and vgo for those who only know the former?


There are lots of small differences, but I would say there are two major ones:

* Semantic Import Versioning. A package is identified by some name. I think vgo calls it an "import path", but I have also seen "package path" used. Every version of that package must remain compatible with previous versions. You may not break compatibility (i.e. increment the major semver number) without also renaming the package. There is some syntactic sugar that makes it clear that a version 2 is closely related to the original version 1, but from the point of view of vgo, the two versions are completely distinct packages. This also means a program may depend on several major versions of the same package (which Russ claims is necessary when doing gradual migrations of large programs). (An import-path example follows after this list.)

* Minimal Version Selection. In contrast to pretty much every other package manager, vgo tries to install the oldest version of a package that still satisfies any lower bounds specified anywhere in the dependency tree. Upper bounds are not supported. This makes the solving algorithm trivial (which probably doesn't matter much), gives you reproducible builds without using a lockfile (which is nice), and gives you a very simple operational model for what the package manager is doing (which I think is crucial). (A sketch of the algorithm follows below.)
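To make the first point concrete, here is roughly what Semantic Import Versioning looks like in practice (the module path example.com/foo is made up). A v2 release lives at a new import path, so one build can use both majors side by side:

    import (
        foo   "example.com/foo"    // major version 0 or 1: the original path
        foov2 "example.com/foo/v2" // major version 2: a distinct import path
    )

and the v2 module declares the new path in its own go.mod:

    module example.com/foo/v2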
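And for a sense of how small the MVS algorithm is, here is a toy sketch in Go (my own illustrative code, not vgo's; it dodges semver parsing by treating versions as plain integers within a single major version):

    // Package mvs sketches Minimal Version Selection: walk every
    // module@version reachable from the root's requirements and keep,
    // per module, the maximum of the minimum versions requested.
    package mvs

    // ModVer names one module at one required (minimum) version.
    type ModVer struct {
        Mod string
        Ver int // stand-in for a semver version within one major version
    }

    // BuildList returns the selected version for each module reachable from
    // root. reqs maps a module@version to the requirements it declares
    // (its go.mod "require" lines, in real vgo).
    func BuildList(root []ModVer, reqs map[ModVer][]ModVer) map[string]int {
        selected := map[string]int{}
        visited := map[ModVer]bool{}
        work := append([]ModVer(nil), root...)
        for len(work) > 0 {
            mv := work[len(work)-1]
            work = work[:len(work)-1]
            if visited[mv] {
                continue
            }
            visited[mv] = true
            if mv.Ver > selected[mv.Mod] {
                selected[mv.Mod] = mv.Ver // a higher minimum wins
            }
            work = append(work, reqs[mv]...) // follow this version's own minimums
        }
        return selected
    }

No upper bounds, no search, no backtracking: one graph walk and a per-module maximum.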

All package managers suck and all package managers break, but I think when vgo breaks, it will be more obvious what is wrong. Time will tell whether it works in practice!

(And I'm not even a Go programmer; I just like the thoughtfulness that goes into the tooling.)


Adding to this:

> vgo tries to install the oldest version of a package that still satisfies any lower bounds specified anywhere in the dependency tree

Another benefit of this that may be a little less obvious (it wasn't obvious to me) is that this means every package-version selected by MVS in the entire dependency tree is explicitly specified somewhere in the dependency tree. This means that every selected package-version was specifically tested with at least one other package (assuming, reasonably, that libraries are tested with the versions that they specify). You don't get this with SAT, in fact it's possible for SAT to select a set of packages where no two package-versions were ever tested with each other anywhere else before.
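A made-up example of that property:

    A (root) requires B v1.2, C v1.1
    B v1.2   requires C v1.4

MVS selects B v1.2 and C v1.4; each of those versions is pinned in someone's go.mod (A's or B's). A newest-wins resolver could instead pair B v1.2 with C v1.9, a combination declared in no go.mod anywhere, and so possibly never tested together.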

I feel that this property, combined with upgrades requiring affirmative action by the root package maintainer (by editing go.mod or having a tool do it), will make vgo-managed packages much more reliable over time.


> but also gives you reproducible builds without using a lockfile (which is nice)

I find the claims that vgo removes the lockfile disingenuous. Yes it's technically correct in that there's no lockfile, but it's incorrect in that you've basically turned the file that declares dependencies into the equivalent of a lockfile.

With something like cargo, I can `cargo update` and it will fetch new versions of my dependencies and update my lockfile. Net result, I have one file to commit changes to (the lockfile), with zero manual edits.

With vgo, updating a dependency requires manually editing the dependency list to specify the new version. Net result, I have one file to commit changes to (the dependency list), but it required manual editing.

In both scenarios, the one file that I have to commit changes to specifies the version of the package that will be used. Though that's not actually strictly true with vgo; there it specifies the minimum version that will be used, but that's not necessarily the actual version if another dependency requires a later version. Not a big deal, but it does mean there's no single file I can inspect to find out what the resolved package version is.


If you want the latest version using versioned go, you have to run a different command. "go get -u" will upgrade to the newest version and also update the dependency list. [1]

So that difference isn't actually all that different.

[1] https://tip.golang.org/cmd/go/#hdr-Module_aware_go_get


go get will update a dependency and edit go.mod for you automatically. And using go mod -fix will fix misleading versions specified in go.mod (i.e. if you require version X but are getting version Y because of a transitive dependency, the require will be updated to say Y).
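For example (the module path is hypothetical), with a go.mod containing

    require github.com/example/dep v1.2.3

running `go get github.com/example/dep@v1.4.0` rewrites that line to

    require github.com/example/dep v1.4.0

with no separate lockfile to regenerate.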


I'm still mind-boggled by the crusade against the lockfile. It's a completely unambiguous source for everything that costs essentially nothing, why is there so much time and energy devoted to getting rid of it? Especially with all the drawbacks minimal version selection comes with; exact version selection with the lockfile seems better in just about every case.


I don't think the lockfile is a big problem (and I don't have enough experience with lockfile-based systems), but if you already have some other file that can serve the same purpose, isn't that nicer than having two? It also means you don't have to be confused about whether lockfiles should be committed to version control (apparently the answer is "not always", which is counter-intuitive to me).


When are you not supposed to commit a lockfile? I think the norm is to commit it.


You're supposed to commit it for applications, but omit it for libraries. Libraries declare the versions they're compatible with but they don't lock to specific releases.


In at least some packaging systems, committing the lock file for a library is useful so that the library maintainers use the same versions. (It's ignored by users of the library.)


I had some experience with plone (and zope) - big, early python projects that formed a lot of the basis for package management and infrastructure for python.

The "known good set" of packages as used by buildout for any given plone version were not really easy to work with.

See eg:

https://docs.plone.org/4/en/manage/troubleshooting/buildout....

Which points to things like: http://dist.plone.org/release/3.3.5/versions.cfg

Which aren't that much worse/better than a gemfile.lock for a rails project, I guess.

Now, I guess the better answer is to have smaller projects, with different interfaces (e.g. don't give all modules/parts access to your zodb object database, have some JSON/REST interfaces, etc.).

But apparently there are still many "big systems" being built.


Minimal Version Selection.

That's really interesting. Obviously this would lead to fewer regressions, but it seems opposed to the "security problems fixed for free" scenario you hear about with other package dependency schemes. Were we all just imagining that scenario? Is there another way we should be handling vulnerabilities, i.e. by "repudiating" old versions rather than simply releasing new ones? Then you could have something like "Minimal Unrepudiated Version Selection".


I think reproducible builds are more important than having security problems fixed behind your back. It's not like vgo makes it hard to upgrade all dependencies to their most recent version - it's done automatically with a single command, if you want it.


Other package managers (cargo, gemfile, etc.) also give reproducible builds by using a lockfile. Upgrading dependencies to their maximum version is an explicit action (cargo update). It's not clear to me why the Go team thinks lockfiles are a huge issue.


Lock files also remove the "silent security fix" that was touted as the benefit of other systems.

You can't have it both ways.

You either get reproducible builds (by default with vgo, by adding lock files in other systems) or you get silent upgrades that potentially fix security issues (but also potentially introduce security issues).

You also mischaracterize the position of vgo's creator. He doesn't think that lock files are a huge issue per se.

The difference is in default behavior of the system: vgo by default picks predictable, consistent version of dependencies. That version doesn't change if dependencies release new versions.

In fact, it's not just the default behavior but the only behavior.

Other systems allow specifying complex rules for which version of a dependency to pick, and they all allow for a scenario where you run the same algorithm over the same rules but pick a different version, because in the meantime some dependency has released a new version.

It's such a big problem that all those systems end up introducing lock files, which is a tacit admission that what vgo does by default is a good idea.

vgo doesn't need lock files to get the benefit of lock files, which is a nice cherry on top, but the real advancement is in changing the default behavior of how resolving versions of dependencies works.


> Lock files also remove the "silent security fix" that was touted as the benefit of other systems.

That is the great thing about lock files. You don't have to check them in. If you want reproducible builds, you do check them in. If you don't, you don't. The user, not the package manager, gets to make this choice on a case-by-case basis.

> The difference is in default behavior of the system: vgo by default picks predictable, consistent version of dependencies. That version doesn't change if dependencies release new versions.

And that right there is the problem. You don't get the latest version unless you explicitly ask for it. This makes the user do what a package manager is perfectly capable of doing. It thrusts the problem onto the user instead of applying a well-known solution that was causing problems for nobody.

If you are writing an application, don't you want your transitive dependencies to get security fixes without having to trawl through the tree?


Lockfiles have other benefits, such as locking down dependencies' hashes to prevent future tampering.


You'll find a go.sum with hashes of dependencies.
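It looks roughly like this (module path and hashes made up for illustration); each dependency gets one hash for its source tree and one for its go.mod:

    github.com/example/dep v1.4.0 h1:wFq3dJk9vP0c...=
    github.com/example/dep v1.4.0/go.mod h1:9aQ2mVxcT5Rl...=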


What you're saying makes perfect sense. This is what makes HN great. Incidentally, there's nothing about e.g. npm that would prevent this scheme: just set literal version numbers instead of using operators. The two systems definitely have different "best practices".


> It's not like vgo makes it hard to upgrade all dependencies to their most recent version - it's done automatically with a single command, if you want it.

And how does it guarantee that this actually results in anything that works, if upper bounds are not supported?


Upper bounds are major versions; everything within a major version should be API compatible (it's in the spec of Go modules).


Except for version 0.x, and with vgo you have no recourse if somebody didn't follow the rules and made an incompatible change.


This is the whole point: just specify the version before the change, and it won't bother you.


Until someone actually wants to update their dependencies, writes "go get -u", and breaks the build.


Well sure, you probably want to test after updating all of those dependencies. That's a good reason to only update when necessary. Do you argue that some other scheme prevents bugs in dependencies? That hasn't been my experience...


> Minimal Version Selection. In contrast to pretty much every other package manager, vgo tries to install the oldest version of a package that still satisfies any lower bounds specified anywhere in the dependency tree.

This is somewhat misleading. Most package managers use a human-edited manifest containing constraints and a machine-edited lockfile containing the chosen versions. You periodically (or after changing dependencies to require newer versions) run a tool that fetches the newest versions satisfying your constraints and writes out a lockfile, which you then commit and use at install time to get reproducibility.

Go modules use a single file that serves as both and is both human- and machine-edited. You periodically (or after changing dependencies to require newer versions) run a tool that fetches the newest versions satisfying your constraints and updates the manifest with them. The manifest is then (also) used at install-time with minimum version selection to get reproducibility.

What that means is that in practice, Go modules never install older versions than the equivalent in other package managers would. The maintainer specifies constraints and uses a tool to decide what version will actually be used at build time. In both cases, that tool will choose the newest versions available.

What is correct (and contentious) is that Go modules have no concept of upper bounds for dependencies. So the complaint is that, as a library author, you cannot prevent a newer version from being chosen by one of your reverse dependencies. It remains to be seen how much of a problem that will be in practice.

So, if anything, the problem is that, as opposed to other package managers, Go modules sometimes choose versions that are too new, from the perspective of some people. They never choose versions that are too old.


> vgo tries to install the oldest version of a package that still satisfies any lower bounds

This alone would make npm 100x less painful to use.


You don't have to use operators in package.json, so this pain relief is available to you right now. Just let the version remain precisely as "npm install" wrote it, with no tildes or wildcards or anything.
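As a minimal sketch, pinning exact versions looks like this (the dependency name is hypothetical); with no ^ or ~ operator, npm installs precisely this version:

    {
      "dependencies": {
        "some-lib": "1.4.0"
      }
    }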

Admittedly, the "update all packages to latest compatible version" flag described above sounds very nice. There is approximately zero chance the npm developers would ever accept such a PR, but there would be nothing wrong with a utility that accomplished the same thing.


That may work for my dependencies but not the dependencies of my dependencies. In other words, for that to work it isn't enough for me to change - every npm library maintainer must change.


Yep, this is the problem with "features", when they exist somebody else will use them and you'll eventually have a dependency that does.


Maybe deno will implement something like vgo. (If you don't yet know about deno: https://youtu.be/M3BM9TB-8yA )


That's just the thing, right? It seems to be a clearly superior technical solution, but it has thoroughly failed from the perspective of human relationships.


I'm relieved to see that the fate of Go is ultimately controlled by a small team of highly talented engineers, not a nebulous "community" that designs by democracy.

If Go had been designed by community vote from the beginning, it would almost certainly have generics... and operator overloading, and exceptions, and 50 exposed GC knobs, and macros -- and a SAT-based dependency resolver, of course.

I trust Russ, Rob, and the rest to get it right, and they've proven that my trust is justified over and over again.


> nebulous "community" that designs by democracy

I can't read that statement without thinking of perl, and the nightmare of developing in a team of perl devs. We don't need ten thousand ways of doing things and not everyone needs their way to be represented. I'm very happy to have Russ enforce one "right" way of doing things and vgo seems to fit that nicely. One less argument for me and my fellow engineers using go to argue about.


I am not involved in this discussion and have no skin in the game, but from an outsider's perspective it seems that Russ laid out exactly what his concerns were about showstoppers in the dep implementation and those concerns were effectively ignored by the dep committee. What were they expecting to happen after that? Are there some other non-showstopper classified concerns that Russ should make up on the spot and have the dep committee then concentrate on? Why bother after showstoppers are effectively ignored?

It's odd to call out Russ as being disingenuous when he says his concerns were ignored when the dep committee did effectively ignore them. Adding 'effectively' as a qualifier here is somewhat important because it grants that there may have been a deliberation process and that those concerns were not outright dismissed, but the output of a deliberation process must be either in favor or against, and when a concern is deliberated against it is effectively ignored. It has never made sense to me when someone says something along the lines of "we've heard your concerns and have taken them into consideration" and yet did not heed those concerns. Obviously you haven't taken them into consideration and therefore they were effectively ignored.

In this case it seems to me that the burden of proof is on the dep committee to demonstrate that the alleged showstoppers are in fact not stopping the show. You don't get to call a committee meeting, decide to ignore showstoppers by claiming that you don't believe them to be so, and then expect nothing but smooth sailing and cooperation from there on. That makes no sense at all. All this rhetoric about community involvement and "working with us" falls on its face when the implicit terms of engagement are not adhered to.


I think the issue is that Russ should have clearly told the dep people that if the showstopper issues were not handled, he would seek some other solution, e.g. build one himself.

However, what happened was that he told the dep people "I will build a dep tool myself to understand more of the problem details", which implies "after I understand more, I will come back to you and discuss more", not "after I build the tool myself, I will just integrate my version into the official tool". For this part, Russ did not follow through on his own words, or at least gave the wrong impression. Maybe this result should already have been expected based on Russ's personality or how the golang team did stuff in the past, but many programmers are not that politically aware.

It is not the technical result, that the "vgo" solution kills "dep", that saddens the dep people. It is rather the form of communication: "vgo" stabs "dep" in the back by emerging suddenly out of Russ's pocket, and essentially claims immediate victory using the power of the Go core team. When the vgo posts are out, and the vgo proposal is posted (and it is not really bad technically), there is really no effective way for the rest of the community to reject it.

Russ could have kept his word by going back to the dep people and telling them what he learned from writing "vgo", and then moving forward with integrating "vgo" if a consensus could not be reached. Eventually, Russ might have gotten the same technical results as today anyway (but maybe with larger communication overhead).

What the dep people are essentially saying here is that, if Russ really wanted, they could have implemented exactly "vgo" from "dep", but they never got the chance.


If I were in Russ's shoes and I had told someone as politely as I could that their solution exhibits what I consider showstopper issues, I wouldn't expect to have to clarify that I find that solution entirely unacceptable and flawed from its very design. There's no issue in the clarity of communication when the term 'showstopper' is thrown around.

Russ said he was going to go and build a tool to understand the problem better. In no way does that imply that any lessons he learns from the implementation must be communicated back to the dep committee. But for the sake of argument, let's say that did happen. Then what? Does Russ practically force the dep committee to implement vgo out of the remains of dep? Why would he do that when he just spent a month or two implementing vgo from scratch? Why bother going back to a problematic group that doesn't believe their core issues are showstoppers? If that difference of perspective exists and they're not willing to debate that, to say nothing of the fact that they feel that they should be on the receiving end of the burden of proof, what more can be gained from them from Russ's perspective?


Hmm... I was not aware that Russ used the word "showstopper" until recently in his tweets, and even if he literally used the word in the past, its meaning was not conveyed, judging from the post here.

That said, the dep people also made a similar mistake in not clearly confirming with the Go core team that they had a chance to get dep integrated into the official toolchain eventually. They wanted dep to be the thing, and they really thought that they were on the right track... poor folks.

If Russ had managed to influence the "problematic" group to do what he wanted, rather than doing it himself, he could have given these people a meaningful place in the Go community. That is the gain. Whether that is worth it (compared with discarding two months of work on Russ's own time) depends on your own perspective, and I am not sure what Russ thought or thinks. Either way, it will affect how the community grows in the future.

So this goes back to the root issue of how the Go core team views its Google-external language community (or, even more fundamentally, whether they will be rewarded/recognized by Google for doing a good job of making the community happy). Specifically, do they want to carry the responsibility of delivering messages to the community clearly, at least to some extent? Being opinionated is fine, and firing users (e.g. users who think generics is a must) is fine, sometimes even preferred, but hurting people who were willing to follow and contribute might not be the best way to go.


The implementation of vgo is so simple that it cannot be compared to dep. That may explain why it was easier to just do it than to spend time explaining. It's often like that in development: when we find a simpler solution in our own code, we just do it and throw away the old, complicated one.


Yep, this is it. Russ himself admits it. He was corresponding with Sam about his ideas and thought that would be sufficient. Whereas, he should have done that with the entire community and been more explicit about where he was going.


"Showstopper" is a word with a specific meaning in software development. If your customer tells you about a showstopper and you ignore it, that is bad.


Rather than a technical argument, or a clarifying point of view, all I see here is basically "we formed a committee goddammit, we're the incumbent, how dare you not listen to us?"

In my eyes, Russ is too nice here, and this committee seems like it would've been difficult to work with at an intellectual level, anyhow, judging by this type of griping.

Maybe Russ should hire Linus to deal with situations like this :-)) /s


>Community consensus is not always possible. If we don’t get there, then the core Go team decides. Technically I am the final decider but what actually happens is that a bunch of long-time Go team members talk through the decision to get to a consensus among ourselves.

Was anything major ever decided based on a "community consensus" in Go?

What I've read about, and this case is no exception, is that it's always "a bunch of long-time Go team members". Aside from making libs, and conferences, and such, the community might as well not exist.

Certainly nothing like the PEP process, or something like Swift (which, despite being Apple's thing, goes out of its way to engage the community on the feature roadmap) or Rust.

>We did all of this because we wanted to be an exemplar of how the community could step up and solve a problem that was being ignored by the core team. I can’t think of anything else we could have done to be better than we were. But the end result of this effort was no different than if we had done none of it at all: the core team ultimately didn’t engage with us meaningfully on the body of work we’d contributed, and instead insisted on doing the work themselves, as an essentially greenfield project.

Well, could they have had any other impression about the prospects of such an external effort? The author adds an even more bitter remark later on, and then retracts it, saying: "Upon reflection, I think this may be too strongly stated. There are good examples of large contributions that originated outside of the core team, including support for different architectures, like WASM".

But adding a different architecture is not a decision that changes the language's syntax, standard libs, tooling, direction or semantics. It can even be dropped at any time, and no other architecture would even care. It's like proving how an OS is "open" to third parties by pointing out that anybody is welcome to contribute a hardware driver for it.


> something like Swift (which, despite being Apple's thing, goes out of its way to engage the community on the feature roadmap)

Good that you mention that. Not too long ago, solutions from the swift-server-group were discarded and Apple simply developed swift-nio in-house. I read that the server group was blindsided by this development and is now more or less disbanded.


Not sure about the swift-server-group situation -- some were blindsided, but here's e.g. the head of the closely related Vapor (server-side web framework) project:

"I've seen a surprising amount of people thinking that the server-side Swift community or the Swift Server working group was somehow blind-sided by SwiftNIO. That couldn't be farther from the truth. We had known about SwiftNIO before the first line of Vapor 3 code was even written."

And in any case, even if the server group was blindsided, it's not representative of a general tendency, as there are very active feature discussions, e.g. in https://forums.swift.org/c/evolution, with the community and public members involved and shaping changes.


Well, discussion is all fine. The important thing will be whether the Swift core team, which is all Apple employees (except Chris Lattner, who left Apple recently, and removing him would look petty), will simply accept a community solution even if they don't think it's right.

I don't know of examples of anything getting into Swift just by force of community, without core team approval. And that seems like the right thing to me, whether for Swift or Go or Rust.

One thing Swift does better is clearly rejecting pitches that are not right for Swift, which makes sense, as communication is always better with everything Apple.


>I don't know of examples of anything getting into Swift just by force of community, without core team approval.

It's not about that, though. Of course the core team should approve. It's when the core team only approves its own things, which are often designed decidedly not the way that the community wants, that there's a problem.


It's more open than the OS example. There have been community discussions on many issues related to the language. The core team solicits feedback, and designs are often adjusted in light of that feedback. Consider type aliases:

https://github.com/golang/go/issues/18130

It's also an example of the proposal process, which is PEP-like.

That said, changes like this are not decided by the community, but by the core team. Go is an opinionated language and far more ideas have been rejected than accepted. For example Ian Lance Taylor's 5 generics proposals: https://github.com/golang/proposal/blob/master/design/15292-....


"Dep does not support using multiple major versions of a program in a single build. This alone is a complete showstopper. Go is meant for large-scale work, and in a large-scale program different parts will inevitably need different versions of some isolated dependency."

Russ has asserted this from day one, and has brought several examples out in evidence, but has simply not convinced the committee that it’s true. Calling it a serious problem, let alone a showstopper, is a significant overstatement. The strongest claim that can be made on this point is that it’s a matter of opinion.

OK that's enough for me. "BDFL" clearly correct, random critics totally unwilling to consider the proposition. More language developers are realizing that hierarchical imports with directory name conventions are a good practical thing that just works. Multiple versions come for free with this arrangement. If your tool can't handle that, your tool sucks.


The discussion about the original post can be found here: https://news.ycombinator.com/item?id=17623023

This response is a long one, of which the ending deserves special note:

> I hope this story serves as a warning to others: if you’re interested in making substantial contributions to the Go project, no amount of independent due diligence can compensate for a design that doesn’t originate from the core team.

> Maybe some of you are reading this and thinking to yourself, “well, no duh, Peter, that should be obvious.” Maybe so. My mistake.

It personally made me think of this issue / PR in the Go issue tracker, https://github.com/golang/go/issues/20654, about support for constant-time arithmetic with bigints. Currently, operations on bigints in Go may leak secret data via timing channels, because calculations with different values take predictably different times. The maintainers have chosen to specially modify only the P256 curve (but not P224, P384, or P521) to work in constant time.

The author of the issue has written an extensive (somewhat strawman) PR that includes the constant-time code inside the current implementation of the bigint. That's a choice – made for seemingly good reasons laid out in the issue, and the author admits that it could also be a separate package – but the rest of the conversation is halted in indecision about how the core team should proceed, without referring back to any of the arguments and reasons the original author has put forth.


I don't see the comparison you're drawing here. The question proposed in the Go issue is whether the standard bignum library in Go should be constant-time, whether crypto primitives should provide their own constant-time math, or whether a third library for constant-time bignum math should be added that all Go crypto can rely on. That's not an easy question, and meanwhile significant chunks of the bignum crypto code that people actually use in Go is written in assembly anyways.

The Go crypto libraries have a domain-specialist owner: Filippo Valsorda. If you have questions about constant-time math operations in Go, he's a smart and nice guy; you should maybe ping him.

Go's crypto library is imperfect, but it is the best native crypto library provided by any modern language. (Ring is excellent too, but it is not, as I understand it, part of the Rust standard library.)


> I don't see the comparison you're drawing here.

I think the similarity is as follows:

- there is a desire to fix something that feels like a gap (in this case limited support of constant-time math in standard library)

- there is a change proposal coming from the community

- there are arguments why it should be included and some discussion if it belongs in a separate library or not

- very little or no input from core team

- the whole process is stalled waiting for core team to participate

Now we just need someone on the core team to decide they need to research via implementation instead of actively engaging the community, end up liking their own solution more, and we'll have dep all over again.


> the author admits that it could also be a separate package

I don't think the similarity is all that apt.

If the core team is the limiting factor in the PR, and if there is a need for this, then why not just make the separate package and be done with it? If it is popular and core wants to merge it in at a different time, then great.


I'm not aware of a way to plug your own crypto into a lot of things that are part of the standard library, and I'm not aware of a way to plug your own math library into the existing crypto.


Would they take PRs for adding that bit of pluggability?


Probably not.


There has been some discussion about a pluggable TLS stack: https://github.com/golang/go/issues/21753


I think your expectations of huge open source projects are a bit too much. There are over 3,500 issues open as of this moment, and new issues get opened every 15-20 minutes. The Go core team is like 10 people. That being said -

> - very little or no input from core team

I simply don't see this being true. Apart from OP (who is expected to comment a lot, because it is his proposal), the thread has comments from Robert Griesemer, Russ Cox, Austin Clements, Adam Langley and many others.

> - the whole process is stalled waiting for core team to participate

Again, not true. See Ian's and Russ' last comment. It is actually waiting on further information.

All that being said, instead of complaining that there is no response on a thread by the core team, maybe you can contribute by answering the questions Russ and Ian have raised, and ping others as needed.


The comparison is indeed pretty thin; it certainly doesn't come anywhere near the miscommunication and whiffs of NIH that are present in the Dep case.

It still made me think of it, because in that thread there is a domain expert who lays out careful reasoning for the decision to place constant-time arithmetic inside big/int. Among some other discussion, the main hangup is about the placement of the code, and no core maintainer is signalling a go-ahead in any direction. The issue author could of course continue to work out the existing strawman proposal, but that risks being abandoned if the core maintainers turn out to change their minds.

Meanwhile, on March 19 2018 a core maintainer asks if "anybody want to fill out this proposal to see what changes would be required in math/big?", despite the issue itself having an extensive PR with a test suite that reveals most of the necessary changes. (Either that, or it refers to the Aug 17 2017 comment, but that seems to be exactly bford's proposal with a different public API.)


There's an interesting discussion about this, with Russ and Peter participating, over on Reddit here: https://www.reddit.com/r/golang/comments/92f3q1/peter_bourgo...


One thing that I think is being missed is that code/a product is not the only thing that matters. In fact, it's one of the less important things here.

Once a design is set, implementing it is "just work". I'm not trying to undervalue that, but the important part is first investigating the design. The package management committee came up with the tried-and-tested design while Russ was trying to understand the true requirements. Both have their merits, and time will tell if Russ is right here. The thing to keep in mind is that investigative work is inherently "wasteful".


I don’t have extensive knowledge of the matter at hand, but this narrative seems much more coherent. And sad for Go/Google. So much for “open”. Just like the Android/Google Play services thing.

Calling something “open” a hundred times a day does not remove one bit of corporate control.


If someone is putting up >90% of the funding for a project, it is just common sense that they will veto things they do not like. I wonder if some people believe that the online cliché about open source is all there is to it.


Is there any package manager using P2P technology (e.g. BitTorrent) to speed up transfers? Or would this generally be a bad idea?


Dep is awesome from the user perspective. You just type `dep ensure` and it works, and it's fast.


So is GO111MODULE. You just type `go build` and it resolves dependencies as part of the build. It's even one less step than `dep ensure`.


Idk about fast... Not to mention the incomprehensible errors.


Following the meta discussion is more taxing for me than following the technical details.


Go does not exist to raise the profiles of Sam Boyer and Peter Bourgon. Sam wanted to be a Big Man On Campus in the Go community and had to learn the hard way what the D in BDFL means. The state of dep is the same as it was before - an optional tool you might use or might not.

Lots of mentions in Peter's post about things the "dep committee" may or may not have agreed with. Isn't this the same appeal to authority he is throwing at Russ? When did the "dep committee" become the gatekeepers of Go dependency discussions and solutions? Looks like a self-elected shadow government, except it didn't have a "commit bit". Someone should have burst their balloon earlier, that is the only fault here. Russ, you are at fault for leading these people on.

Go is better off with Russ's module work and I personally don't care if Sam and Peter are disgruntled.


Ya, I've been following all this drama since the vgo proposal and I also find it somewhat rich. I mean, in a way, didn't the `dep` folks do exactly the same thing to `glide` and the other existing solutions? At some point you just have to say: hey, you've done good work, but I think another approach is better. They did that to glide, and Russ did that to them.

Note also that he doesn't dispute that Russ tried to bring them into the fold with his concerns and direction but they rejected his arguments. That's a fight they should have known they were going to lose.

Maintainers / BDFLs get the final say, that's just how it is. Just because there is a large community does not mean they get the final say; just because you put in a lot of work does not mean you get the final say. The reason BDFLs are in the position of making that decision is because they have proven through the accumulation of their previous decisions that they get it right more than they get it wrong. And that's the very reason there is a community around them at all.

If you walk into ANY project, public or private, open sourced or closed, and ignore the lead's objections when you go off in a direction they don't agree with, then yes, you are very likely in for some wasted work. That's just how the world works.

On another note, I hope this is the last we hear about all this, please let's move on.


Ya, I've been following all this drama since the vgo proposal and I also find it somewhat rich...On another note, I hope this is the last we hear about all this, please let's move on.

I only became aware of this by going to a meetup and seeing a talk. I just want something that's workable and not broken.

The reason BDFLs are in the position of making that decision is because they have proven through the accumulation of their previous decisions that they get it right more than they get it wrong. And that's the very reason there is a community around them at all.

This is the same reason given for totalitarian rule generally. The problem is that the dictator has enough power to make a big mistake, committing the entire community's resources. In this position, if you can avoid making a decision, you should. Toyota doesn't simply make such decisions by fiat. Instead, they have a competition.

I mean in a way didn't the `dep` folks do exactly the same thing to the `glide` and other existing solutions?

Was the competition squashed?


Note that Russ isn't going against the community, he's just going against `dep`. Most of the community, myself included, is pretty psyched about go modules.

Note also that the risk you describe for bad decisions is mitigated in open source. If the decision really turns out to be that bad, the project will be forked and wiser leaders will have their shot at decision making.

`dep` isn't squashed! You are free to continue using it; it just likely won't find much of an audience anymore.


> If that decision really turns out to be so bad then it will be forked and a wiser leader(s) will have their shot at decision making.

No, that doesn't work in practice. The network effects are so insanely dominant in open source projects that forks are nowhere near an efficient market.


The network effects are so insanely dominant in open source projects that forks are nowhere near an efficient market.

The market doesn't have to be anywhere near efficient. It just has to allow any escape whatsoever from a death-march/death spiral.


Open Source is absolutely chock full of examples of this.


That's survivorship bias. Yes, occasionally a fork takes over. It very rarely succeeds, and usually only in cases where the original is so toxic that it cancels out its own network effects. node.js is a good example of that. And even there, note how they eventually unforked.


It is really uncharitable to describe the node fork as "so toxic". People disagreed, but there's nothing wrong with that. I felt like the subsequent "merge" happened fairly quickly and with minimal drama.

Also this seems like a misuse of the term "survivorship bias". You claimed upthread that forks don't work because of network effects. The response was "some forks work". That is a direct refutation, no matter what the percentages are. Besides, you're misunderstanding the varied purposes of forks. I have several forks right now, not because I hate the original maintainers but because it was convenient to change a small thing for my own purposes. Whether or not upstream eventually agrees with me, my fork "works" perfectly well.


And sometimes the new community wins out over the existing one. LibreOffice vs. OpenOffice, X.org vs. XFree86, etc.


LibreOffice vs OpenOffice, Hudson vs Jenkins, and there are so many more.


Like the OP mentions in another comment, this is survivorship bias.


It has literally happened before. Look into the node.js/io.js fork.


> Most of the community, myself included, is pretty psyched about go modules.

Citation needed. In my corner of the community this is a very contentious issue.

The golang dependency management story has been a disaster for years. Nothing about the modules or vgo story has seemed to solve that.

In fact, it's Russ who seems to be going against the norms of the community, choosing cutting-edge and untried technologies over well-understood and tested ones. If he was going to get edgy with his leadership decisions, couldn't he have done it somewhere more valuable, like cleaning up the type system?


> This is the same reason given for totalitarian rule generally. The problem is that the dictator has enough power to make a big mistake, committing the entire community's resources. In this position, if you can avoid making a decision, you should. Toyota doesn't simply make such decisions by fiat. Instead, they have a competition.

You're comparing very different things. Programming languages and empires/nations/companies aren't the same thing. Rails has stayed on the course DHH wants it and hasn't become a mess mainly because DHH remains the BDFL. Go could benefit from similar structure. We're now about to see how Python will do post-Guido.


> Rails has stayed on the course DHH wants it and hasn't become a mess mainly because DHH remains the BDFL.

* and DHH happens to make good decisions frequently enough.

For every successful BDFL example, history is littered with a hundred dead languages and frameworks designed by a single visionary leader who made the wrong decisions.


> a hundred dead languages and frameworks designed by a single visionary leader who made the wrong decisions

E.g. there are the namesakes of Rails and Ruby, i.e. Grails and Groovy. Virtually no one has upgraded to Grails version 3 since it came out 3 years ago, or started new projects in Grails version 2. The Grails 2 plugin ecosystem is as good as dead. It only has its "single visionary leader" listed for the 3 contact persons (owner, admin, tech) in the grails.org DNS registration.

As for Apache Groovy, it's hanging on as the build language for Gradle and Android Studio, but doesn't seem to have any other significant use besides its original use case of glue code and testing harnesses. Groovy's problem is that its creator, who had successfully added closure functionality to a clone of Beanshell, left the project after 3 years, and the "despot" at Codehaus who subsequently claimed the title of Project Manager was someone who didn't have the aptitude for many programming tasks.


It's the same the other way round: huge numbers of design-by-committee or community projects have failed. What you say is a truism, not particularly insightful or useful.


The only meaningful example of this I can think of is Larry Wall and Perl... but his mistake was in loosening his grip instead of tightening it. Perl6 started off as a utopia wherein Larry was considered only slightly more powerful than any other contributor. This resulted in a revolving door of clowns taking the reins and rapidly resigning as Larry looked on.

Fortunately Perl5 still does the Pumpking thing which is effectively a rotating BDFL.


> This resulted in a revolving door of clowns taking the reins and rapidly resigning as Larry looked on.

I would say that this hasn't been the case for the past 10 years at least. Patrick Michaud had been the Perl 6 pumpking since then, making room for Jonathan Worthington about 2 years ago. Hardly a revolving door, and hardly clowns.


Parrot did burn through quite a few pumpkings. I wouldn't call them clowns either, but I do think they led Parrot astray. Obviously Parrot is not Perl 6, but they are closely linked in many people's minds for obvious reasons.

(It's not all the pumpkings' fault, either. I think Parrot would still have been doomed if its leadership were constantly flawless, just because it was designed before Perl 6's object system was.)


A competition still has to be judged by someone.

The only curious element here is that a (the?) judge also had the winning entry. Russ claims he got buy-in from the other core members and so far no one has contradicted that.

AND...vgo is better than dep. Let's not forget that. Peter can hem and haw about politics but in the end the best horse won.

Go developers want the tools to be as good as possible, 99% of them don't know anyone who contributes to Go by name so this drama is irrelevant to them.


> AND...vgo is better than dep. Let's not forget that. Peter can hem and haw about politics but in the end the best horse won.

Nobody has worked with minimum version selection and compared it to the traditional approach enough to know. I have serious reservations as to how it will work in practice.


> AND...vgo is better than dep. Let's not forget that.

Facts not in evidence.


After reading TFA, which tries to be self-serving but is mostly just silly, do you even doubt it? I say this as someone with no dog in any Go fight. When the defendant's own testimony convicts himself, he's guilty.


I couldn't tell if that assertion was ironic or not.


Yeah it's hard to see how Peter can reconcile:

> Russ failed to convince the committee that it was necessary, and we therefore didn’t outright agree to modifying dep to leverage/enforce it.

and

> The community of contributors to dep were ready, willing, able, and eager to evolve it to satisfy whatever was necessary for go command integration.

If you're willing to say "Sorry, BDFL, you didn't convince us this was necessary, so we're not doing it," then you're not willing to do whatever is necessary; BDFL objections are necessities. Peter and Sam just found out what happens when an unstoppable force meets a quite movable object.

I agree with the sentiment that Russ's only mistake here was leading them on for too long. I see this as an argument in favor of Linus's style of BDFLing: in this case he would have cussed them out over their design's flaws and told them in no uncertain terms that it was never getting merged, and probably that they were stupid. It would have hurt more at the time, but it also probably would have saved them a ton of time and, in the long run, might have even hurt less.


I've interacted quite a bit with both Sam and Peter. Your characterization rings completely false. I've never gotten the least hint that they were doing this for self-promotion; rather, they were filling an obvious vacuum, perhaps even with a degree of reluctance.

I remember being excited when the “dep committee” was formed, because (if I remember correctly) it was set up with the explicit blessing of the Go Team, with a Go Team Member on it to facilitate two-way communication.

Your characterizations are both inaccurate and ugly.


> Your characterizations are both inaccurate and ugly.

I think it may be blunt, but it seems true to me. Sam calling the integration of Go modules a sad milestone does seem disgruntled.

The dep folks have decided not to take the high road, which is fine, but others are free to call that out.


I want vgo to succeed and I like most of your contributions, but comments like this and the original post (“Sam just wants to be the big man on campus”) are petty and rude. You’re better than this.


Not that I disagree with you. But this unending whining by the dep folks is not endearing them to me or to many others.


The vacuum was filled, thanks to them. Now, an official solution is being presented. Why is there a problem?


Additionally, dep is, and always was, meant as an experiment to learn from. Russ gave multiple concrete examples of things he learned from dep. That the dep authors don't take this as a win is telling. Pitching dep as the eventual Go dependency management system and having that not be the case is a problem they invented for themselves.


> dep is, and always was, meant as an experiment to learn from.

My impression is that the dep folks understand that too. The problem is that there is no consensus on what was learned from it.

The dep folks seem to have come away convinced that a SAT-solver approach is the better approach. rsc is clearly convinced of the opposite.

Everyone knows it is ultimately rsc's call, so I don't think talking about the power dynamics is very interesting. What I am more interested in is whether or not it's the right call. A good faith interpretation is that the dep folks aren't sad that their solution lost, it's that what they believe is a better solution lost.


I'll present a counterpoint of a different kind, since everyone is arguing about what current package managers do and don't do.

Satisfiability problems of this kind appear in a ridiculous number of fields and applications (and not just by reduction).

The vast majority of them, in practice, are approximated rather than exactly solved.

Most of the ones that are exactly solved are in software verification, model checking, etc. Areas where having an exact answer is very critical.

Outside of that, much like you see in MVS, they approximate or use heuristics. And it's fine. You don't notice or care.

The idea that "package management" is one of those areas that absolutely must be exactly solved to generate acceptable results seems to me to be ... probably wrong.

There are much more critical and harder things we've been approximating for years and everyone is just fine with it.

(I.e., not clamoring for faster exact solvers.)

Thus I have trouble saying a SAT solver is a better solution. It certainly would be a more "standard" one in this particular instance, but that's mostly irrelevant. It's also a very complex one that often fails in interesting ways, in both this and other domains.


> The idea that "package management" is one of those areas that absolutely must be exactly solved to generate acceptable results seems to me to be ... probably wrong.

The logical conclusion of your statement is that minimal version selection is the wrong approach! Minimal version selection is an "exact" solution, in contrast to the traditional one. You would only arrive at MVS if you considered the problem of selecting precise dependencies to be so important that it's worth making it the user's problem instead of having the tool solve it. The philosophy of the traditional package management solution is that it's best to have the tool do the right thing--select most recent versions that satisfy constraints--so that the user is free to worry about more important things.

> There are much more critical and harder things we've been approximating for years and everyone is just fine with it.

Yes! That's why MVS, and by extension vgo, is oriented around solving a non-problem!

> It's also a very complex one that often fails in interesting ways in both this, and other, domains.

I cannot name one example of a single time SAT solving has failed in Cargo.


I feel like you are really stretching here. You took a bunch of words out of context so you could parse them.

"Minimal version selection is an "exact" solution, in contrast to the traditional one. "

It's not an exact solution to SAT; it's an exact solution to a simpler problem than SAT (2-SAT). A problem that admits linear-time solutions, even.

That is, in fact, what a lot of approximations actually are - reduction of the problem to a simpler problem + exact solving of the simpler problem.

Some are heuristic non-optimal solvers of course, but some are not.

Certainly you realize the complexity and other differences between "an exact solver for SAT" and "an approximation of a SAT problem as a 2-SAT problem + an exact solver for 2-SAT"

I can write a linear time 2-SAT solver in about 100 lines of code and prove its correctness. It's even a nice, standard, strongly-connected-component based solver.

Here's a random one: https://github.com/kartikkukreja/blog-codes/blob/master/src/...

So if I have an SCC finder implemented somewhere, it's like 20 lines of code.
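
To make that concrete, here's a minimal sketch of such a solver in Go (toy code, not from any real package manager): encode each literal as a node, turn every clause (a OR b) into the implications !a -> b and !b -> a, run Tarjan's SCC algorithm, and reject if any variable shares a component with its own negation.

    package main

    import "fmt"

    // Literals are encoded as ints: variable v is node 2*v, its negation 2*v+1.
    type TwoSAT struct {
        n   int
        adj [][]int
    }

    func NewTwoSAT(n int) *TwoSAT { return &TwoSAT{n: n, adj: make([][]int, 2*n)} }

    func neg(lit int) int { return lit ^ 1 }

    // AddClause records (a OR b) as the implications !a -> b and !b -> a.
    func (s *TwoSAT) AddClause(a, b int) {
        s.adj[neg(a)] = append(s.adj[neg(a)], b)
        s.adj[neg(b)] = append(s.adj[neg(b)], a)
    }

    // Solve returns a satisfying assignment, or nil if none exists.
    func (s *TwoSAT) Solve() []bool {
        N := 2 * s.n
        index := make([]int, N) // Tarjan discovery order; -1 = unvisited
        low := make([]int, N)
        comp := make([]int, N) // SCC id per node
        onStack := make([]bool, N)
        for i := range index {
            index[i] = -1
        }
        var stack []int
        counter, nComp := 0, 0

        var dfs func(v int)
        dfs = func(v int) {
            index[v], low[v] = counter, counter
            counter++
            stack = append(stack, v)
            onStack[v] = true
            for _, w := range s.adj[v] {
                if index[w] == -1 {
                    dfs(w)
                    if low[w] < low[v] {
                        low[v] = low[w]
                    }
                } else if onStack[w] && index[w] < low[v] {
                    low[v] = index[w]
                }
            }
            if low[v] == index[v] { // v roots an SCC: pop it off the stack
                for {
                    w := stack[len(stack)-1]
                    stack = stack[:len(stack)-1]
                    onStack[w] = false
                    comp[w] = nComp
                    if w == v {
                        break
                    }
                }
                nComp++
            }
        }
        for v := 0; v < N; v++ {
            if index[v] == -1 {
                dfs(v)
            }
        }

        result := make([]bool, s.n)
        for v := 0; v < s.n; v++ {
            if comp[2*v] == comp[2*v+1] {
                return nil // x and !x are equivalent: unsatisfiable
            }
            // Tarjan numbers SCCs in reverse topological order, so the
            // component with the smaller id is "later"; choose the literal
            // whose component comes later so all implications hold.
            result[v] = comp[2*v] < comp[2*v+1]
        }
        return result
    }

    func main() {
        // (x0 OR x1) AND (!x0 OR x1) AND (!x1 OR !x0) forces x0=false, x1=true.
        s := NewTwoSAT(2)
        s.AddClause(0, 2) // x0 OR x1
        s.AddClause(1, 2) // !x0 OR x1
        s.AddClause(3, 1) // !x1 OR !x0
        fmt.Println(s.Solve()) // [false true]
    }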

Past this, your argument about "taking the user's time" is so general that you could apply it to literally any problem in any domain. You can plug whatever domain you like, and whatever solution you happen to like, into this argument.

Here it's backed by no data: you have surfaced zero evidence for your premise that it is taking user time. This entire thread, in fact, contains no evidence that it's taking any appreciable amount of user time, so it definitely fails as an argument.

(in fact, the only evidence presented in this thread is that the algorithm simply works on existing packages)

If you actually have such evidence, great; I'm 100% sure that the Go folks would love to see it!

Because right now, the main time spent, in fact, seems to be people arguing in threads like these.


> Certainly you realize the complexity and other differences between "an exact solver for SAT" and "an approximation of a SAT problem as a 2-SAT problem + an exact solver for 2-SAT"

I'm saying that the theoretical complexity of the core dependency resolution algorithm is irrelevant in practice. Therefore, removing useful features to reduce SAT to 2-SAT is not a good trade. I, and everyone else who has worked with Cargo, keep saying this, but nobody listens. :(

> Here it's backed by no data: you have surfaced zero evidence for your premise that it is taking user time. This entire thread, in fact, contains no evidence that it's taking any appreciable amount of user time, so it definitely fails as an argument.

Minimum version selection makes it the user's problem to fetch the newest version of dependencies. That is the entire premise of minimum version selection. If you want to upgrade your versions, you have to use "go get -u". That command blindly updates all minor versions of packages. The problem arises when you have some packages that did not follow the semver rules (or are on 0.x) and you need to hold them back to avoid breaking your build. That is when the more fine-grained version control that systems like Cargo support becomes essential. Vgo has unfortunately decided to omit that support in favor of some theoretical benefits that make no difference in practice.
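
For what it's worth, the kind of hold-back being described is a one-line edit in Cargo (the crate name and versions here are made up for illustration):

    [dependencies]
    # hypothetical crate that broke semver in 0.3.7: stay on the
    # 0.3.x line but below the bad release
    flaky-parser = ">=0.3.0, <0.3.7"

Go modules do have exclude directives for skipping specific bad versions, but nothing equivalent to a general upper bound like this.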

> If you actually have such evidence, great; I'm 100% sure that the Go folks would love to see it!

The Go team has the evidence in that every other package manager uses maximal version selection instead of minimal version selection, because of the problems with minimal version selection.

I strongly suspect that the problems with minimal version selection will become apparent over the years as people hit the limitations, at which point it will be clear that Go made a mistake, but one that is difficult to fix. In particular, I think that, several years down the line, there's a good chance that running "go get -u" in large software projects will result in a broken build, because people are imperfect and don't perfectly follow semver. So people just won't upgrade their packages very often.


In large software you won't run go get -u for all packages; you'll upgrade each package separately, to the maximum version or to a specified one. It's just that you, the user of the modules, choose what to upgrade and when, rather than the tool doing it automagically.


> It's just that you, the user of the modules, choose what to upgrade and when, rather than the tool doing it automagically.

You've correctly identified the problem with vgo.


You didn't find examples of Cargo failing because most of the time it just does the equivalent of go get -u, and solves a non-problem!


I think there's empirical evidence that the SAT-solver approach is not necessary. I have done an analysis on as many Gopkg.{lock,toml} files as I could find, and in no instance did it ever do any non-trivial version selection: the maximal available version at the time was always selected. Additionally, Russ has stated that ~93% of the top 1000 Go packages in the wild build successfully with no changes. I appreciate that they may be convinced, but I think they need to ask themselves what evidence would change their mind. They have had months, coming up on a year, to figure this out.

https://github.com/zeebo/dep-analysis


The more interesting question for me is: what evidence would change Russ's mind?

The package managers for many successful languages and distributions use lockfiles and constraint solvers. Not only is that empirical evidence that it works technically, it is evidence that it works socially — users are able to understand and work with it, and the package ecosystems for those languages have evolved with those rules in place.

Empirical data from Go's own package ecosystem is useful too, but you can only learn so much about package management from a corpus that does not have sophisticated package management. The ecosystem has already learned to work within the restrictions so you'll mostly see packages that confirm the system's own biases.

It's like counting passers-by on a bike trail and concluding that the only vehicles users need are bikes.

I'm not saying vgo isn't better. But it's an unproven approach where lockfiles and constraint solving are proven, multiple times over. The burden of proof lies on vgo.


It's clear that a SAT solver is strictly stronger than the MVS approach. In other words, any MVS selection can be encoded in a SAT solver. The argument is for a reduction in power. Thus, in order to convince someone that a SAT solver is preferred over MVS, you must show examples where it succeeds when MVS fails, and the extra power is necessary. It's trivial to contrive these situations, but finding them in practice seems harder. Evidence of that happening would help change my mind, and I'd hope would help change Russ's mind.
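
As a concrete illustration of how little machinery MVS needs (a toy model, with made-up module names and integer versions standing in for semver, and ignoring the final pruning step in the real algorithm): each module version lists minimum versions for its dependencies, and the build list is, per module, the maximum of all the minimums reachable from the root.

    package main

    import "fmt"

    // modver is a module at a specific version (integers stand in for semver).
    type modver struct {
        mod string
        ver int
    }

    // reqs maps each module version to its direct requirements.
    // In a real tool this would be read from go.mod files.
    var reqs = map[modver][]modver{
        {"root", 1}: {{"a", 1}, {"b", 1}},
        {"a", 1}:    {{"c", 2}},
        {"b", 1}:    {{"c", 1}},
        {"c", 1}:    {},
        {"c", 2}:    {},
    }

    // buildList walks the requirement graph, keeping the maximum of the
    // minimum versions requested for each module.
    func buildList(root modver) map[string]int {
        chosen := map[string]int{}
        visited := map[modver]bool{}
        var visit func(mv modver)
        visit = func(mv modver) {
            if visited[mv] {
                return
            }
            visited[mv] = true
            if mv.ver > chosen[mv.mod] {
                chosen[mv.mod] = mv.ver
            }
            for _, r := range reqs[mv] {
                visit(r)
            }
        }
        visit(root)
        return chosen
    }

    func main() {
        // c is needed at >=1 by b and >=2 by a, so MVS picks exactly c@2:
        // no solver, and no jump to a newer c@3 that nobody asked for.
        fmt.Println(buildList(modver{"root", 1})) // map[a:1 b:1 c:2 root:1]
    }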

The empirical data from Go's package ecosystem is drawn from a corpus with sophisticated package management: dep. The argument is that dep is unnecessarily powerful and that a simpler approach will suffice. The evidence supports that argument. Note that this is not an argument about Cargo, or Bundler, or any thing else. Right now, the ecosystem is using dep, and there is evidence that it can be done simpler.

To stick with your analogy, I think it's fair to conclude that the only vehicles users need on bike trails are bikes.

Additionally, I have done an analysis of two Rust projects that have been brought up in my discussions on this issue. Specifically, LALRPOP and exa. In both cases, throughout the entire history of the project (hundreds of changes over 4-5 years), Cargo only had to select the largest semver compatible version [1]. Again, I would love to find examples of projects where this strategy was not sufficient.

[1] There is one complication: in Cargo, a ^ constraint (the default kind) on a v0 dependency is only good up to the minor version. In other words, ^0.1.0 means >=0.1.0 and <0.2.0, whereas ^1.0.0 means >=1.0.0 and <2.0.0. Selecting the largest semver-compatible version is meant in this way because of the community norms around breakage in v0. In an MVS world, any breaking change is a major version bump, and would have the same properties, but with different version strings.


I feel like this argument is focusing on the wrong thing. It doesn't matter if the entire corpus of Go packages has trivial version requirements that don't require solving. What seems like a much more important issue is the fact that MVS literally picks different versions of dependencies. Specifically, it picks the oldest satisfiable version rather than the newest. And while this simplifies the algorithm, it also has the consequence that you don't get any bugfixes to packages if you haven't explicitly requested the version that includes the bugfixes.

One of the main benefits of semantic versioning is that you can upgrade packages to new minor and patchlevel versions without breaking backwards compatibility, thus allowing you to easily pick up bugfixes by simply updating your dependencies. Of course, you should still test after updating, as packages could introduce new bugs, but on the whole, updating minor and patchlevel versions is far more likely to fix bugs than it is to introduce them. But MVS discards this benefit and says you cannot get bugfixes unless you're willing to manually edit your dependency list to declare that you want the newer package.

The net result is that packages that use vgo are likely to end up stuck on old versions of dependencies. This is especially true for indirect dependencies. If I publish a library, my incentive is to declare the oldest version of my own dependencies that I'm compatible with, in order to give my library's clients the most control over dependency versions. But if my library's client doesn't know about my own dependencies, then my library's dependencies will almost certainly resolve to a really old version, and my library's client won't even know about it, and so won't be in a position to request the newer, less-buggy package version.

Which then means I have an incentive to instead constantly update my library to list the newest versions of my dependencies, which forces my library's client to upgrade those dependencies simply because they need to upgrade my library, even if they would prefer to be conservative and not upgrade.
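
In go.mod terms, the situation described above looks something like this (module paths and versions invented for illustration):

    module example.com/app

    require (
        // MVS builds with exactly v1.4.0 even after v1.4.7 ships a
        // bugfix (assuming nothing else in the graph requires more);
        // picking up the fix means editing this line or running
        // `go get example.com/lib@v1.4.7` yourself.
        example.com/lib v1.4.0
    )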


The first package installer I was aware of that used a SAT solver was SUSE's, back when Yum's solver was extremely primitive and would regularly fail to find a solution.

https://en.wikipedia.org/wiki/ZYpp

I think using a SAT solver for package installation arose out of dealing with much more complex requirements than are likely to arise in a Go project. The Smart package installer used heuristics to find a solution depending upon the operation:

https://bazaar.launchpad.net/~smartpm/smart/trunk/view/head:...

Poetry, CocoaPods, and Dart are all apparently using this SAT solver:

https://github.com/dart-lang/pub/blob/master/doc/solver.md

FWIW, I wrote a package installer that worked with RPMs, Solaris packages and AIX packages about 15 years ago and ended up with a minimal version selection similar to vgo. I wasn't a genius or anything... I just wasn't aware of SAT solvers at the time and it was the simplest thing that worked.


It also ignores rsc's clear algorithmic preferences. No, RE2 does not support back-references, because they require a back-tracking implementation with exponential worst-case behavior. RE2 uses an NFA with O(nm) worst-case behavior. It's a classical Unix approach where a simple implementation is preferable to having more features, especially if there are algorithmic considerations.
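
Go's own regexp package follows that same RE2 lineage, so the trade is easy to see in a few lines (a small illustrative sketch):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    func main() {
        // Pathological for backtracking engines; linear time under the
        // RE2-style matching that Go's regexp package implements.
        re := regexp.MustCompile(`(a+)+$`)
        fmt.Println(re.MatchString(strings.Repeat("a", 100000) + "b")) // false, quickly

        // The flip side: backreferences are simply not in the syntax.
        _, err := regexp.Compile(`(a)\1`)
        fmt.Println(err) // non-nil: invalid escape sequence
    }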


Throwing away dep means also throwing away the nontechnical groundwork that dep was built on - for instance, user research. Seems pretty careless to me, especially when the alternative is the product of one person's thinking on the subject done in a vacuum without the input of an entire committee of smart, reasonable people who have literally spent years diving into this specific domain.


As a user of dep, glide, and godep, and an avid reader of vgo's technical docs, I favor vgo's solution. I've been a professional Go developer for a few years now.

The notion that this committee speaks for the community seems a weak one to me, and I've seen my view mirrored by many of my peers. It's good they attempted such research, but it seems like confirmation bias: dep just seemed like a rewrite of glide with similar fundamentals and an improved user experience. This appeal to authority by the committee to represent the Go community seems unproductive and unnecessarily divisive/misleading.

I wasn't thrilled when I saw Sam elected to lead the implementation of the "official experiment", because I wasn't a fan of Glide at all. There was no community vote to oversee this; one maintainer of one flavor of Go package management was declared the expert and rewrote the existing Glide solution with some lessons learned. Many other maintainers (experts) of other Go package management solutions favor vgo.

I've navigated many thorny dependency problems in Go before, and never have I been convinced that the solution was NP-complete version constraints. MVS, good tooling, and finally SIV are enough to make this miles better than previous attempts in this space. Also, hooray for GOPATH elimination and project-based workflows; I've always loved GB by Dave Cheney.


I have exactly the same feeling. And I remember that at the start of dep there was no consensus, and a big hope that the final integrated solution would be more Go-ish than Glide. I was very surprised by how dep began. I don't believe that rsc and the Go team didn't know from the beginning that it would not fit sooner or later...


Why do you think vgo wasn't built on the nontechnical research carried out by dep? In fact, that's normally what I would assume to be the purpose of something branded an "official experiment": the code will certainly be thrown away eventually, but the experience gained will be used to create the actual final product.


> without the input of an entire committee of smart, reasonable people who have literally spent years diving into this specific domain.

If that is the case, why wasn't the solution developed outside of the Go team's ambit? After all, the Go team said multiple times that they don't need a module system, as Google does not use one.


A solution was in progress before Russ preempted it with his own thing: dep.


> for instance, user research

"If I had asked people what they wanted, they would have said faster horses."

(or yet another Bundler/npm/Cargo clone.)


"specially when the alternative is the product of one person's thinking on the subject done in a vacuum without the input of an entire committee of smart, reasonable people who have literally spent years diving into this specific domain."

Even this very article contradicts your assertion about how it was done.


From this account, it sounds like “the community” was encouraged by “the core team” to work on, meet about, and discuss an idea, and to present work to “the core team”, which was then ignored.

Would you really expect those people to be excited about the outcome?

This feels like an overly personal attack.


This is some pretty ugly character assassination. I haven't seen anything that justifies your characterizations of Sam or Peter.


"The first principle is that you must not fool yourself and you are the easiest person to fool."

- Richard Feynman



