
> Minimal Version Selection

Means no security fixes at the price of, well, minor developer inconvenience? What is the inconvenience, exactly?

> ...tomorrow the same sequence of commands you ran today would produce a different result.

I mean, I guess this is technically true. But seems like it shouldn't be an issue in practice as the API you're calling shouldn't have changed, just the implementation. If it has changed, then downgrade the dependency?




As a long-time maintainer of various packaging systems (and co-author of one):

I find myself wholly in agreement with the idea that maximal version selection is appropriate for operating systems and general software components, but not necessarily desirable for a build system.

When you consider the evaluation of a dependency graph in the context of a SAT solver, you realize that the solver would consider both a minimal and maximal version as potentially satisfying the general constraints found in a dependency graph. Whether you then optimize the solution for a minimal or maximal version becomes a matter of policy, not correctness.
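
To make that concrete (a toy sketch of my own, not any real solver; version comparison is simplified to string order):

    package main

    import (
        "fmt"
        "sort"
    )

    func main() {
        // Published versions of some dependency and a single ">= 1.2.0" constraint.
        available := []string{"1.5.0", "1.2.0", "1.4.7"}
        min := "1.2.0"

        var satisfying []string
        for _, v := range available {
            if v >= min { // toy comparison; real tools parse and compare semver
                satisfying = append(satisfying, v)
            }
        }
        sort.Strings(satisfying)

        // Both answers satisfy the constraint; choosing one is policy.
        fmt.Println("minimal policy:", satisfying[0])                  // 1.2.0
        fmt.Println("maximal policy:", satisfying[len(satisfying)-1])  // 1.5.0
    }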

The security concern can be appropriately addressed by increasing the minimum version required in the appropriate places.

With all of that said, I think that Go's potential decision to optimize for minimal version selection is likely to be considered a bug by many because it will not be the "accepted" behavior that most are accustomed to. In particular, I can already imagine a large number of developers adding a fix, expecting others to automatically pick it up in the next build, and being unpleasantly surprised when that doesn't happen.

This is an interesting experiment at the least, but I hope they make the optimizing choice for minimal/maximal version configurable (although I doubt they will).


> When you consider the evaluation of a dependency graph in the context of a SAT solver, you realize that the solver would consider both a minimal and maximal version as potentially satisfying the general constraints found in a dependency graph. Whether you then optimize the solution for a minimal or maximal version becomes a matter of policy, not correctness.

I believe rsc is hoping they can avoid the need for a SAT solver entirely by going with minimum versions (and, implied, not allowing a newer package to be published with a lower version number).


Whether you use a SAT solver or not doesn't really matter for the purposes of this discussion -- the SAT solver is just a tool that can be used to find a solution given a set of logical statements.

My point really was this: both a maximal and a minimal version can exist that satisfy a minimum-version (>=) bound dependency. In such a case, which one is chosen is a matter of policy, not correctness.

As for not allowing a newer package to be published with a lower version number, that is sometimes necessary in a back-publishing scenario. For example, imagine that you publish a newer package with a higher version that has breaking changes and a security fix, and you also publish a newer package of an older version that just has the security fix. It's entirely valid to do so, for what should be obvious reasons.


Actually, I think the SAT solver is avoided by making the only version constraints of the form >=.

Using the min version appears to eschew the need for lock files. Want to upgrade? Bump your min version.
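
For instance (illustrative only; the module paths are made up and the exact go.mod syntax in the vgo prototype may differ):

    module example.com/myapp

    require github.com/some/dep v1.2.0  // want a newer dep? bump this to v1.3.0 and rebuild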


> Using the min version appears to eschew the need for lock files.

This only works if the system also prevents you from publishing a version of a package with a lower number than any previously-published version of that package.

So, for example, after you've shipped foo 1.3.0, if a critical security issue is found in foo 1.2.0, you can't ship foo 1.2.1. Instead, all of your users have to deal with revving all the way up to foo 1.3.1, where you're allowed to publish a fix.

It's not clear to me why they're trying so hard to avoid lockfiles. Lockfiles are great.


You CAN ship v1.2.1 after v1.3.0 is live. I have tested this with the vgo prototype and it works fine (see github.com/joncalhoun/vgo_main):

    $ vgo list -m -u
    MODULE                          VERSION                    LATEST
    github.com/joncalhoun/vgo_main  -                          -
    github.com/joncalhoun/vgo_demo  v1.0.1 (2018-02-20 18:26)  v1.1.0 (2018-02-20 18:25)
Notice that v1.0.1 was released AFTER v1.1.0 (different numbers than the v1.2.1/v1.3.0 scenario above, but the same back-publishing situation).

What the minimum version is doing is giving our code a way to automatically resolve upgrades when they are necessary. E.g. if module X requires module Z with a version >= 1.0.1, while module Y requires Z with a version >= 1.1.0, we clearly CANNOT use v1.0.1, as it won't satisfy the requirements of Y, but we CAN use v1.1.0 because it satisfies both.

The "minimum" stuff basically means that even if a version 1.3.2 of Z is available, our code will still use v1.1.0 because this is the minimal version to satisfy our needs. You can still upgrade Z with vgo, or if you upgrade a module X and it now needs a newer version of Z vgo will automatically upgrade in that case (but to the minimal version that X needs), but random upgrades to new versions don't just occur between builds.


What happens if:

1. I depend on foo with constraint ">1.5.0". The current minimum version of foo that meets that is 1.7.0.

2. Later, foo 1.6.0 is published.

3. I run go get.

If I understand the proposal correctly, that go get will now spontaneously downgrade me to foo 1.6.0. That defies the claim that builds are always reproducible.


So, I think you're right... but this is only a flaw if you as a user specify a lower bound that does not exist. The tool won't do this. And it can be prevented by disallowing referring to versions that don't exist.

It's entirely valid (and interesting! I hadn't thought of this one), but I'm not sure if this would happen even once IRL, except for people trying to break the system. Which can be fun, but isn't a risk.


My experience from maintaining a package manager and trying to keep the ecosystem healthy — which mirrors my experience on lots of other systems with many users — is that anything your system allows people to do will be done at some point.


heh, good point :)

as always, of course there's a relevant xkcd: https://xkcd.com/1172/


If your module requires at least version 1.3.0 of a dependency (i.e. 1.3.0 is the minimum version that satisfies the constraint) then it doesn't matter if a new version appears upstream. 1.3.0 is always the minimum version that satisfies >=1.3.0. That is, unless version 1.3.0 disappears from upstream.

If there is only one constraint against a dependency, then it behaves exactly as version locking.

The "maximum of the minimums" rule kicks in when the same dependency appears more than once in the dependency graph, because the constrains might be different.

vgo won't fail and say "incompatible versions". It will just resolve to the biggest of the lower limits. It's up to the build and test system to judge if the combination works.


One potential problem with dependency resolution that optimizes solely for a minimal or maximal bound is that it usually ignores the problem of untested version combinations.

That is, a given version that satisfies a version bound may not have necessarily been tested with all of the different combinations of versions of components it is used with. It will be interesting to see how minimal version selection interacts with that particular challenge. For a build system it may matter much less than an operating system.


As far as I know, that problem is effectively unsolvable. For most real-world-sized dependency graphs, the set of all valid combinations of dependency versions is heat-death-of-the-universe huge.
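
To put a rough number on it: even a modest graph of 30 dependencies with 10 published versions each already has 10^30 possible combinations, far more than any test matrix could ever cover.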


The way the system I worked on "solved it" was to provide another layer of constraints called "incorporations" that effectively constrained the allowed versions used to resolve dependencies to a version that was delivered for a given OS build. Administrators could selectively disable those constraints for some packages, at which point resolution was solely based on the "standard" dependencies. But by default, dependencies would "resolve as expected".

The "incorporate" dependencies are described here: https://docs.oracle.com/cd/E26502_01/html/E21383/dependtypes...


We can only hope they'll at least provide a tool you can run to bump all the minimum versions up to the latest releases. But then of course that will make solving dependencies a lot harder, since it's much more likely you'll get an unsolvable set of deps. It's hard to see how this could ever help anything.

Lock files also have the major advantage that you have a hash of the thing you're downloading so you're not depending on the maintainer not e.g. force pushing something.
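
e.g. a lock entry typically pins both the exact version and a content hash, something like this (purely hypothetical format, placeholder hash):

    github.com/some/dep v1.2.0 sha256:<hash-of-downloaded-archive>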


Because only the minimum version can be specified, you will always get a solvable set of deps. Those deps might not build but they will be solvable.


These issues all seem to be solved by lockfiles, which are part of the dep tool. Why ditch them here? rsc seems to be hinting at an argument for simplicity. But the long history of package managers, and my anecdotal experience, confirms that you're going to need a lock file no matter what. Good versioning practices are not enough; npm versions 1-4 prove that you need locking, and that it must be on by default to make reproducible software. Minimum versioning will likely be better for stability than always taking the newest version, but it's not enough.


Not only that, but locking allows for what is IMO the best way to handle this through CI:

Always run your builds with maximal version selection in CI (unlocked), and lock the new versions in if they pass all tests; otherwise keep the previously locked versions and flag the change for review.


> I mean, I guess this is technically true. But seems like it shouldn't be an issue in practice as the API you're calling shouldn't have changed, just the implementation. If it has changed, then downgrade the dependency?

And if the new implementation has a new bug, you might be screwed. It worked last week, but not this week. How do I get back to the working version?


Use 'git bisect' or some similar tool to walk back and find the point where the lock file change triggered the bug?


If you have a lock file, there's no problem. I was arguing about what happens when there isn't a lock file.

I may have inferred something that wasn't in the original comment, by reading too many of the other comments on this page.


Sort of, but given that we're mostly relying on git, what are the advantages of lock files over submodules/subtrees?


Well, that would lock all of the projects into git. What if some libraries are in git, and others are in mercurial and darcs?

I think it’s better to have a package manager that’s not reliant on a particular source control system.



