And this is an awesome idea. Cryptographically hashing your install and keeping the install spec around for later is very powerful.
> Lock files, dependency resolution algorithms, etc. However, these are obsolete.
These are still very important (and hard) problems. Claiming that they’re obsolete ignores what they’re used for.
> The functional package management paradigm approaches the problem from a different angle which is able to precisely describe the full dependency graph for a piece of software, all the way down to the bootstrap binaries (the C compiler's compiler, etc.).
I suppose this is fine from the perspective of a system package manager. But at the application development level, there are not very many users who want to specify all that.
If you want to play around with different software stacks in Guix and Nix, you have to actually write packages, which involves tweaking a lot of files. There aren’t so many people with the intestinal fortitude to get down to that level. I want a system where I can say “try running with a different version/build/configuration of this particular dependency”, and then the PM figures out what else needs to be done to accommodate that. Users don’t want to tweak package files a lot, and even less do they want to manage the profusion of version/configuration-specific package files that result from doing such experimentation.
We’ve developed Spack as kind of a compromise between these extremes. Spack has builds parameterized by version, compiler, build variants, compiler flags (soon), etc., and it attempts to let the user experiment with a large combinatorial space of packages. You can do that with a command-line syntax (without editing package files), and you can specify tweaks down to the dependencies with a recursive syntax.
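To make the shape of that syntax concrete, here's a rough, self-contained sketch of splitting a Spack-like spec string into its parts. This is a toy approximation, not Spack's actual grammar (the real syntax is recursive and richer); the `mpileaks` example package name is just illustrative:

```python
# Toy parser for a Spack-like spec: NAME[@VERSION] [%COMPILER[@VERSION]],
# followed by zero or more "^"-prefixed dependency specs of the same shape.
# This flattens the recursive syntax for simplicity.

def parse_node(chunk):
    node = {"name": None, "version": None, "compiler": None}
    for token in chunk.split():
        if token.startswith("%"):
            node["compiler"] = token[1:]          # e.g. "gcc@4.9"
        else:
            name, _, version = token.partition("@")
            node["name"] = name
            node["version"] = version or None
    return node

def parse_spec(text):
    """Split a spec string into the root package and its "^" dependency tweaks."""
    parts = text.split("^")
    return {"root": parse_node(parts[0]),
            "deps": [parse_node(p) for p in parts[1:]]}

spec = parse_spec("mpileaks@1.2 %gcc@4.9 ^openmpi@1.10 ^callpath@1.0")
```

So a user can pin the compiler or a transitive dependency's version entirely from the command line, without touching any package file.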
I support code teams at LLNL who want to experiment with many different compiler flags, dependency versions, etc., across Blue Gene/Q, Cray, Linux, GPU, and Xeon Phi machines, and none of them want to specify everything quite as rigorously as Nix/Guix demand. What we want is really good build parameterization and a concise way to specify it. I don’t see many Guix/Nix builds written that way — they’re all tied to specific versions and you’d need a new package file to support a new version.
> functional package managers overcome the problems that plague traditional package managers, allowing for unprivileged package management, transactional upgrades and roll backs, multiple variants of a package coexisting without conflict, etc.
That’s one way to look at it. The other is that these systems ignore these problems. What you’re really offering is a really good reproducible build system, at the price of rigorous specification. Reproducibility is a big deal, and there are great reasons to do this, but you can’t say the other systems are “obsolete” when they have very different goals.
What Nix/Guix are not doing is reducing the burden of specification. You write down all the details of a very specific, reproducible software stack, but you do not make that software much more composable or extensible than it already was. The user can install the version combination that you packaged, but can they easily try their own? Can they easily try to build with a different compiler/compiler version/set of compiler flags/dependency version/etc.?
npm, pip, etc. hide a lot of details from the user, and that is why people likely continue to use them (i.e., they are not “obsolete”). Constraint solving, etc., are still necessary to hide a lot of the complexity that users don’t want to deal with. Nix and Guix are great for reproducing a snapshot, but app devs want to explore the build space more than that.
Spack attempts to find a happy medium. We cryptographically hash builds, but we also let the user build things that the original package author may not have tried yet, without modifying the package. That does require all the constraint solving nastiness, but it doesn’t kill reproducibility. Spack stores the generated build provenance after doing some constraint solving, but the tool helps fill in the missing details of the dependency graph, not the human. Nix and Guix, AFAIK, do not do that. Spack isn’t fully reproducible because it’s not a full system package manager and it doesn’t use chroot, but you can try the same build again using the provenance that was generated when all the solving was done. The tool just helped the user along the way, and I think that’s still a very useful (not obsolete) thing.
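The "cryptographically hash builds" idea can be sketched in a few lines: hash a fully concretized node together with the hashes of its dependencies, so the same resolved build always maps to the same identifier. All the package names and fields here are made up for the example:

```python
import hashlib
import json

def spec_hash(node):
    """Hash a concretized node together with the hashes of its dependencies."""
    payload = {
        "name": node["name"],
        "version": node["version"],
        "compiler": node.get("compiler"),
        "deps": sorted(spec_hash(d) for d in node.get("deps", [])),
    }
    # json.dumps with sort_keys gives a canonical byte string to hash.
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

zlib = {"name": "zlib", "version": "1.2.8", "compiler": "gcc@4.9"}
app = {"name": "app", "version": "1.0", "compiler": "gcc@4.9", "deps": [zlib]}

h1 = spec_hash(app)
# Changing any detail anywhere in the graph changes the hash:
h2 = spec_hash({**app, "deps": [{**zlib, "version": "1.2.11"}]})
```

Storing the concretized graph alongside its hash is what makes "try the same build again" possible later.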
Slides 9-11: https://tgamblin.github.io/files/Gamblin-Spack-SC15-Talk.pdf
Slide 14: https://tgamblin.github.io/files/Gamblin-Spack-SC15-Talk.pdf
I haven't looked at Spack, but as a Nix user, it's often the case that different versions of some software package require different packages in order to build them. For example, particularly on a less-used platform like darwin, I've submitted numerous patches to the Nix package of MariaDB due to minor breakages in newer versions. It's not always simple to parameterize the upstream version.
That said, there are many cases where it is that simple. Overriding a package in Nix to change the version is possible, but could be easier from a UX perspective.
> The user can install the version combination that you packaged, but can they easily try their own?
As an example, I needed an older Ruby version that required an older libxml2 than available in nixpkgs. I did need to create my own Ruby package, but was able to simply override the version in the existing libxml2 package, and I configured nix to use that version across all other predefined nixpkgs that required libxml2 and were needed by my project. This meant a lot of compiling on my part (since nixpkgs of course can't publish binaries for my customized builds), but it was fairly straightforward.
If spack doesn't know about the version, it can try to scrape the package webpage to find it automatically, or you can just add another one-line `version()` directive in the package file, rather than creating a new package entirely or making a global config change.
My point is mainly that to automate this type of change, you need something like constraint solving to adjust the DAG. That can be used in conjunction with a functional Nix-like build, and actually adds some value to it. That is what Spack is doing. This is why I claim the constraint solving, etc., is not obsolete.
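A minimal sketch of what that constraint solving looks like: each dependent contributes a version range for a shared dependency, and the solver picks the newest available version inside the intersection. The version tuples and ranges here are invented for the example:

```python
# Toy version-constraint solver: each dependent contributes an inclusive
# (min, max) range for a shared dependency; "concretization" picks the
# newest available version satisfying all of them.

def pick_version(available, constraints):
    lo = max(c[0] for c in constraints)  # tightest lower bound
    hi = min(c[1] for c in constraints)  # tightest upper bound
    candidates = [v for v in available if lo <= v <= hi]
    if not candidates:
        raise ValueError("no version satisfies all constraints")
    return max(candidates)

available = [(1, 2), (1, 3), (2, 0)]
# One dependent needs libfoo >= 1.2, another needs libfoo < 2.0:
chosen = pick_version(available, [((1, 2), (9, 9)), ((0, 0), (1, 9))])
```

Real solvers handle far more (variants, compilers, conflicts), but the point is that the tool, not the human, reconciles the ranges and adjusts the DAG.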
This is not so. You do not have to write down all the details of the complete software stack, as build systems can be abstracted away (e.g. `gnu-build-system` provides the GCC toolchain, binutils, make, etc.). Only immediate first-level dependencies are declared.
And in many cases you don't have to do even that because you can generate package expressions from upstream information using `guix import`. There are importers for CRAN, bioconductor, pypi, hackage, CPAN, and others.
> The user can install the version combination that you packaged, but can they easily try their own? Can they easily try to build with a different compiler/compiler version/set of compiler flags/dependency version/etc.?
Yes! We use Guix for multiple clusters at a research institute and of course users must be able to create package variants, with different configure flags or compiler versions. This use-case is covered very well by Guix.
Don't get me wrong -- Nix is the inspiration for a lot of what Spack is doing, but we've added to it by making an attempt to template the packages by version, compiler, and options. So the user doesn't have to "create" the package variant at all: they just `spack install numpy %firstname.lastname@example.org` to compile with the Intel compiler, because the compiler can be swapped in/out of any build. We do similar things with versioned interfaces like MPI -- you can swap OpenMPI or MVAPICH in/out of a build. I have not seen anything to lead me to believe Guix allows this in a simple way, without generating a new package and copying a lot of boilerplate from the old one. The graph transformation stuff you mentioned in your other comment is promising, though.
Yep, this. Glad someone else is saying it, because I'm starting to wonder how people can read literally thousands of words rooted on a foundation of making it easier for developers, and reducing their uncertainty, and then expect them to write a book-length spec doc.
(note that requiring people to write down tons of spec information reduces, at most, one kind of uncertainty, and at high cost)
> Spack stores the generated build provenance after doing some constraint solving, but the tool helps fill in the missing details of the dependency graph, not the human.
Sounds a lot like what I described, no?
(I haven't heard of Spack, will look into it)
> isn’t fully reproducible because it’s not a full system package manager
I didn't touch on shared libs in the article, and probably should've, at least to point out that, for the purposes of the discussion of a PDM, it's mostly out of scope.
Things are really just so, so much nicer when we actually let there be layers and scopes of responsibility.
Like the author, I want language implementers to split the packaging realm: anything that's a pure source dependency can and should be packaged with language-specific tools, but anything that runs on the system should be distributed in some sort of universally executable format, such as a launcher shell script with an application directory in /usr/bin.
Otherwise you're in this current mess where anyone writing a system package manager has to deal with language runtimes and fight with the language-specific package manager to handle distribution/installation/removal/rollback/versioning in a coherent way with the rest of their platform. Moreover, the user has to make the decision between installing a package from their system vendor or from the language-specific repository.
So I guess what I'm advocating for is for the "language package manager (LPM)" concept to go away. If you need to distribute source dependencies, that should be handled on a per-project and not per-language-environment or per-system basis. If you need runtime dependencies, distribute them as a tarball in a standard directory format and let system vendors package them like the rest of the software they manage.
The JVM world has good things going with Maven and Gradle. I never have to use some Java command-line package manager to install a global dependency; rather, I'm declaring what my project needs and letting the build tools handle the rest. I never have to deal with conflicting dependency versions between projects or language versions. And binary Java applications are easily available via my system's package manager. There is just no need for the middle layer.
Language PMs are "good enough" to solve the problems of the people actually writing the software, and in most cases that is really all that matters. I'm sorry they're not good enough for you, but it's not my fault that Rust, Python, and Go will let me upload to their package manager but Debian/Fedora/whatever you use will not.
Your fight is not with the language PMs, it is with your OS vendor who will neither give me upload rights nor send out someone else who does, who neither designs tools for me to use nor sends someone who already knows how to use them.
To be clear, I think there are good reasons why OS vendors won't change their ways, but software developers won't change our ways either, so neither language PMs nor OS PMs are going away any time soon.
As far as Fedora is concerned, becoming a package maintainer is a fairly straightforward process, and you are more than welcome to package up your own software and submit it to the software collection, or solicit someone to assist you with it. The rpmdevtools package contains a super-useful utility called rpmdev-newspec that provides templates for creating packages for most popular standards out there already (Ruby gems, Python setuptools/distutils scripts, CPAN modules, etc.) - it's really easy to get started, and you can always push your packages up to COPR without needing sponsorship from the existing packaging team.
Which is precisely why rpmdev-newspec has templates for this case. Simply running `rpmdev-newspec -t ruby/python/perl/php-pear/ocaml/R mypackagename` will generate an rpmspec file ready to go for any of these languages using their standard package managers (gem, distutils/setuptools, CPAN, pear, OPAM, packrat) - just add the necessary metadata (Requires/BuildRequires, summary, description, version, and a changelog).
It's not magic, and it still requires minimal effort (3 minutes' worth of metadata + calling mock to test the build) - but it's hardly difficult. Sure, you could make some extra scripts to automatically populate the metadata from the language PM's descriptor of choice to limit the manual work to release bumps and editing the changelog - and I don't think anyone would be opposed to that, but that's pretty much all that would be left.
The biggest problem is that outside of rolling-release distributions like Arch, Gentoo, openSUSE Tumbleweed, and Fedora Rawhide, bumps in major versions within a release are highly discouraged - and a lot of people used to language PMs have a habit of changing things rapidly even post-1.0 and not supporting old branches - often leaving the distribution packager needing to backport patches for security or bug fixes. I'm not going to argue whether it's better or worse, but it's an issue that needs to be addressed.
I guess my issue is that "good enough" is a very fluid concept, and I often discover some months later that it was my excuse for shitty code/integration tests.
> Copr is an easy-to-use automatic build system providing a package repository as its output.
> Start with making your own repository in these three steps: 1) choose an architecture and system you want to build for; 2) provide Copr with src.rpm packages available online; 3) let Copr do all the work and wait for your new repo.
Gradle internally uses Maven repositories. Leiningen (the Clojure build tool) also uses Maven, and the nice thing here (while being rife with other problems) is that it pulls the correct language version in as a dependency (i.e. it pulls in Clojure in the required version).
Sure, if I need the source of something for compilation or testing, i.e. 'pip install numpy', then having that handled by pip + virtualenv makes sense.
Besides that use case, brew should handle everything to reduce conflicts.
The end result is that there is no "global" dependency like you would install with `npm install <foo>`. There is no dealing with dependency mismatches between projects that depend on "<foo> >= 1.2" for example, which can be a hassle to track down in the "global dependency installation" model most often used by these language-level package managers.
Same thing for gradle, you need to install it in the first place. There is absolutely no difference except the language that is being used. And yes, you can use shell scripts with npm too, except your shell scripts aren't going to run on Windows.
I think that's what GP was referring to -- it's not a shell script for automating use of the tool, it's a shell script for automating acquisition and initialization of the tool.
I googled quickly for gradlew.bat and from the first link I opened it looks like it depends on java being installed already.
Node already includes npm because it's the official package manager. Java doesn't include gradle, presumably because they have multiple options and none of them is considered the official one?
This is simply not true. Language package managers cannot describe the entire dependency graph of a piece of software because they can't handle things that aren't written in that language. For example, there are plenty of Ruby gems that require a C compiler and C shared libraries in order to work, but those dependencies cannot be encoded in that system. System package managers can describe the entire graph, thus providing richer information.
>Finally most system package managers won't let you have multiple versions of a library installed at all, which makes using more than 2 programs at the same time virtually impossible (because they usually want different versions of some dependent library or other).
Yes, this is a big issue with the status quo system package managers. Fortunately, there's hope on the horizon in the form of functional package management. Functional package managers such as GNU Guix and Nix allow for an arbitrary number of variants of the same software to coexist without conflict. Additional benefits include unprivileged package management, transactional upgrades and roll backs, reproducible builds, and no central point of trust for binaries (anything can be built from source).
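The trick that makes coexisting variants possible is content-addressing: each build lives under a path derived from a hash of everything that went into it, so two variants of the same package never collide. Here's a toy model (the path layout and input names are made up; real Nix/Guix store paths are computed from full derivations):

```python
import hashlib

def store_path(name, version, inputs):
    """Derive a collision-free install prefix from the build's inputs."""
    key = "|".join([name, version] + sorted(inputs))
    digest = hashlib.sha256(key.encode()).hexdigest()[:12]
    return f"/store/{digest}-{name}-{version}"

# Same package and version, built against different toolchains, lands in
# two distinct prefixes, so both can be installed at once:
p1 = store_path("openssl", "1.0.2", ["gcc-4.9", "zlib-1.2.8"])
p2 = store_path("openssl", "1.0.2", ["clang-3.7", "zlib-1.2.8"])
```

Unprivileged installs and rollbacks then fall out naturally: a user profile is just a set of symlinks into immutable store paths.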
Language package managers are good for quickly sharing code written purely in that language with other developers, but unacceptable for anything else.
On the contrary, the system package manager has to have a simpler model, precisely because it has to handle multiple languages. A generic model is never going to be able to capture the ruby-specific details as well as a ruby-specific model.
Take the Pg gem, for example. In order to build and use it, you need the libpq shared library. RubyGems can't encode this dependency, but Guix can. The recipe for ruby-pg is only 25 lines long: http://git.savannah.gnu.org/cgit/guix.git/tree/gnu/packages/...
There was a talk given at FOSDEM about foreign packages in Guix with a focus on Ruby which is worth watching or skimming the slides: https://fosdem.org/2016/schedule/event/guixmodules/
Right, so, what almost all of the FLOSS world is doing, almost all of the time. Miiiight be worth optimizing for that case. Especially if a proper PDM would compose well with larger systems...which it would!
Citation needed. I have been packaging (scientific) software for years and it's usually a happy mix of different languages and libraries.
Spack attempts to solve this by allowing extensions [2,3] for languages that have their own module systems. Basically you can embed language-specific module enablement logic in the package, and you can enable/disable packages in particular Python installs. You can then build several versions of numpy with different dependency libraries and swap them in and out. It's a compromise between language and system package managers. Right now it's too stateful in that the modules are linked/unlinked into a python installation. We have plans to provide something more like virtualenv or conda environments.
Functional package managers are nice, as are reproducible builds, but they come up short for customizability. I'd have to write an awful lot of packages with Nix and Guix to try out many different versions, compilers, dependencies, etc. Spack has a syntax that allows these to be composed arbitrarily. It's not fully reproducible right now (though it could be if we went down to glibc and used a chroot jail), but the packages are much more composable than the current crop of functional package managers.
I disagree. At least with Guix you have programmatic access to the dependency graph (since package objects are just Scheme values). You can traverse the package graph and easily swap out all occurrences of a particular package. You do not have to write the package expressions yourself as you can just modify the package objects directly.
See also the paper "Reproducible and User-Controlled Software Environments in HPC with Guix".
Since the paper was published Guix also gained command-line syntax to rewrite the dependency graph on the fly, e.g.
guix build guix --with-input=guile=guile-next
spack install guix ^guile@<new version>
There's an iterative DAG-building step called concretization that looks at these types of constraints and tries to solve for a satisfactory DAG, which is then built in a functional style. I don't see how I would model that in Guix, but in Spack it's a few lines, and there's command-line syntax to customize versions, not just programmatic access, e.g.:
spack install email@example.com ^firstname.lastname@example.org ^email@example.com
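The graph-rewriting half of this (what `--with-input=guile=guile-next` does in Guix) is easy to model: walk the dependency graph and swap every occurrence of one package for another, producing a new graph. The node shape and package names below are invented for illustration:

```python
# Toy "--with-input"-style rewrite: replace every occurrence of one package
# in a dependency graph with another, returning a fresh graph and leaving
# the original untouched.

def rewrite(node, old, new):
    name = new if node["name"] == old else node["name"]
    return {"name": name,
            "deps": [rewrite(d, old, new) for d in node.get("deps", [])]}

graph = {"name": "guix",
         "deps": [{"name": "guile", "deps": []},
                  {"name": "libgcrypt",
                   "deps": [{"name": "guile", "deps": []}]}]}

rewritten = rewrite(graph, "guile", "guile-next")
```

The concretization step described above would then run constraint solving over the rewritten graph before building it functionally.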
The people in charge there have either implemented or plan to implement support for those language-specific managers in the Gobolinux tree.
As best I understand it, this works by making the Compile tool able to use these managers to download, and then capture the output and place it into the proper place in the /Programs tree.
Arguably, this might already be the wrong approach. Whatever you say, it is a problem (and is causing problems) that I can do both `apt-get install python-mutagen` and `pip install mutagen`, but what I cannot do for sure at this point is get rid of one, and only one, of these tools. So before speaking about how I can fuck up writing some language-specific PM, it might be fair to admit that pretty much nothing actually is language- or project-specific when talking about computer systems, and to consider why somebody might write another such tool once we're done with our task.
If we also had some simple, standardised, cross-platform convention for indicating which packages were available and had which dependencies on which platforms, then we could have common repositories that advertise both the specific versions and the specific supported platforms of each package they host, and any package management tool could speak the same protocol to those repositories. Many things would get better in terms of portability as well. This is an area where the language-specific tools do sometimes have advantages today, though those advantages typically turn to dependency hell almost immediately if a package contains a native-code element that it decides to build with whatever C(++) compiler it can find on the target system.
OTOH: part of the reason that i included the notion of "compiler, phase zero" is because i think that's one thing that, scope-wise, differentiates SPMs from PDMs. an SPM's output is executables. i think this difference is significant. maybe i'm wrong.
(If anything, I think there's more overlap between an SPM and an LPM).
If I write a language and ask users to use their favorite package manager to fetch dependencies, I'm not going to build a big community because packages and their dependencies will not be shared across platforms, or each user will have to add some manifest for every package manager, or different repositories will have to be maintained for each platform ...
os package managers and language package managers ARE orthogonal.
In fact, pretending that these are different things and that "developer" is something drastically different from "user" is precisely the reason, why all (or, well, most) package managers in existance are so fucked up. The hard truth we must accept before we talk about package managers is that there's no hard lines in this domain. It's a complex problem and must be dealt with accordingly, without exceptional reductionism.
Go didn't support shared libraries until very recently, and still doesn't use them for things other than where runtime pluggability is actually required. If I want to install a Go program without compiling it, I just use my favorite package management tool-- the cp command.
Meanwhile, Linux package managers have acceptable versions of many of the Go programs I want to use. And building from source just uses git to pull the dependencies it needs. Not every programming language needs to reinvent the wheel, poorly. This looks like a solution looking for a problem.
Because a language is nothing without an ecosystem of libraries. These libraries need to be version-ed and dependencies between these libs need to be resolved.
> Go didn't support shared libraries until very recently
It still doesn't.
> If I want to install a Go program without compiling it, I just use my favorite package management tool-- the cp command.
That's not the issue at end here. If i'm a developer and I need to publish an open source project, i'm not going to publish it with all its dependencies. I need a mechanism to manage them, that's what a package manager is for.
Go has a half-baked one, go get. It can fetch dependencies but it cannot distinguish between versions of the dependencies.
It's interesting to note that a lot of Go developers are actually against a separate package manager.
OK, fair enough. Go supports shared libraries, but not shared libraries in Go that can be loaded from Go. It's a bit confusing. Design doc here: https://docs.google.com/document/d/1nr-TQHw_er6GOQRsF6T43GGh...
> If i'm a developer and I need to publish an open source project, i'm not going to publish it with all its dependencies. I need a mechanism to manage them, that's what a package manager is for... Go has a half-baked one, go get. It can fetch dependencies but it cannot distinguish between versions of the dependencies.
You don't need a package manager to lock down the versions of external Go dependencies. You only need "go get with versions". The reason why "go get with versions" has not been implemented is very simple: the Go developers looked at the horrible stuff that was going on in Maven with ancient dependencies being used all over the place, and decided "let's not implement that." This is the point that all the pro-version-locking people never talk about, let alone acknowledge: version locking encourages people to use old versions "for stability reasons" and makes it seem acceptable to break APIs in new versions.
It's disingenuous to call "go get" "a half-baked package manager." "Go get" is fully baked, but Go ain't C++ or Java. We don't need DLL hell, lockfiles, dependency graphs, and all that.
This whole discussion makes me think of a quote by Yossi Kreinin. Many people feel that "good design" means "lots of code" or "lots of layers". Actually, those are some of the meanings of "bad design". If you think you need to build a rocket ship to walk down to your mailbox and pull out the mail, you might want to rethink.
There is a compromise solution brewing where you can "vendor" dependencies (i.e. bundle them with your project.) See https://docs.google.com/document/d/1Bz5-UB7g2uPBdOx-rw5t9MxJ... There are maybe a few cases where this won't work, like with proprietary code where you're not allowed to redistribute the dependency, but in general this should solve 99% of the problems out there. The forthcoming shared library support with traditional Linux package managers should solve the other 1%.
Especially as a user who isn't programming in the given language. Don't make me install some package manager for your language before I can run your program.
Self-contained executable or installer, please, or go back to the wood shed.
Language package managers suck, but this is way worse. Self-contained executables bundle all of their dependencies, which is terrible for both reproducibility and security. Users and system administrators have no reasonable way of identifying, patching, and updating vulnerable libraries/programs in these things, leaving them dependent on the upstream providers of each binary bundle for a bunch of software they didn't even write. I've seen a growing push to "appify" GNU/Linux lately, with Docker, xdg-app, OmniBus, etc., and I have become very worried about what will happen if this becomes the dominant way of distributing software on GNU/Linux.
Package management is good for users. We need a safe and sane way to maintain the software running on our computers. The control ought to be in the hands of the user, not the developer.
That is only a religious belief. If two programs share a common library, an upgrade to that library (say a security fix) could fix things for one program, but introduce a hole into the other. Every upgrade to some shared piece must be validated by the developers and QA of every program that uses it. 99 programs could be fine with the change, and the hundredth could break.
And if you want utmost reproducibility, then you in fact need a given version of a program to have its exact dependencies, so that you're running exactly what the developers are running. If program X needs libfoo.1.2, and program Y needs libfoo.1.3, and the programs are actually bundled with their specific version of libfoo, then you have better reproducibility than if libfoo.1.3 is foisted upon program X because program Y requested that version.
The model where you have one libfoo only works if everything is open source and packaged by an upstream distribution, which takes care of curating the entire combination of stuff, so that when program Y needs libfoo.1.3, the entire distro is officially pushed forward to that libfoo version; it becomes the official libfoo for program X also. What you have matches the upstream and so behaviors are likely to be reproducible. If the vendors for different programs are completely independent, then you in fact sometimes need multiple versions of dependencies.
I really wish we'd get generic package management. A proper package management server and protocol, that is. Something that can be used and picked up by Windows and Linux distros, but also by domain-specific packages (pypi, etc), browser extensions, games with addon repositories, and so on. It's sad that right now we can't answer "and now we need package management" with "sure, let me fire up pkgserve.d" or something.
Even if it does happen, though, I suspect that end-user package management still ends up looking different from developer package management.
What is possible I think is to have generic package management servers and protocols. The same way you have HTTP servers right now and don't just roll your own domain-specific protocol for your own domain-specific web just because it's in go instead of python or whatever. And yes, you end up with multiple web browsers and that is fine. You even end up with domain-specific browsers (eg. embedded in games, or as a library, or "live apps" and what have you) and that's all fine.
What I see, whenever I see a new package manager, is a colossal waste of time that could have been avoided.
Try to host your game's addons in a nix repository, see how well that goes...
Also, I'm curious as to what you mean by "system package manager" when nix explicitly supports installation of software by unprivileged users (the packages only get linked to from the user's profile, so it's safe). I also run nix under debian a lot very successfully.
If you want the game to load/unload addons at runtime, rather than just wrap the games executable with flags, then yes you have to learn to talk to nix daemon directly which is a bit more work. But do that and things will still work just fine.
(This was the diagram leading up to it: http://imgur.com/bzy22DA)
In the hazy future, one thing I wonder about is how to do back-pressure outside the monorepo: if upstream makes a mistake and breaks something important downstream, how quickly do they find out? Today this happens informally. Ideally (from a responsible maintainer's point of view) you'd have an easy way of finding out before publishing a new version, and the second-best approach would be automatic notification soon after. This might involve speculatively compiling and testing some downstream packages to find out how bad the breakage is, and probably involves coordinating continuous builds somehow, or perhaps making continuous builds the job of the central repo (making it even more centralized).
This practice of forever chaining a language to a bunch of fragile servers out there that go bad at the rate of links is worrying.
Most of the integrity of packaging systems (at the PDM level) derives from the assurances provided by the underlying VCS - e.g., Git's tamper-proofing assurances by virtue of how its commit DAG is built. If your system has a registry in the middle, that does create an SPOF; if that registry intermediates the VCS with tarballs, that takes away the clients' ability to rely on the VCS for verification. So, in my naive view, that's the point at which signing becomes more crucial.
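The Git-style tamper-proofing being referred to is Merkle hashing: each node's hash covers its own content plus its children's hashes, so a change anywhere in the history changes every hash above it. A minimal sketch (the structure is illustrative, not Git's actual object format):

```python
import hashlib

def node_hash(content, children=()):
    """Hash a node's content together with its children's hashes (Merkle-style)."""
    h = hashlib.sha1()  # SHA-1, as Git uses for object names
    h.update(content.encode())
    for child_hash in children:
        h.update(child_hash.encode())
    return h.hexdigest()

leaf = node_hash("lib/foo.c v1")
root = node_hash("release 1.0", [leaf])

# Tampering with the leaf changes the root, so a client that pins the root
# hash detects any modification below it:
tampered_leaf = node_hash("lib/foo.c v1 (backdoored)")
tampered_root = node_hash("release 1.0", [tampered_leaf])
```

A registry that re-serves tarballs breaks this chain: the client can no longer recompute the hashes itself, which is why signing has to take over at that point.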
It's just unbelievable that windows doesn't have this yet.
The one thing I hate doing, though, is compiling a lib for a particular version of MSVC and adding it to my project: I have to add the include path, the lib path, and the lib name, in both debug and release, after having run CMake on it.
Windows is still a little cranky.
I'd like the distribution to stick to its knitting and leave the decisions about my personal world of versioning and dependencies to me.
I don't use a Macintosh. Is their OS not built out of binaries and libraries? How are those managed, if not by a package manager? I was under the impression that Apple called their package manager "the App Store".
> I use homebrew, which has its problems no lie, but I can upgrade my own versions of software that might conflict with system level stuff without fear.
It's true that some systems' package managers don't support separation of each user's packages from each other, or from system-wide packages. For example, I've used dpkg and rpm which don't support that. However, some package managers can handle user-specific installation (e.g. Nix can), so I don't see how using a combination (e.g. using Nix for user-specific packages, in a Debian system managed by dpkg) is any different from using homebrew alongside whatever-the-Macintosh-package-manager-is.
I've found that using the same package manager for both makes life easier though; e.g. it avoids having two copies of something installed, since one PM didn't spot that the other had already installed it.
> I'd like the distribution to stick to its knitting and leave the decisions about my personal world of versioning and dependencies to me.
Personally, I consider myself to be in charge of my computers, so my personal decisions about versioning and dependencies apply to both user-specific packages and the whole system; e.g. if I want to test on different versions of Python, I should be able to; if I want to have my kernel built with different compiler flags, I should be able to.
The App Store isn't a package manager; it's a store where end users download applications, like Google Play, and it has a lot of restrictions regarding what can be published and distributed. One certainly cannot distribute libraries through the App Store or register alternative repositories. And the App Store doesn't resolve dependencies or anything like that.
This is all separate from user-level package management (homebrew, nix, etc.), and project/application/source-level package management (what that article was actually talking about).
Now that I'm finally done writing this, maybe I can write some code.
That's because with NPM you can install a specific version of module A at time t0 that also installs, based on package.json, the latest version of module B. But at t1 there is a new version of module B, so a fresh installation of the same version of module A ends up with a different version of module B. Obviously, this problem compounds dramatically when you use multiple modules at different depths of the dependency tree.
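The t0/t1 drift above can be simulated in a few lines. This is a simplified sketch (real npm caret ranges mean "at least the given version, within the same major"; here we just match the major), and the package/version data is made up:

```python
# Sketch of the drift described above: module A declares "B": "^1.0.0" in
# its package.json, so installing A resolves B to whatever the newest
# matching version is at install time. (Simplified caret semantics: we only
# match on the major version.)

def resolve_caret(range_spec, available):
    """Pick the highest available version matching a ^major.x.y range."""
    major = int(range_spec.lstrip("^").split(".")[0])
    matching = [v for v in available if int(v.split(".")[0]) == major]
    return max(matching, key=lambda v: tuple(map(int, v.split("."))))

registry_at_t0 = ["1.0.0", "1.1.0"]
registry_at_t1 = ["1.0.0", "1.1.0", "1.2.0"]  # B publishes 1.2.0 after t0

print(resolve_caret("^1.0.0", registry_at_t0))  # 1.1.0
print(resolve_caret("^1.0.0", registry_at_t1))  # 1.2.0 <- same A, different B
```

Lock files (and npm's shrinkwrap) exist precisely to freeze the t0 resolution so that a t1 install reproduces it.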
Hopefully mook pointed out the mention of shrinkwrap in the article.
I think it started being used at work (by stuff I wasn't working on) because installing old code that hadn't changed broke when some HTTP proxying middleware for Express changed its API or something.
Personally I've never seen this be a problem in practice. Possibly because I'm lucky, possibly because I've only used dependencies that don't do this. Since I mostly use quite popular libraries, and check the dependencies of the more niche ones I decide to use for exactly things like this, I think the latter is probably the case.
Out of curiosity, I wonder if anyone here has hit this problem in practice with npm and could provide a concrete example - in particular with popular open source dependencies?
Yes it's a possible problem in theory, but a lot of problems in software are possible in theory but rarely occur in practice because people follow sensible patterns to guard against them.
I'm only halfway through it - package managers aren't something I'm used to dealing with. It makes me wonder though, just how many package managers are there these days? Even within Python, I've heard of several.
Interestingly there are several threads on the nix mailing-list right now discussing how to do just this: implement a language-specific package manager, for a brand-new language, using nix. One guy is even using it to replace make.
Are there advantages in developing a new package manager today? There are many mature packaging systems (npm, Bower, Nix, OPAM, ...) which are not written in Go but could probably be adapted to support the Go ecosystem. Moreover, using an existing solution avoids the human cost of debating "design choices".
The primary requirement for a PDM is that developers will actually use it. If the interaction is much more complicated than the command diagrams I laid out towards the end of the general section, it's really just not gonna happen. Maybe someone could build a guix frontend that focuses in on just those things. I don't know. New territory for me.
My guess, though, is that SPMs and PDMs are really only superficially similar.
The important thing is that the underlying idea is rock solid, and the work of taming all the crappily packaged software out there is mostly done. So yes, there is work to do, but it's already getting easier. Big todos:
- Proper CI. We need to not merge PRs until we know the merge commit builds. (Travis is wrong in this regard.)
- Sane CLI. Things change over a decade (cough git cough), but the good news is we're going to replace the whole damn thing, 0 fucks given for backwards compat, and make something sane even for those that don't appreciate/understand the underlying elegance.
- Support for institutional installations: a central build daemon + farm is a pretty lame way to synchronize things. Should make it so anybody can build stuff, and anybody can share binaries with those that trust them, with a shared NFS store or equivalent for de-dup.
None of these things are hard at all.
At the research site where I work as a software person, we use a central guix-daemon managing a shared NFS store. Anyone can build stuff and manage their own software profiles from cluster nodes. Works very well for us and gets us a big step closer to reproducible science.
Sharing build artifacts also works with Guix. You can either export from and import items to the store or use "guix publish" to share items via HTTP.
Since every build is forced to its very own unique output directory (by prepending a hash of all the inputs), one user's build/installation does not affect other users.
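That "prepend a hash of all the inputs" scheme is easy to illustrate. A toy sketch (the real Nix/Guix derivation hashing covers the full closure of inputs and build instructions; this just shows the principle, and the package names are examples):

```python
# Toy illustration of the store layout described above: the output directory
# name is derived from a hash of all the build inputs, so two builds with
# different inputs can never collide - and one user's build can't clobber
# another's. (Real Nix/Guix hashing is far more involved.)
import hashlib

def store_path(name, inputs):
    digest = hashlib.sha256("\n".join(sorted(inputs)).encode()).hexdigest()[:12]
    return f"/nix/store/{digest}-{name}"

a = store_path("hello-2.10", ["gcc-9", "glibc-2.31"])
b = store_path("hello-2.10", ["gcc-10", "glibc-2.31"])
print(a != b)  # True: same package, different toolchain, different path
```

Because the path is a pure function of the inputs, two users who build with identical inputs also land on the *same* path, which is what makes the shared-store de-dup work.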
It's nothing more than three short portable shell scripts, using only ftp, sed, gzip, tar, rm, cd, etc.
Credit to the pkgsrc folks for making things simple enough that this is possible.
However my usage of "packages" is minimal. I prefer statically compiled binaries that I compile myself. And I write scripts to automate the fetching, patching and compiling.