Comparison of Programming Language Package Managers (docs.google.com)
93 points by mustpax 502 days ago | 95 comments



One of the things that I've found to be important for (and generally under-documented by) package managers is resolution strategy.

pip, for instance, eagerly resolves the highest version of a dependency first found in the tree and doesn't backtrack AFAIK [1]. Cargo sounds as though it behaves similarly, but will actually backtrack [2]. I've also seen package managers like 0install that will use SAT solvers to solve dependency constraints [3].

I've found lack of backtracking on conflicts to be frustrating, but I'm curious how the other two (and other conflict resolving strategies) compare in practice.

[1]: https://github.com/pypa/pip/issues/988

[2]: http://doc.crates.io/cargo/core/resolver/index.html

[3]: http://0install.net/solver.html


pip's lack of backtracking is doubly irritating, because it installs packages as it goes. It would be great if it would fetch all of the required manifests and make sure that the install was going to succeed before it starts spewing stuff everywhere, but instead it will happily back itself into a corner where A->B->C(v1) and A->D->C(v2) conflict and so it leaves the system with A, B and C(v1) installed and prints a nice red stack trace.


Yep, it's maddening. We use pip-compile [1] at work and check in the fully resolved requirements.txt. That's working well for us. However, I still head-desk every time I just want to install a local package (with pip install -e) to work on it and pip runs amok, naïvely downgrading/upgrading/doing-whatever-it-pleases :/

[1] https://github.com/nvie/pip-tools
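For anyone who hasn't used it, the pip-tools flow is roughly this (requirements.in is the conventional name for the loose, top-level list):

  $ pip-compile requirements.in     # pin the full transitive set into requirements.txt
  $ pip-sync requirements.txt       # make the environment match the pinned file exactly

The generated requirements.txt is what gets checked in, as described above.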


This, along with the inability to intelligently detect a lack of necessary system package dependencies (e.g. libcurses/libpq-dev), is the most irritating part of pip.


There was some discussion about potentially using a SAT solver for cargo in this issue:

https://github.com/rust-lang/cargo/issues/2064

and graydon wrote a POC of using the Z3 solver to find a solution where semver constraints were satisfied, optimizing for the latest versions within those constraints:

https://github.com/graydon/z3-rs/blob/master/tests/semver_te...


Resolution strategies are really interesting in Nix; specifically, because it doesn't have any! All dependencies, etc. are passed in explicitly (either as function arguments, or by importing files).


Yep, currently the maintainers are the ones doing the resolution, in order to keep the package set harmonious.


You are absolutely right about solver strategies. Most of the existing package managers in Go have no solver. It's all manual and typically the tip of master or latest commit.

Glide has a dumb solver in that it goes with the first best version it finds. It will tell you if there is a later conflict that you can manually resolve.

There is a package discussed here, called gps, that is being prepped to merge into Glide. This solver has a back-off strategy, loosely based on the solver in Dart. The idea is to find the newest version that works for the complete tree.

We've been incrementally working to get a good solver and are almost there.


At least three things wrong with respect to Bundler/RubyGems:

1. RubyGems is the package manager. Bundler is a meta-layer on top of that which does full dependency resolution to find the right version before installing. Bundler builds on the capabilities provided by RubyGems (and will be integrated into RubyGems in the future).

2. Bundler does support vendoring. It is widely discouraged, but I take advantage of it in my app packager for deploys at work (Cartage: https://github.com/KineticCafe/cartage, specifically cartage-bundler https://github.com/KineticCafe/cartage-bundler).

3. RubyGems and Bundler can use alternative sources (my work applications use both public and private sources simultaneously).

One thing I don't see but would love to see tracked and solved is authenticity verification. RubyGems has support for signed gems but it's not widely used and hasn't really been validated as Correct.

They should probably also look at CocoaPods and Carthage (iOS and macOS build dependencies) and whatever Swift provides.


> It is widely discouraged

It is also widely encouraged. Some engineers:

1. Don't trust dependencies to always be there.

2. Write apps that need to work in disconnected environments -- ie with no internet connectivity.

I work on the Cloud Foundry buildpacks team for Pivotal. Being able to stage and launch apps in a disconnected environment is A Big Deal for a lot of companies.


Absolutely. That’s sort of a fundamental reason why I wrote Cartage. I like some of what Capistrano does, but the general insistence on being able to `bundle install` on the target server is questionable, and the earlier practice for Rails apps of keeping the gems in Git resulted in other problems.

Heroku mostly does this right with its 'slug' system, and I made Cartage specifically to be able to make deployable packages that use dependencies. You have a build machine that’s connected to the network and can reach RubyGems (and, in the future with some plug-in work, other packaging systems—I expect to add Node, Elixir, and Lua soon) and that will vendor the resulting build and give you a tarball that is just installable with all of the dependencies of your package.


Cloud Foundry has staging and run steps, much as Heroku does, to the point that a lot of Heroku buildpacks will run without modification ... if you're in a fully connected environment at staging time. It's not an accident: Cloud Foundry was in part consciously inspired by Heroku in the early days, so adopting the buildpacks model was natural.

However, as I noted above, this model breaks for disconnected environments, in which neither the staging container nor the runtime container have internet connectivity.

Heroku's Ruby buildpack runs Bundler, and Cloud Foundry's buildpack is a soft fork of Heroku's, so either you vendor your dependencies before sending the code to Cloud Foundry for staging, or you get a failed staging step when the code in the staging container can't dial out to a remote repo.


Yeah. That makes sense. My run steps are in Ansible, so it made sense for me to make something that's just a packager. The nice thing about Cartage is that it can make use of already-vendored packages (old way) or you can create the deployment package from a developer machine. I really should write it up one of these days, but I'm so busy getting my team's pipeline fully fed that I haven't had time to properly shout out about Cartage (because, frankly, it's kind of awesome if I do say so myself).


Cartage looks neat (though Googling it took a few tries).

To me it "looks" like a buildpack, insofar as you are taking something, injecting its dependencies and producing an artifact that's ready to run by itself. If you ever whack bin/detect, bin/compile and bin/release onto it, it'll probably work well enough as a buildpack in connected environments.


Thanks. We’re going to add Node, Lua, and Elixir plug-ins soon to enable standalone running. The nice thing about making just a tarball at the end of the process is that it should be relatively easy to make anything else (Docker image, AMI, etc.) with that.

https://github.com/crohr/pkgr is the only thing that’s nearly similar, and it does a few things differently than Cartage (https://github.com/KineticCafe/cartage for anyone else following this subthread). Pkgr is a little more opinionated on using the OS environment and making sure that OS-level dependencies are fully declared…I leave that to Ansible, for the most part.


If you had an option to package the runtime, you could use the binary buildpack[0] to run Cartage tarballs.

[0] https://github.com/cloudfoundry/binary-buildpack


Call me old fashioned, but I find it annoying that every new language nowadays seems to want to pull in its own package manager, redundantly parallel to the perfectly good one my operating system provides. I am already perfectly fine with installing my software through apt-get. Why oh why do I need pip, and gem, and npm, and CocoaPods, and cargo, and NuGet, and on and on and on...

Using a new language is no longer a matter of cracking open a book, installing a compiler, and firing up vim. You've got to change your whole lifestyle now.


Global installs are the root of a lot of headaches when building software, in the same way that global mutable state is the root of a lot of headaches when developing it. Nix may be the one system package manager that is the exception to this rule (I don't have experience with it, so I can't vouch for that, but I hear great things). However, apt-get is incredibly bad at getting you the dependencies you need for a reproducible build.

I've seen this argument come up a few times now, and I have a hard time reconciling this rosy view of system package managers at the center of a development process with my own experiences. That process was never as easy as "cracking open a book, installing a compiler, and firing up vim" and usually entailed a fair amount of trial and error, environment variable fiddling, and sadness from having to chase down a derelict PPA (or the equivalent) hosting the actual dependency version I need. Then more sadness as the number of projects I work on increases and my dependency versions bifurcate.


Global installs are fine IMO; what's not fine is the lack of versioning. I would say for Node/NPM at least, the lack of global installs is a pretty big burden: you end up pulling in 500MB for a trivial application. The same versioning isolation could be provided globally without issues unless you're tampering with third-party code for some reason.


Yeah, it really depends on the language and the characteristics of the build artifacts, but I'm mostly speaking about my experience with Haskell and C. Natively compiled languages tend to have more problems with this because the libraries themselves hold linker references to each other. Versioning isolation alone isn't enough to solve these cases, because you can end up needing multiple copies of the same version of a library to satisfy the needs of all installed packages (because of versioning incompatibilities of transitive dependencies in different libraries). There's an article series about fixing this (and other problems) with Haskell's Cabal, but the solution basically involves Nix-style sandboxing [1] [2].

[1]: https://www.well-typed.com/blog/2014/09/how-we-might-abolish...

[2]: https://www.well-typed.com/blog/2015/01/how-we-might-abolish...


> Nix may be the one system package manager that is the exception to this rule

Some peers of mine recently experimented with building a Nix buildpack and concluded that it won't work. Nix has hardcoded paths for its core directories, and won't work in a security-constrained container.


Does this not work?

https://nixos.org/wiki/How_to_install_nix_in_home_(on_anothe...

I've not personally tried it. From reading that document, it seems inconvenient but possible.

My understanding is that the hard-coded paths are needed to ensure compatibility of the binary caches, since store paths are often embedded in other artifacts (e.g. RPATH in binaries, shebang line in scripts). You could always run your own cache using another prefix if that's a problem.
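As a sketch of what that embedding looks like (store hash and version numbers here are illustrative, using the stock 'hello' package):

  $ readelf -l ./result/bin/hello | grep interpreter
      [Requesting program interpreter: /nix/store/<hash>-glibc-2.24/lib/ld-linux-x86-64.so.2]

Rewriting that prefix means every such embedded path changes, which is why the binary cache only helps if everyone agrees on /nix/store.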


I have no idea where you got that idea. Nix has built-in support for building normal non-privileged Docker containers. The paths are hard-coded, but just like the hard-coded /usr/whatever paths in other Unix software, that's what the chroot in the container is for.

http://lethalman.blogspot.ca/2016/04/cheap-docker-images-wit...


The paths that are hard-coded are non-negotiably incompatible with how Cloud Foundry locks down its containers.

Other language managers don't hard-code and will allow stuff to run from a local directory.


They (other package managers) have to rely on at least a hardcoded ELF interpreter, or they don't provide binaries and probably don't even handle external dependencies.


> on at least a hardcoded ELF interpreter

Some rely on gcc, but again, it can be run from a local directory. Various dynamic libraries can be looked up by setting environment variables or passing parameters.

I think Nix is a great idea, but I'm not aware of an argument for hardcoding those root paths which truly holds water. It makes sense on NixOS, sure, but that's because it's an environment with top-to-bottom control.

The rest of us have to be good and share our toys.


Okay, you should have clarified that in your original comment: it's not that Nix is incompatible with unprivileged containers (it includes built-in support!), only that it's incompatible with a specific container system that isn't even the most popular one (which is Docker, AFAIK).

As for why those paths have to be hard-coded: they don't; it's a config variable you can change. When you change it, you give up using other people's binaries, so you have to build everything yourself, which is exactly like every other distribution system that relies on hard-coded /usr/whatever paths for binaries. The FHS standard is incompatible with Nix's guarantees and dropping it is where it gets all of its power, so it has to use something different, and it chose to make that path /nix/store/whatever.


Sad face. Nix is the one package manager that looked portable enough and interesting enough to push as a “single” manager for pretty much any system.


Have you had a look at pkgsrc? I know it's not as interesting as Nix or Guix, but I've had very good luck using it on OSX and various Linux distros, as well as its natural habitat of NetBSD.

http://pkgsrc.org/


Because apt-get doesn’t exist on the system I use, and most system packagers can’t/won’t keep up with the rate of change of development, anyway.

CPAN, RubyGems, etc. all exist because system packaging systems are (a) specific to a particular distribution, (b) generally many versions behind the development versions, (c) really hard to get into unless you are already a system packager for a particular distribution, and (d) effectively don’t exist for two of the largest platforms where you will find users.

I’d suggest making some sort of meta-system packaging system that works for all the languages, but xkcd 927 (and I wouldn’t use it myself, anyway, because it would probably not install Ruby packages correctly and would solve things worse than NPM already does).


Using a system package manager does not mean that one needs to use a distribution's (Debian, Ubuntu, Red Hat, Gentoo, Arch, etc.) mirror to pull the package; it does mean that you use an OS-provided tool to manage all software on your machine.

- Common libraries are installed once.

- Conflicts between common libraries are more transparent.

- System tools make sure that packages stay up to date.


Except that you haven’t solved most of the problems that I pointed out by saying “use a system package manager” (and have introduced several new ones):

a. I maintain ~13 Ruby gems. I can generate new versions and verify their install on any system that I can run Ruby on. Under “use a system package manager”, I now have to maintain at least four unique package management formats (deb, RPM, emerge, Pacman) just for Linux distributions and at least one for the BSDs. Thanks, you’ve just increased my workload at least five-fold, although I no longer have to worry about Windows or macOS users (wait, I’m a macOS user).

b. Sure, I can use a non-distribution mirror (like a PPA), but now you have to have at least six of those mirrors (see point a) and, again, too bad about Windows and macOS users since they obviously aren't smart enough to use an OS that has an in-built packaging manager.

c. In truth, you haven’t increased my package load five-fold, you’ve increased it much more than that. I’ve followed instructions for making debs and RPMs…and the only thing that works for me is using FPM because the system package managers are so bloody complex and built for a single particular use-case.

d. You still don’t have a solution for Windows or macOS users because neither of those OSes have system package managers, and no neither ports nor homebrew counts. RubyGems, pip, cabal, npm, etc. may increase the cognitive load but they work for people who don’t use Ubuntu Zany Zebra.

You’ve also introduced a new problem:

e. OS package managers (except probably nix) do not solve the multiple-installed-version problem. RubyGems allows (even encourages) multiple versions of a gem to be installed and applications (through their Bundler Gemfile) can lock to a particular version. I have packages with 1.x, 2.x, and 3.x releases where all three versions are still in use despite some backwards incompatibilities (mime-types 2.x dropped support for Ruby 1.8, deprecated some methods, and changed the default data format and loading mechanism; 3.x dropped support for Ruby 1.9 and removed the deprecated methods, and changed the default data format).
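As an illustration (version numbers here are just examples), side-by-side installs look like this with RubyGems:

  $ gem list mime-types
  *** LOCAL GEMS ***
  mime-types (3.1, 2.99.2, 1.25.1)

Each application then locks the line it needs via its Gemfile, and Bundler activates only that version at runtime.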

This is not possible in an OS package manager without renaming the installed library in some way. This is mostly fine when you're installing compiled software…but it's not even perfect there. I've been working with language-specific package managers for a long time, and I've been dealing with how distribution package managers break some of the packages that I work with for just as long. I generally prefer the mess we have now to waiting for the perfect system package manager, especially since there are systems which will never get them.

[fixed formatting]


Oh you know, just some silly little use cases like...

* building on not-linux -- you know, where almost all personal computers exist. Gee, wouldn't it be nice if most computers could actually compile your code?

* Having only one build config, only having to maintain one publication (ugh need to maintain deb, rpm, nix, chocolatey, brew, whatever the hell arch and suse use...)

* reproducible builds -- distros are well-known to randomly modify libs, making "foo x.y.z" in yum and apt basically random. It's not even clear to me I'll get the same thing from apt on different days!

* Iterating and collaborating on fledgling projects which rightly wouldn't be accepted into a distro -- or just iterating on a timescale smaller than "months"

* local deps -- using system-global deps (as preferred by most linux package managers) is known to be a huge nightmare for any language package manager that's tried. Hence pyvenv. Hence why so many have thrown up their hands in disgust and just started dockering all the things.

(inb4 someone comes to explain that Nix or whatever probably magically solves all these problems)


The irony remains, though, that every programming language has decided to implement its own package manager.

Like all things in languages, the world needs to be reimplemented. Which is too bad, but I also never see a great alternative. Eventually people want everything they work on to be in the same language, which of course has the advantage that anyone in that language can hack on it, but the disadvantage that we always start anew every time a language gets created.


You know, earlier today I was thinking, "wouldn't it be great if we all just used one programming language for everything?" And I tried to imagine which one it could be. And all had different strengths and weaknesses. C is great for raw speed, but ends up with very rigid programs, and is easy to shoot your foot in. Rust solves the second problem but not the first. Ruby solves both, but is much slower. LuaJIT solves all these, but doesn't have a great ecosystem. Etc etc etc. Every programming language exists because someone needed the trade-offs that it made, and no other language before it made them. And the package management situation of a given language is usually tied to the nature of how that language works, meaning you can't just use Homebrew to install Python and Lua and Ruby and Node and Rust packages.


I disagree about Rust (I assume you mean rigidity); you just need to learn how to work with its vtables through &Trait objects and/or enums. In terms of speed and foot-shots, it definitely beats C.

I think Rust is the greatest programming language of our generation.

But regardless, not everyone will agree with me, so everyone will want their own language. What we need is a general-purpose packaging tool (I like Cargo, but apt-get could serve equally well) that could support any language.


Where's the redundancy? When you pull down a project from github, grabbing all the specific versions of libraries it needs in a way which doesn't interfere with the versions of libraries used by other projects, is a different use case than the one system package managers perform, surely?


Yep. I always feel extremely uneasy when I encounter a codebase that asks me to install its language's package manager (npm, I'm looking at you) as root. I just go 'nope': the only thing that touches my real system is Portage. Even using npm or pip as a non-root user worries me, because there is an entire extra level of validation missing and it signals to me that developers are not doing due diligence and working with distro maintainers. Yes, I know it is more work, but it is an incredibly important step for quality control. The kids these days seem to think that containerization will solve everything, but these problems cannot be avoided.


Most language module systems don't need root. Certainly none of the mainstream ones.

Containers don't "solve" everything, but some of the primitives used to build them can be used to severely restrict process and user privileges -- to the level that it's much more secure than a "standard" unix-y shared environment.


Prepare for the hate-train from developers. But I agree with you. It's frustrating because each PM wants to install its own version, which ends up conflicting with another PM's version, and who knows where they end up, or in what state. Apt-get rarely fails for me, but pip, gem, and npm fail all the time. And who knows what version of the package they just installed, or where they got it from.

I wish devs would just tell me the libraries I need, and I'll get the packages myself.

Unless you're on the bleeding edge of development, you don't need the newest versions of a lib. And you probably shouldn't be developing on the bleeding edge anyway. There's a reason apt-get is so stable, and is used everywhere.

The caveat to all that is: I hate making packages for apt-get. It's a terrible process, and I completely understand why so many devs want to avoid it.


Actually no, it's not a terrible process, it's a misunderstood process[#]. Modern tools (debhelper 7+) allow for a simple process, though one needs to understand how binary packages work (a minimal debian/rules is sketched at the end of this comment).

This is a decent overview on technical details: http://vincent.bernat.im/en/blog/2016-pragmatic-debian-packa...

[#] I assume you meant the friction with the tools, not the fact that the package builder needs to chase down all the dependencies and generally prepare many source tarballs to be included in a repository.
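For reference, the "simple process" with debhelper 7+ can be this small for a simple package; a complete debian/rules can look like the following (debian/control and debian/changelog still need to be written, and note the tab inside the rule):

  $ cat debian/rules
  #!/usr/bin/make -f
  %:
  	dh $@

dh then drives the whole configure/build/install/package sequence, and you add override_dh_* targets only for the steps you need to customize.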


I really dislike this sentiment: "It's not terrible, you just don't understand it!" If everyone is just misunderstanding it, then it's terrible.

I write apt-packages all the time. I get the process. And yes, it's still terrible. And when you consider that devs need to know how to build for apt-get, yum, etc, it's understandable, IMO, why they don't want to bother.


> If everyone is just misunderstanding it, then it's terrible.

This is the place where I'm sure we're not talking about the same aspect of building packages.

I assumed that you were talking about Debian specifically, since the only hint to the topic was "apt-get" somewhere earlier. I thought that you were talking about the old, overcomplicated way of providing manually all the necessary targets (binary, build, install, clean) and specifying dh_{any} commands in debian/rules, possibly using underdocumented CDBS. This is what would be straightened up by the article I mentioned.

Instead, you seem to mean something about packaging on a higher level than that. Do you care to elaborate on what's terrible about building binary (or DEB) packages? I've been doing it for years and I have yet to encounter a more sensible way of deploying software, including web applications (which are one huge inconsistent and fragile mess).

> [...] when you consider that devs need to know how to build for apt-get, yum, etc, it's understandable, IMO, why they don't want to bother.

What magic do they need to know? There are very few things to learn for DEBs and RPMs. One thing (a common one) is to have a simple way to compile the project offline. Another (again a common one, and not limited to packaging) is to know how your runtime and compiler (if any) allow you to use libraries. Then you need to know that there can be post-install scripts, which are meant to e.g. add a user account for a daemon, and possibly how to write an init script. And there are two build-rules skeletons to know, but those are quite short. That is the only part which differs in a significant way. Or have I missed something in your opinion?


I don't know why I'm replying to a troll, but oh well.

Apt is no more standard than pip, gem, or any of the others.

Try maintaining packages for Debian, Red Hat, Arch, Gentoo, OSX, Windows, and a dozen others, for a while, and see how quickly you change your mind.

> You've got to change your whole lifestyle now.

Oh please.


I know Dart isn't super widely used, but its package manager, pub, is probably a good reference for this. We put a ton of work into it, and Dart users regularly single it out as one of the compelling parts of the Dart platform.

https://www.dartlang.org/tools/pub

It hews pretty closely to Bundler's model (versions, version ranges, lockfiles, shared dependencies), though. So I don't know if it gives you much insight beyond "Yes, another language team thinks Bundler is on the right track."

Personally, I have a hypothesis that states that all programming languages either have a package manager that works like Bundler or will end up getting one.


Based on my work in buildpacks, I agree with the sentiment but I'm not as hopeful as you are.


Perl's CPAN has tooling for diffing versions through the use of MetaCPAN (a top-notch site which every language should try to emulate). For example, here is a diff of the URI distribution:

https://metacpan.org/diff/file?target=ETHER%2FURI-1.71%2F&so...

This information is also available through an API for integration into command line tools.


IIRC, and I could be wrong, CPAN was the first to go down the route that many modern toolchains now provide. We've looked at it.

In fact, the original creator of Glide (Go package manager) wrote about Perl and CPAN when talking about Go at http://technosophos.com/2015/09/02/dont-let-go-be-condemned-....


Yeah, I think it was. The only thing older was probably CTAN, but it didn't have the same structure.

Other things from the Perl ecosystem that should be copied are:

- CPAN Testers which automatically tests every package on multiple systems from Windows to Solaris. This helps identify portability, backcompat, and regression issues.

- CPAN mirrors which ensure that there isn't a single point of failure. This might not be as important now with fast networks and high uptimes, but it also ensures that everyone can replicate all of the code at anytime. I believe R's CRAN does this.


CPAN Testers also test on different operating system versions. And hence also test the Perl binaries, for a lot of different versions, by literally running the combined test suites for all of the (public) Perl code.

I really, really don't get why this isn't copied to the rest of the open source language environments. (Masochism? :-) )


Well, yes. But CPAN does not provide source URLs via git (GitHub, SourceForge, ...). You need to upload your tar.gz to a central repo, which is then distributed via mirrors.

All modern package managers now provide such source URLs; Go especially went all-in on this idea.


I don't get why "pip" is checked in "Has separate manifest and lock files". Actually it doesn't have that feature (which Bundler for Ruby has, for instance). This very feature comes with a third-party package called "pip-tools", or possible alternatives, but raw pip doesn't have this ability directly afaik.

Fwiw pip isn't even able to enforce versions correctly (packages are installed as the file is read, and can conflict with previously expressed constraints). Or report installed versions correctly (it's possible that packages are half-installed or installed but not reported as such by pip commands).


Pip has a constraints file now. Running

  $ pip freeze -r requirements.txt > constraints.txt
after you've installed all your packages gives you a constraints file that can be used to reinstall exactly the same versions:

  $ pip install -r requirements.txt -c constraints.txt


Didn't know that option, thanks. Better than nothing, but unfortunately your environment is still subject to the remarks in my second paragraph. So pip-tools is still required if you want more guarantees.


One thing about Cargo's lack of 'vendoring' on the checkmark list: there is https://github.com/alexcrichton/cargo-vendor , and soon there will be https://github.com/rust-lang/cargo/pull/2857


On the CF buildpacks team in NYC, lack of vendoring prevented us from completing a buildpack that would work in disconnected Cloud Foundry installations.

We were sad.


Hm, even without the above stuff, `cargo fetch` should enable you to do offline builds just fine.


Can you point to a man page? We weren't able to find a reference doc on the cargo site.


"cargo --list" will show all commands, which shows fetch. "cargo fetch --help" then shows the help for fetch.

We certainly have some work to do on Cargo's docs. Please don't hesitate to open a thread on users.rust-lang.org if you run into some kind of issue. In addition, I actually live in Brooklyn; if Pivotal ever needs a hand with Rust stuff, just let me know, I'd be happy to swing by your office and talk about whatever.


Right, we've hosted you for a few tech talks (I'm that Australian guy who MCs).

Email me on ... uh ... wednesday, maybe? We could get you to visit to check the buildpack work we did last week.


Since I started looking for an alternative to NPM, I have discovered a couple of things:

* All current package managers are either language or OS specific. What if you have an application with code written in multiple languages?

* NPM didn't have any kind of integrity checks for its packages, and I assume most package managers don't either. If you download a corrupt package, for example, you won't have any idea and it will still install.

* Some package managers do better than others with regards to managing packages. I found NPM encourages dependency hell and offers very few management tools for dependent or installed packages.

* A lot of package managers seem to intermix packaging, distribution, and a registry. The registries tend to have limited names to pick from (like real estate) and can result in legal problems. Also if registration to the service catalog is required you cannot self-host or self-manage the distribution of your application.

I am trying to work on a solution to these problems at https://github.com/prettydiff/biddle


> * All current package managers are either language or OS specific. What if you have an application with code written in multiple languages?

Guix and Nix both work cross-language and cross-distro. Still OS-specific though, since they're Linux-only AFAIK. Also, containers partially solve this problem.

> * NPM didn't have any kind of integrity checks for its packages, and I assume most package managers don't either. If you download a corrupt package, for example, you won't have any idea and it will still install.

Any package manager that doesn't do integrity checks is a bad package manager. The only one I know of currently that doesn't is npm, but I haven't looked deeply into every available package manager.

> * A lot of package managers seem to intermix packaging, distribution, and a registry. The registries tend to have limited names to pick from (like real estate) and can result in legal problems. Also if registration to the service catalog is required you cannot self-host or self-manage the distribution of your application.

What package managers don't let you self host? I'm truthfully not aware of any. Even NPM does according to a quick google.

> I am trying to work on a solution to these problems at https://github.com/prettydiff/biddle

From your readme: "biddle is inspired by the incredible awesomeness of NPM". Since NPM is literally the worst package manager I have ever used, that line makes me want to stop reading and never touch biddle. I'd word it differently.

Edit: Reading biddle further. Dependency management and central hosting are some of the primary reasons to have a package manager. At least for me, that kills any interest at all. I imagine there's a niche market though?


I often see drive-by bashing of npm, but rarely any justification. What exactly is wrong with it?

(Aside from the lack of package integrity check, which I'll grant, sucks).


I think it's not bashing of npm specifically so much as it is the node ecosystem it serves and depends on; at least in my mind it's difficult to separate node from npm. That said, for what it's trying to do (read a list of deps, resolve vs. registry, download and unpack) it seems to do a fine job of it.

My major complaint about npm is the choice to allow version range operators on dependency declarations. We know the node.js ecosystem places a high value on composability, so using lots of tiny modules which themselves depend on lots of tiny modules is the norm. This is a problem though because range operators get used liberally everywhere, so getting reproducible builds is like winning the lottery.

There are other things I don't like about using npm: node_modules/ is big and has a lot of duplication (even with npm v3), it's pretty slow, historically it has been unstable, it's still crap on Windows, etc. - but for someone who has 'ensures reproducible builds' as part of their job description, the way its modules get versioned is its worst feature.


For reproducible builds (or at least 'to get the same versions again') you should be using 'npm shrinkwrap'. (Of course there's probably more you should do to get true reproducible builds, but that goes for any package manager).

The range operators are important; otherwise you'd never be able to resolve two packages that want a similarly versioned sub-dependency, e.g. jQuery 1.12, because without range operators those two packages would have declared minor version differences (1.12.1 and 1.12.3) depending on when they were published. This would mean you'd always end up with duplicated dependencies.
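To make that concrete, with two made-up packages declaring "^1.12.1" and "^1.12.3" on jquery, npm can satisfy both with a single copy, which shows up as something like:

  $ npm ls jquery
  my-app@1.0.0
  ├─┬ widget-a@2.0.0
  │ └── jquery@1.12.4
  └─┬ widget-b@3.1.0
    └── jquery@1.12.4  deduped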

I'd argue 'node_modules is big' is not a fault of npm. If the package or app you're trying to install generates a large node_modules dir, that is something you should take up with the package maintainer. See buble vs babel - buble has a way smaller dep tree.

npm is only slow in the ways that all other package managers are, when installing large dependency trees or native dependencies (like libSass) and it is way faster than say pip and rubygems in this regard. When I 'pip install -r requirements.txt' at work, I literally go and make a coffee.

Also, I've never experienced any instability, though I may have been lucky. Certainly it has been very stable for the last year or so that I've been working with it a lot. Could you elaborate on why it is crap on Windows? I thought all the major issues (e.g. the deep nesting problem) were now fixed ...


The main problems we ran into with shrinkwrap were:

It shrinkwraps everything in your current node_modules directory.

This includes platform specific dependencies that may not work on other platforms but now will cause npm install to fail instead of just printing a message about it.

So our current workflow has to be:

1. Update package.json

2. rm -rf node_modules/

3. npm install --production # This doesn't include any of those pesky platform-specific packages

4. npm shrinkwrap

5. npm install # Get the dev dependencies

As far as the other comments about npm, I just generally have more problems with it than rubygems/bundler and the general OS package managers.


Ah okay, I've never used shrinkwrap across platforms (dev in Linux or Linux VMs, deploy to Linux). That does seem like a PITA.

> As far as the other comments about npm, I just generally have more problems with it than rubygems/bundler and the general OS package managers.

I generally don't :)


Shrinkwrap is ridiculous. I'm expected to go look at every resolved dependency and individually decide whether to update it or not? No thanks; one app at my workplace defines ~50 top-level dependencies, but this balloons to almost 1300 - and this is with npm v3 - after npm install. Ain't nobody got time for that.

Deep nesting is not 'solved'; it just doesn't happen 100% of the time anymore. If you have conflicts, you still have deep trees. I suppose range operators help with this a little, but looking at what gets installed it doesn't seem to help that much; I still have duplicated dependencies.

I was mentally comparing npm to tools like maven, ivy and nuget, all of which are faster but also not interpreted. Not a fair comparison I guess.


> Shrinkwrap is ridiculous. I'm expected to go look at every resolved dependency and individually add them if I want to update or not?

Not sure you're aware of the suggested flow (see here [1]), but it isn't ridiculous. Use 'npm outdated' to see which packages are out-of-date and 'npm update --save' to update a dep (and update the shrinkwrap file).

Keeping track of stale sub-dependencies is a problem in and of itself, but again, that exists with any package manager. (Because you will always need to pin dependencies before you go to prod, right?) So that 'lockfile' will get out of date pretty fast. Node at least has solutions for this that other communities don't [2] (I haven't tried this service).

[1] https://docs.npmjs.com/cli/shrinkwrap#building-shrinkwrapped... [2] https://greenkeeper.io/
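Spelled out, that day-to-day loop looks something like this ('foo' is a placeholder; per the parent, the update also refreshes npm-shrinkwrap.json when one is present):

  $ npm outdated            # list deps that have newer versions available
  $ npm update foo --save   # update one dep and write the new version back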


There was work to make Nix work in Cygwin; it's been paused, but there was partial functionality. I remember one of the problems was Windows symlinks.


Nix runs on OS X as well.


Did not know that! Thanks.


It would be nice if the Go crowd and the Rust crowd, both of which are developing new package managers, had at least a common spec on how to describe dependencies.


This requires a programmer who writes a lot in both languages, who has enough energy left to help with two package managers, and who happened to be in the right places in the right moments to actually join the projects.

It's already very hard to find a programmer that writes a lot in more than one language and cares about packaging/deploying software.


That's a problem for a team. Having written in both languages, I consider package management to be the biggest problem in both.

(And then there's catkin, the build system for the Robot Operating System. That's the build job from hell. ROS is a message-passing environment to which a huge number of existing packages have been adapted. Multiple languages, different underlying libraries, and no central control of versions across packages. It sort of works. Although there are bad days when it breaks the Ubuntu updater.)


I'd like to see how one of the oldest package managers, Perl's CPAN, stacks up.


It's in the list, and does better than basically everything else.


This should probably use Stack for Haskell as well, which was designed to solve various common issues that arise when using cabal as a package manager.


Is stack really a package manager? It seems to me that it's more of a build manager, delegating package management to cabal/stackage.


I see your point, but I'm not sure if it matters. Stack might be built on top of cabal, but I never use cabal directly at all anymore, and stack cleanly addresses a lot of problems that you'll run into if you exclusively use cabal-install for package management. Stack actively installs packages and dependencies on your system.


I'm confused as to why composer (PHP) got an X for "Central package repository"... isn't Packagist a central registry? And other package managers allow packages from source.

I'm not arguing the point just trying to understand the decision.


You're not actually downloading the package from Packagist. It acts only as a metadata repository that points you to a source repository. For the others, I would assume they act as both a metadata and a source repository.


Thanks for the clarification. I'm shocked it is the only one like that.


It takes a bit of work to store versions and manage them as a central repo. Composer will fetch the sources, as a tarball from GitHub if possible, and cache them locally. They are skipping being the central store and deferring to root sources.

They removed a lot of complexity by doing that and I bet more than a few people didn't notice.


Which one allows importing multiple versions of a lib/module/package? (Not primarily, but also, an issue of the package manager.)


Npm allows that.


How do different versions of the same package interact with npm? Since JS is a dynamic language, it seems that there could be problems if a function in one version of a package returned a value which was then fed to a function in a different version.


You typically import it like:

   var foo = require('foo');
The lib's functions are accessed solely via the foo object. There is no interaction when different libs require different versions of a lib, because they access the lib via different objects/modules. This actually is the one aspect of JavaScript I wish other languages would copy.
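A rough sketch of how that looks on disk when two dependencies need incompatible versions of the same lib (lib-a, lib-b, and foo are made-up names): npm nests a private copy under each, and each require('foo') resolves to its own copy.

  node_modules/
    lib-a/node_modules/foo/   # e.g. foo 1.x, visible only to lib-a
    lib-b/node_modules/foo/   # e.g. foo 2.x, visible only to lib-b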


Surely if a package manager provides the ability to store dependencies in VCS, then it's admitting defeat for reproducible builds. I'm thinking of npm here.


Cargo supports vendoring and enforces semver. Not sure why that document disagrees.


The difference is in the central repository: Elm will compare your types to those in previous versions, look at how they diff, and refuse to let you upload a package that makes a backwards-incompatible change without bumping the version.

At least, that's my understanding of what they meant.


Elm also, according to a Changelog podcast I just heard, does not allow releasing below version 1.0.


Wonder if a `cargo check-version-bump` command would be helpful for us as more crates reach 1.0.


> Not sure why that document disagrees

Probably because of this[0]:

> Note that Cargo does not yet support vendoring in a first-class fashion, but this is a hotly desired feature and coming soon!

[0] http://doc.crates.io/faq.html#how-can-cargo-work-offline


Can you point to where Cargo supports vendoring? Curious to read some docs.



