
Call me old-fashioned, but I find it annoying that every new language nowadays seems to want to pull in its own package manager, redundantly parallel to the perfectly good one my operating system provides. I'm already quite happy installing my software through apt-get. Why oh why do I need pip, and gem, and npm, and CocoaPods, and cargo, and NuGet, and on and on and on...

Using a new language is no longer a matter of cracking open a book, installing a compiler, and firing up vim. You've got to change your whole lifestyle now.




Global installs are the root of a lot of headaches when building software, in the same way that global mutable state is the root of a lot of headaches when developing it. Nix may be the one system package manager that is the exception to this rule (I don't have experience with it, so I can't vouch for it, but I hear great things). However, apt-get is incredibly bad at getting you the dependencies you need for a reproducible build.

I've seen this argument come up a few times now, and I have a hard time reconciling this rosy view of system package managers at the center of a development process with my own experiences. That process was never as easy as "cracking open a book, installing a compiler, and firing up vim" and usually entailed a fair amount of trial and error, environment variable fiddling, and sadness from having to chase down a derelict PPA (or the equivalent) hosting the actual dependency version I need. Then more sadness as the number of projects I work on increases and my dependency versions bifurcate.


Global installs are fine IMO; what's not fine is the lack of versioning. For Node/npm at least, the lack of global installs is a pretty big burden: you end up pulling in 500MB for a trivial application. The same versioning isolation could be provided globally without issues, unless you're tampering with third-party code for some reason.


Yeah, it really depends on the language and build artifact characteristics, but I'm mostly speaking about my experience with Haskell and C. Natively compiled languages tend to have more problems with this because the libraries themselves hold linker references to each other. Versioning isolation alone isn't enough to solve these cases, because you can end up needing multiple copies of the same version of a library to satisfy the needs of all installed packages (because of versioning incompatibilities of transitive dependencies in different libraries). There's an article series about fixing this (and other problems) with Haskell's cabal, but the solution basically involves Nix-style sandboxing [1] [2] (a concrete sketch of the multiple-copies problem follows the links).

[1]: https://www.well-typed.com/blog/2014/09/how-we-might-abolish...

[2]: https://www.well-typed.com/blog/2015/01/how-we-might-abolish...
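To make the multiple-copies point concrete, here's a minimal, self-contained Rust sketch. All module, type, and function names are made up for illustration, and Rust modules merely stand in for separately built copies of the same library version, each compiled against a different version of a shared transitive dependency:

    // Two incompatible versions of a transitive dependency "libY".
    mod liby_v1 {
        pub struct Config(pub u32); // libY 1.x's config type
    }

    mod liby_v2 {
        pub struct Config(pub String); // libY 2.x changed the type incompatibly
    }

    // Two copies of the *same version* of "libX", each built against a
    // different libY. Neither copy can stand in for the other, because the
    // types (and, for native code, the symbols) they were built against differ.
    mod libx_built_against_y1 {
        use super::liby_v1;
        pub fn handle(c: liby_v1::Config) -> u32 { c.0 }
    }

    mod libx_built_against_y2 {
        use super::liby_v2;
        pub fn handle(c: liby_v2::Config) -> usize { c.0.len() }
    }

    fn main() {
        // An application that transitively pulls in both builds has to keep
        // both copies installed side by side.
        let a = libx_built_against_y1::handle(liby_v1::Config(7));
        let b = libx_built_against_y2::handle(liby_v2::Config("seven".into()));
        println!("{a} {b}");
    }

In a natively compiled setting the incompatibility shows up as linker symbols rather than types, but the effect is the same: version isolation alone can't collapse the two copies.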


> Nix may be the one system package manager that is the exception to this rule

Some peers of mine recently experimented with building a Nix buildpack and concluded that it won't work. Nix has hardcoded paths for its core directories, and won't work in a security-constrained container.


Does this not work?

https://nixos.org/wiki/How_to_install_nix_in_home_(on_anothe...

I've not personally tried it. From reading that document, it seems inconvenient but possible.

My understanding is that the hard-coded paths are needed to ensure compatibility of the binary caches, since store paths are often embedded in other artifacts (e.g. RPATH in binaries, shebang line in scripts). You could always run your own cache using another prefix if that's a problem.


I have no idea where you got that idea. Nix has built-in support for building normal, non-privileged Docker containers. The paths are hard-coded, but just like the hard-coded /usr/whatever paths in other Unix software, that's what the chroot in the container is for.

http://lethalman.blogspot.ca/2016/04/cheap-docker-images-wit...


The paths that are hard-coded are non-negotiably incompatible with how Cloud Foundry locks down its containers.

Other language managers don't hard-code and will allow stuff to run from a local directory.


They (other package managers) either have to rely on at least a hardcoded ELF interpreter, or they don't provide binaries and probably don't even handle external dependencies.


> on at least a hardcoded ELF interpreter

Some rely on gcc, but again, it can be run from a local directory. Various dynamic libraries can be looked up by setting environment variables or passing parameters.
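For example, here's a minimal Rust sketch (the ./toolchain paths and binary name are hypothetical): a build script runs a locally unpacked compiler and points the dynamic loader at the libraries that shipped alongside it, without touching any system-wide paths.

    use std::process::Command;

    fn main() -> std::io::Result<()> {
        // Hypothetical layout: a toolchain unpacked into ./toolchain, with its
        // shared libraries in ./toolchain/lib. Rather than relying on a
        // hard-coded install prefix, we point the dynamic loader at the local
        // directory at runtime via an environment variable.
        let status = Command::new("./toolchain/bin/cc")
            .env("LD_LIBRARY_PATH", "./toolchain/lib")
            .arg("--version")
            .status()?;
        println!("toolchain exited with {status}");
        Ok(())
    }

Nothing in that lookup depends on a fixed, absolute prefix, which is the kind of relocation being described here.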

I think Nix is a great idea, but I'm not aware of an argument for hardcoding those root paths which truly holds water. It makes sense on NixOS, sure, but that's because it's an environment with top-to-bottom control.

The rest of us have to be good and share our toys.


Okay, you should have clarified that in your original comment. It's not that Nix is incompatible with unprivileged containers (it includes built-in support!), but only with a specific container system that isn't even the most popular one (which is Docker, AFAIK).

As for why those paths have to be hard-coded: they don't; it's a config variable you can change. When you change it you give up using other people's binaries, so you have to build everything yourself, which is exactly like every other distribution system that relies on hard-coded /usr/whatever paths for binaries. The FHS standard is incompatible with Nix's guarantees, and dropping it is where Nix gets all of its power, so it has to use something different, and it chose to make that path /nix/store/whatever.


Sad face. Nix is the one package manager that looked portable enough and interesting enough to push as a “single” manager for pretty much any system.


Have you had a look at pkgsrc? I know it's not as interesting as Nix or Guix, but I've had very good luck using it on OSX and various Linux distros, as well as its natural habitat of NetBSD.

http://pkgsrc.org/


Because apt-get doesn’t exist on the system I use, and most system packagers can’t/won’t keep up with the rate of change of development, anyway.

CPAN, RubyGems, etc. all exist because system packaging systems are (a) specific to a particular distribution, (b) generally many versions behind the development versions, (c) really hard to get into unless you are already a system packager for a particular distribution, and (d) effectively don’t exist for two of the largest platforms where you will find users.

I’d suggest making some sort of meta-system packaging system that works for all the languages, but xkcd 927 applies (and I wouldn’t use it myself anyway, because it would probably not install Ruby packages correctly and would solve things even worse than NPM already does).


Using a system package manager does not mean that one needs to use a distribution's (Debian, Ubuntu, Red Hat, Gentoo, Arch, etc.) mirror to pull the package; it does mean that you use an OS-provided tool to manage all software on your machine.

It does mean that:

- common libraries are installed once

- conflicts between common libraries are more transparent

- system tools make sure that packages stay up to date


Except that you haven’t solved most of the problems that I pointed out by saying “use a system package manager” (and have introduced several new ones):

a. I maintain ~13 Ruby gems. I can generate new versions and verify their install on any system that I can run Ruby on. Under “use a system package manager”, I now have to maintain at least four unique package management formats (deb, RPM, emerge, Pacman) just for Linux distributions and at least one for the BSDs. Thanks, you’ve just increased my workload at least five-fold, although I no longer have to worry about Windows or macOS users (wait, I’m a macOS user).

b. Sure, I can use a non-distribution mirror (like a PPA), but now you have to have at least six of those mirrors (see point a) and, again, too bad about Windows and macOS users since they obviously aren’t smart enough to use an OS that has an in-built packaging manager.

c. In truth, you haven’t increased my packaging workload five-fold; you’ve increased it much more than that. I’ve followed instructions for making debs and RPMs…and the only thing that works for me is using FPM, because the system package managers are so bloody complex and built for a single particular use-case.

d. You still don’t have a solution for Windows or macOS users, because neither of those OSes has a system package manager, and no, neither ports nor Homebrew counts. RubyGems, pip, cabal, npm, etc. may increase the cognitive load, but they work for people who don’t use Ubuntu Zany Zebra.

You’ve also introduced a new problem:

e. OS package managers (except probably Nix) do not solve the multiple-installed-version problem. RubyGems allows (even encourages) multiple versions of a gem to be installed, and applications (through their Bundler Gemfile) can lock to a particular version. I have packages with 1.x, 2.x, and 3.x releases where all three versions are still in use despite some backwards incompatibilities (mime-types 2.x dropped support for Ruby 1.8, deprecated some methods, and changed the default data format and loading mechanism; 3.x dropped support for Ruby 1.9, removed the deprecated methods, and changed the default data format).

This is not possible in an OS package manager without renaming the installed library in some way. This is mostly fine when you’re installing compiled software…but it’s not even perfect there. I’ve been working with language-specific package managers for a long time, and I’ve been dealing with how distribution package managers break some of the packages that I work with for just as long. I generally prefer the mess we have now to waiting for the perfect system package manager, especially since there are systems which will never get them.

[fixed formatting]


Oh you know, just some silly little use cases like...

* building on not-linux -- you know, where almost all personal computers exist. Gee, wouldn't it be nice if most computers could actually compile your code?

* Having only one build config, and only having to maintain one published package (ugh, needing to maintain deb, rpm, nix, chocolatey, brew, whatever the hell Arch and SUSE use...)

* reproducible builds -- distros are well-known to randomly modify libs, making "foo x.y.z" in yum and apt basically random. It's not even clear to me I'll get the same thing from apt on different days!

* Iterating and collaborating on fledgling projects which rightly wouldn't be accepted into a distro -- or just iterating on a timescale smaller than "months"

* local deps -- using system-global deps (as preferred by most linux package managers) is known to be a huge nightmare for any language package manager that's tried. Hence pyvenv. Hence why so many have thrown up their hands in disgust and just started dockering all the things.

(inb4 someone comes to explain that Nix or whatever probably magically solves all these problems)


Although, the irony still stands that every programming language has decided to implement its own package manager.

As with all things in languages, the world needs to be reimplemented. Which is too bad, but I've also never seen a great alternative. Eventually people want everything they work on to be in the same language, which of course has the advantage that anyone who knows that language can hack on it, but the disadvantage that we always start anew every time a language gets created.


You know, earlier today I was thinking, "wouldn't it be great if we all just used one programming language for everything?" And I tried to imagine which one it could be. And all had different strengths and weaknesses. C is great for raw speed, but ends up with very rigid programs, and is easy to shoot your foot in. Rust solves the second problem but not the first. Ruby solves both, but is much slower. LuaJIT solves all these, but doesn't have a great ecosystem. Etc etc etc. Every programming language exists because someone needed the trade-offs that it made, and no other language before it made them. And the package management situation of a given language is usually tied to the nature of how that language works, meaning you can't just use Homebrew to install Python and Lua and Ruby and Node and Rust packages.


I disagree about Rust (I assume you mean rigidity): you just need to learn how to work with its vtables through &Trait references and/or enums. In terms of speed and foot-shooting, it definitely beats C.
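As a rough illustration of the two dispatch styles mentioned here (the Shape/Circle/Square names are made up; this is a sketch, not anyone's actual code):

    trait Shape {
        fn area(&self) -> f64;
    }

    struct Circle { r: f64 }
    struct Square { s: f64 }

    impl Shape for Circle { fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r } }
    impl Shape for Square { fn area(&self) -> f64 { self.s * self.s } }

    // Dynamic dispatch through a vtable: open to new Shape impls, at the cost
    // of an indirect call.
    fn total_area_dyn(shapes: &[&dyn Shape]) -> f64 {
        shapes.iter().map(|s| s.area()).sum()
    }

    // A closed set of variants as an enum: dispatch becomes a plain match,
    // with no vtable involved.
    enum AnyShape { Circle(Circle), Square(Square) }

    fn area_of(shape: &AnyShape) -> f64 {
        match shape {
            AnyShape::Circle(c) => c.area(),
            AnyShape::Square(s) => s.area(),
        }
    }

    fn main() {
        let c = Circle { r: 1.0 };
        let s = Square { s: 2.0 };
        let shapes: [&dyn Shape; 2] = [&c, &s];
        println!("{}", total_area_dyn(&shapes));
        println!("{}", area_of(&AnyShape::Square(Square { s: 2.0 })));
    }

Which one is the better trade-off mostly comes down to whether the set of cases is open-ended (trait objects) or fixed and exhaustively matchable (enums).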

I think Rust is the greatest programming language of our generation.

But regardless, not everyone will agree with me, so everyone will want their own language. What we need is a general-purpose packaging tool (I like cargo, but apt-get could serve equally well) which could support any language.


Where's the redundancy? When you pull down a project from github, grabbing all the specific versions of libraries it needs in a way which doesn't interfere with the versions of libraries used by other projects, is a different use case than the one system package managers perform, surely?


Yep. I always feel extremely uneasy when I encounter a codebase that asks me to install its language's package manager (npm, I'm looking at you) as root. I just go 'nope'; the only thing that touches my real system is Portage. Even using npm or pip as a non-root user worries me, because there is an entire extra level of validation missing, and it signals to me that developers are not doing due diligence and working with distro maintainers. Yes, I know it is more work, but it is an incredibly important step for quality control. The kids these days seem to think that containerization will solve everything, but it doesn't make these problems go away.


Most language module systems don't need root. Certainly none of the mainstream ones.

Containers don't "solve" everything, but some of the primitives used to build them can be used to severely restrict process and user privileges -- to the level that it's much more secure than a "standard" unix-y shared environment.


Prepare for the hate-train from developers, but I agree with you. It's frustrating because each package manager wants to install its own version, which ends up conflicting with another package manager's version, and who knows where they end up, or in what state. Apt-get rarely fails for me, but pip, gem, and npm fail all the time. And who knows what version of the package they just installed, or where they got it from.

I wish devs would just tell me the libraries I need, and I'll get the packages myself.

Unless you're on the bleeding edge of development, you don't need the newest versions of a lib. And you probably shouldn't be developing on the bleeding edge anyway. There's a reason apt-get is so stable and is used everywhere.

The caveat to all that is: I hate making packages for apt-get. It's a terrible process, and I completely understand why so many devs want to avoid it.


Actually no, it's not a terrible process; it's a misunderstood one[#]. Modern tools (debhelper 7+) allow for a simple process, though one needs to understand how binary packages work.

This is a decent overview on technical details: http://vincent.bernat.im/en/blog/2016-pragmatic-debian-packa...

[#] I assume you meant the friction with the tools, not the fact that the package builder needs to chase down all the dependencies and generally prepare many source tarballs to be included in a repository.


I really dislike this sentiment: "It's not terrible, you just don't understand it!" If everyone is just misunderstanding it, then it's terrible.

I write apt packages all the time. I get the process. And yes, it's still terrible. And when you consider that devs need to know how to build for apt-get, yum, etc., it's understandable, IMO, why they don't want to bother.


> If everyone is just misunderstanding it, then it's terrible.

This is the place where I'm sure we're not talking about the same aspect of building packages.

I assumed that you were talking about Debian specifically, since the only hint about the topic was "apt-get" somewhere earlier. I thought that you were talking about the old, overcomplicated way of manually providing all the necessary targets (binary, build, install, clean) and specifying dh_{any} commands in debian/rules, possibly using the underdocumented CDBS. This is what would be straightened out by the article I mentioned.

Instead, you seem to mean something about packaging at a higher level than that. Care to elaborate on what's terrible about building binary (or DEB) packages? I've been doing it for years and have yet to encounter a more sensible way of deploying software, including web applications (which are one huge, inconsistent, fragile mess).

> [...] when you consider that devs need to know how to build for apt-get, yum, etc., it's understandable, IMO, why they don't want to bother.

What magic do they need to know? There is very little to learn for DEBs and RPMs. One thing (a common one) is to have a simple way to compile the project offline. Another (again, a common one, and not limited to packaging) is to know how your runtime and compiler (if any) let you use libraries. Then you need to know there can be post-install scripts, which are meant to e.g. add a user account for a daemon, and possibly how to write an init script. And there are two build-rules skeletons to know, but those are quite short. That is the only thing which differs in a significant way. Or have I missed something, in your opinion?


I don't know why I'm replying to a troll, but oh well.

Apt is no more standard than pip, gem, or any of the others.

Try maintaining packages for Debian, Red Hat, Arch, Gentoo, OSX, Windows, and a dozen others, for a while, and see how quickly you change your mind.

> You've got to change your whole lifestyle now.

Oh please.



