
Just what we need. Another package manager.  - MPSimmons
http://www.standalone-sysadmin.com/blog/2014/03/just-what-we-need-another-package-manager/
======
doesnt_know
I don't know about anyone else, but I treat my system and language specific
package managers _completely_ differently.

My system package manager has root access; it can make or break my machine.
As a user, I am trusting the distribution I use and its package maintainers.
Package vetting, stability testing, and signing are what I expect from my
distribution.

For language-specific package managers, those things would be nice, but
completely unreasonable to expect. There is no trust involved; how can there
be? Most package repositories have no vetting process, and they're publicly
writable.

For python, there is virtualenv. Packages are "installed" in their little
environments with user privileges. For node, I personally have a dir in home
for modules and then ln -s cli tools into ~/bin. Again, all with user
privileges.
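
For concreteness, a minimal sketch of that kind of setup (project and
package names are placeholders):

    # python: per-project environment, user privileges only
    virtualenv ~/envs/myproject
    ~/envs/myproject/bin/pip install some-library

    # node: modules live in $HOME, cli tools symlinked into ~/bin
    npm install --prefix ~ some-cli-tool
    ln -s ~/node_modules/.bin/some-cli-tool ~/bin/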

The crazy thing is, for some people there is no distinction. In fact, I
noticed a trend in the node community: they all give instructions to install
their modules globally. Literally every set of installation instructions I
have seen for node cli tools has said the same thing: install globally.

This is pretty baffling. If you were on a Windows machine, would you download
some random setup file from a public ftp and run it as administrator? I don't
know why an entire community (of power users and developers, no less) seems
to think it's somehow acceptable practice.

~~~
simias
If you're a C or C++ dev your system and language package managers are usually
the same thing.

And even in your case I would argue you don't need several different package
managers, merely several different environments.

It's just a matter of having different databases/install paths depending on
what you're trying to do; you don't need a whole new packager.

That would fit within the unix philosophy of having "one program that does one
thing and does it well" instead of having a hundred package managers, each
with its share of bugs and quirks and unique features.

~~~
sanderjd
Is this actually true? It seems to me that wget+configure+make is the
counterpart to gem/pip/npm/cargo for dependencies under active development,
while everybody uses apt-get/port/brew for system packages.

~~~
sambeau
I believe that what they are trying to say is that, if the correct libraries
aren't installed, the build would fail at the configure step, at which point
you would spend some time with the system package manager getting the
required libraries.

This is all well and good assuming that your system repository has the
library you need (let alone the exact version you need). However, when
developing new software you generally need to link against something newer
than what the system package manager can reasonably provide, and since most
system-level package managers were created before the days of GitHub, you
are stuck in a recursive loop of downloading and configuring.

Whenever I get 3 deep into a make loop like this I start pining for a package
manager to sort it all out for me.

~~~
sanderjd
Yeah, that was pretty much my point. Instead of using gem/pip/npm to download,
build, and install things in an accessible place, you use wget to download
things, configure and make to build them, and make install to install them.
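
In other words, the manual equivalent of a "pip install" is something like
(URL and version made up):

    wget https://example.org/libfoo-1.2.tar.gz     # download
    tar xzf libfoo-1.2.tar.gz && cd libfoo-1.2
    ./configure --prefix=$HOME/.local              # locate dependencies
    make                                           # build
    make install                                   # install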

------
exDM69
It was inevitable that there would be a package manager for Rust. Packaging,
versioning, and distributing programming libraries is still an unsolved
problem.

The requirements for an OS package manager are very different from those for
one used to install libraries into your development environment. At the OS
level, things move relatively slowly and there's less need to get
bleeding-edge versions included.

All of these programming language specific tools have their specific needs
when it comes to version and dependency management. In contrast with OS
package management, there is frequently a need to have several versions of a
particular library installed. Libraries are usually installed in a per-user or
per-project sandbox rather than system wide.

As much as I wish there were one package manager that could serve all these
needs, I don't see it happening in the short term. The situation where we
have half a dozen popular OS package managers and one (or more) package
management systems for each programming language is less than ideal, but
trying to unify all of those would be quite an effort. It would require
getting the right people around the same table, and the end result would be
a compromise of some kind.

I hope this happens, but I don't know who would put in the time and effort,
or what it would take for it to gain traction.

Question to the OP: which package manager would _you_ have picked for Rust?
You point out a lot of problems in the post but don't come up with obvious
solutions.

~~~
MPSimmons
Op here.

I'm not sure what the right solution is.

Each of the major package management solutions has provided ways to talk to
independent repositories. I think that, in some ways, it makes sense for a
language to maintain repositories for each of the major OSes. This isn't
without problems, of course, because then instead of writing software,
you're spending time packaging and testing, when you could just make a ruby
gem and be done with it. Which is what happens now.

Honestly, even a way to conclusively enumerate every installed package, its
version, and its source would be an improvement. That way, I could at least
be reasonably assured of recreating the environment.
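
Even a crude script that dumps each manager's view of the world would get
part of the way there. A sketch, assuming the usual tools are on the PATH:

    # system packages, with versions
    dpkg-query -W -f='${Package} ${Version}\n'    # or: rpm -qa

    # language-level packages
    pip freeze                # python
    gem list --local          # ruby
    npm ls -g --depth=0       # node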

~~~
shadowmint

        I think that, in some ways, it makes a sense for a language to maintain 
        repositories for each of the major OSes.
    

_Specifically_ with regard to rust, what exactly does that mean?

Are you suggesting that if someone writes a rust library they should publish
an .rpm, a .deb, an insert-here for osx, and an installable .exe (or nuget
package) from a centralized repository? Or push them to the upstream providers
for each distribution?

Or something else entirely?

------
notacoward
As I've argued before, language-specific package managers are evil.
Unfortunately, so are system package managers that are too cumbersome or
can't/don't keep up with what's going on in the faster-moving language
communities. The result is projects being packaged by people who don't
actually understand them, while the people who do understand them don't (and
shouldn't need to) understand the minutiae of how to package things for a
particular platform. This in turn causes far too frequent breakage.

What we really need is system package managers that can _cooperate_ with their
language-specific brethren to get info about packages under other managers'
control, direct other managers to install something according to its own
rules/methods, and so on.

"Hey apt-get, please tell me the status of this Ruby package" <<apt-get turns
around and gets the info from gem>>

"Hey yum, please install this Go package" <<yum turns around and tells go to
do it>>

The rules for how to talk to each language-specific package manager shouldn't
even need to be very complicated. The real work would be getting all of them
to use a common format for talking about versions, file lists, dependencies,
etc. It would be worth it, though, to have those dependencies tracked properly
across all languages/formats instead of being lost at each boundary.
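
A sketch of how the dispatch might look; the "pkg-bridge" front-end and its
output format here are entirely hypothetical:

    # hypothetical front-end that hands queries to the native manager
    pkg-bridge status ruby:rails
    # -> runs "gem list rails" under the hood and answers in a common
    #    format: name=rails manager=gem version=... deps=...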

~~~
sanderjd
Ha, I knew this would already be here:
[https://news.ycombinator.com/item?id=7421759](https://news.ycombinator.com/item?id=7421759)

I do like that your idea isn't "build something perfect" but rather "teach the
imperfect things how to talk to each other". Could be very neat. Not sure what
it would look like.

------
chrismorgan
> _Please, accept that a tool someone else wrote that you see as imperfect may
> actually solve the problem, and know that the world may not need another
> solution that does the same thing._

For Rust specifically, can you suggest any?

(I can't think of any, but that doesn't mean there aren't any.)

~~~
matt__rose
rpm, dpkg, a Windows MSI file, a .app bundle on Mac. These will all work
with any language you want; plus, you get the added benefit of standardized
placement, and the USERS of each platform know what to expect when they
install whatever you've decided to throw on their machine.

------
claudius
My favourite solution to this would be an APT extension which allows
installation of binaries into $HOME by unprivileged users and for all these
language-specific things to be turned into simple APT repositories.

I can still have dreams, right?

~~~
dTal
You can probably accomplish that today with fakeroot.

~~~
claudius
At the dpkg level, this mostly works, yes. I can extract to e.g. ~/local, set
$PATH, $LD_LIBRARY_PATH etc. and stuff works.
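
For reference, the dpkg-level part is roughly (package name illustrative):

    # unpack a .deb into $HOME without root
    dpkg -x somepackage.deb ~/local
    export PATH=~/local/usr/bin:$PATH
    export LD_LIBRARY_PATH=~/local/usr/lib:$LD_LIBRARY_PATH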

However, I'd also like to have some dependency resolution (ideally including
already system-installed packages where possible; no need to install libc6
twice) and some more magic. For example, I have to manually adapt the file
/etc/bash_completion if I want it to work out of ~/local; similarly, many
programs only look at /etc/foo and not ~/local/etc/foo.

fakeroot doesn't really help and chroot again requires root privileges.

As I said, dreams :)

------
scrollaway
The path to a standard package manager starts with a standardized protocol for
package management.

A service protocol that can serve a repository of packages over http and
ftp. A client protocol that can keep track of installed packages and can
index, search, and check for updates.

Split package management into layers and only try to standardize bit by bit.
People will never agree on deb vs rpm. People will never agree on using json
vs python vs Makefile vs ruby vs shell vs whatever else - they'll always want
their most familiar language for their package manager, which in domain-
specific packaging means the domain-specific language.

So don't try to standardize those. Standardize the rest. Give us the protocol
that can power all of this and increase interoperability. Separate the
repository layer, the package format (deb, rpm), the packagefile format
(setup.py, Makefile, PKGBUILD) and the package manager (interface: yum, apt-
get, aptitude, pip, npm) from the rest of the protocol.
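
To illustrate the separation, a client speaking such a protocol might look
like this (every name here is made up):

    stdpkg search foo                  # repository layer, over http/ftp
    stdpkg install foo --backend=npm   # native package format underneath
    stdpkg list --all                  # one index across every backend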

Make this potentially usable for things such as browser extension
repositories, android package management, vim bundles and what not.

Someone please work on this. I'd do it, but it just occurred to me I have to
clean my oven.

~~~
nijo108
Yes, I think this is the right approach. Most package managers expose the
same functionality under different names. I don't mind all the different
applications so much as the lack of a standard that they are built to.

------
cmhamill
My knee-jerk reaction to this post is to yell, "oh holy hell, yes!"

That said, and as others in this thread have noted, there are actually two use
cases that need to be satisfied.

1. Here, you've got a base system, and you want to install some piece of
software in order to use it. You want this to be guaranteed, for some
reasonable definition of "guaranteed," to work with your existing base
system.

2. Here, you want to install packages within a segregated environment, and
you want those packages to work with any packages previously installed in
said environment. You're probably attempting to do something like recreating
your deployment environment locally.

It strikes me that there are only two issues preventing the latter from being
subsumed by the former.

1. Not all package management systems provide a means to have multiple
versions of a package/runtime/what-have-you installed at the same time.
Often the capability is there, but packages need to be specially crafted
(unique names, etc.) for it to work. See Debian's various Ruby and Python
runtime packages, for example (sketched below).

2. Not all package managers provide a way to install a set of specific
package versions in a contained environment which is segregated and requires
intention to enter.
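
To make point 1 concrete, Debian bakes the version into the package name so
that installs don't collide. Roughly (exact package names vary by release):

    # coexisting runtimes via uniquely-named packages
    apt-get install python2.7 python3.4
    apt-get install ruby1.9.1 ruby2.0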

(Note that I'm ignoring the "there are different package formats" issue; I
don't think it's a huge barrier in practice, and the package maintainers
should be involved anyway.)

If we could get RPM and YUM to provide those services, then we could remove
the vast majority of this duplication.

Alternatively, if we all agreed that developers should just use Linux
containers as development environments, then all we'd need is upstream to use
native OS packages (which is, really folks, not very hard).

Can we do that pretty please??

------
yeukhon
_So how do I rebuild the compost heap infrastructure that I used to build my
environment?_

This.

Has anyone tried FPM
([https://github.com/jordansissel/fpm](https://github.com/jordansissel/fpm))
yet?

~~~
threedaymonk
Yes, I use FPM quite a lot to make debs for any ad-hoc installations on
Ubuntu. It's a lot easier than the actual Debian packaging tools!

E.g.:
[https://github.com/threedaymonk/packages/blob/master/go.sh](https://github.com/threedaymonk/packages/blob/master/go.sh)
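
For anyone who hasn't tried it, a typical invocation is a one-liner (name,
version, and paths are placeholders):

    # package a directory tree as an installable .deb
    fpm -s dir -t deb -n mytool -v 1.2.3 -C ./build .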

------
callahad
> _Suppose I used entirely off-the-shelf puppet code. Nothing custom, just
> modules I found. And I erased my repo which contains my puppet modules. How
> would I rebuild it and get the same thing that I had before?_

Well, there's Blueprint
([http://devstructure.com/blueprint/](http://devstructure.com/blueprint/))
which purports to reverse engineer servers and spit out Chef / Puppet modules.

But... I'm not sure I understand the question. It seems akin to asking "I
deleted all of my source code, how do I rebuild what I had before?" That's why
we have version control. That's why we have backups.

I also don't understand this rant in the context of Rust and its Cargo package
manager. There are several distinct domains involved, and it seems pretty
reasonable for each to have its own management tool.

Puppet, Chef, Ansible, or Salt for handling machine configuration. Yum or APT
for handling system-level packages and services. Pip, Gem, NPM, or Cargo for
application-level dependencies. Seems pretty reasonable to me.

If you need it to instantiate brand new machines, you can get into VMs
(VirtualBox / VMware) or containers (Docker), each of which can also be
trivially scripted (Vagrantfiles / Dockerfiles).

The whole array of tooling seems more complementary than competitive.

~~~
jzwinck
> Yum or APT for handling system-level packages and services. Pip, Gem, NPM,
> or Cargo for application-level dependencies.

That's the thing: what is the distinction between "system-level" and
"application-level"? Has it really gotten to the point where the only thing we
can use /usr/bin/python for is to run other things in /usr/bin? This may very
well be the case, but it strikes me as slightly strange given that we never
used to be afraid of linking against /usr/lib or running our scripts under
/bin/bash.

What happened in the past 10-15 years that changed the world so much that
whereas before we ran our applications on top of the system, now we seem to
want to run them in individual sandboxes, often inside of other sandboxes
inside of other sandboxes? Was it really so bad to yum install and gcc
-lwhatever without having special paths everywhere for everything?

~~~
callahad
> _what is the distinction between "system-level" and "application-level"?_

I think it mostly comes back to scope or encapsulation. I expect my host to
provide facilities external to my program (databases, sendmail, a webserver),
and I expect to have control over the software libraries internal to my
program.

Perhaps you're also asking why APT or Yum, which are great at managing system-
level package availability and versioning, couldn't be adapted for local use
by applications. I'm not sure there's a good answer for that. Maybe it's just
portability? If BlubLang runs on 9 platforms, then BlubPM needs to run on
those same 9 platforms. It's probably easier to get there if BlubPM is written
in BlubLang.

> _What happened in the past 10-15 years that changed the world so much that
> whereas before we ran our applications on top of the system, now we seem to
> want to run them in individual sandboxes, often inside of other sandboxes
> inside of other sandboxes?_

VMs? I once heard a terrible analogy to the effect that we used to treat
servers like pets: we gave them cute names and we nursed them back to health
when they got sick. Now we treat them like livestock: if one gets sick, you
shoot it and get a new one. There's a hilarious send-up of hand-maintained
systems in the "DevOps is Ruining My Craft" article at
[http://tatiyants.com/devops-is-ruining-my-craft/](http://tatiyants.com/devops-is-ruining-my-craft/)

The more you isolate the applications from the host, the easier it is to
redeploy them.

------
nly
I agree, it does harm.

In principle, if everyone used their distro packages for things like... say,
Wordpress, we wouldn't have as many vulnerable installations on the web
(see: NHS). How many people actually use the wordpress package from their
distro rather than just uploading a private copy to their doc root?

Instead blog admins have to log in to their control panel and perform a
(hopefully working) auto update there, and then have to shell in to upgrade
other important things like PHP.

~~~
_ak
Have you ever seen how Debian et al. package web software? They package it
according to the FHS, which is not only a lot of additional work that
requires testing, but also goes against how e.g. WordPress handles its own
update functionality.

~~~
nly
Looks pretty reasonable to me; they even provide a setup program of some
kind and what are presumably default htaccess rules.

[https://packages.debian.org/sid/all/wordpress/filelist](https://packages.debian.org/sid/all/wordpress/filelist)

for comparison:

[https://www.archlinux.org/packages/community/any/wordpress/](https://www.archlinux.org/packages/community/any/wordpress/)

What's wrong with it?

The fault is that of PHP developers for not seeing their distributables as
system software.

------
ymmy
Fine, don't use it. But you probably will, because after fighting it you
will likely find it makes your life easier.

------
jerf
Package management is a fractal problem; look at it from a high level and it
all looks simple and they all look similar... zoom in and the similarities
start falling away.

It's probably theoretically possible to build a meta-package-manager that
really could make everybody happy, but it's difficult to imagine what project
structure could get us there, and it's also difficult to imagine how to
incrementally develop such a thing in a way that it is immediately useful to
everybody. Without that you've got a barrier to deal with.

If you view an individual language package manager as essentially creating a
container for the code to run in, a combination of Docker plus the Nix package
manager is probably getting pretty close to what everybody needs, but you'd
still have a long row to hoe getting everybody even remotely on board.

~~~
unwind
Captain metaphor here ... Isn't what you describe (a phenomenon which looks
distinctly different at different levels of zoom) the _opposite_ of a fractal?

Fractals are typically described as being self-similar, i.e. they look the
same regardless of the zoom level.

Most things don't, which would seem to mean that package management is like
many other things, more than it is like fractals. Many things that are
different look alike when viewed from far away, since you don't see the
differentiating detail.

~~~
jerf
Two variants of the Mandelbrot set with slightly different settings will
look very similar at the highest level, but have completely different zoom
characteristics.

I have to admit that in hindsight I used a fractal metaphor that assumes
entirely too much time spent fiddling around in Fractint.

------
kibwen
With regard to Rust specifically, you will always have the option of working
like you do with C++: grab some binaries, and either stuff them into a
location on your search path or just pass the "-L" flag to the compiler
telling it where to look when linking. Cargo is not an attempt to create
another walled garden; it's just an optional tool to automate dependency
resolution and the versioning of external libraries.
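
That is, nothing stops you from doing it by hand (paths illustrative):

    # point rustc at a directory of prebuilt libraries when linking
    rustc -L ~/rust/libs main.rs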

That said, I agree that it's a huge pain that so many groups find the
current tools so inadequate that they have to design and implement these
sorts of things from scratch. I haven't looked much at 0install
([http://0install.net/](http://0install.net/)), but let's hope that something
of its ilk saves us from this mess some day.

------
dredmorbius
[http://xkcd.com/927/](http://xkcd.com/927/)

------
bryanlarsen
A huge part of the problem is that many of the language-level packages like
.gem are incompatible with system packages like .deb. Some of this is due to
the package managers and some of it is cultural. Rust is young enough that the
culture is not frozen. Establish the culture that breaking API changes without
increments to the major number is a showstopper bug, and that will help.
Compare that with the Ruby culture, where Rails 2.3 introduced a huge number
of breaking changes vs 2.2. Heck, there were breaking API changes in several
of the 2.3.X releases. No wonder Bundler was created to lock versions down.

------
yxhuvud
I wonder if it would be possible to build a meta-package-manager that works
with all, or at least a lot of, the existing ones. The OP is totally correct
that having lots and lots of different package managers is insane. One major
thing that is currently lacking is managing cross-package-manager
dependencies.

I don't believe this problem can be handled at a centralized point like a
distro; there are too many different versions of too many libraries
involved, so any solution must be decentralized. Nested support for
namespaces would probably also be necessary to scale well.

~~~
mattdeboard
Of course it's possible. I bet it even exists already in some dusty, little-
used repository and someone will post it to HN in the next day or two.

~~~
danielweber
Or two will be posted, with conflicting requirements, and each will get its
own little band of followers who tell people asking questions on
StackOverflow why their plugin isn't working to "just trust MetaSlackOpkg
and it will be okay."

------
simias
IMHO the problem is that there is no standard package manager. Therefore
everybody keeps building custom solutions and fragmenting the ecosystem a
little more.

If there were a standard package manager that wasn't tied to a particular
OS/distribution, then we could all just happily target it instead.

Of course, making a package manager that works on all un*x flavours as well
as Windows (and probably a couple of others), and getting it accepted by the
majority of users/distributions, sounds like an impossible task.

------
lstamour
Having been in a similar place myself, I'd say the solution is to host your
own repos for packages and deployment config using Git. Never rely on the
remote internet to be as consistent as internally-hosted code. Of course
it'd be wonderful if you could do without, but somewhere you'll have to
specify and track version numbers in a text file, and Git's as good a way as
any to track and tag that.

------
chrisfarms
Let's just all agree that Nix is the way forward and start migrating every
Linux distro, language repository, and app to Nix expressions.

 _sigh_ I can dream.

~~~
cwp
Just linux distros? You think too small. :-)

------
ballard
I'd like to see something like Arch, Parabola, or similar, with packages
that are source-only, signed, have minimal dependencies, and just work. Oh,
and push fixes upstream.

(Glibc makes me sad.)

------
pekk
This blog post expresses little understanding of why these other package
managers are used, or even how.

They are not supposed to be used to install things under (say) /usr at all.
That is up to the platform's package manager. If you do that, it's not the
tool but you who is being stupid. If this is a real problem for you, why are
you putting the person who does this into sudoers on production machines - I
assume or anyway hope it is not a real problem for you.

I have any number of command-line tools which depend on recent versions of
things, and are not project-specific. I can install them under a prefix that
is inside my $HOME.
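
For example (tool and package names illustrative):

    pip install --user some-tool                       # lands in ~/.local
    ./configure --prefix=$HOME/.local && make install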

Or when I make a project, I can make a sandbox where the tools get installed.

And I can deploy it in a sandbox, not haphazardly depending on system versions
of everything.

None of these things I am doing to make my life as a developer manageable
are harming your "architecture" (scripts to run package managers are
architecture now?).

There are good use cases for platform packages. But every little thing should
not be done with them. That adds up to a HUGE waste of time, with no real
purpose.

When I make projects, I often need them to run against any number of different
versions of things. But the most minimal requirement is to be able to use an
actually-recent version of a dependency, where the platform you have dictated
thinks recent means "6 years old." Neither of these can I do with the existing
platform packages. So you demand that I make platform packages for every
little tool and dependency I may use. Then, everything I want to use has to
run against exactly one set of versions. So you are actually asking me to make
my own whole-platform upgrades, or never use recent packages. That's not my
job, it's not a sane way for me to do my job, and it actually doesn't benefit
you at all. It reflects a really profound lack of understanding not only of my
job, but actually also of your own job.

Then as an author of a library or tool I made available for free, you are
demanding that to satisfy you I make packages for several different platforms,
each with their own idiosyncrasies and versions of everything, and in fact
that I package half the universe for each platform since most of them do not
have recent versions of anything - or again only ever use 6 year old
dependencies. All so that your job never has to go beyond running the One True
Package Manager, whatever you think it is.

Then after all this unnecessary pain, your "solution" will require me to run
an entire fresh VM for each new thing. Because you have stipulated that I have
to dump all my dependencies for every project into one big sewer, it is
guaranteed that there will be version conflicts. This is amazingly stupid
because the tools already exist to avoid it very easily and people are already
using them. And on your side, all you have to complain about is vague bruises
to your feelings because not everybody is using the One True Package Manager
for every little thing.

There is such a thing as a business listening too much to its system
administrators, when those administrators do not understand software
development at all, and their motivation is to prevent change and have less
to do rather than to facilitate the development of the business.
blast this complaint at the whole community, you are asking for time to stand
still and for nobody to develop anything new in the entire community. ALL
forward progress must halt in order for you to have less work. It's not
happening and it shouldn't happen.

If you are paid to be a system administrator, please perform your function as
a system administrator. Please resolve these problems within the framework of
your own company's division of labor. If you have to build packages for your
production platform, I'm sorry but that's probably part of your job. If you
don't like it, please whine to HR instead of whining to HN.

If you think it will be easy to make every language developer use the same
package manager as every other language, it should be equally easy to make all
the platforms use one package format instead of the arbitrary hell of
different ones they are using now. That is going to be your best bet for
actually making one true package manager. Good luck.

