
Maintainers Matter: The case against upstream packaging - keenerd
http://kmkeen.com/maintainers-matter/
======
davexunit
I agree with a lot of this. GNU/Linux distros are going down a very dangerous
path with Snappy, Docker, Flatpak, Atomic, etc. I think a lot of this is
responding to the fact that traditional systems package managers are quite bad
by today's standards. They are imperative (no atomic transactions), use global
state (/usr), and require root privileges. Snappy and co. take the "fuck it,
I'm out" approach of bundling the world with your application, which is
terrible for user control and security. Instead, I urge folks to check out the
functional package managers GNU Guix[0] and Nix[1], and their accompanying
distributions GuixSD and NixOS. Both Guix and Nix solve the problems of
traditional systems package managers, while adding additional useful features
(like reproducible builds, universal virtualenv, and full-system config
management) and avoiding the massive drawbacks of Snappy and friends.

[0] [https://gnu.org/s/guix](https://gnu.org/s/guix)

[1] [http://nixos.org/](http://nixos.org/)
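
As a rough illustration of the per-user, no-root model Nix (and Guix) share, here is what day-to-day use looks like with the classic `nix-env` CLI (a sketch; `hello` is just a stand-in package name):

```shell
# Install a package into the current user's profile: no root needed, and
# the operation is atomic -- it either fully succeeds or changes nothing.
nix-env -iA nixpkgs.hello

# Every change creates a new profile generation, so a bad upgrade
# can be undone in one step.
nix-env --rollback

# Inspect the generations (roughly, the transaction log) of this profile.
nix-env --list-generations
```

Because installs only add new store paths and flip a profile symlink, there is no mutable global /usr state to corrupt halfway through.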

~~~
mercurial
As a happy nixos user, I have to say that it's conceptually great, but that
right now, the management tools are abysmal. Not only are the command line
utilities unintuitive to use and pretty slow, but there is simply no graphical
frontend, which means that I'm unable to switch family members to it.

~~~
roblabla
Your complaints have been heard, and there are a lot of efforts to address
them :)

There's currently an effort to make the command line utilities easier to
use[0]. There's also an effort to bring PackageKit to NixOS, which should make
a few GUI package managers work[1].

[0]: [https://github.com/NixOS/nix/issues/779](https://github.com/NixOS/nix/issues/779)

[1]: [https://github.com/NixOS/nix/issues/233](https://github.com/NixOS/nix/issues/233)

~~~
mercurial
That's really good to hear, I'm looking forward to it.

------
drewcrawford
I feel like this article is sort of attacking a straw man.

> The promise: Sandboxing makes you immune to bad ISVs. The reality: It
> protects you from some things and not others.

Well yeah, but the same could be said of maintainers. Maintainers let things
through all the time, and sometimes they _cause_ problems (hello, Debian weak
keys).

The reality is a bit complicated, and it boils down to something boring like:
if the maintainers are better than upstream, maintainers are good, and if they
are worse than upstream, then they are bad. But that is both tautological and
vacuous, so it is not especially useful insight.

The real insight is this: there just aren't enough maintainers to go around.
Debian has 1500 contributors and 57,000 packages, which is 38x as many
packages as people. Now maintainers have a lot of tooling to make that more
tractable, and upstream developers have their time split between multiple
packages too, and some packages are important enough to get the attention of
several maintainers fulltime. But do we really believe 1/38th of a person is
judiciously considering how to overrule upstream's decisions, attentively
investigating user bug reports, and so on?

More likely, the maintainers do not even _claim_ to be doing that, because
they have not packaged the software at all. I use lots of unpackaged software;
until recently nobody was packaging Chromium for example. Java packaging is
really unreliable. And so on.

Ultimately, Ubuntu Snap exists because there is a lot of unpackaged software,
and a lot of software packaged poorly. It would be nice if we could wave a
magic wand and get 38x more maintainers, but we cannot.

~~~
jsizz
> sometimes they cause problems (hello, Debian weak keys).

> until recently nobody was packaging Chromium for example.

To name one example, SlackBuilds.org has had Chromium since 2010, although
admittedly that's not so long ago as the Debian weak key cockup, which was
2006.

Maybe your examples could use an upgrade to the latest stable version.

~~~
drewcrawford
Chromium may have been packaged in 2010, but it was _released_ three years
before. The Debian weak keys "cockup" may have been _created_ in 2006, but it
was _discovered_ two years later. Maintainers had long windows of time to add
value. Did they?

I don't mean anything personal by it. If I was maintaining 38 packages on my
nights and weekends I'd do a bad job too.

But examples don't go out of date unless you present some force that takes 30k
unmaintainable packages and turns them into 50k maintainable packages. What
specific advance do you believe improved the art of software maintenance by an
order of magnitude?

But if you want to talk recent examples, we could talk about how nobody's
packaging Swift.

~~~
ploxiln
Chromium only became compatible with Linux in 2010.
[https://googleblog.blogspot.com/2009/12/google-chrome-for-ho...](https://googleblog.blogspot.com/2009/12/google-chrome-for-holidays-mac-linux.html)

Arch Linux packaging files have source history going back to then as well.

Chromium is also an example of a package that took a long time to appear in
other distros, because it's really written with the app mindset. It copies and
incompatibly customizes most of its dependencies. No one would consider that a
great idea for ongoing maintenance and security patching for a typical
project, but of course this is a google product-oriented thing with nearly a
thousand well-above-average-skill developers assigned, so it's not a problem
for them to manage surprisingly huge volumes of code and changes.

Web browsers like Chromium are also a good example of the kind of modern
software which doesn't work well with the debian release model, because it's
"unsupported and practically broken" in like half a year. That's not true for
the "make" utility, or for gimp or inkscape or libreoffice or audacious or
countless other useful applications and tools which are not browsers.

I really don't like the fact that modern browsers are a crazy-train and
there's just no getting off it.

~~~
digi_owl
Yeah, Chromium is a mess. To get and build the source you first have to get
the special toolset they use from their git repo.

------
stormbrew
This seems like an overly optimistic view of the abilities and resources of
distribution package maintainers. The reality is that no matter how hard they
try, it's almost impossible for them to keep up, and even with a non-LTS
Ubuntu release train I _still_ need to use a number of PPAs and other
mechanisms of install to get some things I use on a daily basis.

And that's with both Canonical and Debian's resources involved in making sure
that I get a relatively up to date base set of packages.

Never mind the issue of when your politics don't quite align with those of any
distribution's maintainers as a whole.

(Note: edited to clarify 'bleeding edge ubuntu' as meaning the non-LTS release
train)

~~~
gkya
When you serve out an entire system to a group of diverse users with lots of
installable programs, it's important to ensure that the parts will work
together. Especially in Unix, where namespace clashes in /bin, /lib and /etc
may cause serious problems for system administrators and users. Thus, every
stable release has to ensure that none of the provided software does anything
nasty.

But with bleeding edge, you lose control over this. It may be worth it for
some users, and not for others. That's a trade-off, and I myself am on the
maintained-package-repo, use-only-stable-releases side; thus I use FreeBSD.
Debian and Ubuntu mainly provide stable server OSs, so they're on my side too.
If you want bleeding edge software, you should migrate to an OS that provides
bleeding edge software, though you should know that bleeding edge is called
bleeding for a reason. An OS that has to release a version that'll be stable
for the coming five years simply cannot provide bleeding edge packages.

~~~
rodgerd
> An OS that has to release a version that'll be stable for the coming five
> years simply cannot provide bleeding edge packages.

Wanting software more than every five years is hardly "bleeding edge".

~~~
gkya
If what you have is fine and works, why want new software? And if you want
more frequent updates, you can (a) use a faster-moving branch, which BTW
Debian and Ubuntu have; or (b) switch to an OS that provides you with more
recent software, e.g. Arch Linux, Manjaro, Gentoo (? not that sure about this
last one). Declaring something stable takes time, especially if you also
incorporate new stuff into it. Even after five years of development, Debian
and Ubuntu release patches, because of errata in their releases.

But if you want a secure, stable OS that won't drop the ball, and also rolling
release, well, nobody can do anything about it.

~~~
jcastro
I don't want to switch to ubuntu-devel or sid just to get a new version of
VLC.

I just did a `snap install vlc` and got a recent version of vlc without having
to upgrade my entire operating system.

~~~
gkya
That's OK. Nobody objects to you doing it that way. It's just that there are
acceptable flaws and shortcomings for some use cases, and some OSs care more
about those.

------
kalessin
While I agree with a lot of this as well, I feel like downstream packaging has
a lot of issues.

I'm working on a project (licensed under the GPLv3) and I have decided to be
my own maintainer, i.e., building and distributing packaged versions of my
project all by myself:

1\. my project is young and unknown, I have no choice but to package it myself
(I really like AUR by the way);

2\. it lets me write and maintain step-by-step documentation;

3\. it allows me to enforce a consistent interface for my project across
operating systems, I know my project can be used the exact same way on any
operating system I have packaged it for;

4\. I have had bad experiences with maintainers having no idea what they are
doing and completely breaking my software;

5\. It allows me to build a full release pipeline including both source and
binary releases.

The first point is sort of a big deal: having packages is important to drive
adoption, therefore I need to invest my time in that. I'm not gonna wait for
some random folks to package it a few years down the road (if that ever
happens).

My work is not incompatible with downstream packaging, but I do intend to work
closely with (or directly maintain) any downstream package.

I really feel like this is sort of a catch-22.

~~~
keenerd
I presume you mean lightsd. Nifty stuff. What do you think of Suse's OBS? I
believe you're squarely in its target audience.

And hats off to you for going the extra mile for your project! Your PKGBUILD
even looks quite good. Would you like some assistance in writing a -git
pkgbuild for people who want to try the absolute newest commit?

~~~
kalessin
Thank you! Yes, I was referring to my work on lightsd.

I didn't know about Suse's OBS until this thread, I guess I need to look into
it. I have been building my own CI/CD pipeline on top of Buildbot so far. It's
a PITA, but leaves a lot of room for creativity, hopefully it will be
rewarding down the road.

I don't know about the -git thing for PKGBUILD, what's the
syntax/documentation?

~~~
keenerd
It is briefly mentioned in the manpage,
[https://www.archlinux.org/pacman/PKGBUILD.5.html](https://www.archlinux.org/pacman/PKGBUILD.5.html),
but basically a pkgbuild can use a git repo as a source directly. Here's an
example:

[https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=pacma...](https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=pacman-git)

Compare it to pacman's release-based pkgbuild:

[https://git.archlinux.org/svntogit/packages.git/tree/trunk/P...](https://git.archlinux.org/svntogit/packages.git/tree/trunk/PKGBUILD?h=packages/pacman)

The major differences are the "git" url in the sources, the provides/conflicts
metadata and the dynamic pkgver() function.
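
Put together, a minimal -git PKGBUILD follows this shape (a sketch only; `myproject`, its URL, and the `make` invocations are hypothetical placeholders, and the PKGBUILD(5) manpage is the authoritative reference):

```shell
# Maintainer: Your Name <you@example.com>
pkgname=myproject-git          # hypothetical package; note the -git suffix
pkgver=r0                      # placeholder; overwritten by pkgver() below
pkgrel=1
arch=('x86_64')
url="https://example.com/myproject"
license=('GPL3')
makedepends=('git')
provides=('myproject')         # lets the -git package satisfy dependencies...
conflicts=('myproject')        # ...and prevents installing both variants
source=('git+https://example.com/myproject.git')  # a git URL as the source
sha256sums=('SKIP')            # VCS sources are not checksummed

# Dynamic version: derived from the checked-out commit at build time.
pkgver() {
  cd myproject
  printf 'r%s.%s' "$(git rev-list --count HEAD)" "$(git rev-parse --short HEAD)"
}

build() {
  cd myproject
  make
}

package() {
  cd myproject
  make DESTDIR="$pkgdir" install
}
```

The three differences named above are visible here: the `git+` source URL, the provides/conflicts pair, and the pkgver() function.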

~~~
kalessin
Nice, maybe I'll consider this for dogfooding, thank you for the explanations!

------
guardiangod
Preface:

1\. I use Arch Linux for my desktop and I like it.

2\. I hate docker's repository systems (and other upstream packaging)

But this article's writer sure is putting the maintainers on a pedestal.

_More fundamentally, the maintainer is the primary line of defence and
interaction between users and developers. Maintainers shield developers from
uninformed users, allowing the devs to write software with less support
overhead. Non-bugs are caught and filtered out. A low-quality bug reported to
the distribution's tracker often becomes a good bug when the maintainer
reports it upstream._

_..._

_Maintainers also shield users from developers, offering a layer of quality
control...choosing a subset of the (subjectively) best software FOSS has to
offer. Maintainers will disable features that they feel act in bad faith.
Maintainers' greatest power is the ability to outright say "This is not good
enough for our users"_

Really? Then what about this poor sap who had the misfortune of releasing one
buggy minor version, and is now forced to support that for the next 5 years
even though the bug was fixed in the next version?

[https://news.ycombinator.com/item?id=11452432](https://news.ycombinator.com/item?id=11452432)

The maintainers actively tried to sabotage his attempts to tell people to
upgrade to a new version in the name of 'consistency.'

Sure, consistency is great, but only if we (the maintainers) don't have to do
any work (such as backporting). We offload all the support to the developer
and have him do support work for us. Then when he complains we call him a
whiner and laugh him off...

~~~
tremon
_The maintainers actively tried to sabotage his attempts to tell people to
upgrade_

Ah, poor maintainer. Releasing software with a self-destruct timer is not an
acceptable way to "tell people to upgrade".

~~~
coldtea
No, but it's an effective way to say "f**k you" to stubborn/lazy/backwards
maintainers and/or distros who won't update.

~~~
cyphar
Yes, those pesky stable distributions that prioritise bug fixes are clearly to
blame. While it might be annoying for _you_ (and me to some extent), some
people actually need to have a stable system. If I was installing a GNU/Linux
distro for a family member I would pick Debian or openSUSE Leap over a more
rapidly updating distribution -- I just got burned by such a distro yesterday
and I'm still reinstalling my machine.

~~~
coldtea
> _While it might be annoying for you (and me to some extent), some people
> actually need to have a stable system._

If desktop packages like XScreensaver are what make your system unstable,
then you have worse problems...

------
tiziano88
Personally I think Nix ([https://nixos.org/nix/](https://nixos.org/nix/)) does
this right: packages are easy to install and upgrade for users, and developers
and maintainers can easily create and update them. And if someone does not
like the default collection of packages, it is as easy as starting one from
scratch, or forking the existing one on GitHub, but at least it doesn't
require reinventing the entire system.

~~~
digi_owl
Pretty much.

The base problem is not ISVs or maintainers, but that the rigidity of the
package managers forces maintainers to either use one version of a lib for
the duration of the distro version, or play musical chairs with the package
names to get around conflicts.

Either option makes it hard for third parties to produce packages.

The likes of Nix allow multiple versions of a package to exist side by side,
and ensure that each program gets the version it wants.

This allows a third party package to request a newer version of a lib than the
distro provides, as it will not conflict with the distro provided packages
while retaining the naming scheme.
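
The mechanism behind this is content-addressed store paths (a sketch; the hashes below are illustrative, and `nix-shell` requires Nix to be installed):

```shell
# Two programs can link against different versions of the same library
# because each version lives at its own hash-addressed store path, e.g.:
#   /nix/store/abc123...-openssl-1.0.2h/lib/libssl.so
#   /nix/store/def456...-openssl-1.1.0/lib/libssl.so
# Each binary's RPATH points at the exact store path it was built against,
# so there is no shared /usr/lib for two packages to fight over.

# Drop into an ad-hoc environment with a specific package available:
nix-shell -p openssl --run 'openssl version'
```

A third-party package simply references the newer store path it needs; the distro's own packages never see it.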

------
cyphar
I cannot agree with this enough. I help package the container tools for
openSUSE and SLE (I'm also an upstream maintainer of one of the tools, so I
see both sides of the picture). But I personally am against ISVs making
packages (especially "universal" ones) -- if they want to provide a container
deployment method then provide the Dockerfile so people can build and curate
it themselves.

It's really frustrating when upstream turns around and says "actually, we
provide packages -- not you". In fact, we've had cases where a certain project
suggested that a reasonable compromise would be that we _change the name of
the package in our distribution to "reduce confusion to users"_. Wat.

However, I do understand their point somewhat. If you have users pinging your
issue tracker and they're using distro-supplied packages, then you are getting
a lot of noise. But the right way of reporting bugs is to complain to your
distribution (we actually live and breathe the distro you're using!) and I
feel like upstream projects should make this communication model more clear
rather than complaining to distros about users submitting bugs in the wrong
place.

EDIT: Thanks for the SUSE shoutout! OBS is pretty cool and the people
developing it are pretty cool people too.

------
peterwwillis
Yes, upstream vendors would love to have total control over how any user uses
their software. But there is no guarantee that vendors won't ship a really
shitty install or uninstall script which does something like this:

    
    
      #!/bin/sh
      DIR=/home/$USER/ftp
      rm -rf $DIR
      rm -rf /home/guest
      adduser guest
    

.... assuming that /bin/sh is symlinked to bash (it isn't always), assuming
$USER exists (it doesn't always), assuming the user 'guest' doesn't already
exist with important files to keep... you really don't want to know how many
ISVs actually have install scripts like this.
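
For contrast, a more defensive version of that script might look like this (a sketch, not an endorsement of doing user management in install scripts at all; note that `adduser` itself is a Debian-ism, which is exactly the kind of portability trap being described):

```shell
#!/bin/sh
# Abort on any error and on use of unset variables; stick to POSIX sh
# features so the script does not silently depend on /bin/sh being bash.
set -eu

# Fail loudly if USER is unset instead of expanding to /home//ftp.
: "${USER:?USER is not set}"

DIR="/home/$USER/ftp"

# Quote expansions and use -- so an odd value can't be parsed as rm options.
rm -rf -- "$DIR"

# Only create the guest account if it doesn't already exist, and never
# delete an existing home directory that may hold someone's files.
if ! id guest >/dev/null 2>&1; then
    adduser guest
fi
```

Even this "fixed" version illustrates the maintainer's point: every assumption the ISV script makes is something a distro packager would normally catch in review.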

Users can still install ISV-packaged software in distros. But ISV packages
typically hurt the system: they remove vital dependency tracking information,
replace files that other packages provide, go untracked for security patches,
or get installed into non-standard paths.

\--

I have been a package maintainer for 6.. no wait 8... no, 9 Linux
distributions. I have created thousands of packages (easily 10,000). I have
maintained forked distributions for corporate use. I have also developed
dozens of individual software projects which all were designed to be packaged
by maintainers, and were adopted into Linux distributions _by somebody else_.

And I'm telling you: anything other than maintainers packaging software is a
fucking nightmare for everyone involved. We have a good system! Devs, make
your software so it's easy to package. Maintainers, package it for your
particular distro. Users, just use what's in the distro, OR follow the
upstream's instructions on installing _at your peril_.

------
tacone
Enough of this. People have the right to install the latest version of
software easily without having to upgrade the whole server at once. ISVs have
the right to package their own software.

If MongoDB is a commodity I can gladly stick to the distro version. If I'm
developing something more cutting-edge, I want and will install the latest
version from the ISV repo.

~~~
insanebits
Yes they can; there is nothing preventing them from creating the package. But
it shouldn't be the primary way of installing packages. If you really need
bleeding edge for development you can build it yourself.

~~~
mtgx
This type of attitude completely ignores regular desktop users who may one day
want to use Linux over Windows.

~~~
majewsky
A regular desktop user does not want bleeding-edge software; he will do just
fine with last year's VLC. (A notable exception being the web browser.)

The priority for a regular desktop user is a nicely integrated system, and
that is something that ISVs cannot deliver because they don't know what to
integrate with.

~~~
digi_owl
Except for the security stuff, I think many would be quite happy with last
year's Firefox, Chrome or even IE.

------
insanebits
That's a really good case against upstream packaging. Do we really want yet
another Android story? I trust a maintainer who volunteers their time to
ensure the package complies with the distribution's guidelines.

With universal packages we would have universal config file paths. For
example: apache2 on Debian-based systems stores documents in /var/www, while
something like Arch stores them at /srv/httpd. A similar thing happens with
`sites-available` across distros. Different distributions do things
differently, which users expect, and putting upstream developers in charge of
that will make a huge mess, because it's more convenient for them to store
configuration at one specific location across every distribution. Yes, that
makes it a pain to write deployment scripts across different distros, but IMO
consistency is key to a good distribution.

Another key point here is testing: different distributions aim for different
things. Debian aims for maximum stability and uses quite dated packages that
are known to work, whereas other distributions are always on the bleeding
edge. Will upstream backport security patches for old packages? I really
doubt they would like to maintain packages for years to come.

I'm not saying it would be all bad; there are certainly great developers who
would benefit from universal packaging, but that will not always be the case.

------
rodgerd
Linux has never had crap bundled? I guess the author forgot Canonical's
"shopping lens" debacle, which was fully as sleazy as any Ask Toolbar bundling
nonsense.

~~~
johnny22
that's not the kind of bundling people are talking about.

~~~
rodgerd
Did you even read the article? That's _exactly_ the kind of thing the author
claims distribution maintainers protect users from.

~~~
vacri
"This is why Linux doesn't have spyware, doesn't come with browser toolbars,
doesn't bundle limited trials, doesn't nag you to purchase and doesn't pummel
you with advertising."

The shopping lens was openly added as a marketed feature, and could be turned
off with one action. You could, I suppose, class it as spyware, but it wasn't
surreptitiously sideloaded - it was touted as a default feature. It didn't
alter the browser, wasn't limited run, didn't nag you, and while I didn't use
it much, it could hardly be called 'pummelling with advertising'.

It was an openly-acknowledged experiment that failed, and was removed.

------
wmf
In other news: [https://www.nginx.com/blog/supporting-http2-google-chrome-us...](https://www.nginx.com/blog/supporting-http2-google-chrome-users/)

~~~
kbenson
Note: It may be worth providing a bit more info about why you are linking the
article. It's a relevant example of the counterpoint playing out in real-time,
which people may miss from the simple preface "In other news:"

 _To ensure the operating system is stable and reliable, OS distributors do
not make major updates to packages such as OpenSSL during the lifetime of each
release._

Not entirely true. RHEL backports bug and security fixes continuously, and
select feature enhancements are rolled out with point releases. For example,
RHEL 7.2 updated mod_nss, and enabled TLS v1.1 and v1.2 in NSS (I'm not sure
how that interacts with Nginx, if at all). I imagine RHEL 7.3 should be coming
out any time now. Maybe it will help with this.

1: [https://access.redhat.com/documentation/en-US/Red_Hat_Enterp...](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.2_Release_Notes/servers_and_services.html)

------
dmacvicar
I don't see the contradiction between Flatpak/xdg-app and keeping packages
downstream. [http://openbuildservice.org/](http://openbuildservice.org/) could
end up building xdg-app images in addition to the rpm, deb, Arch and disk
images it already builds. Why not?

That would keep security, reviews & processes, while stopping the nonsense of
requiring root, global state & single versions that rpm/deb enforce on the
user.

------
cathartes
The author's point about "dogfooding" jumped out at me more than he probably
intended, but that's likely because it's an aspect of Slackware's SlackBuilds
repository (SBo) that I can't say enough about.

Quite often, SBos are maintained by folks who use (even depend on) the
software they are responsible for. As such, well-tested and sanely configured
software tends to be highly valued by maintainers, and package
responsibilities will commonly change hands when a maintainer no longer uses
what (s)he's responsible for.

The mechanics of an SBo are also dead simple, making package review and
modification by the user straightforward. This means rolling your own package
is easy, enabling you to become a "dogfooding" maintainer yourself for any
package not already in the repo. Few distributions allow ordinary users to
close the official package loop with anywhere near the ease--both technically
and bureaucratically--that SBos do.

SBos are not the Perfect Package model by any means. But in terms of
"dogfooding", ISVs and big distributions (especially Canonical) could stand to
learn a few things from a relatively unstructured community of volunteers with
a taste for the KISS principle.

------
lamontcg
Unless I'm misunderstanding, I think Chef has been doing this for years with
omnibus packages.

The reason we did this is that we had to support RHEL5/6 and old Solaris and
AIX distros.

We struggled for several years (and I personally struggled for at least a
year) with supporting RHEL5 and it simply failed. That distro came with
ruby-1.8.5 and upgrading it to even 1.8.7 was a major PITA. The ruby-1.9.2 in
RHEL6 was also buggy and crashy and segfaulty and we didn't remotely have the
resources to attempt to debug it and patch it for RedHat.

The reality is that we just wouldn't have been able to support those old
distros at all.

I think there's a myth here that the human resources to support packaging and
debugging apps against distro-supplied libraries and supporting packages
magically appear out of thin air. They do not. As those supporting libraries
age, it consumes more and more time to support "back porting" onto aging
versions of frameworks, which cannot be upgraded in the distro because of
compatibility with other apps. That creates a large amount of serious shit
work (seriously, nobody enjoys doing that) and open source doesn't make it
magically appear.

I also think there's a misconception that users have chosen to use those
versions of software. Largely they've chosen them because they need
compatibility with a large body of their own custom in-house software, and
their internal porting effort is costly in time and labor. Largely they don't
care if you install an upgraded version of supporting libraries so long as it
doesn't impact any other running software and it's contained.

For another example of software that has been doing this for years, check out
vagrant.

Admittedly, there are issues with this, since OpenSSL vulnerabilities require
the release of new software versions to upgrade the embedded copy.

~~~
digi_owl
I maintain that the best of both worlds is a scheme like Nix or GoboLinux. It
allows multiple versions of libs to exist, without having each program come
packed with half a distro.

------
legulere
The case against distro packaging:

\- Desktop Linux has a small user base. Further splitting it up with different
packages for each distribution makes it even more of a non-target.

\- You can only cleanly install software that is packaged by your distributor.
No distributor packages software that costs money. Many distributors try to
prevent you from installing software with licenses they don't approve of.

\- If your user is using a stable distribution they will have old software.
The distribution might only push bug fixes if they are security related. Users
will report bugs that are already fixed.

\- Maintainers often have no idea what they're doing, because they aren't
working on the project, and there are rarely other developers who check their
patches. In the worst case this leads to something like the Debian random
number generator bug.

Distribution packaging for user-facing programs has large drawbacks with
marginal benefits, most of which break down to not-invented-here syndrome.

------
sternfall
All true. And yet, Steam is the most used package manager on the Linux
desktop today, and the exact opposite on every single point. If distros don't
adapt they will become just kernel maintainers.

------
paulryanrogers
Anyone tried 0install?

------
eldude
Not to get unnecessarily political, but there is almost certainly an analogy
to be drawn here to American federalism and the state v. federal government
interplay on behalf of their citizens.

Taking the analogy further, the App Store trend is analogous to the 17th
amendment[1] superseding the states' right to appoint senators. The relevant
implication being that tribal power inevitably gravitates toward
centralization on behalf of the "user."

[1]
[https://en.m.wikipedia.org/wiki/Seventeenth_Amendment_to_the...](https://en.m.wikipedia.org/wiki/Seventeenth_Amendment_to_the_United_States_Constitution)

