Winding down my Debian involvement (stapelberg.ch)
469 points by secure on March 10, 2019 | hide | past | favorite | 227 comments

I've been using Debian for over 10 years and I still love it as a user. But as a developer, I find it extremely frustrating. I've several times attempted to figure out how to package my open-source projects [1] [2] for Debian but the process is a nightmare. As I understand it, I first have to find someone with appropriate privileges to mentor me. I should be able to just submit a potential package for review.

Then there is a ton of documentation on creating packages, but it's unclear which 300-page guide is the right one to use. And which set of packaging tools should I use? In the end, the time investment required to get started has kept me from contributing.

[1] https://camotics.org/

[2] https://foldingathome.org/
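For what it's worth, the answer to "which tools" nowadays is usually debhelper, and a complete debian/rules under it is tiny. A sketch (not official guidance; see the debhelper docs for the details):

```make
#!/usr/bin/make -f
# A complete debian/rules with modern debhelper: the catch-all target
# delegates every build step (configure, build, install, ...) to dh.
%:
	dh $@
```

The rest of the packaging then lives in debian/control (metadata and dependencies) and debian/changelog.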

Agreed, I've tried and failed to make Debian packages, and have no idea how to proceed.

In contrast, with Gentoo and their documentation, it was straightforward for me to make my own additional, separate repo, look at other ebuilds to see how packaging works, and have everything just work.


I submit ebuilds to Gentoo itself if I think others would benefit.

As far as I can tell, packaging for Arch is straightforward as well.

Shameless plug: https://github.com/hoffa/debpack

It won't pass all lintian tests due to the ridiculous requirements and useless ceremony "correct" Debian packages need. Still working on it.

If we're plugging tools we use to make Debian packages without dealing with the insanity of Debian tooling, I use debbuild[1] with OBS[2] for the bulk of my packaging at work for Debian/Ubuntu systems.

As it turns out, it doesn't take that much effort to make packages that comply with Fedora/openSUSE Packaging Guidelines and Debian Policy with this tool, and it has drastically simplified my ability to maintain software across distributions in a way that still cleanly integrates with the distribution platform.

This is how the Spacewalk Debian/Ubuntu client packages are built[3][4], among many other things. There are also a few other examples out in the wild[5][6][7].

At least for the stuff I make for myself, I haven't made packages that fully pass lintian, but it's not terribly difficult to do if you want to.

[1]: https://github.com/ascherer/debbuild

[2]: https://openbuildservice.org/

[3]: https://gitlab.com/datto/engineering/spacewalk-debian-client...

[4]: https://build.opensuse.org/project/show/systemsmanagement:sp...

[5]: https://pagure.io/python36-flatpkg-deb

[6]: https://pagure.io/rpmdevtools-deb

[7]: https://build.opensuse.org/package/view_file/Virtualization:...

Another shameless plug:


It doesn't exactly simplify the Debian packaging in my case as it leaves the packaging mostly exposed.

But it provides far more convenient and easy-to-remember commands (make rpm, make deb, make deb_repo, make rpm_repo, etc.) compared to the easy-to-forget options of rpmbuild/mock and dpkg-buildpackage/cowbuilder.

It also glues everything together, from retrieving the upstream source to building the final repository, ready to be rsynced to a public server.
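The underlying invocations such targets hide are exactly the easy-to-forget part. A hypothetical sketch of the idea (target names and variables invented for illustration; the flags shown are the common defaults for each tool):

```make
# Hypothetical convenience targets wrapping the raw build tools.
NAME := mypkg

rpm:
	# build binary and source rpms from the spec file
	rpmbuild -ba $(NAME).spec

deb:
	# build unsigned Debian packages from the current source tree
	dpkg-buildpackage -us -uc
```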

NixOS was also easy to get started with, I really like it !

NixOS (Nixpkgs actually) is indeed incredibly simple to contribute to.

It's a monorepo, and packages are declarative. So for most use cases, it's just 5-10 LOC describing the source location, dependencies, and build process.

After that, if it builds on your local copy of Nixpkgs, it will build on the same Nixpkgs commit as everything is purely functional. A simple PR is all you need.

For updates, there's super low burden too. Thanks to packages being declarative, everything is quite explicit. So a bot can check for source updates upstream, update your package and rebuild it. All it requires is simple maintainer approval.

Yeah, the packaging part of NixOS/Nixpkgs is very pleasant. You write a "config" in the Nix language (example[1] of vorbis-tools, it's <40 LOC), open a PR on GitHub, and it'll be merged provided you follow the code-review feedback. As a user you'd later download the built binary from the NixOS cache so you don't have to build it yourself.

[1]: https://github.com/NixOS/nixpkgs/blob/fce8f26af6ef8209c7d282...
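For anyone who hasn't seen one, such a Nix expression looks roughly like this (a sketch with an invented package name and URL and a placeholder hash, not the actual vorbis-tools file):

```nix
{ lib, stdenv, fetchurl }:

stdenv.mkDerivation rec {
  pname = "example";
  version = "1.0.0";

  # Where to get the source; the hash pins it for reproducibility.
  src = fetchurl {
    url = "https://example.org/example-${version}.tar.gz";
    sha256 = lib.fakeSha256; # placeholder; nix reports the real hash on first build
  };

  meta = with lib; {
    description = "An example package";
    homepage = "https://example.org";
    license = licenses.mit;
  };
}
```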

Agreed, Gentoo and Nixpkgs both have really helpful people onboarding new packagers.

Probably won't help with Debian official packages but I've found success with these materials.


And then, as a quality-of-life enhancement, push the package building off to FPM.


Personal Repo


> Community: If a newbie has a bad time, it's a bug.

This is the root of the Debian newbie community experience. FPM seems to get it at least.

And FreeBSD ports (which, if I remember correctly, inspired a lot of things in the Gentoo packaging system).

> there is a ton of documentation on creating packages but which 300 page guide is the right one to use is unclear

So true. There seem to be several competing packaging approaches, each claiming the others are outdated.

I've been wanting to package a few of my open-source projects as well for almost a year, and out of frustration, I've ended up building my .deb packages manually and hosting them in my own apt repository. In the meantime, I've published a few packages on PPAs (for Ubuntu) and on the AUR (for Arch Linux), and it's been as easy as it could have been.

Given I use Debian exclusively on my servers, it feels like a missed opportunity to me (not that I think my own projects are that valuable to others, but I'd package a lot more stuff if it were clear how to do it well).

I like the Arch/AUR approach too as you don't even have to host your own repo. With a good AUR helper that system is quite convenient for maintainers as well as users.

The problem with the AUR is its semi-official status takes the pressure off the 'community' binary repositories, while not being up to the same standards - many packages are out of date or don't build, and the complete lack of gatekeeping renders the user vulnerable to malicious packages. Note that "AUR helpers" are not officially condoned by Arch Linux, despite the system being unusable without them - officially you're supposed to examine all the PKGBUILDs of all dependencies manually to make sure they're safe.

I prefer the Void Linux approach - every "template" (equivalent to Arch's PKGBUILD) is kept in a single, monolithic git repository, and the barrier to contribution is kept low; just submit a pull request, and a core developer will review and accept it. This way all packages have binary builds and all packages have had at least nominal review, yet the repository is surprisingly broad. And if you need something that isn't in the repository, adapting an AUR PKGBUILD is trivial.

> Then there is a ton of documentation on creating packages but which 300 page guide is the right one to use is unclear. And which set of packaging tools should I use?

I've run into the same problems, though I've only wanted to build packages for my own use, or company-internal use.

I think I've understood now how packaging is supposed to work, and have written a guide, shameless plug: http://leanpub.com/debian/c/hn (coupon link to get a free copy; feedback would be appreciated).

> feedback would be appreciated

I'm just looking through it; for me the layout of the epub version is broken (at least viewing it in Adobe Digital Editions), apparently every time there's a yellow block with code.

"1.1 Target Audience and Prerequisites" is the first example but I keep finding more.

Thanks, I'll have to bring that to leanpub's attention, the PDF looks fine.

https://wiki.debian.org/Packaging points to the official documentation.

Thanks for this!

It will soon be 10 years since my first upload. One thing that is definitely noticeable is that while creating a package is still by no means trivial, it has become much, much easier.

10 years ago, you had a handful of very idiosyncratic build helpers to help you manage your package.

Today, it's effectively just one [1], debhelper, and it has become trivial to build packages with it. It's a very powerful framework that can be overridden to do basically anything you like, but usually automagically does the right thing. Gone are the days when you had to write an unwieldy debian/rules file.

3 years ago, you had X version control systems supported: git, svn, mercurial, cvs, and who knows what else. Sounds unreasonable, right? Well, when you rely on volunteer work, you need to live with the fact that volunteers will choose the particular tooling they like to work with.

Today, you have Salsa which is a GitLab instance, and people apparently just learned to deal with it, and the world hasn't come tumbling down.

I absolutely agree with the author that some things are just wrong within Debian. However, I believe these are inherent to the nature of an organization comprised entirely of volunteers; achieving consensus becomes really hard because nobody wants to be told how to spend their free time.

On the other hand, being involved in Debian has also been an incredibly formative experience. I believe Debian does some things right that others still fail at or trail after.

[1] https://anarc.at/blog/2019-02-05-debian-build-systems/

[2] https://salsa.debian.org

The Debian tooling can be really fantastic at times. If the upstream is clean (a project with a proper setup.py/Makefile/CMake/Makefile.PL/etc.), you only have to declare the metadata and the build and runtime dependencies, and you are set.

But at the same time, I'm not a big fan of other aspects. For example, the fact that the packaging is split between at least 3 or 4 files, and generally something close to a dozen (at least control, rules, changelog, copyright, and maybe .install, .conffiles, .postinst, .preinst, etc.), is a bit complex at first, especially compared to rpm where everything is centralized in one .spec file.

I'm also not a big fan of how Debian packages handle permissions other than root:root. dpkg-statoverride in the postinst doesn't feel natural; the newcomer will typically use chown, which is wrong. rpm is better in that regard: you explicitly state the permissions in the %files section of the spec file.
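To illustrate the contrast (the package name and path here are hypothetical): in Debian the ownership is applied imperatively in a maintainer script, while in an RPM spec it is declared alongside the file.

```sh
# Fragment of a hypothetical debian/postinst: register a stat override
# so dpkg preserves the ownership and mode across upgrades.
if ! dpkg-statoverride --list /var/lib/foo >/dev/null 2>&1; then
    dpkg-statoverride --update --add foo foo 0750 /var/lib/foo
fi

# The RPM equivalent is a single line in the spec's %files section:
#   %attr(0750, foo, foo) %dir /var/lib/foo
```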

And I'm also not a big fan of Debian renaming upstream archives (<pkg>_<ver>.orig.tar.gz), IMHO, upstream should be taken as it is, including the file name. But that's a minor one and also a very personal opinion.

I've mixed opinions about packages with a lot of scripting to handle a kind of default but slightly customized configuration. I know it can be disabled with "DEBIAN_FRONTEND=noninteractive", but it feels like a lot of effort and an unnecessary source of complexity and bugs for a distribution primarily used on servers, where such helpers are almost useless, especially when using tools like puppet/chef/ansible/saltstack.

The tooling also feels a bit like an aggregation of scripts piled on top of each other. For example, when you want to build in a clean, throwaway environment (aka a chroot), you have pbuilder, cowbuilder, qemubuilder, whalebuilder... CentOS/Fedora has mock. Granted, it's less advanced (apart maybe from mockchain), but at least it's the obvious choice.

For a lot of things, Debian packaging feels like Perl: there are 10 ways to do it. But 9 of them are wrong, and you have to go through dozens of pages of policy with no search box (https://www.debian.org/doc/debian-policy/index.html) to figure out the right way. And it's also kind of difficult to learn from examples (by downloading source packages to see how it's done), since it's not explicit where each part fits, at least for a newcomer.

I'm curious, in what way do you feel it's less advanced? Mock supports leveraging qemu-user-static for doing foreign architectures as of Mock 1.4.11, which I think was the only major feature that the Debian tools had for a while that Mock lacked.

Is there something else missing?

Mock doesn't have COW (copy on write); cowbuilder, by contrast, creates a base chroot, then uses copy on write to enrich it per build. This speeds things up considerably.

Also, cowbuilder runs fine with several instances running at the same time; mock is a bit sketchier in that regard. Basically, you can only build one package at a time. It's a bit of an issue for my personal scripts (https://github.com/kakwa/amkecpak): with them I can do `make rpm -j 42` to build 42 packages at the same time, but with mock/mockchain, the chroot creation has some concurrency issues.

In fairness, I used the rather old mock version packaged by Debian, I've not looked at 1.4, it seems that some of these issues are addressed now.

I’m surprised and sad that that’s still the case. I was the creator and maintainer for an open source application some 10+ years ago, and figuring out how to package it properly for Debian was essentially impossible.

It eventually got packaged by someone who had been involved in the Debian community for some time.

I had this exact same problem and also gave up. It was absurd. The amount of friction in contributing to Debian is enormous.

And, at the risk of upsetting people.. it is time to move away from mailing lists, especially for submitting bugs and managing packages.

Debian hasn't used mailing lists for reporting bugs since the 90s: https://en.wikipedia.org/wiki/Debbugs

Debbugs is basically a mailing list.

The original problem with reporting bugs to a mailing list was mails for many bugs were mixed and you couldn't easily get the state of individual bugs. Debbugs, by automatically creating a new mailing list for individual bugs, does solve this problem admirably.

The new problem is that new people hate emails. I think the merit of this problem is debatable.

Mailing lists have had threads for decades.

You can subscribe to individual bugs in Debbugs, you can't subscribe to threads. Threads also don't have short ID (bug 123456) and don't have metadata like whether the bug is closed.

Broken email clients and careless people break those all the time though...

In the long run, everyone who tolerates email is dead.

But do they die before everything else?

My experience trying to update a package was similar. I use Ubuntu, so I had the extra friction of using their Launchpad service first, then being told to go to the Debian lists instead.

It should be easier to make a Guix package and submit it. Then you could run Guix on top of Debian, or use Guix functionality to make a container. Either a tarball or a docker container if that's more your style. https://www.gnu.org/software/guix/blog/2018/tarballs-the-ult...

> I've been using Debian for over 10 years and I still love it as a user.

I've been using it for even longer than that, but I no longer love it. It has declined to being "just OK" in my eyes, which is why I've begun the effort to shift all of my machines to something better.

Why not use CMake/CPack? It can generate valid Debian packages, and you don't need a C++ project; you can package whatever you want.

I hadn't heard of CPack; it seems like there aren't that many projects using it. Do you have any examples? Do the packages pass lintian checks?

In my experience, better tooling always wins in the long run.

I've built Debian packages in the past, and after packaging the same software with Nix, it's very hard to not feel that Debian tooling and packaging is time consuming for no good reason. The nixpkgs model, where all you need to build everything is one `git clone`, all you need to make a change is a pull request and wait for a range of automated tests to tell you it's good (for both build and the _uses_ of a package!), and all you need to land a contributed change is click one button, seems strictly better.

Over the years I've heard many people say "but shell scripts, FTP servers and arcane helper tools is the way we've done it for decades and that will never change" in many projects, but eventually, these projects shrink and those with a good developer experience and clean tooling overtake them.

Similarly, after experiencing automatic, safe refactoring across billion-line type-checked code bases, you can't help but wonder why people put up with spending their time on heroic community efforts distributing work across people that a machine could easily do with good tooling.

In my opinion, the real strength and legacy of Debian is successfully running a large, diverse, distributed project over decades with (reasonable) cohesion, democracy, and (reasonably) good organisation and project management.

But even some non-technical problems go away with good tooling, and more time gets freed up to solve those hard tasks.

Concrete example: In nixpkgs it is very easy to build overlays for the whole of NixOS that allow you to switch from dynamic to static linking or add hardening flags across all packages, avoiding big debates over which is the "one true way" because providing both is so easy, and both can be merged upstream.

I think any big and successful project should continuously invest into better tooling, and simplify and automate things. That keeps contributors motivated and on board.

(14-years happy Ubuntu [and thus Debian] user, and 10-years i3 user, so thanks for your efforts, Michael.)

> I have more ideas that seem really compelling to me, but, based on how my previous projects have been going, I don’t think I can make any of these ideas happen within the Debian project.

Interesting article. I had a chance of asking two distributions about some particular feature (making it easier to get GPG keys of developers).

The first one I contacted was Gentoo: they quickly CCed my email to relevant people, discussed the matter between themselves and deployed the change in a week.

Then I contacted Debian about the same thing. The email was basically identical. But the reply was largely negative, complaining about details and openly avoiding work. The entire interaction reminded me of large corporations where any change is met with resistance for resistance sake.

(I use Arch btw.)

I used debian for over 10 years and never managed to contribute anything.

10 Minutes after using homebrew for the first time I sent in a PR to update a package to the latest version and update the built dependencies.

Honestly, as both a Mac and a Debian user, I’m not sure which one I prefer. I understand your frustration, but random strangers not being allowed to push updates to my operating system in 5.34 seconds doesn’t sound all bad. To me.

(Obviously not that TFA paints a rosy picture..)

Those changes are reviewed by maintainers. The fact that a random stranger is able to push a change and get it reviewed, approved, merged, and available in minutes should be the goal of most open source projects. On the contrary, older projects tend to be too reactionary when it comes to infrastructure tools, so in turn they get very slow interactions. This becomes a demotivator for anybody who is used to more efficient workflows.

It is not a coincidence that the author became demotivated after doing some professional experience.

> The fact that a random stranger is able to push a change and get it reviewed, approved, merged, and available in minutes should be the goal of most open source projects

> On the contrary, older projects tend to be too reactionary when it comes to infrastructure tools, so in turn they get very slow interactions

Well said.

'Open source' doesn't say anything about infrastructure. But it's just as important.

> The fact that a random stranger is able to push a change and get it reviewed, approved, merged, and available in minutes should be the goal of most open source projects.

That sounds like a security risk to me.

How much safer is it if pushing in a malicious change takes two months?

I had to re-read it too. He does not say that the same dev should do all these things

> but random strangers not being allowed to push updates to my operating system in 5.34 seconds doesn’t sound all bad. To me.

So that's what the frustrating to maintainers crumbling infrastructure and crappy tooling are for. Now it all makes sense!

Actually, I'd very much prefer quickly pushed updates in case of severe security issues. Debian had lots of really ancient packages with problems in "stable" last time I looked.

If there's a severe security issue, Debian will quickly push an update containing a backported version of the fix, assuming you have the debian-security repository enabled (which you should). Just because the version number on the package is old doesn't mean it's insecure.


The problem is: what counts as a security issue? A developer (more often than not) doesn't know whether a fixed bug could have led to a security issue.

You're misunderstanding the point of Debian.

Debian's "stable" release is that. Stable. No updates are issued for packages except for critical security updates, which are backported to the released version.

It's essentially an LTS release. This isn't what everyone wants or needs, but if you do, Debian does it very well.

Homebrew is effectively a "rolling release" with fast turnover of package versions. So in that sense it's not directly comparable to Debian stable (or even unstable). That rapid turnover has led to numerous breaking bugs over the last five years or so that I've been using it. It's a lot simpler, but it also lacks the sophisticated versioned dependencies which dpkg/apt provide. Both have their tradeoffs, but I wouldn't want to use Homebrew for anything serious; it can break at any moment, and can be very painful to manage. It's got better over the years, but still has some way to go to match up to Linux package management.

Try MacPorts for BSD-style packaging on the Mac.

What's the advantage of using macports over Homebrew?

For example proper runtime dependency management?

Try updating a single homebrew package to a new major version and watch the other homebrew packages depending on it breaking.

To be fair, that's not a problem unique to Homebrew. Gentoo used to break runtime dependencies too, although that was a long time ago. Nowadays libs used by other programs are preserved and only deleted once all their dependents have been updated, so breakage no longer occurs.

For one, it works just fine on multi-user Macs: Homebrew installs software under the uid/gid of the user who installed it, somewhere under /usr/local, whereas MacPorts installs things the proper way as root:admin and doesn't create a shitload of permission problems.

I can't understand whether you're saying Homebrew does it right or not on multiuser systems.

I have a lot of trouble with Homebrew on multiuser Mac systems. Running `brew install ...` or `brew update` as anyone but the user who originally installed Homebrew almost always fails because of permission errors (or it successfully installs, but then causes permission errors in the future for the user who installed Homebrew). Whenever I need to use Homebrew commands now, I use `sudo -iu otheruser` to switch to the user account who first installed Homebrew in order to avoid the permissions issues. I'm really baffled that Homebrew doesn't support the multiuser system case. Is it so rare to share a system nowadays? I've had this issue on multiple Macs and fresh installs, so I don't think this is some fluke bug.

I’d argue that Debian’s “stability” should be a process/policy thing. It should be orthogonal to the tooling. Just because you have a “stability-first” policy doesn’t mean you should be enforcing it through crappy tooling.

Same for node packages, and same for rubygems. Some package managers are just better than others.

Seems to me like either snap or Homebrew Linux are the way to go. ArchLinux also has a reasonable packaging experience.

> 10 Minutes after using homebrew for the first time I sent in a PR to update a package to the latest version and update the built dependencies.

And that's exactly why I use and trust Debian.

It's a package that I am one of the upstream developers for. The PR involved changing the version from x.y.2 to x.y.3 and removing a no longer needed build dependency.

> And that's exactly why I use and trust Debian.

Comments like this are why you are an asshole, mr throwaway30012.

I noticed a mistake in a comment in a default file in the /etc/ directory - pretty certain the file is part of Debian (although this was on Ubuntu).

I thought I would try to fix it.

Two hours googling later and I couldn't even work out who the maintainer was. I don't like eating other people's time, but I even tried using IRC.

For next time:

    dpkg -S path/to/file
gives you the name of the package containing a file, and

    dpkg -s package-name
gives you the name of the maintainer.

All mails I've sent to maintainers of packages regarding issues with them have been ignored. I think they prefer you to use the bug tracker. (Which sucks, so I always end up doing nothing about it.)

In the end I switched all my servers to Ubuntu. It's been good and I love PPAs.

Ubuntu bug reporting is just as shitty. No reply for 6+ months was common for me. One time I even tracked down the issue, which was fixed upstream just one commit after the one they used for their package. Still no reaction, but about a year later they asked if the problem still persists with the current release. I didn't bother to reply and stopped reporting to either Ubuntu or Debian.

I had the same experience. After an Ubuntu update, my workstation was suddenly not able to mount an NFS filesystem from FreeBSD when Kerberos was enabled.

The bug was rather quickly marked as confirmed, with absolutely no updates for several years, even though the root cause was known. As far as I know, it still hasn't been fixed.

Exact same experience. The best bug reporting experiences I had were with arch and Nix.

My experience: I submitted a bug once, with patch, using the official bug tracker and jumping through all the hoops. It was ignored completely. When I went on IRC, the response was "perhaps you'd like to become the package maintainer?".

The bug remains to this day.

> I think they prefer you to use the bug tracker. (Which sucks, so I always end up doing nothing about it.)

Oh, so much this. The crappiness of bug reporting for Debian has meant that I stopped trying to report issues years ago.

> dpkg -S path/to/file

In the case of config files, this only seems to work sometimes -- maybe if it wasn't modified by the user? Or if it came from a package directly, and wasn't generated from its {pre,post}inst scripts?

    $ dpkg -S /etc/hosts
    dpkg-query: no path found matching pattern /etc/hosts
    $ dpkg -S /etc/resolv.conf
    dpkg-query: no path found matching pattern /etc/resolv.conf
    $ dpkg -S /etc/bash_completion
    bash-completion: /etc/bash_completion

`dpkg -S /path/to/file` only works if the file belongs to a package. `/etc/resolv.conf` gets created dynamically at runtime. `/etc/hosts` gets created at installation time because it includes the hostname (which ought to get replaced with nss-myhostname).


I suspect with that I can probably find the repository, so that I can check whether the comment has already been fixed, then I can work out how to submit a patch or bug.

Usually worth just following this workflow:


That is useful for reporting bugs.

I am a developer so every now and then I make a concerted effort to diagnose a bug, then fix it, then do a pull request.

However, I do admit I usually give up before getting to the point of submitting something useful...

I like it.

    set -e
    set -u

    if ! [[ ${1:-} ]]; then
        echo "please provide a file" >&2
        exit 1
    fi

    dpkg -s "$(dpkg -S "$1" | cut -d: -f 1)" | grep Maintainer

reportbug --file /etc/foobar

Instead, currently, all packages become lint-unclean, all maintainers need to read up on what the new thing is, how it might break, whether/how it affects them, manually run some tests, and finally decide to opt in. This causes a lot of overhead and manually executed mechanical changes across packages.

I always wondered if Debian/Ubuntu could benefit from a "monorepo". It seems to work for other distributions, e.g. Alpine Linux and Homebrew.


Right now every Debian package lives in a separate repo, or it doesn't even have to live in a repo at all AFAIK.

I think Debian has the most packages because their process is very loose and decoupled (as well as it being one of the oldest distros). But having tighter integration does help move things forward faster.

Nix / NixOS[1], and Gentoo[2] also use monorepos with some provision for overlays.

1: https://github.com/NixOS/nixpkgs

2: https://github.com/gentoo/gentoo

Likewise for the BSD ports.

There are advantages to both ways. However, a single repository permits changes to multiple packages in a single change. In Debian, simple transitions which affect multiple packages can take months or even years to fully propagate through the entire system. Not due to technical difficulty, but the logistics of coordinating the change.

Debian's approach made sense at the time. Developers who were widely distributed, communicating sporadically using dial-up internet connections, needed to be able to work independently and changes which had wide-ranging effects needed discussion and coordination. Nowadays, that can happen on a single merge request on GitLab. Homebrew can do such changes in a single pull request.

We've had CVS for years.

True, but it's not just the version control that's the important part. It's the infrastructure around it.

Today, when you submit e.g. a homebrew PR, it gets comprehensively tested by CI builds, including rebuilding and testing all reverse dependencies on every supported platform version. The BSD ports changes are backed by poudriere builds of the entire collection, again on multiple versions.

Services like GitHub and GitLab allow one to hook in all sorts of stuff and greatly improve the ease of submission of large- and small-scale changes, as well as making thorough testing and review both possible and accessible. Older projects are stuck with older entrenched tools and infrastructure, and haven't taken advantage of newer ways of doing things which newer projects have been able to adopt wholesale. The Debian tooling and infrastructure is certainly dated, but it's also of extremely high quality and very robust.

And CVS is clearly the cutting edge of source control.

I don't think it necessarily needs a monorepo, but having every package on Salsa (Debian's GitLab installation) would help.

The move to GitLab was a glimmer of hope that one day I'll be able to help contribute to the project which I'm a heavy user.

I wish they went all in and used GitLab Issues for bugs and GitLab CI/CD to auto-build packages for both validation and pushing new packages into the Debian repositories.

I'm not following the situation but what's stopping them from improving their packaging system, except for being complicated?

If the systemd discussion has taught me anything about Debian governance, getting everyone to agree is going to be a pain. Far easier to let it be.

Loud minorities and trolls on public mailing lists are not representative of the community of DDs and DMs.

No, but they're representative of the decision process. "Should we do A or B" results in people popping up supporting A, B, C, D, and E, and the decision tends to be "let's support them all". And as the original article we're commenting on discusses, that "support" comes in the form of "hey package maintainers, here's what you have to deal with".

This is a misunderstanding of how Debian works.

I've participated in the Debian community for 18 years, since Potato (2.2). I have a fairly good practical idea of how Debian operates. The above description was based on many, many, many decisions over the years. On the rare occasions that Debian has to actually decide something rather than answering "X or Y" with "yes" (and the decision doesn't fall solely to a small number of developers responsible for the packages in question), it results in painful institutional friction.

I love using Debian, I care deeply about Debian Policy and Debian's procedures, I enjoy many aspects of the Debian community, and I'm also well aware of where Debian has difficulties.

A mono repo is not needed. Some teams are able to push changes to all the packages of the team without it and without much effort (for example, the Debian Python Modules Team does that quite often). It's more a social problem: all packages should be maintained in the same way and people should be allowed to do global changes. This is true for some teams, not for some others.

Wouldn't even need to be a monorepo. As a first step, ensuring that all packages use the same system, e.g. git, and one authoritative repository (e.g. a debian-maintained gitlab) would be a great start.

https://salsa.debian.org/ is a Debian maintained GitLab

Reading between the lines I see "wouldn't it be nice if Debian's repository was like Google's repository"

Yes, a monorepo for all software with a single soviet to oversee it with a good five-year plan for rollout. We have plenty of historic examples of such centralized control.

The sidebar makes this barely readable on Pixel 2. The markdown is easy to read:


Sorry about that! I’m not great with Web Design, as you can tell. If anyone wants to contribute a CSS fix, or just a pointer to a good article on how to fix this, I’d be very grateful!

You could add the following CSS to the container (`.row` here, but you should make the class name a bit more specific), which will stack the sidebar and your content into a reversed column (so the sidebar shows up after scrolling past the content, which looks and feels ok for a quick & scrappy mobile fix).

    display: flex;
    flex-direction: column-reverse;
Put that in a media query to only target phones/small devices.
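Putting it together, it'd look something like this (the 600px breakpoint is only a guess; tune it to whatever width your sidebar actually breaks at):

    /* Quick mobile fix: stack the sidebar below the content on narrow screens */
    @media (max-width: 600px) {
      .row {
        display: flex;
        flex-direction: column-reverse;
      }
    }

Desktop layout stays untouched since the rule only kicks in under the breakpoint.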

Feel free to message me if you're confused and would prefer a fast PR, but you strike me as someone who wouldn't mind learning a bit & I'm in Morocco with limited internet.

edit: I would also move the branding (name & logo) above the content so people know who they're reading. And adjust the margins a bit. Frontend is Fun.

Thank you so much! Committed your suggestion in https://github.com/stapelberg/hugo/commit/3a7227fef47fc49550....

Let me know your Paypal, if you have any, and I’ll gladly invite you for a virtual coffee as a sign of my gratitude :)

All good! Feel free to shoot a small donation to the Internet Archive or we can grab a coffee if you ever find yourself in NYC.

Glad it worked for you!

"Put that in a media query to only target phones/small devices."

Tone: Honest question. What's that, exactly?

I've been trying to figure that out off and on for the last, oh, 5 or 6 years, and I keep bouncing off the problem as being too complicated to be worth it for my personal site. If there's a clean answer that has developed in the time since I basically gave up, I'd (again, no sarcasm) love to hear it. But I'd probably need a link to something; Google is clogged with old stuff here.

Media queries are a way to apply a bunch of CSS only on clients that fulfill some conditions.

If I write:

     @media (max-width: 500px) {
        p {
           font-style: italic;
        }
     }
It will turn the text in p tags italic on small screens.

Besides the lack of responsive sidebar, never justify text on the internet. Particularly for blog posts.

With hyphens enabled it's not that bad.

Reader mode in Firefox works like a charm.

You can also enable reader mode in Chrome if you turn on its flag in 'chrome://flags'.

Which flag is that? I don't see anything when searching for "read".

the content is very narrow on iphone6 as well, but safari readability view suffices

The Arch User Repository (https://aur.archlinux.org/) seems to solve a lot of the collaboration issues. It is pretty painless to create PKGBUILD files to make a package and to upload a new one to the AUR. Most maintainers read the comments and accept patches in a timely manner, and there's even a way to forcibly relinquish a package if a maintainer is AWOL for significant time.
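For a sense of how little ceremony is involved, a complete PKGBUILD for a simple make-based project looks roughly like this (every name, URL and checksum here is made up for illustration):

    # Hypothetical example; all names and URLs are illustrative
    pkgname=myexample
    pkgver=1.0
    pkgrel=1
    pkgdesc="An illustrative example package"
    arch=('x86_64')
    url="https://example.com/myexample"
    license=('MIT')
    source=("$url/$pkgname-$pkgver.tar.gz")
    sha256sums=('SKIP')  # real packages should pin an actual checksum

    build() {
      cd "$pkgname-$pkgver"
      make
    }

    package() {
      cd "$pkgname-$pkgver"
      make DESTDIR="$pkgdir" install
    }

Running makepkg -si in the same directory builds and installs it locally, and publishing to the AUR is just a git push of this file plus a generated .SRCINFO.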

On the other side of the spectrum, I've found that the official binary repositories for Arch Linux suffer many of the same issues described in the article for Debian. Patches being ignored and collaboration or involvement being near impossible. Even worse, the few people in charge of the official repositories are allowed to basically remove packages from the AUR with no interaction with the AUR maintainer, for the purpose of "promoting" them to the official repositories. This has happened to me twice, and it resulted in what I think is a worse package in one of those cases.

Yo! Trusted user from Arch Linux.

>On the other side of the spectrum, I've found that the official binary repositories for Arch Linux suffer many of the same issues described in the article for Debian. Patches being ignored and collaboration or involvement being near impossible.

Well, we still rely on svn internally, so things are complicated to say the least. Even if we had things on git (which we are working on), I'm unsure whether opening stuff up for outside collaboration like gentoo, alpine, void and nixos do is a good way. You need the proper tooling set up to make sure this isn't a burden on the maintainers.

>Even worse, the few people in charge of the official repositories are allowed to basically remove packages from the AUR with no interaction with the AUR maintainer, for the purpose of "promoting" them to the official repositories.

Which is true. Some people email maintainers of complicated packages before inclusion, and some also give a heads-up in the comments of the AUR. However, this is all done only if the packager wants to; there are no rules here. The removal is on the grounds that AUR packages shouldn't overlap with official ones.

>This has happened to me twice, and it resulted in what I think is a worse package in one of those cases.

I'm interested in taking a look at this if you want :) foxboron@archlinux.org or just type in the comments.

> I'm interested taking a look at this if you want :)

I'd rather not, the thing in question was around three years ago and I don't think I even have my AUR PKGBUILD for comparison. It wasn't a cry for help, and I'm not naming packages or individuals for a reason. Just voicing some frustration at the process.

Content is never deleted from the AUR, so any PKGBUILD can be retrieved.

I love Debian as a user. I considered becoming a DM then DD -- I read all the relevant documents and tested water with maintaining a package I used -- but ultimately gave up due to the bureaucracy and politics involved.

The last straw was this: https://lwn.net/Articles/704608/

(re the lwn link) Holy shit that is nuts.

Yeah that was painful to read. And all too typical.

For me this is the telling part:

"""When I joined Debian, I was still studying, i.e. I had luxurious amounts of spare time."""

OSS has stopped (if it ever truly was) being a part-time endeavour. I know from bitter personal experience one can only put up with a lot of bureaucracy and process if it is your day job - then you have time to get through the rubbish in order to find the diamonds.

How we (as a society now utterly dependent on OSS) manage this problem is on a par with how we manage journalism - they are bigger questions than I have easy answers to

> OSS has stopped (if it ever truly was) being a part time endeavour.

I think that depends on what OSS crowd you want to run with. The part-time hobbyist sector may be, I think, stronger than ever before.

The difference now is that there exists the "commercial" OSS sector. That is certainly not part-time hobbyist.

There is some overlap between those two worlds, but they are very different and distinct worlds nonetheless.

You're the guy behind i3? Thank you very much. I love it.

I’m glad you like i3!

Still using i3 too! It just works great and never gets in the way.

Btw. You changed from .name to .ch I noticed... Trying to naturalize? ;)

Baby steps ;)

My productivity changed drastically ever since I switched to i3. There is almost no friction when I interface with my computer due to it. Thank you.

i3 is the coolest thing ever.

While this is sad and painful to read, I can't say I'm surprised.

The problems listed are precisely the kind of problems that Redhat strategically supports fedora with, in terms of investment of resources. For all the hate Redhat receives it has consistently been a good community member by being willing to help fedora in areas that it knows are hard and yet not 'cool' enough to attract volunteer contributions.

What has Ubuntu done for the debian community along the same lines?

I've been using Debian / Ubuntu for many years, as much due to inertia and familiarity as anything else. And I have a lot of respect for the project.

If I wanted to start being a contributor to a distribution, which one would be the best to dive in to?

I would be interested to know the generally-accepted answers to this as well. What are the free/libre OSes which most folks enjoy contributing to?

It's a bit of an odd ball, but I loved contributing to the GuixSD project. I contributed multiple packages after learning Guile Scheme in a couple of days.

> If I wanted to start being a contributor to a distribution, which one would be the best to dive in to?

I tried to contribute a package to Fedora once upon a time...gave up in frustration after a while since it went nowhere after a bit of hoop jumping.

I'm sorry you felt that way. All I can say is that we've done a lot of work in recent years to drastically improve the quality of life for new contributors, especially packagers!

The process is pretty well documented here: https://fedoraproject.org/wiki/Join_the_package_collection_m...

If you have issues, feel free to hop into any of the Fedora communication channels[1][2][3][4][5] and ask for help.

[1]: https://fedoraproject.org/wiki/Communicating_and_getting_hel...

[2]: https://fedoraproject.org/wiki/Telegram

[3]: https://discord.gg/fedora (Yes, Discord!)

[4]: https://discussion.fedoraproject.org/

[5]: https://fedoraforum.org/ (Unofficial, but a strong, helpful group!)

I think Fedora is a great distribution to get started in. New stuff, cool tech, and decent, well-specified workflows across the board. :)

We also have a cool website that helps you figure out what you want to do and how to apply it: https://whatcanidoforfedora.org/

(Yes, it's inspired by the Mozilla one!)

The smaller ones are often easier to contribute to; port some packages to Void!

That's also where you make the least impact.

There is probably an impact graph with popularity * impact of change on the project * probability of change being accepted * log(delay of change reaching 80% of users).

For instance with Homebrew: high popularity, high probability of being accepted and low delay to reach 80% of users, so the impact of the change on the project is the main factor.

And a follow-up question: how would you get involved with it.

The post shares several pain points with Debian's slow, aging change-process and integration-infrastructure.

Open question: If Debian contributors feel the need to drop out and move on due to these pain points, are there less-painful Linux-distribution projects out there that are getting more of these pain points right they can flock to? Is this a sign that Debian needs to reform, or that other, newer distributions are outpacing it?

I like Debian, but I agree it's become too slow and bureaucratic.

Sadly, like most big organizations, it is hard to implement changes from inside.

In my opinion, most of Debian's issues stem from a centralized imperative package manager where all packages need to be carefully kept in sync for things to work. Moving to something like Nix would greatly simplify development.

This is far from a new idea. In fact, it was quite thoroughly discussed in debian-devel mailing list back in 2008, 2013 and several other times [1].

[1] https://lists.debian.org/debian-devel/2008/12/msg01007.html

> it's become too slow and bureaucratic.

When I first used Debian in the 90s it had a reputation for being slow and bureaucratic.

And unlike most distros which were around back then, Debian is still here today, so it must be good for something.

That said, I agree anything Debian (and thus Ubuntu too) and FSF feels awkwardly baroque to work with these days.

Old email-based systems, bickering over politics and minor issues rather than the subject at hand, for weeks or months... and no standard CI?!?

With all that I too find myself spending my time contributing elsewhere.

It’s a shame as these 2 organizations act as a foundation for sooo many things I rely on daily, but I just can’t bother.

> Debian is still here today, so it must be good for something.

It’s stable and I know they’ll still be around 5 years from now.

They also very rarely break anything on updates.

There are only three Linux distros I’d run in production: RHEL, Debian, or Ubuntu (depending).

What is the problem with old email-based systems? This is an honest question.

For FSF and Emacs all bugs, discussions and contribution are handled through email.

Sending patch-files by email and then having to discuss/defend your patch for weeks, only to have to create and send a new one, leaving receivers to diff patch-files to see what is different...

Let’s just say in 2019 you expect something a little more sophisticated to be available to make your contribution easier.

There's no problem with systems which allow interaction by email. It can be handy, and if you like that, then the Debian BTS is convenient for responding to bugs as a maintainer. However, email-only systems which don't offer any other methods of interaction can be problematic.

email-only interaction can be massively inefficient. For larger changes, I might need to change the severities, tags, and follow up to dozens of bugs. For every metadata change I make, I need to check the reply to make sure the change took place. That means manually tying together sent emails with BTS replies and repeating any which failed.

I used to track all this on large pieces of paper! I shouldn't need a manual system to manage my interaction with an issue tracking system. When I press submit in any other system, the effect is immediate. This is hugely costly in terms of time and effort to do really trivial and mundane things. It's one thing to do it once, but what happens when you make 30 changes and have to wait 20 minutes for each request to round-trip? It's a logistical nightmare which can take several hours to complete.
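For reference, each of those metadata changes is a mail to the BTS control server that looks roughly like this (the bug number and details below are made up):

    To: control@bugs.debian.org
    Subject: retagging

    severity 123456 important
    tags 123456 + patch pending
    retitle 123456 foo: crashes on startup when config file is empty
    thanks

Multiple commands can go in one mail, but you still have to wait for the acknowledgement reply and check that each command actually took effect.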

If I was to highlight a single problem with the Debian BTS, it's that there was a failure to acknowledge its inefficiencies and look at other systems. The criticisms raised in this thread aren't new; they were known 15+ years back. There's a reason why the others are not using email as their primary mode of interaction, and it's a shame the BTS didn't get a decent web frontend to make it more efficient as well as more accessible.

The thing that I think Debian does best (which is the same thing that has kept me using Debian since about 1996) is stability. I don't think any other Linux distro (that isn't Debian-based, anyway) even comes close.

> In my opinion, most Debian's issues stem from a centralized imperative package manager where all packages need to be carefully kept in sync for things to work.

Hmm, in my opinion, this is a good thing, not an issue that needs to be solved.

openSUSE uses the Open Build Service[0] to build, well, openSUSE. OBS also supports other distributions etc., but it makes it fairly easy to put up a package.[1]

For RPM-based distros (e.g. openSUSE), you write a .spec-file, check it in via OBS's version control alongside your sources, and off you go. OBS builds the package (and pulls in dependencies as needed etc.) and publishes the result as a repository with GPG keys and all the jazz, which you can just add to your own distro, and which is openly visible, so everybody else can use your package(s), too.
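To give an idea of the shape, a minimal .spec skeleton for an autotools-style project looks roughly like this (name, version and installed files are purely illustrative):

    # Hypothetical skeleton; adjust to the real project
    Name:           myexample
    Version:        1.0
    Release:        1
    Summary:        An illustrative example package
    License:        MIT
    URL:            https://example.com/myexample
    Source0:        %{name}-%{version}.tar.gz

    %description
    An illustrative example package.

    %prep
    %autosetup

    %build
    %configure
    make %{?_smp_mflags}

    %install
    %make_install

    %files
    %{_bindir}/myexample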

OBS also supports forking existing packages, and you can merge them back together, which means you can fix something in an existing package (whether a distro-package or something somebody else put up) and if they accept the changes, congratulations, you just fixed something in the distribution.

This means a lot of building, compilation, versioning etc. is out in the open, and you always have the sources available on top of it.

As an aside: I doubt people will "flock" to openSUSE, since many people sneer at them for no good reason (YaST, still?!), but they do a lot of good work, are good upstream contributors (like RedHat and unlike Ubuntu) and some of the tooling is absolutely amazing, except that nobody really knows about it.

[0]: https://en.opensuse.org/Portal:Build_Service

[1]: https://fosdem.org/2019/schedule/event/distribution_build_de...

OBS is so good, Fedora (my distro of choice) has a clone of it called Copr: https://copr.fedorainfracloud.org/

I'm a BSD guy but needed a Linux distro last year for a specific project. OpenSUSE was remarkably good, stable, easy to use and update, and apparently free of the political crap that seems to come with many other distros.

I have recommended it to others who need a stable Linux distro with lots of well-maintained packages. Some of them looked at me like I was crazy. Preconceived notions are hard to overcome sometimes. Oh well.

Do you feel any concern when the owner keeps changing? I think that's part of the reason people keep their distance.

NixOS/nixpkgs is exceptionally friendly and open to new developers, and even just a one-off patch is simple to do.

Indeed. I started using NixOS and Nix a bit more than six months ago and I was contributing patches within no time. One can just submit a pull request on GitHub (with the proper format) and a lot of aspects are automated: e.g. the impact of the change (in terms of packages that are affected) is automatically determined and someone with the right privileges can request a build. Some PRs linger around for a longer time, but most of my PRs were merged within the day. Also, I received useful feedback if a PR needed more polish.

What helped me a lot with Nix is that you can easily make your own local derivations. Derivations are usually short [1] and bear little risk with sandboxed builds enabled. Basically, you do a nix-shell -p mypackage, you get dropped in a shell with your derivation, but it does not affect the rest of your system.

[1] Example of a C++ library: https://github.com/danieldk/nix-home/blob/master/overlays/30...
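To illustrate how short these can be, here's a minimal derivation sketch (the package name, URL and hash are placeholders, not a real package):

    # Hypothetical derivation; pname, url and sha256 are placeholders
    { stdenv, fetchurl }:

    stdenv.mkDerivation rec {
      pname = "myexample";
      version = "1.0";
      src = fetchurl {
        url = "https://example.com/myexample-${version}.tar.gz";
        sha256 = "0000000000000000000000000000000000000000000000000000";
      };
      # stdenv's generic builder runs ./configure, make and make install by default
    }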

I am surprised OpenSuse does not get more attention. I feel its infrastructure, with the open build service etc., should be attractive to many?

Ugh, just saw your comment now. I wrote a bit more in a sibling comment about openSUSE. For some reason, people seem to either sneer at openSUSE (probably people who go way back?) or are simply unaware of anything it does and/or its existence. I briefly asked the people at the openSUSE stand at FOSDEM 2019 why they don't advertise things such as OBS more, and the answer suggested that some people basically don't see a point in it? It's a pity.

The last time I tried OpenSUSE, it gave me nothing but problems. Admittedly, this was many years ago and things may have changed -- but I have no real reason to give it another try to find out.

I switched all my servers to Ubuntu years ago when people were telling me "Ubuntu is no server distribution". I honestly never looked back.

Ubuntu is... interesting.

Watch out for what packages you rely on, and what repository those packages come from. If they're in universe, expect your bug reports on Launchpad to go unanswered or unresolved. The only way to get fixes in them is to go upstream to Debian. But note that not every bug in a universe package is actually fixable or broken in a Debian world, because the bug might be down to interactions between a decision made by Canonical with packages in the main repository, and those for the universe package.

Yes. For some reason people, even those running Ubuntu servers, are surprised to learn that universe has no guarantee to get updates for known vulnerabilities.

Ubuntu is currently divided into four components: main, restricted, universe and multiverse. All binary packages in main and restricted are supported by the Ubuntu Security team for the life of an Ubuntu release, while binary packages in universe and multiverse are supported by the Ubuntu community.


If you care about security, disable universe and multiverse on servers, or use Debian Stable if you want a Debian-based distribution.
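Concretely, that means keeping only the main and restricted components in /etc/apt/sources.list, along these lines (18.04 "bionic" is just an example release):

    deb http://archive.ubuntu.com/ubuntu bionic main restricted
    deb http://archive.ubuntu.com/ubuntu bionic-updates main restricted
    deb http://security.ubuntu.com/ubuntu bionic-security main restricted

Anything you then install is covered by the Ubuntu Security team rather than best-effort community updates.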

I think ubuntu has done a decent job with universe by and large. I find it in much better shape overall than epel. A good way to judge the health of these auxiliary repos is to look at the age of chromium. Ubuntu is generally good with patching at least security issues in universe. epel is still running behind by a month or two despite the security disclosures. So RH ought to take a page from Canonical's book when it comes to epel and make it a more official thing than the entirely community driven thing it is right now.

I don't mind RH being slower as I take it that's what makes RH what it is. Old packages but most stable.

It's not about RH packages directly. They are fine. As it is with universe on ubuntu, you quite likely need epel on rhel for doing quite a lot of basic stuff. If you are able to get by with just the proper supported packages in each distro, this doesn't affect you. And that's where the two diverge. universe is in better shape since it is semi-official, v/s epel which is completely unofficial/community-based.

Actually, since EPEL is often backports from Fedora, many people elect to keep them up to date with what's shipped in Fedora, so it tends to do better than other counterparts.

The closest counterpart is SUSE's PackageHub, which is often seeded by stuff going into openSUSE Leap that isn't part of SLE itself.

Just saw your other comment about Fedora. Thanks for Fedora and EPEL. I was just pointing out that EPEL would benefit from some recognition as a semi-official but ultimately unsupported repo from RH. Technically RH already has 'optional' channels which are already in that category, but EPEL should really be that. Some communication channel between RH and EPEL and QA so that stuff doesn't break on RHEL point releases and maybe some resources for security updates would go a long way in making RHEL more usable.

Are you a RHEL customer? If so, telling your salespeople this would not hurt. Just sayin' :)

We'd all love that. It's funny, because Red Hat does regularly source content from EPEL to pull into RHEL for various products, which is where these issues come from.

I would love it if Red Hat were to give it proper recognition. I honestly don't know why they haven't yet...

I think Ubuntu suffers equally from every single pain point that Michael describes in his post.

Given that ubuntu depends heavily on debian maintainers and governance, what exactly does this improve?

Ubuntu LTS distributions have been the perfect mix of new-ish packages and stability for us.

Are they who blindly use CentOS/RedHat?

Ubuntu specifically has a "server" version. I've been using it since 10.04 (almost 9 years) and never had to use anything else.
