Hacker News
Winding down my Debian involvement (stapelberg.ch)
469 points by secure on Mar 10, 2019 | 227 comments

I've been using Debian for over 10 years and I still love it as a user. But as a developer, I find it extremely frustrating. I've attempted several times to figure out how to package my open-source projects [1] [2] for Debian, but the process is a nightmare. As I understand it, I first have to find someone with appropriate privileges to mentor me. I should be able to just submit a potential package for review.

Then there is a ton of documentation on creating packages, but it's unclear which 300-page guide is the right one to use. And which set of packaging tools should I use? In the end, the time investment required to get started has kept me from contributing.

[1] https://camotics.org/

[2] https://foldingathome.org/

Agreed, I've tried and failed to make Debian packages, and have no idea how to proceed.

In contrast, with Gentoo and their documentation, it was straightforward for me to make my own additional, separate repo, look at other ebuilds to see how packaging works, and have everything just work.


I submit ebuilds to Gentoo itself if I think others would benefit.

As far as I can tell, packaging for Arch is straightforward as well.
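To give a sense of scale: the whole Arch packaging interface is a single PKGBUILD file. A hedged sketch for a hypothetical GNU hello package (the name, URL, and the skipped checksum are illustrative only):

    pkgname=hello
    pkgver=2.10
    pkgrel=1
    pkgdesc="The GNU hello program"
    arch=('x86_64')
    url="https://www.gnu.org/software/hello/"
    license=('GPL3')
    source=("https://ftp.gnu.org/gnu/hello/hello-$pkgver.tar.gz")
    sha256sums=('SKIP')  # a real package would pin the tarball hash here

    build() {
        cd "hello-$pkgver"
        ./configure --prefix=/usr
        make
    }

    package() {
        cd "hello-$pkgver"
        make DESTDIR="$pkgdir" install
    }

Running makepkg in the same directory builds an installable package; the same file is all the AUR needs.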

Shameless plug: https://github.com/hoffa/debpack

It won't pass all lintian tests due to the ridiculous requirements and useless ceremony "correct" Debian packages need. Still working on it.

If we're plugging tools we use to make Debian packages without dealing with the insanity of Debian tooling, I use debbuild[1] with OBS[2] for the bulk of my packaging at work for Debian/Ubuntu systems.

As it turns out, it doesn't take that much effort to make packages that comply with Fedora/openSUSE Packaging Guidelines and Debian Policy with this tool, and it has drastically simplified my ability to maintain software across distributions in a way that still cleanly integrates with the distribution platform.

This is how the Spacewalk Debian/Ubuntu client packages are built[3][4], among many other things. There are also a few other examples out in the wild[5][6][7].

At least for the stuff I make for myself, I haven't made packages that fully pass lintian, but it's not terribly difficult to do if you want to.

[1]: https://github.com/ascherer/debbuild

[2]: https://openbuildservice.org/

[3]: https://gitlab.com/datto/engineering/spacewalk-debian-client...

[4]: https://build.opensuse.org/project/show/systemsmanagement:sp...

[5]: https://pagure.io/python36-flatpkg-deb

[6]: https://pagure.io/rpmdevtools-deb

[7]: https://build.opensuse.org/package/view_file/Virtualization:...

Another shameless plug:


It doesn't exactly simplify the Debian packaging in my case as it leaves the packaging mostly exposed.

But it provides a lot of far more convenient, easy-to-remember commands (make rpm, make deb, make deb_repo, make rpm_repo, etc.) compared to the easy-to-forget options of rpmbuild/mock and dpkg-buildpackage/cowbuilder.

It also glues everything together, from retrieving the upstream source to building the final repository, ready to be rsynced to a public server.

NixOS was also easy to get started with; I really like it!

NixOS (Nixpkgs actually) is indeed incredibly simple to contribute to.

It's a monorepo, and packages are declarative. So for most use cases, it's just 5-10 LOC describing the source location, dependencies and build process.

After that, if it builds on your local copy of Nixpkgs, it will build for everyone on the same Nixpkgs commit, as everything is purely functional. A simple PR is all you need.
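For illustration, a sketch of what such a small package expression can look like (not a real Nixpkgs entry; it follows the common mkDerivation pattern, and the hash is a placeholder):

    { stdenv, fetchurl }:

    stdenv.mkDerivation rec {
      name = "hello-${version}";
      version = "2.10";

      # fetchurl pins the source by hash; Nix reports the expected
      # value on the first build attempt
      src = fetchurl {
        url = "mirror://gnu/hello/hello-${version}.tar.gz";
        sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder
      };
    }

The standard build phases (configure, make, make install) happen automatically for an autotools-style project like this.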

For updates, the burden is super low too. Thanks to packages being declarative, everything is quite explicit, so a bot can check for source updates upstream, update your package and rebuild it. All it requires is a simple maintainer approval.

Yeah, the packaging part of NixOS/Nixpkgs is very pleasant. You write a "config" in the Nix language (see the vorbis-tools example[1]; it's <40 LOC), open a PR on GitHub, and it'll be merged provided you follow the code-review feedback. As a user you'd later download the built binary from the NixOS cache, so you don't have to build it yourself.

[1]: https://github.com/NixOS/nixpkgs/blob/fce8f26af6ef8209c7d282...

Agreed, Gentoo and Nixpkgs both have really helpful people onboarding new packagers.

Probably won't help with Debian official packages but I've found success with these materials.


And then, as a quality-of-life enhancement, push the package building off to FPM.


Personal Repo


> Community: If a newbie has a bad time, it's a bug.

This is the root of the Debian newbie community experience. FPM seems to get it at least.

And FreeBSD ports (which, if I remember correctly, inspired a lot of things in the Gentoo packaging system).

> there is a ton of documentation on creating packages but which 300 page guide is the right one to use is unclear

So true. It seems like there are competing packaging approaches, each claiming the others are outdated, and vice versa.

I've been wanting to package a few of my open-source projects as well for almost a year, and out of frustration, I've ended up building my .deb packages manually and hosting them on my own apt repository. In the meantime, I've published a few packages on PPAs (for Ubuntu) and on the AUR (for Arch Linux), and it's been as easy as it could have been.

Given that I use Debian exclusively on my servers, it feels like a missed opportunity to me (not that I think my own projects are that valuable to others, but I'd package a lot more stuff if it were clear how to do it well).

I like the Arch/AUR approach too as you don't even have to host your own repo. With a good AUR helper that system is quite convenient for maintainers as well as users.

The problem with the AUR is its semi-official status takes the pressure off the 'community' binary repositories, while not being up to the same standards - many packages are out of date or don't build, and the complete lack of gatekeeping renders the user vulnerable to malicious packages. Note that "AUR helpers" are not officially condoned by Arch Linux, despite the system being unusable without them - officially you're supposed to examine all the PKGBUILDs of all dependencies manually to make sure they're safe.

I prefer the Void Linux approach - every "template" (equivalent to Arch's PKGBUILD) is kept in a single, monolithic git repository, and the barrier to contribution is kept low; just submit a pull request, and a core developer will review and accept it. This way all packages have binary builds and all packages have had at least nominal review, yet the repository is surprisingly broad. And if you need something that isn't in the repository, adapting an AUR PKGBUILD is trivial.
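For comparison, a Void template is a short file of shell variables in that monorepo; a hedged sketch (hypothetical package, checksum elided):

    # srcpkgs/hello/template (illustrative only)
    pkgname=hello
    version=2.10
    revision=1
    build_style=gnu-configure
    short_desc="The GNU hello program"
    maintainer="you <you@example.com>"
    license="GPL-3.0-or-later"
    homepage="https://www.gnu.org/software/hello/"
    distfiles="${GNU_SITE}/hello/hello-${version}.tar.gz"
    checksum="<sha256 of the tarball>"

The build_style field selects the whole build procedure, so autotools-style projects need nothing more than this.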

> Then there is a ton of documentation on creating packages but which 300 page guide is the right one to use is unclear. And which set of packaging tools should I use?

I've run into the same problems, though I've only wanted to build packages for my own use, or company-internal use.

I think I've understood now how packaging is supposed to work, and have written a guide, shameless plug: http://leanpub.com/debian/c/hn (coupon link to get a free copy; feedback would be appreciated).

> feedback would be appreciated

I'm just looking through it. For me, the layout of the epub version is broken (at least when viewing it in Adobe Digital Editions), apparently every time there's a yellow code block.

"1.1 Target Audience and Prerequisites" is the first example but I keep finding more.

Thanks, I'll have to bring that to leanpub's attention, the PDF looks fine.

https://wiki.debian.org/Packaging points to the official documentation.

Thanks for this!

It will soon be 10 years since my first upload. One thing that is definitely noticeable: while creating a package is still by no means trivial, it has become much, much easier.

10 years ago, you had a handful of very idiosyncratic build helpers to help you manage your package.

Today, effectively it's just one [1], debhelper, and it has become trivial to build packages with it. It's a very powerful framework that can be overridden to do basically anything you like, but usually automagically does the right thing. Gone are the days when you had to write an unwieldy debian/rules file.

3 years ago, you had X version control systems supported: git, svn, mercurial, cvs, and who knows what else. Sounds unreasonable, right? Well, when you rely on volunteer work, you need to live with the fact that volunteers will choose the particular tooling they like to work with.

Today, you have Salsa [2], which is a GitLab instance, and people apparently just learned to deal with it, and the world hasn't come tumbling down.

I absolutely agree with the author that some things are just wrong within Debian. However, I believe these are inherent to the nature of an organization comprised entirely of volunteers; achieving consensus becomes really hard because nobody wants to be told how to spend their free time.

On the other hand, being involved in Debian has also been an incredibly formative experience. I believe Debian does some things right that others still fail at or trail after.

[1] https://anarc.at/blog/2019-02-05-debian-build-systems/

[2] https://salsa.debian.org

The Debian tooling can be really fantastic at times. If the upstream is clean (a project with a proper setup.py/Makefile/CMake/Makefile.pl/etc.), you only have to declare the metadata and the build and runtime dependencies, and you are set.

But at the same time, I'm not a big fan of other aspects. For example, the fact that the packaging is split across at least 3 or 4 files, and generally closer to a dozen (at least control, rules, changelog, copyright, and maybe .install, .conffiles, .postinst, .preinst, etc.), is a bit complex at first, especially compared to rpm, where everything is centralized in one .spec file.
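For readers who haven't seen it, the split looks roughly like this in a minimal case (many real packages carry more files):

    debian/
        control      # package metadata, build and runtime dependencies
        rules        # the build entry point (an executable Makefile)
        changelog    # version numbers live here, not in control
        copyright    # per-file license information
        compat       # debhelper compatibility level
        install      # optional: which built files go where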

I'm also not a big fan of how Debian packages handle permissions other than root:root. dpkg-statoverride in postinst doesn't feel natural; the newcomer will typically use chown, which is wrong. rpm is better in that regard: you explicitly state the permissions in the %files section of the spec file.
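For the record, the sanctioned Debian pattern is a maintainer-script stanza like the following (the path, user, group and mode are made up for illustration):

    # debian/postinst fragment: register non-root:root ownership with dpkg,
    # so it survives upgrades (a bare chown here would be the "wrong" way)
    if ! dpkg-statoverride --list /usr/sbin/foo-daemon >/dev/null 2>&1; then
        dpkg-statoverride --update --add foouser foogroup 0750 /usr/sbin/foo-daemon
    fi

The rpm equivalent is a single line in the spec's %files section: %attr(0750,foouser,foogroup) /usr/sbin/foo-daemon.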

And I'm also not a big fan of Debian renaming upstream archives (<pkg>_<ver>.orig.tar.gz); IMHO, upstream should be taken as is, including the file name. But that's a minor one, and also a very personal opinion.

I have mixed opinions about packages with a lot of scripting to handle a default but slightly customized configuration. I know it can be disabled with DEBIAN_FRONTEND=noninteractive, but it feels like a lot of effort and an unnecessary source of complexity and bugs for a distribution that will primarily be used on servers, where such helpers are almost useless, especially when using tools like puppet/chef/ansible/saltstack.

The tooling also feels a bit like an aggregation of scripts piled on top of each other. For example, when you want to build in a clean and throwaway environment (aka a chroot), you have pbuilder, cowbuilder, qemubuilder, whalebuilder... CentOS/Fedora has mock. Granted, it's less advanced (apart maybe from mockchain), but at least it's the obvious choice.

For a lot of things, Debian packaging feels like Perl: there are 10 ways to do it. But 9 of them are wrong, and you have to go through dozens of pages of policy with no search box (https://www.debian.org/doc/debian-policy/index.html) to figure out the right one. It's also kind of difficult to learn from examples (by downloading source packages to see how it's done), since it's not obvious where each part fits, at least for a newcomer.

I'm curious, in what way do you feel it's less advanced? Mock supports leveraging qemu-user-static for doing foreign architectures as of Mock 1.4.11, which I think was the only major feature that the Debian tools had for a while that Mock lacked.

Is there something else missing?

Mock doesn't have COW (copy-on-write); cowbuilder, by contrast, creates a base chroot, then uses copy-on-write to enrich it per build. This speeds things up considerably.

Also, cowbuilder runs fine with several instances running at the same time; mock is a bit more sketchy in that regard. Basically, you can only build one package at a time. That's a bit of an issue for my personal scripts: https://github.com/kakwa/amkecpak. With them I can do make rpm -j 42 to build 42 packages at the same time, but with mock/mockchain, the chroot creation has some concurrency issues.

In fairness, I used the rather old mock version packaged by Debian; I've not looked at 1.4, and it seems some of these issues are addressed now.

I’m surprised and sad that that’s still the case. I was the creator and maintainer for an open source application some 10+ years ago, and figuring out how to package it properly for Debian was essentially impossible.

It eventually got packaged by someone who had been involved in the Debian community for some time.

I had this exact same problem and also gave up. It was absurd. The amount of friction in contributing to Debian is enormous.

And, at the risk of upsetting people.. it is time to move away from mailing lists, especially for submitting bugs and managing packages.

Debian hasn't used mailing lists for reporting bugs since the 90s: https://en.wikipedia.org/wiki/Debbugs

Debbugs is basically a mailing list.

The original problem with reporting bugs to a mailing list was that mails for many bugs were mixed together and you couldn't easily track the state of individual bugs. Debbugs, by automatically creating a new mailing list for each individual bug, solves this problem admirably.

The new problem is that new people hate emails. I think the merit of this problem is debatable.

Mailing lists have had threads for decades.

You can subscribe to individual bugs in Debbugs, you can't subscribe to threads. Threads also don't have short ID (bug 123456) and don't have metadata like whether the bug is closed.

Broken email clients and careless people break those all the time though...

In the long run, everyone who tolerates email is dead.

But do they die before everything else?

My experience trying to update a package was similar. I use Ubuntu, so I had the extra friction of going through their Launchpad service first, then being told to go to the Debian lists instead.

> I've been using Debian for over 10 years and I still love it as a user.

I've been using it for even longer than that, but I no longer love it. It has declined to being "just OK" in my eyes, which is why I've begun the effort to shift all of my machines to something better.

It should be easier to make a Guix package and submit it. Then you could run Guix on top of Debian, or use Guix functionality to make a container. Either a tarball or a docker container if that's more your style. https://www.gnu.org/software/guix/blog/2018/tarballs-the-ult...

Why not use CMake/CPack? It can generate 100% valid Debian packages, and you don't need a C++ project; you can package whatever you want.

I hadn't heard of CPack; it seems like there aren't that many projects using it. Do you have any examples? Do the packages pass lintian checks?

In my experience, better tooling always wins in the long run.

I've built Debian packages in the past, and after packaging the same software with Nix, it's very hard to not feel that Debian tooling and packaging is time consuming for no good reason. The nixpkgs model, where all you need to build everything is one `git clone`, all you need to make a change is a pull request and wait for a range of automated tests to tell you it's good (for both build and the _uses_ of a package!), and all you need to land a contributed change is click one button, seems strictly better.

Over the years I've heard many people say "but shell scripts, FTP servers and arcane helper tools is the way we've done it for decades and that will never change" in many projects, but eventually, these projects shrink and those with a good developer experience and clean tooling overtake them.

Similarly, after experiencing automatic, safe refactoring across billion-line type-checked code bases, you can't help but wonder why people put up with spending their time on heroic community efforts, distributing work across people, that a machine could easily do with good tooling.

In my opinion, the real strength and legacy of Debian is successfully running a large, diverse, distributed project over decades with (reasonable) cohesion, democracy, and (reasonably) good organisation and project management.

But even some non-technical problems go away with good tooling, and more time gets freed up to solve those hard tasks.

Concrete example: In nixpkgs it is very easy to build overlays for the whole of NixOS that allow you to switch from dynamic to static linking or add hardening flags across all packages, avoiding big debates over which is the "one true way" because providing both is so easy, and both can be merged upstream.

I think any big and successful project should continuously invest into better tooling, and simplify and automate things. That keeps contributors motivated and on board.

(14-years happy Ubuntu [and thus Debian] user, and 10-years i3 user, so thanks for your efforts, Michael.)

> I have more ideas that seem really compelling to me, but, based on how my previous projects have been going, I don’t think I can make any of these ideas happen within the Debian project.

Interesting article. I had the chance to ask two distributions about a particular feature (making it easier to get the GPG keys of developers).

The first one I contacted was Gentoo: they quickly CCed my email to relevant people, discussed the matter between themselves and deployed the change in a week.

Then I contacted Debian about the same thing. The email was basically identical. But the reply was largely negative, complaining about details and openly avoiding work. The entire interaction reminded me of large corporations where any change is met with resistance for resistance's sake.

(I use Arch btw.)

I used Debian for over 10 years and never managed to contribute anything.

10 minutes after using Homebrew for the first time, I sent in a PR to update a package to the latest version and update the build dependencies.

Honestly, as both a Mac and a Debian user, I’m not sure which one I prefer. I understand your frustration, but random strangers not being allowed to push updates to my operating system in 5.34 seconds doesn’t sound all bad. To me.

(Obviously not that TFA paints a rosy picture..)

Those changes are reviewed by maintainers. The fact that a random stranger is able to push a change and get it reviewed, approved, merged, and available in minutes should be the goal of most open source projects. On the contrary, older projects tend to be too reactionary when it comes to infrastructure tools, so in turn they get very slow interactions. This becomes a demotivator for anybody who is used to more efficient workflows.

It is not a coincidence that the author became demotivated after gaining some professional experience.

> The fact that a random stranger is able to push a change and get it reviewed, approved, merged, and available in minutes should be the goal of most open source projects

> On the contrary, older projects tend to be too reactionary when it comes to infrastructure tools, so in turn they get very slow interactions

Well said.

'Open source' doesn't say anything about infrastructure. But it's just as important.

> The fact that a random stranger is able to push a change and get it reviewed, approved, merged, and available in minutes should be the goal of most open source projects.

That sounds like a security risk to me.

How much safer is it if pushing in a malicious change takes two months?

I had to re-read it too. He does not say that the same dev should do all these things

> but random strangers not being allowed to push updates to my operating system in 5.34 seconds doesn’t sound all bad. To me.

So that's what the frustrating-to-maintainers crumbling infrastructure and crappy tooling are for. Now it all makes sense!

Actually, I'd very much prefer quickly pushed updates in case of severe security issues. Debian had lots of really ancient packages with problems in "stable" last time I looked.

If there's a severe security issue, Debian will quickly push an update containing a backported version of the fix, assuming you have the debian-security repository enabled (which you should). Just because the version number on the package is old doesn't mean it's insecure.
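Concretely, having the security repository enabled means a sources.list line like this (with stretch being stable at the time of writing):

    deb http://security.debian.org/ stretch/updates main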


The problem is: what defines a security issue? A developer (more often than not) doesn't know whether a fixed bug could have led to a security issue.

You're misunderstanding the point of Debian.

Debian's "stable" release is just that: stable. No updates are issued for packages except critical security updates, which are backported to the released version.

It's essentially an LTS release. This isn't what everyone wants or needs, but if you do, Debian does it very well.

I’d argue that Debian’s “stability” should be a process/policy thing. It should be orthogonal to the tooling. Just because you have a “stability-first” policy doesn’t mean you should be enforcing it through crappy tooling.

Homebrew is effectively a "rolling release" with fast turnover of package versions. So in that sense it's not directly comparable to Debian stable (or even unstable). That rapid turnover has led to numerous breaking bugs over the last five years or so that I've been using it. It's a lot simpler, but it also lacks the sophisticated versioned dependencies which dpkg/apt provide. Both have their tradeoffs, but I wouldn't want to use Homebrew for anything serious; it can break at any moment, and can be very painful to manage. It's got better over the years, but still has some way to go to match up to Linux package management.

Try macports for a BSD style packaging for the mac.

What's the advantage of using macports over Homebrew?

For example proper runtime dependency management?

Try updating a single homebrew package to a new major version and watch the other homebrew packages depending on it breaking.

To be fair, that's not a problem unique to Homebrew. Gentoo used to break runtime dependencies too, although that was a long time ago. Nowadays libs used by other programs are preserved and only deleted once everything depending on them has been updated, so breakage no longer occurs.

For one, it works just fine on multi-user Macs: MacPorts installs software the proper way, as root:admin, whereas Homebrew installs everything under the uid/gid of the user who installed it, somewhere under /usr/local, and creates a shitload of permission problems.

I can't understand whether you're saying Homebrew does it right or not on multiuser systems.

I have a lot of trouble with Homebrew on multiuser Mac systems. Running `brew install ...` or `brew update` as anyone but the user who originally installed Homebrew almost always fails because of permission errors (or it successfully installs, but then causes permission errors in the future for the user who installed Homebrew). Whenever I need to use Homebrew commands now, I use `sudo -iu otheruser` to switch to the user account who first installed Homebrew in order to avoid the permissions issues. I'm really baffled that Homebrew doesn't support the multiuser system case. Is it so rare to share a system nowadays? I've had this issue on multiple Macs and fresh installs, so I don't think this is some fluke bug.

Same for node packages, and same for rubygems. Some package managers are just better than others.

Seems to me like either snap or Homebrew Linux are the way to go. ArchLinux also has a reasonable packaging experience.

> 10 minutes after using Homebrew for the first time, I sent in a PR to update a package to the latest version and update the build dependencies.

And that's exactly why I use and trust Debian.

It's a package that I am one of the upstream developers for. The PR involved changing the version from x.y.2 to x.y.3 and removing a no longer needed build dependency.

> And that's exactly why I use and trust Debian.

Comments like this are why you are an asshole, mr throwaway30012.

I noticed a mistake in a comment in a default file in the /etc/ directory - pretty certain the file is part of Debian (although this was on Ubuntu).

I thought I would try to fix it.

Two hours of googling later, I couldn't even work out who the maintainer was. I don't like eating other people's time, but I even tried using IRC.

For next time:

    dpkg -S path/to/file
gives you the name of the package containing a file, and

    dpkg -s package-name
gives you the name of the maintainer.

All mails I've sent to maintainers of packages regarding issues with them have been ignored. I think they prefer you to use the bug tracker. (Which sucks, so I always end up doing nothing about it.)

In the end I switched all my servers to Ubuntu. It's been good and I love PPAs.

Ubuntu bug reporting is just as shitty. No reply for 6+ months was common for me. One time I even tracked down the issue, which was fixed upstream just one commit after the one they used for their package. Still no reaction, but about a year later they asked if the problem still persisted with the current release. I didn't bother to reply and stopped reporting to either Ubuntu or Debian.

I had the same experience. After an Ubuntu update, my workstation was suddenly not able to mount an NFS filesystem from FreeBSD when Kerberos was enabled.

The bug was rather quickly marked as confirmed, with absolutely no updates for several years, even though the root cause was known. As far as I know, it still hasn't been fixed.

Exact same experience. The best bug reporting experiences I had were with arch and Nix.

My experience: I submitted a bug once, with patch, using the official bug tracker and jumping through all the hoops. It was ignored completely. When I went on IRC, the response was "perhaps you'd like to become the package maintainer?".

The bug remains to this day.

> I think they prefer you to use the bug tracker. (Which sucks, so I always end up doing nothing about it.)

Oh, so much this. The crappiness of bug reporting for Debian has meant that I stopped trying to report issues years ago.

> dpkg -S path/to/file

In the case of config files, this only seems to work sometimes -- maybe if it wasn't modified by the user? Or if it came from a package directly, and wasn't generated from its {pre,post}inst scripts?

    $ dpkg -S /etc/hosts
    dpkg-query: no path found matching pattern /etc/hosts
    $ dpkg -S /etc/resolv.conf
    dpkg-query: no path found matching pattern /etc/resolv.conf
    $ dpkg -S /etc/bash_completion
    bash-completion: /etc/bash_completion

`dpkg -S /path/to/file` only works if the file belongs to a package. `/etc/resolv.conf` gets created dynamically at runtime. `/etc/hosts` gets created at installation time because it includes the hostname (which ought to get replaced with nss-myhostname).


I suspect with that I can probably find the repository, so that I can check whether the comment has already been fixed, then I can work out how to submit a patch or bug.

Usually worth just following this workflow:


That is useful for reporting bugs.

I am a developer so every now and then I make a concerted effort to diagnose a bug, then fix it, then do a pull request.

However, I do admit I usually give up before getting to the point of submitting something useful...

I like it.

    set -e
    set -u

    if ! [[ ${1:-} ]]; then
        echo "please provide a file" >&2
        exit 1
    fi

    dpkg -s "$(dpkg -S "$1" | cut -d: -f 1)" | grep Maintainer

reportbug --file /etc/foobar

Instead, currently, all packages become lint-unclean, all maintainers need to read up on what the new thing is, how it might break, whether/how it affects them, manually run some tests, and finally decide to opt in. This causes a lot of overhead and manually executed mechanical changes across packages.

I always wondered if Debian/Ubuntu could benefit from a "monorepo". It seems to work for other distributions, e.g. Alpine Linux and Homebrew.


Right now every Debian package lives in a separate repo, or it doesn't even have to live in a repo at all AFAIK.

I think Debian has the most packages because their process is very loose and decoupled (as well as it being one of the oldest distros). But having tighter integration does help move things forward faster.

Nix / NixOS[1], and Gentoo[2] also use monorepos with some provision for overlays.

1: https://github.com/NixOS/nixpkgs

2: https://github.com/gentoo/gentoo

Likewise for the BSD ports.

There are advantages to both ways. However, a single repository permits changes to multiple packages in a single change. In Debian, simple transitions which affect multiple packages can take months or even years to fully propagate through the entire system. Not due to technical difficulty, but the logistics of coordinating the change.

Debian's approach made sense at the time. Developers who were widely distributed, communicating sporadically using dial-up internet connections, needed to be able to work independently and changes which had wide-ranging effects needed discussion and coordination. Nowadays, that can happen on a single merge request on GitLab. Homebrew can do such changes in a single pull request.

We've had CVS for years.

True, but it's not just the version control that's the important part. It's the infrastructure around it.

Today, when you submit e.g. a homebrew PR, it gets comprehensively tested by CI builds, including rebuilding and testing all reverse dependencies on every supported platform version. The BSD ports changes are backed by poudriere builds of the entire collection, again on multiple versions.

Services like GitHub and GitLab allow one to hook in all sorts of stuff and greatly improve the ease of submission of large- and small-scale changes, as well as making thorough testing and review both possible and accessible. Older projects are stuck with older entrenched tools and infrastructure, and haven't taken advantage of newer ways of doing things which newer projects have been able to adopt wholesale. The Debian tooling and infrastructure is certainly dated, but it's also of extremely high quality and very robust.

And CVS is clearly the cutting edge of source control.

I don't think it necessarily needs a monorepo, but having every package on Salsa (Debian's GitLab installation) would help.

The move to GitLab was a glimmer of hope that one day I'll be able to help contribute to the project which I'm a heavy user.

I wish they went all in and used GitLab Issues for bugs and GitLab CI/CD to auto-build packages for both validation and pushing new packages into the Debian repositories.

I'm not following the situation, but what's stopping them from improving their packaging system, other than it being complicated?

If the systemd discussion has taught me anything about Debian governance, getting everyone to agree is going to be a pain. Far easier to let it be.

Loud minorities and trolls on public mailing lists are not representative of the community of DDs and DMs.

No, but they're representative of the decision process. "Should we do A or B" results in people popping up supporting A, B, C, D, and E, and the decision tends to be "let's support them all". And as the original article we're commenting on discusses, that "support" comes in the form of "hey package maintainers, here's what you have to deal with".

This is a misunderstanding of how Debian works.

I've participated in the Debian community for 18 years, since Potato (2.2). I have a fairly good practical idea of how Debian operates. The above description was based on many, many, many decisions over the years. On the rare occasions that Debian has to actually decide something rather than answering "X or Y" with "yes" (and the decision doesn't fall solely to a small number of developers responsible for the packages in question), it results in painful institutional friction.

I love using Debian, I care deeply about Debian Policy and Debian's procedures, I enjoy many aspects of the Debian community, and I'm also well aware of where Debian has difficulties.

A monorepo is not needed. Some teams are able to push changes to all the packages of the team without it and without much effort (for example, the Debian Python Modules Team does that quite often). It's more a social problem: all packages should be maintained in the same way and people should be allowed to make global changes. This is true for some teams, but not for others.

Wouldn't even need to be a monorepo. As a first step, ensuring that all packages use the same system, e.g. git, and one authoritative repository (e.g. a debian-maintained gitlab) would be a great start.

https://salsa.debian.org/ is a Debian maintained GitLab

Reading between the lines I see "wouldn't it be nice if Debian's repository was like Google's repository"

Yes, a monorepo for all software with a single soviet to oversee it with a good five-year plan for rollout. We have plenty of historic examples of such centralized control.

The sidebar makes this barely readable on Pixel 2. The markdown is easy to read:


Sorry about that! I’m not great with Web Design, as you can tell. If anyone wants to contribute a CSS fix, or just a pointer to a good article on how to fix this, I’d be very grateful!

You could add the following CSS to the container (`.row` here, but you should make the class name a bit more specific), which will stack the sidebar and your content into a backwards column (so the sidebar shows up after scrolling past the content), which looks and feels OK as a quick & scrappy mobile fix.

    display: flex;
    flex-direction: column-reverse;
Put that in a media query to only target phones/small devices.
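Putting the pieces together, a sketch of the full rule (the `.row` class name and the 600px breakpoint are assumptions; note the valid CSS value is `column-reverse`):

```css
@media (max-width: 600px) {
  .row {
    display: flex;
    flex-direction: column-reverse;
  }
}
```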

Feel free to message me if you're confused and would prefer a fast PR, but you strike me as someone who wouldn't mind learning a bit & I'm in Morocco with limited internet.

edit: I would also move the branding (name & logo) above the content so people know who they're reading. And adjust the margins a bit. Frontend is Fun.

Thank you so much! Committed your suggestion in https://github.com/stapelberg/hugo/commit/3a7227fef47fc49550....

Let me know your PayPal, if you have one, and I’ll gladly invite you for a virtual coffee as a sign of my gratitude :)

All good! Feel free to shoot a small donation to the Internet Archive or we can grab a coffee if you ever find yourself in NYC.

Glad it worked for you!

"Put that in a media query to only target phones/small devices."

Tone: Honest question. What's that, exactly?

I've been trying to figure that out off and on for the last, oh, 5 or 6 years, and I keep bouncing off the problem as being too complicated to be worth it for my personal site. If there's a clean answer that has developed in the time since I basically gave up, I'd (again, no sarcasm) love to hear it. But I'd probably need a link to something; Google is clogged with old stuff here.

Media queries are ways to only apply a bunch of CSS on clients that fulfill some conditions.

If I write:

     @media (max-width: 500px) {
        p {
           font-style: italic;
        }
     }
It will turn the text in p tags italic on small screens.

Besides the lack of responsive sidebar, never justify text on the internet. Particularly for blog posts.

With hyphens enabled it's not that bad.

Reader mode in Firefox works like a charm.

You can also enable reader mode in Chrome if you turn on its flag in 'chrome://flags'.

Which flag is that? I don't see anything when searching for "read".

The content is very narrow on iPhone 6 as well, but Safari's readability view suffices.

I love Debian as a user. I considered becoming a DM then DD -- I read all the relevant documents and tested water with maintaining a package I used -- but ultimately gave up due to the bureaucracy and politics involved.

The last straw was this: https://lwn.net/Articles/704608/

(re the lwn link) Holy shit that is nuts.

Yeah that was painful to read. And all too typical.

The Arch User Repository (https://aur.archlinux.org/) seems to solve a lot of the collaboration issues. It is pretty painless to create PKGBUILD files to make a package and to upload a new one to the AUR. Most maintainers read the comments and accept patches on a timely manner, and there's even a way to forcibly relinquish a package if a maintainer is AWOL for significant time.
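For anyone who hasn't seen one, a PKGBUILD is just a small bash file with a few well-known variables and functions. A minimal, hypothetical sketch (the package name, URL and checksum are placeholders, not a real package):

```bash
# Maintainer: Your Name <you@example.com>
pkgname=hello-example            # hypothetical package
pkgver=1.0
pkgrel=1
pkgdesc="Illustrative example package"
arch=('x86_64')
url="https://example.com/hello"
license=('MIT')
source=("$url/$pkgname-$pkgver.tar.gz")
sha256sums=('SKIP')              # real packages should carry a real checksum

build() {
  cd "$pkgname-$pkgver"
  make
}

package() {
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}
```

Running `makepkg -si` next to this file builds and installs the package; publishing to the AUR is essentially a `git push` of the PKGBUILD plus its generated `.SRCINFO`.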

On the other side of the spectrum, I've found that the official binary repositories for Arch Linux suffer many of the same issues described in the article for Debian. Patches being ignored and collaboration or involvement being near impossible. Even worse, the few people in charge of the official repositories are allowed to basically remove packages from the AUR with no interaction with the AUR maintainer, for the purpose of "promoting" them to the official repositories. This has happened to me twice, and it resulted in what I think is a worse package in one of those cases.

Yo! Trusted user from Arch Linux.

>On the other side of the spectrum, I've found that the official binary repositories for Arch Linux suffer many of the same issues described in the article for Debian. Patches being ignored and collaboration or involvement being near impossible.

Well, we still rely on svn internally, so things are complicated to say the least. Even if we had things on git (which we are working on), I'm unsure if opening things up for outside collaboration like Gentoo, Alpine, Void and NixOS do is a good idea. You need the proper tooling set up to make sure this isn't a burden on the maintainers.

>Even worse, the few people in charge of the official repositories are allowed to basically remove packages from the AUR with no interaction with the AUR maintainer, for the purpose of "promoting" them to the official repositories.

Which is true. Some people email maintainers of complicated packages before inclusion, and some also give a heads-up in the comments of the AUR. However, this is all done only if the packager wants to; there are no rules here. The removal is on the grounds that AUR packages shouldn't overlap with official ones.

>This has happened to me twice, and it resulted in what I think is a worse package in one of those cases.

I'm interested in taking a look at this if you want :) foxboron@archlinux.org or just type in the comments.

> I'm interested taking a look at this if you want :)

I'd rather not, the thing in question was around three years ago and I don't think I even have my AUR PKGBUILD for comparison. It wasn't a cry for help, and I'm not naming packages or individuals for a reason. Just voicing some frustration at the process.

Content is never deleted from the AUR, so any PKGBUILD can be retrieved.

For me this is the telling part:

"""When I joined Debian, I was still studying, i.e. I had luxurious amounts of spare time."""

OSS has stopped (if it ever truly was) being a part-time endeavour. I know from bitter personal experience that one can only put up with a lot of bureaucracy and process if it is your day job - then you have time to get through the rubbish in order to find the diamonds.

How we (as a society now utterly dependent on OSS) manage this problem is on a par with how we manage journalism - these are bigger questions than I have easy answers to.

> OSS has stopped (if it ever truly was) being a part time endeavour.

I think that depends on what OSS crowd you want to run with. The part-time hobbyist sector may be, I think, stronger than ever before.

The difference now is that there exists the "commercial" OSS sector. That is certainly not part-time hobbyist.

There is some overlap between those two worlds, but they are very different and distinct worlds nonetheless.

You're the guy behind i3? Thank you very much. I love it.

I’m glad you like i3!

Still using i3 too! It just works great and never gets in the way.

Btw, you changed from .name to .ch, I noticed... Trying to naturalize? ;)

Baby steps ;)

My productivity changed drastically ever since I switched to i3. There is almost no friction when I interface with my computer due to it. Thank you.

i3 is the coolest thing ever.

While this is sad and painful to read, I can't say I'm surprised.

The problems listed are precisely the kind of problems that Red Hat strategically supports Fedora with, in terms of investment of resources. For all the hate Red Hat receives, it has consistently been a good community member by being willing to help Fedora in areas that it knows are hard and yet not 'cool' enough to attract volunteer contributions.

What has Ubuntu done for the Debian community along the same lines?

I've been using Debian / Ubuntu for many years, as much due to inertia and familiarity as anything else. And I have a lot of respect for the project.

If I wanted to start being a contributor to a distribution, which one would be the best to dive in to?

I would be interested to know the generally-accepted answers to this as well. What are the free/libre OSes which most folks enjoy contributing to?

It's a bit of an odd ball, but I loved contributing to the GuixSD project. I contributed multiple packages after learning Guile Scheme in a couple of days.

> If I wanted to start being a contributor to a distribution, which one would be the best to dive in to?

I tried to contribute a package to Fedora once upon a time...gave up in frustration after a while since it went nowhere after a bit of hoop jumping.

I'm sorry you felt that way. All I can say is that we've done a lot of work in recent years to drastically improve the quality of life for new contributors, especially packagers!

The process is pretty well documented here: https://fedoraproject.org/wiki/Join_the_package_collection_m...

If you have issues, feel free to hop into any of the Fedora communication channels[1][2][3][4][5] and ask for help.

[1]: https://fedoraproject.org/wiki/Communicating_and_getting_hel...

[2]: https://fedoraproject.org/wiki/Telegram

[3]: https://discord.gg/fedora (Yes, Discord!)

[4]: https://discussion.fedoraproject.org/

[5]: https://fedoraforum.org/ (Unofficial, but a strong, helpful group!)

I think Fedora is a great distribution to get started in. New stuff, cool tech, and decent, well-specified workflows across the board. :)

We also have a cool website that helps you figure out what you want to do and how to apply it: https://whatcanidoforfedora.org/

(Yes, it's inspired by the Mozilla one!)

The smaller ones are often easier to contribute to; port some packages to Void!

That's also where you make the least impact.

There is probably an impact graph with popularity * impact of change on the project * probability of change being accepted * log(delay of change reaching 80% of users).

For instance with Homebrew: high popularity, high probability of being accepted and low delay to reach 80% of users, so the impact of the change on the project is the main factor.

And a follow-up question: how would you get involved with it.

The post shares several pain points with Debian's slow, aging change-process and integration-infrastructure.

Open question: If Debian contributors feel the need to drop out and move on due to these pain points, are there less-painful Linux-distribution projects out there that are getting more of these pain points right they can flock to? Is this a sign that Debian needs to reform, or that other, newer distributions are outpacing it?

I like Debian, but I agree it's become too slow and bureaucratic.

Sadly, like most big organizations, it is hard to implement changes from inside.

In my opinion, most of Debian's issues stem from a centralized, imperative package manager where all packages need to be carefully kept in sync for things to work. Moving to something like Nix would greatly simplify development.

This is far from a new idea. In fact, it was quite thoroughly discussed in debian-devel mailing list back in 2008, 2013 and several other times [1].

[1] https://lists.debian.org/debian-devel/2008/12/msg01007.html

> it's become too slow and bureaucratic.

When I first used Debian in the 90s it had a reputation for being slow and bureaucratic.

And unlike most distros which were around back then, Debian is still here today, so it must be good for something.

That said, I agree that anything Debian- (and thus Ubuntu-) and FSF-related feels awkwardly baroque to work with these days.

Old email-based systems, bickering over politics and minor issues rather than the subject at hand, for weeks or months... and no standard CI?!?

With all that I too find myself spending my time contributing elsewhere.

It’s a shame, as these two organizations act as a foundation for sooo many things I rely on daily, but I just can’t be bothered.

> Debian is still here today, so it must be good for something.

It’s stable and I know they’ll still be around 5 years from now.

They also very rarely break anything on updates.

There are only three Linux distros I’d run in production: RHEL, Debian or Ubuntu (depending).

What is the problem with old email-based systems? This is an honest question.

For FSF and Emacs all bugs, discussions and contribution are handled through email.

Sending patch-files by email and then having to discuss/defend your patch for weeks, only to have to create and send a new one, leaving receivers to diff patch-files to see what is different...

Let’s just say in 2019 you expect something a little more sophisticated to be available to make your contribution easier.

There's no problem with systems which allow interaction by email. It can be handy, and if you like that, then the Debian BTS is convenient for responding to bugs as a maintainer. However, email-only systems which don't offer any other methods of interaction can be problematic.

email-only interaction can be massively inefficient. For larger changes, I might need to change the severities, tags, and follow up to dozens of bugs. For every metadata change I make, I need to check the reply to make sure the change took place. That means manually tying together sent emails with BTS replies and repeating any which failed.

I used to track all this on large pieces of paper! I shouldn't need a manual system to manage my interaction with an issue tracking system. When I press submit in any other system, the effect is immediate. This is hugely costly in terms of time and effort to do really trivial and mundane things. It's one thing to do it once, but what happens when you make 30 changes and have to wait 20 minutes for each request to round-trip? It's a logistical nightmare which can take several hours to complete.

If I was to highlight a single problem with the Debian BTS, it's that there was a failure to acknowledge its inefficiencies and look at other systems. The criticisms raised in this thread aren't new; they were known 15+ years back. There's a reason why the others are not using email as their primary mode of interaction, and it's a shame the BTS didn't get a decent web frontend to make it more efficient as well as more accessible.

The thing that I think Debian does best (which is the same thing that has kept me using Debian since about 1996) is stability. I don't think any other Linux distro (that isn't Debian-based, anyway) even comes close.

> In my opinion, most Debian's issues stem from a centralized imperative package manager where all packages need to be carefully kept in sync for things to work.

Hmm, in my opinion, this is a good thing, not an issue that needs to be solved.

openSUSE uses the Open Build Service[0] to build, well, openSUSE. OBS also supports other distributions etc., but it makes it fairly easy to put up a package.[1]

For RPM-based distros (e.g. openSUSE), you write a .spec-file, check it in via OBS's version control alongside your sources, and off you go. OBS builds the package (and pulls in dependencies as needed etc.) and publishes the result as a repository with GPG keys and all the jazz, which you can just add to your own distro, and which is openly visible, so everybody else can use your package(s), too.

OBS also supports forking existing packages, and you can merge them back together, which means you can fix something in an existing package (whether a distro-package or something somebody else put up) and if they accept the changes, congratulations, you just fixed something in the distribution.

This means a lot of building, compilation, versioning etc. is out in the open, and you always have the sources available on top of it.
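For a sense of scale, a minimal, hypothetical .spec sketch (name, URL and file list are placeholders):

```
Name:           hello-example
Version:        1.0
Release:        1%{?dist}
Summary:        Illustrative example package
License:        MIT
URL:            https://example.com/hello
Source0:        %{name}-%{version}.tar.gz

%description
A hypothetical package showing the shape of a .spec file.

%prep
%autosetup

%build
%make_build

%install
%make_install

%files
%{_bindir}/hello

%changelog
* Sun Mar 10 2019 Your Name <you@example.com> - 1.0-1
- Initial package
```

OBS then takes care of building this for each configured target and publishing the signed repository.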

As an aside: I doubt people will "flock" to openSUSE, since many people sneer at them for no good reason (YaST, still?!), but they do a lot of good work, are good upstream contributors (like RedHat and unlike Ubuntu) and some of the tooling is absolutely amazing, except that nobody really knows about it.

[0]: https://en.opensuse.org/Portal:Build_Service

[1]: https://fosdem.org/2019/schedule/event/distribution_build_de...

OBS is so good, Fedora (my distro of choice) has a clone of it called Copr: https://copr.fedorainfracloud.org/

I'm a BSD guy but needed a Linux distro last year for a specific project. OpenSUSE was remarkably good, stable, easy to use and update, and apparently free of the political crap that seems to come with many other distros.

I have recommended it to others who need a stable Linux distro with lots of well-maintained packages. Some of them looked at me like I was crazy. Preconceived notions are hard to overcome sometimes. Oh well.

Do you feel any concern when the owner keeps changing? I think that's part of the reason people keeping a distance.

NixOS/nixpkgs is exceptionally friendly and open to new developers, and even just a one-off patch is simple to do.

Indeed. I started using NixOS and Nix a bit more than six months ago and I was contributing patches within no time. One can just submit a pull request on GitHub (with the proper format) and a lot of aspects are automated: e.g. the impact of the change (in terms of packages that are affected) is automatically determined and someone with the right privileges can request a build. Some PRs linger around for a longer time, but most of my PRs were merged within the day. Also, I received useful feedback if a PR needed more polish.

What helped me a lot with Nix is that you can easily make your own local derivations. Derivations are usually short [1] and bear little risk with sandboxed builds enabled. Basically, you do a nix-shell -p mypackage, you get dropped in a shell with your derivation, but it does not affect the rest of your system.

[1] Example of a C++ library: https://github.com/danieldk/nix-home/blob/master/overlays/30...
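Inlined, a derivation like that might look roughly as follows (the names and hash are placeholders; `stdenv.mkDerivation` picks up a standard CMake build via `nativeBuildInputs`):

```nix
{ stdenv, fetchFromGitHub, cmake }:

stdenv.mkDerivation rec {
  pname = "hello-example";   # hypothetical package
  version = "1.0";

  src = fetchFromGitHub {
    owner = "example";
    repo = pname;
    rev = "v${version}";
    # placeholder; a failed fetch reports the real hash to fill in
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };

  nativeBuildInputs = [ cmake ];
}
```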

I am surprised openSUSE does not get more attention. I feel its infrastructure, with the Open Build Service etc., should be attractive to many?

Ugh, just saw your comment now. I wrote a bit more in a sibling comment about openSUSE. For some reason, people seem to either sneer at openSUSE (probably people who go way back?) or are simply unaware of anything it does and/or its existence. I briefly asked the people at the openSUSE stand at FOSDEM 2019 why they don't advertise things such as OBS more, and the answer suggested that some people basically don't see a point in it? It's a pity.

The last time I tried OpenSUSE, it gave me nothing but problems. Admittedly, this was many years ago and things may have changed -- but I have no real reason to give it another try to find out.

I switched all my servers to Ubuntu years ago when people were telling me "Ubuntu is no server distribution". I honestly never looked back.

Ubuntu is... interesting.

Watch out for what packages you rely on, and what repository those packages come from. If they're in universe, expect your bug reports on Launchpad to go unanswered or unresolved. The only way to get fixes in them is to go upstream to Debian. But note that not every bug in a universe package is actually fixable or broken in a Debian world, because the bug might be down to interactions between a decision made by Canonical with packages in the main repository, and those for the universe package.

Yes. For some reason people, even those running Ubuntu servers, are surprised to learn that universe has no guarantee to get updates for known vulnerabilities.

Ubuntu is currently divided into four components: main, restricted, universe and multiverse. All binary packages in main and restricted are supported by the Ubuntu Security team for the life of an Ubuntu release, while binary packages in universe and multiverse are supported by the Ubuntu community.


If you care about security, disable universe and multiverse on servers, or use Debian Stable if you want a Debian-based distribution.

I think ubuntu has done a decent job with universe by and large. I find it in much better shape overall than epel. A good way to judge the health of these auxiliary repos is to look at the age of chromium. Ubuntu is generally good with patching at least security issues in universe. epel is still running behind by a month or two despite the security disclosures. So RH ought to take a page from Canonical's book when it comes to epel and make it a more official thing than the entirely community driven thing it is right now.

I don't mind RH being slower as I take it that's what makes RH what it is. Old packages but most stable.

It's not about RH packages directly. They are fine. As with universe on Ubuntu, you quite likely need epel on RHEL for doing quite a lot of basic stuff. If you are able to get by with just the properly supported packages in each distro, this doesn't affect you. And that's where the two diverge: universe is in better shape since it is semi-official, vs. epel, which is completely unofficial/community-based.

Actually, since EPEL is often backports from Fedora, many people elect to keep them up to date with what's shipped in Fedora, so it tends to do better than other counterparts.

The closest counterpart is SUSE's PackageHub, which is often seeded by stuff going into openSUSE Leap that isn't part of SLE itself.

Just saw your other comment about Fedora. Thanks for Fedora and EPEL. I was just pointing out that EPEL would benefit from some recognition as a semi-official but ultimately unsupported repo from RH. Technically RH already has 'optional' channels which are already in that category, but EPEL should really be that. Some communication channel between RH and EPEL and QA so that stuff doesn't break on RHEL point releases and maybe some resources for security updates would go a long way in making RHEL more usable.

Are you a RHEL customer? If so, telling your salespeople this would not hurt. Just sayin' :)

We'd all love that. It's funny, because Red Hat does regularly source content from EPEL to pull into RHEL for various products, which is where these issues come from.

I would love it if Red Hat were to give it proper recognition. I honestly don't know why they haven't yet...

I think Ubuntu suffers equally from every single pain point that Michael describes in his post.

Given that ubuntu depends heavily on debian maintainers and governance, what exactly does this improve?

Ubuntu LTS distributions have been the perfect mix of new-ish packages and stability for us.

Are they who blindly use CentOS/RedHat?

Ubuntu specifically has a "server" version. I've been using it since 10.04 (almost 9 years) and never had to use anything else.

Improving the distribution developer experience was one of the early objectives of Ubuntu, and where it had the least amount of success. It's a pity, because perhaps the only useful thing you could get out of essentially forking Debian (in itself not a great idea, but say you wanted to...) is independence from its technical and social processes. Ubuntu's processes are different, but not wildly improved... primarily because Launchpad was designed from a database schema, not for users.

Ubuntu exploited a clear gaping hole in the distro market in 2004. There's still a gap in 2019, but it's a weirder market now than it was back then.

Thanks for the perspective! Ubuntu’s founding was before my time.

Personally, I'm annoyed at Debian still not having a Web frontend for bug filing. Practically all other distros have some kind of issue tracker site that allows filing bugs through it.

Using reportbug or e-mail feels way less comfortable in comparison.

Better to have fewer high-quality bug reports than more low-quality ones. The latter just teach you to ignore them.

It's not just about "low quality" bug reports. The entire system is baroque and messy to use even for experts. I've been using it for 2+ decades now.

As a maintainer, I frequently needed to change bug tickets. Tags, versions, dependencies, and other details. All need doing through the control@ email interface. Changes can take over half an hour to be processed and acknowledged. It's inefficient and tedious, and prone to error if there's a single typo. While I'm not a fan of web interfaces, bugzilla, JIRA, gitlab issues etc. are all a world more effective and efficient to use.
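For context, those changes are made by mailing plain-text commands to control@bugs.debian.org, one per line; a hypothetical message body (the bug number and values are made up) looks like:

```
severity 123456 important
tags 123456 + patch pending
retitle 123456 foo: crashes on startup when config file is missing
thanks
```

Each such mail is processed asynchronously and acknowledged by a reply, which is where the round-trip delay described above comes from.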

The Debian BTS does what it does well. However, the other systems all do a much better job. The Debian BTS is a couple of decades past its prime. Nowadays we don't need email interfaces to services like this, we have much simpler and more immediate ways to talk to services, which are even easier to script for non-interactive use.

Take a typical interaction. I need to respond to a ticket I didn't originally file or get CC'd on. With any other system, I'd log in, find the ticket, make a comment and any metadata changes, and press "submit". With the Debian BTS, I need to find the ticket, download the mbox mailbox by hand and open that in a mail client by hand, then find and reply to the relevant message(s) and make any other needed changes using the control interface. It takes much longer, and it wastes huge quantities of valuable time. It is not a pleasure to use.

I cannot overstate what a huge time sink it is to use the Debian BTS to work with dozens of tickets every day. It's an exercise in frustration.

So why didn't it switch to a better system yet?

Thank you for saying this. I’m so glad I’m not alone in my perception :)

Why download the mbox? Doesn't bugnumber@bugs.debian.org work?

Because you might need to reply to a specific email with all the same people CCd in who wouldn't necessarily be copied in by the bugnumber address. It also includes the appropriate message ID to make threading work. And you can quote the necessary bits to put the reply in context.

The fact that you need to jump through these hoops to respond so that people will see your reply is another reason why it's a really painful and inefficient workflow.

That falsely presupposes that only people with poor quality bug reports would be put off from reporting bugs due to the interface.

I've known people who can provide high quality, detailed bugs, decide not to because the extra hassles in reporting to Debian just aren't worth it to them.

And I've seen people who learn how to report bugs to the Debian bugtracker and then proceed to spend tons of time filing low-quality bugs now that they've acquired that skill, until (maybe) they get banned from the bugtracker.

And I've seen Debian package maintainers fail to file bugs correctly....

No, it assumes that quality ratio varies by reporting mechanism. 9 good reports and 1 bad report are better than 18 good reports and 82 bad reports.

Bad reports can be closed with a message. But you can't do much about good reports which never got filed because the process was unnecessarily complicated.

There are tools like reportbug which already help to filter out inappropriate use by asking simple questions and doing basic checks. I'm sure that could be further improved to help triage and improve submissions.

That's subjective - I would much rather spend a few minutes sorting out the 82 bad ones than miss 9 good reports. (Actually, I would much rather have 18 good reports and spend several minutes trying to see if there's anything of value in the 82 bad ones: see for instance the high-school kid who found the Apple FaceTime vulnerability and who couldn't figure out how to report it.)

One of the principles of the Debian Social Contract reads:

We will not hide problems. We will keep our entire bug report database open for public view at all times. Reports that people file online will promptly become visible to others.

Dissuading people from filing bug reports feels like it's not in the spirit of this principle. You're basically hiding problems because you don't think the problem reporter is worthy.

If only it were possible to sort a hundred reports in a few minutes.

I guess if you want to discourage ordinary users then this is the way to go:

You might have a massive problem with a default option, but it's not reported because the people savvy enough to handle the bug reporting can fix it themselves. But that might mean most people using your software are getting a bad impression and most ordinary users are put off.

Depends on whether you're trying to create an elitist grouping or create something to fill people's needs.

It's an interesting, though not unexpected, position that demonstrates technocratic societies can have the same problems of control of power that capitalist societies have; except instead of the elite controlling access to money resources, they control access to technological abilities: "fix it yourself" might be the parallel to saying to a homeless person "get a job".

It's an interesting revelation to me of how a meritocratic society can fail to be egalitarian.

Assuming there's no way to triage or filter?

I don't think that's empirically true - plenty of projects that have easy bug-reporting mechanisms (e.g., "open an issue on GitHub") don't ignore their bugs.

I doubt making filing bug reports uncomfortable is increasing their quality.

Sad to see it takes this to raise such issues.

It’s hard to make a change on the inside when so many people’s lives depend on not making that change.

Maybe Debian needs to go through an OpenWRT/LEDE-style split. That came out of a lot of resistance to change and general bureaucracy. In the end, they merged back after fixing the biggest problems.

Isn’t that Ubuntu?

Been using Debian more or less exclusively since Potato. And I'm a bit surprised by the tone of these comments and even more by the content of the blog post. To me, Debian has become an irrelevant fossil, but not because of any technical factor such as those listed here or there (lack of powerful tools to work across package silos, bug reporting tools, communication tools, packaging tools...). Indeed, I tend to believe that not following the industry's /best/ practices is actually an asset in many cases, sometimes even a Debian hallmark that has served the project and its users well (for instance, refusing to stick to release dates and favouring rock-solid new releases instead). Indeed, "they still use X while everybody else is using Y by now" should sound very suspicious to many ears.

To me as an outsider, the obvious cause of Debian's obsolescence is, and has been for more than a decade, the growing bureaucracy and consequently the lack of new blood and innovation. This opinion was anchored the day when, participating in a Debian bug squashing party surrounded by similarly minded hackers, we were approached by a DD asking if he could check our identity papers in order to "simplify the process" of accepting our fixes.

Organisations, and companies too to some degree, can sometimes be best described by what they stand against. Since the beginning, Debian has been standing against a hostile environment: I mentioned already the industry's bad practices, but also part of this hostile outer world were the negligent upstreams, the unaware users, the cheating corporations and the misguided FOSS enthusiasts. Some bureaucracy was certainly in order to protect against them all. But I'm afraid part of Debian's legacy will be that the DD personifies the FOSS bureaucrat, with his 300-page packaging manual and 30-step contributor approval processes, in the cultural pantheon of the distributions of the future.

10+ year emeritus DD here -- Honestly, I think "modern" dev culture is quite different from the core of the Debian community, and this is both a strength and a weakness. I can't say that I want Debian to look anything like the node ecosystem.

Not saying there isn't enormous room for improvement -- just be careful not to conflate that with "it needs to work more like homebrew" or whatever -- yikes

Look at some of the other examples people are bringing up to compare against, though. Gentoo definitely predates the modern dev culture, for example.

DD here. "modern" dev culture feels more like a Silicon Valley / HN bubble, where people believe that all companies "move fast and break things".

Thankfully, most companies (including FAANGs where I worked) are way more careful than that.

Flashy UIs change every day, really critical systems are kept very stable...

Another issue, IMHO, is that every time somebody proposes a change -- for example, Gitlab + Gitflow instead of sending patches in some awkward format via mail -- there are a few people who already have their own workflow based on (probably) mutt combined with tens of scripts, and they don't really want to change that workflow for the greater good; instead, they just find reasons why the new proposal sucks and keep using what they already have.

This developer is largely responsible for the Go ecosystem in Debian, where they develop against HEAD. That's pretty hard and results in a lot of breakages. Go developers don't give a shit and push a lot of the externalities onto distro packagers (volunteers, in Debian's case) or onto Google developers (who get paid a huge amount to do bullshit engineering that nobody else cares about).

We're making a lot of progress with Debian Rust packages and have automated away 90% of the ordinary Debian crap - which is needed in the general case but not for Rust where the constraints are very well-defined by Cargo - you only have to maintain two files (d/changelog and d/copyright) for the vast majority of Debian Rust crates.
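For context, the Debian changelog is a plain-text file with a rigid entry format. A sketch of what maintaining d/changelog for a crate looks like -- the crate name, version, and maintainer here are made up for illustration:

```text
rust-somecrate (0.4.2-1) unstable; urgency=medium

  * Package somecrate 0.4.2 from crates.io

 -- Jane Maintainer <jane@example.org>  Sun, 10 Mar 2019 12:00:00 +0100
```

The trailer line's two-space separators and RFC 2822 date are mandatory, which is part of why the team automates generating these files rather than writing them by hand.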

I wonder how adoption of https://salsa.debian.org/public is going?

It seems designed to address a lot of these issues?

Thank you, Mr Stapelberg. Some people in the OSS community can be overbearing and dealing with them on not one but two popular projects is something I, as a user (of both), want to thank you for.

I haven't made any Debian packages -- I hear it's a lot of work -- but these look good, not bad:

* Granting personal freedom to individual maintainers

* All maintainers need to read up on what the new thing is, how it might break, whether/how it affects them, manually run some tests, and finally decide to opt in.

I routinely make Debian packages. It's not actually that hard -- unless you're intending to submit them for inclusion in the Debian repository. I don't do that, so I can ignore a lot of the most painful stuff.
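For packages you only distribute yourself, the flow can be as small as a directory tree plus a control file, built with dpkg-deb. This is a minimal sketch; the package name and paths are made up for illustration:

```shell
set -e

# A binary package is just a filesystem tree plus DEBIAN/ metadata.
mkdir -p hello-deb/DEBIAN hello-deb/usr/bin

# DEBIAN/control is the only mandatory metadata file for a binary package.
cat > hello-deb/DEBIAN/control <<'EOF'
Package: hello-deb
Version: 1.0-1
Section: utils
Priority: optional
Architecture: all
Maintainer: Example Person <example@example.com>
Description: Minimal example package
 Built directly with dpkg-deb, bypassing the full source-package tooling.
EOF

# The payload: an executable installed to /usr/bin.
printf '#!/bin/sh\necho hello\n' > hello-deb/usr/bin/hello-deb
chmod 755 hello-deb/usr/bin/hello-deb

# Build the .deb if dpkg-deb is available (it is on any Debian-ish system).
if command -v dpkg-deb >/dev/null 2>&1; then
    dpkg-deb --build hello-deb
fi
```

This skips maintainer scripts, lintian checks, and the whole source-package format, which is exactly the "painful stuff" you get to ignore when the package never goes near the official archive.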

So I've been involved in Fedora for as long as Michael has been involved in Debian, and I have attempted branching out into other distribution communities over the years.

To this day, the Debian community is the only community where I have not been able to get past the initial stages to get involved. And you don't have to look too hard to see that I'm in quite a few communities...

There are a lot of parallels between Debian and Fedora as it was when I started in the project over a decade ago.

The clear divider in how the two projects evolved was that Fedora elected to implement a lazy consensus model for decision-making, and developed a culture with a bias for action and improvement. Debian requires full consensus (generally) and has a culture that favors inaction. This difference is what has kept me in the Fedora Project for over a decade, and I still enjoy working in that community and doing my part to improve the greater Linux community and ecosystem.

Over the years, Fedora shed a lot of its more complex processes and developed simpler tools and supporting infrastructure to make it easier to use and contribute to the development of the distribution and outlying projects. Over the years, I've seen us replace our buildsystem infrastructure[1][2][3], develop APIs and protocols for weaving tools together[4], migrate SCMs and create the first ever binary store system for Git[5][6][7], develop tools to simplify complex tasks[8][9][10], and build replacements to proprietary or overly-complex systems and support open standards and interoperable systems[11][12], all to benefit our users, our contributors, and our ecosystem. We've taken a similar hammer to our processes and structures so that we enable a wider range of people to be involved, representing their concerns and making our community healthier than before.

We're still continuing down this path of making it easier for people to leverage the Fedora Project resources for the benefit of the community with things like COPR[13], CI on packages with Koschei[14] and CentOS CI integration for projects and packages[15], etc.

That's not to say Fedora is perfect, mind you. It still has some technical and process warts. But I'm proud of the fact that our community is still actively trying to improve our processes, our tools, and our distribution. We're not afraid to make things better, and our community generally wants to make the Linux world a better place.

[1]: https://fedoraproject.org/wiki/FedoraSummit/NewBuildSystem

[2]: https://fedoraproject.org/wiki/Infrastructure/CoreExtrasMerg...

[3]: http://koji.build/

[4]: https://fedmsg.readthedocs.io/en/stable/

[5]: https://fedoraproject.org/wiki/Dist_Git_Proposal

[6]: https://fedoraproject.org/wiki/Dist_Git_Project

[7]: https://github.com/release-engineering/dist-git

[8]: https://www.mankier.com/1/fedpkg

[9]: https://bodhi.fedoraproject.org/docs/

[10]: https://mirrormanager.readthedocs.io/en/latest/

[11]: https://pagure.io/pagure

[12]: https://ipsilon-project.org/

[13]: https://copr.fedorainfracloud.org/

[14]: https://fedoraproject.org/wiki/Koschei

[15]: https://fedoraproject.org/wiki/CI/Pipeline

As a Debian Developer, I often point people to Fedora to show you can be a community distribution very similar to Debian and still be modern in your practice. Your post is insightful.

Heh, and I still forgot stuff. I forgot that we created HyperKitty[1], the Mailman 3 frontend that offers web forum style interaction workflows while still remaining a mailing list for people used to that model. This was precisely because mailing lists have historically been horrible for new people to interact with.

[1]: https://hyperkitty.readthedocs.io/en/latest/

The Fedora Project really encourages participation from its community. They can definitely steer the ship a lot more quickly.

> Gmane used to paper over this issue, but Gmane’s availability over the last few years has been spotty, to say the least (it is down as I write this).

The Gmane web interface is not just down but shut down for good: https://lars.ingebrigtsen.no/2016/07/28/the-end-of-gmane/com...

So this issue won't get better without Debian doing something themselves.

It's a shame that the only thing the Gmane v2 effort [0] yielded is a couple of blog posts. Why did they stop so abruptly?

[0] http://home.gmane.org

Debian user for 20 years here, love it.

Debian is the base for so many other projects; it needs to keep going strong.

In the meantime, Debian in many aspects is a bit "old" now; its infrastructure and the way it does things need to evolve. I'm not a developer per se, and it's not easy to become one either.

There are so many ways to make packages, and it's hard to pick the "best" one, for example.

I hope Debian can reform its model to make it even better; there is just so much archaic baggage from the past for newcomers.

Hi Michael,

Thanks for writing this. I wrote a response here: https://changelog.complete.org/archives/9971-a-partial-defen...

The tl;dr version is I agree with you about some of the things you mention, but also feel like there's an element of personal preference for web-based tools showing through.

I have a naive question (I'm a long-time Debian user, but never really tried to contribute): why isn't all of Debian in a single git repository? I'm not suggesting vendoring all the third-party code, only all the packaging information, like, say, LibreELEC does (and I assume many others do).

It's all down to history. When Debian was started, individual maintainers owned their packages completely, and they were often not kept under version control. This was a practical necessity for distributed development in the mid '90s. They uploaded the source and binaries when they uploaded a new Debian package version, but the actual source packages weren't necessarily under any sort of version control other than the Debian archive storing the uploaded versions.

Today, keeping packages under version control is considered good practice, but even today it's not mandatory as far as I'm aware.

While other systems, like the BSD ports, live in a single repository, this complete separation has had its advantages. The system is almost completely modular, with all the interdependencies explicitly documented, and it's easy to add third-party packages. Compare that with the BSD ports: to add extra packages you have to fork the entire thing, because the repo doesn't just include the packages, it includes all the build infrastructure. That makes building third-party ports a bit more of a pain, because it wasn't considered important, while for Debian it was a fundamental design goal. On the flip side, making a patch for the BSD ports works the same for every single package, and it takes just a few minutes to attach one to a bugzilla ticket.

Debian actually vendors all third party code. Their source archive is the best place to search if you are looking for old but good software that is not available any more from the original author.

The simplest answer is that it predates repositories of that size being a practical idea, and it's not changed because of institutional inertia.

Thank you for all of your hard work.

Compared to NixOS, everything else seems arcane, and compared to Arch's pacman/makepkg combo, everything else seems overly complicated.
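For a sense of why Arch packaging gets cited as simple: a PKGBUILD is just a bash file setting a few variables and two functions, which makepkg sources and runs. A hypothetical example for an imaginary "hello" tool -- all names and the URL are placeholders:

```shell
# Hypothetical PKGBUILD for an imaginary "hello" program.
pkgname=hello
pkgver=1.0
pkgrel=1
pkgdesc="Example hello program"
arch=('any')
url="https://example.com/hello"
license=('MIT')
source=("hello-$pkgver.tar.gz")

# makepkg calls build() with $srcdir pointing at the unpacked sources.
build() {
  cd "$srcdir/hello-$pkgver"
  make
}

# package() installs into $pkgdir, which makepkg then archives into the package.
package() {
  cd "$srcdir/hello-$pkgver"
  make DESTDIR="$pkgdir" install
}
```

The whole contract fits in one file and one tool (`makepkg -si`), versus Debian's split across debian/control, debian/rules, debian/changelog, and several competing helper stacks.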

So some guy realizes that the vast resources and top-down direction available at large corporations don't translate to a huge open-source project (which is itself composed of tons of open-source projects)? And it took him 10 years to realize this?

Seriously though, I'm amazed at how well Debian just works. And I know it's because enough people are willing to put up with these types of frustrations. Thanks for the hard work.

I’m fully aware of how spoilt one can be as a developer at a large corporation, but I believe I have toned down my expectations reasonably, and still felt compelled to write this article and step down.

I'm curious as to what the author would think of this in contrast to Debian:


After my 10 years of Linux experience, I can say: community Linux distributions suck. It's fun when you can upload a couple of packages, and not fun when you need to debug the installer or figure out why it breaks on upgrade. When you do this professionally, your salary is a great motivator.

Not really a fan of "I'm leaving X because of free time and also this 20-point list of why you suck."

I’m sorry you got that impression from the article. I’m not leaving over less free time, I’m leaving because of the issues I describe. Reduced free time amplified some issues, though.

Why not?

Comes off to me as a guy who's just tired of the project after 10 years. Understandable that he finds annoyance and frustration in so many things. It's like a marriage that has reached the point where one or both partners have decided they don't want to be together anymore.

As an outsider, to me the list of complaints sounds kind of petty and whining. Better to just say "I'm moving on, I wish everyone the best" and be done with it.

You should try patching Debian packages. I too am an outsider, but I have had to modify Debian packages, and it's brutal; every package is different, and the number of entry points into the tooling is mind-boggling. There are bugs I've never filed over the years because I don't understand the particular package's process.
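For reference, the commonly documented flow for patching a 3.0 (quilt) format source package looks roughly like the sketch below, wrapped in a function rather than executed since it needs network access and build dependencies; "somepkg" and the file path are placeholders. (And as the comment above notes, plenty of packages deviate from this.)

```shell
patch_debian_package() {
  apt-get source somepkg              # fetch and unpack the source package
  cd somepkg-*/
  quilt new my-fix.patch              # start a new patch on the quilt stack
  quilt add src/somefile.c            # track the file before editing it
  "$EDITOR" src/somefile.c            # make the actual change
  quilt refresh                       # record the diff into debian/patches/
  dch -i "Fix whatever was broken."   # append a changelog entry
  dpkg-buildpackage -us -uc           # rebuild unsigned binary packages
}
```

The pain point is that this only covers one of several layouts: native packages, packages patched in the upstream tree, and the various debhelper/cdbs/dh eras each behave differently.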

I'm sure it's true, all I'm saying is that if you reach the point of walking away, just walk away. A list of grievances at that point is just going to be seen as a parting shot rather than having any real chance of motivating change.

I think the tone of the article was pretty stoic, so I didn't take it as "whining". I believe he just sees no future in an ever-growing Debian bureaucracy and decided to voice the reason he is moving on. I've been with Debian since its inception and can relate to what he is seeing.

But a public list probably does have a better chance of motivating change, and maybe he is still trying to help. Obviously there are different ways to air your grievances; privately might be better, sure. I'm not sure if you are advocating leaving without even giving reasons in private. That, to me, would be an actively unhelpful way to leave.

I think posting publicly has the advantage that other projects can cite this when having discussions of how to set up their infrastructure.

The article mentions he's made attempts to fix some of those issues and the problems he's encountered.
