DUR: The Debian User Repository (hunterwittenborn.com)
86 points by luke2m on June 26, 2021 | 102 comments



I'm excited to see someone is working on this. I have considered switching from Arch to Sid in the past, mainly so that I could run the same OS on all computers and because I like the DFSG. Two things have stopped me:

* It's so much harder to build your own software packages on Debian. Even patching an existing Debian package can be painful, and having to write your own packages is even worse. I'm sure a dedicated enough user can learn to use the Debian packaging software proficiently, but it's a huge barrier to entry when I have something as easy as Arch's PKGBUILD syntax at my disposal. It looks like this project is interested in solving this problem: rather than using Debian's packaging tools for the DUR, it actually appears to literally be using makepkg with a Debian conversion script running on top. [1]

* The systemd integration is currently quite bad. It's probably the source of a bunch of user complaints about systemd. Debian's systemd install currently still supports [2] and uses init scripts and update-rc.d, the very thing unit files were supposed to take us beyond. In fact, a pretty minimal Debian installation I have on a server has 14 init scripts, while my Arch install doesn't even have an /etc/init.d directory! On the whole, Debian administration is a lot more complicated than on Arch because of the relative complexity of the architecture.

[1] https://docs.hunterwittenborn.com/makedeb/makedeb/intro

[2] https://wiki.debian.org/Teams/pkg-systemd/Integration


>It's so much harder to build your own software packages on Debian. Even patching an existing Debian package can be painful, and having to write your own packages is even worse.

I can't say that's been my experience, once I figured out dh_make [0], which made things pretty easy by handling most of the boilerplate. Rebuilding an existing package is as simple as doing `apt source` and then `dpkg-buildpackage` (you may have to do `apt build-dep` first too). I agree with the rest of your comments though.
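
Roughly, the flow being described looks like this ("hello" is just a placeholder package name; `apt source` needs deb-src entries in sources.list):

    sudo apt build-dep hello      # install the build dependencies
    apt source hello              # fetch and unpack the source package
    cd hello-*/
    dpkg-buildpackage -us -uc     # rebuild the .debs without signing them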

Edit: I haven't looked into the details of how it works but I would imagine that YMMV with trying to use makepkg that way. The hard part here isn't getting the packages to build, it's in reconciling the library dependencies across all the tens of thousands of packages that make up both distros.

[0]: https://blog.packagecloud.io/eng/2015/07/14/using-dh-make-to...


> I can't say that's been my experience, once I figured out [dozens of moving parts]

That's the whole point. Nobody doubts that the Debian tooling works (it evidently does), but the learning curve to get to that point is massively steeper and longer.

Makepkg you can figure out in an hour, at most an afternoon for the finer details like custom patches; reliable Debian packaging takes days or weeks to understand and set up.
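
For comparison, a complete PKGBUILD for a simple autotools project is roughly this (everything here is a made-up placeholder, not a real package):

    # Hypothetical PKGBUILD for a trivial autotools project; names and URL are placeholders
    pkgname=hello
    pkgver=1.0
    pkgrel=1
    pkgdesc="Example package"
    arch=('x86_64')
    url="https://example.org/hello"
    license=('GPL')
    source=("https://example.org/hello-$pkgver.tar.gz")
    sha256sums=('SKIP')

    build() {
        cd "$pkgname-$pkgver"
        ./configure --prefix=/usr
        make
    }

    package() {
        cd "$pkgname-$pkgver"
        make DESTDIR="$pkgdir" install
    }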


I don't think it took me that long. I said this elsewhere but to me that is mostly a documentation problem, not anything to do with the underlying technology or the package format.


When I was a teenager, after running Arch for about a year in part to take advantage of patched/bleeding-edge copies of WINE from the AUR, I decided Arch sucked and tried out Ubuntu (I'd mostly used Gentoo, Sabayon, and openSUSE previously).

Recreating the patched packages and setting up my own PPA for them was pretty much trivial. It did not take a whole day to figure out.

The Debian system is a little clunky, but it is not hard. If you're a user of Ubuntu or Fedora or openSUSE or whatever, learning how to create DEB and RPM source packages and upload them to OBS or Launchpad is way less work than setting up Arch for the first time.


Disclaimer: I've been an apt shadow for a couple of years and have run some PPAs on both Launchpad and private servers.

Personally, I think Debian's upstream approach of not including library headers with the binary builds is wrong.

Nobody has time to deal with over 30 GIGAbytes of disk space that is only there because I wanted to rebuild a damn 20 kB library. That is a total fail, architecture- and build-system-wise.

As long as Debian separates the distributed binaries from -dev packages that include all the source code and headers, I would not recommend it for software development.

This is the exact cause of PPA fatigue, where end users sometimes have dozens of versions of the same library installed, which is also an absolute security nightmare to begin with.

Reproducible builds are worth nothing if everybody has to use a PPA from some random untrusted blog post on the internet (omgubuntu, anyone?) because Debian's packages are too outdated. Usually those PPAs are heavily outdated themselves, as they are built once and then abandoned for years.

Libraries are not upgradeable due to how other packages depend on them. They cannot be rebuilt without having to rebuild the whole operating system (have to rebuild X again? Well, gonna stop here then.)

Sorry for the rant, but I think in order for DUR to work in practice, these conceptual flaws have to be fixed upstream first.


> As long as Debian separates the distributed binaries from -dev packages that include all the source code and headers, I would not recommend it for software development.

Is that the case for some specific languages? -dev packages for C and other compiled libraries do not include the full source code of the library. Normally they only include the headers, pkgconfig, and similar files.
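
You can check for yourself with something like this (zlib picked arbitrarily):

    dpkg -L zlib1g-dev    # list the files an installed -dev package ships
    # typically: headers under /usr/include, a .so symlink, a static lib,
    # a pkg-config .pc file and docs -- no upstream source code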

> Libraries are not upgradeable due to how other packages depend on them.

Got some examples where it's a problem and not an actual dependency issue?


> Got some examples where it's a problem and not an actual dependency issue?

Try to recompile mesa or say, just the demos (aka glxgears) :) Worst case scenario for Debian dependencies.


I meant an example of what the issue is. Glxgears seems to have reasonable dependencies (https://packages.debian.org/buster/mesa-utils); gl1 and glu1 are not even restricted on versions.


I regularly recompile mesa on my Debian-based phone and I'm not exactly sure what you mean. It goes painless.


So you would rather have them waste that exact same amount of space on every computer, just so you don’t have to download it when you want to build a package?


> So you would rather have them waste that exact same amount of space on every computer, just so you don’t have to download it when you want to build a package?

Header files are literally less than a couple of kB per package, and megabyte-sized headers are the exception for huge frameworks like Qt or GACL. Arch packages don't ship the source codes, only the binaries (.so files) and headers (.h files).

Overall, the /usr/include folder on my system holds less than 230 MB, which is a damn lot because I have all kinds of compilers, programming languages and libraries installed. I even have 32-bit and Atmel libraries installed because I do embedded reverse engineering and cross-compiling development for other architectures; so I also have gcc, LLVM, Rust, Go, C/C++ and other languages and their dependencies installed. It's very likely much less than that for others.

If you cannot afford that, you should maybe choose another, smaller distribution like "Damn Small Linux", Alpine or similar. Debian wouldn't be a good choice for that use case anyway.


I’m not sure what the argument is here. Either the header files you have to download are big so you shouldn’t have them on your system if you don’t need them, or they’re not big so it’s not a problem to just use the apt command to download all the header packages you need to build the package you want.


> Arch packages don't ship the source codes

Neither do Debian's -dev packages.


I don't have any Debian machine that doesn't end up with build-essential and dozens of header packages installed; it just comes up too often. Adding the header files directly to the packages saves time and frustration, and probably even saves space since a lot of these packages are 50% overhead (dpkg meta files, db entries, etc.) by size.


I don't install build-essential on a server, in a container, on a VM, etc. It really matters in those places. If you combine them then you lose the ability to deploy a small rootfs.


Servers, VMs etc. always end up getting build-essential to handle random python/ruby/npm/... package installations, as the shipped versions are hopelessly outdated and creating your own debian packages for all of them is about as enjoyable as performing appendix surgery on yourself.

Multi-stage builds do make it a bit easier to get rid of them for containers, yes, but during the build stage it's still required.


I refuse to believe that these meta files and database entries are occupying anywhere near a relevant amount of space.


Relatively.

Absolutely, it's negligible - because the space consumption of header files is also negligible. It's such a bizarre design choice to split a single 30-line .h file out into a separate package.


I'll try to respond to some of your comments, but I'll admit I'm always somewhat confused by rant posts. I never really understood PPAs; that seems like an Ubuntu thing that was never really suggested for use on Debian. If you want to stay on the bleeding edge for packages, the suggestion would be to use Debian sid.

I'm not sure what you mean by 30 GB of memory. How is that related? And how would merging the -dev packages into the library packages solve it? IMO there are problems with the Debian approach, but I wouldn't say that's one of them -- a worse problem to me is that things like cross-compiling/multi-arch mostly require chrooting, which is a lot less convenient than the approach used by something like Nix.


You don't need to chroot for cross-compiling; Debian supports multi-arch for most of the packages. You just install the cross-compiler, the cross-libc and all the development packages for the architecture you're building for.
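
A rough sketch of that, using armhf as an example target:

    sudo dpkg --add-architecture armhf           # enable the foreign architecture
    sudo apt update
    sudo apt install crossbuild-essential-armhf  # cross-compiler + cross-libc
    sudo apt install libssl-dev:armhf            # example: a target-arch dev package you build against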


Last time I tried that I had bad problems with package conflicts trying to overwrite files, but it was a while ago and maybe I was doing something wrong; I'll give it another shot.


Yeah, I'm vaguely aware of various wrappers that try to make the process easier; I should probably give those a try. Building an existing package is certainly very easy, but I've had problems trying to download the source, apply a patch, and then rebuild it. In Arch this is just "makepkg -o" then "makepkg -e" after you patch the source.
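
That is, roughly:

    makepkg -o    # download/extract the sources and run prepare(), where patches are usually applied
    # ...edit whatever you need under src/...
    makepkg -e    # build again using the existing, already-extracted sources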


Really? For me it's literally just `dpkg-buildpackage` after patching the source. I never had any problems with it.


I would agree that Debian packaging is unnecessarily complicated. The raw tooling is tedious, so various wrappers have been created. In the wild you see a mixture of wrappers and lower-level packaging, making everything more complicated for the occasional user.

I started with Debian (not as a packager, but just as a technical user who occasionally wants to do his own stuff). Years later I was forced to use RPM and was surprised that things could be so straightforward. Yet more years later I was forced to look at Arch and was surprised again: hey, you can understand all of it in just a couple of hours.

Accumulated experience might result in a slightly biased perception. But I still believe the described gradient exists.


I can't say I agree with that conclusion, if you're just applying a patch then you don't need to edit the low level packaging. It doesn't affect the occasional user at all.


To be honest, you still do need to grok the difference between native and quilt packages, and, in the case of the latter, know that you need to put your patches into `debian/patches/` instead of directly modifying the source tree. And if you started by cloning the packaging from a git repo, you may need to know what pristine-tar is as well.

A well-written tutorial will explain that to you quickly, but there's definitely more friction than with PKGBUILD files where you can pretty much just look at the contents of the file and edit it directly without second thoughts.
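
For the quilt case, the dance is roughly this (the patch and file names are made up):

    # inside the unpacked source tree of a "3.0 (quilt)" package
    export QUILT_PATCHES=debian/patches
    quilt push -a              # apply the existing packaging patches
    quilt new fix-foo.patch    # start a new patch
    quilt add src/foo.c        # register the file you're about to change
    # ...edit src/foo.c...
    quilt refresh              # write the diff into debian/patches/fix-foo.patch

Or just let `dpkg-source --commit` create the patch for you, as discussed further down the thread.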


The difficulty of figuring out exactly what you need to do is part of the problem. It's hard to shake the feeling that you don't really grok the build system.

But in any case it's never been as easy as just running that one command for me. Actual example from my bash history:

    $ apt source packagename
    $ vim path/to/broken/file.c
    $ dpkg-buildpackage
Now I get an error: "aborting due to unexpected upstream changes".

    $ dpkg-source --commit
Make a changelog entry for the changes you made

    $ dpkg-buildpackage
This fails with an inscrutable error "dpkg-buildpackage: error: failed to sign .dsc file"

If you know what you're doing you realize that you can just ignore this error, it doesn't affect the built package. If you don't, you have to search around and then build it again with

    $ dpkg-buildpackage -us -uc
So it's not, like, the worst thing in the world but it doesn't go out of its way to be intuitive either. It's hard to find good help on what to do (the man pages are arguably too detailed to be useful to people who just want to make small changes), and hopefully you didn't follow a rabbit trail with debuild.


What package and how complex is it?


I'm not sure I understand the question? This is for any package. Of course you will run into trouble if you introduce breakage, like removing files that the build script expects to be there, or causing merge conflicts with patches applied by the build script. But those breakages can happen with any package manifest, including PKGBUILD.


When you say making a Debian package is easy, I want to know what kind of project it is you have experience packaging, like what type of assets it has, what technologies it uses. How many languages, how many dependencies, how is it linked? That sort of thing.


This is just for arbitrary autotools, cmake and meson projects using any language supported by those build systems. The sizes and dependencies vary. I don't really see it being much harder; the Debian developer documentation is really convoluted, but that's a different problem.


makepkg -o already applies patches. And makepkg -e after it is like calling it normally, if you don't touch anything in-between.


The difference there is that debhelper will usually try to keep its own modifications temporary, and otherwise keep the source in the same state as the original upstream source. So that way you know when there is going to be a merge conflict.


> The systemd integration is currently quite bad. It's probably the source of a bunch of user complaints about systemd.

I haven't used plain Debian, but this has been my experience with Ubuntu. Almost all the problems I've run into with systemd have been because of an interaction with init.d scripts that should be systemd units, or similar. My biggest complaint is that while systemd-resolved is supposed to be optional, on Ubuntu it is practically impossible to fully disable without breaking things, and disabling is officially "unsupported". On archlinux, if you want to use dnsmasq instead it is just a matter of disabling one service and enabling the other.
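
On Arch that swap is basically just:

    sudo systemctl disable --now systemd-resolved   # stop and disable the stub resolver
    sudo systemctl enable --now dnsmasq             # start dnsmasq instead
    # then make sure /etc/resolv.conf points at 127.0.0.1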


This is actually something that keeps tempting me to try arch, since it having gone all-in on systemd early on would hopefully give me a more 'native' systemd experience and help me figure out which of my gripes are with systemd and which of them are with how-$distro-added-systemd.

Thing is, I'd only want such an install for playing around with occasionally and, as the argument upthread makes pretty clear, "doing a bulk upgrade occasionally" is something that really isn't regarded as a sensible way to run Arch.


I do my upgrades every few weeks, for what it's worth, and I've had the same Arch install since 2013. The size of the upgrade should very rarely have any effect on whether it succeeds; you're just more likely to have missed some major changes if you're upgrading without checking the latest news from the website.

Partial updates are also unsupported, and in my experience that's the most common way people break things.
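
For clarity, a "partial update" is something like this:

    pacman -Sy some-package   # refreshes the sync DBs but upgrades only one package -- unsupported
    pacman -Syu               # the supported way: refresh and upgrade everything together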


This is off-topic, but the reason why I stay away from Arch is that it has not once, but twice, just fucked itself from right under me.

The first time around I was just getting up and running, and as I installed python3 there was some mismatch between some very core libraries. I couldn't roll back, since the library in question wasn't available anywhere, but the python3 package (or something that came along with it) was linked against the old one. The only "work-around" I found was to symlink the current lib to the old one's path.

The second time I was just running a bog-standard fetch & update. Then I rebooted, as is normal after you update, and it no longer booted.

Arch is a fine hobbyist distro and you for sure have to live on the edge, but I wouldn't use it for anything you expect to be able to use every day at a moment's notice.


> Arch is a fine hobbyist distro and you for sure have to live on the edge, but I wouldn't use it for anything you expect to be able to use every day at a moment's notice.

I disagree 100%. This laptop I'm typing from now was installed 2014-07-06, it's my main personal machine with all the random things your typical desktop does and it works great. I similarly used a work laptop almost as long until I was forced off Arch for compliance reasons this year (sigh). Your anecdotal failures are not everyone's experience, definitely not mine.


>Your anecdotal failures

First of all, how are these MY failures and not failures of the Arch ecosystem? Running `pacman -Syu` on any given Friday should NOT leave you with a machine that no longer boots. Installing very stable software such as fucking python3 shouldn't break literally everything else that is linked to the same lib. Everything is anecdotal, but trying to use that to dismiss literally OS-breaking incidents feels silly.

If I type in `apt update && apt upgrade -y` I can be sure that my computer will be usable after a reboot. I once ran into an issue where the next kernel update didn't boot, so Ubuntu booted me back into the previous version and told me the issue. This is exactly the kind of thing I want my OS to protect me from. I am not a kernel developer and don't have sufficient knowledge or interest to get into that. I also don't want to be afraid to update my software.

I have nothing against Arch or its users, but I wouldn't recommend it to anyone who actually needs to use their computer and isn't willing to spend a day every now and then fixing their OS.

Just as a side note on this rant: My experience is by no means unique. The whole reason I even gave Arch a second try (after the python3 library ordeal) was that my colleagues are using Arch as their daily driver, but even with them every now and then I hear how they have to fiddle with their packages to make things work/compile or that they updated something and something broke (often it is audio).


> Running `pacman -Syu` on any given Friday should NOT leave you with a machine that no longer boots

Says who? That's pretty much not what Arch tries to guarantee at all. You're at the very least supposed to read the announcements, and you should upgrade often, as otherwise the end result is undefined. Some distros like Debian support partial updates; Arch very explicitly does not.

If you use Arch the way it's intended to be used, you're not going to have to spend a day every now and then to fix your OS, but if you don't, you will. I've used it as my work system for years, and every time I broke it, it was my own fault.

And if you don't want to use it the way it's intended, then just use another OS that better matches your expectations. I don't use Arch on my server after all.


Sorry, what? So you have to go read a news site to see if it is OK to update your packages? What the hell is the point of having a "rolling distro with all the new shit" if you have to be constantly afraid of whether you can update your packages or not?

>you should upgrade often

But you literally started by saying that upgrading on any given Friday is not guaranteed to leave you with a working OS.


This is literally intended. Arch is a rolling release distribution, and more than that, it's a distribution which sticks very closely to upstream and tries to have a single correct way to do things. To have all those things at once, you have to have competent users who are willing and able to make changes to their system to keep up with the distribution.

To give a practical example: in 2013 Arch Linux merged all binaries into /usr/bin. This architectural improvement was obviously a major change, and the way Arch is designed as a rolling distro meant that you couldn't leave users behind on the old way of doing things. After the change, everyone needed to be using the merged /usr/bin.

This is a designed lack of backwards compatibility that means you're guaranteed to be using the latest system architecture at all times, reducing the maintenance burden for the maintainers and making sure everything just works most of the time. But it also means that you have to be willing to do system maintenance to get your local system in line with what the Arch devs use. In the case of the /usr/bin merge, manual steps to perform the upgrade were required. [1] They weren't hard, but Arch requires this kind of oversight as part of its design.

If you're not willing to perform that kind of maintenance, then Do Not Use Arch! It's designed for people who are - although in practice this kind of intervention is required only about once a year.

[1] https://archlinux.org/news/binaries-move-to-usrbin-requiring...


> So you have to go read a news site to see if it is OK to update your packages?

Not "if it's ok to update", but "whether there's manual intervention required". See https://wiki.archlinux.org/title/General_recommendations#Pac.... Not reading that before upgrade seems to be the main reason for post-upgrade failures.

> But you literally started by saying that upgrading on any given Friday

The point was that you should rather upgrade "every Friday" than "any given Friday".


> If I type in `apt update && apt upgrade -y` I can be sure that my computer will be usable after a reboot

As from my other replies, that's awesome - you're looking for a usage paradigm that is not what Arch is for; it requires attention and regular maintenance, and "keeping up" on a regular basis. You've hit the nail on the head with the followup sentence, you have to be willing to spend a day every now and then keeping up.

Arch is not for people who want to ignore their system and hope it all just keeps working, then update 700 packages all at once and have it come crashing down. There is an agreement (the social kind, not the legal kind) you enter into when using Arch that you actually engage with it on a regular basis because you like doing that and like to tinker. It's built for experienced Linux techs who like to engage on a regular basis with the plumbing.


I like how you Arch guys seem to paint the rest of us as some old folks who update our packages once a year. I was literally running pacman -Suy weekly, but none of that should matter. It is the distribution maintainers' and the package manager's literal job to keep the OS running and runnable. It shouldn't matter how often I update my packages; they should be migrated from one version to the next like any sane package manager does.

>It's built for experienced Linux techs who like to engage on a regular basis with the plumbing.

Yeah, sure. This is where the "Linux is only free if your time has no value" saying comes from. I think I've said this in all my comments in the thread, but if it isn't abundantly clear: anyone is free to use any distro they want, or any OS for that matter (I use all 3 major ones almost daily), but I would not recommend anyone to use Arch unless they are willing to spend a lot of time fiddling with their machine, and I wouldn't install it on anything that I might have to use at a moment's notice since, as I've been told, part of Arch is that updating your packages just breaks your OS.


> I would not recommend anyone to use Arch unless they are willing to spend a lot of time fiddling with their machine, and I wouldn't install it on anything that I might have to use at a moment's notice since, as I've been told, part of Arch is that updating your packages just breaks your OS.

Yes, exactly right. As an Arch user, I endorse this.


> This laptop I'm typing from now was installed 2014-07-06,

It goes beyond anecdotal evidence. I mean, if you look at the Arch news feed[0], there are a good few updates since then that are labelled "requires manual intervention". I like Arch and it's great they're posting those notices, but I don't think it can be claimed that upgrading an on-the-edge system is always problem free.

[0] https://archlinux.org/news/


> there are a good few updates since then that are labelled "requires manual intervention"

That's by design. If you choose Arch as your system, you're agreeing to that (otherwise why would you choose it in the first place - that's a direct consequence of its value proposition). I'm not sure I would call that a "problem".


This is part of the agreement for using Arch; it's not a system you can ignore, you must actually pay attention to these news items. The upgrade tool I use actually shows new news to you front and center when performing an upgrade, so you're blatantly aware that you have to follow instructions. Those are not "problems", they are "following instructions to keep up with changes." If you want a distro to just ignore and hope it all goes well, Arch is not for you. Never was.


I have a similar experience to the OP. The last time I had 700 packages or so updating, I pressed Ctrl+C (apt or brew never left my system in a broken state after Ctrl+C) and the system no longer worked after reboot. I had to update all packages via a bootable Arch image on a USB to fix it. I also had KDE and Gnome behave weirdly after an update, to the point of having to reboot immediately – that never happened on Ubuntu LTS. Arch is amazing as long as everything is fine, but if you do something wrong, it's far more fragile. I guess it's ok for a pro-user distro, but my servers will remain on Debian. Also, I had Arch on an old laptop I use from time to time, and it happened at least a few times that if I did not turn it on for 3 months or so, the system was no longer upgradable.


> I had 700 packages or so updating

You absolutely ignored your Arch system for far too long and most likely missed important News notifications that manual actions were required; it happens now and again. By updating 700 packages at once you've just dug yourself a hole - Arch is not a system you can ignore for that long; by design it's always changing to reflect the newest upstream changes to software. As I posted to the other person, if you want a distro you can ignore over time then Arch is not for you.


Imo this is a lame excuse. Some rolling release distros whose packages are more or less equally up-to-date are far less brittle, namely NixOS and openSUSE Tumbleweed.

Arch's demanding nature with respect to being routinely updated follows directly from the fact that its package manager is stateful and unsophisticated (e.g., its dependency resolver is incomplete (will fail even when solutions are available, because dependency resolution is NP-complete and 'gotta go fast')).

It's totally fair to call this an Arch/pacman defect. Part of it is also lack of enduring hacks like Debian and derivatives use (long-lived transitional packages), which is a choice by Arch developers. A glance at the online Arch package listing and some example .PKGINFO files indicates that Pacman also doesn't support metadata like 'provides' and 'replaces' in DEB and RPM, which means the developers have fewer tools at their disposal for dealing with transitions like that, even when they want to.

Clearly many users don't mind all these tradeoffs, but others who shy away from Arch after experiences like that mentioned in the GP comment are totally right, as a matter of fact, to blame the design of the package manager for their woes.

We're not looking at the price of bleeding edge packages or a rolling release. We're looking at the price of 'keep[ing] it simple, stupid'.

Edited to add: the existence of the DUR shows that Debian probably doesn't fall on the right side of the complexity tradeoff for the purpose of maximizing user contributions. The same users who contribute to it could also create their own Debian packages, but they don't, presumably in part because it seems harder to get into. (It's not that bad if you use debhelper, but it is also clearly a crufty system that has evolved this way and that over time.)

So Arch's philosophy also clearly has other strengths. But imo we shouldn't give it a pass on the kind of fragility outlined in the GP.


> A glance at the online Arch package listing and some example .PKGINFO files indicates that Pacman also doesn't support metadata like 'provides' and 'replaces' in DEB and RPM, which means the developers have fewer tools at their disposal for dealing with transitions like that, even when they want to.

I guess this is what you get for making assumptions about a platform you're technically unfamiliar with, isn't it?

https://wiki.archlinux.org/title/PKGBUILD#Package_relations


So, in other words, you have intentionally tried to break your system and you succeeded. So, congratulations?

pacman and apt have a different feature-set. Handling a neglected system and recovering from a terminated upgrade are explicitly not what pacman does. If you expect such features from your system, why did you install Arch in the first place?


Same thing for myself: I really enjoyed the learning experience from Arch, but one day I updated my distro and after a reboot the entire thing became unbootable.

Undeterred, I downloaded the latest ISO of Arch, reinstalled and updated it. Again unbootable. So rather than providing a working latest ISO that resolved a breaking glibc change they had introduced, there was a poorly communicated and (what I considered) fairly unclear set of steps to upgrade the OS. I gave them a go; still broke.

I wasn't even that mad, but at that point I realised it just wasn't the OS for me.


I've had similar experiences long ago, but not in the past seven years or so, and I use it as a daily driver. In fact I've encountered fewer bugs on Arch, which follows upstream, than on other distros, which often have their own patches and unsupported backports with their own problems.

I have once had a serious bug which rendered my computer unbootable, which I couldn't have solved without my fifteen-something-year career helping me. But it wasn't an Arch issue; I would have had the same thing on Debian.


Both of these incidents were during the last 3 years. The last one was less than 6 months ago.

I am currently back on Ubuntu 20.04 and haven't had any issues.


I agree; RPM is similarly just as easy, amongst others that all sort of stick to a single control-file design. This exact DEB ergonomic pain of files all over the place keeps me at arm's length; it's clunky (to me), and the build process weirds me out with how many variations of tools are used to get the same result. It's backwards compatible as all get-out, I'll give you that.


The inconvenience goes further than building in the first place.

If I want to try running a Debian package with an experimental patch as a one-off, I haven't found it too hard to build my own version.

But AFAIK there's nothing that helps me manage the necessary work to rebuild with the same patch when a new distribution version comes along.

Do source-based distributions have a good solution for this?


If you use Git, you just run git merge. If not, uupdate gives a good enough result for further fine tuning.
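
For the non-git case, the flow is roughly this (package name and version are placeholders):

    # from inside the unpacked source tree of the old package version,
    # with the new upstream tarball sitting one directory up:
    uupdate ../hello-2.0.tar.gz
    # uupdate (from devscripts) carries the debian/ packaging over onto the new
    # upstream source and leaves the merged tree next door for you to fix up and rebuild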


> On the whole, Debian administration is a lot more complicated than on Arch because of the relative complexity of the architecture.

This should be expected though. Debian has been around a lot longer and is depended on by more of the Linux world. Complexity is a natural outcome of more stakeholder requirements (more stakeholders = more requirements = more complexity).

I won't get into the systemd holy war, but suffice to say that comparing the architecture of init scripts vs systemd is like comparing the design of a bicycle to a car. One most definitely is more complex than the other, but whether that's a good thing or not depends on what you want to do with it.


It's okay to ride a bicycle instead of a car, or vice versa. It's not okay to put bicycle tires on a car, or a car transmission on a bicycle.

I have no complaints about distros that are either 100% systemd or 100% init scripts. The problem with Debian (and its derivatives) is that they're trying to maintain both at the same time.

If they really want to support both "/etc/init.d/servicename start" and "systemctl start servicename" commands, one should be turned into an empty shell that simply invokes the other, like what Red Hat did with the "service" command. The current system kinda works out of the box, but goes out of sync as soon as you try to customize either the init script or the unit file.
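
Something in that spirit (a sketch, not what Debian actually ships; "servicename" is the placeholder from above):

    #!/bin/sh
    # /etc/init.d/servicename -- thin wrapper that defers to systemd when it's running
    if [ -d /run/systemd/system ]; then
        exec systemctl "$1" servicename.service
    fi
    echo "Not running under systemd; nothing to do here." >&2
    exit 1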


> I have no complaints about distros that are either 100% systemd or 100% init scripts. The problem with Debian (and its derivatives) is that they're trying to maintain both at the same time.

Well said. I'd add that "service" is actually doubly confusing because the command was IIRC actually introduced by Ubuntu when they were doing Upstart, which was their own attempt at an init replacement. Ubuntu still has the service command, actually, and now it supposedly runs both init scripts and systemd units. I've never actually tried it because I'm rather spooked by having multiple mechanisms to activate a service. It still mostly has the Upstart command syntax from what I can recall.

My Ubuntu server actually has an Upstart service file from one of the daemons I have installed; I only hope it's not somehow being activated or translated into an init script by some compatibility system Ubuntu has built in.

You might even be talking about RHEL 6, in which they used Upstart, and not whatever they're doing now. RHEL 6 is still under extended support, which means that there are still quite a lot of Upstart systems out there.


I was actually talking about RHEL 7, which uses systemd. When you type "service foo restart", it redirects to "systemctl restart foo.service" and prints a message saying so. It's a nice way to nudge people to use the new command without being obtuse, redundant, or confusing.


In fact, init scripts are only used by systemd when there are no corresponding native unit files.


A lot of Debian packages supply both systemd unit files and init scripts. You can use either to step on the other's foot.


As I said, init scripts are only used if there's no unit file. If there's a systemd unit, the init script is ignored.


If there's a systemd unit, the init script is redundant, confusing, and possibly dangerous. It should not exist in the first place, perhaps except as a thin wrapper around the corresponding systemd command.


If you invoke the init script on a system booted with systemd, then it does indeed wrap the equivalent systemctl command.


If the package maintainer wants to support the systemd-less systems out there, I don't see a reason not to let them.


Having that support come in the form of installing both on all systems does have its downsides though.

OTOH, I wouldn't be surprised if every other feasible approach's downsides are widely regarded to be worse.

Software. Yay :D


Gentoo seems to manage.


Nice effort; the Arch User Repository is one of the benefits of the Arch way, I feel. I have become so used to simply searching "arch aur <VS Code>" or whatever I am looking for, finding it there, then git clone and install. I keep installing random userspace software and I do not think the AUR has let me down.

On a side note, I keep thinking there are so many great ideas spread over the many Linux distributions, and not having a single joint effort makes it hard from an adoption point of view. I love having choice, but I would really like more users to be able to use a (any) distribution.


I’ve only just started running Arch. One thing I don’t understand is the AUR security model. Aren’t you running arbitrary code / binaries built by some stranger on the interwebs on your machine? This is my biggest hesitation, and why I haven’t used AUR.


The idea is that you read the PKGBUILD/install files, so it's no longer arbitrary code; they're usually very short files. On updates you can review just a diff. AUR helpers present this to you, so it's not a manual process.

Many packages are compiled from source rather than using prebuilt binaries, but when binaries are fetched, it is something you'd see in the PKGBUILD itself. The binaries aren't included in the AUR itself; they'd usually come from the first party of the software you're installing. For example, google-chrome[1]'s package fetches the .deb from Google's server and unpacks it.

[1]: https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=googl...


Using the 80/20 rule, most AUR packages are simply the build script ("PKGBUILD") instructions and perhaps a patch if required, etc. (or say adding a missing .desktop file). The process involves your AUR tool (I use "pikaur") downloading them, then executing the build instructions _locally_ on your device. So your desktop is actually the one downloading the real source code and compiling it, then installing it.

The 20% side of this is yes, there are some prebuilt binaries in AUR - usually because (a) they are vendor proprietary code with no source (Zoom, for example) or (b) so insane to build that the vendor does it and you use their binaries (Firefox for example - it's a monster to compile). Most of the time these packages are clearly labeled, usually with "-bin" in the name so you can easily avoid them.

You are executing build instructions written by someone else, but the tools encourage you to actually review them (and pikaur for example can show them to you for review right then and there). There is a level of trust involved and yeah, bad actors do try and slip in ugly things but they're usually found pretty quickly because folks are actually reviewing-before-installing as encouraged. It's risk management on this one, you take some personal responsibility for using AUR to pay attention.


Thanks! That makes a lot of sense. I’ll have to do a deep dive this weekend.


Quick bootstrap: you normally want a "helper" to handle downloading things and running the compile steps (it can all be done manually, and in fact you have to bootstrap yourself the first time that way). I prefer this one, pikaur, due to how well it integrates (looks and feels like pacman) and has a rich featureset: https://aur.archlinux.org/packages/pikaur

After getting that downloaded, compiled and installed, try a sample simple AUR package like "downgrade" - `pikaur -S downgrade` - to get a feel for it. https://aur.archlinux.org/packages/downgrade/ It really is kinda that simple; the rest is just learning the basic "how are Arch packages actually made?", which is a good thing to learn in general.
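
The one-time manual bootstrap looks roughly like this:

    git clone https://aur.archlinux.org/pikaur.git
    cd pikaur
    makepkg -si    # build the package and install it, pulling dependencies in via pacman

After that, pikaur can handle the download/build steps for any other AUR package.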


If you think search and git clone are nice, wait until you find out about AUR helpers!

`paru -Syu visual-studio-code-bin`


It's almost certainly a conscious choice to not use an AUR helper


I agree it is a conscious choice - I read it on the Arch Wiki - but to be honest I do not know why this cannot be solved and accepted as a way forward.


Took a while to work out what this was - nice intro at https://docs.hunterwittenborn.com/makedeb/debian-user-reposi....


I think this is great. I am an Arch user at home, but at work I use Ubuntu. It is really difficult sometimes to get the latest version of some software installed in an easy way. I tend to use a lot of custom Makefiles for this, but I don't really like not using the package manager. Being an Arch user, I always wished it was as easy to build deb packages as it is to build packages with makepkg. I tried the Debian way, and could successfully build some packages, but it is not as easy as using PKGBUILD files, and I have to do it eventually anyway. Now it seems to be possible to do this in Ubuntu! I will definitely try this!


These days I just use Nix/Guix on work machines and get to using the latest versions in less than 5 minutes.

At home I've switched fully to NixOS.


I have used Debian for desktop and server since Woody, but recently I switched to Void for the desktop because it is easier to build custom packages.

It is 2021 and it still is a PITA to build custom Debian packages.

Debian is a rock solid distro and I think people would use it more if package management was better.


If I was asked how a Linux distribution and package management should work, I would probably describe something very different to Debian, because of all the bureaucracy involved, yet I have been happily using Debian for everything for more than 20 years.


Can anybody explain to me what the purpose of the element-desktop-bin package is? It just takes the binary from the packages.riot.im repo. That's what I do, too, by having packages.riot.im in my sources.list.d. Why repackage it?


I can't say for sure, but my take is that this repo is designed to use Arch-style PKGBUILD packaging. So, it appears to simply be an alternative package that uses the Arch-style packaging.


Interesting and cool idea. In my experience Arch Linux PKGBUILDs are easier and more straightforward to write than Debian packages (or RPMs for that matter).


Is there any way to enable it on Ubuntu?


Snap and Flatpak made distro-specific user repositories like this obsolete.


Only for apps. They aren't really suited for libraries or even command line tools.


Not suited, but possible. Snap allows packaging both libraries (e.g. GNOME runtime libraries) and command line tools (e.g. htop). Although those will live under /snap rather than on common Unix paths (/usr/lib and /usr/bin), it just means those will work across distros rather than on a specific one.


Linux does not need more package managers. It creates nothing but support burdens and busywork for developers (yes, user repositories too).

I don't understand what people expect to achieve by further fragmenting a minority OS that makes up only 3% of Desktop users.


- Market share is not the most important metric; resilience matters, and Linux distributions have only gained in acceptance over the years

- People have a fear of or even an aversion toward experiments since they take effort, but some folks are simply geeks - they will keep trying, and more power to them

- Some of these experiments, historically, have brought some of the best open innovations in the industry and moved even companies rooted in closed source to change their mindset

- Fragmentation is a marketing problem IMHO, and I would like to see some middle ground between that and a consolidated Linux distro, but it will not happen easily and that is OK

- Enthusiasts like us do not need every other mainstream user on Linux, but I am sure we would love that (I have been a Linux user for 18 years)

- This community is happy learning and trying new things; it is joyful

So, more power to the experimenters. The value of this community is in choice, in a market that is dominated by the Apples and Microsofts who do not care about choice.


I do get where you're coming from; this project just highlighted some personal frustrations I have with where Desktop Linux is.

I really wish we could have an open-source OS that was reliable and user-friendly for consumers (especially given how user-hostile Windows has become in the last 5 years), while similarly being non-burdensome for devs to deploy software to (which, to be fair, has improved a lot recently thanks to AppImageKit et al).


Linux does need more package managers. Why? Because I like trying out new package managers and they don't harm people since they're not required to try or use them.


>they don't harm people since they're not required to try or use them

For users, no. For devs, there absolutely is pressure to support and maintain multiple distribution methods for Linux users.


I don't think that's true. Usually upstream devs and distro maintainers are different people. If I ever see packaging maintained by upstream, it's more often than not just for a single distro (most commonly Flatpak, then Ubuntu).


That's a good point.


This doesn't introduce a new package manager to the world



