What is Debian all about, really? Or: friction, packaging complex applications (liw.fi)
102 points by edward 9 months ago | 61 comments



The longer I use computers, the more I think having an OS core plus separated user applications is the way to go. As a user I don't care or think much about the core; it should just work. Apps should be totally isolated from the base system and be up to date. Updating the base should never impact apps, and updating apps should never break the base system.

Right now I use openSUSE Tumbleweed[0] and Guix[1] for applications. This gives me an up-to-date OS with the ability to rollback OS updates (thanks to openSUSE snapper and btrfs). Guix is a functional package manager with rollback support & reproducible builds, which allows you to have multiple versions of the same library installed so that different applications can make use of them. The above combination solves a lot of problems traditional packaging systems have.
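Roughly, the Guix side of that workflow looks like this (the package name is just an example; every change creates a new profile generation, which is what makes rollback possible):

    guix package -i emacs            # install into the per-user profile
    guix package --list-generations  # each change is a new generation
    guix package --roll-back         # undo the last change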

I also follow the Snap[2] packaging world to some extent. I like the security ideas Snap brings to the table, but right now it is still a work in progress.

[0]: https://en.opensuse.org/Portal:Tumbleweed

[1]: https://www.gnu.org/software/guix

[2]: https://www.ubuntu.com/desktop/snappy


I was going to cite Nix in response to this article, but I see you have already mentioned Guix. I hope that both GuixSD and NixOS strike happy balances between the two opposing philosophies described in this article.


Lars asks upstream libraries to be careful with API changes. However, Debian doesn't reward this.

Regardless of a library's track record, Debian pessimistically treats all libraries in Debian stable as if they were OpenSSL of old (which broke API between releases and exposed random internals as potential breakage surface). If you are an upstream library developer and maintain perfect API compatibility, Debian still won't ship your releases to Debian stable before the next Debian stable, so app developers can't depend on the new features of your library if they use the system copy. Depending on the nature of the library, this either holds the ecosystem back or leads to app developers bundling the library, which means more untangling work for Debian.

Everyone loses.


OpenSSL of old (which broke API between releases...

Never mind that, there were cases where OpenSSL broke binary compatibility in security patches. Code which worked fine suddenly stopped working when OpenSSL was upgraded.

During my tenure as FreeBSD security officer, OpenSSL was right at the top of the "don't ever trust the patches these guys send out" list.


I've been around this loop before. I've come to the conclusion that it's a fool's errand to try to use Debian-supplied libraries for app development, and that Debian shouldn't bother trying to support it.

Debian-supplied libraries should only be for user-facing applications supplied by the Debian distribution itself. The system copy should be for the system only, and unless you're writing software intended to be part of that system, it's not for you. If you opt in to using it, you've got to accept the trade-off that the rest of the system and all its concerns come first - and that includes not introducing changes unless absolutely necessary.

I'd much prefer a clear "line in the sand" policy like that than the status quo, which tries to keep everyone happy.


I agree with the sentiment for libraries that provide particular computations like parsers for a particular data format.

What about libraries like Gtk, though? Should third-party apps bundle their own Gtk and interface with the system only on the X/Wayland + ATK layers?

What about glibc? Occasionally (getrandom()!) there's new glibc surface that apps really should adopt. Should apps just rely on the Linux syscall interface directly and bypass glibc?


The more stable the library, the more sense accepting that trade-off might make to an app developer.


Backports? Debian testing?


Backports and testing don't, by policy, get reliable security support, so they aren't a good answer for users.

As long as Debian stable is a target that developers feel their app should run on, backports and testing aren't solutions for app developers.


I've been reading Planet GNOME for a while, and the recent blog posts about the move to GitLab and how GNOME Builder can just clone and build GNOME apps are pretty promising in terms of reducing friction.

Debian and distros in general should take note of that.

That said, I think distros (and debian) should look to provide a base system on which we can run flatpaks and containers.

Then create a container with a low-friction, language-specific package manager for each project a developer is working on... and publish end-user apps as flatpaks, as these are the only things people care to update anyway.


Debian is running a GitLab server in beta at https://salsa.debian.org/public


salsa, really? Some people just shouldn't be allowed to name things :) hehe

Still cool, though.


I would certainly like a world where the ease of producing a Debian package approaches that of publishing a library in one of the high-level interpreted languages. How likely is that to happen, and what's the best way to package stuff for Debian right now?


I am by no means an expert, but fpm helps a lot. https://github.com/jordansissel/fpm
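A minimal sketch of what that looks like (package name, version and directory are made up):

    # build a .deb straight from a directory tree, no debian/ metadata needed
    fpm -s dir -t deb -n mytool -v 1.0.0 -C ./staging .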


For the desktop, it would seem the Homebrew model is very successful. Products and software do get the updates they need; it's based on a pull-request model, and when you create a new package you only have to spend a day going through the syntax linting process in your pull request, as opposed to a year waiting for a new stable release in a Linux distro.

On the server side, containers are not primarily about "getting the admin out of the way", but about reducing the dependencies of your application to their bare minimum. Taken to its extreme you have Erlang and Mirage, which let you compile your whole application into its own TCP stack + GC + your app, bypassing the kernel completely.

In that world, there's no need for security patches, because you're using a managed fast language (OCaml/Erlang/Elixir/F*) and you don't have things running next to your application that can pose security problems; no shared libraries, no kernel, no SSH daemon; it's all compiled into your app. You get security updates through your language package manager and they are frequent, because you keep your app up to date as you develop it.

Because your above app runs its own GC, TCP-stack and is self-contained, it now makes sense to move to Kubernetes; because it gives you scheduling, health checks, networking (overlay), a resource permission model (RBAC), a network permission model (Calico) and secret management. Your deployment environment is now your operating system, and the distros aren't needed anymore.
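For illustration, the day-to-day interface to that "operating system" is roughly the following (image name and registry are made up):

    kubectl create deployment myapp --image=registry.example.com/myapp:1.0
    kubectl scale deployment myapp --replicas=3
    kubectl get pods    # scheduling and health checks are handled for you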


> In that world, there's no need for security patches, because you're using a managed fast language (OCaml/Erlang/Elixir/F*) and you don't have things running next to your application that can pose security problems; no shared libraries, no kernel, no SSH daemon; it's all compiled into your app.

You're too optimistic. Managed languages solve a few security problems, but not all of them. Logic bugs still exist. Encoding issues still exist. Shellshock still happened. PHP is a managed language and we don't hold it up as an example.

The only thing the lack of shared libraries changes is that now you have to compile the same code into the app. It's going to contain the same errors, but now you have to replace the whole app rather than one library. It's also harder to tell from the outside whether you're relying on a specific version of a library.

MirageOS provides you with a kernel. You're not getting rid of that one. Also Erlang needs some system to run on. It may hide in the same package and be called Ling, but it's still a kernel.


I paid rent as a Linux sysadmin for a ~dozen years, so of course I have opinions about distros, but in the years since Amazon Linux came out I've almost completely stopped spending mental energy at that layer. And now with containers & kube, I basically just need a kernel and init!

I haven't actually read any real detail about it, but my understanding is this is why Red Hat bought CoreOS. The old Red Hat/Debian model is going to slowly turn into a tarball of static libs for backwards compatibility.

Caveat: this is w/r/t "servers"; I don't know or care about desktops/laptops.


What about security updates to your dependencies in your containers?


You have to treat it as a separate system. Kind of like apps which have their own gem/npm/pip ecosystem. It's another list of versions to pay attention to. It sucks, but not worse than having another VM.
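In practice that means running each ecosystem's own check inside the image, something like:

    npm outdated          # node dependencies
    pip list --outdated   # python dependencies
    gem outdated          # ruby dependencies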


Exactly! :)

On the desktop we'll have the same problem with flatpaks. I hope it might all be okay if the base system is provided by distros.


Update the image and roll out new containers?


I think GP is referring to the fact that many containers use e.g. “apt-get” (i.e. the distro package manager) in their build process.
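So picking up security fixes usually means rebuilding the image and redeploying, roughly like this (image name made up):

    # --pull and --no-cache force fresh base layers and a fresh apt-get run
    docker build --pull --no-cache -t myapp:latest .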


Debian works best for languages that haven't standardized on their own easy-to-use package manager. Such as C and C++. Concentrate on that?

If that's not ambitious enough, what would it take for the implementers of a new language to be convinced to just use Debian as their package manager? It seems like that would require a lot of changes.


Even those are trying to fix their lack of package managers; Conan and vcpkg are the ones getting the most love currently.

OS level packages only work properly when developers just focus on a specific OS.

The moment one wants to target as many OSes as possible, building OS-specific packages becomes a pain and it is easier to outsource the problem to language-specific package managers that abstract the OS layer.


I understand this from the developer's perspective - but from a user's perspective, it's horrible UX. I usually don't care much which language the programs I use are written in - however I would very much like a single place where I can view, manage and update all installed packages. This does not work if each package manager keeps its own list.

It also feels ridiculous to have to install another package manager if you only want to have a certain command line tool.

Maybe this could be solved by a sort of meta-standard for package managers, so the OS has at least some way to aggregate all package managers and language-specific packages installed.

Of course, as always with meta-standards, you might fall into the XKCD trap...


Containers solve this nicely; developers are free to use their language's package manager to build container images which the user can manage via a single interface. Of course, this means you often have more duplicate code on your disk and the update model changes, but that's alright for me.
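E.g. no matter which language package manager built the image, the user-facing commands stay the same (image name is made up):

    docker pull example/sometool:latest
    docker run --rm example/sometool:latest --help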


Consider Rust - a new _stable_ language release every six weeks. Debian would have puppies! However, having the 'rustup' tool available in the repos would work fine to bootstrap a Rust environment. PS: plus, a very explicit goal of Cargo is reproducible builds, which is a hard problem to generalize.
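For what it's worth, the upstream bootstrap is already a one-liner:

    curl https://sh.rustup.rs -sSf | sh   # installs rustup + the stable toolchain
    rustup update stable                  # later: track the six-week releases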


What are they supposed to do for Windows, Android, and other operating systems?


The git annex developer is more pessimistic about the future of Debian: https://joeyh.name/blog/entry/futures_of_distributions/


The Debian process of package authoring is very complex. I maintain my selection of packages as a private metapackage so that I can "sudo apt-get install ./goktug.deb" and be ready (basically just a list of packages), and even that's complex compared to a PKGBUILD or a BSD port. Recently I wanted to package and maintain Arch Linux's netctl for Debian, and reading the suggested documentation, multiple hours in, having accumulated dozens of tabs of highly-suggested or required reading, I realised I still didn't have a working example package at hand, and gave up. Furthermore, my package list had to grow to incorporate more than ten tools specific to making and maintaining Debian packages before I even had a working prototype of the thing. I'd by far prefer making an RPM or pacman package to that.
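(For anyone wanting the same trick: the least painful route I know of to such a private metapackage is equivs, roughly as below; the file name and version are just examples.)

    sudo apt-get install equivs
    equivs-control goktug.control
    # edit goktug.control: set Package: goktug and a Depends: line
    # listing the curated packages, then build and install it
    equivs-build goktug.control
    sudo apt-get install ./goktug_1.0_all.deb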

Edit: After reading the linked article, I want to add this: I believe things from NPM don't really need to be packaged individually. The community is an outlier in many respects, with its sloppy practices, its very wide scope and the sheer innumerable amount of "packages" it hosts (many hundreds of thousands). For the rest, only the libraries used by the packaged application software (from Firefox to coreutils, anything that's not a library) need to be packaged. Everything else is most often installed by developers or by users who are installing non-packaged software, so they will need to use a third-party package manager anyway. The OS package manager does not need to include everything, only what's essential and stable.


It doesn't help that many of the official Debian wiki pages contradict each other on which tools to use or how to use them. It took me a long time of plain experimentation to realize that the debian directory contents are mostly agnostic to what tools you're using to build the package itself; after that it's just a matter of figuring out which CLI build tools you can get to run (for me it was a combination of cowbuilder + pdebuild, and cowbuilder is optional).


If you use that metapackage just for installing a list of things you keep as a base system, it may be easier to just keep a list of selections. dpkg --get-selections to get the list, and then to reinstall:

    # using a list saved earlier with: dpkg --get-selections > selections.txt
    dpkg --set-selections < selections.txt
    apt-get dselect-upgrade


Thanks for the tip! Indeed that's how I use it (see [1]): I want a manually curated list of packages that compose my base system (the packages that I explicitly want installed, not all packages and their dependencies). If this lets me skip the metapackage, that'd be an improvement; I'll look into it.

[1] https://github.com/cadadr/configuration/tree/master/deb/DEBI...


Every time this discussion comes up I find a major point missing (so it's either my very own twisted idea, or it's usually phrased in a way I can't comprehend).

So I don't want to say "it depends", but... it depends.

I seem to operate/administer/use my machines in a certain way, regardless of their official designation as "production/QA/development/just-working-by-chance". For A LOT (say, 80-100%) of the packages on the system I simply care about two things:

a) does it work and

b) is it recent enough to include some features I need.

These are installed by a distro package manager and only get security updates. Everything is awesome.

On the other hand, the 5-20% of other packages are usually either not packaged at all in $distro, or too old, or something's wrong, or I'm really developing against git HEAD. So now the complicated part:

Do I use the latest and greatest version? (Debian unstable, arch/void/other rolling release)

Do I just use the language package manager anyway because $distro will never get the packages this year?

TLDR: For all the software (majority) I don't have a laser focus on, I love the distro packages. For some things, the language package manager is fine (or I will use nixpkgs or GNU stow or just plain make install to /usr/local) and for a tiny part I will manually "git pull" every few hours or days and use that. And all of this is not a problem, it works for me[tm].


You can also try Snap which works on Debian, I chose it over all the other DEB installers myself. I wrote about it here: http://visifile.com/visifile/25-jan-2018---why-ubuntu-snap-w...
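For anyone who hasn't tried it, the user side is just this (hello-world is the stock demo snap):

    sudo snap install hello-world
    snap refresh    # updates all installed snaps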


I sincerely hope that Snap and Flatpak will die.

Linux package management is a rare example of successfully tamed complexity. It's magic: having a single copy of, say, the gzip implementation imported and used by hundreds of installed applications.
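As a rough illustration (zlib standing in for the gzip example):

    # list installed packages that depend on the one shared zlib copy
    apt-cache rdepends --installed zlib1g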

The sandboxed approach works on servers, where a single machine is usually dedicated to running just a handful of rapidly re-installed apps; it has no place on a desktop. I am not interested in every little component (of which I use dozens) landing with its own "runtime".

This madness is caused by the common techno-disease of seeing a nail everywhere when holding a hammer. Yes, containers are great for the (trivial) server use case, and no, they're stupid on a desktop, which is two orders of magnitude more complicated.

I clicked on your link and saw exactly what I expected:

> VisiFile is a NodeJS application built with Express, Sqlite, and various NodeJS modules.

Yep, another example of the wheels coming off: no, NodeJS is not an appropriate tool to build desktop software. By picking the wrong tool you're dragging the wrong deployment model along with it. I will only use an Electron-like app if someone pays me to (Slack at work).


Web browsers and Web applications completely break the distribution model on both ends, client and server. To fix this, you need containerization on both ends.

Chrome and Firefox ship new versions of their products every six weeks. Each version carries both new functionality (Web standards require field trials as part of the standard process), and essential security changes (e.g. for TLS). Ubuntu and Fedora repackage Firefox and ship it as fast as they can, because that is the responsible thing to do, but it would be easier and safer for everybody if that could be done with the transactional updates, parallel version installation and dependency isolation that flatpak and snap can provide.
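On the user side that looks something like this (the app id is just an example):

    flatpak install flathub org.mozilla.firefox
    flatpak update    # transactional; runtimes can stay installed in parallel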


I can see this working for a constantly developed project like a browser. But imagine you just stopped working on your side project for a year. Now you come back and would like to add a new feature. You start by updating all your libraries, and half of them have introduced API-incompatible changes (the author gave you at least 15 minor releases and three months of transition period, but you weren't paying attention), and you spend the next two days just catching up on the development side of things. We haven't yet looked at all the bugs and security holes that were open in all the libs you used for the last year.

I see a similar friction here to the one between distributions and developers, and the main issue seems to be disagreement about guarantees for stability. Only after that does it extend to friction about getting the latest version shipped to the user (either the developer or the end user of an application).

In corporate environments it was claimed we would solve all of this by running microservices and simply versioning our APIs to guarantee the stability of our interfaces to fellow developers. I would still like to see that work for an extended period of time in the real world.

I don't see how containers or flatpaks will solve it. At some point someone has to touch it again, be it for bug fixes, security updates, or feature development. If hell breaks loose every time you have to continue stalled development because something in your ecosystem wasn't as stable as expected, we haven't solved the problem yet. And it's not solved by installing five different versions of a lib with different sets of bugs in parallel.


Debian ships Firefox ESR, which is supported for over a year and has a 12 week overlap with the next ESR release.


Being only a teeny bit polemical, I sincerely hope the whole overblown containerization fad will die.

Sure, it has its uses, but outside of those it's just yet another needless layer of obfuscation. OSes and centralised package management served us well for a generation or two. I worry that Docker & Co. are going to send all that the same way JS sent a few decades' worth of soundly developed programming practice.


Containers are basically a way for devs to get admins out of their way.

Rather than have admins complain that something broke when some push-to-prod devs are running amok, said devs can just point to their containerized snowflakes and tell the admins to get lost.

The problem is not technical, it is cultural. It is the webdev mentality percolating down the stack, giving the devs the impression that they can just fix their latest API/ABI boondoggle by rapidly pushing another version, like a machinegun fires bullets.

It is not surprising that Gnome is one of the biggest pushers of this, as they seem to have an artistic attitude to their code. Just watch them trying to tar and feather Torvalds, as he has told them off many a time for this behavior (a behavior that is strictly not tolerated in kernel land, might I add).


> Linux package management is a rare example of successfully tamed complexity. It's magic: having a single copy of, say, gzip implementation imported and used by hundreds of installed applications.

Except it often breaks and you need to be an expert to fix it. I'll take the hit on disk space if it means I can reliably install software quickly and without hassle.

Containers aren't perfect, but they get the job done.


Visifile is not an Electron app; it is a web server and uses a browser as the interface.


I must admit that I find the GoboLinux solution interesting.

Basically, wrap the language-specific managers and integrate the result into the GoboLinux directory tree.

https://github.com/gobolinux/AlienVFS


GoboLinux is incredibly interesting, along with GuixSD and NixOS, which are the three best things to happen in the Linux sphere, I guess. GuixSD is my favourite, but it's full GNU, so I can't practically use it (missing drivers in the kernel and maybe X; I'd need to compile often). Nix has a funny language and systemd, so it's not really an improvement over Debian for me. Maybe I should try GoboLinux instead, but I'm afraid that there will be too many packages missing from it, and IDK the story with security updates.


It is very much a hands on distro, for better or worse.

More akin to Gentoo than Arch, and far removed from the likes of CentOS or Debian Stable.

That said, rolling your own recipes is relatively straightforward. And with the /System/Index they are using these days, I think it should be less sensitive to hardcoded paths and similar idiocies.


> after upgrading to the next Debian stable release, my stuff continues to work

can be replaced by

> with frequent upgrades, errors are sparse blips on the Debian testing release

-

standard line: if you're considering Debian for the desktop and want a reasonable mix of stability as well as recent updates, testing is recommended over stable.


Testing doesn't, by policy, get reliably timely security patches. https://www.debian.org/security/faq#testing

(I wish Debian had a genuinely supported rolling or rolling-ish flavor. It's bad for the Linux ecosystem when people who don't need to be using out-of-date software use out-of-date software from stable.)


"unstable" (ignore the name, assume it refers to the version numbers...) serves this purpose for many.


It does for many, but:

1) The general attitude seems to be that if something breaks, the user is at fault for using something called “unstable”. I.e. it’s not positioned as a way for the casual user to keep using up-to-date software that’d still be smoke-tested enough that you don’t need to be an enthusiast who can resurrect an occasional broken system.

2) While there probably in practice is better security coverage than in testing and backports, formally there still isn’t security support per https://www.debian.org/security/faq#unstable . (I do recognize your HN handle and know you know this, but I’m providing the link for other readers.)


> it’s not positioned as a way for the casual user

Indeed. Any crazy ideas on how to remedy this? :)


Repurposing testing as such a distro (not calling it “testing”). Discontinuing stable (i.e. leaving the enterprise use case to Red Hat and SuSE) if the security team doesn’t have capacity to support both.


That's a huge ask. My query was really about "rebranding" unstable.


When suggesting that Debian provide a security-supported release stream other than stable, the usual response is that the security team doesn't have the capacity to support another one. That's why I pointed out, unprompted, that I'd cut stable if there isn't capacity for multiple security-supported channels. I realize it would be a big change for Debian.


Also: unstable can’t be the channel for casual users with rebranding, since there needs to be some smoke testing channel before the one for casual users.


I started using Arch because it just works.


Which has the same problems... this isn't really about Debian, but about distributions in general.


The current solution for Rust or Go applications seems to be to just package the resulting binary, even in the repositories. I don't know if they will ever ship Rust libraries as packages.


I'm a mere user of Arch so I don't know what the official policy is, but from where I'm standing, it looks like---at least for the community repo---the policy is "do what works." You are indeed right that Go and Rust applications are packaged as-is without having all of their corresponding library dependencies packaged as well. But, for example, Haskell applications like pandoc do have all their libraries packaged. Compare the output of, say, `pacman -Si docker` and `pacman -Si pandoc`.


My understanding is that Debian uses debcargo to generate packages from cargo packages.


I also use arch.



