
What is Debian all about, really? Or: friction, packaging complex applications - edward
https://blog.liw.fi/posts/2018/02/17/what_is_debian_all_about_really_or_friction_packaging_complex_applications/
======
thibran
The longer I use computers, the more I think having an OS core plus separated
user applications is the way to go. As a user I don't care or think much about
the core; it should just work. Apps should be totally isolated from the base
system and be up to date. Updating the base should never impact apps, and
updating apps should never break the base system.

Right now I use openSUSE Tumbleweed[0] and Guix[1] for applications. This
gives me an up-to-date OS with the ability to rollback OS updates (thanks to
openSUSE snapper and btrfs). Guix is a functional package manager with
rollback support & reproducible builds, which allows you to have multiple
versions of the same library installed so that different applications can make
use of them. The above combination solves a lot of problems traditional
packaging systems have.

I also follow the Snap[2] packaging world to some extent. I like the security
idea Snap brings to the table, but right now Snap is still a work in
progress.

[0]:
[https://en.opensuse.org/Portal:Tumbleweed](https://en.opensuse.org/Portal:Tumbleweed)

[1]: [https://www.gnu.org/software/guix](https://www.gnu.org/software/guix)

[2]:
[https://www.ubuntu.com/desktop/snappy](https://www.ubuntu.com/desktop/snappy)

~~~
equalunique
I was going to cite Nix in response to this article, but I see you have
already mentioned Guix. I hope that both GuixSD and NixOS strike happy
balances between the two opposing philosophies described in this article.

------
hsivonen
Lars asks upstream library developers to be careful with API changes.
However, Debian doesn't reward this.

Regardless of library track record, Debian pessimistically treats all
libraries in Debian stable as if they were OpenSSL of old (which broke API
between releases and exposed random internals as potential breakage surface).
If you are an upstream library developer and maintain perfect API
compatibility, Debian still won't ship your new releases to Debian stable
before the next stable release, so app developers can't depend on the new
features of your library if they use the system copy. Depending on the nature
of the library, this either holds the ecosystem back or leads to app
developers bundling the library, which creates more untangling work for
Debian.

Everyone loses.

~~~
regularfry
I've been around this loop before. I've come to the conclusion that it's a
fool's errand to try to use Debian-supplied libraries for app development, and
that Debian shouldn't bother trying to support it.

Debian-supplied libraries should only be for user-facing applications supplied
by the Debian distribution itself. The system copy should be for the system
only, and unless you're writing software intended to be part of that system,
_it's not for you_. If you opt in to using it, you've got to accept the
trade-off that the rest of the system and all its concerns come first - and
that includes not introducing changes unless absolutely necessary.

I'd much prefer a clear "line in the sand" policy like that than the status
quo, which tries to keep everyone happy.

~~~
hsivonen
I agree with the sentiment for libraries that provide particular computations
like parsers for a particular data format.

What about libraries like Gtk, though? Should third-party apps bundle their
own Gtk and interface with the system only on the X/Wayland + ATK layers?

What about glibc? Occasionally (getrandom()!) there's new glibc surface that
apps really should adopt. Should apps just rely on the Linux syscall interface
directly and bypass glibc?

~~~
regularfry
The more stable the library, the more sense accepting that trade-off might
make to an app developer.

------
jopsen
I've been reading Planet GNOME for a while, and the recent blog posts about
the move to GitLab and how GNOME Builder can just clone and build GNOME apps
are pretty promising in terms of reducing friction.

Debian and distros in general should take note of that.

That said, I think distros (and debian) should look to provide a base system
on which we can run flatpaks and containers.

Then create a container with a low-friction, language-specific package
manager for each project a developer is working on... and publish end-user
apps as flatpaks, as those are the only things people care to update anyway.

~~~
sytse
Debian is running a GitLab server in beta at
[https://salsa.debian.org/public](https://salsa.debian.org/public)

~~~
jopsen
salsa, really? Some people just shouldn't be allowed to name things :) hehe

Still cool, though.

------
sethrin
I would certainly like a world where the ease of producing a Debian package
approaches that of publishing a library in one of the high-level interpreted
languages. How likely is that to happen, and what's the best way to package
stuff for Debian right now?

~~~
hamilyon2
I am by no means an expert, but fpm helps a lot.
[https://github.com/jordansissel/fpm](https://github.com/jordansissel/fpm)
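To give a feel for the workflow: fpm's `-s dir -t deb` mode turns a staged
directory tree into a .deb in one command. A minimal sketch (the tool name
"mytool" and the version number are made up for illustration):

```shell
# Stage the files the package should install, then let fpm wrap them up.
set -e
PKGROOT=$(mktemp -d)
mkdir -p "$PKGROOT/usr/local/bin"
printf '#!/bin/sh\necho "mytool 1.0"\n' > "$PKGROOT/usr/local/bin/mytool"
chmod +x "$PKGROOT/usr/local/bin/mytool"

# Building the .deb requires fpm itself (a Ruby gem), so guard the call:
if command -v fpm >/dev/null 2>&1; then
    fpm -s dir -t deb -n mytool -v 1.0 -C "$PKGROOT" usr/local/bin/mytool
fi
```

Compared to the full Debian toolchain, there is no control file, changelog or
rules file to write; the trade-off is that the result won't meet Debian
policy and can't go into the official archive.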

------
xyz-x
For the desktop, it would seem the Homebrew model is very successful.
Products and software do get the updates they need; it's based on a
pull-request model, and when you create a new package you only have to spend
a day going through the syntax-linting process in your pull request, as
opposed to a year waiting for a new stable release in a Linux distro.

On the server side, containers are not primarily about "getting the admin
out of the way", but about reducing your application's dependencies to their
bare minimum. Taken to the extreme, you have Erlang and MirageOS, which let
you compile your whole application into its own TCP stack + GC + your app,
bypassing the kernel completely.

In that world, there's no need for security patches, because you're using a
managed fast language (OCaml/Erlang/Elixir/F*) and you don't have things
running next to your application that can pose security problems: no shared
libraries, no kernel, no SSH daemon; it's all compiled into your app. You get
security updates through your language package manager, and they are
frequent, because you keep your app up to date as you develop it.

Because your app runs its own GC and TCP stack and is self-contained, it now
makes sense to move to Kubernetes, because it gives you scheduling, health
checks, networking (overlay), a resource permission model (RBAC), a network
permission model (Calico) and secret management. Your deployment environment
is now your operating system, and the distros aren't needed anymore.

~~~
viraptor
> In that world, there's no need for security patches, because you're using a
> managed fast language (OCaml/Erlang/Elixir/F*) and you don't have things
> running next to your application that can pose security problems; no shared
> libraries, no kernel, no SSH daemon; it's all compiled into your app.

You're too optimistic. Managed languages solve a few security problems, but
not all of them. Logic bugs still exist. Encoding issues still exist.
Shellshock still happened. PHP is a managed language and we don't hold it up
as an example.

The only thing the lack of shared libraries changes is that now you have to
compile the same code into the app. It's going to contain the same errors,
but now you have to replace the whole app rather than one library. It's also
harder to tell from the outside whether you're relying on a specific version
of a library.

MirageOS provides you with a kernel; you're not getting rid of that one.
Erlang also needs some system to run on. It may hide in the same package and
be called Ling, but it's still a kernel.

------
cagenut
I paid rent as a Linux sysadmin for a ~dozen years, so of course I have
_opinions_ about distros, but in the years since Amazon Linux came out I've
almost completely stopped spending mental energy at that layer. And now with
containers & kube, I just need a kernel and init!

I haven't actually read any real detail about it, but my understanding is
that this is why Red Hat bought CoreOS. The old Red Hat/Debian model is going
to slowly turn into a tarball of static libs for backwards compatibility.

Caveat: this is w/r/t "servers"; I don't know/care about desktops/laptops.

~~~
ognyankulev
What about security updates to your dependencies in your containers?

~~~
s_kilk
Update the image and roll out new containers?

~~~
chatmasta
I think GP is referring to the fact that many containers use e.g. “apt-get”
(i.e. the distro package manager) in their build process.
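To make that concrete: when the base layers were produced by `apt-get` at
build time, picking up security fixes means rebuilding the image rather than
patching it in place. A sketch that only prints the commands so it runs
anywhere; `--pull` and `--no-cache` are real `docker build` flags, while the
image name "myapp" is made up:

```shell
# Print (rather than run) the rebuild commands for an image whose
# Dockerfile installs packages with apt-get.
rebuild_image() {
    # --pull re-fetches the base image; --no-cache re-runs every RUN step,
    # so the apt-get layer isn't served from a stale build cache.
    echo "docker build --pull --no-cache -t myapp:latest ."
    echo "docker push myapp:latest"
}
rebuild_image
```

The point is that "security updates" become a rebuild-and-redeploy cycle:
the distro package manager still does the patching, just at image build time.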

------
skybrian
Debian works best for languages that haven't standardized on their own
easy-to-use package manager, such as C and C++. Concentrate on that?

If that's not ambitious enough, what would it take for the implementers of a
new language to be convinced to just use Debian as their package manager? It
seems like that would require a lot of changes.

~~~
pjmlp
Even those are trying to fix their lack of package managers; Conan and vcpkg
are the ones getting the most love currently.

OS-level packages only work properly when developers focus on a single OS.

The moment one wants to target as many OSes as possible, building OS-specific
packages becomes a pain, and it is easier to outsource the problem to
language-specific package managers that abstract the OS layer.

~~~
xg15
I understand this from the developer's perspective - but from a user's
perspective, it's horrible UX. I usually don't care much which language the
programs I use are written in - however I would very much like a single place
where I can view, manage and update all installed packages. This does not work
if each package manager keeps its own list.

It also feels ridiculous to have to install another package manager if you
only want to have a certain command line tool.

Maybe this could be solved by a sort of meta-standard for package managers, so
the OS has at least some way to aggregate all package managers and language-
specific packages installed.

Of course, as always with meta-standards, you might fall into the XKCD trap...
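Absent any standard, a rough sketch of what such aggregation could look like:
normalize each manager's listing into tab-separated `manager name version`
lines and merge them into one sorted view. The sample lines below stand in
for real queries such as `dpkg-query -W` or `pip list`, so the sketch runs
anywhere:

```shell
# Merge per-manager package listings into a single sorted view.
# Each input line: manager, package name, version (whitespace-separated).
aggregate() {
    sort -k2,2 -k1,1
}

# Canned sample data standing in for the real manager queries:
{
    printf 'dpkg\tcurl\t7.52.1\n'
    printf 'pip\trequests\t2.18.4\n'
    printf 'npm\tleft-pad\t1.2.0\n'
    printf 'dpkg\tgit\t2.11.0\n'
} | aggregate    # sorted by package name: curl, git, left-pad, requests
```

The hard part a meta-standard would have to solve is exactly what this sketch
dodges: each manager's output format, version semantics and update commands
are different.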

~~~
weberc2
Containers solve this nicely; developers are free to use their language's
package manager to build container images which the user can manage via a
single interface. Of course, this means you often have more duplicate code on
your disk and the update model changes, but that's alright for me.

------
kryptiskt
The git annex developer is more pessimistic about the future of Debian:
[https://joeyh.name/blog/entry/futures_of_distributions/](https://joeyh.name/blog/entry/futures_of_distributions/)

~~~
gkya
The Debian process of package authoring is very complex. I maintain my
selection of packages as a private metapackage so that I can "sudo apt-get
install ./goktug.deb" and be ready (basically just a list of packages), and
even that's complex compared to a PKGBUILD or a BSD port. Recently I wanted
to package and maintain Arch Linux's netctl for Debian. Multiple hours into
reading the suggested documentation, having accumulated dozens of tabs of
highly-suggested or required reading, I realised I still didn't have a
working example package at hand, and gave up. Furthermore, my package list
had to grow to incorporate more than ten tools specific to making and
maintaining Debian packages _before_ I even had a working prototype of the
thing at hand. I'd far prefer making an RPM or pacman package to that.
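For what it's worth, Debian's `equivs` tool targets exactly this
dependency-only metapackage case and skips most of the full toolchain: write
a small control file, run `equivs-build`. A sketch, with the package name and
dependency list made up for illustration:

```shell
# Create a control file for a dependency-only metapackage.
set -e
WORK=$(mktemp -d)
cd "$WORK"
cat > base.ctl <<'EOF'
Package: goktug-base
Version: 1.0
Depends: git, curl, tmux
Description: personal base-system metapackage
 Pulls in my curated package selection as dependencies.
EOF

# Turning it into a .deb needs the equivs package installed:
if command -v equivs-build >/dev/null 2>&1; then
    equivs-build base.ctl
fi
```

`equivs-control` can generate a fuller template for you; the resulting .deb
installs nothing itself and exists only to pull in its Depends line.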

Edit: After reading the linked article, I want to add this: I believe things
from NPM don't really need to be packaged individually. That community is an
outlier in many respects, with its sloppy practices, its very wide scope and
the sheer innumerable number of "packages" it hosts (many hundreds of
thousands). For the rest, only the libraries used by packaged application
software (from Firefox to coreutils, anything that's not a library) need to
be packaged. The rest are most often installed by developers or by users
installing non-packaged software, so they will need to use a third-party
package manager somehow anyway. The OS package manager does not need to
include _everything_, but what's essential and stable.

~~~
viraptor
If you use that metapackage just for installing a list of things you keep as
a base system, it may be easier to just keep a list of selections. dpkg
--get-selections to get the list, and then to reinstall:

    dpkg --set-selections
    apt-get dselect-upgrade
~~~
gkya
Thanks for the tip! Indeed that's how I use it (see [1]): I want a manually
curated list of packages that compose my base system (the packages that I
explicitly want installed, not all packages and their dependencies). If using
this lets me skip the metapackage, that'd be an improvement; I will look into
it.

[1]
[https://github.com/cadadr/configuration/tree/master/deb/DEBI...](https://github.com/cadadr/configuration/tree/master/deb/DEBIAN)

------
wink
Every time this discussion comes up, I find a major point missing (so either
it's my very own twisted idea, or it's usually phrased in a way I can't
comprehend).

So I don't want to say "it depends", but... it depends.

I seem to operate/administrate/use my machines in a certain way, regardless
of their official designation as "production/QA/development/just-working-by-
chance" - I have A LOT (say, 80-100%) of packages on the system where I
simply care about two things:

a) does it work and

b) is it recent enough to include some features I need.

These are installed by a distro package manager and only get security updates.
Everything is awesome.

On the other hand, those 5-20% other packages are usually either not
packaged at all in $distro, or too old, or something's wrong, or I'm really
developing against git HEAD. So now the complicated part:

Do I use the latest and greatest version? (Debian unstable, arch/void/other
rolling release)

Do I just use the language package manager anyway, because it will never get
packaged this year?

TLDR: For all the software (majority) I don't have a laser focus on, I love
the distro packages. For some things, the language package manager is fine (or
I will use nixpkgs or GNU stow or just plain make install to /usr/local) and
for a tiny part I will manually "git pull" every few hours or days and use
that. And all of this is not a problem, it works for me[tm].
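The GNU stow route mentioned above, sketched out: install each tool into its
own per-package subtree, then let stow symlink it into the target prefix
(normally /usr/local; a temp dir here so the sketch is harmless, and the tool
name "mytool" is made up):

```shell
# Stage a tool into its own stow "package" directory.
set -e
STOWDIR=$(mktemp -d)
mkdir -p "$STOWDIR/mytool/bin"
printf '#!/bin/sh\necho stowed\n' > "$STOWDIR/mytool/bin/mytool"
chmod +x "$STOWDIR/mytool/bin/mytool"

# With GNU stow installed, symlink the subtree into a target tree:
if command -v stow >/dev/null 2>&1; then
    TARGET=$(mktemp -d)
    stow -d "$STOWDIR" -t "$TARGET" mytool
fi
```

The appeal over a plain `make install` is that `stow -D` removes the
symlinks again, so a manually installed tool stays uninstallable.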

------
zubairq
You can also try Snap, which works on Debian; I chose it over all the other
DEB installers myself. I wrote about it here:
[http://visifile.com/visifile/25-jan-2018---why-ubuntu-
snap-w...](http://visifile.com/visifile/25-jan-2018---why-ubuntu-snap-will-
win-part-1.html)

~~~
old-gregg
I sincerely hope that Snap and Flatpak will die.

Linux package management is a rare example of successfully tamed complexity.
It's magic: having a single copy of, say, the gzip implementation imported
and used by hundreds of installed applications.

The sandboxed approach works on servers, where a single machine is usually
dedicated to running just a handful of rapidly re-installed apps; it has no
place on a desktop. I am not interested in every little component (of which I
use dozens) landing with its own "runtime".

This madness is caused by the common techno-disease of seeing a nail
everywhere when holding a hammer. Yes, containers are great for the (trivial)
server use case, and no, it's stupid on a desktop, which is two orders of
magnitude more complicated.

I clicked on your link and saw exactly what I expected:

> VisiFile is a NodeJS application built with Express, Sqlite, and various
> NodeJS modules.

Yep, another example of the wheels coming off: no, NodeJS is not an
appropriate tool for building desktop software. By picking the wrong tool
you're dragging the wrong deployment model along with it. I will only use an
Electron-like app if someone pays me for it (Slack at work).

~~~
sjellis
Web browsers and Web applications actually break the distribution model
completely on both ends, client and server. To fix this, you need
containerization on both ends.

Chrome and Firefox ship new versions of their products every six weeks. Each
version carries both new functionality (Web standards require field trials
as part of the standards process) and essential security changes (e.g. for
TLS). Ubuntu and Fedora repackage Firefox and ship it as fast as they can,
because that is the responsible thing to do, but it would be easier and safer
for everybody if that could be done with the transactional updates, parallel
version installation and dependency isolation that Flatpak and Snap can
provide.

~~~
gurrone
I see this working for constantly developed projects like browsers. But
imagine you just stopped working on your side project for a year. Now you
come back and would just like to add a new feature. You start by updating all
your libraries, and half of them have introduced API-incompatible changes
(the author gave you at least 15 minor releases and a three-month transition
period, but you did not pay attention), and you spend the next two days just
catching up on the development side of things. We have not yet looked at all
those bugs and security holes that were open in all the libs you used for the
last year.

I see a similar friction here to the one we face between distributions and
developers, and the main issue seems to be friction over guarantees of
stability. Only after that does it extend to friction over getting the latest
version shipped to the user (either a developer or the end user of an
application).

In corporate environments they claim we can solve all this by running
microservices and just versioning our APIs to guarantee the stability of our
interfaces to our fellow developers. I would still like to see this work over
an extended period of time in the real world.

I don't see how containers or flatpaks will solve it. At some point someone
has to touch it again, be it for bug fixes, security updates or feature
development. If all hell breaks loose every time you have to resume stalled
development because something wasn't as stable as expected in your ecosystem,
we've not solved the problem yet. And it's not solved by installing five
different versions of a lib with different sets of bugs in parallel.

------
digi_owl
I must admit that I find the GoboLinux solution interesting.

Basically, wrap the language-specific managers and integrate the result into
the GoboLinux directory tree.

[https://github.com/gobolinux/AlienVFS](https://github.com/gobolinux/AlienVFS)

~~~
gkya
GoboLinux is incredibly interesting, along with GuixSD and NixOS; they're
the three best things to happen in the Linux sphere, I guess. GuixSD is my
favourite, but it's full GNU, so I can't practically use it (missing drivers
in the kernel and maybe X; I'd need to compile often). Nix has a funny
language and systemd, so it's not really an improvement over Debian for me.
Maybe I should try GoboLinux instead, but I'm afraid that there will be too
many packages missing from it, and IDK the story with security updates.

~~~
digi_owl
It is very much a hands-on distro, for better or worse.

More akin to Gentoo than Arch, and far removed from the likes of CentOS or
Debian stable.

That said, rolling your own recipes is relatively straightforward. And with
the /System/Index they are using these days, I think it should be less
sensitive to hardcoded paths and similar idiocies.

------
auvrw
> after upgrading to the next Debian stable release, my stuff continues to
> work

can be replaced by

> with frequent upgrades, errors are sparse blips on the Debian testing
> release

The standard line: if you're considering Debian for the desktop and want a
reasonable mix of stability as well as recent updates, testing is recommended
over stable.

~~~
hsivonen
Testing doesn't, by policy, get reliably timely security patches.
[https://www.debian.org/security/faq#testing](https://www.debian.org/security/faq#testing)

(I wish Debian had a genuinely supported rolling or rolling-ish flavor. It's
bad for the Linux ecosystem when people who don't need to be using out-of-date
software use out-of-date software from stable.)

~~~
lamby
"unstable" (ignore the name, assume it refers to the version numbers...)
serves this purpose for many.

~~~
hsivonen
It does for many, but:

1) The general attitude seems to be that if something breaks, the user is at
fault for using something called “unstable”. I.e. it’s not positioned as a
way for the casual user to keep using up-to-date software that’s still
smoke-tested enough that you don’t need to be an enthusiast who can resurrect
an occasionally broken system.

2) While there probably in practice is better security coverage than in
testing and backports, formally there still isn’t security support per
[https://www.debian.org/security/faq#unstable](https://www.debian.org/security/faq#unstable)
. (I do recognize your HN handle and know you know this, but I’m providing the
link for other readers.)

~~~
lamby
> it’s not positioned as a way for the casual user

Indeed. Any crazy ideas on how to remedy this? :)

~~~
hsivonen
Repurposing testing as such a distro (not calling it “testing”). Discontinuing
stable (i.e. leaving the enterprise use case to Red Hat and SuSE) if the
security team doesn’t have capacity to support both.

~~~
lamby
That's a huge ask. My query was really about "rebranding" unstable.

~~~
hsivonen
When suggesting that Debian provide a security-supported release stream other
than stable, the usual response is that the security team doesn’t have the
capacity to support another one. That's why I pointed out unprompted that I'd
cut stable if there isn’t capacity for multiple security-supported channels. I
realize it would be a big change for Debian.

------
xstartup
I started using Arch because it just works.

~~~
jopsen
Which has the same problems... this isn't really about Debian, but
distributions in general.

~~~
j605
The current solution for Rust or Go applications seems to be to just package
the resultant binary, even in the repositories. I don't know if they will
ever ship Rust libraries as packages.

~~~
burntsushi
I'm a mere user of Arch, so I don't know what the official policy is, but
from where I'm standing it looks like, at least for the community repo, the
policy is "do what works." You are indeed right that Go and Rust applications
are packaged as-is, without having all of their corresponding library
dependencies packaged as well. But, for example, Haskell applications like
pandoc _do_ have all their libraries packaged. Compare the output of, say,
`pacman -Si docker` and `pacman -Si pandoc`.
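A small sketch of that comparison: pull the "Depends On" line out of
`pacman -Si`-style output and count its entries. The canned text below stands
in for real pacman output so the snippet runs without pacman installed:

```shell
# Count entries on the "Depends On" line of `pacman -Si`-style output.
count_depends() {
    awk -F': ' '/^Depends On/ { print split($2, a, /[[:space:]]+/) }'
}

# Sample text mimicking pacman's field layout:
printf 'Name            : example\nDepends On      : glibc gcc-libs zlib\n' \
    | count_depends    # prints 3
```

Run against the real thing, a Go or Rust binary package typically shows only
a handful of C-level dependencies, while pandoc's list runs to dozens of
haskell-* packages.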

