Ubuntu 20.04 LTS’ snap obsession has snapped me off of it (jatan.space)
604 points by uncertainquark 14 days ago | 538 comments



Snap was neat when I first discovered it. I could get some bleeding-edge dependencies automatically delivered to my machine, without building them from source or having to manually update them. Wonderful! But the problem is: it's the slowest thing in the world. I've complained about this on HN before, but it can be hundreds of milliseconds from when you type a command to when the actual application starts running -- the intermediate time is consumed by Snap doing god knows what. The slowness is a dealbreaker for me. My disk can read 500,000 4k blocks per second. My RAM can do 3400M transfers per second. I have 32 CPU cores. How is loading some bytes into RAM and telling the CPU to start executing instructions something that can take more than a few microseconds!? Easy: run that app with Snap!
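
A rough way to see the overhead for yourself (a hedged sketch; app names and paths are illustrative and assume both packagings are installed):

    # time a snap-packaged browser against a deb-packaged one; run each a few
    # times so first-launch mount setup doesn't dominate the comparison
    time snap run chromium --version
    time /usr/bin/google-chrome --version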

I blew up my Ubuntu install and switched back to Debian. I haven't missed Ubuntu at all. I am resigned to the fact that if I care about a particular piece of software because it's the reason I use a computer (go, Emacs, Node, etc.) then I just have to maintain it myself. There simply isn't a good way right now. And you know what? It's fine. Everything is configured exactly the way I like, and it will never change unless I change it.


I think the last time the startup issue came up on HN we concluded it was because they are mounting a squashfs filesystem image that was created with the compression turned up to 11, rather than using the kind of compression algorithm that is only slow at compression time and stays fast to decompress. That, and the general terribleness of squashfs.

Now keep in mind we can't blame squashfs here. It was developed for use on <16 MiB NOR flash chips for embedded devices, likely connected over SPI - the underlying flash and interface are so unbelievably slow that no amount of compression or terrible kernel code would ever show up in any kind of benchmark. Using it on super fast desktop machines, with storage that rivals RAM bandwidth and latency, is just the opposite of what it was developed for.
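
For the curious, the compression settings baked into an installed snap can be inspected directly (a sketch; the snap file name is illustrative):

    # snaps are stored as squashfs images under /var/lib/snapd/snaps/
    unsquashfs -s /var/lib/snapd/snaps/chromium_1234.snap
    # the superblock output lists the compression algorithm (typically xz)
    # and the block size the image was built with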


The thing that irks me about this is that we have carefully optimized file systems, virtual memory, and dynamic linkers over the years to try to make starting a program scale only in the number of touched pages, and somehow it was considered sane to destroy all of that and store the program compressed on disk, when we know from experience with every other package manager we had been using that the software wasn't what was using up all of our disk space :/. Most of what comes with software doesn't even compress well, as it is either hard to extract entropy from (machine code; we do compress it, but usually with more specialized algorithms) or already compressed (such as images)!

In contrast, iOS extracts packages to disk when you install them. Android has developers use zip files that are configured to not compress (to instead "store") anything the developer might expect to memory map at runtime, and then has them run the file through zipalign to add padding that ensures all the stored files begin at memory-mappable page offsets in the file. That allows Android to not extract the file (keeping it as a single unit as signed by the developer)--though it does still extract the code to prelink and now even precompile it!--while not compromising on startup performance.
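
A minimal sketch of the Android flow described above (file names are illustrative; zipalign ships with the Android SDK build tools):

    # store assets uncompressed in the archive so they remain memory-mappable
    zip -r -0 app-assets.zip assets/
    # pad entries so every stored file starts at an aligned offset
    # (-p additionally page-aligns stored shared libraries)
    zipalign -v 4 app-unaligned.apk app-aligned.apk
    # verify the alignment of an existing archive
    zipalign -c -v 4 app-aligned.apk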

From my experience squashfs can give quite okay perf if you use the right params and compression. I'd expect one of the largest Linux distros to tune those params correctly.

We are living in a world where Electron is acceptable. I expect nothing.

The Linux distros are usually not the ones pushing Electron, and this is one step lower than where an Electron app would run.

Is “quite okay perf” really the goalpost though? Using applications is what makes a computer useful. It’s bad enough that applications these days are terribly slow and inefficient to begin with; I don’t want even more unnecessary slowdowns on top of that.

I'm not advocating this use case for squashfs, just saying that the perf I'm seeing in snaps is not necessarily what a properly configured squashfs would give.

Is there any replacement for squashfs? I don't know of any other format on Linux that captures the state of the file system as fully as squashfs.



Squashfs is not a problem. Squashfs can be faster than ext4, sometimes many times faster (lots of small files).

Selected squashfs parameters are the speed issue.
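
For anyone who wants to compare, here's a hedged sketch of building the same tree with two different sets of mksquashfs parameters (directory and file names are illustrative):

    # high-ratio xz image, similar in spirit to what snaps ship
    mksquashfs ./app-root app-xz.squashfs -comp xz -b 1M
    # faster-to-decompress lz4 image with the default block size
    mksquashfs ./app-root app-lz4.squashfs -comp lz4 -Xhc
    # mount one and compare cold-start times of a binary inside it
    sudo mount -t squashfs -o loop,ro app-lz4.squashfs /mnt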


> But the problem is: it's the slowest thing in the world.

Exactly. I recently had to replace Chromium with Google Chrome just because the Ubuntu maintainers thought it'd be a good idea to replace the Chromium apt package with an installer that pulls in the snap…

Good thing Firefox is my main browser and I only use Chromium/Chrome for testing (and whenever websites forget that there's more than just one browser to test for), otherwise I would have long ditched Ubuntu, too.


I've been getting bug reports for a browser extension because of Snap. The file system isolation of Snap and Flatpak breaks Native Messaging, so extensions that need to communicate with a local app, such as password managers, are now broken.

https://bugs.launchpad.net/ubuntu/+source/chromium-browser/+...
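
For context, native messaging works by having the browser read a small JSON manifest and spawn the host binary it points at; a confined browser can do neither. A sketch of such a manifest (names, paths and the extension ID are illustrative):

    # Chromium's per-user manifest directory when installed as a deb:
    cat ~/.config/chromium/NativeMessagingHosts/com.example.passwordmanager.json
    # {
    #   "name": "com.example.passwordmanager",
    #   "description": "Bridge to the locally installed password manager",
    #   "path": "/usr/lib/example/native-host",
    #   "type": "stdio",
    #   "allowed_origins": ["chrome-extension://<extension id>/"]
    # }
    # The snap's confinement hides both this directory and the binary at "path".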


I had totally forgotten about this! For me it wasn't just the extensions. The font in the "Save Download As…" dialog in the Chromium snap, for instance, has been completely broken for me – all characters get displayed as the infamous glyph-not-found block. To make things worse, the dialog by default suggests some random directory deep inside the snap tree as download destination. Good luck navigating to a folder where you'll find your download again, given that you can't read any of the file or folder names…


I've been using Linux exclusively for over 14 years. Built several Linux hosting companies. I know Linux rather well. I've built a simple font manager for GNOME in the past. I know the weirdness of GNOME font thingies.

I've had this glyph issue for over a year. In chromium, Signal, some ebook app, and several other snaps.

I've tried many things. But gave up. Snap is not 'one layer of indirection' too much. It's hundreds of them. There's chroot, some VM, a virtual GNOME, containerisation, weird users and permissions. And so on.

This complexity made me conclude that snap is a bad solution (to a real problem). Not the glyph issue itself, but the fact that I cannot fix it, is, for me, the reason to conclude it has failed, or will.


Wait, since when is Signal a Snap app?


Snap is more like a virtualization tool that you see in VDI and Citrix environments.

It sets out to solve a couple of problems in an app-focused way. As with any packager that bundles dependencies, it introduces a few dozen more in the process.


Snap is a distribution format, akin to .msi or .pkg.


No, incorrect. Snap is a distribution format akin to VMDK or anything else which is not intended to be "installed" to a system, but rather run in a virtual machine or sandboxed.


If we are getting into details I think calling it a "virtual machine" is not correct either. Also since snaps have some integration with the host DE through launchers and so on they are not like VMDK's.

To be even more pedantic, there's nothing stopping you from using VMDK (or other disk images like VDI, VHD, raw/dd, etc) files as a container (like tar or zip), other than some extra overhead. They can easily integrate into your desktop environment, possibly even easier since there's probably more support for mounting disk images than there is for mounting archives...

Well, a squashfs file is already a disk image, so that's pretty much what they are doing.

Snap is already a standard in the IoT world. It's on the desktop that it struggles.

Right! I ran into this not five minutes into a new 20.04 install and could not for the life of me figure out where the hell my download was, or why I couldn't access /home/julian/Downloads

Why this isn't more of a deal breaker I have no freaking idea.


This could be related to why Chrome is using a different font size than the rest of my system. Chrome is the only snap program that I have. The rest are plain Debian packages or Flatpak.


I'm not a fan of Snap, but I'm surprised something like this font issue passed QA.


I think you overestimate the amount of QA resources available to the Ubuntu project. Looks like this was an issue that only happened after repeated use and seems somewhat related to individual systems' font configurations?

There's more info here: https://bugs.launchpad.net/ubuntu/+source/chromium-browser/+...


I hope they conduct automated testing with screengrabs for high profile apps such as Chromium (that have something to gain from the snap isolation) and that the test cases have this added.

> I think you overestimate the amount of QA resources available to the Ubuntu project.

  s/available to/allocated by/
I don't know that Canonical cares too much about doing QA themselves. It seems like they've decided to delegate that to the userbase.

Snap programs won't let me access /tmp/ which is really annoying ;(


Yeah, and I can find no way to either modify the fs or at least mount something into it from outside. Opera, I think it is, requires codecs be installed in a specific directory. So, the snap is effectively useless.

oh my, so THAT is why it takes over 5 seconds to start Chromium on my Xeon...


That is really disappointing to read. We put a lot of effort into making Chrome start quickly. Back when I was responsible for Linux Chrome, I gave a lightning talk at the Ubuntu Developer Summit about it, and demoed Chrome starting faster than GNOME Calculator.


Not being snarky towards you — Chrome starts really quickly — but was that the snapped calculator from Ubuntu 18.04?


Hats off, Chrome starts instantly on my Debian machines. Even on a 10-year-old, half-dead ThinkPad it's just a couple of seconds. This must be an Ubuntu/Snap-specific issue.

Today, starting faster than Ubuntu's calculator is not that hard. I bet this also became a snap app.

firefox from the ubuntu repository is over 5 times faster to start. on any windows machine, it's almost the opposite. sorry to break the news. for what it's worth I still use it everyday... I usually start my browser once a day so it's not that big of a difference.

Both chrome and Firefox on Ubuntu are basically instant for me. Chrome is installed without the snap rubbish tho.

I have been using the snap versions of Chromium, Discord, CLion and Spotify for a while now and I haven't noticed any slowness whatsoever. Sure, the applications need some seconds (at most 5) to start up on a freshly booted machine, but other than that it's pretty snappy (heh). The only issue I have encountered so far is that some applications don't respect the environment's cursor theme (looking at you, Spotify and Postman), but that's easily fixable by the package maintainers. Other than that I really don't understand the hate-train for snap. Coming from Arch Linux, PPAs seem like a PITA to me and an elegant solution such as the AUR doesn't exist in the Ubuntu ecosystem, so snap / flatpak are the next closest thing to it.


Most applications on Macs start up instantaneously these days. 5 seconds is 50 times the limit for "instantaneous" feel. [1]

All of the raging discussions in this thread would be totally absent if Canonical had taken the time to make apps installed with snaps fast. Unfortunately, these days, the "make it work, make it right, make it fast" mantra seems to stop at the "make it work". At least the 20.04 release seems to be at that stage.

Congratulations, you just got a bunch of users who are going to avoid updates even more because you are going to make everything slower with your shiny new release.

-------------------

[1] https://www.nngroup.com/articles/response-times-3-important-...


No, most apps are slow the first time they are run on Mac (not as bad as first run of snap though, usually). After that, pretty fast. AppStore apps seem to always be fast.


The 5 second load time is exactly the slowness people are complaining about. It's a ~100x slowdown.


> Coming from Arch Linux

Genuinely curious, why would you not just keep using Arch?


In my case, too much gardening.

I got sick of it after six years or so and moved to Linux Mint. (This was before Manjaro was widely visible.) Been on Mint ever since: it's a better Ubuntu than Ubuntu, and a better Windows than Windows (for ordinary uses).

Note that Mint 20, although based on Ubuntu 20.04, has removed snap from the base install. `apt install chromium-browser` takes you to a web page explaining why.


I'm curious to the kind of gardening you're referring to, that applies to Arch but not Mint?

I will say, Linux Mint is my recommended Debian based distro for desktop use, it really is a better Ubuntu than Ubuntu.


Debian Stable is the last great linux distro. I keep trying to go back to Fedora or Ubuntu but they are so damn buggy.

And as someone living with OCD (the real thing) the whole Snap thing just makes me so anxious. And like you experienced, it is soooo slow. I cannot even go back to windows because for some reason the wifi on my laptop does not work well with it.

I do not care about bleeding edge anymore, but I am a casual user mostly doing genetic research, so Debian Buster with Cinnamon is, to me, the best distro on Linux today.


Debian Sid is also a great option for people who used to be into the bleeding edge Linux distro scene but can settle for Debian's more conservative version of it.

I've been loving Pop OS! lately. It feels like a "what if" version of Ubuntu had they not spent years pushing Unity, Snap, and their other side projects and just focused on the distro. Super responsive and easy to use, regardless of if you're using free or proprietary software.

In my dotage, I’ve come to appreciate the Debian Stable cadence: I can just set it and forget it for a year at a time. The mostly-annual upgrades have saved me so much time; everything just works in concert for long stretches of stability.

> I care about a particular piece of software because it's the reason I use a computer (go, Emacs, Node, etc.) then I just have to maintain it myself. There simply isn't a good way right now.

Don’t do this to yourself! Debian is not going to give you bleeding edge. But there are plenty of distros that can. Despite being a meme, Arch Linux is one of the best distros available, and has been for years. Node, golang, are usually updated within hours of upstream, while the core system remains stable. If you’re looking for something more modern, Solus has been gaining popularity and also has relatively up to date packages. Debian is great for servers.


Arch has two major differences from Ubuntu and other non-rolling distributions.

1. You're using the latest upstream, after it has passed through the archlinux-testing repository. This means you won't have to install software from somewhere else just because the repos are outdated. (Nightly builds of software are different.)

2. Sometimes you have to sys-upgrade to install/fix something, and you don't have much say in when that happens. Typically you need to preemptively do this about once a week to not potentially get interrupted by manual intervention at a bad time. Yes, you will need to do manual intervention, but you won't spend much time on it. It's far less work than getting up-to-date software running on your debian stable.
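
The weekly routine boils down to something like this (a sketch; checking the news first catches the rare cases that need manual steps):

    # read https://archlinux.org/news/ for any required manual intervention, then:
    sudo pacman -Syu    # full system upgrade; partial upgrades are unsupported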


My favorite part of Arch is using the AUR and customizing PKGBUILDs to suit my needs. I recently was messing around trying to get the Emacs native-comp development branch built on my machine. I was running into some issues, then tried in an Ubuntu container. Eventually I just fired up my Arch VM, found an AUR package for it, tweaked the PKGBUILD options and dependencies for my needs, and then had it compile on the first try. The build worked great, and once I'd native-compiled everything in my Emacs configs, it was super fast to start up even in the VM.

Arch will give you small issues every now and then, but it gives you the tools to fix them and makes it easy to do things that are much more difficult on other distros.
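
The workflow being described is roughly this (a sketch; the package name is just an example):

    git clone https://aur.archlinux.org/emacs-git.git
    cd emacs-git
    $EDITOR PKGBUILD    # tweak configure options and dependencies as needed
    makepkg -si         # build against your installed libraries and install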


I can't really agree. To be fair, it's been about 5 years since I stopped using Arch, so things may very well have improved. But my experience using it as my daily driver on my work ThinkPad was that I kept running into quite a lot of problems every other week from upgrades breaking things. It was of course always fixable, but I probably spent a few hours every month just on maintenance. That's fine for a hobby machine at home, but not really something I want in a work environment. I'm not particularly fond of Ubuntu either; it was fine in the 00s but then they lost it. Nowadays I primarily use three operating systems: FreeBSD, Debian and macOS. They are all somewhat "boring" but also tend to mostly keep working without bothering me too much, and most things are easy to look up and well documented.

Debian absolutely can give you bleeding edge if you just change your repos from their stable release to testing or sid/unstable. Same thing applies to Fedora and pretty much every major Linux distribution.


Sure it technically can. But it’s not how the distro is designed. The testing repos are meant for just that: testing. I’ve run Sid and always had issues with major version upgrades. Rolling release is just a better model IMO.

The point of using Debian or Ubuntu LTS is the reverse of getting bleeding-edge software. Sure, you want your browser and a couple of other pieces of software to be up to date, but the rest should stay the same and only get critical/minor updates.


If you need more control over versions, NixOS is by far the best choice. You can mix and match versions of software from stable & unstable, depending on your needs.
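
One way to do that kind of mixing from the command line (a hedged sketch; on NixOS proper the same idea is usually expressed in configuration.nix instead):

    # add the unstable channel alongside the stable one
    nix-channel --add https://nixos.org/channels/nixos-unstable unstable
    nix-channel --update
    # install a single package from unstable while the rest stays stable
    nix-env -iA unstable.go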


Manjaro is a really great option too, if you want something Arch flavored that’s a bit more user friendly.


Definitely this. Manjaro is out-of-the-box ready, but based on Arch. Manjaro stable is also two more testing steps away from Arch, so the packages are less likely to cause issues. And there are four sources for software: Manjaro repos, the AUR, flatpacks, and snaps. I have never been in a situation where I could not install the latest software via one of those routes. I ran Debian for years and often ran into situations where new software was just not installable due to library incompatibility, etc. Manjaro feels like a Swiss Army knife in comparison.


And with all the options you'll almost never have to rely on snaps.


I couldn’t get Manjaro to install successfully. First operating system I’ve tried to use that just didn’t work. Their easy install thing is a bit too easy, in that it can’t even tell you what went wrong. Also docs were very weak.

I like what they are trying to do but it just did not work for me.

Arch is hard to figure out too with the wiki based docs but at least that’s advertised as unfriendly.


Yeah, I've used it for a while with no issues on my "living on the edge" machine. I still stick with Ubuntu for my daily drivers. I don't understand the rant though. I understand why they moved Chromium to snap.


Fedora is also a solid OS that allows you to easily stay near the bleeding edge.

Honestly, try out Linux Mint. I'm on 19.3 and it's the best OS I've ever used. Convenience of Ubuntu repos and ease of use, no Snap, some fantastic QoL widgets and apps developed for it, a great DE (well, you can choose multiple DEs but I love Cinnamon). If you want a Debian package base you can even get LMDE (Linux Mint Debian Edition) which has all the perks of LM, but based on Debian instead.

I want to like Linux Mint but their Cinnamon DE still breaks stuff occasionally that expects either pure GNOME or Unity. Probably the devs' faults for hardcoding it to those two DEs, but still.

At one point Ubuntu had lots of their built-in apps on snap. I distinctly remember firefox starting seconds faster than the simple calculator or system monitor apps.


Indeed. That the calculator takes longer to start than Firefox should have caused the project to be abandoned.

I see the value of file system and dependency isolation, but it shouldn't result in such significantly degraded performance.


Or at least a hard stop on migrations until the performance characteristics were hashed out.

Sadly I noticed a while ago that Windows began to outrun Ubuntu on my older machines; it doesn’t seem like Canonical really cares about performance anymore.


I'm surprised Mark Shuttleworth allowed that. He has a very low tolerance to mistakes.

To me, Ubuntu's glory days were with 10.04. It seemed much more polished than the alternatives at that time.

IIRC Shuttleworth took a more governance and less hands-on role after that, and Ubuntu started to focus on server and enterprise support.


I saw him tearing people down in 2015-ish. And for much less than this.

I've started to use nix (both nixos and nix installed on ubuntu) to manage those critical packages where I want both control and the option for major updates outside the OS's release cadence.


I tried nix on Ubuntu recently so that I could get an up-to-date, non-snap emacs (if you already think emacs is slow to load, you should see it as a snap), and the resulting build was giving me all kinds of issues, even with just the vanilla config. I tried one other package and had issues there as well. I ended up just compiling both from source and they worked perfectly.

Is there anything special about getting nix working well with other package managers? I'll be honest, the main thing keeping me from digging into it further are the docs and the syntax. I can never tell if I'm reading about nix-the-package-manager, nix-the-language, or nix-the-operating-system; and looking at the syntax makes me understand what most programmers feel when they look at lisp.


Yeah, the syntax is intimidating. It's definitely one of those things where it's easier to start off by copying examples and then slowly learn all the details. After using it for a while it mostly starts to feel like a slightly different JSON syntax (there are a few exceptions to this, but you don't end up needing them for most of what you do).

Sorry to hear you had issues on Ubuntu. It's hard to say how you might improve your experience without knowing more details. The Nix forum is probably a good place to get support for that sort of thing.


I'm sure I could track down the issue, I just didn't have the time. I was just hoping to use it to install a more up-to-date package here and there just to dip my toes in the water and go from there. I really like the idea behind it, and hope it or something similar catches on. I may just have to throw myself into the deep end and try NixOS out in a vm.

Yeah, you kinda have to drink the whole koolaid before it clicks. It’s great when it does though.

It only clicked for me when I started using NixOS (on my primary laptop in my case) rather than just Nix.


How does it compare to using Ubuntu with Ansible? Do you ever miss being on mainline stable Ubuntu/Debian when using NixOS?


In terms of having your system managed from configuration, it's sort of similar to Ansible. One major difference is that with NixOS you can easily roll back to previous states of the system. That means both rolling back config changes and also rolling back package versions. That's something Ansible doesn't really give you. NixOS also forces you to configure things the "right way" (e.g. you can't hand-edit files in /etc). That is very good for reproducibility, but sometimes it's frustrating when you just want to make things work quickly.

I think the biggest challenge using NixOS vs Ubuntu is if you've got some weird obscure piece of software you need to get working there's a better chance that someone has already figured that out on Ubuntu and you might have to do the work to get it running on NixOS.

On the other hand I've found contributing to Nix easier and less intimidating than contributing to Ubuntu. To add a package to Nix you just open a PR in the nixpkgs repo on github. I've found the community to be friendly and helpful.

I use a lot of LXD containers for when I just want to play around with something in a non-Nix environment.

Oh and I love being able to run `nix-shell -p <package>` to use a package without "installing" it.
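
Usage looks like this (the package name is just an example):

    # run a command inside an ephemeral environment containing ripgrep
    nix-shell -p ripgrep --run 'rg --version'
    # or drop into an interactive shell with the package on PATH
    nix-shell -p ripgrep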


NixOS is leagues above Ansible and similar. They are barely even playing the same game.

The TL;DR is that Ansible is given a description for some part of the system, then squints at that part and tries to make it match the description. This means that it doesn't touch anything that you haven't described, and if you stop describing something it doesn't go away (unless you explicitly tell Ansible to remove it). So your Ansible configs end up unintentionally depending on the state of the system, and the state of your system depends on the Ansible configs you have applied in the past.

NixOS is logically much more like building a fresh VM image every time you apply the configuration. Anything not in the configuration is effectively gone (it is still on the filesystem but the name is a cryptographic hash so no one can use it by accident). This makes the configs way more reproducible. It also means that I can apply a config to any system and end up with a functional replica that has no traces of the previous system. (other than mutable state which Nix doesn't really manage.)

Nix has other advantages such as easy rollbacks (which is just a bit more convenient than checking out an old config and applying it manually) and the ability to have many versions of a library/config/package without conflicts or any special work required if you need that.

I wrote a blog post a while ago that tries to go a bit more into detail over what I just described https://kevincox.ca/2015/12/13/nixos-managed-system/
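
The rollback mentioned above is a single command (a sketch):

    sudo nixos-rebuild switch               # apply the current configuration.nix
    sudo nixos-rebuild switch --rollback    # return to the previous generation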


It seems to me that if Nix were a little more beginner-friendly, it could fill a lot of the space Docker occupies.

I was setting up WSL2 the other day and needed Chrome so I could run it headless for some E2E tests I had going. It took a lot of messing about to get a copy of it that didn't use Snap. Partly because apt has always worked great for me and I didn't want to switch to something else, but also because snap requires systemd to run its daemon?

That they clobbered the apt install just to push Snap forward left a bad taste in my mouth.


It's worth noting that you can use Arch and various other distros on wsl[2], not just Ubuntu.


It is not official; like with the AUR, check the sources.

http://archlinux.2023198.n4.nabble.com/Windows-Subsystem-Lin...


You can officially use any official image on WSL2 https://docs.microsoft.com/en-us/windows/wsl/build-custom-di...
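
The import itself is one command from PowerShell/cmd (a sketch; distro name, install path and tarball are illustrative):

    wsl --import ArchLinux C:\wsl\archlinux .\archlinux-rootfs.tar --version 2
    wsl -d ArchLinux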

I don’t understand why it's so hard. It's literally: visit the Chrome website, download the deb file and install it...
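
For reference, on Ubuntu/Debian that amounts to (using Google's long-standing direct download URL):

    wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
    sudo apt install ./google-chrome-stable_current_amd64.deb
    # the package also sets up Google's apt repository, so later updates come via apt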

Ah I'm glad to finally understand why Emacs 27 starts so slowly: I installed it as a snap (while my Emacs 26 was from a PPA), and I spent a great deal of time trying to find what was the culprit in my init.el config, to no avail.

About the slowness - is it just on launch, or does it keep being slow if the app keeps running? A few hundred ms is indeed a deal-breaker for standard Unix-y CLI app that might be chained with 5 other simple apps. It doesn't seem like a big deal though if you're launching something like Chromium that you might normally leave running for weeks.


The problem is that if you use chromium in any automated testing suite, it can get restarted after each test making a test suite take much longer to run.


For me it's just on launch. More precisely, the period before the application actually starts to launch. I guess it has something to do with mounting the various file systems and setting up the connections or whatever it is that it does.

I've had to install Chrome to try a thing or two (so not as a daily driver) and I haven't noticed anything weird during use. I've used JetBrains' PyCharm daily for a few months and no problem there either (although that is a "classic" snap, not sure if it matters).


It's still pretty bad even for GUI apps. Do you really want to add 0.3s to the startup time of Chrome? I would definitely notice that.


Chrome uses a deb. Also, I do not remember the last time I closed Chrome. I can remember my whole system breaking because of PPA hell.

Have you tried Chromium on Ubuntu? It takes several seconds, maybe 10 on my machine, whereas Firefox loads in under a second.


I switched from Debian to Manjaro after seventeen years this summer. The Arch-isms are odd at first but it is an enormous pleasure to have a system that is up to date, stable, and works with me instead of against me. That and brew for Linux (which is very rarely needed on Manjaro) has vastly improved my user experience. Doesn't address all of what you said, but hope this is useful.

I also abandoned Ubuntu, and after various adventures in distro land I now use Debian Testing with Xfce.

I am currently using Ubuntu 20.04 with a Sabrent Rocket M.2 SSD and an AMD Ryzen 3900X, and several times now my OS has locked up some programs for a short while (Chrome, Firefox, etc.) or become really slow. This computer I built is relatively new and used for work mostly, so I am not sure if it's Ubuntu or a stray snap package I downloaded, but man, I have noticed Snap packages can be ... slow.

I've used Debian, and it seems to be gaining defectors from Ubuntu; my only question is... what made you switch and stick with Debian?


I suppose I'm used to apt-based systems, although I don't know if it really makes any difference. So many other distros are based on Debian, but often it seems like a pretty thin layer, perhaps poorly maintained, over Debian itself. Debian seems to have so many more developers, so why not just use it directly.

The rough edges I've found: no automatic updater or security updates in testing, just run apt update/upgrade once a day. An initial problem with the video driver not working because it requires non-free firmware, solution is just to add Debian non-free (it would have been helpful if a warning was given during installation). Firefox/thunderbird still on an old long-term-service release: I've been installing tarballs manually for now.


> I blew up my Ubuntu install and switched back to Debian. I haven't missed Ubuntu at all.

What are the biggest drawbacks? I know you said you don't miss Ubuntu at all, but is there anything which is causing you pain because it works differently or is just missing?


Not sure about Emacs, but Node and Go can be maintained with `asdf`
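
For example (a sketch of the asdf flow, assuming the nodejs and golang plugins):

    asdf plugin add nodejs
    asdf plugin add golang
    asdf install nodejs latest
    asdf install golang latest
    asdf global nodejs latest
    asdf global golang latest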


I know Snap isolates dependencies... But does it share them if they are common at all? I think this would make the sweet spot...


Snap sucks. The way open source is straying further and further from its principles is highly annoying. The whole idea that you'd need a container like environment to install an application on a Unix system is very far from where we should be. What's the point of dynamic linking if we then end up shipping half an OS with an application just to get around dependency hell? Might as well ship a pre-linked binary that just does system calls.

And it's yet another way to do an end run around repositories, instead you will sooner or later get an app-store like environment that can be controlled by some entity. These large companies should stop fucking around with Linux, it was fine the way it was. Just fix the bugs and leave the rest to the application developers.


> The whole idea that you'd need a container like environment to install an application on a Unix system is very far from where we should be. What's the point of dynamic linking if we then end up shipping half an OS with an application just to get around dependency hell? Might as well ship a pre-linked binary that just does system calls.

There are two models.

The first model is the traditional distribution model. The distribution curates and integrates software, picking (generally) one version of everything and releasing it all together. Users do not get featureful software updates except when they upgrade to a new distribution release - all at once.

The downside of this model is that developers who want to ship their new software or feature updates without waiting for a new distribution release get stuck into dependency hell and have to operate outside the normal packaging system. Same for users who want to consume this. Third party apt repositories and similar efforts are all fundamentally hacks and generally all end up breaking users' systems sooner or later. Often this is discovered only on a subsequent distribution upgrade and users are unable to attribute distribution upgrade failures to the hacked third party software installation they did months or years ago.

The second model is the bundled model. Ship all the dependencies bundled in the software itself. That's what Snaps (and Flatpaks, and AppImage) do - same as iOS and Android. This allows one build to work on all distributions and distribution releases. They can be installed and removed without leaving cruft or a broken system behind. They allow security sandboxing.

All your objections seem to be criticizing the bundled model itself, rather than anything about Snaps themselves except that they use the bundled model.

If you don't like the bundled model, then you can carry on using Ubuntu 20.04 without Snaps just fine.


I agree with everything you write, except:

> If you don't like the bundled model, then you can carry on using Ubuntu 20.04 without Snaps just fine.

No, you can't. Ubuntu has started replacing apt packages with snaps. So if you want to install those packages (such as Chromium) you now have to use the snap.


You can install Chromium from a third party source just like you could before.

New distribution releases always add and remove packages. Chromium has been removed (as a deb). It is no longer packaged as a deb because it's a rolling release upstream and it was too much work to backport the featureful rolling releases to debs.


The transition in Chromium packaging is understandable from a maintenance point of view. However, the general argument that you can just keep using the system without using snaps "just fine" doesn't really hold if the snaps begin to replace rather than supplement software that used to be packaged without it.

The same is also somewhat true if first-class tools such as Software (as in the "store" GUI) begin to push snap packages as the first thing, because then you will, by default, need to go through additional steps to find the packages that you want, and which used to be available as the first choice.

You can still do things, of course, but it might be that you need to start getting around the snap-centric design choices more often, and at that point it doesn't come at no usability cost to you anymore.

I don't actually use Ubuntu at the moment, so I don't know how much that is the case now, but if it is (or if seems like it's going that way), I understand the frustration.


> However, the general argument that you can just keep using the system without using snaps "just fine" doesn't really hold if the snaps begin to replace rather than supplement software that used to be packaged without it.

Well, if you don't like it, you can always fork it. That's the beauty of open source. /sarcasm

Seriously though, all out replacing stuff with Snaps doesn't seem like the right move.


> it was too much work to backport featureful rolling release to debs.

Considering that (as other posters have mentioned) deb packages are released by the vendor in this case it feels like a flimsy excuse.


>No, you can't. Ubuntu has started replacing apt packages with snaps. So if you want to install those packages (such as Chromium) you now have to use the snap.

You can almost always install via deb if you want instead.


For _now_, you can.

Slowly boiling the frog is really effective.


You believe that Canonical will remove the ability to install from deb?


At some point, they likely will.

I'd wager they will require deb packages to be signed with a certain Canonical key that they will use strictly for basic system packages. Everything else will be a sandboxed Snap, left to third-parties to maintain, distributed through a Canonical Appstore that enables payments.

Maybe they will give you an option to "root" the system, and if you use it you'll lose any right to support or updates from Canonical.

Snap is fundamentally a commercial play to reduce support costs and enable an appstore.


IF they do that I'm obviously leaving their ecosystem, though I'm just a consumer so no clue how much effect I would have.

I really don't see what benefit that would provide them over other distros though and I'm not sure why they would make that choice to close down their system?

Fundamentally Linux works so well because of the free software movement and I don't see any app maintainers choosing to charge a fee for their software if they aren't already.


Arguably the Linux desktop ecosystem does not “work so well”. Adoption rates are still minuscule when compared to Apple or Microsoft numbers, and there is precious little support from commercial developers. The “year of Linux on the desktop” never happened, even after Canonical made it really simple to run Linux, so commercial support is still lacking - which in turn keeps users away. The Snap play is their latest attempt to increase monetization rates on the platform.

And it will help to kill what little adoption there is. This really should stop.

Ubuntu concentrates on servers, where security patches are imported from Debian. Desktop and snap are a very small part of that.

If anything can go away, it's Ubuntu desktop.

Snap may stay, since it is the de facto standard in the IoT world.


The third model - rolling release. No need to test against many releases, no outdated dependencies, and why do so many people want to run the latest browser on an outdated system anyway?

The fourth model - multiple versions of dependencies living on the same system, with an environment constructed for each application. Deduplication works.

The bundled model has its value (just like a VM does), but it really only shines when package management is inadequate. If you don't like the bundled model, switch to Arch Linux or NixOS.


I can't believe that in 2020 we're still discussing what is the best way to package and distribute software. Can anyone explain why it is taking the profession so long to get this fixed?

Because different solutions have different tradeoffs. It doesn't appear that there is a single solution which is best for all use cases.

So different systems switch between different methods as their perceived value of the different use cases changes, which upsets people who weigh the tradeoffs differently.

If you have the perfect solution I'm all ears.


One issue on Linux in particular is the [relatively] tight bond between your kernel version and libc, which makes using software compiled against a different version of libc problematic some of the time (a particular libc version still supports multiple kernels, though).

I experienced this lately when my Rust-compiled binary used too modern a libc version for the aging Docker container environment we used for deployment, which forced me to use another Docker container for local development -- which obviously isn't ideal and removes the 100% reproducibility promise.
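
One common way around that particular libc mismatch, if the project allows it, is to statically link against musl instead (a hedged sketch, not necessarily applicable to the parent's setup):

    rustup target add x86_64-unknown-linux-musl
    cargo build --release --target x86_64-unknown-linux-musl
    # the resulting binary carries no dynamic libc dependency, so it runs in
    # the older container regardless of its glibc version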


Because we love re-inventing the wheel instead of repairing the wheel.

It's a fundamental tension between "change everything once in a big step" and "many small changes".

The first is pre-internet: updates cost money, and pre-internet security issues weren't common.

The second is now: change one thing every day, always run the latest code, use automated testing to make the latest code always work. It also means you don't need separate security and feature branches.

People hate change, and Linux people have the ability to say no and do their own thing.


Because it is a hard problem?

Python has easy_install, pip, anaconda and wheel; virtualenv to isolate packages. Node has npm and yarn (is it resolved?). Ruby gem defines version in code, bundler in Gemfile, Gemfile.lock; vendor/, rvm gemset, BUNDLE_PATH to isolate packages. Even developers can't find right answer.

Because it matters?

Package management is the main differentiator, I love pacman, I love PKGBUILD and makepkg.


I don't like the bundled model. I'm running Arch on my main machine currently. The AUR makes getting third-party packages easy. Unfortunately, I need VirtualBox for a class, and Arch's VirtualBox package won't run on the current Linux 5.8 kernel, and downgrading Linux on Arch is not easy. I switched to Windows for VirtualBox, but in retrospect I could've installed linux-lts.

Ubuntu is more stable in theory, but I've encountered broken packages (like hex editors with an incorrect hard-coded temp path, causing it to be unable to overwrite files).


It is not super simple, but I lived with an outdated kernel and Xorg for a couple of years (because of Intel Poulsbo [1]). The process is described in the wiki [2] and uses virtualbox-host-modules as an example.

    $ pacman -Qi glibc
    Depends On      : linux-api-headers>=4.10  tzdata  filesystem
1. If the version is not present in the Arch Linux Archive anymore [3], search for the package page [4], click View Changes, find the corresponding PKGBUILD revision [5], and run makepkg. It is much easier if the package is not that old and can be found in /var/cache/pacman/pkg/ or the archive.

2. Install with `pacman -U linux-4.15.8-1-x86_64.pkg.tar.xz`

3. Keep the package from being upgraded via IgnorePkg in /etc/pacman.conf [6]

I had problems both with Ubuntu and Arch Linux updates. At least I can fix Arch Linux issues, Ubuntu felt broken.

[1] http://sergeykish.com/linux-poulsbo-emgd

[2] https://wiki.archlinux.org/index.php/Downgrading_packages

[3] https://archive.archlinux.org/packages/l/linux/

[4] https://www.archlinux.org/packages/core/x86_64/linux/

[5] https://github.com/archlinux/svntogit-packages/commits/packa...

[6] https://wiki.archlinux.org/index.php/Pacman#Skip_package_fro...


But then you can't run leading-edge hardware... My motherboard's network interface, wireless interface and my GPU all require a kernel version that's newer than what Ubuntu 19.10 ships (and even 20.04, for stability), let alone the last LTS release.

Ukuu worked well for me to get newer kernels without issue, but KVM/VirtualBox were totally borked, just as my hardware support became stable.


Sorry to hear that. KVM does not work? I'd expect problems with VirtualBox, but KVM? Can you share the story, please?

The latest Manjaro has the latest VirtualBox in the package repository. It is easier, imho, to switch from Arch to Manjaro than to go to Win10.

I agree with your post except for this:

> If you don't like the bundled model, then you can carry on using Ubuntu 20.04 without Snaps just fine.

While it's true you technically can, at some point you have to ask yourself why you're going against the grain instead of picking a different distro (or not upgrading to 20.04, at least). Ubuntu wants you to use snaps. You can sidestep this, but you should ask yourself if you shouldn't just switch distros.

I just recently upgraded from Ubuntu 16.04 to 18.04. I don't see the reason to upgrade to 20.04 as long as the software I need works and I keep getting security fixes. Once this LTS gets EOL'd, I'll see what the current deal is with Ubuntu and seriously consider switching to another distro.


Manjaro has 99% of what Ubuntu has, runs a much more modern kernel, and has packages for literally anything you can think of in the extended software repository (namely Arch Linux's AUR).

I regret not taking the plunge and installing Manjaro against the will of our IT during onboarding (they don't forbid it, they simply can't promise good support if you don't install Ubuntu). I am sure I could have found a way to install the 2-3 corporate VPN / spy / monitoring agents my employer requires on the machines they issue for employees.

But anything I've needed of Manjaro, I always got it. Granted that's an anecdotal evidence, obviously -- I haven't tried running games, for example.


> If you don't like the bundled model, then you can carry on using Ubuntu 20.04 without Snaps just fine.

Not really. The platform, Ubuntu in this case, has clearly signaled that they want to move from debs to snaps. And apart from the technical benefits of snaps (sandboxing) you also get the drawbacks (they are slow, updates are forced, and there is only one source for them). What you propose is fighting the platform you are staying on, and that's not a great place to be. Far better to move to platform more aligned with your choices as a user.

P.S.: If anyone is looking for quick and dirty suggestions, there's pop!_OS which is quite close to Ubuntu so there won't be many changes to the user experience. In my case it was Fedora. The linux ecosystem is diverse enough to offer many choices.


ROFL. When did they signal it?

Servers are all deb, which is the main source of their income. If anything goes away, it will be Ubuntu Desktop.

Snap may stay, since it is the standard in the IoT world.


> The downside of this model is that developers who want to ship their new software or feature updates without waiting for a new distribution release get stuck into dependency hell and have to operate outside the normal packaging system.

When I was writing apps for distros, I had the opposite problem. Every single release of GNOME and Ubuntu would break PyGTK in some way, so my software, which used zero new features of the OS, would require at least some modifications and force me to maintain many versions. Finally, GTK changed something that broke things in a fundamental way (something in GtkTreeView, I forget what exactly) and I simply gave up maintaining the software rather than suffer through figuring out how to rewrite everything again.


Wouldn't some kind of static linking of the right GTK libraries inside your binary help? Completely newbie question, I am not claiming anything.

I agree with most of your post. However the last line is off.

> If you don't like the bundled model, then you can carry on using Ubuntu 20.04 without Snaps just fine.

This blog post and many of the contents are claiming that you can't just use Ubuntu without snaps anymore as they are being forced upon users.


Those of us that prefer bundles have been "forced" to use centralized repositories our whole lives.

Isn't the sandboxing a good idea though? It feels that Linux got caught in the past and is actually one of the least secure OSes out there, and what keeps it safe is just its small desktop market share.

Disclaimer: I never used snap and don't use Ubuntu.


Flatpak gives you a sandbox, and has basically skirted around any shortcoming of Snap (faster, open, extensible, adopted by every other distro, doesn't pollute /proc/mounts)


Flatpak refuses to respect the host's values for XDG_CONFIG and HOME, making it impossible for some applications to share your configuration.


Flatpak's sandboxing is an absolute joke and entirely voluntary.


At worst, the "joke" sandbox value of flatpaks is the same as that of deb/rpm packages and snap "classic" confinement. So as a user, you don't lose anything security-wise by moving to flatpaks.

Flatpak, as well as snaps through classic confinement, allows the developers to "escape" the sandbox because they know that they don't have all the permissions required to provide feature parity with deb/rpm packages. Another reason this is needed is that application developers are not writing their applications with flatpak compatibility in mind. However, flatpak is going in the right direction.

Mobile operating systems have proven the value of sandboxing apps.


> is an absolute joke

Please elaborate.

> entirely voluntary

Fedora Silverblue would like to disagree. And in any case all parties know it's still in flux, but the stable parts are stable in my experience. I would not want something like Snap or otherwise immature forced down my throat.


Entirely voluntary in the sense that flatpak enforces sandboxes which the application tells it to enforce. If the app asks for full system access flatpak doesn't deny that. It kind of destroys the purpose of a system built to run third-party code. If I download a malicious app from flathub or some other repo which asks for full system access flatpak doesn't do anything in the name of security.

Note: I'm not making a value judgement about flatpak's sandboxing, merely describing it to the best of my knowledge.
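
You can at least inspect and tighten what a given app gets (a sketch; the app ID is illustrative):

    flatpak info --show-permissions com.spotify.Client
    # drop home-directory access for this app, for the current user only
    flatpak override --user --nofilesystem=home com.spotify.Client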


Enforcing permissions and auditing third party code are two different things. The point of app permissions is that when you know an app should never do something, you constrain it from ever doing that. Then if it has a security vulnerability, it's limited in the damage from the resulting compromise.

The people configuring the sandbox should be packagers that you trust. The upstream developers might provide some recommendations to the packagers, but if it's obvious that an application shouldn't need a permission, it shouldn't have it.

But permissions can't save you from an actually malicious app. Constrain it from accessing the camera and it will still be using your device to host pornography. Constrain it from accessing the filesystem and it will still run up your electric bill mining bitcoin. You either need to trust the developer or you need to get the app through someone you trust to have audited it for you.


I don’t want the sandboxing to be mandatory! Some software inherently doesn’t work in a sandbox, or works less well. A good distribution format should still cover those use-cases, the user just needs to be informed about the capabilities of what they install. Especially on Linux, which is supposed to be about user-empowerment.


None of the Flatpak apps I've used on Pop!_OS work as correctly or as stably as installing a deb directly.

I’ve not been impressed at all.


YMMV. Firefox is the gold standard, at least on Arch Linux. Never have I noticed it was running in a sandbox. I run Discord, Spotify, Thunderbird from Flatpak without any issue either.

Visual Studio Code is a regular package, just because its use case is not really suited for a sandbox (access to system binaries, libraries, etc.), but I honestly haven't even tested its Flatpak version.


VSCodium flatpak works fine for me. The caveat is that I use the operating system's terminal instead of the one from VSCodium (for things like make test, python environments, etc). VSCodium's terminal operates on a sandboxed environment and I haven't bothered to figure out how it relates to the OS environment that I get from the terminal app.


You set your shell to some wrapper which allows sandbox escape. It works for the shell itself, but then you need to apply it to all the external utilities extensions use as well, and it gets annoying.

Interesting, are there any examples on the internet for that process?

The problem with Flatpak is that many apps don't really run in a sandbox. Or at least not in something with the isolation features you might expect from a sandbox.


To be honest, to me the selling point is not the sandbox, it's the reproducible environment that's common to all distributions, which is a first in the Linux world; the sandbox is just the icing on the cake.

The other day I pushed an update to some flatpak-packaged app, and guess what, it's available to all Linux users. Packaging has become incredibly easier for third parties with these kinds of technologies.


The problem is the combination of how updates are (often not) done (in time) + no sandbox.

I am also a strong believer that the future for sane desktop PCs is that every program (except the most fundamental core services) of a desktop OS should be sandboxed by default, with basically no permission to access any local files or communicate with any other program/service.

It would need some MAJOR changes, slowly step by step. And I had hoped with snap & flatpack we would be slowly transitioning there. But it doesn't really look that way anymore to be honest.

(PS: It can be done with a reasonable UX, without requiring the user to configure some magic access rules or anything, but it's tricky to get right and it won't be fully backward compatible. Often the changes just need to go into the GUI toolkit (Qt, GTK), so it should be possible.)


I guess just to provide a counterpoint, I can't remember having had any issues with flatpak on Pop!, including both open-source apps and proprietary ones like Spotify.


How about Zoom? It was the worst, and completely unusable for anything more than viewing a meeting.

No issues for me--I've run Zoom pretty regularly, including participating in some fairly large meetings, plus screensharing, etc.

Occasionally I need to replug my headset in at the beginning of the meeting to get my voice audio working, but I'm not sure if that's actually an OS issue or not. Either way, it's never taken more than a couple seconds.


Access control is a good idea; sandboxing on top of it is saying it's too nuanced a problem, so here's a kitchen sink of overhead.

Linux has had some of the best software security through MAC (SELinux, AppArmor) and cgroups. The problem is that there is no culture in free software of actually writing security specifications at the point of development; it generally falls on distros / maintainers to try to sort out MAC profiles or cgroups restrictions on a per-package basis.

That is why the packagers largely went with the Snap / Flatpak route of saying "screw doing the grunt work, here's a total sandbox with all the libraries built in."

It would be great if we could convince the whole ecosystem to start provisioning access specifications for libraries and binaries so upstream could start building apparmor / selinux profiles from provided files rather than having to do learning mode auditing that drove distro maintainers to not even bother.


It's not.

For instance: the Pinebook Pro just got GLES3 support via upstream Mesa, but all the flatpaks bundling Mesa haven't updated, or won't with any alacrity.

Users are left having to abandon sandboxing in order to get necessary updates.


The Pinebook Pro uses an arm64 CPU instead of x86. That creates a big list of problems you don't have on x86, because that's what all Linux desktop developers have (mostly) been targeting.

As one example, I was surprised to find that the tor browser doesn't have an arm64 build :o


I've installed a few dozen AUR packages that only needed aarch64 added to the arch list. I think only one, syncterm, required any other modifications to build and even that was just setting a preprocessor define (__arm__).
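
That tweak is the PKGBUILD's architecture array (a sketch):

    arch=('x86_64' 'aarch64')    # allow makepkg to build on the Pinebook Pro's CPU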

Thus far, it's my favourite laptop purchase ever. Beats the feeling I got with the IBM X series or the Ti Book. It's light, zippy enough, great Linux hardware support, and totally silent.


Even if you did swap a symlink to upgrade to GLES3, all the apps will still call the GLES2 functions. No matter what, the devs would have to go back and rewrite their app to actually use the new features.


Many apps detect GL level and adapt renderers accordingly.

Still, it's not just graphics drivers: there are the oft-cited security patches, but also features for user-facing interfaces; i.e., an update to a URL-parsing library might add additional codepage support transparently to an app that uses it.


The freedesktop runtime updates Mesa faster than most distributions. The current version is 20.1.5.


And Arch let me build and install git HEAD, so I'm using 20.3.


Forcing the Arch model on everyone is a perfect way for Linux to forever stay a minority platform. There's a reason why most people do not run rolling distros.

Might be nice for hardcore fans, but good luck supporting anything there.


There is a very real shift happening from Ubuntu to Manjaro.

If the sandboxed app package model is what you desire then there's already a great and popular Linux distro for you: Android.


I've observed a lot of recent Manjaro adoption lately (I don't know what the real numbers are). Anecdotal, but in my social circles I'm seeing people moving from Ubuntu to either Manjaro or Arch.

Don't be surprised when all the mainstream distros switch to the sandboxed app model as the preferred one. Distro-based packaging of the entire world is hitting the not-enough-manpower problem today and will eventually be relegated to super-fan distros only, with a similar status to what the Amiga has today.


What is Manjaro?


Best way to describe it is: a user-friendly version of Arch that is based on the AUR (Arch User Repository).

As an Arch Linux user I am completely confused. There doesn't seem to be an official comparison page like [1]. The first web search results [2], [3] give another picture (more stable, graphical installer, graphical package manager, hardware detection, prepackaged desktop environments) that I can relate to.

Nothing about being "based on the AUR", because that is a liability. It is insecure; you have to check the PKGBUILD on every install and update.

[1] https://wiki.archlinux.org/index.php/Arch_compared_to_other_...

[2] https://linuxconfig.org/manjaro-linux-vs-arch-linux

[3] https://itsfoss.com/manjaro-vs-arch-linux/


I should have been more clear. In terms of package management Manjaro has access to the AUR, but it's unsupported. Manjaro does not have access to official Arch repositories. Sorry, I should have said "based on Arch" not "based on the AUR." https://wiki.manjaro.org/index.php/Arch_User_Repository

OK, I would not recommend Arch as a first Linux distro, so being Arch-based and easy to start with is a plus. That said, I've seen a troubled Manjaro user on the xmonad IRC; we tracked the problem down to an AUR package that worked fine on Arch.

Yeah, Arch Linux is definitely not for beginners. The install process alone is hands-on and requires users to be experienced with command line configuration. Arch is for power users who want a ports-based Linux with great package management (pacman).


Why would you expect to have a git snapshot shipped as the official freedesktop flatpak runtime? Could you not build your own freedesktop flatpak runtime using mesa git head?


> Could you not build your own freedesktop flatpak runtime using mesa git head?

It was easier to build it and not use flatpak.


Almost no apps are actually delivered via the Apple or Microsoft app stores. Apps from outside the app store aren't obliged to use the sandbox, although Mac apps can, and macOS does have a line of defense against unsigned applications. It's harder, but not impossible, to end up pwned by signed apps on a Mac.

The number one and two attack vectors have always been tricking users into installing malware and attacking old insecure software.

Distributing virtually all software via app stores substantially solves acquiring safe software and ensuring it receives updates.

Defense in depth is virtuous, but Linux is already more secure than Windows in the ways that actually count, and unlike MS it is actually positioned to sandbox software in the future, because it's all already mostly coming from app stores.


You don't need snap-style distribution for sandboxing.


Containers, chroot jails and what-have-yous existed long before Snap came along. Like others have said, Flatpak is a more sane alternative that doesn't impose idiotic requirements like systemd or an X server.


> doesn’t impose idiotic requirements like ... x server

How are you going to sandbox graphical apps without knowing about (and having capabilities around) the system by which a containerized app would communicate with your OS’s graphics subsystem?

I mean, if you’re not going to run any X11-client graphical apps, it should probably be optional to have an X11 server installed; but either way, you’ll need the X11 wire-protocol libraries (“xorg-common” in most package repos) for the sandbox to link in.


> How are you going to sandbox graphical apps without knowing about (and having capabilities around) the system by which a containerized app would communicate with your OS’s graphics subsystem?

Wayland is the way forward.

> you’ll need the X11 wire-protocol libraries (“xorg-common” in most package repos) for the sandbox to link in.

Today, the runtimes do contain client X11 libs. However, nothing in Flatpak requires them, and it is possible to phase them out in future releases of the runtimes.


> Wayland is the way forward.

Wayland has been "The Way Forward(tm)" for 10 years now.

That may be. But nobody in RedHat/Canonical/etc. believes that enough to put sufficient manpower on it to make it true.


> But nobody in RedHat/Canonical/etc. believes that enough to put sufficient manpower on it to make it true.

It is the default display server in RHEL 8. If that is not believing in it enough, I don't even want to know what would count as sufficient proof.


Right, sandboxing should be done at the OS level, not in user space.


> is actually one of the least secure OSes out there

Linux is one of the most secure platforms to run web applications on, however, because more man hours than I can comprehend were spent hardening that use case.

All of those hardening measures can transfer over to the Linux desktop use case.

For example, seccomp, cgroups and MAC can all be used to harden a Linux server, and they can also be used to harden the Linux desktop. It's just that no one has thrown the same billions of dollars at desktop Linux that were thrown at solving web application security.

If you really wanted to, you could run a lot of your software in unprivileged containers secured with seccomp.
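For example, here's a rough sketch with bubblewrap (bwrap), the unprivileged user-namespace sandbox that Flatpak itself builds on; the target binary is just a placeholder, and a seccomp filter can be layered on top (bwrap accepts one via --seccomp, which is how Flatpak applies its filter):

    # read-only view of /usr, private /proc, /dev and /tmp, no shared
    # network/IPC/PID namespaces; dies when the parent shell exits
    bwrap --ro-bind /usr /usr \
          --symlink usr/lib /lib --symlink usr/lib64 /lib64 --symlink usr/bin /bin \
          --proc /proc --dev /dev --tmpfs /tmp \
          --unshare-all --die-with-parent \
          /usr/bin/some-untrusted-tool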


>If you really wanted to, you could run a lot of your software in unprivileged containers secured with seccomp.

We've come full circle, because Snap does run software in unprivileged containers.


They are not the same thing, however, and the complaints people have about Snap don't stem from its use of unprivileged containers.

Not if it slows things down as much as it does, especially launching. What are you doing on your machine for which things like AppArmor and SELinux (for app isolation) aren't enough? Just use a VM. Not everything needs to be sandboxed if it ruins the zen of interacting with your machine.


Can’t you actually sandbox any app you want with chroot and cgroups yourself?


chroot is not the same as LXC (Linux containers).

You can do something very similar, but if it's a GUI app or has specific system dependencies, there will be issues to work around.


Sandboxing is great in theory, but last I checked the snap daemon opened Internet connections as root. I'll take my chances with traditional Unix permissions over that.


> Might as well ship a pre-linked binary that just does system calls.

I actually think that’s exactly what we should do. Containers are an over-engineered solution for a problem that never needed to exist.

Dynamic linking doesn’t work unless you can live inside a distro maintainer’s special bubble for all your software. If you can exist in that bubble, great—I really like Debian for certain use-cases—but if you can't, the benefits of dynamic linking everything are clearly outweighed by the drawbacks.


Dynamic linking doesn’t work unless you can live inside a distro maintainer’s special bubble for all your software. If you can exist in that bubble, great—I really like Debian for certain use-cases—but if you can't, the benefits of dynamic linking everything are clearly outweighed by the drawbacks.

Good luck patching that security vulnerability in all those static binaries without proper dependency tracking ;). Not that I am on a particular side of the fence; both have their downsides.

To me the problem is package managers from the '90s that use a single global namespace, only allow UID 0 to install packages, and do not really provide reusable components.

Modern packaging systems like Nix and Guix allow regular users to install packages. Packages are non-conflicting, since they do not use a global namespace (so you can have multiple versions or different build options). They provide a language and library that allow third parties to define their own packages.
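For instance, a regular user can do things like the following (a sketch; the exact attribute names depend on the nixpkgs revision):

    # per-user install, no root required; the store path pins the exact build
    nix-env -iA nixpkgs.ripgrep

    # throwaway environment with a specific toolchain, gone when the shell exits
    nix-shell -p python38 python38Packages.numpy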

Not to say that they are the final say in packaging, but there is clearly a lot of room for innovation.

Snap and Flatpak are copying the packaging model of macOS, iOS, and Android. This is a perfectly legitimate approach (and IMO the execution of Flatpak is far better). But it is not for everyone -- e.g. if you prefer a more traditional Unix environment.


Rolling out shared library updates to resolve security vulnerabilities is not without its own issues.

The big one, which, surprisingly, some places still manage to fumble due to poor process controls or simple mistakes, is that you have to restart all running processes that use the library after you update it.

I actually prefer to deploy static builds of critical services for this reason, because you already have to know that you're running version 1 build 5 everywhere -- and if everything is build 5, then they all have the fix. You don't also have to check if the process was started after May 5th.
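On Debian-family systems there is tooling for roughly that check, e.g. (a sketch):

    # report services still running against old, since-replaced libraries
    sudo needrestart        # or: sudo checkrestart  (from debian-goodies)

    # more manually: look for deleted files still held open by live processes
    sudo lsof +L1 | grep -i lib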


> without proper dependency tracking

Well, there is nothing that says you can't have proper dependency tracking just because something is statically linked. The infrastructure isn't currently there [x], but it definitely is something that languages and language package managers could coordinate on and provide.

[x] But it can be built, now that more and more languages have package managers with proper dependency tracking. One way would be to create a standard for how to query a binary for what it depends on. Then a machine could keep a central database of the dependencies of the static binaries that are installed.
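Some toolchains already embed that information. For example, Go binaries carry their full module list, which can be read back from any Go-built executable (the path below is just a placeholder):

    # prints the module/dependency versions embedded in a (statically linked) Go binary
    go version -m /usr/local/bin/some-go-binary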


Well, there is nothing that says you can't have proper dependency tracking just because something is statically linked.

Didn't say so. It is just easier with dynamic linking, because you can see what libraries (and versions) a binary is linked against.
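E.g., on a Debian-family system (a quick sketch):

    ldd /usr/bin/curl         # which shared objects the binary resolves at runtime
    dpkg -S libssl.so.1.1     # which package owns a given shared library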

But can be built, now that more and more languages have access to language package managers with proper dependency tracking.

Actually, approaches such as Nix's buildRustCrate (where every transitive crate dependency is represented as a Nix derivation) plus declarative package management offer this today.

But with curl | bash or traditional package managers, which are most widely used today, this kind of dependency tracking is hard/ad hoc.

But can be built, now that more and more languages have access to language package managers with proper dependency tracking.

But then a static C library is used and nobody knows where it came from. Even if you look at the Rust ecosystem, which generally handles dependencies well, crates are all over the place when it comes to native libraries. I have seen everything from crates that use a system library (or something discoverable via pkg-config), via crates that carry the library sources as a git submodule and build them in the build script, to crates that download a precompiled library from some shared Dropbox link.

Another fun example from another language ecosystem: numpy uses OpenBLAS. They compile their binary wheels on CI, but OpenBLAS itself is retrieved as a precompiled binary from another project [1]. The rabbit hole goes deeper: when OpenBLAS is built for macOS, a precompiled gfortran disk image is retrieved from yet another repository [2], and that disk image was added to the repository from yet another place [3].

This is all sort of the opposite of the lessons to take from Reflections on Trusting Trust and the bootstrapping work that the Guix folks are doing.

Anyway, with the mindset that most developers have, we will never have proper dependency tracking.

[1] https://github.com/MacPython/openblas-libs

[2] https://github.com/MacPython/gfortran-install/tree/d430fe6e3...

[3] http://coudert.name/software/gfortran-4.9.0-Mavericks.dmg.


I agree. Are there any major disadvantages to that, aside from size and having to wait for each application to update its own dependencies?


The 'horror story' people always mention, which is a special case of "having to wait for each application to update its own dependencies", is that if there's a security vulnerability in a much-used library, you have to wait for each application maintainer to update their application, rather than simply having the distribution maintainer update the shared library. I'm not sure I agree that this would be worse than the current situation...


No plugins or switching of implementations. This mainly affects stuff like OpenCL, OpenGL, Vulkan, Qt, Apache, PHP & PAM.

A few of these could be solved (especially the PAM and GPU ones) by making the whole thing work over IPC. OpenGL is already a pain to get working in generic containers.


Are there any major disadvantages to a nuclear winter, besides that everyone would die and the environment would be destroyed?


I just honestly don't understand this. On Windows and Mac (the OSes that 99% of the planet uses on the desktop), this is exactly how things work. The OS provides a set of APIs. If an application author needs a library that isn't in the OS, they have to ship it themselves. If a vulnerability in that library comes to light, they have to fix it and ship an updated version of their application. If it's a commonly used library, a lot of application authors are going to have to ship updates.

Why couldn't this work for a Linux-based OS? Honest question.


On Windows and Mac, there is Microsoft / Apple deciding what the set of OS APIs is, and everything outside it is an external library.

On Linux, there is no such authority and therefore no sharp line separating 'core OS' from 'external library'. It is just a conglomerate of the Linux kernel and independently developed tools and libraries (each of them more or less optional).


That way leads to high numbers of boxes with vulnerabilities, which may be "fine" for non-technical folks. That's not the audience Linux serves, however.


I don't think it's at all clear that it would lead to high numbers of boxes with vulnerabilities. Is it clear that a Mac is more vulnerable than a desktop Linux box, if you control for the technical sophistication of the person maintaining it? I don't think that's at all clear.

While it's not guaranteed they'll be installed, the vast majority of Linux desktops get security updates, including for all normally installed applications. That's a pretty big advantage over a manual update strategy.

It does seem like there could be a, well, canonical set of common libraries at specified version ranges for a particular version of a particular distribution, dynamically linked and with updates pushed by the distro maintainer; anything else the application developer needed would be statically linked.

I think the point is it "could" work on Linux, but a significant portion of devs consider that to be a bad method of solving the problem.


If you have a BSD app that uses an LGPL library, congratulations, your app is now (L)GPL... let alone a library under some other license, after which your app can no longer be distributed/licensed at all.

> Just fix the bugs and leave the rest to the application developers.

Some devs flat out[0] refuse (and I am not debating whether that's right or wrong) to package their app for every distro (even major ones: Ubuntu, Debian, Arch, RHEL/Fedora), so it's up to distro maintainers to package them, and users are always at the end of a line of other people packaging the apps (either through distro packages or a sandboxed one-click installer).

[0] The words are too strong, and user CJefferson https://news.ycombinator.com/item?id=24384206 is right to call me out on that. I agree, sorry about that; poor choice of words. I had a very specific example in mind, but there's obviously a whole gamut of reasons for not packaging. My position is that we can't, nor should we, expect devs to package their apps for the distro we use. Also, it's not like app devs and distro/OS devs/maintainers live in hermetic boxes with their code/apps never interacting or evolving. Not editing this out, for context.


I don't like "flat-out refuse". I don't "refuse" to package my apps for every distro any more than I "refuse" to do my neighbour's gardening.

I tried packaging for Debian once and after two days I gave up -- I have neither the time nor the patience to do free work for distros I don't use.


Yeah, there's more than a whiff of entitlement to that phrasing.

If you haven't seen it, I highly recommend looking at fpm for packaging. Unless you're doing something weird or need an obscure format, it is the tool you want.

https://github.com/jordansissel/fpm


> Yeah, there's more than a whiff of entitlement to that phrasing.

I have been thinking about that, and I disagree. It's always nitpicking o'clock on HN; I specifically wrote "some" and specifically noted in brackets that I wasn't debating whether it's right or wrong. I agree that it should have been worded differently, but the facts remain, and it doesn't follow that there's "more than a whiff of entitlement".


I'm not clear what you mean then, to be honest. Anyone who isn't packaging their software is, effectively, flat-out refusing to package it. I don't really see how I, or anyone else, could "more" flat-out refuse.

There's a spectrum. Some don't care, some refuse, and some flat-out refuse. I dislike being accused of entitlement for pointing that out.



Read it again. I'm agreeing with the parent.


Ugh, apologies.


> The way open source is straying further and further from its principles is highly annoying.

I do think that's why the distinction between "free software" (copyleft) and merely "open-source" matters.

If you look at the history, "open source" became a thing as a reaction to free software: it preserves the most visible benefits (source code in the open, modifiable by others) but treats these as purely a convenient workflow for working on code, whereas free software is more of a philosophy and so less likely to erode its principles.


I agree that something based strongly on principle rather than convenience is less likely to drift from those principles, for better and for worse.

Free software doesn't need to be copyleft, though. The MIT license, for example, is a free software license, even though it's not copyleft. Projects such as the various flavours of BSD can have pretty strong principles regarding their software distributions remaining free even though they don't prefer copyleft.


> flavours of BSD can have pretty strong principles regarding their software distributions remaining free even though they don't prefer copyleft

That is true; however, having spoken to many members of the FreeBSD community in particular, there seems to be a strong sentiment that this is simply a practical model of development rather than a strong ideological stance. In fact, a large portion seem to be rocking Macs, "cause it's BSD anyway", which to me does not seem particularly principled.

In fact they seem to take pride in completely closed systems being based on FreeBSD, like the PS4, Nintendo Switch etc.


> What's the point of dynamic linking if we then end up shipping half an OS with an application just to get around dependency hell?

Not sure if you were commenting on the whole approach or just Snap, but FTR, Flatpak uses shared base layers which can be updated individually, so there's still an upside to dynamic linking.


Dynamic linking and containers aren’t necessarily incompatible, though nobody has combined them well yet.

Of course, half the point of containers is to “vendor” your dependencies — a container-image is the output of a release-management process. So the symbolic reference part of dynamic linking is an undesired goal here: the container is supposed to reference a specific version of its libraries.

But that reference can be just a reference. There’s nothing stopping container-images from being just the app, plus a deterministic formula for hard-linking together the rest of the environment that the app is going to run in, from a content-addressable store/cache that lives on the host.

With a design like this, you’d still only have one libimagemagick.so.6.0.1 on your system (or whatever), just hard-linked under a bunch of different chroots; and so all the containers that wanted to load that library at runtime, would be sharing their mmap(2) regions from the single on-disk copy of the file.
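A very rough sketch of the idea, with made-up paths and hashes (hard links require the store and the container roots to live on the same filesystem):

    # one on-disk copy lives in a content-addressed store, e.g.
    #   /store/sha256-3f9c.../lib/libimagemagick.so.6.0.1
    # and gets hard-linked into each container root at assembly time:
    ln /store/sha256-3f9c.../lib/libimagemagick.so.6.0.1 \
       /containers/app-a/rootfs/usr/lib/libimagemagick.so.6.0.1
    ln /store/sha256-3f9c.../lib/libimagemagick.so.6.0.1 \
       /containers/app-b/rootfs/usr/lib/libimagemagick.so.6.0.1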


Hey, you've invented WinSxS.

The primary issue with this approach is that if every program only sees its own version of the library anyway, there's no incentive to coordinate around library versions - you end up with tons of versions of everything anyway, maybe not one per application but close to it.


> Hey, you’ve invented WinSxS.

Oh, I know :)

> there’s no incentive to coordinate

True, but it potentially works out anyway, for several reasons that end up covering most libraries:

• libraries that just doesn’t change very often, are going to be “coordinated on” by default.

• people building these container-images are the same people who actually tend to be running them in production, so they (unlike distro authors) actually feel the constraint of memory pressure; so they, at development time, have an incentive to push back on library authors to factor their libraries into fast-changing business layers wrapping slower-changing cores, where the business-layer library in turn dynamically links the core library. This is how huge libraries like browser runtimes tend to work: one glue layer that gets updates all the time, that dynamically links slower-moving targets like the media codec libraries, JavaScript runtime, etc. Those slower-moving libs can end up shared at runtime, even if the top-level library isn’t.

• on large container hosts, the most common libs are not app-layer libs, but rather base-layer libs, e.g. libc, libm, libresolv, ncurses, libpam, etc. These are going to be common to anything that uses the same base image (e.g. Ubuntu 20.04). Although these do receive bug-fix updates, those updates will end up as updates to the base-layer image, which will in turn cause the downstream container-images to be rebuilt under many container hosts.

• Homogeneous workloads! Right now, due to software-design choices, many container orchestrators won't ensure library-sharing even between multiple running instances of the same container-image. We could fix this issue without fixing the rest, but designing a container-orchestrator architecture around DLL-sharing generally would also coincidentally solve this specific instance of it.


Unless you coordinate the different apps to be compiled against a specific version of a library, effectively creating a distribution.


Apple does something similar with their dylib cache, but sadly they don't use content-addressable storage.

> The whole idea that you'd need a container like environment to install an application

Simple example. App A wants tensorflow 1.10, CUDA 8, and python 3.7. App B wants tensorflow 2.2, CUDA 10, and python 3.8. You want App A and B installed at the same time but the two versions of tensorflow are neither forward nor backward compatible. The two pythons will fight with each other for who gets to be "python3". How do you deal with this without containerization?
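With containers it's one line per app; e.g. with Docker (the image tags and script names here are illustrative, and the GPU flag assumes the NVIDIA container toolkit is installed):

    # each app gets its own userland (Python, CUDA, TF); nothing collides on the host
    docker run --gpus all -v "$PWD/appA:/work" -w /work \
        tensorflow/tensorflow:1.10.1-gpu-py3 python app_a.py
    docker run --gpus all -v "$PWD/appB:/work" -w /work \
        tensorflow/tensorflow:2.2.0-gpu python app_b.py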

I don't think it violates the principles of open source at all, it's just making sure each application gets the exact versions of libraries it wants without messing up the rest of your system.


> You want App A and B installed at the same time but the two versions of tensorflow are neither forward nor backward compatible.

And that's the exact problem. Instead of solving it the proper way, you end up with kludge on kludge to paper it over.

Backwards compatibility is a great good; you let it go only if you absolutely have to, not just because upgrading stuff has become so easy.

> The two pythons will fight with each other for who gets to be "python3"

An even clearer case, obviously python 3.8 should be backwards compatible with 3.7.


> obviously python 3.8 should be backwards compatible with 3.7

I think so too, but HN downvoted me to oblivion the last time I advocated for that. That's part of the problem, I guess: the dev community doesn't actually agree that 3.8 should be backwards compatible with 3.7.


Would you argue the same if they were called Python v37.0 and v38.0? Let's imagine that they are and move on. The problem is using the alias "python3" as if it were the executable name.


At the risk of getting kicked off HN for all these downvotes for trying to have a discussion ... (thanks free speech haters, enjoy your echo chamber after I'm kicked off)

My understanding of semantic versioning is that:

- (x+1).0 and (x).0 don't necessarily need to be able to run code written for the other

- 3.(x-1) doesn't need to be able to run code written for 3.(x)

- 3.(x+1) should always run code written for 3.(x)

Hence, you should be able to point "python3" at the latest 3.x available, continually upgrade from 3.6 to 3.7 to 3.8, and, as long as you have a higher minor version of 3, not break any code written for an earlier minor version. That's why it is supposed to be okay to have them all symlinked to "python3". If a package install candidate thinks the currently running "python3" isn't recent enough for the feature set it needs, it can ask the dependency manager to upgrade "python3" to the latest 3.(x+n), with the understanding that this won't break any other code on the machine.

Unfortunately that isn't true between 3.7 and 3.8. There are lots of cases where upgrading to 3.8 will break packages and that violates semantic versioning.


> My understanding of semantic versioning is...

...irrelevant, I'm afraid.

Python doesn't use semantic versioning, so you can't really expect them to follow it. As GP insinuated, if you just pretend that 3.7 is 37, and 3.8 is 38, you'll pretty much be able to apply semver thinking, though.


> Python doesn't use semantic versioning

Right, so because Python doesn't cooperate, we end up needing containerization, which is what I was trying to explain in GGGGP. apt will upgrade 3.7 to 3.8 and unfortunately break anything that was written for 3.7 (and vice versa).

An app needs to be able to say "I'm ok with python3>=3.7" and be fine if it gets 3.8, 3.9, or 3.20, if we want to be able to run it without a container. (And likewise for all its other dependencies besides python)


If appA needs python3.6 then call it with `python3.6`, not `python3`. It can exist in your /usr/bin in parallel with 3.7. The standard Python used by your distribution is python3. I think that's currently the way it's done.
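A minimal sketch on a Debian/Ubuntu-style system (older interpreters may need an extra archive such as the deadsnakes PPA on some releases; the script paths are placeholders):

    sudo apt install python3.6 python3.8   # interpreters coexist under /usr/bin
    python3.6 /opt/appA/main.py            # appA pins the version it was written for
    python3.8 /opt/appB/main.py            # appB uses the newer one
    python3 --version                      # the bare alias stays whatever the distro ships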


One problem is that those open source principles have not been codified anywhere. With free software we have licenses guarding the principles, like copyleft and the requirement to share modifications when distributing the software. With many open source licenses we have almost nothing protecting the principles, because licenses like MIT basically say "I do not care." Open source will be exploited until people learn that they have to protect it.


> With many open source licenses we have almost nothing in the way of protecting the principles, because licenses like MIT basically say "I do not care.".

What's wrong with that? It's perfectly okay to say "I don't care".

I do release all my code under MIT because I care about attribution. I don't mind if people want to use it in commercial or closed source applications, nor if they want to modify it somehow.

I distribute code because that makes _me_ happy, not because I want to share an ideological statement about how others should distribute their code (or not).


It is not unethical to do what you do; I would say it is perhaps only a little short-sighted. I don't mean this in a negative, insulting way, and I will explain why I think so.

The problem is rather what it means in the long run. The point is that caring only about, for example, attribution makes the ecosystem exploitable. It does not uphold ideals or enforce principles. Without upholding ideals and enforcing principles, how do we expect our principles to be followed in the future? If there is no legal obligation to do anything, which capitalistic ("We need to maximize our profit! Ethical principles? Nah, come on...") big company is willing to go the extra mile to respect the principles of some open source community, perhaps even at the expense of making more profit from a closed-source solution? And I mean going the extra mile without treating the very act as another means of self-promotion; simply going the extra mile because it is the fair thing to do.

As I see it, as long as there is a chance to deviate from the principles (no copyleft), someone somewhere will do so. Heck, even with an obligation to adhere to the principles, some people will deviate from the path. The tendency is always in the wrong direction if we do not enforce our principles of openness. It is an uphill battle. The whole ecosystem drifts in a less open direction through these initially small "missteps".

Especially when a big company with a lot of developers takes the work and turns it into a proprietary product, which usually starts out with more functionality than its open source counterpart, users will quickly switch to the non-open proprietary version. They do so because they want that new shiny functionality immediately. The slightest inconvenience is enough to drive many users towards proprietary software. Initially they neither know nor often care that they are using a non-open, non-free thing, until there are enough of them to form a bubble in which the open source ideas no longer exist. By then the network effects are already strong: "But all my friends use X. No one uses Y. I cannot convince them all to switch from X to Y!"

Example: there are loads of at least open source (and some also free software) messengers out there. All people need to do is use them. But the network effect and features like integration with (a)social media are so convenient that they give up their freedoms and use things like WhatsApp or Facebook.


I care, and I want the world to be able to do whatever they want with code I gift to it.

MIT is beautiful in its concision, and reasonably reflects the "use however you want but don't blame me" legalese I used to custom-craft before I found it.


> They do not know nor often do they care initially about using a non-open, non-free thing.

Man, I get this now, especially with AWS services and everyone recommending how "easy" it is to do x with y service, why we should use it too, and how it will magically solve all the problems… and I'm like: "no, I'm going to use this open source software that we'll have the code for, be able to tweak to do whatever we want, and see how it all works (oh, and it's free to use), unless you're willing to hack around all the edge cases of y service yourself without me getting involved at all." That usually works, though I suspect that once I leave, the cost of running the infrastructure will go through the roof and no one else will have any clue why (more likely, they'll think it's impossible to do it any other way than paying for y)…

Moments like these are great opportunities for folks who just don't accept "the non-open proprietary" by default, but it's only an opportunity because most choose to accept "the non-open proprietary" by default… we all have to pay for the choices we make… some just want to pay more to not have to think about things… tradeoffs.


To answer your AWS example:

Honestly, it really depends on which stage of life your company is at, and the resources you can allocate to infrastructure work.

At the very beginning of my company, we did exactly what you mentioned here.

- Pay for a managed NAT gateway? No thanks I can do the NAT myself with iptables on a cheaper EC2 instance.

- Pay for a managed NFS? No thanks I can do it myself

- Pay for managed VPN? No thanks I can setup IPsec myself

- etc.

With time, though, as the company started to make money and the number of users increased, we switched back to more managed services. The key is that you want to refocus your infrastructure efforts on more business-centric issues.

Also, most of the time, the effort spent maintaining a service grows exponentially with its scale. NFS is a good example here. Setting up a handful of NFS shares for 5-10 users is fine. Once you get 20+ NFS users, you're better off focusing on your real company product rather than spending months and money maintaining NFS yourself.


For a small company in an EM country, I don't think US-level infra spend will ever come close to fitting within the budget without significantly affecting margins… and considering that a lot of companies in the US are funded with either massive amounts of debt or Series E rounds of funding… I don't think most of them can afford it either…

That said, even at a small-company level there are still affordable proprietary solutions out there (not necessarily on AWS), but most people trend towards what's trendy…


Who/What has been exploited here? As others mentioned, you could install debs just fine.


The work of the open source community has been exploited to create closed-source walled gardens with superficially more convenience, in order to attract users and profit off them.

This pulls users away from the open source project, and since contributions do not flow back to it, the project can quickly become obsolete in the eyes of most users. The principles of open source live on in a project which in the end (exaggerating) "no one uses", and so become pointless. Most users are no longer protected by these great ideas or principles, because they get sucked into the closed-source swamp where all their friends already are.


"What's the point of dynamic linking if we then end up shipping half an OS with an application just to get around dependency hell? Might as well ship a pre-linked binary that just does system calls."

I'm curious: are there any Linux distributions that contain nothing but fully static, self-contained binary executables? As in, ship with no shared libraries?


It's about sandboxing your apps... and though not every app needs to be, or should be, sandboxed, plenty can and should be. I happen to like Flatpak a bit more; it just feels like a more open community, and AppImage is okay as well.

It's also about getting dependencies along with the app. I can't tell you the number of times I've borked my OS install because I wanted a single application with a feature newer than the year-or-two-old version in my distro's repository.

It helps to have both as options.


We're already there. The Snap store is not open source and can only be run by Canonical.


This is the biggest downside for me. I understand why they want to use snaps of huge software applications especially for rolling releases. I just don't like one company being in charge of the method of distribution.


Yes indeed, I don't mind snaps for the really big third-party stuff. But they were using it for literally everything. Even htop was snapped at one point. Seriously... :/

Clearly they are looking for a way to put some kind of proprietary dependency into Linux by propagating Snap to other distros, so they can then milk it for cash (e.g. charging big publishers like Microsoft/Google a fee for access to the Snap store). I don't think they realise that mainstream Linux users will hate it for exactly that reason.


There are many reasons to hate Snap. I have a comment that explains it in more detail, but seriously, this is the most anti-FOSS, anti-Linux crap I have ever encountered. Shame on Canonical.

The original Unix didn't support dynamic linking (even though it had already been invented), and for some purists it was a mistake to add support for it. Plan 9 refused to add support for it. Dynamic linking is not some kind of holy principle.


Perhaps, but dynamic linking solves a very important set of problems (speed, storage, memory, bandwidth, etc.).

And the security-update problem.

> These large companies should stop fucking around with Linux

Canonical has 443 employees according to its Wikipedia page. Is that large in this context? I don't really think so. Red Hat (13k employees) is large. Canonical isn't.


Large in mind share, not necessarily in headcount.


I disagree on the containerization part. It can absolutely improve security a lot by facilitating isolation of namespaces and filesystems.

But perhaps base container images on scratch, not on ubuntu:latest ;)


> What's the point of dynamic linking

There's a certain subset of developers who are against dynamic linking at all, and they do have some convincing arguments that are worth reading.

I don't necessarily agree with them, but their arguments are worth acknowledging.


Thankfully both Arch Linux and Gentoo have native Chromium and don't require Snap. I can recommend them to everyone. Arch is more practical for everyday use, though, since Gentoo's compilations take time.

Canonical has been doing this sort of thing for a very long time; the way Ubuntu Touch was designed should have been enough to make most people abandon them.

Dynamic linking certainly doesn't work for Steam et al.


Statically linked binaries with an AppArmor profile are not such a bad idea.


Open source never had any principles. Open source is just a libertarian hijack of free software to ensure that it is accessible to companies without any legal issues.


The principles are just that, principles.

In practice, though, they don't matter for the vast majority of users, and package managers are far less hassle. Maybe it's time for the principles to change?


Could you clarify "package managers are far less hassle"? Did that statement have something to do with snap? Less hassle than what?

