Hacker News
Linux Mint drops Ubuntu Snap packages (lwn.net)
1106 points by jonquark 30 days ago | 523 comments



From the linked announcement: https://blog.linuxmint.com/?p=3906

> Applications in this store cannot be patched, or pinned. You can’t audit them, hold them, modify them or even point snap to a different store. You’ve as much empowerment with this as if you were using proprietary software, i.e. none. This is in effect similar to a commercial proprietary solution, but with two major differences: It runs as root, and it installs itself without asking you.

This is a great summary of why people rightfully feel nervous about Snap. People run Linux because they want visibility into, and control over, what is happening on their systems. Canonical seems to want to take that visibility and control away from its users.


From an IT perspective:

I can set up an internal APT mirror for my users, servers, test systems, etc., but I can't set up an internal snap mirror as far as I can tell. This means that despite having an internal repo that I can whitelist, some package installations will now arbitrarily require internet access. I can no longer install chromium on a system without access to the internet, and package installation will fail as a result.
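For comparison, the APT side of this is a one-line client config (mirror hostname hypothetical); as far as I can tell, snapd exposes no equivalent setting to point at:

```
# /etc/apt/sources.list on client machines, pointing at the
# internal mirror (hypothetical hostname):
deb http://aptmirror.internal.example/ubuntu focal main universe
deb http://aptmirror.internal.example/ubuntu focal-security main universe
```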

I'd rather have an older version than a snap version, personally, but better would be two packages which Provides: the same chromium-browser, chromium and chromium-snap.

The most irritating thing here is that they're taking a package distribution system which has handled dependencies and updates flawlessly for decades, and using it to install things via a package system which does not solve those problems, and instead ships multiple copies of multiple libraries and applications, which run slower and ignore any settings I have in APT.

So far, it's a maintenance nightmare, and I loathe it.


There is a big user issue on top of the philosophical and maintenance issues - snaps are SLOOOOOOW. I've only experienced them with two applications, and both took forever to start up compared to the apt-get installed versions I quickly replaced them with.

OK, "forever" is hyperbole - it was probably about 5 seconds. But it was enough of an annoyance for me to figure out how to install a deb packaged version. And every user having to wait an extra 5 seconds every time they open an app aggregates to a lot more time than Canonical saves maintaining packages.

That doesn't sit right with me, so my next OS upgrade will be Mint or pure Debian. I've been with Ubuntu since Dapper Drake, and I'd like to thank Canonical for making great distros all that time. But I'm not going to follow you down the snap-only path, time for me to move on.


Sending some encouragement your way: try Debian. I bet money you won't even notice it's not Ubuntu. Or you will, because your software will launch when you ask it to. I'm super happy with Debian lately. I know it used to be the old neckbeard slow-and-steady distro, but honestly these days packages get updated rather promptly and it doesn't feel like the Debian of 10 years ago. And you can always run Debian testing with almost no overhead if you want the shinies. That's what I do, and I've only had to deal with GNOME not starting after a reboot once XD


I've been using Debian unstable with apt-listbugs for many years with barely a problem. Testing has issues of its own which one should be aware of before choosing it over unstable (see https://www.debian.org/doc/manuals/debian-faq/choosing.en.ht..., "Testing has more up-to-date software than Stable, and it breaks less often than Unstable. But when it breaks, it might take a long time for things to get rectified. Sometimes this could be days and it could be months at times. It also does not have permanent security support.")


One problem I have with Debian (that is not Debian's fault) is that NVidia does not officially support it. It probably works just fine, but it's just another thing you might run into.


There are .deb packages in the non-free repo, and it just works as long as the driver has no bugs regarding the card installed in the system.


Lately I tried Fedora and was surprised that it has many up-to-date packages, including exa (an ls alternative) and other shiny new Rust tools. I've switched to it for one of my personal servers and am quite happy.


Made the switch 4 years ago after running Raspbian with no problems on the Pi.

If someone is debating, just set up a Raspberry Pi with headless SSH and have at it.


Does unattended-upgrades also work on Debian to install security updates automatically?


Debian is nice if you don't mind the glacial pace of updates, i.e. if you're running a server. After using Arch I don't think I can give up rolling releases.


That’s why people are recommending unstable (sid) which is Debian’s rolling updates channel.


A little while ago I discovered snaps can be slow even when you're not using them. I had the fun of trying to log into an online job interview and couldn't because snapd was hogging the CPU doing god knows what. Restarted and the same happened, the machine became usable about 20 minutes later. Little "surprises" like that are infuriating.

> so my next OS upgrade will be Mint or pure Debian

I've moved one machine to Debian stable and haven't looked back. There are a few teething issues, like /usr/sbin not being in the default path, some sudo issues, and the installer isn't as grandma-friendly, but it's rock solid and doesn't nag me about updating 137 packages of things I've never heard of needing an update.
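For what it's worth, the /usr/sbin gap has a one-line fix; a sketch, assuming you want the Ubuntu-like behaviour back for your user:

```shell
# Debian's default PATH for non-root users omits the sbin
# directories; appending them (e.g. in ~/.profile) restores
# access to tools like ifconfig and usermod without sudo -i:
export PATH="$PATH:/usr/local/sbin:/usr/sbin:/sbin"
```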


I'm here to push you towards Mint. Mint is a cleaner Ubuntu. The wifi drivers work. The UI is really good. It just works.


Just make sure you know what you are doing. It's not one of the major distros.

For the longest time you couldn't dist-upgrade it, at all. The functionality wasn't there. When their web servers got hacked, they cleaned them, and got hacked again.

From a Debian user's perspective it is unclear why these small distros can't just be a Debian derivative or a Fedora spin. It's not scalable that every small distro invents their own infrastructure.


Because pure Debian never worked with my wifi drivers, and Ubuntu didn't come with video codecs and other small lifestyle tweaks by default. If Debian came with those OOTB -- and they won't, since some of those codecs ain't FOSS -- then I'd agree with you.

Like, I ain't crazy about what's happening with Mint and all of their hacks -- trust status: eradicated -- but I certainly get why they exist as a distro.


Except when it doesn't? I had all kinds of frustrations with Mint on a 2019 X1 Carbon, which had some newfangled digital microphone. Ubuntu 20.04 worked out of the box. There were a bunch of other frustrations too. Bluetooth never worked right. The resolution kept resetting to the display-native 4K mode instead of the 1080p I set it to, which caused all kinds of other display bugs when switching back.

I can see that Mint could be nice though. On a desktop, running hardware a few years old.


I'm here to push you away from secondhand distros. Use:

ArchLinux NOT Manjaro

Debian NOT Ubuntu

And so on. The 'originals' always work better in the long run.


> the 'originals' always work better in the long run

Except that normal people can actually install Manjaro and it doesn't make you use AUR for pretty basic things. Except that Ubuntu actually doesn't shit itself almost every time you add a 3rd party package/repo and comes with reasonable defaults. Also, have you used Pop OS?

Your statement is quite simply untrue.


> Except that Ubuntu actually doesn't shit itself almost every time you add a 3rd party package/repo and comes with reasonable defaults.

I've been using Debian testing for more than 15 years and it has never shit itself over a 3rd party repo.

Debian comes with whatever the original software developers set as the default, sans the wallpaper. For sensible defaults you can bug the software developer in question, not Debian.


Maybe the situation is different on the server side or I've just been extremely unlucky, but I keep getting my packages broken. Just a few days ago I was installing Wireguard on a Stretch system (added unstable repo, lowest priority, pinned packages) and APT got tangled up to the point where I couldn't install almost anything because one package was too new, but trying to remove it went down the dependency tree all the way to `util-linux` and I had to do surgery with `dpkg -r --force-depends` to fix it.
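For reference, the pinning setup described was roughly this (a sketch; priority 100 keeps unstable below the default 500 of the release you're tracking, so nothing auto-upgrades from it):

```
# /etc/apt/sources.list.d/unstable.list
deb http://deb.debian.org/debian unstable main

# /etc/apt/preferences.d/unstable
Package: *
Pin: release a=unstable
Pin-Priority: 100
```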


On the server side, I only use stable (due to security updates and other critical stuff). Using testing in three desktop systems (two at office, one at home) currently, all workstations which see regular, heavy usage.

Adding unstable packages to a stable system with repo pinning and plain apt isn't best practice, though.

If I'm going to add stuff from unstable, I use aptitude. Its dependency resolution and solution suggestions are better and more manageable than ordinary apt's. It shows you what's going to happen before you actually pull the trigger.

The only package I get from unstable is Firefox. I add the repo, update the package and remove the repo afterwards, because unstable and experimental are highly chaotic realms, not suitable for continuous use due to the high rate of uncontrolled change. Also, unstable and experimental are not guaranteed not to break. Testing and stable are guaranteed, implicitly (testing) and explicitly (stable), to work.
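The recipe, roughly, as a root-shell transcript (repo line per stock Debian):

```
# echo 'deb http://deb.debian.org/debian unstable main' \
#     > /etc/apt/sources.list.d/unstable.list
# apt update
# aptitude install -t unstable firefox
# rm /etc/apt/sources.list.d/unstable.list
# apt update
```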

With this recipe, I've only had to re-install a Debian system once: to migrate it from 32 to 64 bits, since a very big disk cache with very small files was triggering a bug causing the disks to thrash and the system to slow down to a crawl. There was no procedure to migrate a 32-bit system to a 64-bit one in a reliable manner, so I just reinstalled it.


I've had a friend struggle with half a dozen Linux distributions. Then I told him about Arch Linux. One day later he told me he had a fully functioning Linux desktop.

I was impressed because usually when I install Arch Linux I forget to boot the flash drive with UEFI enabled so I get stuck because I get errors during bootloader installation. I also love to do pointless things like install Arch Linux (not the iso) on a 32GB flash drive.


>Except that normal people can actually install Manjaro

And when 'normal' people try to fix a problem with a rolling distro, they have a problem.

>Except that Ubuntu actually doesn't shit itself

As if that's a problem in Debian, or are you talking about snaps?

>Also, have you used Pop OS

No thanks, using OpenSuse Tumbleweed, FreeBSD, Debian and CentOS is perfectly fine for me.


Manjaro exists because people want a freaking installer.


Oh, so make an installer for ArchLinux and call it Manjaro... no need for another distro :)


I’ve had good luck with Devuan (Debian - systemd), though they tend to run a bit behind Debian Stable.

For me, the stability of it working (well) as it has for the last few decades is worth missing a few of the latest updates.


Snap can't seem to access my mounts. They're not very special mounts, but it happens in Linux that you mount something, and snap just won't access it. That made "no-go" a no-brainer.


I have the same problem. Snaps are confined to files within $HOME. I keep almost all data under /media/, and this made snaps mostly unusable for me, at least for productivity apps where I need to process data. Some apps, though, are self-contained; Spotify, for example, works fine for me as a snap.

The limitation stems from a design problem, details at https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1643706.


There is also a problem where snapd fails if your home directory isn't called /home/username (i.e. if it's located in a different path).

At this point snap sounds like a bad joke to me. Especially when Flatpak already exists.

https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1662552


Yup. Considerable work is needed to take snaps from completely sandboxed (no access to file system/network/hardware) to having controlled access to some resources, managed/monitored by root and/or the user.


For the longest time Snap Chromium couldn't access files it had downloaded. So I would click to open a PDF when it had completed and get an error. I had to go whitelist my home dir in some config somewhere...


You might want to look at ungoogled-chromium,[0] there are downloads for most non-Windows platforms. Also, packages are available in repos for apt/deb, yum/rpm, etc.

[0] https://github.com/Eloston/ungoogled-chromium


That wouldn't solve the root of the problem, though. The real solution would be to use other distros.

Besides, there are quality concerns with browser forks maintained by an understaffed project. The fact that ungoogled-chromium asks for internet randos to provide its own binary releases doesn't inspire confidence either. If someone desperately needs to use Ubuntu, they'd be better off using Firefox.


I can see why Snap is like it is from Canonical's perspective - but from the User perspective it seems like FlatPaks[1] are much better and address the issues that this article raises

[1] https://flathub.org/home

(Disclaimer: I'm talking in a personal capacity but the company I work for in my day job now owns Red Hat - I don't work on Linux Operating Systems).


> but from the User perspective it seems like FlatPaks[1] are much better and address the issues that this article raises

This is interesting, because the last few days I was actually working on packaging an application of mine as a snap/flatpak.

From my PoV, they both have their fair share of issues.

Snaps enforce a sandbox, which I think is actually a good idea, because the desktop security model is somewhat broken. If your application cannot run as a sandboxed app, you need to be granted special permissions by Canonical after manual review (my app needs this), where they also discuss whether they can make a new permission, in a safe way, for your use case, that everyone can use afterwards.

On one hand this sucks, because I need to ask Canonical for permission to publish something, and there's no certainty that I will get these permissions as a nobody with a new app nobody has ever heard of. On the other hand, I think I like that they're doing something about the desktop security model.
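For illustration, the confinement level and the interfaces ("plugs") an app requests are declared in snapcraft.yaml; an abridged sketch with a hypothetical app name (it's requesting anything beyond strictly-confined interfaces like these, e.g. classic confinement, that triggers the manual review discussed above):

```yaml
# Abridged snapcraft.yaml sketch (hypothetical app). Strictly
# confined snaps are limited to declared interfaces; classic
# confinement or privileged interfaces require store review.
name: myapp
version: '1.0'
summary: Example app
confinement: strict
apps:
  myapp:
    command: bin/myapp
    plugs: [home, network]
```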

The next problem is: if this is denied, how do I ship updates? Provide a self-updater? Easy to write, but if everyone does that, we can just go full Windows and abandon package managers. Tell people to just curl | bash? That's no more secure than a potentially shady snap.

But I do have to praise Canonical for being very helpful on IRC and the forums, helping me debug issues and file bugs against snap stuff.

Now flatpak on the other hand, just feels kinda weird to me.

It sandboxes things, but every application can pretty much grant itself access to everything. This is a completely different philosophy, but if you rely on everyone tightly sandboxing their applications without granting themselves permissions for sandbox escape, I think something like Landlock[0] (when it lands) or pledge, baked into the application, is much better suited for this.

Then there is this weird thing where Flatpaks force a runtime on you. My application is a statically linked Go binary, but Flatpak pretty much wants to force me to add an entire freedesktop suite as a dependency, as you simply cannot choose no runtime.
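For reference, the runtime is a mandatory field in the Flatpak manifest; a minimal sketch (hypothetical app id) of what even a self-contained static binary ends up declaring:

```yaml
# Abridged Flatpak manifest sketch (hypothetical app id); the
# runtime/sdk fields cannot be omitted, even for a static binary.
app-id: org.example.MyApp
runtime: org.freedesktop.Platform
runtime-version: '20.08'
sdk: org.freedesktop.Sdk
command: myapp
modules:
  - name: myapp
    buildsystem: simple
    build-commands:
      - install -Dm755 myapp /app/bin/myapp
```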

(Community) support for building Flatpaks? Pretty much non-existent.

So yeah, the entire Linux upstream-packaging situation is still quite depressing, honestly. With the time and energy I have invested in this by now, I could have written a simple but sufficient self-updater about 10 times over.

[0]: https://landlock.io/


As a user I like neither Snaps (that's for sure, as they are Canonical-only) nor Flatpaks (which seem conceptually an "80% solution" that combines the problems of package systems with the problems of self-contained apps, without improving on anything).

For me the only acceptable solution besides proper .debs is AppImages. AppImage doesn't try to "replace" package management for desktop apps like the former two candidates. It tries to complement package systems for some special cases (for example commercial software, or cases where the user "just wants to try something out" without "polluting" the whole system with a lot of dependencies).

For my desktop needs AppImage is like "Docker, the good parts": a simple self-contained format that runs everywhere without any further dependencies. Compared to that, Snap and Flatpak are bloated annoyances.


AppImages are the only thing I run as well, as a full-time Linux user and admin. I just hate the proliferation of questionable services, especially in systemd land. A lot of my focus is on minimization of the stack, and snap and flatpak just don't stand up to scrutiny imho.


AppImage has its own issues though:

- no update mechanism

- no sandboxing and only basic app isolation

- no deduplication (Flatpak automatically deduplicates everything it installs on a machine)



AppImage seems like the best replacement then. I've loathed Snap since it first appeared on my horizon, when I installed LibreOffice with it and that snap image showed up in my mounted drives forever. I was a big fan of Canonical/Ubuntu for years, and switching to Debian has not been without pain points, but at least I have more control of my computer. Pop! OS and Linux Mint seem like good Linux desktop alternatives to Ubuntu at this point if you don't want to use Debian.


> Linux Mint seem like good Linux desktop alternatives to Ubuntu at this point if you don't want to use Debian.

Mint also has a Debian-based edition.


Interesting! That must be something relatively new, as it at least looked like there was nothing back when I looked AppImage up.


On the sandboxing front, it works great with Firejail: https://firejail.wordpress.com/documentation-2/appimage-supp...


I tend to agree with you, but I really like updating all my software with one click/command.

So out of curiosity:

My application is already self-contained and statically linked, so no AppImage is needed, but it behaves like one: you can just download and run it anywhere. So what you're describing will for sure be an option for those who like it (in fact it's currently the only option, in alpha).

How would you like to get updates for something like this? Visit the website yourself occasionally to check for updates? Have the application notify you a new version is available? Have an integrated updater so the binary can update itself?


That's easy. It should be officially in Debian. ;-)

Stuff like AppImage, static binaries, Docker & Co. are for me at least a kind of "last resort". Even though I'm using Docker a lot[1] to try things out, I first look for an AppImage in those cases. But when I decide that some app should become part of my system, I will look for a proper package. One source to rule them all…

[1] Docker is a big problem on its own. But as I can't avoid having it installed because of work, I decided I could use it at least to "keep experiments under control". Before I was forced to have Docker, I used systemd-nspawn for that use case.


Debian has a lot of rules, some of which prevent statically linked binaries, like Go programs, from being packaged and shipped with it. A notable example is lxd.

AppImages are great but there's no sandboxing or updates. But hey, we used to download debs and install them by hand on Debian 1.3, before apt was a thing. Maybe AppImages could be signed and distributed in a similar fashion.


Go programs are not (generally speaking) statically linked, unless you're explicitly doing CGO_ENABLED=0, -tags 'netgo' and the whole variety of other tricks needed to coax static binaries out of the compiler. Try running "ldd" or "file" on your Go binaries -- most of the time they'll be dynamic objects and linked to glibc, especially if you're just doing a stock "go build".

In addition, Debian definitely does package Go programs -- they've packaged some of mine and they have possibly the most ambitious way of solving the great vendor/ issue (which most other distributions have ignored).

The likely issue with LXD is that they require specific versions of various system libraries that they package in their source code bundles (including a fork of sqlite3!). I packaged LXD for openSUSE[1]. It wasn't really fun, but it is fairly doable if you have a flexible enough view on "good packaging practices" (and I imagine Debian packagers didn't feel like going through all the necessary workarounds -- which includes patching the ELF binary in the openSUSE case). And note that LXD isn't even a static binary -- I tried to compile it statically to get around a whole range of issues and it was a nightmare.

[1]: https://build.opensuse.org/package/show/Virtualization:conta...


"Appimages are great but there's no sandboxing or updates"

Actually there is sandboxing, and there are updates, if you want to take an extra step.

I prefer AppImages over Flatpak and Snaps for essentially the same reasons commented above.

I typically launch them with Firejail for the sandboxing, i.e. "firejail --appimage /path/to/appimage" instead of just the appimage alone. Seems to work just fine. Firejail has additional options that can be used.

Some AppImage creators do provide an update method that can be installed using zsync2, but I've not tried it since the whole point is disposability.


> prevent statically linked binaries, like Go programs

What do you mean? There's debian go packaging team: https://go-team.pages.debian.net/ as well as go packages in stable (for example https://packages.debian.org/buster/influxdb)


> Appimages are great but there's no sandboxing or updates.

I know nothing about AppImages, but up above someone said they do have updates: https://news.ycombinator.com/item?id=23773878


Go also supports dynamic linking actually.


Unfortunately even micro releases of the Go compiler break ABI, so Go dynamic linking isn't feasible for distros to use:

https://wiki.debian.org/StaticLinking#Go


There are two types of "static linking" being discussed:

  * Linking with system libraries (what GP mentioned).
  * Linking with Go packages (what you're talking about).
Yes, Go doesn't really support -buildmode=shared anymore (and it was pretty broken from the outset). But this is a separate question to whether a Go binary is actually a static object. In most cases (and by default), Go binaries are dynamically linked to system libraries (with Go packages being statically linked into the program).


pjmlp's comment seemed to be about Go packages to me, not system libraries.


The original comment[1] spoke about "statically linked binaries", which has a pretty specific meaning ("file" says it's a static binary). There are similar issues for Debian packaging if you had a hypothetical language which required you to vendor your dependencies, because maintenance for security issues would be annoying (Debian worked around this problem in Go through "de-vendoring", and Go modules -- in theory -- mostly solve this issue today). But static binaries are generally not distributed in distributions because they make core library updates worse in terms of download size, as well as inflating memory usage due to the lack of page-cache sharing.

[1]: https://news.ycombinator.com/item?id=23773480


Where can I find an up to date guide on how to package software for Debian and what to do to get it included in the repository?

I’ve often come across orphaned packages where Debian was stuck with an old version, but didn’t know what to do about it other than installing from source. Or using a PPA if I was lucky.


For the first part, I found Debian’s Maintainers’ Guide [0] pretty helpful when I was updating xscreensaver last year (it hasn’t been uploaded yet though...). It also has a section at the beginning on how to adopt orphaned packages. For upstream updates, some projects have a way to create deb packages themselves, which can make installation/updates easier later on (concrete example: OpenZFS can create RPMs and debs from the upstream tree).

0: https://www.debian.org/doc/manuals/debmake-doc/index.en.html


Plenty of software companies provide their own Apt repositories. Adding a repository is usually as simple as copying a single file to /etc/apt/sources.list.d/. Sometimes you may also need to add a GPG key, which can also be as simple as copying a file to /etc/apt/trusted.gpg.d/ (though most instructions seem to favor a manual one-liner that imports the key to the global database[1]). Technically you could automate both with a one-time deb install, though I'm not aware of anybody that actually does this. In any event, after that `apt-get upgrade` will upgrade third-party packages just as well as Debian or Ubuntu packages.
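A sketch of the two files involved (vendor name and paths hypothetical):

```
# /etc/apt/sources.list.d/vendor.list -- one line per repository:
deb https://apt.vendor.example/debian stable main

# /etc/apt/trusted.gpg.d/vendor.gpg -- the vendor's public key,
# copied as-is (a binary OpenPGP file; ASCII-armored keys use a
# .asc extension on newer apt versions).
```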

I much prefer Debian packages over RPM, and Apt over Yum/DNF, but unfortunately setting up a Debian package repository is more difficult than for RPM. The tools exist, and are more sophisticated and capable--for example, Aptly--but there's a higher learning curve. Also, good luck using them outside Linux. Years ago I published Debian package repositories from OpenBSD, but I had to manually hack the relevant tools to build and work on OpenBSD. For RPM/Yum I could have more easily written (and at a different job later did) an indexing tool from scratch.

The problem with Debian packages and Apt repositories isn't that they're not capable, it's that they have high learning curves due to the slow accretion of features that obscure their potential. A better tooling story would help, particularly for repositories. Better tools for initializing Debian package builds exist; the problem is more a surfeit of choices.

[1] It's more complicated and creates more unnecessary headaches than dropping a file into trusted.gpg.d, but the practice seems to reflect the opacity of the Debian packaging ecosystem. There's a better, simpler way to do it, but everybody cargo-cults the same old solutions and then complains that it's too complex or inelegant.


Personally, I rarely think about updating my software unless there's a new feature I need or a fix for a bug that's been ruining my life, so just visiting the website to check for new versions is fine by me.

However, you could also include a facility to manually check for updates and either self-update or just show the changelog. Why manually? "I know you're about to do this thing real quick that needs to get done, but a new version is available! Would you like to twiddle your thumbs for 5 minutes while watching a progress bar?".


> Why manually? "I know you're about to do this thing real quick that needs to get done, but a new version is available! Would you like to twiddle your thumbs for 5 minutes while watching a progress bar?".

That's a very Windows-centric point of view. Programs on Linux can be updated without closing them. You get the new version when you restart them.


It only works if they are self-contained, without access to additional resources on the file system, as the executable file is kept alive until process termination.

Now if the process tries to access some data files whose contents changed during the upgrade, it might just crash and burn, eventually corrupting data in the process.


Firefox has entered the chat


Chocolatey GUI for Redmond.

sudo apt update && sudo apt full-upgrade -y && sudo apt autoremove -y && echo 'All done!, rebooting' && sudo reboot now

For everything else.


Ideally, security updates aren't deferred for 5 years because it's working well enough, and they happen for the whole system when you aren't using the machine, as opposed to just for that app when you launch it.


Most AppImages seem to auto-update themselves when you run them, which is convenient, until it isn't.

I'd want it in an apt repo, really.


That actually sounds pretty dangerous - which is more likely to get compromised and stay that way for a long time: a random developer's AppImage update service, or distro infrastructure?


> How would you like to get updates for something like this? Visit the website yourself occasionally to check for updates? Have the application notify you a new version is available? Have an integrated updater so the binary can update itself?

I've seen all three approaches with AppImage'd software.

There are also package managers specifically for AppImage'd software, though it doesn't look like any of 'em are particularly mature.


Flatpak was made to address two things, focused on security:

1) Application won't need root privileges

2) Without compromising the system's security


Some kernels, e.g. Linux, were also made to do just that (among other things).

I think fixing what we have in the standard kernel/userland, and leaving containers for specialized developer/devops workloads, may well be where this (what will be known as the snap/flatpak saga) started, and where it will end as well.


I also prefer appimages as the "least worst" of the three.

However, a quick note: as someone who unofficially maintains a Linux port of my company's software, I have considered packaging it as an AppImage, but there's one problem with AppImages that kills the concept.

AppImages are read-only[1]. I'd love to package my company's product that way, but we already have update-delivery infrastructure that works on Windows and Mac (and Linux), and it assumes it can write to the "install folder". Changing the entire update infrastructure specifically for an OS we don't officially support is a non-starter.

From a developer perspective, I would love the ability to update an AppImage's contents in place. However, as a user I'd also like the ability to set it read-only to block updates if I desire. Flatpak's mandatory updates are one of the key reasons I dislike it. Nevertheless, if the goal is to smooth the path for proprietary software to support Linux without making half a dozen different packaging solutions, in-place updates need to be supported.

[1] edit: according to comments below, they now have an update mechanism, but it's still a totally appimage-specific process, so my problem remains :/


AppImage is the format I'm using for distributing Linux apps as well, for many of the reasons you mention


Is "I just give you a tgz with small folder of files, one of which is a binary; I managed to make it run just about everywhere" worse than AppImage (notably, for a GUI app)?


That's basically what an AppImage is, except that an AppImage goes one step further and slaps that tarball onto an executable that opens its embedded tarball and runs whatever's inside.


Flatpaks deduplicate everything - all Flatpak apps and all Flatpak runtimes installed on a machine, against each other. So if you specify the basic freedesktop.org runtime, it is effectively a no-op, as pretty much everyone will have it already installed, due to most of the other runtimes being based on it and most Flatpak apps needing a runtime.

As for sandboxing with Flatpak - AFAIK this is all intentional for the time being. The authors have clearly stated that their initial effort targets app dependency isolation and app portability, not security just yet. That is much harder to do without severely limiting useful applications in what they can do.

I have encountered incomplete sandboxing systems that strived for security in the past, and it has not been a good experience.

For example, I had the one and only official Ubuntu Touch tablet, which used a predecessor of the Snap technology. One day I wanted to show my friends a couple of photos from a micro SD card. I put it into the tablet, but no photo-viewing app could show photos from the card.

Why? Because back then you could only open files from outside the app's sandbox one by one, using a special system-controlled dialog. Not really usable for hundreds of photos, and forget about a text editor or an IDE.

This is where I like Flatpak - it gives you all the goodies of app separation and portability without all the hassle of a strict but unfinished secure sandbox. You only need to make sure you are getting the software from trusted sources.

And I think that's something one should be doing anyway, even with a strict sandbox - if it can't support basic app use cases, who knows what security holes the authors have left in it...


> The next problem is ... how do I ship updates?

...why not make a new release and ship that? That's the way package managers work. Developer makes a package. User installs the package. Then when developer fixes some bug and releases a new version the user can install the new version when the user decides it's necessary.

The whole point about this is that it's user-centric. That's good.


It’s largely a good thing but it’s unreasonable not to mention the “flood of bug reports about issues that have since been fixed” effect that comes with manual updates.

This has burnt a few projects pretty badly (especially where distros have packaged an old version and never updated it).


Taking power away from users to address that inconvenience may be the status quo in the proprietary software industry, but it's totally over the line for user-respecting FOSS software. Not least because, once a developer acquires that power over users they very frequently succumb to the temptation to abuse their userbase as involuntary beta testers for half-baked bullshit which users struggle to opt-out of.


Taking power away from the users and putting it in the hands of upstream maintainers is pretty bad, but putting it in the hands of distro maintainers is even worse in my experience. Upstream maintainers are at least the people who develop the software directly and deal with the bug reports.


Traditionally, distro maintainers do not take this power over users for themselves.


Make the user report their current version in the reporting dialog. Automatically close all reports for prior versions and inform the user to update and revisit.


`curl | bash` gets a bad rap. From a security perspective (assuming you trust web pki), it's 100% no different than a) downloading and running a script, b) downloading a package and installing it, c) downloading a binary blob and executing it, etc. I actually find that piping an install script to an interpreter is the easiest to audit of all the options because I can see exactly the changes that will be made to my system. I don't know where the whole "zomg pipe to bash is so insecure" vibe came from, but it's effectively developer urban myth. The only improvement is if you somehow directly exchange keys with the vendor out of band with no web pki intermediate step and then verify the signature on the software you're installing... yeah.
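And if you do want to pause and look first, the "download, audit, then run" variant is only a couple of extra lines (the URL and checksum here are placeholders, not a real vendor):

```shell
# Hypothetical installer URL; the point is to audit before executing
curl -fsSL https://example.com/install.sh -o install.sh
less install.sh                                   # read what it will actually do
# compare against a checksum published out of band, if the vendor provides one
echo "expected-sha256-here  install.sh" | sha256sum -c -
bash install.sh
```

Functionally identical to piping, just with a human in the loop.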


You fail to see how piping to bash is worse than the other awful choices? It isn't.

It is bashed by those who value reproducibility and discoverability:

* every person receives same script, confirmed by signature

* multiple mirrors, no single point of failure, no pull out by author (left-pad)

* no silent update (browser addons)

* tested to work on your system

* clean remove of installed files

* search in packages, not on the web (anti-phishing)

* hosted in secure environment

* watched by many eyes


> The only improvement is if you somehow directly exchange keys with the vendor out of band with no web pki intermediate step and then verify the signature on the software you're installing... yeah.

Which is what Apt has done for 20+ years - packages are gpg-signed, developers sign with their individual keys (and have to have their identity verified by at least one other developer) and even someone who controls a root CA (which is most national governments and many large corporations, let's be fair) would have to mount a dedicated attack to subvert that process.


> Which is what Apt has done for 20+ years - packages are gpg-signed, developers sign with their individual keys

AFAICT, Debian packages themselves are not signed; just the repository release files are gpg-signed.


> Then there is this weird thing where flatpaks force a runtime on you. My application is a statically linked go binary. But flatpaks pretty much want to force me to add an entire freedesktop suite as a dependency, as you simply cannot choose no runtime.

That's a weird criticism to have of Flatpak. The FreeDesktop dependency is the base set of libraries all flatpaks have access to. You don't have to use it. In fact Freedesktop is a great idea, to provide a base framework applications can use that goes further than just libc. If we want Linux on the desktop, we need a common desktop framework interface.

It would be like being unhappy a static go binary on Windows COULD access all of the win32 libraries.

Also, Flatpak currently is best suited to GUI apps. If your go binary is a GUI app using either GTK or KDE, these will need the freedesktop facilities (such as D-Bus) to actually run properly.


For sandboxing, the user is supposed to decide whether to grant the permissions you requested. In practice most Flatpak apps don't use sandboxing, but hopefully that will change and permissions will be treated with scrutiny.


I'm in a similar boat with regards to launching a new, unknown product in the snap store. As an interim solution I'm planning to ask users to use the beta flag: `snap install my-app --beta` which gets me around the secure sandbox requirements.


Did you mean devmode instead of beta?

Just wanted to ease one worry for you: the age, popularity and maturity of your application have no influence on whether or not these permission requests are granted. The approval process is formally defined and mostly depends on what your application does exactly.

You can find the specific processes in the docs at the bottom of this page: https://snapcraft.io/docs/permission-requests


In case this eases your worries: the approval process for permissions is formalised and does not depend on how mature or well-known the app is.

In addition: users always have the option to override permissions. The approval process is for automatic (default) permission grants. Even if these are not granted, users can grant them manually.

The specific processes for approval are listed at the bottom of this page: https://snapcraft.io/docs/permission-requests


I feel like both Snappy and FlatPak are inferior to AppImage, which IMO is much easier to use (it's a self-contained executable) and doesn't rely on any fancy management daemons or what have you.

Like, in terms of user experience it's about as close to the Windows-style "download this .exe and run it" approach as one could get, though it'd be interesting to see a macOS-style "put this .app in your ~/Applications" approach as well (which should be doable with a daemon watching such a folder and generating e.g. menu entries and such, as an optional component or perhaps a feature of the desktop environment itself).


> the company I work for in my day job now owns Red Hat

That's the longest spelling I've ever seen for a three-letter company.


I assume then that you don't speak German?


Every attempt at "solving" this "problem", including Snap, Flatpak, and AppImage, is an absolutely awful regression in the state of Linux. I absolutely hate this trend. These tools solve problems for only two kinds of software: proprietary software, and software with such reckless, runaway complexity that it can only run in one specific environment. I have no interest in either kind and I will give no quarter to "solutions" for their "problems".

Criticism specifically regarding Flatpak: https://flatkill.org/


I mostly agree with you and I'm not going to get even near Snaps and Flatpaks.

However, ignoring the aspect of software distribution, wouldn't you agree that the approach taken by the Linux desktop today is deficient security-wise? For example, I would like to be able to give mbsync (or Thunderbird or whatever) my IMAP password without giving it to any other program. So I don't want to store it in mbsync's config file in plain text. Neither will I use gnome-keyring (or any other keyring) because it doesn't have any kind of "program authorisation". Any program can just spawn a new "secret-tool" process and get my credentials from gnome-keyring.

I've been thinking for a while about implementing a keyring which runs as a daemon with SUID of a dedicated user and checks which program sends requests to it, using /proc/pid/exe, but I'm not sure if it's a secure source of truth: how e.g. namespaces affect what's visible in /proc/pid/exe. I know you've been developing himitsu[1]. Have you thought about this problem in that context?

[1]: https://git.sr.ht/~sircmpwn/himitsu


I agree with you, but the solution would have been Plan 9 namespaces, not Linux containers. What we're working towards today is awful.


flatkill.org is clickbait, not written in good faith, and doesn't propose any solution. Moreover, things like "it's obvious Red Hat developers working on flatpak do not care about security" are just unnecessary and toxic.

Issues mentioned on flatkill are already fixed, will be fixed, or don't depend on flatpak itself (like the UI / icon in the software app store).

I don't like Flatpak either but I think we should elevate the debate to deeper architectural issues of flatpak that won't be fixed easily. Personally, I do not like the following in Flatpak :

- no effort on full reproducibility like Nix&Guix

- a big fat flat runtime rather than traditional fine-grained dependencies (although OStree avoids duplication, it's still not very elegant)

- you can't install extra pkg in the sandbox. So the quite overkill solution in RedHat's vision is to separate between Toolbox/Podman for devs vs Flatpak for users, rather than trying to make a single unified sandbox for everything. Of course everything breaks down when you try to code using a Flatpaked IDE, if you follow RedHat's vision you basically need to spawn a toolbox container from an unsandboxed flatpak instance of your IDE : https://github.com/flathub/com.visualstudio.code/issues/44

So personally, I'm still waiting for a packaging system that is :

- compatible with the idea of a declarative/immutable os (like nix, guix, silverblue)

- tries to make everything reproducible (like guix)

- sandboxed with runtime permission API (like Flatpak portals, IOS, Android)

- sandbox can be augmented with packages so that you can code in your sandboxed IDE + add necessary dev packages inside a same sandbox without having to break it


My favorite anecdote about Snap is the development team's opinion when it comes to users wishing to relocate their ~/snap directory elsewhere.

It's a commonly requested feature and being able to move it would follow the Freedesktop.org spec, but the developers don't care.


What's stopping someone from doing:

$ ln -s /somewhere-else ~/snap


The problem isn't that the folder exists, the problem is that software is being installed in a non-hidden folder in the home directory. That's supposed to be a space for user files, not system software.

If anything has to be installed in the home folder for some reason, it is supposed to go into .local, so the user doesn't see it among their documents and photos.


I've allocated a 5GiB partition to /home on my SSD, as it does not need to be bigger. I don't want it filling up with software or other things like ivy/maven caches.


~/snap would still exist then


bind mounts
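e.g., an /etc/fstab entry along these lines, so the redirect survives reboots (paths are made up):

```
/data/snap   /home/alice/snap   none   bind   0   0
```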


Just hack your ls binary to not show the directory ;)


Fun fact: the Snap team's solution to this problem is to list ~/snap in a .hidden file[1] so that Nautilus or Dolphin hide it from view.

[1] https://en.wikipedia.org/wiki/Hidden_file_and_hidden_directo...


> Applications in this store cannot be patched, or pinned. You can’t audit them, hold them, modify them or even point snap to a different store. You’ve as much empowerment with this as if you were using proprietary software, i.e. none. This is in effect similar to a commercial proprietary solution, but with two major differences: It runs as root, and it installs itself without asking you.

Short version: Canonical is doing classic vendor lock-in with its Snap Store. I'm pretty sure 99.9% of HN users know what that means :)


You absolutely nailed it and this is why -- if I have any say in this (and I do) -- we will be moving away from Ubuntu.

It's a completely insane stance for server systems. (I believe it's still possible to run server systems without snaps with various "workarounds" etc, but we can feel which way the wind is blowing...)


You'd use docker instead of snaps on a server typically. Snaps are completely redundant there.


I personally would avoid docker, but 'containers', yes.

> Snaps are completely redundant there.

Yes, they certainly are but it seems to be harder and harder to get rid of snapd. It's probably not going to be an actual issue for Focal, but it's about this trend... Maybe Canonical will see sense -- who knows?

Meanwhile, I do have to at least have a broad plan for the next 2-5 years, so...


Have you picked which distro yet?


Not too seriously, but it kind of depends on the project which $OTHER_DISTRO we'll go with, I think.

Debian stable is an obvious choice for projects that don't rely on too much "new stuff" because we already have a lot of stuff that uses APT, deb repositories, etc. Otherwise, probably CentOS/RHEL for those situations that are just ultra-averse to incremental change and prefer a HUGE change once every 5-10 years.

I think we might become a bit more adventurous and move to e.g. NixOS for "newer" projects. That's probably going to have to be trialed for a few projects before we go all in on it, but it seems really nice for servers (and dev machines for that matter), but it'll be interesting to see if you have to truly go all in to reap the benefits. (The worry here would be the amount of upstream 'support' in terms of manpower to bring in security updates, etc.).

(I'm also vaguely aware of SuSE, but I only spent a very brief period of time with it about 10-15 years ago and don't really remember any distinctive features either way. Which is kinda weird, because it seems to be known as the 'popular in Europe' distro?)


>You can’t audit them, hold them, modify them or even point snap to a different store.

In particular, it's easy to inspect the sources for apt packages using "apt-get source". Snap seems to have no equivalent command.


That is, as long as the package publisher distributes the source. While that's the case for some standard repositories, packages in "restricted" (some of "multiverse" too) and third-party apt repositories can be published in binary-only mode. You have no ability to look at the source in those circumstances, even if you apt-installed them.


You are not required to publish source packages, most of the third party repos I use either don't have them or they are surprisingly useless because I don't have the build environment they were executed in.


I'm not sure how this is relevant. You're not required to publish source debian package either, yet there is still a nice command to use in case the source package exists.


Sure because Snap is designed to be able to distribute closed source applications. Part of the reason Snap and Flatpak exist is that distributing binaries on Linux is an enormous pain.


So allow viewing sources for the open source packages. If sources aren't available, then the user can easily opt not to install the binary.


Still, it's an improvement over the current situation. Using snap IMHO only makes sense for closed-source software, or for open-source software that you don't want to self-compile when a package is only available for a different distribution. A typical installation procedure in that case might be to forcibly install it with dpkg while ignoring the dependencies which cannot be resolved automatically. (Or worse, convert rpm to deb and then install it...) Of course the source is still downloadable; on the other hand, reproducible builds are far from standard anyway. I would prefer snap any time over other, cruder installation methods if compiling is not viable.


> distributing binaries on Linux is an enormous pain.

It doesn't have to be. Packaging with FPM [0] supports many targets (deb, rpm, etc.) and ELF2deb [1] (shameless plug) packages any files into a .deb with no effort.

[0] https://github.com/jordansissel/fpm

[1] https://github.com/NicolaiSoeborg/ELF2deb
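A minimal fpm invocation looks something like this (package and file names are examples):

```shell
# Stage the files as they should land on the target system
mkdir -p pkgroot/usr/local/bin
cp myapp pkgroot/usr/local/bin/

# Build a .deb from that directory tree (swap "-t rpm" for an .rpm)
fpm -s dir -t deb -n myapp -v 1.0.0 -C pkgroot .
```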


That's not the difficult part. The hard part is compiling a program that doesn't depend on Glibc 2.X when your users only have 2.6 or use Musl or whatever.


All that, and it takes Ubuntu's default calculator program 4 full wall clock seconds to open on my laptop. Nothing grows in this desert.


Is there no way to sandbox these packages - "have you cake and eat it too?" I'm a noob to modern Linux.

Edit: Are snaps images? ...like containers?

Edit 2: answer:

> mounted dynamically by the host operating system, together with declarative metadata that is interpreted by the snap system to set up an appropriately shaped secure sandbox or container for that application


Sandboxing addresses AN issue but does not address any of the issues discussed in this article.

Sandboxing and running apps under reduced permissions make a lot of sense, but they should be seen as tools to increase security, not an outright answer: poorly used, any tool can be less effective or even harmful.

Historically, increasingly effective lockdown often ends up impeding things the user actually wants to do, which leads to apps asking for permissions, with the net effect of training users to click yes to enable whatever the app wants to do.

It ends up being not just a technical challenge but a psychological one as well, and it's less easily resolved once you start having to deal with real users.


It seems to incorporate some novel ideas despite its obvious shortcomings.

I also think that Linux distributions have so far done very poorly when it comes to cross-distribution/forward/backward compatibility of packages. This is a non-issue for popular open-source packages, since they can easily be packaged by the distributors. But less popular packages sometimes need to be installed in a hacky way when they are not available for the target distribution. It's even worse with commercial software, which can degrade with each distribution update.

Also, the package format will probably improve over time and add features that would allow auditing, for instance. And with proper de-duplication (perhaps even at the filesystem level) it should be possible to deal with the waste of space.


The main (and better supported) competitor is Flatpak, which at least doesn't have the terrible marketing.


All containerization is just the fever to the sickness that is the future shock from the extremely fast rate of development of major libs like c++$year, glibc (no matter what they say about having stable endpoints), and the like. You can't run a program written today on the system repo libraries from 5 years ago.

Containers try to mitigate this problem but like a fever they often end up making things worse.


I don't run any APT based distribution right now but I understand the issues... I think the biggest problem I see is Canonical developing what looks like a very useful tool but holding it proprietary.

If Canonical provided the snap creation and hosting tools to the community I imagine it would be judged on its technical merits.

As it is, I see more and more reports that Canonical is trying to gain more and more control and that's exactly what I don't like to see.

Would have tried Ubuntu but they've poisoned that well. I'll have to recalculate.


As far as I can tell, snapcraft[0], the tool that allows the creation of snaps is open-source and on github. However the hosting server-side code is closed source.

[0]: https://github.com/snapcore/snapcraft


> glibc (no matter what they say about having stable endpoints)

glibc does have a stable ABI. What they don't have is a frozen ABI, so they will add new entry points. But old programs linked against glibc do and will continue to work fine.

> You can't run a program written today on the system repo libraries from 5 years ago.

You can absolutely write programs today targeting 5 year old ABIs. I agree that the FOSS toolchains could be more helpful there so you can just pass some compiler flag for the oldest glibc version you want to support but it is not impossible to achieve that on your own.
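One way to check what you actually depend on is to list the versioned glibc symbols your binary references (the binary name here is a placeholder):

```shell
# The highest GLIBC_* symbol version is the oldest glibc the binary can run on
objdump -T ./myapp | grep -oE 'GLIBC_[0-9.]+' | sort -Vu | tail -n 1
```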


Do you see the fast development itself as a problem? I tend to see the real issue as most package managers only allowing one version of a package to be installed, leading to conflicts very quickly.

The best-known manager that does allow this is Nix. Do you know of any more, or maybe ways to get this working with "traditional" package managers? (Completely transitioning to Nix would be quite hard, from my limited experience.)


For many packages, gentoo's portage handles this with slots


Wouldn't it just be simpler to statically link apps then?


Indeed! All this "packaging abstraction" stuff is ridiculously silly when you can simply distribute a static binary.
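For instance (the usual flags; program names are placeholders, and the C case needs the static libc package installed):

```shell
# Go: drop the cgo/libc dependency entirely
CGO_ENABLED=0 go build -o myapp .

# C: link everything in statically
cc -static -o myapp myapp.c

# Sanity check: should report "statically linked", no dynamic loader involved
file myapp
```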


Thank you - :)

Reading about the cons of Snap now - "auto-updates cannot be turned off" - https://en.wikipedia.org/wiki/Snap_(package_manager)


Yep. One of many reasons they're banned in our shop.

Even for client software it is bad - an app quitting out from under me because it wants to update is functionally equivalent to a crashing bug - but they're offering daemons this way, which is just insane.

Who doesn't want all their containers randomly restarting because someone up the distro chain decided when your machine needed to upgrade?

https://snapcraft.io/lxd


The containers do not restart when the snap package is updated.


If snap decides to update lxd on the nodes of your lxd cluster at the same time that a node has failed it is a CF.

The cluster will not come up because your down node is now not the same version as the other cluster nodes.


> an app quitting out from under me because it wants to update

This doesn't happen, at least on account of Snap. It's copy-on-write when it comes to updates, and whatever version you have running at the time of update will happily keep on running. I've been mildly confused by this a couple times, when I would end up with two different versions of VS Code running side by side, but the bright side was that my work wasn't interrupted.

It incidentally also sidesteps all the problems that stem from libraries or configuration being updated under a running application, which can happen with Apt.


Just what we all want on our production hosts.


Especially when they're being installed covertly through APT


Auto-updates of all Snap packages can be turned off by adding a line to /etc/hosts:

127.0.0.1 api.snapcraft.io
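If you'd rather not lie to the resolver, snapd also has a built-in (but time-limited) knob for this, assuming a reasonably recent snapd:

```shell
# Defer all snap refreshes for 30 days (snapd re-arms once the hold expires)
sudo snap set system refresh.hold="$(date --date='+30 days' +%Y-%m-%dT%H:%M:%S%:z)"

# Show when the next refresh is scheduled
snap refresh --time
```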


This is sort of like saying you can turn off auto-updates by leaving your Internet connection disabled - true, but missing the point that what people are primarily complaining about is having to adopt an adversarial relationship to the software they use.


Depending on the qualities you're after, AppImage is also really great. Advantages over Flatpak are that they are completely portable (you can store them anywhere and run them from anywhere), and they don't require a special runtime or package management infrastructure of any kind.


AppImages are basically (kind of) statically linked binaries.

Not my favorite to install software since they're usually huge (500mb is not far out of the norm), but great for things that are hard to install, as they're pretty much guaranteed to work.

I just wish there was a command line switch to stop it from wanting to install itself on my system (move the AppImage to /opt).


I have never seen that behavior from an AppImage. I just mark them executable and run them and they do their thing. If there is a specific AppImage behaving this way then it is probably the fault of the author, and if all of them are doing it then I wonder if it is the behavior of something like appimaged?


If you need this kind of packaging but want to retain freedom, just use an alternative solution like flatpak, which sandboxes from the get-go and lets you control what parts of your system the application can access


> just use an alternative solution like flatpak

Saying "just use an alternative" seems overly dismissive of the complications that entails.

IIUC, the only options for that are (a) abandon Ubuntu, or (b) actively circumvent Ubuntu's software distribution infrastructure, reminiscent of dealing with Windows 10 forced updates.

IMHO this somewhat erodes Ubuntu's value proposition.


I have a laptop that I'm planning on rebuilding from a failed NTFS/Win10 system to Linux something.

Now I know it's not going to be Ubuntu or anything else derived from Canonical.


> Now I know it's not going to be Ubuntu or anything else derived from Canonical.

Why avoid Ubuntu derivates like Mint or Pop!_OS, though?

They're doing the heavy lifting of fighting Canonical on this issue. Maybe using one of those distros instead of vanilla Ubuntu actually strengthens the pressure that Canonical feels on this topic.


Fair point, but honestly I'm in the "don't have enough time to Gentoo this build" category.

If I know I can avoid Snap crap by changing distros for now, that's what I have a timeslice for.


Why not EndeavourOS? It's based on Arch Linux, with minimal numbers of external packages; quite unlike Manjaro actually. You can just treat it as a standard Arch install, and even use the ISO to boot with wi-fi drivers and all and install Arch plainly, without the non-wifi and other regrets. EndeavourOS is pretty great; I personally use it as a live-ISO (XFCE environment) and just install Arch from within the live EndeavourOS booted iso. Just throwing it out there.


Hadn't heard the name but glad I did. I was going to look for a live CD distro with good NTFS read support for the old drive recovery. This sounds good.


I'd seriously recommend looking at FreeBSD instead. Everything Linux seems to be going down this we-know-better-than-you path.


I agree that this erodes Ubuntu's value proposition. I run Manjaro with KDE, personally.


AFAIK Ubuntu does not force reboot like Windows 10.


I believe you're correct. I'm not arguing that Ubuntu is now as bad as Windows 10; just that it's a move in that direction.


Neat - "permissions that are defined by the maintainer of the Flatpak and can be controlled (added or removed) by users on their system" - https://en.wikipedia.org/wiki/Flatpak

I'm wondering now if Wikipedia would be appropriate to host a linux sandboxed app packaging comparison chart in the form of https://linuxhint.com/snap_vs_flatpak_vs_appimage/ - and it would be extensible ...since Wikipedia.


You can easily create very strong sandboxes for daemons using unit files and packaging them as native OS packages.
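For instance, a handful of stock systemd hardening directives go a long way (an illustrative excerpt, not a complete unit file):

```
[Service]
DynamicUser=yes
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
CapabilityBoundingSet=
```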


"Linux on the desktop" is apparently a lot like having sex in a greenhouse: fucking close to Windows.


I wonder how hard it would be, in theory, to write an open-source "cracked" version of snap that lets you do these things.


The Snap server is closed source, so this would require building an open source Snap server as well. At this point, it would be much simpler to use Flatpak or some other solution.


I hate snap as much as anyone, but it may not be as hard as you think: <https://ubuntu.com/blog/howto-host-your-own-snap-store>


The difficult part is maintaining compatibility with snapd.

> snapstore was a minimalist example of a "store" for snaps, but is not compatible with the current snapd implementation. As a result I have removed the contents here to avoid further confusion.

https://github.com/noise/snapstore/


Why not simply use Flatpak instead?


> People run linux because they want visibility and control into what is happening on their systems.

I mean, not really? Or rather, that's not the only reason, or the main reason for many users.

Many people just want to use a FOSS OS, for the reason that any buggy component can be forked, fixed, and PRed, which—if you're an IT organization yourself—often means far less turnaround time than waiting for the appropriate vendor to fix the problem for you.

Honestly, I'd be fine using Windows Server or some other "cathedral" OS, if I could fork/fix/PR its components. I want a stable operational substrate for my app that's quick to fix in an emergency; I don't care whether it's made out of tiny shell-scripts or huge C libraries, as long as it gives me that property.

In that perspective, snap seems fine (you can still fork/fix/PR a snap) just like Docker images are fine, or systemd is fine.

Maybe, in the end, I'm more of a BSD person than a Linux person. I mostly favor Linux installs for the hardware compatibility and performance, not because it really fits my philosophy.


I kind of hate snap. I thought the flatpak idea was OK but snap just feels intrusive.


> Canonical seems to want to take away that visibility and control from their users.

The Microsoft way: over-automated operating system DEs which attempt to make the OS appear user-friendly yet create one headache after another. They suffer from massive interconnected webs of program state and logic which fail, leaving you with the only sane option of rebooting. The more automation the vendor inserts, the less transparency and control you have.


>People run linux because they want visibility and control into what is happening on their systems. Canonical seems to want to take away that visibility and control from their users.

tbh I just run linux because I want a good dev machine and I personally don't care if canonical abstracts updating software away, as far as I'm concerned they can keep everything up to date and do their thing, for me that's a plus for snaps, I generally find them pleasant to use.

In my experience people on HN in particular tend to vastly overestimate how much people value control vs features/abstracting routine tasks away.


Your link just convinced me to switch from Ubuntu. Thank you!


But Mint is too lazy to build their own packages, so they use Ubuntu binaries that are installed as root. This is IMO a PR move and hypocrisy: if you don't trust Canonical, don't use Mint; use Debian or some distribution that has the capacity to host and build its own binaries.


As long as you can audit the source and build chain what's the problem?


>As long as you can audit the source and build chain what's the problem?

As long as Linux Mint developers are not doing that, how does it help you? You don't trust Canonical's snap binaries but you trust the deb binaries; you could be honest and say that you don't like snap while still trusting the deb binaries.


The point isn't that the devs are the ones doing the auditing, even though pretty much everything that lives in DEB and RPM repos has maintainers who do.

The point is that you can audit without having to depend on a third party. Nobody's claiming audits are free or that they're assumed. The point is that you have the option to choose to trust as much or as little of the build chain, from the compiler to the target code to the artifacts.


Let me clear some things up; tell me if I am wrong, link me to the correction, and I will apologize.

- Mint uses Ubuntu repositories

- Canonical pushes the changes they want into these repos; these changes are probably done by scripts that build source code on Canonical servers.

- the Ubuntu repos also contain binary blobs

- when a Mint user does an update, he gets the binary directly from Canonical servers; there is no Mint dev or Mint script that does any check to see if, for example, the evil Canonical modified the NVIDIA driver and added even more evil to it than there already is

Now explain to me: if all the above is true, why would someone who does not trust Canonical use Mint? There are no safety checks to prevent evil Canonical people from doing evil things.

My conclusion is: if you don't trust Canonical, don't use Mint. Maybe Mint is working on addressing this, and soon we will see a PR campaign announcing that they are finally able to self-host, but until then I would stop the hypocrisy about not trusting Canonical. (Btw, there are many smaller distros that host their own repos; not sure why Mint is not doing it.)


The differences are the following:

* Nonfree software is generally demarcated as such. Some nonfree software has available source (in the case of some codecs); other nonfree software comes as blobs.

* The Chromium package is open source, and in most distros comes as binary built from the toolchain set up by the package maintainers. In all free software distros if you don't want to download the binary, you can download the source and build locally with the provided build scripts (in the case of most APT packages in Debian/ubuntu).

* Serving Chromium binaries with Snap removes the option of downloading, inspecting, and running the build chain locally

* Serving a different version of Chromium, or replacing the stock version with a different variety, cannot be done without creating a new Snap repository. Downstream distros like Mint need to replace some of the stock Ubuntu stuff, just like Ubuntu changes stock Debian packages

* Because there is no open source Snap repository software, Mint is unable to set up an alternative repo that could work around some of the objections they have with Ubuntu.


You are making my point: "trust" is not the issue, the issue is "control". Canonical has control of the snap store, and Mint can't insert their customizations on top.

Again, if Canonical is evil and can't be trusted, why would I run Mint? Do the developers run any scripts to alert me if Canonical slips something bad into a binary?

I think it's fine if they remove snaps, but IMO it's stupid to accuse Canonical of being evil and untrustworthy while you blindly trust their repos.


> Again, if Canonical is evil and can't be trusted why I would run Mint?

Rational self-interest. I don't think the tech giants are good for society, but not working with them would mean slipping into irrelevance.

Say, I'm a game developer. Do I trust Microsoft? No. Do I sacrifice 90% of my profit to boycott Windows? No. There are different degrees of "evilness", and the scale does matter, too.


But you can keep compatibility with Ubuntu if you want, by using the same code while keeping control.

Honestly, tell me this does not sound idiotic: "Microsoft is evil and we don't trust them; please run our own copy of Windows that is the exact same thing but with different colors. MS can push an update and delete all your files, because this is not a supported configuration and we have no scripts to check for it, but we are not competent enough to set up our own repos and scripts like other distros."

I understand why Mint does what it does, the only idiotic part is complaining about trust in Canonical.


> * Nonfree software is generally demarcated as such. There is nonfree software that has available source (in the case of some codecs), other that comes as blobs.

This is the same in apt as it is in the Snap Store. Compare https://snapcraft.io/chromium to https://snapcraft.io/spotify for example: the license field is clearly presented there.

> * The Chromium package is open source, and in most distros comes as binary built from the toolchain set up by the package maintainers. In all free software distros if you don't want to download the binary, you can download the source and build locally with the provided build scripts (in the case of most APT packages in Debian/ubuntu).

Also true for the Chromium snap (see next item for details).

> * Serving Chromium binaries with Snap removes the option of downloading, inspecting, and running the build chain locally

This is outright false. The Snap Store page for Chromium is available here: https://snapcraft.io/chromium. It links to the source, which is the git repository here: https://code.launchpad.net/~chromium-team/chromium-browser/+.... You can use this source together with snapcraft (which is Free Software, licensed under GPL-3.0) to download, inspect and run the build chain locally, including with any modifications that you want to make. You'll get a snap package as output, which you are free to distribute, and other users can install it using the snap CLI.
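As a rough sketch of what that workflow looks like (the repository URL is abbreviated above, so a placeholder is used below; substitute the real one linked from the Snap Store page):

```shell
# Sketch only: build the Chromium snap locally from its published recipe.
# <LAUNCHPAD_REPO_URL> is a placeholder for the repository linked above.
git clone <LAUNCHPAD_REPO_URL> chromium-snap
cd chromium-snap
snapcraft                                         # runs the same build chain locally
sudo snap install --dangerous ./chromium_*.snap   # install a locally built, unsigned snap
```

The `--dangerous` flag is what lets you sideload a snap that isn't signed by the store.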

> * Serving a different version of Chromium, or replacing the stock version with a different variety, cannot be done without creating a new Snap repository.

Partly false. You can ship a different version of Chromium in the Snap Store under a different name. This is an automated process, rather like creating a PPA. As long as you aren't misleading anyone and you follow the terms of the relevant licenses[1], nobody will stop you.

> * Because there is no open source Snap repository software, Mint is unable to set up an alternative repo that could work around some of the objections they have with Ubuntu.

Partly correct. One generally cited reason for this is that the same criticism was leveled at Launchpad, which was opened in response - but nobody is running an alternative production instance of Launchpad anywhere, so Canonical doesn't want to waste that sort of effort again. Another is that store fragmentation is bad. I'm just stating the other side's position here; please don't shoot the messenger.

[1] Chromium's licenses are listed as: Apache-2.0 AND BSD-3-Clause AND LGPL-2.0 AND LGPL-2.1 AND MIT AND MS-PL AND (GPL-2.0+ OR LGPL-2.1+ OR MPL-1.1)


>when a Mint user does an update he gets the binary directly from Canonical servers

When I update Mint binaries, I get them from one of about 30 mirrors which Mint enables me to choose. (Or it will decide for me based on speed.) Do the mirrors at Clarkson, Harvard, Purdue, UW, etc belong to Canonical? I think not. Nor does most of the code the binaries are built from.

Canonical has made its choice, Clem's made his.


Mirrors are just mirrors, they are not build farms that build from source.

I would translate the next sentence into formal logic and prove to you that it makes no sense, but I'm not sure you would understand the symbols, so let me try again in English.

1 Mint does not trust Canonical

2 Mint plugs their users' systems directly into Canonical's untrustworthy repos to run possibly "evil" binaries and scripts as root.

If 1 and 2 are true, then as a user you should not use Mint, and as a Mint developer you should get to work and finally create an independent distribution.

IMO 1 is false; their mentioning "trust" was probably a mistake.


> Applications in this store cannot be patched, or pinned. You can’t audit them, hold them, modify them...

I think it should be noted that this applies to "third party apt repositories" in general, the use of which is the problem that snaps fix[1]

Some snaps are built from Free Software and reproducible sources, as are some third party apt repositories.

In other words, if this criticism bothers you, then you should never install from any third party apt repositories ever. If some are acceptable to you, then so should some snaps.

If you don't want to use third party software ever, then you can still use Ubuntu without snaps.

[1] Third party apt repositories often break users' systems.


> if this criticism bothers you, then you should never install from any third party apt repositories ever.

This does not follow at all. Third-party apt repositories work just like Ubuntu's apt repositories; you have just as much ability to audit, hold, pin, etc. in both cases.

If there is a difference in reliability (software from third-party repos is more likely to break your system; and, btw, software from Canonical's repos has also broken systems in the past, so "avoid third party repos" is not a guaranteed way to avoid software breaking your system), using snaps to install third-party software instead of third-party repos does not fix that problem: the third-party provider is still just as unreliable as before, and their software is just as likely to do something stupid.
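For concreteness, "hold" and "pin" on the apt side look roughly like this (the package name is just an example):

```shell
# Freeze a package at its currently installed version:
sudo apt-mark hold chromium-browser
# ...and release it later:
sudo apt-mark unhold chromium-browser

# Or pin a version range in /etc/apt/preferences.d/chromium
# (a priority above 1000 even allows downgrades):
#
#   Package: chromium-browser
#   Pin: version 81.*
#   Pin-Priority: 1001
```

None of this has an equivalent exposed by the snap CLI's normal update flow, which is the point being argued above.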


Third-party apt repositories are a security nightmare: you are giving a third party unrestricted, semi-silent root access to your computer. I can trust my distro provider with this, but it should be strongly discouraged for third parties. Yet installing PPAs to get updated builds of standard packages seems to be considered "normal". At least this doesn't hold true for snaps.


> I can trust my distro provider with this, but it should be strongly discouraged for third-parties.

Again, it all depends on how much you trust the third party compared to how much you trust your distro provider. I don't have many third party PPAs installed on my computer because there aren't many third parties I trust that much. But there aren't zero either.

Also, a big part of my trust of my distro provider is based on having source code forced to be open, and another big part is based on them not doing things behind my back. Snaps significantly erode both of these aspects of trust.


> Instead, installing PPAs to get updated builds of standard packages seems considered “normal”.

Perhaps we should consider why this is: because people want up-to-date software on their computers (desktop or server), instead of being beholden to whatever version distribution maintainers have decided you can have.


If you don't want software that has been configured and tested to work together by a distro maintainer, you're volunteering to do that work yourself and become the sole maintainer of a bleeding-edge distro with a very small audience.


That's fine - and why I use macOS as a desktop: because Homebrew consistently gives me up-to-date versions of the software I want. If I were to use Linux on the desktop, it would be a distribution which also allows this with minimal fuss - almost certainly not a Debian derivative.

I understand the _reasoning_ behind the distribution model - I just don't think it works very well, and apparently nor do all the people who use PPAs in the course of everyday use to get up-to-date software.

It's also worth noting that FreeBSD does not have this problem - ports are updated _much_ more often than most Linux distributions seem to be.


FreeBSD is pretty much a checkpointed rolling release.


> audit, hold, pin

Originally you said audit, hold, modify.

You can audit snaps just as you can audit third party apt repositories. Either the publisher ships the source such that you can rebuild, or they don't.

You can hold snaps using this method: https://forum.snapcraft.io/t/disabling-automatic-refresh-for...

You can modify snaps just as you can modify what you get from a third party apt repository. Either the publisher ships the source such that you can modify and rebuild, or they don't.
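For reference, snapd does expose some refresh-control knobs, though what's available depends on the snapd version (these are from my reading of the docs, not guaranteed for every release):

```shell
# Defer all refreshes until a fixed timestamp (long-standing option):
sudo snap set system refresh.hold="2020-07-01T00:00:00+00:00"
# Newer snapd (2.58+) can hold indefinitely, per snap or system-wide:
sudo snap refresh --hold=forever gnome-calculator
sudo snap refresh --hold
```

Note that older snapd versions cap how far into the future `refresh.hold` can be set, which is exactly the limitation people complain about elsewhere in this thread.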


> Originally you said audit, hold, modify

I didn't; the post you were originally responding to (the GP of my original post, which is the GP of this one) did. I am not the person who made that post.

You appear to be saying that you can audit, hold, modify, pin software in third-party repositories. That means you agree with what I was saying in the GP to this post, that you have just as much ability to do all these things with third-party repo software as you do with software distributed using snaps.


Snaps are super laggy. GNOME calculator on Ubuntu runs in a snap and it is baffling that whoever made the decision to package it in a snap by default was OK with the fact that it takes 2 seconds to launch a basic calculator on a 2017 laptop (edit: re-tested, it took 5 seconds).

To top it off, a couple months ago my calculator disappeared. For some reason I have been having problems with snap applications disappearing for a while now, even though I have made no configuration changes. How fun it is to discover that such a basic tool just no longer exists on your machine, way to fuck up a morning.

I get that snaps make application distribution easier, but please don't do it at the expense of the user. I've had more success with Flatpak and AppImages, but not enough experience with any of them to judge which is best.


This move made me 100% opposed to snap. Don't beta test on my daily-drivers as a default.

The calculator should -always- load in a couple ms. It's one of the simplest tools on the shelf! If a calculator snap can't load immediately then I have no interest in a single other snap app. Waiting more than a second is a nightmare, and I've definitely waited 5+ seconds at times. That's about how long it takes my system to boot. As far as I'm concerned it's a complete failure and I couldn't possibly trust it on any desktop or server that I manage.


This is exactly my experience. On top of that, snap creates visible directories in $HOME, messes up my loop devices, and snap applications can't even access /tmp. I never had any problems with Flatpak and AppImage; I will, however, use it solely for proprietary software.


> On top of that snap creates visible directories in $HOME, f-up my loop devices

This. snap seems to spew traces of itself all over the place on an Ubuntu install, and it's less than clear why any of them are there.


It's weird how modernized things can end up so bad at their original purpose. Like a computer being a bad calculator.

The same thing with phones. Smart phones are good at a lot of things, but they are mediocre as a telephone.


I think it's because they extract the compressed rootfs each time to be mounted in an LXC container. Even on a beefy computer, that takes considerable time.


What is weird to me is that apparently there has not been someone in a position of power in this project that would go "Wait guys, this is not good. Any other ideas? How can we improve?"

No, just roll on with it.

There was a period, around 2010-2015, when I really felt that computers were fast. SSDs were getting more affordable, and that was a huge improvement; every action was immediately responsive. In 2020, that has somehow been undone. It takes 5 seconds to launch a calculator. Software guys really like to undo all the advances the hardware guys make.


In that period, computers made a significant leap forward, and it has taken developers some time to catch up and fully utilize the surplus performance.


Somebody somewhere is probably running a calculator as an electron app that takes 10 seconds to open.


You don't even need Electron. The "modern" Windows 10 calculator takes a dozen seconds to start on my laptop.


An electron app packaged as a snap.


Why does that take considerable time? I’m curious as to the technical reasons.


I think gnome-calculator was just an experiment for Ubuntu 18.04 to test snap packages. In 20.04 it has been switched back.


Sounds like Windows with their UWP apps. The Calculator also used to take 5 seconds to load. It's down to about a second now, but it still baffles me that MS management is OK with that.


Agreed!

I guess people in MS's test and quality departments use an i7 with an SSD drive. If so, it's bad. If not, it's simply hard to understand, if we leave corporate politics aside.

To me, the calculator has always been more or less OK, but the Image Viewer is a clusterfuck.

With the old viewer, opening is almost instantaneous when clicking a file in Windows Explorer. The new one is slow as a crawl: the first time may take 5 or 6 seconds (on a 7-year-old laptop); the next time it's down to 2 or 3. Compare that to milliseconds using the native app. I've never felt the need to use a specific image viewer, but now I'm happy with IrfanView, as fast as the old app and full of features.

How do such things pass triage stages?


My calculator is now the plain Python interpreter


Have you tried `bc`? I find it faster to start than Python.


The difference is not noticeable, at least on my computer, and since Python is the language I work with the most these days, I always have a REPL open
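For what it's worth, either one makes a serviceable command-line calculator (assuming bc is installed; it isn't always by default):

```shell
echo '2^10' | bc              # bc uses ^ for exponentiation
python3 -c 'print(2**10)'     # prints 1024
```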


Can confirm; I have the exact same problem with the exact same software (gnome-calculator).

I have now switched to xcalc. gnome-calculator is better, honestly, but it's not worth feeling like it's the '90s.


You can uninstall the snap package for gnome-calculator and install the apt package instead.

I just did it and it's quick to boot up again as it should :)
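For anyone who wants the exact steps, it's roughly this (I haven't verified every release; on some releases the deb may be a transitional package that points back at the snap, so check what you actually got):

```shell
sudo snap remove gnome-calculator      # drop the snap build
sudo apt update
sudo apt install gnome-calculator      # install the deb build instead
```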


Well, there is always bc and dc.


Not sure if it's only snaps... my galculator takes a couple of seconds to open and it isn't a snap... maybe GNOME is the issue


Why the downvote? I don't even use *buntu... Not that I like snaps, but it just got a lot slower lately


I have no issues with your original comment. But in regards to your "why the down vote?" question, HN's guidelines state:

> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.


I somewhat disagree with this interpretation; I read it more as prohibiting the use of votes as part of an argument ("considering the downvotes you are wrong" and similar). In this case he got the signal that many people disagree with his message but cannot understand why.

It is one thing to make a controversial statement and then decide to attack downvoters; it is a different thing to try to understand how and why people disagreed with you.


It's interesting in that I quite like the idea of snap packages, just not their implementation. I recently purged snaps from my Ubuntu MATE machines after a few years of use, upon realizing I was only using two:

1. The micro terminal editor.

2. Chromium, because it was forced.

Well, #1 was packaged for 20.04 so I didn't need it any longer. That left Chromium. For that single package, I had to tolerate my system being spammed all over:

- Multiple irrelevant loopbacks cluttering my mounts list

- Dedicated folders in filesystem: /snap ~/snap

- Very slow startup for Chromium.

- Lots of disk space taken

- An always-running daemon! (Wasn't it root, too? Can't remember.) apt doesn't need a daemon.

Sheesh! That's not even mentioning the store issues which others have described already.

Sorry, but a few newer packages here and there are not worth all that. I'll handle it myself, thanks. What snap does isn't actually that hard; I'd keep it around if it weren't so obnoxious about putting itself front and center of everything.


~/snap is really just wholly unacceptable; how did that even become a thing? Do the people who develop this software never use their own systems?


> Do the people who develop this software never use their own systems?

I've harbored this suspicion of GNOME developers for years. It honestly wouldn't surprise me if a lot of Canonical devs don't use Ubuntu at home.


I have no nice words about the person who thought that folder is acceptable to use. Seriously, what the fuck even?


That and their utter refusal to permit controlling the updates of snaps. They had an announcement where they noted that the community wanted to control their own damn machines, and their solution was something like permitting delayed updates for up to a week.

What the heck is this, windows?


My personal theory by now is that the person/team is sick of being forced to work on a stillborn project and is trying to create as many pain points as possible to get the project killed already. I see no other explanation at this point. In that case, hats off to them.


Well ~/snap is pretty terrible but I am almost as annoyed with something like ~/.tmux.conf and no support for the XDG Base Dirs Spec or setting the config path.

With tmux it seems like this is finally possible with version 3.1 or by compiling it yourself but I remember being annoyed about this years ago.

I dislike a home directory cluttered with dotfiles just as much as that snap folder choice, because when do we ever actually not list the hidden files?


There are way more offenders here. But snap is the most egregious one because it's not hidden.

Name and shame:

* docker (~/.docker)

* Arduino IDE (~/.arduino15)

* GNURadio (~/.gnuradio)

* IPython (~/.ipython)

* FreeCAD (~/.FreeCAD)

* HPlip (~/.hplip)

* IntelliJ and others (e.g. ~/RubyMine[year])

* Cargo (~/.cargo)

* Audacity (~/.audacity-data)

* PGAdmin (~/.pgadmin)

* ELinks (~/.elinks)

* NPM (~/.npm)

* sqlmap (~/.sqlmap)

* ZAP (~/.ZAP)

* GNUPG (~/.gnupg)

* crashlytics (~/.crashlytics)

* Android Studio (~/.android)

And so so so many others just crap all over home when they could just crap in .config if it's config and in .cache if it's cache. Lazy devs.
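The fix on the developer side is tiny. A minimal sketch of spec-compliant lookup (the helper names and the "myapp" string are mine, not from any library):

```python
import os
from pathlib import Path

def xdg_dir(env_var: str, fallback: str) -> Path:
    """Return $env_var if set to an absolute path, else ~/<fallback> (per the XDG spec,
    which says relative values in these variables should be ignored)."""
    value = os.environ.get(env_var, "")
    return Path(value) if os.path.isabs(value) else Path.home() / fallback

def config_dir(app: str) -> Path:
    # e.g. ~/.config/myapp instead of ~/.myapp
    return xdg_dir("XDG_CONFIG_HOME", ".config") / app

def cache_dir(app: str) -> Path:
    # e.g. ~/.cache/myapp for disposable data
    return xdg_dir("XDG_CACHE_HOME", ".cache") / app
```

Ten lines per app, and $HOME stays clean.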


Another stupid thing is stuff like ~/.config/<stupidprogram>/Cache.

The cache is not a config, assholes. What do you think ~/.cache is for?!


Quite a lot (npm, cargo, docker, ipython, elinks, gnupg, freecad, ...) already support XDG or may be configured with environment variables [0]:

    export ELINKS_CONFDIR="$XDG_CONFIG_HOME"/elinks
[0] https://wiki.archlinux.org/index.php/XDG_Base_Directory


It should be the default that my home folder isn't cluttered. I have tens and tens of pieces of software; it's very impractical to keep track of what does what and whether I have to make each of them behave.

That list of required exports made me weary; plus, look at how many in that list are hardcoded :/. The situation is rather bad.


You can patch it all.

If the world does not match your view, it may be you who is the outlier. I quite like the current convention: hidden files in $HOME belong to applications. There is value in $XDG_CACHE_HOME - it can be safely removed (like /var/cache).

You are forcing your view on the open source community; that is rather bad.


> You can patch it all.

As I said, impractical, not a solution.

> If world does not match your view it may be you who is outlier.

Looking at the amount of software that does follow the Base Directory Specification, you're actually the outlier, insisting on obsolete conventions.

You and a bunch of other developers insist on those things; in reality, that is the actually harmful behaviour for open source.

Interestingly, but not surprisingly, that insistence very often goes hand in hand with the stubbornness to stay on obsolete mailing lists, ugly user interfaces, insecurity by default, git-email, buggy issue trackers, 80-column commit messages, obsolete security standards and practices, and so much more.

> I quite like current convention - hidden files in $HOME belong to applications.

The future is now, home folders aren't to be filled with trash. Move on or stay behind, seriously.


Let's check: I tolerate both versions, use workarounds in my .bash_profile and share them, have actually tried to patch, and have written an article [0].

You shame software, half of which has a workaround, and see patching as impractical.

The XDG Base Directory Specification [1] is not about your home folder. It is about default storage: separation of cache, user data and config, and a way to provide another config, so one can:

* remove entire config when stuck with a problem

* remove cache, think /var/cache

* `ssh -F foo` would be `XDG_CONFIG_HOME=foo ssh`

Everyone has a pain point, everyone has a workflow, there is no One True Way. Please stop shaming authors; they quit, and sometimes post here about the mob. Patching folder structure is the simplest thing; if you can't do that, who is going to fix actual bugs?

[0] http://sergeykish.com/openssh-config-in-xdg-directory

[1] https://specifications.freedesktop.org/basedir-spec/basedir-...


> Please stop shaming authors, they quit, sometimes post it here about mob.

If someone feels that talking about a bug in their software is "shaming" them, then maybe they should quit, or alternatively just quit pretending they want feedback or to write FOSS. The same applies to teams writing software.

Not to mention how harmful it is to think that everyone who picks up FOSS is actually good at it. Thinking of people as infallible is actively harmful for the end users.

> Patching folder structure is the simplest thing. If you can't do that who is going to fix actual bugs?

Incorrect folder structure is an actual bug. It might be simple to patch for the end user, but you're ignoring the maintenance burden, annoyance and cumbersomeness.

> Everyone has a pain point, everyone has a workflow, there is no One True Way.

There are paths more correct than others; some workflows are obsolete and stupid, and shouldn't be catered to. It's wilful ignorance to ignore that.

https://xkcd.com/1172/

> half of it has workaround

Bwahahaha, you may think that's fine, but I don't.


Your words

> Name and shame

> they should quit

That's why I've called it harmful: the choice between your complaints and people writing code is obvious. Fork it, patch it; there is no burden. If people care, maintainers will switch; if enough switch, the patch will get upstream. Or provide your own repository with patches; that's the FOSS way. If not by yourself, then sponsor it.

How much do you actually care? How much would you pay? Is it free as in speech or free as in beer?


> Your words

Without the rest of the context. And no, criticism is not harmful. If it is a "sin" like you say, should we look at the things you've said about FOSS projects?

> Fork it, patch it, there is no burden

Either you're delusional or you haven't done either of the things.

> if people care maintainers would switch, if switched enough patch would get in upstream or provide own repository with patches, that's FOSS way.

Yeah, and it'll take the next decade, being optimistic. GPU acceleration in Chromium and Firefox on Linux is a perfect example of how absolute shit that "way" is.

> How much do you actually care? How much would you pay? Is it free as speech or free as beer?

Feel free (as in freedom) to just type out your arguments instead of asking rhetorical questions.


So the answer is no, you will not pay.

The question was not rhetorical; it is the realization of freedom 1. Your answer implies there is a fork with GPU acceleration and no one cares (or it does not answer my post). Ah, "criticism is not harmful":

Name and shame:

* Avamander


> So the answer is no, you will not pay.

No, I won't pay devs who ignore conventions. That's like someone taking a s* on my porch and me paying them not to. Plus, demanding payment is not really in the spirit of FOSS.

> Ah, "criticism is not harmful":

That's just naming, without listing the reason. In addition, I listed projects, not people. It shows that you've totally missed the point of the original list.


This is not your software; you are just allowed to use it. It's you who is taking other people's software onto your porch (and calling it s*). No one demands anything; ever heard of bounties, sponsorship? Don't want to support the original developers - fine, go third party.

Out of principle I could implement XDG Base for the ssh client. Just like my work: implementing features I am not that interested in, providing support. I hardly believe any reasonable person expects complaints to work in that case. So how much would you pay?

Your account may be a group of people (and may not be). A project may be a group of people (and may not be). It shows that you've totally missed the point.


> This is not yours software, you are just allowed to use it,

Oh, but keep in mind that in some cases I'm not given a choice. If I could avoid using snap, for example, I would, but it was forced upon me. So I have every right to be annoyed at someone figuratively taking a shit on my porch.

Anyway, this discussion has run its course; you have no good arguments defending that nonstandard behaviour.


SSH is the most annoying because I need it installed on literally everything.


It is quite easy to patch if all you want is to move files away from $HOME.

http://sergeykish.com/openssh-config-in-xdg-directory

And it is easy to make your own repo. There are quite a lot of unofficial repositories: https://wiki.archlinux.org/index.php/unofficial_user_reposit...


It's definitely irritating but I consider the root daemon the primary offender. Kill it with fire!

Oh, forgot to mention: /var/cache/snapd/



Just install chrome without snaps?!? Works fine?!?


Not anymore. I found a chromium PPA, and while not perfect, it doesn't do any of the irritating things listed above.


I just went to chrome.google.com, downloaded it, and installed it.

shrug


Google is even less trustworthy than Canonical. I only use chromium for a few tests in incognito mode.

Firefox FTW.


You say Google is less trustworthy than Canonical, but you're using Chromium from a PPA. That's a personal packaging archive with far less auditing/eyeballs than either Canonical or Google.


> and while not perfect, it doesn't do any of the irritating things listed above.

I rarely use it and only in incognito mode. Yes, it's a tradeoff but with numerous benefits.


I use FF as my main browser, I installed chrome (non snap) only for testing, or when I'm subjected to using GoTurdMeeting.


Downvoted for downloading chrome from google to avoid snaps?


Snap sounds to me like the latest of many decisions by Canonical that are more like what you'd expect from a commercial vendor than a FOSS one. This is Microsoft-level coercion of people into your own ecosystem.

I don't doubt for a moment that it makes business sense for Canonical, but I really wonder whether there's a market for this - the huge majority of people who don't care about this kind of thing are on Windows or Mac, or even just working happily away on their phones and tablets.

Linux's selling point for me was always that I was in control and could make the system work the way I wanted; people more ideologically pure than me have slogans like "free as in freedom" or "binary blobs are bad".

I really don't see the market for "linux, but with commercial vendor practices". I switched from ubuntu to mint a while ago and I'm really happy about that right now.


I'm not sure I see how it even makes business sense. Trying to compete with Apple and Microsoft by eliminating your strengths to focus on your weaknesses isn't a good play. GNOME and Ubuntu are not and will never be as smooth and integrated as macOS, and that's okay, because that's not why people use them. Take away the openness and you have a slower, uglier Mac with a worse app store.

