> Applications in this store cannot be patched, or pinned. You can’t audit them, hold them, modify them or even point snap to a different store. You’ve as much empowerment with this as if you were using proprietary software, i.e. none. This is in effect similar to a commercial proprietary solution, but with two major differences: It runs as root, and it installs itself without asking you.
This is a great summary of why people rightfully feel nervous about Snap. People run Linux because they want visibility into, and control over, what is happening on their systems. Canonical seems to want to take that visibility and control away from its users.
I can set up an internal APT mirror for my users, servers, test systems, etc., but I can't set up an internal snap mirror as far as I can tell. This means that despite having an internal repo that I can whitelist, some package installations will now arbitrarily require internet access. I can no longer install chromium on a system without access to the internet, and package installation will fail as a result.
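For comparison, pointing clients at an internal APT mirror is just a sources.list entry. A sketch, with an illustrative hostname and suite (the real file belongs in /etc/apt/sources.list.d/):

```shell
# Sketch of pointing clients at an internal APT mirror; hostname and
# suite names are illustrative. In production this file would live
# in /etc/apt/sources.list.d/, written to /tmp here for illustration.
cat > /tmp/internal.list <<'EOF'
deb http://mirror.internal.example/ubuntu focal main universe
deb http://mirror.internal.example/ubuntu focal-security main universe
EOF
grep -c '^deb ' /tmp/internal.list   # prints 2 (two mirror entries)
```

Snap has no equivalent client-side knob, which is exactly the complaint above.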
I'd rather have an older version than a snap version, personally, but better would be two packages which Provides: the same chromium-browser, chromium and chromium-snap.
The most irritating thing here is that they're taking a package distribution system which has handled dependencies and updates flawlessly for decades, and using it to install things via a package system which solves neither problem, instead shipping multiple copies of multiple libraries and applications, which run slower and ignore any settings I have in APT.
So far, it's a maintenance nightmare, and I loathe it.
OK, "forever" is hyperbole - it was probably about 5 seconds. But it was enough of an annoyance for me to figure out how to install a deb packaged version. And every user having to wait an extra 5 seconds every time they open an app aggregates to a lot more time than Canonical saves maintaining packages.
That doesn't sit right with me, so my next OS upgrade will be Mint or pure Debian. I've been with Ubuntu since Dapper Drake, and I'd like to thank Canonical for making great distros all that time. But I'm not going to follow you down the snap-only path, time for me to move on.
If someone is debating it: just set up a Raspberry Pi with headless SSH and have at it.
> so my next OS upgrade will be Mint or pure Debian
I've moved one machine to Debian stable and haven't looked back. There are a few teething issues, like /usr/sbin not being in the default PATH, some sudo issues, and an installer that isn't as grandma-friendly, but it's rock solid and doesn't nag me about updating 137 packages of things I've never heard of needing an update.
For the longest time you couldn't dist-upgrade it, at all. The functionality wasn't there. When their web servers got hacked, they cleaned them, and got hacked again.
From a Debian user's perspective it is unclear why these small distros can't just be a Debian derivative or a Fedora spin. It's not scalable that every small distro invents their own infrastructure.
Like, I ain't crazy about what's happening with Mint and all of their hacks -- trust status: eradicated -- but I certainly get why they exist as a distro.
I can see that Mint could be nice though. On a desktop, running hardware a few years old.
ArchLinux NOT Manjaro
Debian NOT Ubuntu etc
And so on; the 'originals' always work better in the long run.
Except that normal people can actually install Manjaro and it doesn't make you use AUR for pretty basic things. Except that Ubuntu actually doesn't shit itself almost every time you add a 3rd party package/repo and comes with reasonable defaults. Also, have you used Pop OS?
Your statement is quite simply untrue.
I've been using Debian testing for more than 15 years and it has never shit itself over a 3rd-party repo.
Debian comes with whatever the original software developers set as the default, sans the wallpaper. For sensible defaults you can bug the software developer in question, not Debian.
Adding unstable packages to a stable system with repo pinning and plain apt is not best practice, though.
If I'm going to add stuff from unstable, I use aptitude. Its dependency resolution and solution suggestions are better and more manageable than ordinary apt's. It lets you see what's going to happen before actually pulling the trigger.
The only package I get from unstable is Firefox. I add the repo, update the package, and remove the repo afterwards, because unstable and experimental are highly chaotic realms, not suitable for continuous use due to their high rate of uncontrolled change. They also carry no guarantee against breakage, while testing is implicitly, and stable explicitly, expected to work.
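The pinning mentioned above can also be made permanent with an apt preferences fragment, which avoids adding and removing the repo each time. A sketch, using the conventional priority values (the real file belongs in /etc/apt/preferences.d/):

```shell
# Sketch of a pinning policy: keep the system on stable, allow
# explicit one-off pulls from unstable (priorities are the
# conventional values; written to /tmp here for illustration).
cat > /tmp/99-pin-unstable <<'EOF'
Package: *
Pin: release a=stable
Pin-Priority: 700

Package: *
Pin: release a=unstable
Pin-Priority: 100
EOF
# With this in place, a one-off pull looks like:
#   apt-get -t unstable install firefox
grep -c '^Pin-Priority' /tmp/99-pin-unstable   # prints 2
```

With priority 100, unstable packages are never pulled in automatically, but can be requested explicitly with `-t unstable`.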
With this recipe, I only had to re-install a Debian system once: to migrate it from 32 to 64 bits, since a very big disk cache with very small files was triggering a bug causing disks to thrash and the system to slow down to a crawl. There was no procedure to migrate a 32-bit system to a 64-bit one in a reliable manner, so I just reinstalled it.
I was impressed because usually when I install Arch Linux I forget to boot the flash drive with UEFI enabled so I get stuck because I get errors during bootloader installation. I also love to do pointless things like install Arch Linux (not the iso) on a 32GB flash drive.
And when 'normal' people try to fix a problem with a rolling distro then they have a problem.
>Except that Ubuntu actually doesn't shit itself
As if that's a problem in Debian, or are you talking about snaps?
>Also, have you used Pop OS
No thanks, using OpenSuse Tumbleweed, FreeBSD, Debian and CentOS is perfectly fine for me.
For me, the stability of it working (well) as it has for the last few decades is worth missing a few of the latest updates.
The limitation stems from a design problem, details at https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1643706.
At this point snap sounds like a bad joke to me. Especially when Flatpak already exists.
Besides, there are quality concerns with browser forks maintained by an understaffed project. The fact that ungoogled-chromium relies on internet randos to provide its binary releases doesn't inspire confidence either. If someone desperately needs to use Ubuntu, they'd be better off using Firefox.
(Disclaimer: I'm talking in a personal capacity but the company I work for in my day job now owns Red Hat - I don't work on Linux Operating Systems).
This is interesting, because the last few days I was actually working on packaging an application of mine as a snap/flatpak.
From my PoV, they both have their fair share of issues.
Snaps enforce a sandbox, which I think is actually a good idea, because the desktop security model is somewhat broken. If your application cannot run as a sandboxed app, you need to be granted special permissions by canonical after manual review (my app needs this), where they also discuss if they can make a new permission in a safe way for your usecase that everyone can use afterwards.
On one hand this sucks because I need to ask canonical for permission to publish something, and there's no certainty that I will get these permissions as a nobody for a new app nobody ever heard of before. On the other hand, I think I like that they're doing something about the desktop security model.
The next problem is, if this is denied, how do I ship updates? Provide a self updater? Easy to write, but if everyone does that, we can just go full windows and abandon package managers. Tell people to just curl | bash? That's not more secure than a potentially shady snap.
But I do have to praise canonical for being very helpful in IRC and the forums for helping me debug issues and file bugs against snap stuff.
Now flatpak on the other hand, just feels kinda weird to me.
It sandboxes things, but every application can pretty much grant itself access to everything. This is a completely different philosophy, but if you rely on everyone tightly sandboxing their applications without granting themselves permissions for sandbox escape, I think something like landlock (when it lands) or pledge is much more suited for this, and baked into the application.
Then there is this weird thing where flatpaks force a runtime on you. My application is a statically linked go binary. But flatpaks pretty much want to force me to add an entire freedesktop suite as a dependency, as you simply cannot choose no runtime.
(Community) support for building flatpaks? Pretty much non-existent.
So yeah, the entire linux upstream-packaging situation is still quite depressing honestly. And with the time and energy I have invested into this by now, I could have written a simple but sufficient self-updater about 10 times over.
For me the only acceptable solution besides proper .debs are AppImages. AppImage doesn't try to "replace" the package management for desktop apps like the former two candidates. It tries to complement package systems for some special cases (like for example commercial software, or for the cases where the user "just wants to try something out" without "polluting" the whole system with a lot of dependencies).
For my desktop needs AppImage is like "Docker, the good parts". A simple self contained format that runs everywhere without any further dependencies. Compared to that Snap and FlatPak are bloated annoyances.
Mint also has Debian based distro.
So out of curiosity:
My application is already self-contained and statically linked, so no AppImage is needed, but it behaves like one; you can just download and run it anywhere. And so what you're describing will for sure be an option for those who like it (in fact, currently it's the only option in the alpha).
How would you like to get updates for something like this? Visit the website yourself occasionally to check for updates? Have the application notify you a new version is available? Have an integrated updater so the binary can update itself?
Stuff like AppImage, static binaries, Docker & Co. are for me at least a kind of "last resort". Even though I use Docker a lot to try things out, I first look for an AppImage in those cases. But when I decide that some app should become part of my system, I will look for a proper package. One source to rule them all…
Docker is a big problem on its own. But as I can't avoid having it installed because of work, I decided I could at least use it to "keep experiments under control". Before I was forced to have Docker, I used systemd-nspawn for that use case.
AppImages are great, but there's no sandboxing or updates. But hey, we used to download debs and install them by hand on Debian 1.3, before apt was a thing. Maybe AppImages could be signed and distributed in a similar fashion.
In addition, Debian definitely does package Go programs -- they've packaged some of mine and they have possibly the most ambitious way of solving the great vendor/ issue (which most other distributions have ignored).
The likely issue with LXD is that they require specific versions of various system libraries that they package in their source code bundles (including a fork of sqlite3!). I packaged LXD for openSUSE. It wasn't really fun, but it is fairly doable if you have a flexible enough view on "good packaging practices" (and I imagine Debian packagers didn't feel like going through all the necessary workarounds -- which includes patching the ELF binary in the openSUSE case). And note that LXD isn't even a static binary -- I tried to compile it statically to get around a whole range of issues and it was a nightmare.
Actually there are sandboxing and updates, if you're willing to take a few extra steps.
I prefer AppImages over Flatpak and Snap for essentially the same reasons commented above.
I typically launch them with Firejail for the sandboxing. i.e. "firejail --appimage /path/to/appimage" instead of just the appimage alone. Seems to work just fine. Firejail has additional options that can be done.
Some AppImage creators do provide an update method that can be installed using zsync2, but I've not tried it since the whole point is disposability.
What do you mean? There's debian go packaging team: https://go-team.pages.debian.net/ as well as go packages in stable (for example https://packages.debian.org/buster/influxdb)
I know nothing about AppImages, but up above someone said they do have updates: https://news.ycombinator.com/item?id=23773878
* Linking with system libraries (what GP mentioned).
* Linking with Go packages (what you're talking about).
I've often come across orphaned packages where Debian was stuck with an old version, but didn't know what to do about it other than installing from source. Or using a PPA if I was lucky.
I much prefer Debian packages over RPM, and Apt over Yum/DNF, but unfortunately setting up a Debian package repository is more difficult than for RPM. The tools exist, and are more sophisticated and capable--for example, Aptly--but there's a higher learning curve. Also, good luck using them outside Linux. Years ago I published Debian package repositories from OpenBSD, but I had to manually hack the relevant tools to build and work on OpenBSD. For RPM/Yum I could have more easily written (and at a different job later did) an indexing tool from scratch.
The problem with Debian packages and Apt repositories isn't that they're not capable, it's that they have high learning curves due to the slow accretion of features that obscure their potential. A better tooling story would help, particularly for repositories. Better tools for initializing Debian package builds exist; the problem is more a surfeit of choices.
It's more complicated and creates unnecessary headaches compared to dropping a file into trusted.gpg.d, but the practice seems to reflect the opacity of the Debian packaging ecosystem. There's a better, simpler way to do it, but everybody cargo-cults the same old solutions and then complains that it's too complex or inelegant.
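For reference, the simpler route alluded to here looks roughly like this; the URL and filename are placeholders, not a real repo:

```shell
# Placeholder URL/filename: fetch a repo's signing key, de-armor it,
# and drop it into trusted.gpg.d instead of piping through apt-key.
curl -fsSL https://example.com/repo/key.asc \
  | gpg --dearmor \
  | sudo tee /etc/apt/trusted.gpg.d/example-repo.gpg >/dev/null
```

One file per repo, easy to audit and easy to remove, which is the whole appeal over the monolithic apt-key keyring.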
However, you could also include a facility to manually check for updates and either self-update or just show the changelog. Why manually? "I know you're about to do this thing real quick that needs to get done, but a new version is available! Would you like to twiddle your thumbs for 5 minutes while watching a progress bar?".
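The core of such a manual check is just a version comparison. A minimal sketch, using `sort -V` for dpkg-style version ordering; how "latest" is actually fetched (URL, signature checks) is left to the application:

```shell
# Minimal manual update check: sort -V gives dpkg-style version
# ordering, so the newest of {current, latest} falls out of tail -1.
current="1.2.3"
latest="1.3.0"
newest=$(printf '%s\n%s\n' "$current" "$latest" | sort -V | tail -1)
if [ "$newest" != "$current" ]; then
    echo "Update available: $latest (you have $current)"
fi
```

Notify only; actually downloading and installing stays a deliberate user action, per the complaint above.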
That's a very Windows-centric point of view. Programs on Linux can be updated without closing them. You get the new version when you restart them.
Now if the process tries to access some data files whose contents changed during the upgrade, it might just crash and burn, possibly corrupting data in the process.
sudo apt update && sudo apt full-upgrade -y && sudo apt autoremove -y && echo 'All done!, rebooting' && sudo reboot now
For everything else.
I'd want it in an apt repo, really.
I've seen all three approaches with AppImage'd software.
There are also package managers specifically for AppImage'd software, though it doesn't look like any of 'em are particularly mature.
1) Application won't need root privileges
2) Without compromising on the systems security
I think fixing what we have in standard kernel/userland, and leaving the containers for specialized developer/devops deployed workloads may well be where this (what will be known as the snap/flatpack saga) started from, and will end as well.
However, a quick note: As someone who unofficially maintains a Linux port of my company's software, I have considered packaging it as an AppImage, but there's one problem with AppImages that kills the concept.
AppImages are read-only. I'd love to package my company's product that way, but we already have update-delivery infrastructure that works on Windows and Mac (and Linux), and it assumes it can write to the "install folder". Changing the entire update infrastructure specifically for an OS we don't officially support is a non-starter.
From a developer perspective, I would love the ability to update an AppImage's contents in place. However, as a user I'd also like the ability to set it read-only to block updates if I desire. Flatpak's mandatory updates are one of the key reasons I dislike it. Nevertheless, if the goal is to smooth the path for proprietary software to support Linux without making half a dozen different packaging solutions, in-place updates need to be supported.
 edit: according to comments below, they now have an update mechanism, but it's still a totally appimage-specific process, so my problem remains :/
As for sandboxing with Flatpak - AFAIK this is all intentional for the time being. The authors have clearly stated that their initial effort targets app dependency isolation and app portability, not security just yet. That is much harder to do without severely limiting useful applications in what they can do.
I have encountered incomplete sandboxing systems that strived for security in past and it has not been a good experience.
For example, I had the one and only official Ubuntu Touch tablet, which used a predecessor of the Snap technology. One day I wanted to show my friends a couple of photos from a micro SD card.
I put it into the tablet, but no photo-viewing app could show photos from the card.
Why? Because back then you could only open files from outside an app's sandbox one by one, using a special system-controlled dialog. Not really usable for hundreds of photos, and forget about a text editor or an IDE.
This is where I like Flatpak - it gives you all the goodies of app separation and portability without all the hassle of a strict but unfinished secure sandbox. You only need to make sure you are getting the software from trusted sources.
And I think that's something one should be doing anyway, even with a strict sandbox - if it can't support basic app use cases, who knows what security holes the authors have left in it...
...why not make a new release and ship that? That's the way package managers work. Developer makes a package. User installs the package. Then when developer fixes some bug and releases a new version the user can install the new version when the user decides it's necessary.
The whole point about this is that it's user-centric. That's good.
This has burnt a few projects pretty badly (especially where distros have packaged an old version and never updated it).
It is bashed by those who value reproducibility and discoverability:
* every person receives same script, confirmed by signature
* multiple mirrors, no single point of failure, no pull out by author (left-pad)
* no silent update (browser addons)
* tested to work on your system
* clean remove of installed files
* search in packages, not on the web (anti-phishing)
* hosted in secure environment
* watched by many eyes
Which is what Apt has done for 20+ years - packages are gpg-signed, developers sign with their individual keys (and have to have their identity verified by at least one other developer) and even someone who controls a root CA (which is most national governments and many large corporations, let's be fair) would have to mount a dedicated attack to subvert that process.
AFAICT, Debian packages themselves are not signed; only the repository release files are gpg-signed.
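That's enough, though: the signed Release file lists hashes of the package indices, which in turn list hashes of each .deb, so every package is covered transitively. A toy model of that hash chain, with sha256sum standing in for the real Release/Packages tooling:

```shell
# Toy model of the trust chain: only the index file is GPG-signed,
# and every package is covered transitively via its hash in that
# index (sha256sum stands in for the real Release/Packages tooling).
cd "$(mktemp -d)"
echo "package contents" > pkg.deb
sha256sum pkg.deb > Packages     # the signed index records the hash
sha256sum -c Packages            # prints "pkg.deb: OK"
echo "tampered" >> pkg.deb
sha256sum -c Packages || echo "tampering detected"
```

Tampering with the package invalidates the index check, and tampering with the index invalidates its GPG signature.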
That's a weird criticism to have of Flatpak. The FreeDesktop dependency is the base set of libraries all flatpaks have access to. You don't have to use it. In fact Freedesktop is a great idea, to provide a base framework applications can use that goes further than just libc. If we want Linux on the desktop, we need a common desktop framework interface.
It would be like being unhappy a static go binary on Windows COULD access all of the win32 libraries.
Also, Flatpak currently is best suited to GUI apps. If your go binary is a GUI app using either GTK or KDE, these will need the freedesktop facilities (such as D-Bus) to actually run properly.
Just wanted to ease one worry of yours: the age, popularity, and maturity of your application have no influence on whether or not these permission requests are granted. The approval process is formally defined and mostly depends on what exactly your application does.
You can find the specific processes in the docs at the bottom of this page: https://snapcraft.io/docs/permission-requests
In addition: users always have the option to override permissions. The approval process is for automatic (default) permission grants. Even if these are not granted, users can grant them manually.
That's the longest spelling I've ever seen for a three-letter company.
Criticism specifically regarding Flatpak: https://flatkill.org/
Issues mentioned on flatkill are already fixed, will be fixed, or don't depend on Flatpak itself (like the UI/icon in the software app store).
I don't like Flatpak either but I think we should elevate the debate to deeper architectural issues of flatpak that won't be fixed easily. Personally, I do not like the following in Flatpak :
- no effort on full reproducibility like Nix&Guix
- a big fat flat runtime rather than traditional fine-grained dependencies (OStree avoids duplication, but it's still not very elegant)
- you can't install extra packages in the sandbox. So the quite overkill solution in RedHat's vision is to separate Toolbox/Podman for devs from Flatpak for users, rather than trying to make a single unified sandbox for everything. Of course everything breaks down when you try to code using a Flatpaked IDE; if you follow RedHat's vision you basically need to spawn a toolbox container from an unsandboxed flatpak instance of your IDE.
So personally, I'm still waiting for a packaging system that is :
- compatible with the idea of a declarative/immutable os (like nix, guix, silverblue)
- tries to make everything reproducible (like guix)
- sandboxed with runtime permission API (like Flatpak portals, IOS, Android)
- sandbox can be augmented with packages so that you can code in your sandboxed IDE + add necessary dev packages inside a same sandbox without having to break it
However, ignoring the aspect of software distribution, wouldn't you agree that the approach taken by the Linux desktop today is deficient security-wise? For example, I would like to be able to give mbsync (or Thunderbird or whatever) my IMAP password without giving it to any other program. So I don't want to store it in mbsync's config file in plain text. Neither will I use gnome-keyring (or any other keyring) because it doesn't have any kind of "program authorisation". Any program can just spawn a new "secret-tool" process and get my credentials from gnome-keyring.
I've been thinking for a while about implementing a keyring which runs as a daemon with SUID of a dedicated user and checks which program sends requests to it, using /proc/pid/exe, but I'm not sure if it's a secure source of truth: how e.g. namespaces affect what's visible in /proc/pid/exe. I know you've been developing himitsu. Have you thought about this problem in that context?
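For a quick feel of what /proc/pid/exe actually provides, here is a minimal demonstration against the current shell's own PID, with the namespace caveats noted in comments:

```shell
# What /proc/<pid>/exe provides: the resolved path of the binary a
# PID is running. Two caveats for the trust question above: in
# another mount namespace the target path may not be meaningful in
# yours, and a binary replaced on disk shows a " (deleted)" suffix.
pid=$$
exe=$(readlink "/proc/$pid/exe")
echo "PID $pid runs: $exe"
```

Note this identifies the binary path, not its integrity: a compromised but identically-named binary would pass such a check, which is part of why it's a shaky source of truth.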
Short version: Canonical does classic vendor lock-in with its Snap Store. I'm pretty sure 99.9% of HN users know what that means :)
It's a commonly requested feature and being able to move it would follow the Freedesktop.org spec, but the developers don't care.
$ ln -s /somewhere-else ~/snap
If anything has to be installed in the home folder for some reason, it is supposed to go into .local, so the user doesn't see it among their documents and photos.
It's a completely insane stance for server systems. (I believe it's still possible to run server systems without snaps with various "workarounds" etc, but we can feel which way the wind is blowing...)
> Snaps are completely redundant there.
Yes, they certainly are but it seems to be harder and harder to get rid of snapd. It's probably not going to be an actual issue for Focal, but it's about this trend... Maybe Canonical will see sense -- who knows?
Meanwhile, I do have to at least have a broad plan for the next 2-5 years, so...
Debian stable is an obvious choice for projects that don't rely on too much "new stuff" because we already have a lot of stuff that uses APT, deb repositories, etc. Otherwise, probably CentOS/RHEL for those situations that are just ultra-averse to incremental change and prefer a HUGE change once every 5-10 years.
I think we might become a bit more adventurous and move to e.g. NixOS for "newer" projects. That's probably going to have to be trialed for a few projects before we go all in on it, but it seems really nice for servers (and dev machines for that matter), but it'll be interesting to see if you have to truly go all in to reap the benefits. (The worry here would be the amount of upstream 'support' in terms of manpower to bring in security updates, etc.).
(I'm also vaguely aware of SuSE, but I only spent a very brief period of time with it about 10-15 years ago and don't really remember any distinctive features either way. Which is kinda weird, because it seems to be known as the 'popular in Europe' distro?)
In particular, it's easy to inspect the sources for apt packages using "apt-get source". Snap seems to have no equivalent command.
It doesn't have to. Packaging with FPM allows many targets (deb, rpm, etc.), and ELF2deb (shameless plug) allows packaging any files into a .deb with no effort.
Edit: Are snaps images? ...like containers?
Edit 2: answer:
> mounted dynamically by the host operating system, together with declarative metadata that is interpreted by the snap system to set up an appropriately shaped secure sandbox or container for that application
Also, sandboxing and running apps under reduced permissions makes a lot of sense, but it should be seen as a tool to increase security, not an outright answer; poorly used, any tool can be less effective or even harmful.
Historically, locking things down ever more effectively often impedes things the user actually wants to do, which leads to apps asking for and being granted more permissions, with the net effect of training users to click yes to enable whatever the app wants.
It ends up being not just a technical challenge but a psychological one as well, and it's less easily resolved once you start having to deal with real users.
I also think that so far Linux distributions have done very poorly when it comes to cross-distribution/forward/backward compatibility of packages. This is a non-issue for popular open-source packages, since they can easily be packaged by the distributors. But less popular packages sometimes need to be installed in a hacky way when they are not available for the target distribution. Even worse when dealing with commercial software: it can happen that it degrades with each distribution update.
Also, I think the package format will probably improve over time and add additional features that would allow auditing, for instance. And with proper de-duplication (perhaps even at the filesystem level) it should be possible to deal with the waste of space as well.
Containers try to mitigate this problem but like a fever they often end up making things worse.
If Canonical provided the snap creation and hosting tools to the community I imagine it would be judged on its technical merits.
As it is, I see more and more reports that Canonical is trying to gain more and more control and that's exactly what I don't like to see.
Would have tried Ubuntu but they've poisoned that well. I'll have to recalculate.
glibc does have a stable ABI. What it doesn't have is a frozen ABI, so it will add new entry points. But old programs linked against glibc do and will continue to work fine.
> You can't run a program written today on the system repo libraries from 5 years ago.
You can absolutely write programs today targeting 5 year old ABIs. I agree that the FOSS toolchains could be more helpful there so you can just pass some compiler flag for the oldest glibc version you want to support but it is not impossible to achieve that on your own.
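One way to check what a build actually targets is to scan the binary for the versioned glibc symbols it imports; /bin/ls is just a convenient example binary here:

```shell
# Scan a binary for the versioned glibc symbols it imports; the
# highest version bounds the oldest glibc it can run on.
objdump -T /bin/ls \
  | grep -o 'GLIBC_[0-9.]*' \
  | sort -Vu \
  | tail -1
```

If that maximum version is newer than the glibc on a target distro, the binary won't load there; building against an older toolchain (or in an older container) keeps it low.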
The best-known manager that does allow this is Nix. Do you know of any more, or maybe ways to get this working with "traditional" package managers? (Completely transitioning to Nix would be quite hard, from my limited experience.)
Reading about the cons of Snap now - "auto-updates cannot be turned off" - https://en.wikipedia.org/wiki/Snap_(package_manager)
Even for client software it is bad - an app quitting out from under me because it wants to update is functionally equivalent to a crashing bug - but they're offering daemons this way, which is just insane.
Who doesn't want all their containers randomly restarting because someone up the distro chain decided when your machine needed to upgrade?
The cluster will not come up because your down node is now not the same version as the other cluster nodes.
This doesn't happen, at least on account of Snap. It's copy-on-write when it comes to updates, and whatever version you have running at the time of update will happily keep on running. I've been mildly confused by this a couple times, when I would end up with two different versions of VS Code running side by side, but the bright side was that my work wasn't interrupted.
It incidentally also sidesteps all the problems that stem from libraries or configuration being updated under a running application, which can happen with Apt.
Not my favorite way to install software, since they're usually huge (500 MB is not far out of the norm), but great for things that are hard to install, as they're pretty much guaranteed to work.
I just wish there was a command line switch to stop it from wanting to install itself on my system (move the AppImage to /opt).
Saying "just use an alternative" seems overly dismissive of the complications that entails.
IIUC, the only options for that are (a) abandon Ubuntu, or (b) actively circumvent Ubuntu's software distribution infrastructure, reminiscent of dealing with Windows 10 forced updates.
IMHO this somewhat erodes Ubuntu's value proposition.
Now I know it's not going to be Ubuntu or anything else derived from Canonical.
Why avoid Ubuntu derivates like Mint or Pop!_OS, though?
They're doing the heavy lifting of fighting Canonical on this issue. Maybe using one of those distros instead of vanilla Ubuntu actually strengthens the pressure that Canonical feels on this topic.
If I know I can avoid Snap crap by changing distros for now, that's what I have a timeslice for.
I'm wondering now if Wikipedia would be appropriate to host a linux sandboxed app packaging comparison chart in the form of https://linuxhint.com/snap_vs_flatpak_vs_appimage/ - and it would be extensible ...since Wikipedia.
> snapstore was a minimalist example of a "store" for snaps, but is not compatible with the current snapd implementation. As a result I have removed the contents here to avoid further confusion.
I mean, not really? Or rather, that's not the only reason, or the main reason for many users.
Many people just want to use a FOSS OS, for the reason that any buggy component can be forked, fixed, and PRed, which—if you're an IT organization yourself—often means far less turnaround time than waiting for the appropriate vendor to fix the problem for you.
Honestly, I'd be fine using Windows Server or some other "cathedral" OS, if I could fork/fix/PR its components. I want a stable operational substrate for my app that's quick to fix in an emergency; I don't care whether it's made out of tiny shell-scripts or huge C libraries, as long as it gives me that property.
In that perspective, snap seems fine (you can still fork/fix/PR a snap) just like Docker images are fine, or systemd is fine.
Maybe, in the end, I'm more of a BSD person than a Linux person. I mostly favor Linux installs for the hardware compatibility and performance, not because it really fits my philosophy.
The Microsoft way: over-automated operating system DEs which attempt to make the OS appear user-friendly yet create one headache after another. They suffer from massive interconnected webs of program state and logic that fail, leaving you with the only sane option of rebooting. The more automation the vendor inserts, the less transparency and control you have.
tbh I just run Linux because I want a good dev machine, and I personally don't care if Canonical abstracts away updating software. As far as I'm concerned, they can keep everything up to date and do their thing; for me that's a plus for snaps. I generally find them pleasant to use.
In my experience people on HN in particular tend to vastly overestimate how much people value control vs features/abstracting routine tasks away.
As long as the Linux Mint developers aren't doing that either, how does it help you? You don't trust Canonical's snap binaries, but you trust the deb binaries. You could be honest and say that you don't like snap but still trust the deb binaries.
The point is that you can audit without having to depend on a third party. Nobody's claiming audits are free or that they're assumed. The point is that you have the option to choose to trust as much or as little of the build chain, from the compiler to the target code to the artifacts.
- Mint uses Ubuntu repositories
- Canonical pushes the changes they want into those repos; the changes are probably made by scripts that build source code on Canonical's servers.
- the Ubuntu repos also contain binary blobs
- when a Mint user updates, they get the binary directly from Canonical's servers; no Mint dev or Mint script does any check to see if, for example, the evil Canonical modified the NVIDIA driver and added even more evil to it than is already there
Now explain to me: if all the above is true, why would someone who does not trust Canonical use Mint? There are no safety checks to prevent the evil Canonical people from doing evil things.
My conclusion is: if you don't trust Canonical, don't use Mint. Maybe Mint is working on addressing this and soon we will see a PR campaign announcing they are finally able to self-host, but until then I would stop the hypocrisy about not trusting Canonical. (Btw, there are many smaller distros that host their own repos; not sure why Mint is not doing it.)
* Nonfree software is generally demarcated as such. There is nonfree software with available source (in the case of some codecs); other nonfree software comes as blobs.
* The Chromium package is open source, and in most distros comes as binary built from the toolchain set up by the package maintainers. In all free software distros if you don't want to download the binary, you can download the source and build locally with the provided build scripts (in the case of most APT packages in Debian/ubuntu).
* Serving Chromium binaries with Snap removes the option of downloading, inspecting, and running the build chain locally
* Serving a different version of Chromium, or replacing the stock version with a different variety, cannot be done without creating a new Snap repository. Downstream distros like Mint need to replace some of the stock Ubuntu stuff, just like Ubuntu changes stock Debian packages
* Because there is no open source Snap repository software, Mint is unable to set up an alternative repo that could work around some of the objections they have with Ubuntu.
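As a sketch of the local-build option mentioned in the second point - this assumes a deb-src entry is enabled in sources.list and network access, so it is not runnable offline:

```shell
apt-get source chromium-browser          # fetch and unpack the source package
sudo apt-get build-dep chromium-browser  # install its declared build dependencies
cd chromium-browser-*/
dpkg-buildpackage -us -uc -b             # rebuild the binary .deb locally, unsigned
```

The resulting .deb can then be inspected or installed with dpkg like any other package.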
Again, if Canonical is evil and can't be trusted, why would I run Mint? Do the developers run any scripts to alert me if Canonical slips something bad into a binary?
I think it's fine if they remove snaps, but IMO it's stupid to accuse Canonical of being evil and untrustworthy while you blindly trust their repos.
Rational self-interest. I don't think the tech giants are good for society, but not working with them would mean slipping into irrelevance.
Say, I'm a game developer. Do I trust Microsoft? No. Do I sacrifice 90% of my profit to boycott Windows? No. There are different degrees of "evilness", and the scale does matter, too.
Honestly, tell me if this does not sound idiotic: "Microsoft is evil and we don't trust them, but please run our own copy of Windows that is the exact same thing with different colors. MS can push an update and delete all your files, because this is not a supported configuration and we have no scripts to check for it, but we are not competent enough to set up our own repos and scripts like other distros."
I understand why Mint does what it does, the only idiotic part is complaining about trust in Canonical.
This is the same in apt as it is in the Snap Store. Compare https://snapcraft.io/chromium to https://snapcraft.io/spotify for example: the license field is clearly presented there.
> * The Chromium package is open source, and in most distros comes as binary built from the toolchain set up by the package maintainers. In all free software distros if you don't want to download the binary, you can download the source and build locally with the provided build scripts (in the case of most APT packages in Debian/ubuntu).
Also true for the Chromium snap (see next item for details).
> * Serving Chromium binaries with Snap removes the option of downloading, inspecting, and running the build chain locally
This is outright false. The Snap Store page for Chromium is available here: https://snapcraft.io/chromium. It links to the source, which is the git repository here: https://code.launchpad.net/~chromium-team/chromium-browser/+.... You can use this source together with snapcraft (which is Free Software, licensed under GPL-3.0) to download, inspect, and run the build chain locally, including with any modifications you want to make. You'll get a snap package as output, which you are free to distribute, and other users can install it using the snap CLI.
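As a sketch of that build chain (the repository URL is elided above, so `<repo-url>` stands in as a placeholder and this is not runnable as-is):

```shell
git clone <repo-url> chromium-snap && cd chromium-snap
snapcraft                                         # builds the package from snapcraft.yaml
sudo snap install --dangerous ./chromium_*.snap   # install the unsigned local build
```

`--dangerous` tells snapd to skip signature verification, which is exactly what you want for a locally built, unpublished snap.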
> * Serving a different version of Chromium, or replacing the stock version with a different variety, cannot be done without creating a new Snap repository.
Partly false. You can ship a different version of Chromium in the Snap Store under a different name. This is an automated process, rather like creating a PPA. As long as you aren't misleading anyone and you follow the terms of the relevant licenses, nobody will stop you.
> * Because there is no open source Snap repository software, Mint is unable to set up an alternative repo that could work around some of the objections they have with Ubuntu.
Partly correct. One generally cited reason for this is that the same criticism was leveled at Launchpad, which was open-sourced in response - but nobody is running an alternative production instance of Launchpad anywhere, so Canonical doesn't want to waste that sort of effort again. Another is that store fragmentation is bad. I'm just stating the other side's position here. Please don't shoot the messenger.
 Chromium's licenses are listed as: Apache-2.0 AND BSD-3-Clause AND LGPL-2.0 AND LGPL-2.1 AND MIT AND MS-PL AND (GPL-2.0+ OR LGPL-2.1+ OR MPL-1.1)
When I update Mint binaries, I get them from one of about 30 mirrors which Mint enables me to choose. (Or it will decide for me based on speed.) Do the mirrors at Clarkson, Harvard, Purdue, UW, etc belong to Canonical? I think not. Nor does most of the code the binaries are built from.
Canonical has made its choice, Clem's made his.
I would convert the next sentence into formal logic and prove to you that it makes no sense, but I'm not sure you'd understand the symbols, so let me try again in English.
1. Mint does not trust Canonical.
2. Mint plugs its users' systems directly into Canonical's untrustworthy repos to run possibly "evil" binaries and scripts as root.
If 1 and 2 are true, then as a user you should not use Mint, and as a Mint developer you should get to work and finally create an independent distribution.
IMO 1 is false; their mentioning "trust" was probably a mistake.
I think it should be noted that this applies to third-party apt repositories in general, the use of which is the problem that snaps fix.
Some snaps are built from Free Software and reproducible sources, as are some third party apt repositories.
In other words, if this criticism bothers you, then you should never install from any third party apt repositories ever. If some are acceptable to you, then so should some snaps.
If you don't want to use third party software ever, then you can still use Ubuntu without snaps.
> Third party apt repositories often break users' systems.
This does not follow at all. Third-party apt repositories work just like Ubuntu's apt repositories; you have just as much ability to audit, hold, pin, etc. in both cases.
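For example, pinning is plain apt configuration and works identically whatever the repo's origin; a minimal sketch of a preferences entry (the filename and origin hostname are hypothetical) that keeps a third-party repo below the default priority of 500:

```
# /etc/apt/preferences.d/example-thirdparty
Package: *
Pin: origin "ppa.example.org"
Pin-Priority: 400
```

With a priority below 500, packages from that origin never override the distro's versions unless you ask for them explicitly. Holding a single package is likewise origin-independent: `sudo apt-mark hold <package>`.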
If there is a difference in reliability (software from third-party repos is more likely to break your system--and, btw, software from Canonical's repos has also broken systems in the past, so "avoid third party repos" is not a guaranteed way to avoid software breaking your system), using snaps to install third-party software instead of third-party repos does not fix that problem: the third-party provider is still just as unreliable as before and their software is just as likely to do something stupid.
Again, it all depends on how much you trust the third party compared to how much you trust your distro provider. I don't have many third party PPAs installed on my computer because there aren't many third parties I trust that much. But there aren't zero either.
Also, a big part of my trust of my distro provider is based on having source code forced to be open, and another big part is based on them not doing things behind my back. Snaps significantly erode both of these aspects of trust.
Perhaps we should consider why this is: because people want up-to-date software on their computers (desktop or server), instead of being beholden to whatever version distribution maintainers have decided you can have.
I understand the _reasoning_ behind the distribution model - I just don't think it works very well, and apparently nor do all the people who use PPAs in the course of everyday use to get up-to-date software.
It's also worth noting that FreeBSD does not have this problem - ports are updated _much_ more often than most Linux distributions seem to be.
Originally you said audit, hold, modify.
You can audit snaps just as you can audit third party apt repositories. Either the publisher ships the source such that you can rebuild, or they don't.
You can hold snaps using this method: https://forum.snapcraft.io/t/disabling-automatic-refresh-for...
You can modify snaps just as you can modify what you get from a third party apt repository. Either the publisher ships the source such that you can modify and rebuild, or they don't.
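As a concrete sketch of the hold method - assuming snapd's documented `refresh.hold` system option; the script only prints the command instead of running it, and the 30-day window reflects the cap older snapd versions imposed:

```shell
# Compute a timestamp 30 days out (GNU date; fall back to "now" where -d
# is unsupported) and print the command that would defer snap refreshes.
hold_until=$(date -u -d '+30 days' +%Y-%m-%dT%H:%M:%SZ 2>/dev/null \
  || date -u +%Y-%m-%dT%H:%M:%SZ)
echo "sudo snap set system refresh.hold=${hold_until}"
```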
I didn't; the post you were originally responding to (the GP of my original post, which is the GP of this one) did. I am not the person who made that post.
You appear to be saying that you can audit, hold, modify, pin software in third-party repositories. That means you agree with what I was saying in the GP to this post, that you have just as much ability to do all these things with third-party repo software as you do with software distributed using snaps.
To top it off, a couple months ago my calculator disappeared. For some reason I have been having problems with snap applications disappearing for a while now, even though I have made no configuration changes. How fun it is to discover that such a basic tool just no longer exists on your machine, way to fuck up a morning.
I get that snaps make application distribution easier, but please don't do it at the expense of the user. I've had more success with Flatpak and AppImages, but not enough experience with any of them to judge which is best.
The calculator should -always- load in a couple ms. It's one of the simplest tools on the shelf! If a calculator snap can't load immediately then I have no interest in a single other snap app. Waiting more than a second is a nightmare, and I've definitely waited 5+ seconds at times. That's about how long it takes my system to boot. As far as I'm concerned it's a complete failure and I couldn't possibly trust it on any desktop or server that I manage.
This. snap seems to spew traces of itself all over the place on an Ubuntu install, and it's less than clear why any of them are there.
The same thing with phones. Smart phones are good at a lot of things, but they are mediocre as a telephone.
No, just roll on with it.
There was a period of time, around 2010-2015 when I really felt that computers were fast. SSDs were getting more affordable and that was a huge improvement, every action was immediately responsive. In 2020, that has somehow been undone. It takes 5 seconds to launch a calculator. Software guys really like to undo all the advances that the hardware guys are doing.
I guess people in MS's test and quality departments use an i7 with an SSD drive. If so, it's bad. If not, it's simply hard to understand, if we leave corporate politics aside.
To me, the calculator has always been more or less OK, but the Image Viewer is a clusterfuck.
With the old viewer, opening by clicking a file in Windows Explorer was almost instantaneous. The new one is slow as a crawl: the first time may take 5 or 6 seconds (on a 7-year-old laptop); the next time it's down to 2 or 3. Compare that to milliseconds with the native app. I'd never felt the need to use a specific image viewer, but now I'm happy with IrfanView - as fast as the old app, and full of features.
How do such things pass triage stages?
I've now switched to xcalc. gnome-calculator is better, honestly, but it's not worth it, even if xcalc leaves me feeling like I'm back in the '90s.
I just did it and it's quick to boot up again as it should :)
> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.
It is one thing when you make a controversial statement and then decide to attack downvoters, it is a different thing if you are trying to understand how and why people disagreed with you.
1. The micro terminal editor.
2. Chromium, because it was forced.
Well, #1 was packaged for 20.04 so I didn't need it any longer. That left Chromium. For that single package, I had to tolerate my system being spammed all over:
- Multiple irrelevant loopbacks cluttering my mounts list
- Dedicated folders in filesystem: /snap ~/snap
- Very slow startup for Chromium
- Lots of disk space taken
- An always-running daemon! (Wasn't it root, too? I can't remember.) apt doesn't need a daemon.
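The mount clutter in the first point is easy to quantify; a small sketch (prints 0 on a system without snaps):

```shell
# Each installed snap revision is a squashfs image loop-mounted under
# /snap, so counting squashfs mounts shows how crowded the list gets.
count=$(mount -t squashfs 2>/dev/null | grep -c '/snap/' || true)
echo "snap squashfs mounts: ${count}"
```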
Sheesh! That's not even mentioning the store issues which others have described already.
Sorry, but a few newer packages here and there are not worth all that. I'll handle it myself, thanks. What snap does isn't actually that hard. I'd keep it around if it wasn't so obnoxious at putting itself in front and center of everything.
I've harbored this suspicion of GNOME developers for years. It honestly wouldn't surprise me if a lot of Canonical devs don't use Ubuntu at home.
What the heck is this, windows?
With tmux it seems this is finally possible as of version 3.1, or by compiling it yourself, but I remember being annoyed about this years ago.
I dislike a home directory cluttered with dotfiles just as much as that snap folder choice - because when do we ever actually not list the hidden files?
Name and shame:
* docker (~/.docker)
* Arduino IDE (~/.arduino15)
* GNURadio (~/.gnuradio)
* IPython (~/.ipython)
* FreeCAD (~/.FreeCAD)
* HPlip (~/.hplip)
* IntelliJ and others (e.g. ~/RubyMine[year])
* Cargo (~/.cargo)
* Audacity (~/.audacity-data)
* PGAdmin (~/.pgadmin)
* ELinks (~/.elinks)
* NPM (~/.npm)
* sqlmap (~/.sqlmap)
* ZAP (~/.ZAP)
* GNUPG (~/.gnupg)
* crashlytics (~/.crashlytics)
* Android Studio (~/.android)
And so so so many others just crap all over home when they could just crap in .config if it's config and in .cache if it's cache. Lazy devs.
The cache is not a config, assholes. What do you think ~/.cache is for?!
The list of exports required got me weary; plus, look at how many entries in that list are hardcoded :/ - the situation is rather bad.
If the world does not match your view, it may be you who is the outlier. I quite like the current convention - hidden files in $HOME belong to applications. There is value in $XDG_CACHE_HOME - it can be safely removed (like /var/cache).
You are forcing your view on the open source community; that is rather bad.
As I said, impractical, not a solution.
> If world does not match your view it may be you who is outlier.
Looking at the amount of software that does follow the base directory specification, actually you're the outlier insisting on obsolete conventions.
You and a bunch of other developers insist on those things, in reality that is the actually harmful behaviour for open-source.
Interestingly, yet unsurprisingly, that insistence very often goes hand in hand with the stubbornness to stay on obsolete mailing lists, ugly user interfaces, insecurity by default, git-email, buggy issue trackers, 80-column commit messages, obsolete security standards and practices, and so much more.
> I quite like current convention - hidden files in $HOME belong to applications.
The future is now, home folders aren't to be filled with trash. Move on or stay behind, seriously.
You shame software even though half of it has a workaround, yet you see patching as impractical.
The XDG Base Directory Specification is not about your home folder. It is about default storage: separating cache, user data, and config, plus a way to point an app at a different config, so one can:
* remove entire config when stuck with a problem
* remove cache, think /var/cache
* `ssh -F foo` would be `XDG_CONFIG_HOME=foo ssh`
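The lookup behaviour the spec defines can be sketched in a few lines of portable shell (the app name `myapp` is hypothetical):

```shell
# Per the XDG Base Directory spec: use $XDG_CONFIG_HOME / $XDG_CACHE_HOME
# when set, otherwise fall back to the defaults ~/.config and ~/.cache.
app=myapp
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/$app"
cache_dir="${XDG_CACHE_HOME:-$HOME/.cache}/$app"
echo "config: $config_dir"
echo "cache:  $cache_dir"
```

This separation is why wiping the cache directory is safe in a way that deleting random dotfiles never is.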
Everyone has a pain point, everyone has a workflow; there is no One True Way. Please stop shaming authors - they quit, and sometimes post here about the mob. Patching a folder structure is the simplest thing; if you can't do that, who is going to fix actual bugs?
If one feels that talking about a bug in their software is "shaming" them, then maybe they should quit or alternatively, just quit pretending they want feedback or to write FOSS. Same applies to teams writing software.
Not to mention how harmful it is to assume that everyone who picks up FOSS is actually good at it. Thinking of people as infallible is actively harmful for the end users.
> Patching folder structure is the simplest thing. If you can't do that who is going to fix actual bugs?
Incorrect folder structure is an actual bug. It might be simple to patch for the end user, but you're ignoring the maintenance burden, annoyance and cumbersomeness.
> Everyone has a pain point, everyone has a workflow, there is no One True Way.
There are paths more correct than others; some workflows are obsolete and stupid and shouldn't be catered to. It's wilful ignorance to ignore that.
> half of it has workaround
Bwahahaha, you may think that's fine, but I don't.
> Name and shame
> they should quit
That's why I've called it harmful - the choice between your complaints and people writing code is obvious. Fork it, patch it; there is no burden. If people care, maintainers will switch; if enough switch, the patch gets upstream. Or provide your own repository with patches - that's the FOSS way. If not by yourself, then sponsor it.
How much do you actually care? How much would you pay? Is it free as speech or free as beer?
That's quoted without the rest of the context, and no, criticism is not harmful. If it is a "sin" like you say, should we look at things you've said about FOSS projects?
> Fork it, patch it, there is no burden
Either you're delusional or you haven't done either of the things.
> if people care maintainers would switch, if switched enough patch would get in upstream or provide own repository with patches, that's FOSS way.
Yeah, and it'll take the next decade, being optimistic. GPU acceleration in Chromium and Firefox on Linux is a perfect example of how absolute shit that "way" is.
> How much do you actually care? How much would you pay? Is it free as speech or free as beer?
Feel free (as in freedom) to just type out your arguments instead of asking rhetorical questions.
The question was not rhetorical - it is the realization of freedom 1. Your answer implies there is a fork with GPU acceleration and no one cares (or it does not answer my post). Ah, "criticism is not harmful":
Name and shame:
No, I won't pay devs who ignore conventions. That's like someone taking a s* on my porch and me paying them not to. Plus, demanding payment is not really in the spirit of FOSS.
> Ah, "criticism is not harmful":
That's just naming, without listing the reason. In addition to that, I listed projects, not people. Shows that you've totally missed the point of the original list.
Out of principle, I could implement XDG Base Directory support for the ssh client. Just like my job - implementing features I'm not that interested in, providing support. I hardly believe any reasonable person expects complaining to work in that case. So how much would you pay?
Your account may be a group of people (or may not be). A project may be a group of people (or may not be). Shows that you've totally missed the point.
Oh but keep in mind in some cases I'm not given a choice. If I could not use snap for example, I would not. But it was forced upon me. So I have every right to be annoyed at someone figuratively taking a shit on my porch.
Anyway, this discussion has run its course; you have no good arguments defending that nonstandard behaviour.
And it is easy to make your own repo. There are quite a lot of unofficial repositories: https://wiki.archlinux.org/index.php/unofficial_user_reposit...
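For instance, enabling such a repository on Arch is a three-line sketch in /etc/pacman.conf (the repo name and Server URL are hypothetical placeholders):

```
[examplerepo]
SigLevel = Required DatabaseOptional
Server = https://repo.example.org/$arch
```

After a `pacman -Sy`, packages from that repo install like any official ones, with signature policy controlled by the SigLevel line.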
Oh, forgot to mention: /var/cache/snapd/
I rarely use it and only in incognito mode. Yes, it's a tradeoff but with numerous benefits.
I don't doubt for a moment that it makes business sense for Canonical, but I really wonder whether there's a market for this - the huge majority of people who don't care about this kind of thing are on Windows or Mac, or even just working happily away on their phones and tablets.
Linux' selling point for me was always that I was in control and could make the system work the way I wanted to; people more ideologically pure than me have slogans like "free as in freedom" or "binary blobs are bad".
I really don't see the market for "linux, but with commercial vendor practices". I switched from ubuntu to mint a while ago and I'm really happy about that right now.