Kudos to the OP, I'm glad this got posted on Hacker News because Snap is so ridiculously broken.
Since 16.04, Snaps have been a huge pain for me when running LXC in production environments.
By default, Snap applies updates and restarts Systemd services anytime it likes, and there's no way to turn this behavior off! The only way to get around it is to download the Snap package binary and install that directly. Then Snap won't "know" where to get updates.
(Caveat emptor: "Workarounds" like this can easily lead to a bad security scenario, since any critical security patches won't be installed by any standard system update process)
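For reference, the sideload workaround described above looks roughly like this (an untested sketch requiring a live snapd; `lxd` is just the example package):

```shell
# Fetch the .snap file once from the store, then install the local file.
# A sideloaded ("--dangerous") snap is not tracked against a store
# channel, so snapd has nothing to auto-refresh it from.
snap download lxd
sudo snap install --dangerous ./lxd_*.snap
```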
Did I mention that a fair percentage of the time the Snap updates would leave LXC in a completely broken state? In production (and development, too)!
The final nail in the coffin is that Snap is the officially recommended way to install LXC. I don't know if Stéphane and friends even publish Debian packages anymore.
I get the idea behind snap and appreciate it, but the lack of configurability and no clear definition of what stable really even means . . .
Hear hear. Presumably for dogfooding reasons, LXD is now only packaged as a snap, so we can't use apt as the source anymore. After migrating, an upstream push to a 'stable' LXD snap channel introduced a regression that borked our environments, and there was no way to:
1. prevent machines in the fleet from pulling the broken LXD update
2. rollback broken machines to the previously working LXD version on the same channel, since it no longer existed in the Snap Store™.
I've been looking forward to this work turning into something usable for quite a while now, but I've had my eye on it for 3 years already. Is it actually coming "soon"?
Debian is a bad choice if you want to package Go applications (or Rust apps, for that matter). Debian requires that all those little static dependencies be individually packaged. Common container software like lxd, podman, and umoci is not found in Debian.
The following distributions package LXD (that I know of):
* Void Linux
* Alpine Linux
* Arch Linux
* Gentoo
Some of those are more suitable for production installs than others, but if you know what you are doing and manage your deployments well, all of them could work.
Do you happen to know _why_ Debian decided to require that for Go projects? It's so absurdly complicated.
I've been looking into .deb packaging for Caddy but it really feels like they require us to jump through too many hoops to make it happen. I'd much rather just ship a prebuilt binary.
Security. Dynamic linking is always going to be better than static linking here. Do you trust upstream to keep track of security issues and rebuild across the absurd dependency tree that is Golang software? Or the long-running effort that is the Debian security team?
The reason why it's absurdly complicated is solely on the Golang team, not Debian.
I was not aware. Too late to edit now, but that is indeed a great option! Rolling release is a bit ambitious for a hypervisor without NixOS-like features, though.
Funny. I have been working on a blogpost detailing the silly things I encountered while packaging LXD properly for Arch Linux. Should probably finish it up one of these days.
Without going into too much detail: I personally got into LXD while bootstrapping kubeadm clusters with Salt. LXD profiles made it very easy to work with compared to the alternatives.
It's clearly not production stuff, but it works very well for a quick test bed before production deployments.
I guess users have been sufficiently beaten into submission that many of them will put up with software randomly quitting on them, but the idea that daemons should be snaps is doubly crazy. "Oh, Tomcat just updated itself, that's why we were down."
I really have no idea what people are thinking sometimes.
I always resisted running Ubuntu in production, even when it was much easier to install the necessary software. When you pick a distro, you're picking their packaging and maintainer culture. I've always preferred CentOS for that reason. Even the little things, like how installing a new package won't start/enable its services (because you should have a chance to configure them first), always came off as more pragmatic for system administration.
I run Debian on my workstation and laptops but still prefer to use CentOS/RHEL on my servers.
The automatic startup of newly installed services (before you even have the opportunity to configure them) on Debian (and derivatives) has always bothered me and is a (small) factor in my decision.
Fortunately, there are a few "workarounds" to prevent newly installed services from being automatically started!
First, there's the brute-force / heavy-handed approach -- override the systemd presets [0] to set all services to disabled by default:
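A minimal sketch of that preset override (the `00-disable-all` file name is my own invention, and the snippet writes to a temp dir so it's safe to run; on a real system the file belongs in /etc/systemd/system-preset/):

```shell
# Global systemd preset: default every unit to "disabled".
# Admin presets in /etc/systemd/system-preset/ override the vendor
# presets shipped in /usr/lib/systemd/system-preset/.
PRESET_DIR="$(mktemp -d)"    # real path: /etc/systemd/system-preset
printf 'disable *\n' > "$PRESET_DIR/00-disable-all.preset"
cat "$PRESET_DIR/00-disable-all.preset"    # prints: disable *
```

With that in place, newly installed units default to disabled (on systems whose packaging helpers honour systemd presets), and you can still `systemctl enable` the ones you actually want.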
Alternatively, as a one-time thing, you can "mask" the service before installing the package but this requires you to manually create the symlink (instead of using "systemctl mask") -- and requires you to know the name of the service unit beforehand:
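That manual symlink looks like this (`example.service` is a placeholder, and a temp dir stands in for the real target so the sketch is safe to run):

```shell
# Pre-mask a service that isn't installed yet by symlinking its
# (future) unit name to /dev/null. The real target directory is
# /etc/systemd/system/.
UNIT_DIR="$(mktemp -d)"    # real path: /etc/systemd/system
ln -s /dev/null "$UNIT_DIR/example.service"
readlink "$UNIT_DIR/example.service"    # prints: /dev/null
```

Once the package is installed and configured, `systemctl unmask example.service` removes the mask again.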
Finally, there's the old, "sloppy" (IMO) method of "hacking" the /usr/sbin/policy-rc.d script to immediately exit with a status code of 101 ("action forbidden") [1].
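For completeness, that hack is just a two-line script that answers "action forbidden" to every invoke-rc.d query (shown in a temp dir; the real path is /usr/sbin/policy-rc.d):

```shell
# policy-rc.d: deny all service actions during package installation
# by exiting with status 101 ("action forbidden").
TMP_DIR="$(mktemp -d)"    # real path: /usr/sbin
printf '#!/bin/sh\nexit 101\n' > "$TMP_DIR/policy-rc.d"
chmod +x "$TMP_DIR/policy-rc.d"
rc=0; "$TMP_DIR/policy-rc.d" || rc=$?
echo "status: $rc"    # prints: status: 101
```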
(I'd recommend avoiding that last method entirely and going with whichever of the first two options better suits your needs.)
---
EDIT: Just remembered another method but I'm not sure if it still works:
> A cultural acceptance of WIP software seems common, in free software communities, in my experience.
On the one hand, yeah it's harder to complain when something is only built in someone's spare time for free. On the other hand... is non-FOSS really any better? Has Windows yet managed to re-merge the old and new control panels?
The Snap team has some experience with this, seeing as how 20.04 has released and you still can't move the fricking ~/snap folder. Creating a generically named top-level folder in the user's home directory is a straightforward fuck you to all users.
This. Came here to post the same bug. I've been following the thread for the last 3 years, and it's an absolute mess.
I feel bad for the maintainers; it must be difficult to deal with all the rude and borderline disrespectful comments, particularly for something you're donating your time to, after all. But I must say that the architectural decision to create ~/snap was a colossal mistake.
Are they really donating their time? Aren't they Canonical employees? Canonical being a company that is paid mostly by companies that use its distro in production.
Dang, that really sucks. I was planning on updating my desktop to 20.04, but after reading about Canonical pushing snaps, and now this thread basically claiming that moving or renaming the directory is impossible because of what is, in my opinion, a design flaw in the entire system, I think I'll switch to Manjaro or Arch for my desktop instead.
I'm on rolling releases already anyway, but I'm not going to install an LTS version that is so deeply integrated with the distributor while at the same time ignoring the directory standards so incredibly blatantly.
I think the (not that bad) manual installation process might keep some folks from installing it, but it makes you learn, and choose, from the very beginning.
After the initial install, update it on your schedule, and install things as you decide they are needed.
Ubuntu is the opposite -- install the world, then go madly trying to disable and remove the phone-home, forced-update, bloat.
I agree. I'm using Manjaro on my laptop and it works a treat. I have read the horror stories of botched Arch upgrades, so I think I'll stick to the slightly-more-stable Manjaro, but the ecosystem itself is very nice for the moderately familiar Linux user.
I develop software that tries to target as many distros as possible, and I really don't see how Manjaro is in any way better than Arch. Botched upgrades aside, Manjaro is the only distro that couldn't produce a valid ISO that boots on KVM/qemu. It has a very weird kernel config; sometimes its users don't seem to have modules that are found on literally any other distro. Which is very sad, because Arch just works.
In my experience the Manjaro config tends to work better on some of the hardware I've tried it on, requiring less tweaking to get it to run properly.
I'll also admit that I can't for the life of me get Arch to properly install with all the tools and features that are normally built-in to desktop environments to work on first boot. I'm sure I can make it work if I give it another try now that I've gotten more used to using Manjaro, but the Arch installation experience is not something I want to go through again any time soon.
I understand that most Arch users will want to decide exactly how they set up their system and all, but for my personal (non-work) machine I just want an operating system that works, allows me to mess with the standard Linux stuff and manages to install itself without me holding its hand. I'd much rather tick a box that says "enable full disk encryption" than manually configure cryptsetup and LUKS parameters. It's just too much effort for what I get out of it.
I think automatic updates on all platforms are probably an immense net benefit to the world given the huge degree of security vulnerabilities it avoids.
Ubuntu users may be more tech proficient than average users but even for developer machines, I think it takes a lot of concern out of the equation if the software is up-to-date.
As far as I'm concerned this doesn't deserve an '/s'.
The option to have automatic updates (whether it is enabled by default or not) is an entirely different issue than the obligation to have automatic updates. The first one is a blessing, the second one is a nightmare.
But wait for the user to have a week-long job running for three days before you do it. That way they have to explain to their boss why there is now a 3 day delay.
I’m not sure how they’re trying to bill it these days, but certainly actions speak louder than words. All of Canonical’s design decisions seem to be aimed at making a consumer OS rather than a workstation or server one.
An important clarification: You never upgrade across major versions in place, or you never `apt-get upgrade`? I understand not jumping versions, but never applying security updates seems... imprudent.
I've got an ansible playbook that modifies my ~/.config/lxc/config.yml for certain deployment scenarios.
Due to LXC now being a snap, the file is simply not there.
I guess it's 5 layers deep in namespaces, overlayfs and other stuff.
I was so fed up with this that I removed Ubuntu (and replaced it with Gentoo).
Editing a config file for a program should not be more complicated than opening the file in your preferred editor, no matter your distro.
Snaps break that.
I can't understand why they would break such fundamental thing and seemingly don't care about it.
LXC (and LXD, by extension) is packaged on Arch Linux as a binary, not a snap. I'm really grateful that the packagers worked with upstream to make this possible, as I heavily utilise LXC in my workspace.
YEP. For various reasons, we want to run Ubuntu on a server. Reboots and autoupdating are hazards and bugs for us. Upgrades & fixes mean patching our containers and starting fresh servers. Instead of rebooting a running server, we delete it. Trying to use Ubuntu environments means having to add custom disable-reboot-upgrade-downtime init scripts that we do not trust across updates: will bumping a minor version break them 5 days later? Super bad for reputation. I'd rather be updating Ubuntu more frequently, not less!
(Same-but-different for enforcing non-interactive mode on installs.)
Same! Unfortunately, we have a system that is more or less tethered to LXD. While other workloads have moved to CentOS or Red Hat systems, LXD is supremely broken on those systems (or any distro that uses SELinux). It's extremely frustrating to be tied to Ubuntu for this one reason. It's even more frustrating when one of the key selling points from Canonical is "cross distribution!", which appears to only be true for basic cases, or in the sense of, "Sure, it's cross-distro, if someone other than us is willing to put in the work for their distro!"
I get the impression the idea behind snaps came from someone who recognized that configuration management was a problem but didn't yet appreciate just how deep that problem goes.
It is a seriously difficult thing to get right. I always applaud people who take on difficult problems and try to solve them, but I would not expect it to have any sort of robustness for another 5 years at least.
My god! Being unable to use snaps when you have a "non-standard" home directory is ridiculous. What's even more ridiculous is the response from the snap developers: they marked this issue as "Won't Fix".
Dear snap developers: don't assume that every Linux user's home directory is /home/username just because the standard Ubuntu installer sets it up that way. The only thing you should care about here is $HOME.
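The fix being asked for is a one-liner; a sketch (the `snap` directory name comes from the thread, while the sample home path is just an illustration):

```shell
# Derive the per-user data dir from $HOME, whatever it happens to be,
# instead of hard-coding /home/<username>.
HOME="/home/example.com/jdoe"    # simulate a non-standard home dir
SNAP_USER_DIR="$HOME/snap"
echo "$SNAP_USER_DIR"    # prints: /home/example.com/jdoe/snap
```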
People will ditch snap and end up using Flatpak or something else, just like the rest of the world. It happened with Upstart, and then it happened with Mir and Unity.
I uninstalled it on all my Ubuntu systems and put it on hold. No new Ubuntu installs, switched to Debian.
What the actual fudge?
So my sssd AD trust users, whose home directories are /home/example.com/sam-account-name, are going to run into trouble when they upgrade to 20.04?
I think I am going to have to start figuring out a plan to migrate them all over to RHEL...
I don't know how it has to be installed on your version, but on 19.10 snap can be completely removed. The only drawback is that pulseaudio depends on it, but I guess for a server that's not much of a problem.
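On that release the removal amounts to something like this (an untested sketch; requires apt and root):

```shell
# Remove snapd and its managed snaps, then pin the package so a later
# upgrade or dependency can't silently reinstall it.
sudo apt purge snapd
sudo apt-mark hold snapd
```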
Is PulseAudio no longer viable using APT alone? One of the first things I do is excise Snap from the system entirely and install Flatpak. I had been planning to upgrade my daily driver to 20.04 in the very near future (I prefer LTS versions), but audio problems would be a complete deal breaker.
Snap developer responses are hilarious. No matter what your use case is, Snap developers know better than you, you silly irresponsible sysadmin/user. Snap is basically just another App store.
I'm using LXC from ubuntu packages for that very reason, got burned when a cluster upgraded automatically. I guess I must be very out of date if the debs no longer exist.
I actually really like snaps and that's the only bad experience I've had. I don't know why LXC would come in that form, very weird.
Snaps also are completely broken for NFS mounted home and data directories with automount. Which is the classic corporate Linux deployment. Of course, systemd doesn't even support wildcard automount....
On top of all of this bad behavior... I did try Snap to install Opera browser. Opera is written such that you need to put widevine libs in a certain directory for it to work. After tons of hacking and Googling, there is zero way to inject files into the file system of a Snap. I understand the security precautions, but that feels ridiculous. I left Ubuntu after more than a decade over their obsession with Snap.