Disabling Snaps in Ubuntu 20.04 (kevin-custer.com)
599 points by temeritatis 36 days ago | 422 comments



Kudos to the OP, I'm glad this got posted on hacker news because Snap is so ridiculously broken.

Since 16.04, Snaps have been a huge pain for me with running LXC in production environments.

By default, Snap applies updates and restarts Systemd services anytime it likes, and there's no way to turn this behavior off! The only way to get around it is to download the Snap package binary and install that directly. Then Snap won't "know" where to get updates.

(Caveat emptor: "Workarounds" like this can easily lead to a bad security scenario, since any critical security patches won't be installed by any standard system update process)
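For anyone in the same boat, here's roughly what that sideload workaround looks like, plus a less drastic option. This is a sketch using lxd as the example package; the snap CLI flags shown are standard, but double-check them against your snapd version:

```shell
# Sideload: download the .snap and install it without a store link, so snapd
# has nothing to auto-refresh it from. You lose security updates -- see caveat.
snap download lxd                            # writes lxd_<rev>.snap + .assert
sudo snap install ./lxd_*.snap --dangerous   # --dangerous: skip store signature check

# Less drastic: keep auto-updates, but confine them to a maintenance window.
sudo snap set system refresh.timer=sat,03:00-05:00
snap refresh --time                          # show the effective schedule
```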

Did I mention that a fair percentage of the time the Snap updates would leave LXC in a completely broken state? In production (and development, too)!

The final nail in the coffin in this scenario comes in the form of Snap being the official recommended way to install LXC. I don't know if Stéphane and friends even publish Debian packages anymore.

I get the idea behind snap and appreciate it, but the lack of configurability and no clear definition of what stable really even means . . .


Hear, hear. I imagine for dogfooding reasons, LXD is now only packaged as a snap, so we can't use apt as the source anymore. After migrating, an upstream push to a 'stable' LXD snap channel introduced a regression that borked our environments, and there was no way to:

1. prevent machines in the fleet from pulling the broken LXD update

2. rollback broken machines to the previously working LXD version on the same channel, since it no longer existed in the Snap Store™.

What a joke! Now we're burnt on snap _and_ LXD.
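For completeness, this is the rollback we wanted, sketched with stock snap commands. It only works if the old revision is still cached locally or still published in the store, which was exactly the problem (the channel name here is illustrative):

```shell
# "snap revert" falls back to the previous locally-cached revision, if any.
sudo snap revert lxd

# Pinning a channel only helps if the store still publishes the old revision:
sudo snap refresh lxd --channel=4.0/stable   # channel name is an example
```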


Canonical packages LXD as a snap package. Other distributions can package LXD in their favorite packaging format, and some do.

Some developers are trying to package for Debian. The work is in progress at https://wiki.debian.org/LXD


I've been looking forward to this work producing something usable for quite a while now; it's already been 3 years since I first eyed it. Is it actually coming to something working "soon"?


Debian is a bad choice if you want to package go applications (or rust apps, for that matter). Debian requires that all those little static dependencies be individually packaged. Common container software like lxd, podman, and umoci are not found in Debian.

The following distributions package LXD (that I know of):

* Void Linux

* Alpine Linux

* Arch Linux

* Gentoo

Some of those are more suitable for production installs than others, but if you know what you are doing and manage your deployments well, all of them could work.


Do you happen to know _why_ Debian decided to require that for Go projects? It's so absurdly complicated.

I've been looking into .deb packaging for Caddy but it really feels like they require us to jump through too many hoops to make it happen. I'd much rather just ship a prebuilt binary.


>It's so absurdly complicated.

Security. Dynamically linking stuff is always going to be better than statically linking. Do you trust upstream to keep track of security issues and rebuild everything in the absurd dependency tree that is Golang software? Or the years-long, ongoing effort that is the current Debian security team?

The reason why it's absurdly complicated is solely on the Golang team, not Debian.


I would love a deb-packaged Caddy as well :(


You'll be happy to know Caddy will have a deb package in the next release (albeit not part of official debian repos), see https://github.com/caddyserver/caddy/pull/3309


I would add NixOS to that list. Especially if you value rolling back upgrades or pinning package versions.


I was not aware. Too late to edit now, but that is indeed a great option! Rolling release is a bit ambitious for a hypervisor host without NixOS-like features, though.


Funny. I have been working on a blogpost detailing the silly things I encountered while packaging LXD properly for Arch Linux. Should probably finish it up one of these days.


Ahh Gentoo :) I once did a build-from-source install (I think it was called a stage-1 install) on a 486DX SBC (single-board computer). It compiled for well over 4 days :D


FYI: Podman is in the Debian NEW queue, and debs can be built from the packaging repo in the meantime.


Thanks for the info! Good to hear.


Yes, the situation with LXD packaging is ridiculous. I always wondered, do the maintainers really use it themselves for anything non-trivial?


Without going into too much detail: I personally got into LXD while bootstrapping kubeadm clusters with Salt. LXD profiles made it very easy to work with compared to the alternatives.

It's clearly not production stuff, but it works very well for a quick test bed before production deployments.


I guess users have been sufficiently beaten into submission that many of them will put up with software randomly quitting on them, but the idea that daemons should be snaps is doubly crazy. "Oh, Tomcat just updated itself, that's why we were down."

I really have no idea what people are thinking sometimes.


I always resisted running Ubuntu in production, even when it was much easier to install the necessary software. When you pick a distro, you're picking their packaging and maintainer culture. I've always preferred CentOS for that reason. Even the little things, like how installing a new package won't start/enable its services (because you should have a chance to configure them first), always came off as more pragmatic for system administration.


I run Debian on my workstation and laptops but still prefer to use CentOS/RHEL on my servers.

The automatic startup of newly installed services (before you even have the opportunity to configure them) on Debian (and derivatives) has always bothered me and is a (small) factor in my decision.

Fortunately, there are a few "workarounds" to prevent newly installed services from being automatically started!

First, there's the brute-force / heavy-handed approach -- override the systemd presets [0] to set all services to disabled by default:

  $ cat /etc/systemd/system-preset/00-disable-all-services.preset
  disable *
Alternatively, as a one-time thing, you can "mask" the service before installing the package, but this requires you to manually create the symlink (instead of using "systemctl mask") -- and requires you to know the name of the service unit beforehand:

  $ ln -s /dev/null /etc/systemd/system/nginx.service
  $ apt install nginx
  # ... configure nginx as desired ...
  $ systemctl enable nginx
  $ systemctl start nginx
Finally, there's the old, "sloppy" (IMO) method of "hacking" the /usr/sbin/policy-rc.d script to immediately exit with a status code of 101 ("action forbidden") [1].
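For reference, that policy-rc.d method boils down to something like this sketch (invoke-rc.d consults the script and treats exit status 101 as "action forbidden", so nothing gets started):

```shell
# Sloppy method: a policy-rc.d that forbids every service start/restart.
cat <<'EOF' | sudo tee /usr/sbin/policy-rc.d > /dev/null
#!/bin/sh
exit 101
EOF
sudo chmod +x /usr/sbin/policy-rc.d
```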

(I'd recommend avoiding the latter entirely and going with whichever of the first two options better suits your needs.)

---

EDIT: Just remembered another method but I'm not sure if it still works:

  $ RUNLEVEL=1 apt install nginx
---

[0]: https://www.freedesktop.org/software/systemd/man/systemd.pre...

[1]: https://people.debian.org/~hmh/invokerc.d-policyrc.d-specifi...


> I guess users have been sufficiently beaten into submission that many of them will put up with software randomly quitting on them

A cultural acceptance of WIP software seems common, in free software communities, in my experience.


> A cultural acceptance of WIP software seems common, in free software communities, in my experience.

On the one hand, yeah it's harder to complain when something is only built in someone's spare time for free. On the other hand... is non-FOSS really any better? Has Windows yet managed to re-merge the old and new control panels?


And the middle one from Win 7?


The rumour is that the old control panel will be removed in the next version of Windows 10, so I'd hope so?


There's a difference between bugs and software developers who care less about the user than about the software being up-to-date.

The first is a fact of life. The second is bad judgement.


Ugh, is it still impossible to disable Snap autoupdates?

Even if people should be updating regularly, forcing them feels completely antithetical to the Linux ethos of users having control over their devices.
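Last I checked, the closest thing snapd offers to an off switch is holding refreshes, and even then it refreshes anyway after a maximum hold period. A sketch, assuming GNU date:

```shell
# Push the next refresh out ~60 days; snapd will eventually refresh anyway.
sudo snap set system refresh.hold="$(date -d '+60 days' +%Y-%m-%dT%H:%M:%S%:z)"
snap refresh --time   # verify the hold took effect
```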


The Snap team has some experience with this, seeing as how 20.04 has been released and you still can't move the fricking ~/snap folder. Creating a generically named top-level folder in the user's home directory is a straightforward fuck you to all users.

Since 2017: https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1575053


This. Came here to post the same bug. I've been following the thread for the last 3 years, and it's an absolute mess.

I feel bad for the maintainers; it must be difficult to deal with all the rude and borderline disrespectful comments, especially for something you're donating your time to for free. But I must say that the architectural decision to create ~/snap was a colossal mistake.


My response to "it's free" is "Here's a turd. SHUT UP IT'S FREE"

It's not an argument. Its freeness is a completely irrelevant property.


Are they really donating their time, aren't they Canonical employees? Canonical being a company that is paid mostly by companies that use their distro in production?


Yeah. That is what winds me up the most about snap. It is so... obnoxious! Why can’t it be .snap?! Or even better, something xdg compliant?


Dang, that really sucks. I was planning on updating my desktop to 20.04, but after reading about Canonical pushing snaps, and now this thread basically claiming that moving or renaming the directory is impossible because of what is, in my opinion, a flaw in the entire system design, I think I'll switch to Manjaro or Arch for my desktop instead.

I'm on rolling releases already, but I'm not going to install an LTS version that is so deeply integrated with the distributor while at the same time ignoring the directory standards so incredibly blatantly.


arch is really a pleasure to use on the desktop, especially if you're already comfortable with a rolling release distro.


I love arch.

I think the (not that bad) manual installation process might keep some folks from installing it, but it makes you learn to choose from the very beginning.

After the initial install, update it on your schedule, and install things as you decide they are needed.

Ubuntu is the opposite -- install the world, then go madly trying to disable and remove the phone-home, forced-update, bloat.

  # ps
(see pages of stuff you don't need running)


I agree, I'm using Manjaro on my laptop and it works like a treat. I have read the horror stories of a botched Arch upgrade so I think I'll stick to the slightly-more-stable Manjaro but the ecosystem itself is very nice for the moderately familiar Linux user.


I develop software that tries to target as many distros as possible, and I really don't see how Manjaro is in any way better than Arch. Botched upgrades aside, Manjaro is the only distro that couldn't produce a valid iso to boot on KVM/qemu. It has a very weird kernel config, sometimes users don't seem to have modules that are found on literally any other distro. Which is very sad because Arch just works.


In my experience the Manjaro config tends to work better on some of the hardware I've tried it on, requiring less tweaking to get it to run properly.

I'll also admit that I can't for the life of me get Arch to install properly with all the tools and features that are normally built into desktop environments working on first boot. I'm sure I could make it work if I gave it another try now that I've gotten more used to using Manjaro, but the Arch installation experience is not something I want to go through again any time soon.

I understand that most Arch users will want to decide exactly how they set up their system and all, but for my personal (non-work) machine I just want an operating system that works, allows me to mess with the standard Linux stuff and manages to install itself without me holding its hand. I'd much rather tick a box that says "enable full disk encryption" than manually configure cryptsetup and LUKS parameters. It's just too much effort for what I get out of it.


Void Linux is also great if you're not a fan of systemd.


Pop!_OS would be a good alternative too, since Pop!_OS is going with Flatpak and apt.


Would you expect anything else from the distro that rammed half-baked versions of systemd, pulseaudio, and Unity down users' throats?


I really wish it had been .snap


Props to Microsoft, for getting their approach to Windows updates adopted as an industry standard! /s


I think automatic updates on all platforms are probably an immense net benefit to the world given the huge degree of security vulnerabilities it avoids.

Ubuntu users may be more tech proficient than average users but even for developer machines, I think it takes a lot of concern out of the equation if the software is up-to-date.

As far as I'm concerned this doesn't deserve an '/s'.


The option to have automatic updates (whether it is enabled by default or not) is an entirely different issue than the obligation to have automatic updates. The first one is a blessing, the second one is a nightmare.


Next step, introduce the mandatory restart as a standard.


But wait for the user to have a week-long job running for three days before you do it. That way they have to explain to their boss why there is now a 3 day delay.


It's critical? Don't use hobbyist software for that then. Take Debian or CentOS or something else with a non-cavalier development culture.


Ubuntu is "hobbyist" now? That's certainly not how they bill their entire platform.


I’m not sure how they’re trying to bill it these days, but certainly actions speak louder than words. All of Canonical’s design decisions seem to be aimed at making a consumer OS rather than a workstation or server one.


Ubuntu, no. Snap, seemingly yes.


She was asking for it by wearing that eh?


If it's good enough for Boeing...



I remember updating Ubuntu once. It bricked my installation for some reason. Ever since then I rarely update software.

With Ubuntu I just wait until the next LTS.


An important clarification: You never upgrade across major versions in place, or you never `apt-get upgrade`? I understand not jumping versions, but never applying security updates seems... imprudent.


May as well use Debian and escape the insanity.


I've got an ansible playbook that modifies my ~/.config/lxc/config.yml for certain deployment scenarios.

Due to LXC now being a snap, the file is simply not there. I guess it's 5 layers deep in namespaces, overlayfs and other stuff. I was so fed up with this that I removed Ubuntu (and replaced it with Gentoo).


I came to the realization that my home server doesn't need Ubuntu, it needs Debian.


Editing a config file for a program should not be more complicated than opening the file in your preferred editor, no matter your distro. Snaps break that.

I can't understand why they would break such a fundamental thing and seemingly don't care about it.


I've thought of switching to Debian but I'd rather use a stripped-down Ubuntu so I can use PPAs.


Wait do PPAs not work on Debian?!


All this shit actually drove me to NetBSD for my home server


Why not use Devuan? No systemd, no snapd(tm), and none of the same crud (little Microsofts) hiding deep in the OS...


Mostly because it's a large bloated whale compared to NetBSD.


Nah, it's just there, ~/snap/lxd/current/.config/lxc/config.yml
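If existing tooling (like the ansible playbook above) still expects the legacy path, one hypothetical shim is to symlink it to the snap's copy. This assumes the ~/snap/lxd/current layout from the parent comment:

```shell
# Point the legacy config path at the snap's copy so old tooling keeps working.
mkdir -p "$HOME/.config/lxc"
ln -sfn "$HOME/snap/lxd/current/.config/lxc/config.yml" \
        "$HOME/.config/lxc/config.yml"
```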


LXC (and LXD, by extension) is packaged on Arch Linux as a binary, not a snap. I'm really grateful that the packagers worked with upstream to make this possible, as I heavily utilise LXC in my workspace.


YEP. For various reasons, we want to run Ubuntu on a server. Reboots and autoupdating are hazards and bugs for us. Upgrades & fixes mean patching our containers and starting fresh servers. Instead of rebooting a running server, we delete it. Trying to use Ubuntu environments means having to add custom disable-reboot-upgrade-downtime init scripts that we do not trust across updates: will bumping a minor version break them 5 days later? Super bad for reputation. I'd rather be updating Ubuntu more frequently, not less!

(Same-but-different for enforcing non-interactive mode on installs.)


It's unfortunate that they don't realize they're alienating the very customers that have money and will be in it with them for the long term.

The idea with LTS is to be predictable and stable.

Put another way:

- desktop people want ALL UPDATES NOW.

- server people want NOTHING TO CHANGE EVER.

(yes, not exactly that, but kind of)


Same! Unfortunately we have a system that is more or less tethered to the use of LXD. While other workloads have moved to CentOS or Red Hat systems, LXD is supremely broken on those systems (or any distro that uses SELinux). It's extremely frustrating to be tied to Ubuntu for this one reason. It's even more frustrating when one of the key selling points from Canonical is "cross distribution!" Which appears to only be true for basic cases, or in the sense of, "Sure it's cross distro if someone other than us is willing to put in the work for their distro!"


Have you tried Proxmox? It runs LXD containers along with VMs.


It runs LXC, NOT LXD. And LXD now does the same as Proxmox PVE: it runs both VMs and containers.


I get the idea behind snaps: someone recognized that configuration management was a problem, but didn't yet appreciate just how deep that problem goes.

It is a seriously difficult thing to get right. I always applaud people who take on difficult problems and try to solve them, but I would not expect snap to have any sort of robustness for another 5 years at least.


Does anyone know if they fixed the bug where snaps don't work if your home directory isn't /home/<user>, or is on NFS?


of course not, the only progress so far has been people coming up with horrifying workarounds: https://bugs.launchpad.net/snappy/+bug/1620771


My god! Being unable to use snaps when you have a “non-standard” home directory is ridiculous. What’s even more ridiculous is the response from the snap developers: they marked this issue as “Won’t Fix”.

Dear snap developers, don’t assume that every Linux user has to have their home directory at /home/username just because the standard Ubuntu installer does this. The only thing you should care about here is $HOME.
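For anyone stuck on this, one of the workarounds from that bug thread amounts to a bind mount. It needs root, is fragile, and has to be redone on every boot unless you add it to fstab; the /nfs/homes path is purely illustrative:

```shell
# Bind-mount the real (e.g. NFS) home onto the path snapd hardcodes.
sudo mkdir -p "/home/$USER"
sudo mount --bind "/nfs/homes/$USER" "/home/$USER"   # source path is an example
```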


People will ditch snap and end up using Flatpak or something else, just like the rest of the world did. It happened with Upstart, and then it happened with Mir and Unity.

I uninstalled it on all my Ubuntu systems and put it on hold. No new Ubuntu installs, switched to Debian.


What the actual fudge? So my Asad AD trust users, whose home directories are /home/example.com/sam-account-name, are going to run into trouble when they upgrade to 20.04?

I think I am going to have to start figuring out a plan to migrate them all over to RHEL...


s/Asad/SSSD/


You had to install snap on a production server?

I don't know how it must be installed on your version, but on 19.10, snap can be completely removed; the only drawback is that pulseaudio depends on it, but I guess that for a server it's not that much of a problem.
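For reference, the removal usually amounts to something like this sketch (remove your own snaps first; lxd here is just an example):

```shell
snap list                        # see what's installed
sudo snap remove --purge lxd     # repeat for each remaining snap
sudo apt purge -y snapd
sudo apt-mark hold snapd         # stop apt dependencies from pulling it back
```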


Is Pulseaudio no longer viable using APT alone? One of the first things I do is excise Snap from the system entirely and install Flatpak. I had been planning to upgrade my daily driver to 20.04 in the very near future (I prefer LTS versions) but audio problems would be a complete deal breaker.


This is honestly probably an unexpectedly good heuristic -- remove anything pulseaudio depends on, and you'll likely get a more stable system.


> By default, Snap applies updates and restarts Systemd services anytime it likes, and there's no way to turn this behavior off!

For context: https://forum.snapcraft.io/t/disabling-automatic-refresh-for...

Snap developer responses are hilarious. No matter what your use case is, Snap developers know better than you, you silly irresponsible sysadmin/user. Snap is basically just another App store.


I'm using LXC from ubuntu packages for that very reason, got burned when a cluster upgraded automatically. I guess I must be very out of date if the debs no longer exist.

I actually really like snaps and that's the only bad experience I've had. I don't know why LXC would come in that form, very weird.


Snaps also are completely broken for NFS mounted home and data directories with automount. Which is the classic corporate Linux deployment. Of course, systemd doesn't even support wildcard automount....


On top of all of this bad behavior... I did try Snap to install Opera browser. Opera is written such that you need to put widevine libs in a certain directory for it to work. After tons of hacking and Googling, there is zero way to inject files into the file system of a Snap. I understand the security precautions, but that feels ridiculous. I left Ubuntu after more than a decade over their obsession with Snap.


Canonical always has tried to differentiate themselves, and they just can't execute. Remember Unity, Mir, Juju, upstart and all the other failed shit they've come out with? Snap is just more of the same. I don't want to run that garbage on my desktop. I don't need more daemons and forced auto-updates and all the baggage.

I strongly recommend anyone similarly frustrated to check out debian, which is a fantastic distro. Thanks to Kevin for posting this, but if you're using Ubuntu and disabling snap, you're fighting against the current and I have to imagine it's going to be increasingly difficult with subsequent releases.


I recently hit the wall with Ubuntu too. I'll still run -server in the cloud, but each time over the last 5-7 years I did point-samples of "is Linux viable as a desktop other than ChromeOS", it was always Ubuntu/Gnome. It turns out that that was my problem all along!

I tossed Gentoo and KDE (this is not a Gentoo endorsement, it was just a "hey I wonder what Gentoo's been up to in the last dozen years since I last used it") on a spare laptop. It turns out that KDE is amazing now. It's seriously the best DE I've ever used, and I'm a Mac user! (Half of the utilities I install out of the box on a fresh macOS are built in, and the annoying stuff that used to be editing arcane files is now easy preference settings. It's actually great.)

What the hell are Ubuntu doing shipping Gnome (with the ugliest custom theme known to man, to boot)? Admittedly it was my own ignorance, for which they are not responsible, but their mindshare and bad choice tainted my whole view of the state of the art for a long time.


> Half of the utilities I install out of the box on a fresh macOS are built in, and the annoying stuff that used to be editing arcane files is now easy preference settings

That's been the case with KDE for 15-20 years now. KDE 3.5 was a great environment (and Trinity (TDE) is a modernized fork of it).

Note that, this year, KDE added telemetry to their Plasma desktop environment. Of course, it's opt-in, so it must be acceptable, right? Well, of course, users who objected to the telemetry found bugs that caused data to be recorded even when disabled.

KDE's response was to ban said users from reddit.com/r/kde and call them "paranoid schizos." (The mods there are KDE members wearing "KDE developer" flair, not random Redditors.)

So, despite using and recommending KDE for almost 2 decades, it's hard for me to do so any longer. I wholeheartedly recommend checking out TDE instead.


Thanks for letting me know. It looks like the telemetry (kuserfeedback) isn’t even a dep of the Gentoo plasma-meta metapackage, so I don’t think it was even built on my system (but will double check when not on mobile).

If I dabble with debian or kubuntu I will make sure to mitigate it, thank you for making it known, keep up this kind of good work!

It’s a real shame that they found it necessary to even build a telemetry client. I switched to free software after years on macOS (I remember upgrading to System 7) because of all the phone-home that Catalina STILL does even with iCloud, Siri, analytics/crashes, Screen Time, iMessage/FaceTime, Location Services, ntp, App Store, and software update all disabled.

iOS apps in the App Store are all allowed to phone-home like mad, too, and Apple permits this on the basis of you “opting in” to it in the App Store TOS, as if we have any sort of choice on iPhones. It’s totally endemic, and entirely undermines the credibility of Apple’s claims to caring about user privacy.

Seems like spying on users is getting heavily normalized these days. :(


The grandparent is overreacting.

1. KDE's telemetry is opt-in (as you mentioned)

2. All data is anonymized

3. You can select which information you're comfortable sharing

4. If you have it turned off, the data is only recorded locally, on your hard disk.

5. That data was always recorded in order to enable other features like "Recently used documents".

Here's some explanation:

https://www.reddit.com/r/kde/comments/fmgyy9/kuserfeedback/

Honestly, KDE has been one of the most open, privacy aware and idealistic communities out there. The criticism is quite unfair.

(I used to be a KDE contributor)


Are you able to address this point that comment made?

> users who objected to the telemetry found bugs that caused data to be recorded even when disabled.

> KDE's response was to ban said users from reddit.com/r/kde and call them "paranoid schizos."


The recording is local, on the hard disk. I think people are worried that by producing the files, there is just one layer of bug (accidental upload) keeping them private.

I agree. The data should not even be generated if the analytics are not opted-in-to.

I do give them credit for making the system opt-in. They deserve that.


Not really. I wasn't aware of this. The only thing I found on reddit regarding the situation was the link I sent above which didn't have any controversy.


> all the phone-home that Catalina STILL does even with iCloud, Siri, analytics/crashes, Screen Time, iMessage/FaceTime, Location Services, ntp, App Store, and software update all disabled.

Do you have examples of what you mean?


I have a bunch of Little Snitch screenshots around here somewhere, but it is easy to reproduce: do a fresh install on spare machine or VM (opt out of all services like iCloud, Siri, Location, MAS, et c), install Little Snitch, and then disable the built-in Little Snitch silent allow rules for “system services/iCloud” or whatever it’s called. Reboot and observe.


Do you happen to have links?


It appears to be this: https://www.reddit.com/r/kde/comments/f7ojg9/kde_plasma_kuse...

It sounds a bit hyperbolic, but I can't recall a time when telemetry was added to a system and then later dialed back in any way. It always increases.


Echoing the other similar comment, where can I find citations for this annoying drama I now need to take into account?


It's interesting how underrepresented KDE is in the "big distros". While it makes sense that Fedora and Ubuntu ship GNOME, and there are "spins" of each that include different desktops out of the box, it still surprises me.


In my opinion KDE has always been way less polished than gnome and is currently not financially backed in any meaningful way. They also have problems focusing on the core product and won't stop shipping half-assed programs nobody asked for.


Wasn’t there some big kerfuffle years ago involving KDE, Qt, and licensing?


yes; and again about 2 weeks ago:

https://news.ycombinator.com/item?id=22821050


There was.


Correct. A few more details:

>[...] To show our commitment to this dual licensing model, the KDE Free Qt Foundation was founded in 1998. It is a non-profit foundation ensuring that Qt will always be available under open source licensing terms. The agreement that governs this foundation has stayed mainly unchanged over the last 17 years. As a lot of things have changed during these years, we have been working with KDE over the last year to create a new and updated agreement that takes todays realities better into account.[...]

From: https://www.qt.io/blog/2016/01/13/new-agreement-with-the-kde...

The updated contract mentioned in that blog post is available here: https://kde.org/community/whatiskde/Software_License_Agreeme...


You could look it up on Wikipedia. KDE and Qt are Free Software.


I put arch/gnome on a system right next to an ubuntu box.

It was like a different gnome - quickly reaching the desktop and lots of nice differences (like the privacy menu wasn't crafted by marketing and legal)


> and they just can't execute

That’s a bit rich: are they not the #1 consumer distro? That hardly implies they are failing to execute. A successful product has missteps; so what.

> I don't want to run that garbage on my desktop.

So don’t. Why complain that others do? I use Ubuntu because it works and I can mostly find information about how to do what I want. There are major aspects of Ubuntu I don’t like (Gnome, Snap) but selecting a distro is all about choosing your compromises. I have tried Debian and other distros, but I tend to go back to Ubuntu because it works best for me.


Most of HN is shitting on popular things with a hot take and a smug, condescending tone, usually erroneously. If Canonical hadn’t tried new projects and failed, the poster would complain that they never innovate.

People complain like this because they have no real control of their own lives. It makes them feel smart, if only they were in control, then things would be better. It would be so easy, the people in charge must be stupid. It comes from a lack of experience and the inability to understand the challenges in those positions.


>Projecting this hard.

The shortcomings of snap are well documented. I assumed anyone reading the comments would have been aware of them. But apparently some people will jump on any chance to virtue-signal.


> That’s a bit rich: are they not the #1 consumer distro, which hardly implies they are failing to execute

All the hard work to make it a viable OS is done by Debian. Canonical just adds some polish and then wrecks it all with poor design decisions over and over again.


> All the hard work to make it a viable OS is done by Debian

You could equally say that all the hard work to make a viable OS is done by Linux, so screw Debian? Last I hurd, the GNU developed OS is unviable (no 64 bit, no SMP).

Or equally say that all the hard work to make the majority of end-user programs (you know, the raison d’être for an OS) is done by other open source projects, not Debian, so screw Debian...

These are open source projects, with cross-pollination everywhere, each with their own opinions on licensing. Ubuntu mostly helps the ecosystem, and certainly isn’t a parasitic player (although like all, they are not perfect).

Why bag on Ubuntu just because it happens to be popular? Should we also cancel all the other Debian based distros?

PS: complaining about upstart shows you are just being biased (or perhaps misinformed). Canonical were developing upstart before systemd was developed - and systemd was developed by RedHat. The main con given against upstart was not technical, but due to licensing. “In terms of overall feature[s] there is really rather little to distinguish upstart from systemd” https://wiki.debian.org/Debate/initsystem/upstart


> You could equally say that all the hard work to make a viable OS is done by Linux, so screw Debian?

No, you couldn't say that. Without toolchains, userlands, and packaging, a kernel is pretty worthless. The barest bones you can go is still gcc, linux, uclibc, and busybox. There is more code that goes into a computer running Linux than there is in the Linux kernel. By a wide margin.


If it had to be done, GNU programs could all be replaced. Port BSD tools or improve busybox tools, use KDE instead of Gnome, and there is a variety of great packaging solutions that aren’t .deb. AFAIK GCC is already being replaced by clang due to the GCC codebase, amongst other reasons. Distros mostly use GNU programs for historical convenience. Given incentive, GNU could be dropped by Ubuntu for the desktop. The most popular Linux distro Android has moved away from GNU already.

FSF does fabulous work, which we are all appreciative of, but some decisions are peeing in the open source pool.

I think RMS creates unnecessary division against Linux and Linus for what I feel are poor reasons. I went to a lecture by him where he spent half his time being negative towards Linux and Linus (that felt like he was just pissed off because Linux was popular) and a bit because Linus had used the GPL2 (not trivial to change, and you don’t get change by attack). Being negative towards the people who are on your own side is wrong IMHO. It could equally be argued that Debian should be called Debian/Linux. Edit: I just found a quote from Linus about RMS that summarises what I wished to say here: “It's not passion for something, it becomes passion against something else.“ - http://torvalds-family.blogspot.com/2008/11/black-and-white....

PS: I totally admire RMS and his relentless idealism. He has given so much to the world, and the faults I see in him are interwoven with the strengths I see: I’m not sure the faults could be mitigated without badly weakening the virtues.


> The most popular Linux distro Android has moved away from GNU already.

Android went from non GNU absolutism to GNU LGPL for its standard java library a few years ago:

https://arstechnica.com/tech-policy/2016/01/android-n-switch...


Replacing GNU has already been done, but that isn’t the point. The point is that without projects like Debian, there is no operating system.

FWIW the whole GNU/Linux pedantry bothers me too.


Neither Debian nor GNOME are FSF projects.


Sorry, you are quite correct about both. I jumped to a conclusion about GNOME because I did do a quick check and saw the “G” stood for GNU, but I didn’t check more deeply. I have no excuse for confusing Debian with GNU/FSF, and the comments will stand to remind me of my shame.

GNOME history: https://unix.stackexchange.com/questions/141114/what-is-the-...

To return to topic: “How to install Flatpak apps on Ubuntu 20.04 LTS”: https://jatan.blog/2020/04/25/how-to-install-flatpak-apps-on...


Then why doesn't everyone use Debian?


Debian has its own set of problems. Like the Chromium package maintainer deciding unilaterally several years ago that installing extensions remotely shouldn't be allowed and gating that standard functionality behind a command line flag.

There was zero documentation on the change and the error you received attempting to install an extension was basically "operation failed". I discovered the cause only because there was an open bug about it on the Debian bug tracker where the maintainer refused to acknowledge the problem. Eventually, sane minds prevailed and that stupid patch was reverted.

So, unfortunately - you'll end up having to deal with people that refuse to look at things from the user's perspective no matter what distro you use.

And yes - I'm still bitter. :-/


There is also the ffmpeg kerfuffle a few years back when Debian decided to replace the ffmpeg package with an incompatible and inferior fork. You can imagine the amount of confusion that ensued.

That said, I think Debian's occasional messups are far less egregious and damaging than Ubuntu's though.


There was also that time many, many years ago they decided to remove the ability to load binary firmware blobs for things like network cards in the kernel because of an interesting interpretation of the GPL. :-/


Maybe when they thought that making non-free drivers opt-in was a good idea; also old kernels / packages.

LTS on Ubuntu was always better than on Debian. Saying that Ubuntu is just a repackage of Debian is very short-sighted, especially on the security side: Canonical's security team is top notch.


> Canonical security team is top notch.

They need to be: for a long time, Ubuntu has shipped with EOL kernel versions https://ubuntu.com/kernel/lifecycle


Canonical is at a disadvantage here because Red Hat directly employs or has significant established relationships with many of the people who make kernel release decisions. Plus, RHEL is the de facto standard enterprise distribution, which means any decision the kernel community makes regarding what they believe "enterprise" requires will often be a reflection of Red Hat's plans.

But what benefits Red Hat in the enterprise world is to their detriment in the consumer world. There's a reason the Debian/Ubuntu package ecosystem is richer and more featureful than RPM, and this is why Ubuntu dominates in the container space--because almost any piece of software that one could expect to have been packaged has been packaged as a .deb and already exists in the default package archives. I can't count the number of times I couldn't find an RPM--certainly not in the default repositories (RHEL, CentOS, or even Fedora), but not even in the third-party community repositories. And those that do exist are of lesser quality than the comparable .deb, for various reasons. (That is, the long-tail of packages is of higher quality for Debian.)

By pushing Snap, Canonical is definitely going astray. Ubuntu's competitive advantage is the Debian package ecosystem. Both Canonical and Red Hat seem to underestimate the role and importance of their respective packaging ecosystems. How many projects to revolutionize or replace RPM/Yum/whatever at Red Hat have crashed and burned? Many, though it's hard to count because half-way through they often realize what they're trying to do is functionally or even technically impossible (as with their aborted 2017 plans for RPM package streams), and scale things back to iterative improvements.

Containers are a security nightmare, and pretty much the only reason to pay Canonical and Red Hat licensing fees is for security and bug fix maintenance of their package archives. On our large Kubernetes clusters at work there are thousands of open CVEs for the containers that are being run, and we'll have to boil the oceans to get them all updated, let alone keep them updated. But updating packages is as simple as an apt-get/yum upgrade[1], and rarely do you have to worry about anything breaking, especially relative to the pain that updating containers regularly brings.

[1] If the container uses Ubuntu, Red Hat, etc you can sometimes just rebuild the container to get the newer packages. But that assumes you control the container image. Most containers come from third-party, decentralized sources (that's the point!). But Docker Hub doesn't cajole and coordinate container owners to update their crappy images. It's no substitute for the orchestration of people that are traditional package repositories.


> also old kernel / packages.

So, LTS/stable.


Ubuntu has been marketed as a beginner friendly distro, with communities easily accessible using a Google search (thinking about the likes of askubuntu, omgubuntu, ...). So I'd say online presence and beginner-friendliness.


Similarly, Mint is also one of the most popular distros, and its marketing, at least for a long time, has been that it's even more user-friendly than Ubuntu.


Marketing?


Lots of software out there with Ubuntu PPAs.


Debian did not send free CDs by mail


Did a large numbers of people use those? Legitimate question—it always struck me as a cool initiative for a very small number of people, but only that.

I'd expect most people tech-savvy enough to install Ubuntu would also have a decent enough internet to download a ~700mb file.


Back in the day, not much of my country, or even the US, had particularly fast internet. Nor did everybody have a disc burner in the days before USB booting being supported by the majority of computers' firmware.


If my memory serves correctly, this was 2004/2005, around the time I was discovering my home-burnt CDs and DVDs were going bad.

This was also around the time I would often brick my primary (only) workstation for whatever reason. Having a properly mastered Live CD was super useful.

I would order at least two with every release cycle for a few years at least.

Thankfully, I saw the light early with Ubuntu-server, and stayed with Debian. Ubuntu-desktop makes for a good enough live / recovery / troubleshooting environment, but not sure I’d use it for anything more.


The free LiveCDs were great marketing. As a teenage computer geek, it was way easier to convince casual computer users to try it out when I could lend them a nicely printed CD. And it looked way better than handing them a sketchy CD-R with some marker scribbles on it.


> I'd expect most people tech-savvy enough to install Ubuntu

we installed ubuntu with friends in freakin junior high school. it's not like it's rocket science...

OTOH the village where I grew up as a child only got DSL > 512k around 2008 iirc.


Checked a bit; the figure I could find was an Ubuntu employee estimating it at half a million in 2004, at the beginning of the programme -> https://ubuntuforums.org/showthread.php?t=1691&page=2&p=3255...

So yeah, a lot of people got those.


I ordered a few back in the day - the sleeve design was really nicely done.


Third-party support.


Because it's popular it gets 3rd party support. So people use it making it popular. Vicious/virtuous circle depending on whether you're looking at it from above or below.

Just like Windows, which Shuttleworth admires and seeks to emulate so very much.


Sigh. ok then let me explain that last comment.

It's called a network externality. Microsoft achieved this with piracy based market penetration of dos and it worked great for them and they've built on it ever since.

Shuttleworth understood the importance of getting established as being popular when he launched and would press install disks, as many as you asked for, and ship them to you at his own cost. As one example of nakedly going after market share and spending resource to do so.

Bug #1 in the ubu bug tracker is literally "windows is the most popular os."

Separate to the marketing, which is worth discussing on this site because some of us actually care about what works and why so wish to discuss it, let's talk engineering decisions.

Between 2 choices that are technically about equal, choose the one that is more popular. Many feet trample more bugs. Better support. More likely to be around after $time_period. If it needs to work with something else, the managers of the something else project will likely support the more popular first, etc. Obviously popularity is not the only concern but it really does count for something; ignore it as a dimension in your decision process at your peril. I use Linux, and Ubuntu as it happens, on this laptop. I'm aware Windows and OSX are more popular and that popularity makes certain things easier. For /my/ purposes and to /my/ taste Linux and Ubuntu are worth paying that cost to have installed here and I'm very comfortable with that decision.

Just quietly, perhaps people who don't care for business decisions and engineering decisions are on the wrong website? There's plenty of places "boosters" can go to do that.


> I don’t like (Gnome, Snap)

Snaps may be a pain sometimes, but Gnome seems to be working like a charm...


GNOME is great as long as you use it the way that GNOME devs want you to (this year). If you want extensions, themes, customization, to use software the way it worked last year, or tray icons, it becomes... less supported.


I tried it recently and went back to i3. For me, Gnome didn't work well for multi-monitor setups and seems surprisingly lacking in customisations. It did seem very polished though.


People often complain about design decisions of the GNOME team: removing desktop icons, status bar, ...


Those people can choose another DE with legacy features like desktop icons. Tradition isn't a reason to keep up bad habits and I'm thankful to Gnome for daring to take tough decisions for the greater good.

I wouldn't use any other DE at this point. KDE has always been cluttered and XFCE is buggy and not particularly intuitive.


There was a very good post here in Hacker News explaining why the decisions that GNOME (and others as well) have taken are bad: https://news.ycombinator.com/item?id=22901541


Yes I saw this the other day. The post is incorrect in several ways but there are some valid opinions in there that I agree with too.

Nothing as big as a DE is perfect, thankfully it's broadly moving in the right direction.


They're not "legacy features" merely because your tastes have changed. They're still just "features." Calling them mean names doesn't change that, it just makes it harder to take your argument in good faith -- same as calling desktop icons "bad habits." I'm a happy GNOME user but ascribing moral judgements to common computer functionality is kind of weird.


This is debatable because it's a matter of taste (or habit), but one thing they got right is optimizing the interface for Touchscreens.


How is removing features "for the greater good"?


> like a charm.

Like a Juju one? :)


>> and they just can't execute

> That’s a bit rich: are they not the #1 consumer distro, which hardly implies they are failing to execute.

No, that's Debian doing almost all the work.

90% of the packages in Ubuntu are simply taken from Debian without significant modification.


Unity was very good in a lot of respects. Both its UI elements and its performance. Unfortunately, Unity 7 depended on Compiz somewhat heavily, and when it came to writing a replacement of the full stack, Canonical didn't manage to execute.

But have you been following last year's improvements to GNOME's performance and responsiveness? A lot of it is Canonical's devs bringing their experience from Unity.


Gnome 3 runs like an absolute dog on my Skylake notebook using Ubuntu 19.10. I don't know what metrics you have been looking at, but as a regular user I "feel" that the UI is constantly lagging during regular use. I didn't think it was this bad when I was using Fedora in the past, but that was a wayland based installation.


Enabling the BFQ scheduler reportedly boosted the responsiveness https://bugzilla.redhat.com/show_bug.cgi?id=1738828 and I can't remember where I saw someone saying it had a positive impact on Gnome in particular.
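For anyone wanting to try it, the I/O scheduler can be switched at runtime via sysfs. A quick sketch, assuming the disk is sda and bfq is available in your kernel (it's built as a module on recent Ubuntu kernels):

```shell
# Show the schedulers available for this disk; the active one is in [brackets]
cat /sys/block/sda/queue/scheduler

# Switch to bfq for this boot only (not persistent across reboots)
echo bfq | sudo tee /sys/block/sda/queue/scheduler
```

To make it persistent you'd typically add a udev rule rather than repeat this on every boot.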


Thanks, this is definitely worth a shot


Weird. I'm on a ThinkPad x230 Ivy Bridge and Gnome (3.36) is butter smooth for me (Ubuntu 20.04, fresh install). I did the minimal installation though.


What were you using prior to 20.04?


Windows 10.

And Void Linux/ i3-gaps before that.

And Linux BBQ / OpenBox before that.


It's better than 19.04 was. :-)

20.04 should be better still (I've yet to upgrade).


it's not a fair comparison, but for many years i have been running i3wm+dmenu+xterm as my desktop environment (and dwm instead of i3wm before that), and not even once i had given as much as a thought about performance of that. it just responds to my commands... instantaneously?

there is no need for latency-hiding animations and subsequently trying to make them run smoothly on the gpu if there's no perceptible latency.


The only caveat is the initial configuration which makes many people reluctant to use i3 or WMs in general.


> Unity was very good in a lot of respects. Both its UI elements and its performance.

I hear a lot of praise for Unity, and I'm the kind of person who enjoys trying out new stuff; Linux desktops are no exception.

For me, Unity was broken because of alt-tab (behavior and lack of configurability).

It might work for everyone else but when I want to switch back to the last or second last thing I worked with I want that done now.

I don't want to look at the tab switcher to ponder what to do next, just alt-tab, done.

This has worked consistently in every Windows since at least 3.1 (the first my family owned), and in every Linux desktop environment I've used except Unity and Gnome 3. And in Gnome 3 it was at least configurable.

This might seem trivial to a lot of you but to keep focus I keep one application maximized most of the time. I don't use them side by side. Then when I need to reference something (Jira, vendor documentation etc) I alt-tab. Same goes for slack.


I also Alt-Tab a lot, and don't remember this being a problem. Either I got used to its behavior quickly, or it was configurable too.


I recommend Debian too, but potential switchers should be aware that it has a very particular update model. You get (non-security!) software updates once every two years, and that is the version you will be on for the next two years. There are sometimes workarounds (namely backports), but they're less well tested and sometimes break.

I think this model is underrated, for all that it can sometimes be annoying. Consistency is valuable. Constant change is not good, even when the changes themselves are positive. But it does mean you'll sometimes be left with out-of-date software.

Edit: Oh, I should mention that you can also use Debian testing to get frequent updates. The primary issue there is that Debian testing actually gets security updates later than Debian stable.


Unity had a rough start, but grew to be a pretty nice DE. I actually remember it warmly every time I see how Gnome3 eats 3 horizontal bars of screen space, placing a freaking WATCH on the center of the top bar.

Seriously, a whole bar for a WATCH? How come good old Gnome2 did it better 15 years ago, and had a terrific hierarchical menu to boot?


Do you mean the clock? I don't like it there either. I'm used to looking in the corner.


Unity was a good DE, and I have come to appreciate their design, which even today differentiates from standard GNOME. Also recall that it was made to be one DE for all: Phone, tablet, desktop. The idea was that you would have an Ubuntu phone, dock it, and use it as your PC!

And that is doable now, considering Thunderbolt. Hell, Oneplus should try to push OxygenOS to be tablet-like and this would set them apart from everyone.

Upstart was started alongside or even before systemd, if I recall correctly.


> I strongly recommend anyone similarly frustrated to check out debian, which is a fantastic distro.

It is true that the Debian people are doing a great job.

> [...] if you're using Ubuntu and disabling snap, you're fighting against the current and I have to imagine it's going to be increasingly difficult with subsequent releases.

Actually, snap was harder to remove in the previous release: you had to rebuild certain packages (actually, just pulseaudio, so it only matters for desktops) to get rid of the dependency, but it seems now that it's just a couple of apt commands, so you have to give Canonical credits for making it easier.
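For reference, on 20.04 the removal boils down to something like the following (a sketch; the snap names below are just examples of what a default install carries, check `snap list` for yours):

```shell
# See which snaps are installed, then remove each one before purging snapd
snap list
sudo snap remove gnome-3-34-1804 gtk-common-themes core18   # example names

# Purge snapd itself and clean up its leftovers
sudo apt purge snapd
rm -rf ~/snap
```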


Now, this is embarrassing; here is the list of reverse dependencies on snapd:

  python3-ubuntu-image
  xubuntu-desktop
  xubuntu-core
  vanilla-gnome-desktop
  ubuntustudio-desktop-core
  ubuntustudio-desktop
  ubuntukylin-desktop
  ubuntu-unity-desktop
  ubuntu-snappy-cli
  ubuntu-snappy
  ubuntu-mate-desktop
  ubuntu-mate-core
  ubuntu-core-launcher
  ubuntu-budgie-desktop
  snapd-xdg-open
  snapcraft
  snap-confine
  qml-module-snapd
  plasma-discover-backend-snap
  lxd
  lubuntu-desktop
  libsnapd-qt1
  kubuntu-desktop
  ember
  cyphesis-cpp
  chromium-browser
  ubuntu-server
  ubuntu-desktop-minimal
  ubuntu-desktop
  ubuntu-core-snapd-units
  livecd-rootfs
  maas
  apparmor
  libsnapd-glib1
  gnome-software-plugin-snap
  command-not-found
Any of these packages is going to pull snapd in if installed. Soon after writing the above comment, I decided to install chromium, and ... snapd got installed as well as a result. I guess I should double check each claim I am about to make, BEFORE making it.

sigh...

Edit: Please note that many of these are "leaf" packages, by which I mean that no other packages depend on them.
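If anyone wants to reproduce a list like this, apt-cache can generate it, and can also show whether a given entry hard-depends on snapd or merely recommends it (which decides whether --no-install-recommends helps):

```shell
# Everything that declares a dependency (of any strength) on snapd
apt-cache rdepends snapd

# Check whether a specific package has snapd in Depends vs Recommends
apt-cache show chromium-browser | grep -E '^(Depends|Recommends):'
```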


> Now, this is embarrassing; here is the list of reverse dependencies on snapd:

Not at my computer now, but that cannot be correct. Could recommended packages be included in that list?

I did apt purge snapd on my Xubuntu 20.04 and it did not pull out anything unrelated.

When you install something later and it tries to pull snapd in, you can always try --no-install-recommends.

In the worst case there is equivs https://wiki.debian.org/Packaging/HackingDependencies but it has been 3+ years since I last needed to use it.
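Roughly, the equivs trick looks like this (a sketch: you build an empty dummy package that impersonates snapd, so apt considers the dependency satisfied; the file names are examples):

```shell
sudo apt install equivs
equivs-control snapd-dummy.ctl
# Edit snapd-dummy.ctl: set "Package: snapd" and a suitably high "Version:"
equivs-build snapd-dummy.ctl
sudo dpkg -i snapd-dummy_*_all.deb   # apt now believes snapd is installed
```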


Many of those are also the main meta packages for specific flavors.


Which means that removing them (when removing snapd) doesn't matter much.


Can't you hold snapd, and then install with force-depends?
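As far as I know, holding mostly blocks upgrades; to be sure apt never pulls snapd in as a dependency, a negative pin works (this is the approach some snap-free derivatives ship by default):

```shell
# A Pin-Priority below 0 means "never install this package"
sudo tee /etc/apt/preferences.d/nosnap.pref >/dev/null <<'EOF'
Package: snapd
Pin: release *
Pin-Priority: -1
EOF

# dpkg can then force past an unmet dependency for a single .deb if needed
# ("some-package.deb" is a placeholder)
sudo dpkg -i --force-depends some-package.deb
```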


Give them credits to fix what they have broken?


Well, tbh, that sounds a bit far fetched. I meant that we should cut them some slack (English is not my first language).

That being said, it's not as bright as I thought it was - see my other comment.


It's not broken. People prefer not to use snap and want to remove it. But overall snap works, and apps installed via snap work.


The one I'm currently fighting server-side is netplan. I want to use systemd-networkd directly, since it exposes a lot more features than netplan, but getting netplan to stop intervening is a ballache. Like, it's not a systemd service, it has to be disabled on the kernel command line?!


Yeah I was trying to get systemd-networkd to handle hotplugging, and I spent about a day trying to figure out how to describe a computer with one Ethernet port to netplan before giving up and removing the whole package. At which point systemd-networkd started working beautifully as-expected. For a system administration tool netplan is wonderful at making the simple things needlessly complicated.


I've found that if you delete /etc/netplan (just making sure this is at least empty seems to be the most important part) and /var/run/systemd/network netplan doesn't really seem to do anything. My org has been using systemd-networkd directly after doing that for about a year and it's working fine for us.
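A sketch of that switch-over (the interface name enp3s0 is an example; check yours with `ip link`):

```shell
# Leave /etc/netplan present but empty so netplan has nothing to generate
sudo rm -f /etc/netplan/*.yaml
sudo rm -rf /run/systemd/network    # drop previously generated units

# Describe the interface to systemd-networkd directly
sudo tee /etc/systemd/network/20-wired.network >/dev/null <<'EOF'
[Match]
Name=enp3s0

[Network]
DHCP=yes
EOF

sudo systemctl enable --now systemd-networkd
```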


Yes, I've done that and it definitely works, though I've still got "netcfg/do_not_use_netplan=true" in my cmdline for good measure. Not sure if it does anything though, or even where I got that from, come to think of it.

But still, emptying a directory is not how I expect to disable what is a system service. It should be systemctl disable netplan...

I think emptying would be better than deleting the directory since it'll probably just get recreated on an update.


It's because netplan uses a systemd generator, /lib/systemd/system-generators/netplan, to parse the files in /etc/netplan and generate systemd-networkd configs from them on boot (like how /etc/fstab gets parsed into systemd mount units). It would be nice if there was a flag file or something you could touch to disable it though.


> upstart [...] failed shit they've come out with?

upstart was pretty good. It just lost the popularity contest with systemd. I'm not sure if there's really anything serious to complain here about.


There were some serious complaints about upstart. Notably, jobs could get stuck in a bad state that was difficult to recover from. This was never fixed. See https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/96420..., https://bugs.launchpad.net/upstart/+bug/406397 and http://www.markhneedham.com/blog/2012/09/29/upstart-job-gett....


> I don't want to run that garbage on my desktop

Please read the HN guidelines: "Please don't post shallow dismissals, especially of other people's work". Link at the bottom-left.


It's not that bad. Linux threads attract the biggest drama queens in the galaxy.


>upstart

Do you know Red Hat and Google were using upstart? These companies must be hyper-incompetent if they run garbage.


I agree. The last Ubuntu release I actually liked was 12.04? After Unity I've been fighting Ubuntu ever since.

I'm seriously considering not taking the 20.04 LTS release and either using 18.04 until it's untenable or switching to something else.


Yep, this hits the nail on the head. There was some kind of fundamental shift around that time. Running other GUIs has been a bandaid, but the underlying thought processes that led to Unity are still clearly at work. I'm not even going to try out new Ubuntu releases anymore.

I think this is OK though. If Ubuntu were better thought out, the Linux ecosystem might be less vibrant than it is. I would like to think other distros are learning from these failures. Personally I've had excellent luck with Scientific Linux (CentOS-based).


Mint is also a possibility. They don't use snaps but flatpaks.


Netplan is another fail... although I love Ubuntu otherwise


Numpty. Minimal. End of.


I really like the idea of snaps. I really don't like a lot of the execution decisions around them.

Snaps break Debian's stable release model. They allow upstream to ship updates outside of the normal 6-month Ubuntu releases. There are times when you might want this, but it should be opt-in, not mandatory. I'm thinking specifically of lxd, which is only shipped via snaps.

The snap store's trust model is confusing. Its hard to tell who is making the packages and how they are sandboxed. If I'm going to install a proprietary piece of software I want to know exactly what it can and can't do. Lately I've been using firejail when I need to run things like this.

And now for a minor complaint that also feels most user hostile to me: why do the snap developers think its ok to require a non hidden directory in $HOME? Seriously my home directory is MINE, if you have to store application state there at least have the decency to do it in a hidden directory.


>And now for a minor complaint that also feels most user hostile to me: why do the snap developers think its ok to require a non hidden directory in $HOME? Seriously my home directory is MINE, if you have to store application state there at least have the decency to do it in a hidden directory.

4 years old now.

https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1575053


Unrelated to Snap, but on Windows, trying to use the My Documents folder for your actual documents is frustrating, because a huge number of apps and games like to use the folder for all sorts of nonsense, instead of using %appdata% like they're supposed to.


At least on Windows you can mark folders hidden without having to change the name. (i.e. the app that created the folder will still find it)


Create the file ~/.hidden with the line "snap". It will then be hidden from your file manager.
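In other words (this relies on the file manager honoring the freedesktop ~/.hidden convention; GTK-based managers like Nautilus do, but plain `ls` does not):

```shell
# Append "snap" to ~/.hidden so supporting file managers hide ~/snap
echo snap >> "$HOME/.hidden"
```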


Dot-prefixed files are a kludge that has become a standard, rather than an actual hidden attribute like on Windows/NTFS.

The actual heritage is that when `ls` was written to exclude the . and .. directories from its listing, the code literally just excluded any file that had a dot/period as its first character. Later, operators exploited that bug/feature to create items with a dot prefix to make files and folders hidden. Eventually this became convention.

Whereas systems released more recently understood the need to have a hidden file attribute so made it a first class property.

I do love POSIX and often the first to defend some of Linux or BSDs idiosyncrasies but if there was ever anything that was worth someone saying "I know we've always done it this way but it's shit and it's about time we implemented a proper solution", it would be the dot prefix kludge. (I also know it's easier said than done and would take years for all the tooling to catch up)


If it's still visible in ls, it's not a fix.


Make an alias that adds ls --hide the same way you do for ls --color.
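For example, in ~/.bashrc (GNU coreutils ls; note that `ls -a`, shell globs, and other tools will still show the directory):

```shell
# Hide ~/snap from plain `ls` output, keeping color
alias ls='ls --color=auto --hide=snap'
```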


The issue is that you have to continue "fixing" lots of individual utilities. In addition to your file manager and `ls`, what about tools like `fd`/`rg`/`ag` (requires adding `snap` to `~/.ignore` or `~/.gitignore`), `fzf` navigation in the terminal (may require updating `FZF_DEFAULT_COMMAND`), "open file" and "save file" dialogs in non-GNOME apps (like Zotero), and so on. The `snap` directory keeps popping up in new places, and discovering all the different ways to hide a non-dot-file is exhausting. In contrast, adding that initial dot to the `snap` directory would avoid the problem entirely.

I have still been using `snap` for the past year, and have mostly been happy with it. It works well for apps like `spotify` and `zotero`, it's a better solution than `ppa` for third-party software, and I like that it tries to introduce a permission model. But not following the dotfile convention is annoying, and not prioritizing a search-and-replace of `~/snap` to e.g. `~/.snap` in the code after four years is strange when this is their most upvoted bug.


This would be nice if it was easy to toggle showing hidden files and hidden files wouldn't show up by default in all folder listings.

Of course you do have system files, which are hidden from all standard folder listings, but aren't usually used by software, presumably because they're too damn difficult to find again (although I really hope google drive realizes at some point that desktop.ini is supposed to be a system file, not merely hidden).


AFAIK one common reasoning is that it is a writable folder that is easy for the users to find when they need to contact support.


This behavior is so normal for all the apps (especially games) that at one point I thought it was a specification.


I feel like in 2000 or before, My Documents was the "home" folder and you were supposed to put your own subdirectories inside of that. Might be wrong about that though.


The problem in Windows' case is that there never was a clear specification regarding what kinds of files should go where. The directory structure and environment in every version of Windows is a little different, and it's hard to write an app (or an installer) that does the right thing on all of them.


As long as you follow KNOWNFOLDERID, nothing has changed since Vista, and the CSIDL constants go even further back.

https://docs.microsoft.com/en-us/archive/blogs/patricka/wher...

https://docs.microsoft.com/en-us/windows/win32/shell/csidl

.NET has its own set which is also very old and goes back to .NET 1.0 (as old as Win98) https://docs.microsoft.com/en-us/dotnet/api/system.environme...


I think the combination of complexity and obscurity of those two links reinforces my point more than it refutes it. There's a reason why basically no one gets this stuff 100% right, including some of Microsoft's own first-party applications.


Does anybody know why lxd is shipped via snaps? It's kind of confusing, because it seems like snap should be built on top of lxd. If you're running a container inside of lxd inside of snap, does that mean it's two layers of sandboxing?


My understanding is that the actual features needed to provide containers are part of the Linux kernel. What LXD provides is tooling and a daemon running as root that are used to manage containers. There's no reason why these can't be part of a snap, it doesn't introduce another layer of sandboxing for the actual containers.

(That said, I would still prefer that they not be.)


> Snaps break debain's stable release model. They allow upstream to ship updates outside of the normal 6 month ubuntu releases.

In fairness, I think that's the point, just like RHEL streams.

> There are times when you might want this, but it should be opt in not mandatory. I thinking specifically of lxd which is only shipped via snaps.

Yeah, decent idea, poor execution. Especially for system/infrastructure software like LXD. I want a stable kernel, a stable glibc, a stable LXD/Docker/k8s/whatever, and then on top, bleeding-edge applications that will not break the world if they autoupdate at random to a less-tested version.


I don’t have any experience with snaps so this is all new to me. Snaps install software at a system level right? Why on earth would there be anything in a user home directory? What if you have multiple users? Does snap get confused?


Snaps run out of a squashfs partition, but the app may save app config in the user's home directory (if they request the HOME interface, I believe).
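You can check which interfaces a given snap was actually granted; the `home` interface is the one that governs $HOME access (assuming a reasonably recent snapd; older versions used `snap interfaces` instead):

```shell
# List granted and ungranted interfaces for a snap, e.g. vlc
snap connections vlc
```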

After fighting with node, vlc and ffmpeg snaps not working reliably, I had to add a warning for my PhotoStructure users on Ubuntu to avoid those packages (and how to install them via apt).

I agree with a bunch of people here: snap promises some great features, but in practice the horrible performance, additional resource consumption, and spotty reliability led to me actively avoiding snap packages and looking for an alternative installation.


Snap stores user-specific application data in the /home/user/snap/ folder. When a packaged application wants to write into $HOME, snap gives it a sub-folder inside /home/user/snap instead.
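For illustration, here's a rough simulation of that layout. The snap name "myapp" and revision "42" are made up, and snapd actually does the remapping with mount namespaces and security policy rather than an environment variable; this just shows where the writes end up:

```shell
real_home=$(mktemp -d)               # stand-in for /home/user
app_home="$real_home/snap/myapp/42"  # hypothetical snap name and revision
mkdir -p "$app_home"

# The confined app believes this is its $HOME and writes config there:
HOME="$app_home" sh -c 'echo data > "$HOME/.myapprc"'

# ...but in the real home it lands under ~/snap/myapp/42/:
cat "$real_home/snap/myapp/42/.myapprc"   # -> data
```

On a real system you can poke at this with `snap run --shell <app>` and then `echo $SNAP_USER_DATA`.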


Ah ok I understand now. Sibling comments reference the MacOS Library and *NIX ~/.local approaches. Both of those seem to make a lot more sense. Is there a config for Snaps to store user data in a different path, like ~/.local/snap/?


No; they use some magic with security policies and mounts, so $HOME/snap is hard-coded. Some of the early comments in that "4 year old bug" linked here explain it.


Most likely they download some things when you first run the software. This is pretty common but usually opaque to the user, due to it being in .local or .cache.


I've always liked OS X's concept of the Library folder, particularly back in Snow Leopard when it wasn't hidden by default.

Applications need a place to save state, and you don't really want it to be hidden, because sometimes I as a user do need to change it manually for any number of obscure reasons. So you make a subfolder that's designated for apps to write to.

There's no good way to put this into the existing Linux model though. All you'd do is add yet another standard a la https://xkcd.com/927/


You as a user can still access hidden directories. Most new applications are nice enough to respect the XDG specification.


Oh, for sure, but it's mildly annoying.

Especially when a . is used to make the directory hidden, because I can't unhide that specific directory without changing the name apps expect.


You can press cmd-shift-g to open a "Go to folder" prompt which has some basic tab completion. This also works in file selection dialogs. Hopefully this saves someone as much time as it has saved me over the years.

I get into the Library folder by doing: cmd-shift-g -> ~/Library -> enter


Cmd-shift-. will show hidden files and folders in the Mac Finder before Catalina.

In Catalina, it's now Cmd-shift-fn-.

https://discussions.apple.com/thread/250756110


Whoa, even better tip. Thanks!


What? Of course you can!

$ ln -s .hidden not_hidden

It's that easy. Really.


Well, that's a symlink. It's a good idea, and something I should remember if I ever do start using Linux again—but the downside is that if I do need to temporarily enable hidden files, I'll see the same directory twice.

I'm being picky, I know! But I still think the Library approach is cleaner. (Again, back when it wasn't hidden by default!)


Not exactly the same, but the closest Linux equivalent that's semi-widely used would probably be ~/.local


And the problem is gigantic; programs in general are completely broken. Why does my entire system have to be compromised because I decided to play Catan? We need the iOS/Android model for permissions on the desktop.


> We need the iOS/Android model for permissions on desktop.

This model is terrible. The OS asks up front: the program wants permissions to do A, B, C, X, Y, Z: grant or refuse? You the user decide that the program should not be allowed to do X, so you refuse. Now the program will not run at all. That's about the worst design possible; a 10-year-old could brainstorm a better design within a few minutes.

Investigate prior art:

    man 7 apparmor
    man 4 capsicum
    man 2 pledge


At least on recent Android versions the apps only ask for permissions when they actually need them, not on install time, and you can grant/refuse individual permissions.


This is also how iOS has worked since forever. I think it's helpful, but not that helpful.

In practice, apps that want access to my contacts usually keep asking for access every time I try to do anything, until I eventually either relent to make the prompts go away or click allow by accident.

And that's me as a computer-enthusiast. I would bet money most normal people just hit allow always. Because it's easier.


The missing piece is that "refusing" a permission should instead give dummy data: an empty address book, a random location, etc. Better would be profiles: Full, Basic, Public, Anonymous, etc.


I'm not necessarily against this but there are some pretty major UX implications for non-tech users who don't understand what's going on. "Who is Foo Smith and why does this app think I know them?"


In Android 10 you "only" need to deny a permission 3 times until it is auto-rejected without a prompt.

Some apps even try to circumvent this system by showing a "help" screen with instructions on how to re-enable the permission manually, but I only saw this twice.


> At least on recent Android versions the apps only ask for permissions when they actually need them, not on install time, and you can grant/refuse individual permissions.

Still, this is not practical enough. A true user-centric strategy would be to offer "mock-permissions" to an app, so that if an app says that it needs to read your home dir, you grant mock permission to the app, and it sees an empty home dir, not yours. From the point of view of the app, it should be impossible to know if it has been granted the real permission or just a mock permission.
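A minimal sketch of the idea, using a throwaway $HOME. A real implementation would use mount namespaces (as firejail's --private does) so the app cannot detect the substitution; overriding an environment variable is only an illustration and is trivially bypassed:

```shell
# The app runs normally but sees an empty home instead of the real one:
mock_home=$(mktemp -d)
HOME="$mock_home" sh -c 'ls -A "$HOME"' | wc -l   # -> 0 entries visible
rmdir "$mock_home"
```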


Do we really?

People still just click "allow" on everything. They get tired of being asked questions and just want the program to work, so they don't even read anything and just tap, tap, tap until it lets them through.

Access your files? Sure. Access your documents? Whatever. Send data off to our corporate data vacuum? Whatever, I need to see what my face will look like with an AR moustache!

Also, are you running Catan as root or something? How is it able to compromise your whole system?


I agree with you on everything but the last two sentences. An unprivileged user account still has a ton of power in a modern Unix or Windows system, because the security model doesn't really solve this problem: https://xkcd.com/1200/


That's one thing that snaps are trying to do. I personally prefer firejail for application sandboxing and permissions, but then you have to be extra careful how you install the application (dpkg post-install scripts have been known to do sketchy things), and you have to make sure you never accidentally run the application without firejail.


I wanted to like snaps (and flatpaks before then), because trying to ease packaging and deployment of apps on Linux is a noble goal, but in both cases I eventually gave up on non-trivial use of them because they were always broken in some annoying way because of their sandboxing.

The latest snap I had to get rid of was Visual Studio Code, because I was trying to work on an open source game with it, and I found out that if I launched the game from inside Visual Studio Code, my game wouldn't play sounds because it couldn't communicate with PulseAudio, and attempting to use ALSA just straight up gave me an error.

On the other hand, I've only had positive experiences with AppImage. Gives you an all-in-one image that you can directly execute if you like, and no sandboxing nonsense.


AppImages are a disaster waiting to happen. They are like Windows MSI installation files just downloaded from somewhere, running without any confinement.


You can run the AppImages you don't trust with the Firejail sandbox which has native support for AppImage.

https://firejail.wordpress.com/documentation-2/appimage-supp...
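For example, something along these lines. The flags are from firejail's man page (--appimage mounts the image, --private gives a throwaway home, --net=none disables networking); ./SomeApp.AppImage is a placeholder name, and the guard keeps the snippet from failing on systems without firejail:

```shell
if command -v firejail >/dev/null 2>&1; then
  # Untrusted AppImage: sandboxed, empty home, no network access
  firejail --appimage --private --net=none ./SomeApp.AppImage
  ran="firejail"
else
  ran="skipped (firejail not installed)"
fi
echo "$ran"
```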


This is a nice workaround I guess.


I'd call it following the Unix philosophy instead of a workaround.

> Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new "features".

Both AppImage and Firejail do one thing and do it well, you can easily combine both and get what you want.


SELinux does the job better though if someone just configures it. AppImages should be confined automatically IMO.


Sounds like a perfect addition to a repository-based solution like apt, which is preferable but sometimes too heavyweight. Basically a more powerful binary tarball.


Gnome-Builder, the official Gnome IDE, builds flatpaks. I'm not sure how well it works, but the idea is that one should imagine this brave new world of sandboxed Linux apps similarly to how IDEs could work on smartphones. In the end the IDE must either be blessed with special powers or build packages for the sandboxing system; Gnome-Builder seems to take the latter direction.

I had run it via X11 forwarding (I had a contusion and used a puny laptop to connect to my desktop), and it was not a smooth ride. That's not an entirely representative experience, but it shows that it's not all painless yet.


AppImages are definitely the best; I just wish they had an easier way to integrate with the system. I know about appimaged, but I had problems setting it up, and it's just not as easy as integrating Flathub into Discover, or getting your packages from the CLI via your distro's repositories.


I didn't get even that far with VS Code snap.

I realize I'll have to upgrade from Ubuntu 16.04 soon, so a while ago I installed 19.10 to see roughly what to expect from 20.04, as there are quite a few changes. Trying things out, I installed VS Code from the Ubuntu Store; I was quite disappointed to see things going south right on the Welcome page, as the snap version of VS Code couldn't even open a web browser (instead, Firefox promptly crashed every time).

Of course, being able to open a web browser from VS Code isn't a necessity, but the install just seemed rather broken by default; surely almost no one could have used it much without noticing a simple thing like that right away. And the thought that the problem might have been noticed and simply shrugged at didn't really make me want to find out what else didn't work.


Snaps certainly have some usability problems. E.g., to use videoconferencing in Chromium I had to explicitly give it the right to record audio.

I expect you needed to do something like that for the VS Code snap.
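If memory serves, it's done with snapd's interface plumbing; something like the following, where "audio-record" is one of snapd's standard interface names (guarded so it no-ops on systems without snapd, and using non-interactive sudo since connecting needs root):

```shell
if command -v snap >/dev/null 2>&1; then
  # Show which audio plugs exist and whether they're connected:
  snap connections chromium | grep audio || true
  # Grant microphone access explicitly:
  sudo -n snap connect chromium:audio-record || echo "connect failed (needs root)"
else
  echo "snapd not available on this system"
fi
```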


I did that too, and it works for one minute, every time I start the browser. Firefox it is.


That's weird. I've been doing Zoom through web browser multiple times for hours since then.


I used to use it for daily webex until the snap broke it.


It used to work, but with the latest update audio doesn't work anymore in Chromium (although it does see the device). I'll have to report a bug. (snap connections does show audio-record connected.)

