Since 16.04, Snaps have been a huge pain for me when running LXC in production environments.
By default, Snap applies updates and restarts Systemd services anytime it likes, and there's no way to turn this behavior off! The only way to get around it is to download the Snap package binary and install that directly. Then Snap won't "know" where to get updates.
(Caveat emptor: "Workarounds" like this can easily lead to a bad security scenario, since any critical security patches won't be installed by any standard system update process)
Did I mention that a fair percentage of the time the Snap updates would leave LXC in a completely broken state? In production (and development, too)!
The final nail in the coffin: Snap is the officially recommended way to install LXC. I don't know if Stéphane and friends even publish Debian packages anymore.
I get the idea behind Snap and appreciate it, but between the lack of configurability and no clear definition of what "stable" really even means, we had no way to:
1. prevent machines in the fleet from pulling the broken LXD update
2. roll back broken machines to the previously working LXD version on the same channel, since it no longer existed in the Snap Store™.
What a joke! Now we're burnt on snap _and_ LXD.
Some developers are trying to package for Debian. The work is in progress at https://wiki.debian.org/LXD
The following distributions package LXD (that I know of):
* Void Linux
* Alpine Linux
* Arch Linux
Some of those are more suitable for production installs than others, but if you know what you are doing and manage your deployments well, all of them could work.
I've been looking into .deb packaging for Caddy but it really feels like they require us to jump through too many hoops to make it happen. I'd much rather just ship a prebuilt binary.
Security. Dynamic linking is always going to be better than static linking here. Do you trust upstream to keep track of security issues and rebuild everything in the absurd dependency tree that is Golang software? Or the multi-year effort that is the current Debian security team?
The reason why it's absurdly complicated is solely on the Golang team, not Debian.
It's clearly not production stuff, but it works very well for a quick test bed before production deployments.
I really have no idea what people are thinking sometimes.
The automatic startup of newly installed services (before you even have the opportunity to configure them) on Debian (and derivatives) has always bothered me and is a (small) factor in my decision.
Fortunately, there are a few "workarounds" to prevent newly installed services from being automatically started!
First, there's the brute-force / heavy-handed approach -- override the systemd presets to set all services to disabled by default:
$ cat /etc/systemd/system-preset/00-disable-all-services.preset
disable *
Second, there's the targeted approach -- mask the specific service before installing it, then unmask it once it's configured:
$ ln -s /dev/null /etc/systemd/system/nginx.service
$ apt install nginx
# ... configure nginx as desired ...
$ systemctl unmask nginx
$ systemctl enable nginx
$ systemctl start nginx
(I'd recommend avoiding the latter entirely and going with whichever of the first two options better suits your needs.)
EDIT: Just remembered another method but I'm not sure if it still works:
$ RUNLEVEL=1 apt install nginx
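There's also the long-standing Debian mechanism built for exactly this: policy-rc.d. invoke-rc.d (which package maintainer scripts call to start services) consults it first, and exit status 101 means "action forbidden". A rough sketch -- on a real system the script lives at /usr/sbin/policy-rc.d and needs root; the demo root below is only so the sketch can run unprivileged:

```shell
# Sketch of the policy-rc.d mechanism (assumes a Debian-like system).
# In real use the script lives at /usr/sbin/policy-rc.d and requires root;
# ROOT is only here so the sketch can run unprivileged.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/usr/sbin"
cat > "$ROOT/usr/sbin/policy-rc.d" <<'EOF'
#!/bin/sh
# invoke-rc.d consults this before starting a service;
# exit status 101 means "action forbidden", so nothing gets started.
exit 101
EOF
chmod +x "$ROOT/usr/sbin/policy-rc.d"

# While /usr/sbin/policy-rc.d is in place, `apt install nginx` unpacks and
# configures the package but does not start the service. Remove the file
# (rm /usr/sbin/policy-rc.d) to restore normal behavior.
```

This is the same trick most Debian/Ubuntu container base images use to keep services from starting during image builds.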
In my experience, a cultural acceptance of work-in-progress (WIP) software is common in free software communities.
On the one hand, yeah it's harder to complain when something is only built in someone's spare time for free. On the other hand... is non-FOSS really any better? Has Windows yet managed to re-merge the old and new control panels?
The first is a fact of life. The second is bad judgement.
Even if people should be updating regularly, forcing them feels completely antithetical to the Linux ethos of users having control over their devices.
Since 2017: https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1575053
I feel bad for the maintainers; it must be difficult to deal with all the rude and borderline disrespectful comments -- particularly for something you're donating your time to for free, after all. But I must say that the architectural decision to create ~/snap was a colossal mistake.
It's not an argument. Its freeness is a completely irrelevant property.
I am on rolling releases already, but I'm not going to install an LTS version that is so deeply integrated with the distributor while at the same time ignoring the directory standards so incredibly blatantly.
I think the (not that bad) manual installation process might keep some folks from installing it, but it makes you learn to choose from the very beginning.
After the initial install, update it on your schedule, and install things as you decide they are needed.
Ubuntu is the opposite -- install the world, then go madly trying to disable and remove the phone-home, forced-update, bloat.
I'll also admit that I can't for the life of me get Arch to install properly with all the tools and features that are normally built into desktop environments working on first boot. I'm sure I could make it work if I gave it another try now that I've gotten more used to using Manjaro, but the Arch installation experience is not something I want to go through again any time soon.
I understand that most Arch users will want to decide exactly how they set up their system and all, but for my personal (non-work) machine I just want an operating system that works, allows me to mess with the standard Linux stuff and manages to install itself without me holding its hand. I'd much rather tick a box that says "enable full disk encryption" than manually configure cryptsetup and LUKS parameters. It's just too much effort for what I get out of it.
Ubuntu users may be more tech-proficient than average users, but even for developer machines, I think it takes a lot of concern out of the equation if the software is up to date.
As far as I'm concerned this doesn't deserve an '/s'.
With Ubuntu I just wait until the next LTS.
Due to LXC now being a snap, the file is simply not there.
I guess it's 5 layers deep in namespaces, overlayfs and other stuff.
I was so fed up with this that I removed Ubuntu (and replaced it with Gentoo).
I can't understand why they would break such a fundamental thing and seemingly not care about it.
(Same-but-different for enforcing non-interactive mode on installs.)
The idea with LTS is to be predictable and stable.
Put another way:
- desktop people want ALL UPDATES NOW.
- server people want NOTHING TO CHANGE EVER.
(yes, not exactly that, but kind of)
It is a seriously difficult thing to get right. I always applaud people who take on difficult problems and try to solve them, but I would not expect it to have any sort of robustness for another 5 years at least.
Dear snap developers: don't make the false assumption that every Linux user has their home directory at /home/username just because the standard Ubuntu installer puts it there. The only thing you should care about here is $HOME.
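For what it's worth, resolving this portably is tiny; a sketch in POSIX shell (the passwd-database fallback is only consulted when $HOME is unset):

```shell
# Resolve the user's home directory: prefer $HOME, and fall back to the
# passwd database instead of assuming /home/$USER.
home=${HOME:-$(getent passwd "$(id -u)" | cut -d: -f6)}
echo "$home"
```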
I uninstalled it on all my Ubuntu systems and put it on hold. No new Ubuntu installs, switched to Debian.
I think I am going to have to start figuring out a plan to migrate them all over to RHEL...
I don't know how it must be installed on your version, but on 19.10, snap can be completely removed: the only drawback is that pulseaudio depends on it, but I guess that for a server it's not that much of a problem.
For context: https://forum.snapcraft.io/t/disabling-automatic-refresh-for...
Snap developer responses are hilarious. No matter what your use case is, Snap developers know better than you, you silly irresponsible sysadmin/user. Snap is basically just another App store.
I actually really like snaps and that's the only bad experience I've had. I don't know why LXC would come in that form, very weird.
I strongly recommend anyone similarly frustrated to check out debian, which is a fantastic distro. Thanks to Kevin for posting this, but if you're using Ubuntu and disabling snap, you're fighting against the current and I have to imagine it's going to be increasingly difficult with subsequent releases.
I tossed Gentoo and KDE (this is not a Gentoo endorsement, it was just a "hey I wonder what Gentoo's been up to in the last dozen years since I last used it") on a spare laptop. It turns out that KDE is amazing now. It's seriously the best DE I've ever used, and I'm a Mac user! (Half of the utilities I install out of the box on a fresh macOS are built in, and the annoying stuff that used to be editing arcane files is now easy preference settings. It's actually great.)
What the hell are Ubuntu doing shipping Gnome (with the ugliest custom theme known to man, to boot)? Admittedly it was my own ignorance, for which they are not responsible, but their mindshare and bad choice tainted my whole view of the state of the art for a long time.
That's been the case with KDE for 15-20 years now. KDE 3.5 was a great environment (and Trinity (TDE) is a modernized fork of it).
Note that, this year, KDE added telemetry to their Plasma desktop environment. Of course, it's opt-in, so it must be acceptable, right? Well, of course, users who objected to the telemetry found bugs that caused data to be recorded even when disabled.
KDE's response was to ban said users from reddit.com/r/kde and call them "paranoid schizos." (The mods there are KDE members wearing "KDE developer" flair, not random Redditors.)
So, despite using and recommending KDE for almost 2 decades, it's hard for me to do so any longer. I wholeheartedly recommend checking out TDE instead.
If I dabble with debian or kubuntu I will make sure to mitigate it, thank you for making it known, keep up this kind of good work!
It’s a real shame that they found it necessary to even build a telemetry client. I switched to free software after years on macOS (I remember upgrading to System 7) because of all the phone-home that Catalina STILL does even with iCloud, Siri, analytics/crashes, Screen Time, iMessage/FaceTime, Location Services, ntp, App Store, and software update all disabled.
iOS apps in the App Store are all allowed to phone-home like mad, too, and Apple permits this on the basis of you “opting in” to it in the App Store TOS, as if we have any sort of choice on iPhones. It’s totally endemic, and entirely undermines the credibility of Apple’s claims to caring about user privacy.
Seems like spying on users is getting heavily normalized these days. :(
1. KDE's telemetry is opt-in (as you mentioned)
2. All data is anonymized
3. You can select which information you're comfortable sharing
4. If you have it turned off, the data is only recorded on your local disk; nothing is transmitted.
5. That data was always recorded in order to enable other features like "Recently used documents".
Here's some explanation:
Honestly, KDE has been one of the most open, privacy-aware, and idealistic communities out there. The criticism is quite unfair.
(I used to be a KDE contributor)
> users who objected to the telemetry found bugs that caused data to be recorded even when disabled.
> KDE's response was to ban said users from reddit.com/r/kde and call them "paranoid schizos."
I agree. The data should not even be generated if the analytics are not opted-in-to.
I do give them credit for making the system opt-in. They deserve that.
Do you have examples of what you mean?
It sounds a bit hyperbolic, but I can't recall a time when telemetry was added to a system and then later dialed back in any way. It always increases.
>[...] To show our commitment to this dual licensing model, the KDE Free Qt Foundation was founded in 1998. It is a non-profit foundation ensuring that Qt will always be available under open source licensing terms. The agreement that governs this foundation has stayed mainly unchanged over the last 17 years. As a lot of things have changed during these years, we have been working with KDE over the last year to create a new and updated agreement that takes todays realities better into account.[...]
The updated contract mentioned in that blog post is available here: https://kde.org/community/whatiskde/Software_License_Agreeme...
It was like a different gnome - quickly reaching the desktop and lots of nice differences (like the privacy menu wasn't crafted by marketing and legal)
That’s a bit rich: aren’t they the #1 consumer distro? That hardly suggests they are failing to execute. A successful product has missteps; so what.
> I don't want to run that garbage on my desktop.
So don’t. Why complain that others do? I use Ubuntu because it works and I can mostly find information about how to do what I want. There are major aspects of Ubuntu I don’t like (Gnome, Snap) but selecting a distro is all about choosing your compromises. I have tried Debian and other distros, but I tend to go back to Ubuntu because it works best for me.
People complain like this because they have no real control of their own lives. It makes them feel smart, if only they were in control, then things would be better. It would be so easy, the people in charge must be stupid. It comes from a lack of experience and the inability to understand the challenges in those positions.
The shortcomings of snap are well documented. I assumed anyone reading the comments would have been aware of them. But apparently some people will jump on any chance to virtue-signal.
All the hard work to make it a viable OS is done by Debian. Canonical just adds some polish and then wrecks it all with poor design decisions over and over again.
You could equally say that all the hard work to make a viable OS is done by Linux, so screw Debian? Last I hurd, the GNU-developed OS is unviable (no 64-bit, no SMP).
Or equally say that all the hard work to make the majority of end-user programs (you know, the raison d’être for an OS) is done by other open source projects, not Debian, so screw Debian...
These are open source projects, with cross-pollination everywhere, each with their own opinions on licensing. Ubuntu mostly helps the ecosystem, and certainly isn’t a parasitic player (although like all, they are not perfect).
Why bag on Ubuntu just because it happens to be popular? Should we also cancel all the other Debian based distros?
PS: complaining about upstart shows you are just being biased (or perhaps misinformed). Canonical were developing upstart before systemd was developed - and systemd was developed by RedHat. The main con given against upstart was not technical, but due to licensing. “In terms of overall feature[s] there is really rather little to distinguish upstart from systemd” https://wiki.debian.org/Debate/initsystem/upstart
No, you couldn't say that. Without toolchains, userlands, and packaging, a kernel is pretty worthless. The barest bones you can go is still gcc, linux, uclibc, and busybox. There is more code that goes into a computer running Linux than there is in the Linux kernel. By a wide margin.
FSF does fabulous work, which we are all appreciative of, but some decisions are peeing in the open source pool.
I think RMS creates unnecessary division against Linux and Linus for what I feel are poor reasons. I went to a lecture by him where he spent half his time being negative towards Linux and Linus (that felt like he was just pissed off because Linux was popular) and a bit because Linus had used the GPL2 (not trivial to change, and you don’t get change by attack). Being negative towards the people who are on your own side is wrong IMHO. It could equally be argued that Debian should be called Debian/Linux. Edit: I just found a quote from Linus about RMS that summarises what I wished to say here: “It's not passion for something, it becomes passion against something else.“ - http://torvalds-family.blogspot.com/2008/11/black-and-white....
PS: I totally admire RMS and his relentless idealism. He has given so much to the world, and the faults I see in him are interwoven with the strengths I see: I’m not sure the faults could be mitigated without badly weakening the virtues.
Android went from non GNU absolutism to GNU LGPL for its standard java library a few years ago:
FWIW the whole GNU/Linux pedantry bothers me too.
GNOME history: https://unix.stackexchange.com/questions/141114/what-is-the-...
To return to topic: “How to install Flatpak apps on Ubuntu 20.04 LTS”: https://jatan.blog/2020/04/25/how-to-install-flatpak-apps-on...
There was zero documentation on the change and the error you received attempting to install an extension was basically "operation failed". I discovered the cause only because there was an open bug about it on the Debian bug tracker where the maintainer refused to acknowledge the problem. Eventually, sane minds prevailed and that stupid patch was reverted.
So, unfortunately - you'll end up having to deal with people that refuse to look at things from the user's perspective no matter what distro you use.
And yes - I'm still bitter. :-/
That said, I think Debian's occasional messups are far less egregious and damaging than Ubuntu's though.
LTS on Ubuntu was always better than on Debian. Saying that Ubuntu is just a repackage of Debian is very short-sighted, especially on the security side: Canonical's security team is top notch.
They need to be -- for a long time, Ubuntu has shipped with EOL kernel versions: https://ubuntu.com/kernel/lifecycle
But what benefits Red Hat in the enterprise world is to their detriment in the consumer world. There's a reason the Debian/Ubuntu package ecosystem is richer and more featureful than RPM, and this is why Ubuntu dominates in the container space--because almost any piece of software that one could expect to have been packaged has been packaged as a .deb and already exists in the default package archives. I can't count the number of times I couldn't find an RPM--certainly not in the default repositories (RHEL, CentOS, or even Fedora), but not even in the third-party community repositories. And those that do exist are of lesser quality than the comparable .deb, for various reasons. (That is, the long-tail of packages is of higher quality for Debian.)
By pushing Snap, Canonical is definitely going astray. Ubuntu's competitive advantage is the Debian package ecosystem. Both Canonical and Red Hat seem to underestimate the role and importance of their respective packaging ecosystems. How many projects to revolutionize or replace RPM/Yum/whatever at Red Hat have crashed and burned? Many, though it's hard to count because half-way through they often realize what they're trying to do is functionally or even technically impossible (as with their aborted 2017 plans for RPM package streams), and scale things back to iterative improvements.
Containers are a security nightmare, and pretty much the only reason to pay Canonical and Red Hat licensing fees is for security and bug fix maintenance of their package archives. On our large Kubernetes clusters at work there are thousands of open CVEs for the containers that are being run, and we'll have to boil the oceans to get them all updated, let alone keep them updated. But updating packages is as simple as an apt-get/yum upgrade, and rarely do you have to worry about anything breaking, especially relative to the pain that updating containers regularly brings.
 If the container uses Ubuntu, Red Hat, etc you can sometimes just rebuild the container to get the newer packages. But that assumes you control the container image. Most containers come from third-party, decentralized sources (that's the point!). But Docker Hub doesn't cajole and coordinate container owners to update their crappy images. It's no substitute for the orchestration of people that are traditional package repositories.
I'd expect most people tech-savvy enough to install Ubuntu would also have a decent enough internet connection to download a ~700 MB file.
This was also around the time I would often brick my primary (only) workstation for whatever reason. Having a properly mastered Live CD was super useful.
I would order at least two with every release cycle for a few years at least.
Thankfully, I saw the light early with Ubuntu-server, and stayed with Debian. Ubuntu-desktop makes for a good enough live / recovery / troubleshooting environment, but not sure I’d use it for anything more.
we installed ubuntu with friends in freakin junior high school. it's not like it's rocket science...
OTOH the village where I grew up as a child only got DSL > 512k around 2008 iirc.
So yeah, lot of people got those.
Just like Windows, which Shuttleworth admires and seeks to emulate so very much.
It's called a network externality. Microsoft achieved this with piracy based market penetration of dos and it worked great for them and they've built on it ever since.
Shuttleworth understood the importance of getting established as being popular when he launched and would press install disks, as many as you asked for, and ship them to you at his own cost. As one example of nakedly going after market share and spending resource to do so.
Bug #1 in the ubu bug tracker is literally "windows is the most popular os."
Separate to the marketing, which is worth discussing on this site because some of us actually care about what works and why so wish to discuss it, let's talk engineering decisions.
Between 2 choices that are technically about equal, choose the one that is more popular. Many feet trample more bugs. Better support. More likely to be around after $time_period. If it needs to work with something else, the managers of the something-else project will likely support the more popular one first, etc. Obviously popularity is not the only concern, but it really does count for something; ignore it as a dimension in your decision process at your peril. I use Linux, and Ubuntu as it happens on this laptop. I'm aware Windows and OSX are more popular and that popularity makes certain things easier. For /my/ purposes and to /my/ taste, Linux and Ubuntu are worth paying that cost to have installed here, and I'm very comfortable with that decision.
Just quietly, perhaps people who don't care for business decisions and engineering decisions are on the wrong website? There's plenty of places "boosters" can go to do that.
Snaps may be a pain sometimes, but Gnome seems to be working like a charm...
I wouldn't use any other DE at this point. KDE has always been cluttered, and XFCE is buggy and not particularly intuitive.
Nothing as big as a DE is perfect, thankfully it's broadly moving in the right direction.
Like a Juju one? :)
> That’s a bit rich: are they not the #1 consumer distro, which hardly implies they are failing to execute.
No, that's Debian doing almost all the work.
90% of the packages in Ubuntu are simply taken from Debian without significant modification.
But have you been following last year's improvements to GNOME's performance and responsiveness? A lot of it is Canonical's devs bringing their experience from Unity.
And Void Linux/ i3-gaps before that.
And Linux BBQ / OpenBox before that.
20.04 should be better still (I've yet to upgrade).
there is no need for latency-hiding animations and subsequently trying to make them run smoothly on the gpu if there's no perceptible latency.
I hear a lot of praise for Unity, and I'm the kind of person who enjoys trying out new stuff -- Linux desktops are no exception.
For me, Unity was broken because of alt-tab (behavior and lack of configurability).
It might work for everyone else, but when I want to switch back to the last or second-to-last thing I worked with, I want it done now.
I don't want to look at the tab switcher to ponder what to do next, just alt-tab, done.
This has worked consistently in every Windows since at least 3.1 (the first my family owned), and in every Linux desktop environment I've used except Unity and Gnome 3. And in Gnome 3 it was at least configurable.
This might seem trivial to a lot of you but to keep focus I keep one application maximized most of the time. I don't use them side by side. Then when I need to reference something (Jira, vendor documentation etc) I alt-tab. Same goes for slack.
I think this model is underrated, for all that it can sometimes be annoying. Consistency is valuable. Constant change is not good, even when the changes themselves are positive. But it does mean you'll sometimes be left with out-of-date software.
Edit: Oh, I should mention that you can also use Debian Testing to get frequent updates. The primary issue there is that Debian Testing actually gets security updates later than Debian Stable.
Seriously, a whole bar for a WATCH? How come good old Gnome2 did it better 15 years ago, and had a terrific hierarchical menu, to add?
And that is doable now, considering Thunderbolt. Hell, Oneplus should try to push OxygenOS to be tablet-like and this would set them apart from everyone.
Upstart was started alongside or even before systemd, if I recall correctly.
It is true that the Debian people are doing a great job.
> [...] if you're using Ubuntu and disabling snap, you're fighting against the current and I have to imagine it's going to be increasingly difficult with subsequent releases.
Actually, snap was harder to remove in the previous release: you had to rebuild certain packages (actually just pulseaudio, so it only matters for desktops) to get rid of the dependency. Now it seems it's just a couple of apt commands, so you have to give Canonical credit for making it easier.
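For anyone wanting to try it: after `sudo apt purge --autoremove snapd`, a negative apt pin keeps other packages from pulling snapd back in. This is a sketch of the same approach Linux Mint ships as nosnap.pref (see apt_preferences(5) for the pin semantics):

```
# /etc/apt/preferences.d/nosnap.pref
Package: snapd
Pin: release a=*
Pin-Priority: -10
```

With a negative priority, apt will never install the package; just remove the file if you change your mind.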
Edit: Please note that many of these are "leaf" packages, by which I mean that no other packages depend on them.
Not at my computer now, but that cannot be correct. Could recommended packages be included in that list?
I did apt-purge snapd on my xubuntu 20.04 and it did not pull out anything unrelated.
When you install something later and it tries to pull in something unwanted, you can always try --no-install-recommends.
In the worst case there is equivs https://wiki.debian.org/Packaging/HackingDependencies but it has been 3+ years since I last needed to use it.
That being said, it's not as bright as I thought it was - see my other comment.
But still, emptying a directory is not how I expect to disable what is a system service. It should be systemctl disable netplan...
I think emptying would be better than deleting the directory since it'll probably just get recreated on an update.
upstart was pretty good. It just lost the popularity contest with systemd. I'm not sure there's really anything serious to complain about here.
Please read the HN guidelines: "Please don't post shallow dismissals, especially of other people's work". Link at the bottom-left.
Do you know Red Hat and Google were using upstart? These companies must be hyper incompetent if they run garbage.
I'm seriously considering not taking the 20.04 LTS release and either using 18.04 until it's untenable or switching to something else.
I think this is OK though. If Ubuntu were better thought out, the Linux ecosystem might be less vibrant than it is. I would like to think other distros are learning from these failures. Personally I've had excellent luck with Scientific Linux (CentOS based).
Snaps break Debian's stable release model. They allow upstream to ship updates outside of the normal six-month Ubuntu releases. There are times when you might want this, but it should be opt-in, not mandatory. I'm thinking specifically of LXD, which is only shipped via snaps.
The snap store's trust model is confusing. It's hard to tell who is making the packages and how they are sandboxed. If I'm going to install a proprietary piece of software, I want to know exactly what it can and can't do. Lately I've been using firejail when I need to run things like this.
And now for a minor complaint that also feels the most user-hostile to me: why do the snap developers think it's OK to require a non-hidden directory in $HOME? Seriously, my home directory is MINE; if you have to store application state there, at least have the decency to do it in a hidden directory.
4 years old now.
The actual heritage is that when `ls` was written to exclude the . (dot) and .. (dot-dot) entries from its listing, the code literally just excluded any file whose first character was a dot/period. Later, users exploited that bug/feature, creating items with a dot prefix to make those files and folders hidden. Eventually this became convention.
Systems released more recently understood the need for a hidden-file attribute, so they made it a first-class property.
I do love POSIX and am often the first to defend some of Linux's or the BSDs' idiosyncrasies, but if there was ever anything worth someone saying "I know we've always done it this way, but it's shit and it's about time we implemented a proper solution", it would be the dot-prefix kludge. (I also know it's easier said than done and would take years for all the tooling to catch up.)
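The convention runs deep: shell globbing honors the same first-character check, which is easy to see in a fresh directory (a small demonstration, nothing system-specific):

```shell
# The dotfile kludge in action: globs and plain `ls` skip names that start
# with a dot, purely because of that historical first-character check.
dir=$(mktemp -d)
cd "$dir"
touch visible .hidden
echo *       # prints: visible
ls -A        # lists both .hidden and visible
cd / && rm -rf "$dir"
```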
I have still been using `snap` for the past year, and have mostly been happy with it. It works well for apps like `spotify` and `zotero`, it's a better solution than `ppa` for third-party software, and I like that it tries to introduce a permission model. But not following the dotfile convention is annoying, and not prioritizing a search-and-replace of `~/snap` to e.g. `~/.snap` in the code after four years is strange when this is their most upvoted bug.
Of course you do have system files, which are hidden from all standard folder listings, but aren't usually used by software, presumably because they're too damn difficult to find again (although I really hope google drive realizes at some point that desktop.ini is supposed to be a system file, not merely hidden).
.NET has its own set which is also very old and goes back to .NET 1.0 (as old as Win98) https://docs.microsoft.com/en-us/dotnet/api/system.environme...
(That said, I would still prefer that they not be.)
In fairness, I think that's the point, just like RHEL streams.
> There are times when you might want this, but it should be opt in not mandatory. I thinking specifically of lxd which is only shipped via snaps.
Yeah, decent idea, poor execution. Especially for system/infrastructure software like LXD. I want a stable kernel, a stable glibc, a stable LXD/Docker/k8s/whatever, and then on top, bleeding-edge applications that will not break the world if they autoupdate at random to a less-tested version.
After fighting with node, vlc and ffmpeg snaps not working reliably, I had to add a warning for my PhotoStructure users on Ubuntu to avoid those packages (and how to install them via apt).
I agree with a bunch of people here: snap promises some great features, but in practice the horrible performance, additional resource consumption, and spotty reliability led to me actively avoiding snap packages and looking for an alternative installation.
Applications need a place to save state, and you don't really want it to be hidden, because sometimes I as a user do need to change it manually for any number of obscure reasons. So you make a subfolder that's designated for apps to write to.
There's no good way to put this into the existing Linux model though. All you'd do is add yet another standard a la https://xkcd.com/927/
Especially when a . is used to make the directory hidden, because I can't unhide that specific directory without changing the name apps expect.
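For reference, the convention the rest of the Linux desktop settled on is the XDG Base Directory spec: per-app state goes under $XDG_DATA_HOME (or its well-known default) rather than a bare directory in $HOME. A sketch, with "myapp" as a placeholder name:

```shell
# XDG Base Directory convention: per-app data lives under $XDG_DATA_HOME,
# which defaults to ~/.local/share when the variable is unset.
app_data=${XDG_DATA_HOME:-$HOME/.local/share}/myapp   # "myapp" is a placeholder
mkdir -p "$app_data"
echo "$app_data"
```

This is exactly the convention that ~/snap ignores.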
I get into the Library folder by doing: cmd-shift-g -> ~/Library -> enter
In Catalina, it's now Cmd-shift-fn-.
$ ln -s .hidden not_hidden
It's that easy. Really.
I'm being picky, I know! But I still think the Library approach is cleaner. (Again, back when it wasn't hidden by default!)
This model is terrible. OS asks up-front: program wants permissions to do A, B, C, X, Y, Z: grant/refuse? You the user decide that the program should not be allowed to do X, so you refuse. Now the program will not run at all.
That's about the worst design possible. A 10-year-old could brainstorm a better design within a few minutes.
Investigate prior art:
man 7 apparmor
man 4 capsicum
man 2 pledge
In practice, apps that want access to my contacts usually keep asking for access every time I try to do anything, until I eventually either relent to make the prompts go away or click allow by accident.
And that's me as a computer-enthusiast. I would bet money most normal people just hit allow always. Because it's easier.
Some apps even try to circumvent this system by showing a "help" screen with instructions on how to re-enable the permission manually, but I only saw this twice.
Still, this is not practical enough. A true user-centric strategy would be to offer "mock-permissions" to an app, so that if an app says that it needs to read your home dir, you grant mock permission to the app, and it sees an empty home dir, not yours. From the point of view of the app, it should be impossible to know if it has been granted the real permission or just a mock permission.
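For the home-directory case you can approximate this today with no new OS support; sandboxes like firejail (with --private) do it properly via mount namespaces, but even a throwaway $HOME gets the idea across (a crude sketch):

```shell
# Crude "mock permission": run the program with an empty, throwaway HOME
# so it cannot tell it was denied access to the real one.
fakehome=$(mktemp -d)
HOME="$fakehome" sh -c 'ls -A "$HOME"'   # the program sees an empty home
rm -rf "$fakehome"
```

A real implementation would need the kernel's help (namespaces, seccomp) so the app can't escape via the passwd database or other side channels, but the user-facing principle is the same.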
People still just click "allow" on everything. They get tired of getting questions asked, and just want the program to work, so they don't even read anything and just tap tap tap until it lets them through.
Access your files? Sure. Access your documents? Whatever. Send data off to our corporate data vacuum? Whatever, I need to see what my face will look like with an AR moustache!
Also, are you running Catan as root or something? How is it able to compromise your whole system?
The latest snap I had to get rid of was Visual Studio Code, because I was trying to work on an open source game with it, and I found out that if I launched the game from inside Visual Studio Code, my game wouldn't play sounds because it couldn't communicate with PulseAudio, and attempting to use ALSA just straight up gave me an error.
On the other hand, I've only had positive experiences with AppImage. Gives you an all-in-one image that you can directly execute if you like, and no sandboxing nonsense.
> Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new "features".
Both AppImage and Firejail do one thing and do it well, you can easily combine both and get what you want.
I had run it via X11Forwarding (I had a contusion and used a puny laptop to connect to my desktop) and it was not a smooth ride, so it's not an entirely representative experience, but it shows that it's not all painless, yet.
I realize I have to soon upgrade from Ubuntu 16.04, so a while ago I installed 19.10 to see what to roughly expect from 20.04, as there are quite a few changes. Trying things out, I installed VS Code from the Ubuntu Store; I was quite disappointed to see things going south right on the Welcome page, as the snap version of vscode couldn't even open a web browser (instead, Firefox promptly crashed every time).
Of course, being able to open a web browser from VS Code isn't a necessity, but for some reason for me it just seemed like the install was rather broken by default, and surely almost no one could have used it much not to notice a simple thing like that right away – and the thought that even if the problem was noticed and simply shrugged at didn't really make me want to see what else did not work.
I expect you needed to do something like that for the VS Code snap.