Ubuntu stops shipping Flatpak by default (lwn.net)
409 points by chr15p on March 29, 2023 | 614 comments



Wow, they're still pushing snaps, the project with some of the worst engineering I've ever seen.

- Extremely slow at doing anything, even the most basic commands.

- Ridiculous auto-update mechanism (you can't even disable it, wtf; see the sketch at the end of this comment).

- Random, nonsensical limitations (why can't snapped apps open dot files and dot directories?).

So terrible that for most apps I installed as snaps, I ended up installing the deb version later on.

What an abomination; it is a devil that's hurting the Linux desktop every day.
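
For the record, the closest thing to disabling it (a minimal sketch; the hold command assumes snapd 2.58+):

    # hold all snap auto-refreshes indefinitely (snapd 2.58+)
    sudo snap refresh --hold
    # or just push the refresh window to a time you control
    sudo snap set system refresh.timer=sun,02:00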


My favorite experience with snaps is Firefox just closing in the middle of doing something because it wanted to update, without even bothering to check whether I was using it first.


This also happened to me. So I decided to uninstall the snap and “apt install firefox”

Guess what I got?

A freaking snap. Yes, try it.

I’m done with Ubuntu


Everyone's done with Ubuntu. It's just not good. It's got a stereotype at this point for being the easy noob distro, but that's not even true. It's awful top to bottom, and has been for many years.


Call me a noob if you like, but I don't like hunting down drivers. I tried to go to Debian on my last dev machine upgrade, but reverted straight back to Ubuntu. I may be lazy, but I really don't want to hunt down drivers. I'll try Debian again next cycle.


I feel like hunting down drivers hasn't been an issue on Linux for any relatively modern machine I've run in over 10 years.

If it really is something that you have had problems with, maybe try PopOS instead of Debian. Debian's restriction of non-free repos by default, out of principle, can sometimes get annoying when you need to install certain non-free drivers (looking at you, Nvidia), but PopOS is a really well-polished out-of-the-box experience that is trivial to install. Second to PopOS for a set-it-and-forget-it experience, OpenSUSE is a rock-solid distro that does not seem to get much praise.


>I feel like hunting down drivers hasn't been an issue on Linux for any relatively modern machine I've run in over 10 years.

Try installing any Debian flavor on an Intel Mac. Keyboard, mouse, Bluetooth, and wifi drivers are all incredibly hard to get working. You need to perform some voodoo: extracting the drivers from a macOS image, then making them available during boot.


That probably depends on what vintage machine you're trying to install it on. I'm typing this on a "late 2009" 27" iMac running just that, Debian. I did not have to hunt down any drivers or slaughter any black cockerels to get things working, the only "special" thing I did was install rEFInd [1] to deal with the EFI bootloader. That's it, nothing more. A simple network install later I had Debian running on the thing, sound and network and Bluetooth and wifi and accelerated graphics and all. The "iSight" camera works, the "remote control sensor" works, I can control the screen backlight, things... just work. With 16 GB of RAM and the standard 2TB hybrid drive the thing has years of life left in it as long as I can keep the graphics card running - it has been baked in the oven once to get it back to life, no complaints from me since I got the machine for free because of the broken graphics...

[1] http://www.rodsbooks.com/refind/


Debian is not the competitor to Ubuntu, Mint is.


Or another recommendation: if you want all the drivers and you want to run Debian, use the non-free image, which I believe they just decided to make easier to find?


I tried non-free Debian with KDE last month. It still booted without wi-fi. But what's most frustrating: the KDE installer didn't detect any partitions on my drive, even though lsblk showed everything just fine. Same with the Neon live image.

And guess what? It all works in Kubuntu. But Ubuntu is just SO slow now. (And I couldn't install it onto my current dual-boot anyway, because the installer does NOT have an option to NOT install a new bootloader.) :-/ I love Linux (lie).


Didn't know that was an option! Thank you! If only I had a time machine to 3 weeks ago. Oh well, thanks anyway. I'll try that again in a couple of years.


In all fairness to Debian, they do cover this in their installation instructions.


I haven't had trouble with drivers using Fedora in years. RPM fusion handles Nvidia drivers just fine. It's a far cleaner and "noob friendly" distro in my opinion, so long as you're able to google "how to install nvidia drivers fedora"


This is mostly just a Debian problem, due to their "no non-free software" philosophy, which extends to device drivers. I have never in my days booted a Debian install that worked with wifi out of the box, and I suspect I never will, due to that philosophy. For that reason, I've stopped trying and I default to Ubuntu(-based) distros instead. All it takes to get rid of snaps forever is `sudo apt purge snapd`.


> All it takes to get rid of snaps forever is `sudo apt purge snapd`.

That's not enough. Some package could eventually drag it back in.

    $ apt show firefox
    Package: firefox
    ...
    Pre-Depends: debconf, snapd
If you really want to keep it off your system for good, you need something like this:

    $ cat /etc/apt/preferences.d/no-snapd
    Package: snapd
    Pin: release a=*
    Pin-Priority: -1
    $
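
Quick sanity check that the pin took effect:

    $ apt-cache policy snapd    # with the pin active, Candidate: should show (none)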


When I try to remove snapd it says it'll also remove ubuntu-server-minimal. And that scares me.


Use the moment to replace ubuntu-server with Debian and you'll be glad you did when Canonical decides on its next move to ensnare users. Even when I used Ubuntu - back in the early brown-desktop days when they sent out free CD-ROMs to anyone who wanted one - I never felt tempted to use it on a server, since it was never clear to me what it offered that Debian could not deliver, while it was clear that keeping Debian up to date was (and is) far easier than doing the same with Ubuntu.

Ubuntu had its place in popularising Linux but they jumped the shark a long time ago, now they are just another player jostling for their own niche.


Thanks for the recommendation. Before I do that I'd have to check which binary drivers I have so I don't end up with a server that has no internet.

And by server I mean that dusty NUC...


In the past you could use the images from here [0] to get installs with wifi firmware, but future versions will have it included in the official images. They've worked out of the box on almost all recent (last 5-10 years) systems I've tried it with.

[0] https://cdimage.debian.org/images/unofficial/non-free/images...


That does not get rid of snaps forever once you want to 'apt install firefox' or any other package that is snap-only.


Yeah I use both Fedora and OpenSuse (Tumbleweed) and it's a really stupid easy no-config setup on both distros.

Ubuntu is just maddening.

Pop OS is nice out of the box too, but I just don't want to use Ubuntu derivatives at this point even if they've removed snaps


I run Mint on my primary desktop and it's fantastic. What Ubuntu LTS would've become if it had continued to focus on a good desktop experience and pushed Flatpak instead of Snap


I came here to say this too.

Ironically, I was about to set up a new Linux dev machine with Ubuntu, and now I'm more inclined to go back to Mint since I never had a bad experience with it. I was fortunate to skip the GNOME 3 days, and the Cinnamon and Xfce implementations have been very stable for a while now.


Give OpenSUSE Tumbleweed a shot. Better than Ubuntu (in my experience) when it comes to drivers. They have a driver for the MacBook Pro 2015 FacetimeHD webcam (which Ubuntu doesn't), and my brand new Asus Zenbook S13 OLED was perfect right out of the box. It's a rolling release, so you get extremely recent packages. And its KDE is amazing (I switched from XFCE, it was that good). I love it.


You know what distro I had the most experience of hunting down drivers? Ubuntu.

I had given up on Debian-like systems on laptops because the drivers were never good, then decided on one last try with bare Debian, and had everything work out of the box. In my experience, Ubuntu never works, and when you finally get most things working, they break down again in a week or two.

No other distro ever gave me that experience.


Use PopOS. Better hardware support, and it's mostly Ubuntu with lots of the dumb Snap stuff removed. Plus some other cool features.


You could also try Linux Mint. I moved to that two years ago when Ubuntu started to go sideways and I've been very happy so far.


What machine are you using?


You want Linux Mint.


My favorite fail with Ubuntu is every version changes how DNS configuration works.


This was my last straw. I installed some 6-month-old LTS release, and it had to go through a 2-5 second timeout step on the initial lookup of each new DNS name. Then it would populate a local cache and work well until the TTL expired or whatever.

Anyway, if you are looking for a noob distro, I recommend manjaro. (The AUR packages are extremely unstable, but other than that, it’s pretty competitive with what Ubuntu was 10-15 years ago.)


Please do not use Manjaro. They are known to ship half baked WIP patches that cause massive breakages in their distro. Here is one of their latest instances: https://fosstodon.org/@alyssa@treehouse.systems/110049699665...

I personally have had them ship out WIP patches not meant for production, which has wasted a lot of my (volunteer) time chasing down phantom bugs in software I maintain. This has personally happened to me on at least four different occasions. A lot of other FOSS maintainers I know have similar stories.

More info: https://manjarno.snorlax.sh/


> manjaro

Is Manjaro really that noob-friendly? All I know about Manjaro is that it's based on Arch, which I always understood as being the LEAST noob-friendly distro besides LFS.


Arch isn't really noob-unfriendly, it just requires an intimidating procedure to begin setup, which Manjaro used to do automatically for you. That was basically its selling point: "Arch without manually installing all your software."

Now it's just adware and unstable crap. Not nearly as bad as Ubuntu, but I won't recommend Manjaro anymore.

There are several less noob-friendly distributions than Arch; I'd say NixOS, Void, and Alpine probably top that list. They're great distros, but they deviate significantly from what you'd expect from mainstream Linux.


It's middle-of-the-road IME. Arch with good (but not amazing) defaults and a team that has had a number of controversies that kinda give them a shady vibe overall.


Canonical makes decisions based on their own self interest. Not for their users and not for the benefit of the greater community. That's what drove everyone away.


But driving everyone away isn't in their own best interest. They're basically shitting in their own well.


They're probably doing just fine selling support for Ubuntu Server.


I have meager needs so I haven't run into (m)any of the issues here, but what's a deb based alternative that isn't meant for absolute stability at the expense of anything modern?

(I ask with actual curiosity; I'm ignorant to most distros.)


FWIW I’ve been using Linux Mint for years and have never had a major issue. Most minor issues are with out of date repository packages which can usually be installed by other means.


Isn't Mint an ubuntu offshoot, or does it avoid the flatpak/snap issues?


Linux Mint is an Ubuntu offshoot that doesn't use Snaps but does include Flatpak support.


Not only does it include Flatpak support, but as of 21.1 it can even handle Flatpak updates through the GUI Update Manager alongside .deb packages from standard repos/PPAs.[1]

[1] https://linuxmint.com/rel_vera_cinnamon_whatsnew.php


I'll add to this: one that uses KDE, please. Kubuntu has served me well but I'm tired of Canonical's shit.


I've had several people recommend the testing branches of Debian for relatively up-to-date software while still being stable FWIW


Sid?


popos maybe?


Ubuntu can be a bit easier to get a laptop running than Debian (although I personally use Debian).

But whenever I see someone running Ubuntu on a server I think that there is a very real competence issue. Ubuntu should be kept as far from the server room, data centre or cloud as possible.


Not user-friendly at all, but there is a solution:

  sudo tee <<EOF /etc/apt/preferences.d/firefox-no-snap >/dev/null
  Package: firefox*
  Pin: release o=Ubuntu*
  Pin-Priority: -1
  EOF

  sudo add-apt-repository ppa:mozillateam/ppa
  sudo apt update
  sudo apt install firefox
I'm getting too old for this; Canonical is picking up Microsoft's manners.


Fedora's really nice. Try it~

Alternately, I could shill for NixOS. It's also really nice. Eventually.


Note that Fedora has an immutable version of the OS that's similar in intent to NixOS. It's called Fedora Silverblue.


I've tried it. And while I'm not sure why, its updater never worked for me -- hung while downloading, I think, but there was no feedback.

Honestly, I prefer NixOS. It's far more configurable, which is a nice thing to have in an immutable OS.


Yes, I tried Ubuntu briefly in 2006 before switching to the Fedora/EL ecosystem, where I've stayed since. Fedora seems to have won every "battle" so far (systemd vs. upstart, gnome-shell vs. unity, etc.)


It is possible to get an apt package; you have to jump through a few hoops, but it can be done. I do it every time I install Ubuntu (frequently) because I won't touch that snap shite again.

I can't move work off of Ubuntu; it's too embedded now, but I'm looking for something else for home. Switching distro-base isn't so easy when you've been using it for decades though; I tried NixOS but it wasn't comfortable (Nix is a steep learning curve), though their community is top notch, and everything I do is deb based.

Looking for a way to get a modern debian (something akin to non-LTS Ubuntu) or just go all out and switch to something Arch based like EndeavourOS.


> Looking for a way to get a modern debian (something akin to non-LTS Ubuntu)

Not exactly sure what you mean by modern, but I'd recommend debian "unstable" (also called "sid"). Despite its name it's pretty stable. Normal debian stable releases are LTS style, unstable is where newly built packages show up first—so it will generally have the latest version of stuff and not be stuck a year or 2 back. It's basically a rolling-release style thing—I put in a little cron-job that does `aptitude safe-upgrade -y` every night to keep me up-to-date.

You can also use debian "testing", which is one step back from "unstable": packages are promoted from "unstable" to "testing" once they've gone 2 weeks without a bug report of some particular severity (that I can't remember off the top of my head).

What's nice is you can have both testing and unstable in your apt sources—on my machine I set the priority on my testing higher than unstable so I generally get the testing packages, but I can grab unstable if I need to. I've been running this way for about 20 years now, and it seems the right balance of new but consistent.
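
A minimal sketch of that setup, assuming the standard archive names in your sources:

    $ cat /etc/apt/preferences.d/prefer-testing
    Package: *
    Pin: release a=testing
    Pin-Priority: 900

    Package: *
    Pin: release a=unstable
    Pin-Priority: 400

With both suites in sources.list, `apt-get install -t unstable somepackage` grabs the unstable build only when you explicitly ask for it.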


By modern I meant access to fairly new packages.

I don't want things breaking left, right and centre but I want access to later versions of tools and libraries I'm using.

For example, at work we were told to upgrade Wireshark and VirtualBox to major versions that aren't available in apt on 22.04 after an audit due to vulnerabilities in older versions.

What you're doing sounds like it'll work nicely for me, thanks.


I moved from Ubuntu to Fedora when Canonical started pushing snaps, 4 years after the auto-update debacle that's also mentioned elsewhere here. Couldn't be happier.

Key differences I noticed:

- apt vs dnf

- Installing on a new computer.

Would totally recommend.


I've used Fedora and have no real issues with it, but I'm not sure it's going to work for me. At work we target Debian/Ubuntu and I lead the backend team, so I need to be on point; that means not having to mentally switch "environment" all the time because I use something else at home.

Still undecided though; I'm too old (read: jaded) for distro hopping now, but maybe I'll try to find a Debian setup, as another commenter suggested, that'll work for me.


I’ve been considering switching and haven’t used fedora in years. I’ll have to give it another chance. Snap has seriously annoyed me.


Just be aware that Fedora's got a six-month release cycle rather than Ubuntu's LTS lifetime (5 years of standard support), and Fedora only supports the current release and one back. So realistically, you've got about a year to upgrade your workstation.

I've had Fedora for over five years and I've never had my laptop get completely borked by an upgrade, but I've had just enough things break between releases in the past that I still get the sweats every time I've gotta do the restart upgrade: whether it will come up completely and just work, or whether my WiFi is now broken because resolved changed to systemd-resolved.


Actually, regarding upgrades, Fedora Silverblue - which I currently use - may be better.

Key benefits:

- Applications installed through Flatpak don't depend explicitly on system libs, so there's less chance of breakage.

- If upgrading to a new Fedora version breaks anything, switching back is just one command away (rpm-ostree rollback; see the sketch below). I don't think going back is so easy on normal Fedora.
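
The flow, for the curious (rpm-ostree ships with Silverblue):

    rpm-ostree upgrade      # stage the new OS image as a separate deployment
    systemctl reboot        # boot into it
    rpm-ostree rollback     # if anything broke, boot back into the previous deployment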


Would you be interested in a session for me to better understand (and hopefully eventually fix) why Nix was not comfortable? Not looking to evangelize, but to learn about the experience from your perspective.


Hey. Yeah, I'd be happy to, time allowing.

I really enjoyed the results of NixOS with flakes, but a couple of things were a little more challenging than I had time for, so it didn't become my daily driver.

That steep curve is what has stopped me going back so far; I liked everything about it: the community was very welcoming and helpful, the declarative nature, the ability to define my machines' states in Git, the documentation. No complaints except the time I'd need to feel as proficient as I am elsewhere.


do a video or record it for everyone's benefit :)


I've mostly used Ubuntu in the past & decided to try EndeavourOS and I don't think I'll go back. I've had a great experience with it.


Manjaro was a very smooth transition from an ubuntu-based system (neon, in fact) for me.


I ran Manjaro on my gaming desktop for a couple of years but I hated KDE, it felt so clunky, always misbehaving compared to Gnome where I've had relatively few issues.


You can choose the desktop before install time: Gnome, XFCE, and KDE have official support; just download the appropriate ISO from https://manjaro.org/download/


I've been angry at Canonical for lots of things, but no longer having a native apt package for Firefox was forced by Mozilla...


Not true.



That sounds more like Canonical marketing-speak than Mozilla. My guess would be that it was Canonical who approached Mozilla for snap support, and Mozilla said yes.

Meanwhile, Mozilla still maintains a PPA (mozillateam) with the apt version. There's also a Flatpak version, which delivers what snap promised.


*sigh*

Another Firefox blunder to add to their growing list.


If someone (Ubuntu) wants to package and distribute free software (Firefox) in their own format (Snap), the upstream maintainers (Mozilla) shouldn't hinder it no matter how bad the format is - it's not their job.


People are free to distribute Firefox however they want... Without the logo and Firefox branding. If they want to distribute it _as_ Firefox they have to meet Mozilla's conditions


Yes, Mozilla (and any other upstream maintainer who owns a trademark on free software) can set such conditions but no, it's not their job.


I went for OpenSuse & Linux Mint, no complaints.


Looking at the responses to this post, a more expected headline would've been “Ubuntu stops shipping Snap by default”.


If they actually wanted Ubuntu desktop usage to go up instead of down, this is exactly what they would have done.


> A freaking snap

Here's how to properly install firefox on ubuntu

https://www.omgubuntu.co.uk/2022/04/how-to-install-firefox-d...

and, once you're done:

   apt-get purge snapd


I had to do this after an update installed the snap version. I had crashes, UI that wouldn't render, all sorts of deal-breaking bugs. And I don't really care how I have to install something, as long as it's painless. No idea why the snap copy had those issues, but zero issues with the apt version.


I can't recommend EndeavourOS enough. You get all the good parts of Arch with an easy to use graphical installer, XFCE or another DE + great Nvidia support out of the box.


And if you are down to install and configure your distro from the CLI, NixOS is amazing.


It's not really what I would want on the desktop, but I did mess around with it a little and it's pretty interesting. Next time I need to set up a server for something I think I'll probably use it.


Oh so that's why both the Snap and the APT-installed versions of VLC were broken, they were the same one???


Yes, I followed this procedure[1] to get a real .deb version again, which has been working fine.

1: https://fostips.com/ubuntu-21-10-two-firefox-remove-snap/


This was what made me find an alternative distro. When I say "apt install", I mean "apt install"


Come back to Debian (testing, if you want new package versions).


This happened to me and since I had to immediately stop what I was doing, I used the opportunity to take snap off my system completely and reinstall Firefox properly.


I specifically moved away from Windows to get away from automatic behavior like that.

When will these system designers realize that the system shouldn't do anything observable without me telling it to?

Imagine if a kitchen oven decided to perform a self-clean without any human interaction "because it hadn't performed one in a while".


If you're moving away from Windows, why not go all the way and install Manjaro or something? Snaps are meant to emulate the Windows ease of use, including the automatic behavior that you dislike (but many users like).

> When will these system designers realize that the system shouldn't do anything observable without me telling it to?

My vehicle changes gears without asking me. If someone makes a broad statement like "when are these vehicle designers going to realize that a vehicle shouldn't do anything observable without me telling it to?" I'm just going to laugh at them.


This happened to me on my work laptop and I immediately closed the lid and started using my personal Mac instead. Fuuuuuuuuuuck that.


My snap version of Firefox on Ubuntu kept bugging out my plasma taskbar too. Consistently reproducible, and annoying until I figured it out. I uninstalled the snap and manually added it through apt.


Snaps are getting me off Desktop Ubuntu after 12 years of happily using it.


Canonical seems to be trying to push users off of Ubuntu. I switched to Arch from Ubuntu about 6 years ago after seeing how aggressively Ubuntu would auto-update, and because of Zeitgeist. I've never looked back.

Arch is customizable, simple (in the sense that there are no surprises; things work as expected), and has a great community. Folks here can argue about snaps or flatpaks, and I can happily use AUR to install nearly anything. If it’s not there, I can publish it.

I’m not forced to adopt whatever GUI Canonical thinks is best for me in a given year or whatever their trendy new craze is. I can enjoy i3, tmux, vim, and ignore the rest.


I also switched to Arch for a bit, but then I was left with an unbootable system after the Arch devs shipped grub's master branch as stable. The Arch devs were completely unapologetic and told me "well, maybe you shouldn't use Arch if you can't recover a system that won't boot".

I immediately formatted, switched to Pop OS, and I've never been happier.


Yeah, Arch is definitely geared toward a more technically proficient user base. Their users, myself included, are typically willing to wrestle with changes like that. Recovering a system that won’t boot is almost a rite of passage in the community, since there’s an expectation that you probably built up the entire boot process by yourself, so you ought to know what it’s doing. For some users, that’s simply not true.

For future reference, if you ever decide to switch back, breaking changes or ones which require manual intervention are usually announced on archlinux.org.


Yeah, I was using EndeavourOS, which at the time was basically vanilla Arch with an installer.

https://old.reddit.com/r/EndeavourOS/comments/wygfds/full_tr...


That was the event that got me to stop using "basically Arch" distros (Endeavour, Arco, etc.) and just use Arch itself.


IIRC this also happened in Arch itself; the Endeavour team just happened to have the better writeup on it.


GNOME keeps me off of Pop. I look forward to the PopOS team being ready to ship COSMIC as the default desktop.

Have you tried OpenSUSE Tumbleweed or Gecko Linux? Tumbleweed is a rolling distro, but the maintainers apparently test all of the updates they push. OpenSUSE can feel a bit clunky (it asks for passwords for "everything", for instance), but there's Gecko, which acts a bit as a wrapper of a distro to make OpenSUSE more user-friendly.


I don't understand the hate for gnome, but xfce / kde / i3 or whatever is just a sudo apt install away.

I've heard good things about tumbleweed and it even has support for WSL, so I might try that if I ever build a gaming pc and have to main windows.


> I don't understand the hate for gnome

I avoid it because I find it hard to use and hard-or-impossible to configure adequately. It takes a "my way or the highway" approach. If you like how it does things, it's great. If you don't, you're better off using a different DE, which is what I do.


Regarding GNOME, I personally don't like that you have to install browser extensions to change settings in the UI.

I’ve tried adding desktop environments to a pop installation, but I really don’t like having all of the apps included with other desktop environments cluttering the taskbar menu and the like.


Yeah, this is what keeps me off Arch personally: the community that instantly goes "just get good" when you have an issue. While I never needed any help, I didn't like how the community treated other newbies. I know it's not always meant in a bad way; there's some tough-love "don't give a man a fish but teach him how to fish" sentiment there that makes sense. But the elitism is pretty strong too, in my experience.

Also, I wanted a distro without systemd, and the init system is the one thing you can't choose or change on Arch. I tried it but didn't like Arch; in the end I moved my stuff to Alpine, which still runs my Docker server.

In the end I chose FreeBSD, which has a really nice combo of a stable OS with rolling packages, something not common on Linux at all. And the community is much nicer IMO.


Shilling Manjaro as the best of both worlds, imo.

It's a user-friendly and maintained Arch with some goodies like kernel switcher and driver updater GUIs.


https://archlinux.org/news/grub-bootloader-upgrade-and-confi...

In regards to this specific incident, the Arch team did release a statement on how to handle the update.


Same here. Also, I was surprised how well PopOS worked "out of the box" with just the default settings, running on a new PC with the latest hardware.


> Canonical seems to be trying to push users off of Ubuntu.

I'm on Ubuntu for now because snaps can be disabled, but it does make me wonder, since they also dumped the Unity desktop a few years back. It almost seems like they don't care about Linux desktop users any more.


Same here. I was a happy Ubuntu desktop user for over a decade. Now I’m a happy Arch user.


Same here: off Ubuntu and onto the CentOS/Fedora rpm/dnf world.


Glad you have something sorted out.

Just interested: why not Debian, i.e. still a deb-based distro?

I'm guessing the poster above you wants recent package versions as they went to Arch.


In my case Debian’s old package versions can often be awkward because I’m not using Linux exclusively… my macOS and Windows boxes are running latest releases of most things which can cause problems with e.g. sync features.

I usually run Fedora rather than Arch though, because in my limited experience with Arch it really doesn’t like to not be booted into for extended periods of time — if you do that the piled up updates are much more likely to break somehow or things like required config changes will slip through the cracks, whereas I have yet to experience this with Fedora.


That's kind of ironic considering how ancient some of MacOS's userland is.


For my use case, the details of the userland CLI is mostly irrelevant (particularly since I maintain a FreeBSD server, which means I’m reasonably familiar with both BSD and GNU styles of these tools). Most of the time it just needs to exist, not be any particular version, and exceptions are handled well by Homebrew. Third party apps with UIs being up to date is more important.


For me it's now Debian on the server (debs) and Arch on the desktop (rolling releases, AURs).


Who controls quality and security in the AUR world? It doesn't seem like something I'd want to trust?


The AUR is user-supported and no claims are made, but AUR packages are built from short scripts called PKGBUILDs, so they're easy to audit; you're going to want to look for the line that links to a tar archive or git repository.
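
A hypothetical PKGBUILD excerpt to illustrate; the `source=` array is the line to audit (the package name and URL here are made up):

    pkgname=example-tool
    pkgver=1.2.3
    pkgrel=1
    arch=('x86_64')
    # this is where the payload comes from -- verify the project URL
    source=("https://github.com/example/example-tool/archive/v$pkgver.tar.gz")
    sha256sums=('SKIP')    # 'SKIP' disables checksum verification; good packages pin a real hash

    build() {
      cd "example-tool-$pkgver"
      make
    }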


Nobody, but the specs are so simple you can audit them yourself usually. For me it's mostly about low friction packaging my own software tbh.


Need nixos for haskell as well


It’s pretty much a meme at this point, but yeah. Snaps just finally did it in for me.

I’ve switched all my laptops and workstations from Ubuntu to Arch.

There are rough edges here too, but overall I'm much happier. I feel like I know how my machine works again. Ubuntu lately has been giving me that windowsy feeling, with lots of things running for god knows what reason.

Also, the Arch repos seem to not only contain more current versions of things, they also seem to bundle more things altogether.


Same.


I moved to Debian 11 and recreated exactly the same quite customized GNOME desktop I had on Ubuntu 20.04. Nothing is missing except snaps.


I've had issues recently with WiFi on multiple different machines on Debian 11 and switched back to Ubuntu. Is WiFi working for you?


What wifi chip are you using? Debian by default includes no non-free wifi drivers, so that can cause issues. If you have a good wifi adapter from a vendor that pushes drivers upstream, like Mediatek, it should work just fine out of the box.
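
On Debian 11 a minimal sketch looks like this, assuming an Intel (iwlwifi) chipset:

    # enable non-free in the apt sources, then pull the wifi firmware
    sudo sed -i 's/ main$/ main contrib non-free/' /etc/apt/sources.list
    sudo apt update
    sudo apt install firmware-iwlwifi    # firmware-realtek, firmware-atheros, etc. for other vendors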


Or install from non-free, which will not be needed anymore for the next release:

https://cdimage.debian.org/cdimage/unofficial/non-free/cd-in...


Yes, I was using the non-free version. It was working fine until I set the machines to auto-update; then one day I found they had zero connectivity after an update. I thought about going around to each machine and manually trying to revert whatever in the updates broke it, and just decided to wipe and install Ubuntu back on them.


Would using Debian non-free address this?


I presume so, but I wouldn't know because I haven't ever tried.


WiFi is working for me even if I mostly use the Ethernet card.

lshw tells me

  description: Wireless interface
  product: Centrino Advanced-N 6235
  vendor: Intel Corporation
I'm using the iwlwifi driver.


Same here. I used to use it because it was one of the better-debugged distributions that "just worked", and I didn't want to futz with deep config files like I would with Arch or Gentoo, or deal with confusing "non-free" workarounds like with Debian.

These days it's looking like Fedora holds that crown.


I switched to pop OS (after a brief stint with an arch distro) and have never been happier.


Pop really is a very pleasant experience, and probably the best I've had on a personal computer. 3/4 of my immediate family run it on Thinkpads; the only hold-out is a Gentoo teenager wanting to be "weird".


Hey man, sometimes you want a distro where you can really muck about with the internals. I started with redhat 6.2 and then went to slackware until I cared more about being productive than learning internals. It's good to explore as a teen.


I left Ubuntu server and lxd because of it. Maybe a bit emotional, but f*ck that, I don't need this in my life.


I’ve switched everything off Ubuntu but my servers. That bit is just too much effort and I can’t be bothered yet.

Next server though. Not Ubuntu for sure. Probably Debian?


I chose Debian, FWIW.


How is Debian-support for ZFS these days?

Is it doable to set up Debian with a ZFS root file system?


Can you share a bit more detail? I have no issues with that setup, or at least I cannot notice any.


There were a few things wrt clustering (which I don't need) and storage pool management that tripped me up at first, but not too bad. However, it made me worry that I might not be the target audience for the LXD project (I just need simple lightweight machines with snapshots, nothing more).

When they decided to stop officially distributing debs and promote Snap as the distribution channel, that was the final straw. I don't understand what their target audience is anymore. Desktop users? My machines are sensitive about reboots, but pretty much sealed off from the internet. Upgrades are tested and happen in scheduled windows. Yet Snap auto-update insists on restarts whenever it sees fit.

Sure, I can find workarounds [0], but the complete disregard for this issue, and the fact that they probably forced the LXD project to promote the dumpster fire that is Snap, just didn't sit well with me. I'm gone for good.

[0] https://forum.snapcraft.io/t/disabling-automatic-refresh-for...
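
For reference, the workaround in [0] amounts to steering the refresh schedule; a sketch (older snapd will still force a refresh eventually):

    # pin refreshes to a maintenance window
    sudo snap set system refresh.timer=sat,03:00-05:00
    # or, on snapd 2.58+, hold refreshes indefinitely
    sudo snap refresh --hold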


Clustering/storage pools are very optional, and the defaults are rather sane for my taste (I don't use clustering, maybe yet, but I use a storage pool to define that LXD stuff should reside on btrfs, which has a dedicated LV for it).

Auto-update happens, and I can understand your pain here with sealed machines and tested updates. Most small-to-medium companies around don't care, though (from what I see), and probably have unattended upgrades on anyway.

A bit more on auto-updates: to calibrate how much people care about pinning versions, just check how many Dockerfiles contain `latest` or no version specification at all for pulled images. Many, many, not giving a shit.

In practice, though, auto-updates are not bringing VEs/VMs down; I find out an update happened when my `lxc shell some-ve` sessions get disconnected from time to time (I tend to keep those in tmux, and they can stay attached for weeks or even months).

As for use cases and audience: both desktop and server work for me. On the desktop I use LXD under WSL (it has had systemd support for about 6 months now) to quickly play around with things, and on servers to split one big machine into smaller ones and limit other users' access to the system. I even had a case of using it in CI/CD: custom Linux software to be packaged, with basic installation tests for CentOS 6/7/8, Ubuntu 16/18/20.04 and so on. Package installs were done by dynamically creating a fresh VE each time, to ensure the system was "clean".


Same, my next OS reinstall will be Fedora, and I've been a loyal Ubuntu user for the last 20 years. It takes around 20 seconds to start a silly Spotify client on my dual Xeon workstation with 64 GB of RAM. Numerous users report slow startup times, but Canonical just pretends the problem doesn't exist and proceeds to shove snap down everyone's throats like their lives depend on it.


Same. Just moved to Fedora about 6 months ago and have been pleasantly surprised. It works better on my laptop as well.


My father-in-law has been driven mad by the snap notifications that say you have to close an application within the next 30 days in order for it to be updateable.


And it's not any time within the next 30 days; it's a moment in the next 30 days when its being closed coincides with snap's auto-update schedule.


It's like they were inspired by Microsoft's worst practices.


For some reason they ship it on servers as well. So the first task after getting Ubuntu up and running is to uninstall snap.


One of many reasons to not use ubuntu for your servers.


LXD is a legit product. Too bad they only ship it as a snap. I think Debian finally has packages for it, but I haven't tested them. I actually stopped using it because I don't want to use an Ubuntu-stewarded project. More and more, it's getting harder to use plain LXC; almost all resources talk about it in an LXD context nowadays.


openSUSE has natively packaged LXD in their official repo. Looks like Debian bookworm does as well.


After using lxc and then lxd for some time I switched to Docker and never looked back.


I'm curious - what's wrong with snaps on the server?


Same as on the desktop: they are terribly slow, resource-hungry, and update automatically.

On a somewhat older desktop I have seen it take 5-10 seconds just to start Chromium. And not just the initial start after a fresh install; it happens every single time. Meanwhile Flatpak or local packages start instantly on the same machine.


So if I install a web server as a snap, it'll be slower, take up more resources, and restart randomly? I find that hard to believe.


It updates, not restarts; but an unexpected update can lead to problems after the next restart.


Sounds like an SRE nightmare. Updates happening when you didn't expect them guarantees problems. I'm starting to get a clearer picture of Canonical now.


That said, if they are working very hard to ensure that the updates don't break anything, then this is perfectly fine. Are they doing that? Has anyone been bitten by a bad Linux daemon update? Unless things are breaking in reality, it's foolish to dismiss snaps on the server outright.


Well snaps are supposed to be this great server feature because you can install whatever great program from whatever other distro ecosystem using snaps… or something


I have no idea how to update Firefox because of this. I get some notification about not being able to update, but it doesn't tell me why.


You have to quit Firefox and then manually update the snap. I thought restarting Firefox would trigger the auto-update, but no.
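
Something like this, assuming the snap is simply named firefox:

    # close Firefox first, then refresh the snap by hand
    sudo snap refresh firefox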


Canonical is an expert marksman footgunner.

They take bizarre risks and insist on reinventing things badly.

Security, infrastructure ecosystem integration, and defaults that don't work with reality.

The advantage of RHEL/CentOS and sometimes Fedora server-side is that it's boringly reliable. The kernel especially. For development, the userland isn't great and the desktop is mediocre.

Qubes is interesting for security, containerization, and running apps in isolation, where Fedora, CentOS, and Debian (possibly Ubuntu and Windows) are all side-by-side choices as app substrates.


Canonical makes a server OS and also releases a desktop version for nostalgia reasons (my guess). They should just deprecate Ubuntu Desktop already and be done with it.


The file limitations kill my productivity. For permission reasons, Firefox won't open local HTML files on my machine. My work VPN loads a local file to log in.


I've been using Ubuntu Server for I don't even know how long, but I'm going to move all my stuff to Debian when I rebuild my hardware here soon.

I should've done it a long time ago, but until recently Ubuntu was "Debian but with some nice little extra bits of effort here and there to make it smoother". Now it's "Debian but with a lot of spicy Canonical opinions dumped on it".


I hate snaps, doubly so when they appeared on an LTS release and some /dev/random silliness broke simple things like booting.

However, this seems pretty silly when what actually changed is that default OS installs will not have Flatpak installed. That's easily fixed with "apt install flatpak". It's just a default they are changing, not purging Flatpaks or preventing them from working well.


I like the security features of snaps and would accept some trade-offs for that benefit, but Canonical started shipping a snap of a browser with known issues, like breaking media keys.

Security improvements are welcome, but that was a feature that seemed important and reasonable to keep working. (Maybe it works now, but based on that experience, I gave up on snaps until I hear more positive reports.)


And:

- Violates XDG Base Directory Specification

https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1575053


I started using Ubuntu in Warthog times, when their value prop was refreshing and sorely needed: an out-of-the-box usable Linux distro. It was my distro until 2020, when I realized they were going in a direction I don't care for (you could argue a direction opposite to their original mission), and that other distros had reached and exceeded Ubuntu's level of polish. Basically, take your pick, and it's going to be at least as good.

I ended up with Debian because I like stable but not ancient (CentOS?) and it comes with a release cadence similar to Ubuntu LTS.


This is some crazy NIH syndrome from Canonical.

We've been here before, of course - they pushed their own DE (the original Unity, not the current GNOME theme) for a while as a competitor to GNOME, and they also pushed Upstart over systemd. There are probably other cases I'm missing.

Eventually they gave up on those pet projects for pragmatic reasons, but Snaps seem to be the hill they want to die on (presumably for internal political reasons and/or some weak attempt at lockin).


This is a bone-headed NIH move from Canonical, but I don't think either of those are good examples. Both Unity and Upstart were released prior to GNOME 3.0 and systemd respectively, and had quite a significant investment of development time from developers and Canonical, as well as an existing base of regular and corporate users.

The original Unity DE was released in 2010, prior to the release of GNOME 3.0 in 2011. Upstart was originally included in Ubuntu in 2006 to replace sysvinit, and writing Upstart scripts was a huge breath of fresh air. Systemd was released in 2010.

As a developer and user, I hate snaps _and_ Flatpak. Both are user-hostile and a constant source of problems requiring hours of Googling (especially the Flatpak sandbox!). I ended up purging both from my system a month ago and have been much happier since.


Deleting and pinning snapd is the first thing I do on every fresh Ubuntu install. Next, I install Firefox from the mozillateam PPA and Chromium from Flatpak.


I do the same, pin snap. For Firefox I just download it; it lets me update it from its Help menu, then offers to restart itself. I don't use Chromium; I use Chrome. It does seem like Canonical is trying to push away all but corporate users, perhaps even all desktop users.


This is the kind of attitude that holds Linux back. Nothing is ever good enough, but "doing nothing" is considered just fine. Snaps aren't perfect, but adding "the ability to edit dotfiles" later on is a whole heck of a lot easier than whatever you're a fan of.


Yup, I ditched Ubuntu for good when they started with this snap nonsense.


On my servers I use LXD and certbot from snaps; I can't say I have any noticeable complaints here.


> it is a devil that's hurting the Linux desktop every day

Don't you just remove it if you're using Ubuntu, never install it if you use Debian/Fedora/Arch, and pretend it doesn't exist? I've never run into an app I want that is only packaged for Snap.


Some apps chose to distribute via snaps first, past and present. And without snaps, they might've pushed a more unified or better experience with something else.

Canonical has money to do lots of good, too bad they waste it on terrible engineers and terrible projects.


On Ubuntu they hijack apt commands to install the snap instead for some packages, so removing snap is only part of what you'd have to do.


Snaps just create so many weird sandboxing issues with the environment. If I run "firefox" with it already open, it will not create a new window; it will wait 30 seconds and report "firefox is already running".

I tried about 10 times to get MySQL Workbench running, but it depends on some keystore backend through D-Bus. I haven't been able to get the connections working through snap, so I cannot access a database, since for whatever reason it has to go through the keystore.

The failure message? 'dbus-launch' does not exist.


Another snap failure story:

- You install chromium-browser through apt.

- Chromium gets installed through snap.

- You go to a PWA to install locally (like M$ Teams).

- The desktop entry is saved with a path hardcoded to /snap/chromium/1234/... (1234 is the revision id).

- You update Chromium, either intentionally or automatically, and the revision id changes.

- Now you can't access your PWA anymore unless you edit the desktop entry and change the id or replace it with `current` (see the one-liner below).
Such a horrible user experience for no positive gain. And this is not a Linux problem but an Ubuntu problem. Yet there is no difference in the eyes of users, because "it's their Linux that broke...".
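
The `current` fix mentioned above, as a hypothetical one-liner (the desktop entry filename varies per PWA):

    # rewrite hardcoded snap revision paths to the stable 'current' symlink
    sed -i 's|/snap/chromium/[0-9]\+/|/snap/chromium/current/|g' ~/.local/share/applications/chrome-*.desktop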

Bad user experience, bad implementation, bad decisions, and a bad reputation directed at the wrong target, just because Canonical's Ubuntu is the "default" Linux.


Can we go back to just shipping apt packages?

They worked fine, and I don't feel like having multiple types of containers and update methods and mounted image file systems really made anyone's life better.


What you are really saying is that you want all software developers to publish Debian packages (there's no such thing as "apt" packages) compatible with your particular Debian distribution, even though they might be using a different Debian-based distribution, a completely different one altogether, maybe even a different architecture, and even though that compatibility isn't their primary concern.


The original model is that software developers don't package their software for distributions at all, at best they provide helper scripts that the distribution maintainers can use to do that. That's why distributions are typically either giant volunteer-run organizations like Debian or companies like RedHat or Canonical.


The problem with that model is that it puts the burden on the distro maintainers to package every possible application for their distro. And application developers have to essentially wait for each distro to repackage their app before it becomes available on that distro. Or start messing with alternate repositories for each distro they want to support.

The old model works for established software, but breaks down a little now that it has become easier to write software applications.

Personally I use my distro packages for foundational stuff (DE, systemd, tools and utilities) but I use an alternative package manager (flatpak) for most other applications.

What Flatpak, Snap and AppImage try to achieve is to limit the packaging burden for both distro maintainers and application developers, so that end-user applications can become immediately available on a wide variety of distros.


>> The problem with that model is that it puts the burden on the distro maintainers to package every possible application for their distro.

That's kinda what the distros ARE. Also, if you're Debian-based and Debian packages are not compatible with your distribution, you're actively fucking something up for "reasons" - stop doing that.

If an app can't use a standard .deb or .rpm then the distro is doing something wrong. If dependency version management is too hard, someone is doing something wrong - not sure who; it could be a library maintainer or an app maintainer. Let's not ship half an OS with every app to avoid this stuff.


Have you ever actually worked on distribution packaging? There is no such thing as a "standard" .deb or .rpm. Unless you're statically linking against distro policy they have dependencies on the particular version of the distribution they are built for.

You can't take a "standard" .rpm from the Fedora repositories and install it on CentOS. You can't take a .deb from Debian 11 and install it on Debian 10.


>> There is no such thing as a "standard" .deb or .rpm. Unless you're statically linking against distro policy they have dependencies on the particular version of the distribution they are built for.

If you're using a "Debian based" distribution, the "standard" .deb is the one shipping with Debian. If it doesn't work on the derivative distro, they are doing something wrong. Or like I said, maybe the dependencies are doing something wrong.


Find a deb packaged for an older version of Debian and see if it runs; there's no guarantee. If the software was closed-source you're basically out of luck, unless people statically link all of glibc, ssl, etc. into their application, which is a big no-no.

This is the problem that Flatpak, Snap, etc. try to solve. I won't put AppImage on that list because it doesn't actually solve the problem, it just makes it worse.


So, Flatpak and Snap solve the problem of wanting to run stale software? That ought to be a very niche requirement, I would think.

I was under the impression that Debian already solved that problem by allowing you to deploy an older Debian version in a chroot with debootstrap? As long as the Linux kernel is binary compatible, that should work fine. Although I have to admit I don't use stale software that often, so I have little experience in that area.
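
A minimal sketch of that chroot approach, assuming debootstrap is installed and buster is the older release you need:

    # bootstrap an older Debian userland into a directory, then enter it
    sudo debootstrap buster /srv/chroot/buster http://deb.debian.org/debian
    sudo chroot /srv/chroot/buster /bin/bash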


They solve the problem of being able to run software against a known set of dependencies instead of depending on the versions that come with the distro. Older packages can still run even when the dependency version the distro provides has changed in a breaking way (as the parent suggests), but it also works the other way around: new packages can ship features using newer dependency versions that the distro might not have yet.
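
You can see that pinning from the user side; for example, assuming the Firefox flatpak is installed:

    flatpak info org.mozilla.firefox    # prints the exact runtime branch the app was built against
    flatpak list --runtime              # runtimes installed side by side, each on its own branch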


Distrobox is an option too.


> You can't take a .deb from Debian 11 and install it on Debian 10.

I mean, you can, if you also install its dependencies. And you may end up with a weird franken-system, but you can. You can even automate it and set preferred distros with pinning, it's how people run things like hybrid testing-unstable distros.


That's the point though: this sort of thing is not ridiculous on most OSs. I should be able to use old versions of software (or new ones for that matter) without having to worry about causing my system to catch fire and explode.


That just requires the ability to have multiple versions of a dll, or to install specific versions with an app, or to statically link. Lots of ways to deal with that, don't distributions support any of those options?

Oh, not all libraries support that. They need to...


Yes, but the difficulty and franken-nature of the resulting system means that it's not for the faint of heart.


So upgrade the whole thing. It's open source so in most cases that's possible.


You're shifting more work onto distros and users that shouldn't be work in the first place, and basically preventing non-Linux-literate people from using their OS.

If I install software on MacOS or Windows, I don't have to care if it was packaged for an older version etc, or that my distro may not package a dependency.


> shouldn't be work in the first place.

It seems very much intentional. You could just keep multiple different, vulnerable versions around and keep everything working. Instead distros say "Nope. We support exactly one version. Update or die."

That is also why you have runtimes, grafting, support sunsets, and so on. I agree that a different trade-off makes much more sense for desktops. For servers, though...


Most updates aren't security updates. Not all vulnerabilities in a library affect all consumers of that library. Distros don't have every library packaged. Distros are often not willing to ship patched versions of dependencies. Distros often offer out-of-date versions of libraries.


No. The libraries are not "out of date" but intentionally static. These static foundations are what companies pay lots of extra money for with Windows LTSC, Red Hat, Oracle, SUSE, etc.

> distros don't have every library packaged.

Exactly. And for those that are packaged, they say "these are the versions we support; if you want us to do the support work, use these". Again, for stuff like Windows LTSC that means I install version X now and want it to be supported for the next 5 years. If I instead install a consumer version of Windows, it means X will be out of support by then and I am expected to have upgraded to X+1, X+2, X+3 during those 5 years.

Case in point, Firefox has multiple current versions: 102 ESR and 111. Both get regular updates and neither is "out of date".


Maybe you should, those dependencies may contain vulnerabilities.


> I mean, you can, if you also install its dependencies.

You will very likely bump into conflicts. Or you need to upgrade a lot of foundational libraries (like libc), at which point why stay on Debian 10?

Backports exist for a reason.


You might, you might not!

Backports do indeed exist for a reason, I just felt like challenging “can’t”


If you automate it and work out the kinks, you basically get flatpak.


Technically there is such a thing as a standard RPM, as specified by LSB.

https://refspecs.linuxfoundation.org/lsb.shtml

These are, of course, not distro packages, but ISV packages and most RPM features cannot be used.


> Unless you're statically linking against distro policy they have dependencies on the particular version of the distribution they are built for.

The irony here is that we’re discussing flatpak/snap, which take the idea of static linking to the absolute extreme by doing something closer to a container where every dependency is part of the package. Maybe static linking being “against distro policy” is tossing the baby with the bath water by causing maintainers to reach to a much worse packaging method (snap) because the distro policy is just too obnoxious.

There’s no good reason you couldn’t just statically link (or relocatably copy) your dependencies into your .deb except the distro maintainers being purists. It would make the process of building a deb (or RPM or whatever) trivial because you’re using it as a dumb archive format for your build artifacts, similar to how a container works.


Static vs dynamic linking is the core of the packaging debate isn't it? Like, distro maintainers say that dynamic linking is better because it lets them swap out libraries underneath the app in case of like security vulnerabilities and stuff. Devs don't like that because inevitably minor version changes break stuff unexpectedly, plus devs prefer to use shiny and new libraries that distros often don't have. Containers were IMO primarily a packaging and deployment solution, and container-style package formats like flatpak and snap are efforts to force distros to use static linking and stop breaking the app's libraries. IMO distro maintainers should realize that their advantages in security aren't as useful as claimed, and their claimed advantages in distro coherence are only relevant to maintainers, not devs and rarely users.


As both a dev and a user, I prefer dynamic linking over static linking. Usually.


> That's kinda what the distros ARE

For the base system and libraries, yes. But why should the distro maintainers be burdened with additional work for every possible enduser application out there? If I write a GTK app and want to make it available for Ubuntu/Debian users through official repositories, I need to make sure it gets added to the official package list, and every time I make a new release, someone else somewhere else has to do additional work just to repackage the application so it is available in the repository.

Maybe a far-fetched analogy, but imagine if browser maintainers had to perform additional work every time a new website launched before it became available to its users.

Also, in this system, the application developer has a lot of extra work making the application run and build against older versions of its dependencies. If I want to make my app available for Ubuntu 22.04 LTS, which has libadwaita 1.1, I cannot use any of the new widgets in libadwaita 1.2 (released 6 months ago) or 1.3 (released earlier this month). Well, I can use those widgets, but I'll have to write ifdefs and provide fallback behavior/widgets when building/running against these older versions. I will also have to manually track the various distro versions to detect when I can remove the fallbacks from my codebase.

This is what Flatpak is for. Using Flatpak I can target a specific GNOME SDK version and make use of all the new code immediately, without having to write fallbacks. The downside is that when a user downloads my application through Flathub or another Flatpak repository, it might have to download the additional libraries as well, but they will be the correct versions and they won't be downloaded if that SDK is already available due to other Flatpak applications already installed.
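
A sketch of what that targeting looks like in practice; the app id and manifest filename are made up, and '44' stands in for whichever GNOME release you target:

    # install the pinned SDK/Platform branch, then build against it
    flatpak install flathub org.gnome.Sdk//44 org.gnome.Platform//44
    flatpak-builder --user --install build-dir org.example.MyApp.yml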

Essentially, something like Flatpak is a middle-ground solution that trades off some disk space for the benefit of less work for distro maintainers (so they can focus on core packages) and less work for application developers (who can use more recent dependency versions and don't have to worry about the versions in other distros).


> If I write a GTK app and want to make it available for Ubuntu/Debian users through official repositories, I need to make sure it gets added to the official package list, and every time I make a new release, someone else somewhere else has to do additional work just to repackage the application so it is available in the repository.

Yes, and that's a good thing for a whole bunch of reasons.

If you don't want to leave it up to the distro maintainers, nothing's stopping you from standing up your own repo to distribute your software to a particular distro. It's a one-liner for users to add your repo to their list so they can use their package manager to install and update your software as usual.
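For example, on Debian/Ubuntu the usual pattern looks like this (URL, key path, and package name hypothetical):

    # fetch the developer's signing key and register the repo
    curl -fsSL https://example.com/key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/example.gpg
    echo "deb [signed-by=/usr/share/keyrings/example.gpg] https://example.com/apt stable main" \
        | sudo tee /etc/apt/sources.list.d/example.list
    sudo apt update && sudo apt install myapp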

> Essentially, something like Flatpak is a middle-ground solution

Yes, I get that it's convenient for maintainers. But it kinda sucks for users (at least for me), which is why I avoid using software packaged that way.

I'll give Flatpak this much credit, though -- it's not a nightmare like snaps are.


> Yes, I get that it's convenient for maintainers. But it kinda sucks for users (at least for me), which is why I avoid using software packaged that way.

My browser, mail client, RSS reader, music player, video player, image viewer, Steam, ... all have been working incredibly well as Flatpaks. I also get free sandboxing for all of them, so, for example, Steam and its games don't have access to my ssh and gpg keys or documents.

The only applications which don't work that well with Flatpak are things like my IDE or file manager.


> That's kinda what the distros ARE.

What things are can change, sometimes for the better. Imagine if distros maintainers could spend their time doing something more productive than doing the same work as hundreds of others are doing.


But if hundreds of distro maintainers don’t do it then millions of users have to do it.


No, users don't do it. The application developers do it in their CI pipelines. Application developers should be the ones building and testing the app, not distro maintainers responsible for a dozen other applications.


Why should a developer of a free and open-source application, provided free of charge and without any guarantees, have any obligation to package and even test their software on random, third-party distributions?

If a distro wants to include their application, they have every right in the world to do so. So it's up to them to do whatever is necessary to enhance their product with the freely available product of the unpaid developer who created it.


That's the point of flatpaks - you don't.

You build one flatpak and it will work for all distributions.


History has shown that application developers are very bad at releasing good deliverables without security holes in the packaged libraries or other bad practices. And the sandboxing in Flatpak is actually meant to protect users from harm done by clueless devs, but it fails, because devs can build non-sandboxed flatpaks, and they will do it because they don't care.


History has shown that distro maintainers aren't perfect at patching security vulnerabilities either, and that sandboxing is useful regardless. It also shows that users want working software and will go through the effort of inventing new package formats like flatpak to work around distro maintainers. Maintainers now have a choice between complaining that everyone else is doing it wrong and eventually becoming irrelevant, or getting with the program and maybe even offering their expertise to accomplish what people want to do.


Flatpak hasn't been invented by users but by distro maintainers.


Why would you use software if you think the dev is too incompetent to package it?


Because I trust that distro maintainers catch the most obvious errors before packaging and releasing the software.


Package it for what? There are a lot of distros. Should the dev be packaging it for every one of them? Debian, red hat, suse, arch, other more esoteric ones? Which distro versions? How many years back should they be maintaining the packages?


…the context was Flatpaks and snaps, which directly address this by simplifying the process… the developer explicitly avoids that confusion.


How often do distribution maintainers actually audit the package source code?


I wasn't talking about audit but dependency lifecycle.


It’s called a distribution. Literally distributing the software. The distro deals with integrating all the packages into a single compatible system. This includes setting options to maintain system compatibility.

Packaging is not required for testing individual applications. That happens at build time and the developer writes the tests. These are not distribution specific.

The separation of concerns is very clear. If a distribution doesn't package the code then the user is left to build the application themselves. It's impractical for a developer to build and maintain their own packages for every flavor of every distribution.


The discussion is mostly about what a distribution should contain. I don't think a distribution has to contain all the possible software applications in existence.

Instead, I think distros have to provide the base packages like desktop environments and related software. All configured for compatibility and complying with the distro philosophy.

But third party desktop applications that are not directly related to the desktop environment are a different category. There is an endless amount of them with varying quality and resources. You cannot expect distro maintainers to spend time on all these random applications.

However, if a third party app is not included in a distro, it does not mean users have to build the software by themselves. That is the problem that Flatpak and Snap and others are trying to solve. They provide sets of distro-agnostic libraries that developers can target instead of having to target each distro separately.

This way a developer only has to package the app once, distro maintainers don't have to do extra work, and users can install applications without having to manually configure and build them. Everyone is happy.


That’s a reasonable position but it puts developer and maintainer experience ahead of user experience.

Flatpak and friends are a pain in the ass to use and offer a shitty UX. Having a single point of contact and well understood mechanism for software management is a feature for users.

I don’t expect my distribution to have every software package ever. I do expect it to fulfill my needs. As long as there are applications in the repository that do what I need I am happy.


No. I trust distribution maintainers a lot more than I trust other developers.


You can't have distro specific policies using this methodology.


Arch Linux would like to have a word


This. Also Void and Alpine, which have simple, no-nonsense package formats. We get that deb and rpm are a hassle to learn and write, but it's a complete false dichotomy to say snap or flatpak are the only alternative. In fact they push a wildly different model of software distribution, one that completely destroys "user unions", which is a crucial aspect of what distributions are.


> We get it that deb and rpm are a hassle to learn and write,

They solve problems that arch/pacman didn't even start thinking about. Like reliably updating an installation that wasn't kept in a tight loop with the upstream repo.

> false dichotomy to say snap or flatpak are the only alternative

we are slowly moving into the world of immutable base systems, like fedora silverblue for example. The last thing you want is for a random app package to modify your base system. Separating system and apps is a good thing.

Edit: names


> They solve problems that arch/pacman didn't even start thinking about. Like reliably updating an installation that wasn't kept in a tight loop with the upstream repo.

So they've decided to degrade the baseline UX because they want to optimize for people who don't keep their system up to date? As someone who has no problem keeping my system fresh, this isn't a use-case I want prioritized in my package manager.

> we are slowly moving into the world of immutable base systems, like fedora silverblue for example. The last thing you want is for a random app package to modify your base system. Separating system and apps is a good thing.

The "last thing I want" is a package manager that's invasive to use, doesn't have the latest software and is slow. Immutable systems can be a nightmare to actually use. Wrote your own software? Copying to /usr/local/bin is no longer an option, hope you like packaging up your one-off tool!


> So they've decided to degrade the baseline UX because they want to optimize for people who don't keep their system up to date? As someone who has no problem keeping my system fresh, this isn't a use-case I want prioritized in my package manager.

So, they decided that the update path should always be defined, from any state to the latest, without having to apply updates in a specific order where some needed intermediate steps may have disappeared. You know, being robust.

If the year of the Linux desktop is to happen, not borking the system during updates is a requirement. You don't have a problem with daily updates? Congratulations, but your grandma probably does.

> Immutable systems can be a nightmare to actually use. Wrote your own software? Copying to /usr/local/bin is no longer an option, hope you like packaging up your one-off tool!

An immutable system does not prevent a writable /usr/local/bin. Your one-off tool has no business messing with /usr/bin or /usr/lib.

Immutable systems are also minimal; they don't care about your additional software, as it is separated from the base system. You can update your software at any pace you want; nightlies if you want. It just cannot touch anything in /usr (with /usr/local being the exception).
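On Fedora Silverblue, for instance (as far as I know, /usr/local there is a symlink into the mutable /var):

    ls -ld /usr/local                # symlink to var/usrlocal, which is writable
    sudo cp mytool /usr/local/bin/   # works: backed by mutable /var
    sudo cp mytool /usr/bin/         # fails: read-only file system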


I actually don't want a package manager tuned for "my grandma" (the white whale of the Linux community). I'm a professional dev; Arch perfectly suits my needs, whereas Flatpak/Snap... cause excruciating pain every time I'm forced to interact with them (something I go out of my way to avoid at this point).

> Immutable system does not prevent writable /usr/local/bin.

I can't speak to the others, but Nix most definitely does not make /usr/local/bin writable. You have to package up your tools to use them.


> I'm a professional dev

So you have a particularly skewed perspective; you should try a bit of operations, or at least devops, to normalize it.

> Arch perfectly suits my needs

Good for you. You just need to realize that you are in the minority. Do you notice that Arch is not the dominant distro? There's a reason for that.

> where as Flatpak/Snap... cause excruciating pain every time I'm forced to interact with them

You are holding it wrong ;)

> but Nix

Nix is not exactly a typical immutable system; it has strong opinions about many things that other systems don't. You can't evaluate other systems by assuming they are like Nix.

Nowadays, even MacOS is immutable and people live with it just fine.


> So you have particularly skewed perspective; you should try a bit of operations, or at least devops, to normalize it.

I didn't say that you should use Arch on the server, but it's great for the desktop.

You can refer to this entire thread to see how well Flatpak/Snap... are working out. Pretty much universally reviled by developers (people who actually use Linux on the desktop).


> I didn't say that you should use Arch on the server, but it's great for the desktop.

And I said that apt/dnf solve problems that pacman didn't start thinking about yet. So we are in agreement here.

Flatpak seems to be working out pretty fine.

> Pretty much universally reviled by developers (people who actually use Linux on the desktop).

Two things:

1) Reviled by some very conservative types, often not willing to consider things from a different perspective than what they are used to. If they built the desktop, it would end up like Homer's dream car.

2) Don't you think that it is a problem when only developers use the desktop as it is? (Not really true, but let's consider that for the sake of argument).


> 1) Reviled by some very conservative types, often not willing to consider things from different perspective than what they are used to. If they would build the desktop, it would end up like Homer's dream car.

I would consider myself non-conservative to the extreme, which is exactly why I don't like Snap since it's built out of fear of problems that I simply don't have. I want the latest versions of any given piece of software and I want my package manager to be performant. I don't think those are conservative values.

> 2) Don't you think that it is a problem when only developers use the desktop as it is? (Not really true, but let's consider that for the sake of argument).

I don't think developers having a great computing environment is a problem at all, quite the opposite. I think it's very desirable and I think efforts that hamper that for the mythical "grandma" are indicative of the core UX problems that plague the "desktop linux" community.


The people who actually use Linux on the desktop en masse are Chromebook users. An immutable OS is just fine, as we see; it lets people reach their goals.


silverblue ;-)


you are right, brainfart ;)


> That's kinda what the distros ARE.

Yeah, but that's the worst part of what they ARE, and if they didn't have to spend so much time doing that, maybe they could do more innovative things.


Ah, a bunch of "you're holding it wrong"s, completely divorced from reality. This certainly is an argument about Linux.


AppImage is not like the other two, it is fundamentally just a way to make a self-contained binary directory into a single runnable file - a way to avoid having to tell the user to extract a tarball and run a particular file inside. Convenience, nothing more. The other two define entire alternate runtimes.


No, an AppImage still ships a dynamically linked executable + lots of dynamically linked libraries that only link against system libraries when strictly needed, e.g. glibc and libGL. Those files are bundled in a squashfs, and a script sets a custom LD_LIBRARY_PATH pointing into the squashfs content for the binary. Snap uses this technique too, but does more, for worse.

AppImages are not comparable with statically linked binaries.
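Roughly, the launcher script inside the image does something like this (a simplified sketch, not the actual AppImage code):

    #!/bin/sh
    # HERE = root of the mounted squashfs
    HERE="$(dirname "$(readlink -f "$0")")"
    # prefer the bundled libraries; glibc, libGL etc. still come from the host
    export LD_LIBRARY_PATH="$HERE/usr/lib:$LD_LIBRARY_PATH"
    exec "$HERE/usr/bin/myapp" "$@"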


I can't make out what you're disagreeing with - you seem to be arguing against something I didn't say? The point is that the contents of an AppImage - "a dynamically linked executable + lots of dynamically linked libraries" - works just as well without all the squashfs backflips. If you can ship an AppImage, you can ship a regular ol' tarball with a binary named "RunMe" inside. The purpose of an AppImage is simply to condense such a tarball to a single runnable file, for convenience - nothing more.

Meanwhile, Snap and Flatpak are package managers - what's more, they're invasive heavyweights with permanent costs that are even worse than distro package managers. Snap permanently runs a daemon, and Flatpak requires you to run programs with "flatpak run" - yuck! They are both trying to impose some dubious vision of sandboxed apps that require bureaucracy to communicate with each other, instead of just tackling the core issue of software distribution. Maybe you even like sandboxing! But I've seen no justification why that should be co-mingled with package management.


> Flatpak requires you to run programs with "flatpak run" - yuck

To begin with, Flatpak creates .desktop files, so no one should need to use that command manually.

Secondly, Flatpak has an exports folder you can add to your PATH that lets you run applications by their FQDN, e.g. org.gnome.Gimp myFile.png rather than gimp myFile.png.
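Concretely, for a system-wide install (user installs live under ~/.local/share/flatpak instead):

    ls /var/lib/flatpak/exports/bin                  # one wrapper per application ID
    export PATH="$PATH:/var/lib/flatpak/exports/bin"
    org.gnome.Gimp myFile.png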


> But I've seen no justification why that should be co-mingled with package management.

Building sandboxing on top of package management makes a lot of sense because you want sandboxing to work by default, and for that you need to identify the sandboxable things without making the user point to each one individually.


> want sandboxing to work by default

Yeah, wake me up when Flatpak is remotely close to doing this. Most "apps" simply disable the sandbox.

Not to mention I'm not going to trust "app" developers setting their own permissions. That's the job of package maintainers.


Afaik they disable filesystem sandboxing, not process namespaces. Still better if programs can't ptrace around, although this is indeed a big issue.

If someone knows why this sandboxing is better/worse than SELinux or AppArmor access rules, can you pls elaborate? I'd really like to know.


You don't need any fancy packaging to restrict ptrace: https://www.kernel.org/doc/Documentation/security/Yama.txt
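e.g., assuming your kernel has Yama enabled:

    # 0 = classic ptrace, 1 = restricted to descendants, 2 = admin-only, 3 = off
    sysctl kernel.yama.ptrace_scope
    sudo sysctl -w kernel.yama.ptrace_scope=1
    # persist across reboots:
    echo 'kernel.yama.ptrace_scope = 1' | sudo tee /etc/sysctl.d/10-ptrace.conf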


I'm not comparing sandboxing against SELinux/AppArmor. It's a social problem, not a technical one.

I'm comparing "app developers holding themselves accountable" to "package maintainers dish out consequences for misbehavior".

I have absolutely zero trust in the former, and lots of trust in the latter.


What do you mean? If package developers don't specify permissions correctly, their code doesn't work when sandboxed.


First of all, I don't like Snap or Flatpak. I'm only really interested in rpm-ostree as used in Fedora CoreOS and Silverblue. I don't hate AppImages; your claim that they're "fundamentally different" is just hypocritical.

> The purpose of an AppImage is simply to condense such a tarball to a single runnable file, for convenience - nothing more.

This still forces the user to learn the internals if the AppImage doesn't work. E.g. if MATLAB used AppImage, I'd have to extract the squashfs contents, fix the broken libraries inside, and then repackage it. Or I would have to write a script to start the executable outside. It's simpler from a purely technical standpoint when it's a tarball + wrapper script.

> Snap permanently runs a daemon, and Flatpak requires you to run programs with "flatpak run" - yuck!

Snap has many issues Canonical just refuses to solve (e.g. users without a home under /home), so I just ignore it. What flatpak does is arguably exactly what AppImage does: a wrapper script. Maybe more complex, but with additional features, and it's the same script for all packages. If you have 100 AppImages installed, you have the same thing as "flatpak run" in up to 100 slightly different versions. I can't see how that reduces complexity.


The problem with AppImage is that it doesn't tackle the core issue of software distribution: how to build the software so that it runs on all the systems you want it to run on.

Flatpak and Snap have SDKs for this purpose, but with AppImage an ISV is forced to guess which libraries need to be bundled and which may be dynamic link dependencies from the OS.

Not to mention the requirement for fuse2, which is being replaced with fuse3.


> And application developers have to essentially wait for each distro to repackage their app before it becomes available on that distro.

Nonsense. Developers can do their thing, and distro maintainers can do theirs. The two have very different priorities and goals.

I'm perfectly happy downloading the latest version of hip package X directly from the developer's website while relying on distro packages for all the boring, stable stuff.

The whole "it's either all hip or all boring" is a false dichotomy.


> The problem with that model is that it puts the burden on the distro maintainers to package every possible application for their distro.

That's ... kind of the job of a distro maintainer.


Can you name one distribution which isn't severely understaffed?


> The problem with that model is that it puts the burden on the distro maintainers to package every possible application for their distro.

You know, that was never a problem when we had to package the other 500,000 packages available in Debian, just when users want seemingly popular packages updated more than once per decade. Firefox, for starters.


I feel one day you will take a step further and join me on the Windows desktop path :)


> The problem with that model is that it puts the burden on the distro maintainers to package every possible application for their distro.

But isn't that the primary job of a distro?


For base packages like desktop environments, tools, libraries, popular applications, etc, yes. But I don't think they should be responsible for every end-user third party application. It's a massive waste of time for everyone involved.


I think they should be responsible for the packages in their package manager. That's the main thing that distros do.

For software that they can't do this with, or that isn't worth their time, they just omit it from the package manager.


For me, the "stop the world" distribution model is pretty broken, compared to FreeBSD Ports or similar. I don't ever want a maybe-years-old version of software, even with random security patches applied to it.


There are rolling release and hybrid (stable base + rolling apps) Linux distros as well.


I know - those ones don’t suffer from the same problem, typically.


This model doesn't necessarily work for software that developers don't want others shipping for them.


Setting aside that those developers are not good team players / open source citizens, they always have the option to ship a regular tarball with everything inside, a la Blender / Firefox / VSCode / Arduino etc...

I resent having to install and maintain another package manager, and another set of base runtime libraries. I won't do it. Give me standalone binaries, give me source that compiles, but don't give me a link to some third party app store thingy.


I wish all software could provide portable binaries. However, compiling portable binaries is hard. Not many developers have the skill.


Indeed. This seems like a tooling issue, and one that strikes me as more worthy of community attention than reinventing the package manager for the nth time.


Yes, and once we have the tooling to build portable .tar.gz application packages someone should come up with a tool to automatically upgrade them.


Perhaps. Extrapolating the idea that .tar.gz application packages should be standalone, a better system than an external updater might be that the aforementioned build tooling includes an "update" script inside the .tar.gz. Then the user can run them at their own convenience. This is similar to how Firefox's updater works, except Firefox itself does double duty as the update script.
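A toy sketch of what such a bundled updater could look like (URL and layout invented):

    #!/bin/sh
    # update.sh -- shipped inside the app's directory; run at the user's convenience
    set -e
    HERE="$(dirname "$(readlink -f "$0")")"
    # assumption: the vendor publishes the latest build at a stable URL
    curl -fsSL https://example.com/myapp/latest.tar.gz -o /tmp/myapp-latest.tar.gz
    tar -xzf /tmp/myapp-latest.tar.gz -C "$HERE" --strip-components=1
    echo "updated in place: $HERE"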


>Setting aside that those developers are not good team players / open source citizens

...yeah, no.

Open sourcing your code doesn't mean you necessarily want people packaging it up for distribution elsewhere, because when they inevitably fall behind the release schedule and people get stuck on old versions, you are often the one who gets the support triaging work. It is entirely reasonable to not want other people doing this.


Or you end up with useless packages like ancient versions of nodejs sticking around for half a decade.


At which point was nodejs five years older than the distro it shipped on?

Not being facetious, genuinely curious.


> At which point was nodejs five years older than the distro it shipped on?

https://askubuntu.com/questions/1259840/why-an-old-nodejs-ve...

Nodejs v4.x was "new" when Ubuntu 16.04 LTS came out. It was added to its apt repos, LTS releases are supported for 5+ years, and LTS policy is not to update major versions of software within a release.

So while nodejs was pumping out new major versions every 6 months, people running Ubuntu 16.04 and installing it via "apt-get install nodejs" were stuck on the same ancient version.


This is what is expected on LTS releases, and it is exactly what people who highly value long-term support want. That said, I think modern security and modern software development practices have obsoleted a lot of the thinking behind LTS releases.


Sadly, modern software development practices have neutered a lot of LTS releases -- but the need for real LTS releases is stronger than ever.


> but the need for real LTS releases is stronger than ever

Actually, I think the LTS mentality is one of the bigger problems in security right now. The hardest problems I've had to deal with in tech all stem from LTS:

* Getting a not-insubstantial budget to update an essential but forgotten server with custom software and an unpatched Heartbleed problem.

* Convincing developers to even look at old web services that have massive SQL injection and were built with libraries with known (six years ago) exploits, all running on some 13 year old version of RedHat.

* Inevitable meetings where you try your best to avoid saying "I told you so" when a disclosure, cryptolocker, or malware infestation happens because of the above. These are no fun because they quickly devolve into career-ending bingo.


(This entire comment is about my use of my own machines, not about the use of machines in an enterprise setting. In the enterprise, much of this is very, very different)

From a security point of view, yes, you have a point.

But I blame the problem on the industry shift to lumping security and feature updates together. I hate, and prevent, automatic software updates because I don't want feature changes to happen except if/when I'm ready for them. Feature updates are very disruptive, and sometimes break things horribly.

If I could get just security updates, I'd allow those to happen automatically without thinking twice. LTS releases were a (poor) compromise to accommodate those of us who can't, or won't, take on random feature updating.

Sadly, the LTS time periods are getting so short that they're often not effective for this purpose anymore -- so in those cases, I resort to blocking updates entirely until I'm ready for them.

That's also a bad security place to be. I just don't see any other way to handle it aside from separating security and feature updating, like we used to do. But that's not going to happen. So all I'm left with is LTS releases and blocking updates.


That is very much missing the point. They were not "stuck", they explicitly asked to keep using that version and get support for it. Same reason Windows ltsc exists and is popular with such customers.


That's a feature not a bug. People use LTS because they need stable platforms they can rely on.


Or that distributions don't want to ship. Or don't want to ship/maintain as frequently.


Such developers can stand up their own repository and distribute their software there, where they have complete control. And users can still install and update that software as normal.


Just use AppImage then. Or create a good old tarball. Problem solved.

These should be absolute exceptions, not the rule, anyway.


Good, then we stay away from such software.


That sounds mutually beneficial. :)


You can


> That's why distributions are typically either giant volunteer-run organizations like Debian or companies like RedHat or Canonical

Quite a few maintaining teams are the exact same people in Debian and Ubuntu.


That works for big important software like Firefox or GCC, but less well for someone’s niche hobby project that maybe 5 or so other people will use


x86_64 ELF (SYSV) is a pretty good lingua franca, why pollute my machine with some crappy pauper?


The last AppImage I installed took 800 MB. It was approximately 720 MB of libraries and just 80 MB of core program. Using self-contained images is simply not scalable.

I have some experience with packaging, and for standalone applications, dependencies aren't a big deal; producing deb packages for different targets is not difficult. The last (only) breaking change across different targets I can remember was old graphics libraries (I think it was gtk2-based wxWidgets, removed in newer Ubuntu versions).


> The last AppImage I installed took 800 MB. It was approximately 720 MB of libraries and just 80 MB of core program. Using self-contained images is simply not scalable.

This is one of the points they are missing with snaps and flatpaks.

These formats might be OK for desktops, but are absolutely awful for servers, containers, etc., which, incidentally, are where Linux dominates.

Do I want to spend my time worrying about setting up a package cache, because a handful of updates are costing me a small fortune in transfers? Why would I need to think about what snap is doing in my small instance with a 16GB volume, when I use it to run a certainly small application?

Snaps are an absolute nightmare in these scenarios.


Not sure about Snap, but at least for Flatpak, it has runtimes and runtime extensions that Flatpaks can share, which already reduces the individual Flatpak size. Furthermore, all runtimes, extensions, and Flatpaks live in a single OSTree repository on the system, which does automatic de-duplication at the file level.

While probably still less efficient than a properly maintained and integrated distro, where all software uses a single set of shared core libraries, it's actually (in the Flatpak case) quite optimized and has additional benefits, such as installing a set of apps that don't share a common set of required library versions, which would normally be impossible.
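You can see the sharing on a live system, e.g.:

    flatpak list --runtime                            # runtimes installed once, shared
    flatpak list --app --columns=application,runtime  # which runtime each app targets
    du -sh /var/lib/flatpak                           # one OSTree repo, file-level dedup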


> These formats might be OK for desktops, but are absolutely awful for servers, containers, etc., which, incidentally, are where Linux dominates.

Containers have already won the war on the Linux server; there's no point in Flatpak etc trying to compete here. Additionally, desktop software is a completely different design paradigm from server software. I wouldn't even say they make sense to be in the same package manager.

Although they do share tech, ostree, namespacing etc.


> Containers have already won the war on the Linux server, there's no point Flatpak etc trying to compete here.

There are things that you just cannot run inside a container.

Also, in cloud environments, default provisioning is not done with containers, which makes sense, but directly causes the issues I mentioned, e.g. Amazon provisioning SSM on Ubuntu Server instances.


The whole point of a distribution is that this job is done by the distributors, not the authors.


Is there a distribution (or whatever one would call it - a Linux) where the aim is not to manage the whole end-user system of OS+apps, but just the OS? E.g. similar to Windows, or Android?


No distribution forces you to use package managers for everything. Even before flatpak/snap, distributing (especially commercial) software as just archives or executable installers was/is perfectly possible.

The caveat is, there's no common baseline for libraries. Graphics libraries, the libc itself, GUI widget libraries etc. will appear in different version combinations in different distributions, since they have different release schedules.

So if you want to go the Windows/MacOS route and ship a readily compiled program for Linux, you have to vendor in a loooot more libraries, and they have a higher risk of breaking due to incompatibilities (kernel interface for your graphics driver changed, configuration file syntax for your font renderer changed, etc.)

Flatpaks/snaps try to solve this (poorly) by vendoring the entire kitchen sink and then some, which just creates more bloat and a worse DLL hell than Windows could ever have dreamed of. So using it for everything isn't really feasible still.


> they have a higher risk of breaking due to incompatibilities (kernel interface for your graphics driver changed, configuration file syntax for your font renderer changed, etc.)

Simply don't vendor glibc and graphics drivers and you will be fine. Vendoring drivers doesn't make sense anyway as your application will be obsolete by the next HW cycle.


glibc breaks regularly, as does mesa occasionally, so it would realistically be required if you wanted win32-style 20+ years of backwards compatibility.

And there are a lot of more obscure examples: fontconfig, e.g., at some point changed its config file format in a non-backwards-compatible way, and now some Steam games crash on startup because the developers earlier vendored it to get around its ABI breaking repeatedly.

And that sort of software doesn't really become obsolete. Steam and GOG give games a 10+ year long tail of slow but steady sales that doesn't really justify constant maintenance releases, but still lets people enjoy good games and serves as advertisement for the developers' newer games.


> glibc breaks regularly, as does mesa occasionally

Neither of this is true.

> And there is a lot of more obscure examples: fontconfig e.g. at some point changed their config file format in a not backwards compatible way, and now some Steam games crash on startup because the developers earlier vendored it to get around its ABI breaking repeatedly.

Bundling fontconfig but not its internal config is retarded, yes. Really, games have no business looking at system fonts at all.


Calling everyone except yourself too retarded to figure out compiling Linux software is not a productive discussion. And glibc/mesa related breakages are well documented, if you care.


Arch Linux is pretty faithful to shipping software exactly or as close as possible to as the upstream authors intended. It's on a rolling release which means you kind of get a firehose of nonstop major version updates though, whether you want them or not.


how does that work in practice?

Upstream didn't build with all the dependencies at the same version as the latest in the Arch repo. Nor are most upstream developers bumping all their dependency versions constantly.

If you're a big player like Ubuntu/Debian, you can go to the authors/upstream and ask them to fix the program to work with their dependency versions (in, say, an LTS release). Most would be glad to, because it's just something upstream needs to do once every couple of years and it doesn't need maintenance. If even that's too much and you want something even more hands-off/timeless, then you make an AppImage. It'll work forever.

With Arch you'd need to constantly monitor that the thing doesn't break? Are they just #yolo'ing dependency versions most of the time? (It'd probably work 99% of the time...)


In practice it works because GP is exaggerating. Arch does hold back library and/or program updates until they work together, or simply ships multiple major releases of libraries, and has a separate testing branch.

Right now e.g. there's openssl (still at version 3.0.x, because nobody supports 3.1.x yet) and openssl-1.1 (1.1.x-y) to handle the rather slow migration of upstreams to the new major release.


Fedora Silverblue https://silverblue.fedoraproject.org/

You're expected to run most apps as flatpaks.


I stopped distro-hopping after landing on Silverblue a few years ago (back when it was still called Fedora Atomic Workstation). While it doesn't satisfy all use cases, I view it as a boring (in a good way) 80-percent solution that normally just works.


Silverblue doesn't get a lot of press but it makes a really solid desktop OS.


Fedora has always been quite bad at branding and making their distro appealing.

It's probably one of the most solid Linux distros at the moment, and their website makes it seem... boring. No screenshots, features, mentions of what's available (like apps), that Fedora uses newer kernels and graphics drivers, that you can run Steam, that apps are sandboxed, etc.

Stark contrast to something like https://vanillaos.org/


There are operating systems - like FreeBSD - which have a very strong and clear distinction between the Operating System base and third-party software.


For servers there's Flatcar Container Linux, which provides just enough to run containers. Everything else is provided by the container packager.


Slackware, perhaps?

Honestly, if you just want the naked OS, you don't need a distro at all.

Or just don't use the package manager.


I'm curious what a Linux would look like that behaved more like Windows or Android, in the sense that there isn't such a blurry line between the OS and its applications. For example, installing a C# or Go compiler on Linux is great with a package manager. But doing the same with C or Python is different? This is basically the Unix "I'm not just an OS, I'm a platform for building and running C programs" heritage that isn't properly hidden, I think.


We've reached the point where the C ecosystem has so thoroughly given up on having a package manager that isn't a Linux distribution that Microsoft added WSL, and Docker(-compatible) Linux images are the most widespread distribution format. I don't think we can still figure out a way to decouple the two again, and without that, we can't really create a Linux environment that isn't a package manager with an email client.

And to a degree, the line is blurry with Windows as well, what with how deep MSVC/win32 is integrated into the system. .NET took decades to dis-entangle itself from Windows and become a proper standalone platform, and Microsoft's attempts at getting people to use UWP instead of win32 is ongoing and not very successful so far.


> in the sense that there isn't such a blurry line between the OS and its applications.

Interesting. It never occurred to me that there's a blurry line between the two. It seems to me that there's a very hard line between the two. I'm not sure I understand what you're saying here.

> For example, installing a C# or Go compiler on Linux is great with a package manager. But doing the same with a C or Python is different?

I'm confused by this question, so forgive me if this answer isn't really addressing it. A package manager is a convenience, nothing more. Using one makes your life easier in a number of ways. But not using one isn't really that much more difficult. Package managers are 100% optional.


Hot take: package distribution is a 'fake job' that doesn't need to exist, or exists only for the make-work of packaging. Tools like flatpak, snap, appimage, containerization, etc. have made adapting software to different distributions unnecessary.


Yeah, if you don't mind hugely increased start-up times and RAM usage, less curation, and more crap, then sure, there's no need for package distribution.

I personally do mind very much. Just the differences in startup times between apt and snap applications are huge and I would absolutely despise working with such a sluggish system. I would rather build everything myself from source if forced to.


Not to mention it introduces a single point of failure that, once compromised, can start pushing malware directly to users.

With maintainers in the loop, there is at least one more person who can notice something is fishy. And there is usually some time before packages are updated, so there is more time to notice an attack.


> With maintainers in the loop, there is at least one more person that can notice something is fishy

Also one more person who can inject malware or break something. How did that Debian keygen issue happen again? Oh right.


> How did that Debian keygen issue happen again?

The people on the openssl mailing list said it was fine. That's how it happened.


And yet the upstream developers themselves never incorporated the change; it happened entirely because an unnecessary third-party middleman made unnecessary changes.


That snap slowness is snap's problem, not a general non-apt problem. Flatpak doesn't suffer from it.


The mass proliferation of electron and other such webpages-as-desktop-applications would seem to indicate that the average user doesn't care for any of those things.


So many weird assumptions.

First of all, there is no such thing as an average user. Also, it is not relevant to the discussion. I care about what I want in a system, not what some imaginary users want.

Now, we could stop there, but I will play: Electron solves a real problem for developers. Writing cross-platform GUI apps is a real pain. Yes, there are native solutions, but ensuring the user has the exact same experience on every platform tends to be orders of magnitude more costly compared to web-based solutions. (That, or they are exotic options like Lazarus with Free Pascal that have a much smaller developer pool.) Many apps wouldn't even have Linux ports if they were not written with Electron.

Now, why do users accept them? Why are they not out-competed by native solutions? Oh, honey. Why do I have Microsoft Teams installed? Because I like it? Hell, no! Because I need it for work. Why do I have the Discord client? Because it is great? Nah, I long for good old IRC, but Discord is where the people currently are. Do I think the Epic Games launcher is such a great app? Nah, they bought me by offering free games.

Users tolerate shitty software for many reasons, mostly because they have to. It does not follow that they don't mind software being shitty.


>First of all, there is no such thing as an average user. Also, it is not relevant to the discussion. I care about what I want in a system, not what some imaginary users want.

In which case I hope you are prepared for software to get many times worse, because the software industry doesn't give a tuppeny fuck what you want in a system, they care what sells to the lowest common denominator. And that means slow, bloated electron web sites shoehorned into the desktop because the pool of mediocre JavaScript developers that can extrude a minimum viable product is huge compared to the pool of native developers of any language.

And it will continue this way for as long as it's accepted. So, forever basically, because the average user you claim doesn't exist will put up with anything placed in front of them without significant enough complaint to impact profits.


Debian packages are maintained by volunteers, the software industry has nothing to do with this.


Open source is part of the industry.

You keep trying to hand-wave away this relatively straightforward point: users are already used to, and accepting of, slow and bloated applications. Therefore, the downsides that app containerization introduces are irrelevant to most people that will use them.


There is a very good and strong use case for a "distribution" of packages working well together. But those packages need to be selected carefully.


I agree but I don't think the software packages themselves should be held back by the available time and attention of packagers. In the new world a packager or distro creator can just pick and choose what flatpaks, snaps, appimage, etc. sources they deem good enough to push to users. They shouldn't be a roadblock in the way of users getting the latest version of software.


Then you install the new version, find out it requires a new version of python and all its libraries, but you can't install it because pytorch only works on old versions of python.


That doesn't happen when apps are packaged in their own containers with all dependencies. The new version you installed has its own python dependencies.


I agree. Devs would like to statically link and have their app just work. Maintainers don't want that to happen, for reasons that IMO have far less benefit than claimed. Flatpak and snap are ultimately efforts to get around the vortex of maintainers, maintainer efforts to keep things from being statically linked, and all the complexity that entails. I think there's still currently value in distro maintainers doing things like getting KDE running on Wayland using PipeWire, but IMO that role should be limited to configuration and compilation only. There should not be any distro-specific patches in existence.


> Maintainers don't want that to happen for IMO reasons that have far less benefit than claimed.

You have apparently no idea how much it costs to recompile everything, every single time that openssl fixes a bug.


I don't want Debian to be like the Android app store, where there are thousands of apps that work badly and overflow me with ads.

I much prefer the f-droid model, of having curated repositories to keep crap outside.

Also, I can't understand why people on the internet think that upstream developers are omniscient. They make lots of mistakes and errors. Distro maintainers fix a lot of things, and send the fixes to the authors.


There's nothing stopping someone from making a flatpak or other similar tech feed or 'store' that's only the curated apps they deem appropriate for users.

Bugs should be fixed upstream, not kept in distro-specific silos. There's no reason why only a packager can fix some upstream issues or become a contributor. On the contrary, shipping your app as a universal tech like flatpak means Red Hat, Debian, Arch, or any other users can use it, develop for it, and send fixes upstream.


The strawman sounds compelling on the surface, but that's not what happens in practice: Bug fixes are sent upstream by all relevant distributions, and they regularly cooperate with each other directly, as well as upstream. The "new world" of users of all distributions working together has been reality for the past 20+ years.


> Bugs should be fixed upstream, not kept in distro specific silos.

This is mostly a Debian issue, fwiw. Debian is literally notable because they're obsessed with making sure that any package in their repositories keeps the same "API"[0], no matter how old the software is. The result is that Debian packages can end up heavily patched relative to upstream and to other distros, but that's usually also because the software in question is half a decade old.

With other distros, packaging changes to upstream usually just reflect a preference for a certain style of configuration. To pull another example from Debian: nginx ships with sites-{enabled,available} folders and is configured to load from sites-enabled by default. This matches the configuration style used for apache2, which its associated tools assume, even though upstream nginx just uses a conf.d folder and has no extra tools to facilitate anything fancy with sites-{enabled,available}.
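Concretely, the Debian convention amounts to this (standard Debian paths):

    # Debian-style: drop configs in sites-available, symlink to enable
    sudo ln -s /etc/nginx/sites-available/example /etc/nginx/sites-enabled/example
    sudo nginx -t && sudo systemctl reload nginx

    # upstream nginx's default is just a directory glob in nginx.conf:
    #   include /etc/nginx/conf.d/*.conf;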

The extreme end is nix, which actively requires you to have the upstream written with nix in mind because nix will basically demand you configure the source code in ways to accommodate for it.

[0]: This includes actual software-intended interfaces and the ones hacked together by users by ie. reading out logfiles.


And even Debian tries to upstream fixes where feasible, which isn't too uncommon for security vulnerabilities; they tend to lurk in old parts of codebases that haven't been refactored for a while.


> The extreme end is nix, which actively requires you to have the upstream written with nix in mind because nix will basically demand you configure the source code in ways to accommodate for it.

I'm pretty sure this isn't true.

What does upstream emacs have to do for Nix to provide https://github.com/NixOS/nixpkgs/blob/nixos-22.11/pkgs/appli...

I will agree though that applications which have their own update mechanism or do other things that make reproducibility harder are much more difficult to create a Nix expression for.


Bugs should be fixed upstream but unfortunately sometimes upstream does not see them as bugs - for example, telemetry.


What's the base image for the containers? Do none of your Dockerfiles have apk or apt commands?


It's best practice not to have any shell tools in your app container, including package managers. It bloats the image and can be a security vulnerability if a zero-day exploit hits the app. Ideally a container is something like distroless, which just has the libc and the dependencies you care about and nothing else, not even bash.


But where do your dependencies come from? Your compiler? Are you building everything from source after libc?


If you're shipping modern code like Go or Rust, you have a static build with no real dependencies. If you're shipping a scripting language like Python, you're probably going to use their base images, and if you're shipping native C/C++ you have to figure out your risk tolerance for trusting a distro to ship good dependencies vs. just building them yourself. It's not hard to build all your deps in a container, and it's arguably the best security practice, since you have total control of and knowledge about their versions.
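A sketch of that pattern as a multi-stage Dockerfile (image tags are just examples):

    # build stage: full toolchain, shell, package manager
    FROM golang:1.21 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /app .

    # runtime stage: no shell, no package manager, just the binary
    FROM gcr.io/distroless/static
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]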


Wouldn't something like that be really hard to debug in real-world scenarios?


As long as there is unavoidable software made by organizations like Mozilla that don't care for their users' choices, there is an important role for distributions: to make sure what you get is what you want.


I agree. Maintainers are useless middlemen (at best) who only exist because of Linux userland's particular diseases and a desire for distros to rule over their own little repo fiefdoms.

No other desktop OS has done it like Linux, and for good reason. People have been citing this as a reason they don't want to use Linux as a desktop for decades, to mostly deaf ears, who then turn around and wonder loudly why no one wants to use their OS. Hell, even Linus Torvalds himself complained about it.


I have no earthly idea what you are talking about.

For decades, Linux package managers have been the killer app for Linux. They made installing and updating every single one of your applications trivial. You didn't google for sketchy download sites and unsigned exe's. You didn't have to fight the system to cleanly uninstall things. Even release upgrades were the smoothest thing ever. In 25 years, I've never had a Debian release upgrade go wrong.

Anyone bitching about package managers as user hostile is a flat out idiot.


What's wrong isn't so much the package managers as the necessity for them and for an army of third party volunteers to maintain packages and all the problems that predictably arise from that. Linus famously complained about how things work at DebConf 14[0], but I guess he's an idiot for doing so? That's a pretty hot take but whatever.

If a package is not in the repo? Sorry, you have to compile from source. Want a newer version? Compile from source and hope that the build environment dependencies are in the repo. Want an older version for some reason? Break out docker or KVM so you don't break your system.

None of this is fundamental to the model, that much is true, but in practice it is how all Linux distributions using a package manager/repo model without things like Snap, AppImage, and Flatpak work.

Here's the best part though: Even with Flatpaks and AppImage you can still use a repo! In fact Fedora Silverblue, which uses an immutable base system and installs everything through Flatpak and Toolbox, uses a Fedora controlled Flatpak repo by default.

[0] http://saimei.ftp.acc.umu.se/pub/debian-meetings/2014/debcon...


This has to be the dumbest take on this thread.

If you want the dystopian hellhole you seemingly long for, just use Android and enjoy the ad-infested crapware? No reason to moan about things you seemingly don't understand.


There are plenty of projects to semi-automatically produce packages for a large number of distributions starting from just a git repo of the software to package.

Sure, packaging software is a bit of a thankless task, but with enough automation, packaging thousands of bits of software on every git commit should be doable by just a few volunteers.


What I would like to see is:

1) a unified database for package names (it's all unnecessarily ad hoc right now, with different distros having different policies for capital letters, whether libraries are prefixed with "lib", whether python packages have "py" or "python" or "py3", etc.)

2) a standard format for declaratively describing how to build a package.

Basically, we have FreeBSD ports, Arch PKGBUILDs, Void templates, Gentoo ebuilds - they all do more or less the same thing in the same way, they all work really well, and they're all incompatible for purely incidental reasons. I'd like to see these incompatibilities papered over in such a way that I can write one template and generate all the others.
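For reference, here's roughly what one of those recipes looks like: a minimal Arch PKGBUILD for a stock autotools project (the others differ mostly in syntax):

    pkgname=hello
    pkgver=2.12.1
    pkgrel=1
    pkgdesc="GNU hello"
    arch=('x86_64')
    url="https://www.gnu.org/software/hello/"
    license=('GPL3')
    source=("https://ftp.gnu.org/gnu/hello/hello-$pkgver.tar.gz")
    sha256sums=('SKIP')   # a real PKGBUILD pins the actual checksum

    build() {
      cd "hello-$pkgver"
      ./configure --prefix=/usr
      make
    }

    package() {
      cd "hello-$pkgver"
      make DESTDIR="$pkgdir" install
    }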


There are already many standard build specification formats: CMake, autotools, Meson. For each, most distros' package managers can build the software with very little configuration; e.g. Debian's debhelper will cover most standard build systems, and Gentoo has eclasses that you just need to inherit in a package. I'm not sure that adding another makes things easier.

Configuration options will need to be specified manually if you want any but I don't see a way around that as they are both package-specific and also something where distros want to differ.

For dependencies there are usually language-specific registries, and pkg-config for C/C++-land. Some distros (e.g. RPM-based ones) already support specifying dependencies by pkg-config names or library SONAMEs. There is no distro-agnostic way to specify your dependencies except for a README as far as I know; this would be something worth specifying.


You'll invent a new standard and have N+1 standards.

It's not as if package formats can't be declarative - just that some issue always turns up in package X or Y that the declarative format didn't anticipate.

I think there may indeed be tools to convert these formats, but in the end the devil is always in some distro-specific set of details that makes life miserable, e.g. your library has some option enabled or doesn't, and some other package in that distro needs it, and so on and on.


Congrats you just invented containers :P


Not sure how you figure that. Containers are completely orthogonal to both of my requirements, which again are 1) a globally canonical package namespace, and 2) tooling to convert some standard recipe file format into distro-specific recipe files (which would require point 1 to work).


I take your point (it's the general point many use to argue for Snap/Flatpak/Nix/AppImage/0install/Homebrew/&c.) but if you want to ensure you're not replacing per-distro-package-manager-fragmentation with completely-arbitrary-chaotic-package-manager-fragmentation there needs to be some unity & consideration for what users want in order to ensure they'll willingly subscribe to your system of choice.

While I wouldn't call Flatpak "popular" with users per se, it's probably one of the least-worst alternatives that have come out. The horse Ubuntu has backed (Snap) may be the most used by virtue of being rammed down users' throats by projects with existing user capture (e.g. LetsEncrypt), but that's not going to make the debate go away: it just strengthens the argument to return to distro packaging.


> (Snap) may be the most used by virtue of being rammed down user's throats by projects with existing user capture (e.g. LetsEncrypt)

Care to expand on this? I'm not in Ubuntu-land but can't imagine how a service with an open protocol and multiple clients forces you onto Snap.


I just mean distributing software exclusively through 1 channel (or at least only officially supporting 1).

This wouldn't be the worst situation (devs needing to spend more time preparing myriad distribution formats is a valid problem) if the specific 1 channel being used weren't a broadly resented one.


> What you are really saying is you want all software developers to publish Debian packages

That's exactly what I want. Developers and Linux distribution maintainers should be working more closely with one another instead of reinventing static linking with "snaps" or whatever just to avoid working with the community.


Yes

(I'm not the guy who wrote that comment but I would very much like this)


It's not actually that hard for cross platform apps, at least. We do this in Conveyor and can build debs and apt repos, with inferred dependencies, even from Windows and macOS:

https://conveyor.hydraulic.dev/7.2/outputs/#linux

It works by downloading the Ubuntu package index and using it to look up ELF dependencies; devs can also extend the inferred list with other package names if they like. The deb installs an apt sources file and public key so users can stay entirely in the GUI. The control scripts are auto-generated but can have extra code appended to them via the config file if the developer so wishes.

It works pretty well. Conveyor is used primarily for cross-platform apps that use {Electron, JVM, Flutter, C++/Rust} and those don't normally have very complex Linux specific dependencies. Also, in practice Ubuntu and Debian and its various forks are close enough that package metadata is compatible in basically all cases.

People do ask us for Flatpak sometimes. Probably RPM is a higher priority though. Those two get most Linux users, they're stable, there are portable libraries that can make them and they can package anything without code changes. Flatpak requires mastering a new file format and the mandatory sandboxing is guaranteed to cause problems that will end up in the support queue of the deployment tool.


As someone who does this for a lot of distros (ZeroTier), I can say that it is hell and I understand why devs don't want to do it.

Two specific examples:

(1) A lot of Debian-derived distros decided to rename and/or re-version OpenSSL for no good reason (pedantry is not a good reason), meaning that if you depend on OpenSSL/libcrypto your packages will mysteriously break all over the place because you're not using their versioning scheme or naming convention. We're not doing it now, but we may switch to statically linking this.

(2) We use a UPnP library called miniupnp in the current version. We have to statically link it because of issues like (1), and also because some distros were for a time stuck on a version with security bugs that they would not upgrade, since doing so would break other packages.

So imagine (1) and (2) ad infinitum times a hundred little distributions and Debian forks and... it's completely untenable. Static linking solves many of the problems but that defeats some of the purpose of package management.

We do it for all the major Debian and Ubuntu distributions by using a qemu-chroot-based build farm that builds on each distribution+architecture combo with actual qemu binary emulation. It's over 200 gigabytes of chroots and takes two hours on a 64-core Threadripper to build all the artifacts. We tried cross-compilation and had too many problems: it builds fine, seems to run fine, then users complain that some mysterious glibc symbol was missing on their system. We switched to building on actual chroots and most of those problems went away. Most of them.
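
For the curious, one such chroot can be bootstrapped roughly like this (suite and architecture are illustrative, not ZeroTier's actual farm scripts):

    sudo apt install qemu-user-static debootstrap
    sudo debootstrap --arch=arm64 --foreign bullseye ./bullseye-arm64
    sudo cp /usr/bin/qemu-aarch64-static ./bullseye-arm64/usr/bin/
    sudo chroot ./bullseye-arm64 /debootstrap/debootstrap --second-stage
    sudo chroot ./bullseye-arm64 /bin/bash   # build the .deb in here, under emulation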

Snap, FlatPak, and Docker are all variations on the same basic conclusion that many developers reached long ago: "fuck it, just distribute software in the form of tarballs of entire Linux installs."

The only things worse are Windows MSI installers and Windows drivers, though with those at least there's less platform variation, so once you battle those horrors the result mostly works on most systems. Mostly. But whoo boy is the Windows installer system horrific. I shall quote Pinhead: "I have such sights to show you..."

Apple is the easiest because they have less legacy baggage than Windows and only one "distribution." They do have some horrors around signing and notarization and we don't distribute in the Mac App Store (yet?).

A major reason SaaS delivered through browsers is eating the world is the immense and totally unnecessary pain of distributing apps for major OSes. Even Apple is quite a bit harder than making a web site that you visit in a browser. All this pain is 100% the fault of OSes not caring about developer experience and, in the case of Linux, the multiplication of endless "vanity distributions" with no benefit to the community.

If you want packages instead of tarballs of OSes, Linux distributions could (1) consolidate and (2) invest some time in the very boring totally un-sexy work of massive amounts of clean-up and standardizing things as much as possible. But it's more fun to make another vanity distribution I guess. You'll get it right this time and everyone will switch to your distribution.


> Snap, FlatPak, and Docker are all variations on the same basic conclusion that many developers reached long ago: "fuck it, just distribute software in the form of tarballs of entire Linux installs."

..wait a minute

> (2) We use a UPnP library called miniupnp in the current version. We have to statically link it because of issues like (1) and also because some distros were for a time stuck on a version that had security bugs that they would not upgrade because it would break other packages.

Wouldn't this solve the same problem without adding snap/etc?


It solves some of the problems, but not all, and it requires a lot of testing and trial and error and setup. Sometimes libraries are hard to statically link. Sometimes you have problems with different glibc versions.

I forgot to say: some distros that people still use are too old for us to build for, so we build fully static binaries for those. But you can't really statically link glibc, so we build them on Alpine statically linked with musl. That usually works okay. It's the best we can do for the old stuff.
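
For example, something along these lines (myapp.c standing in for the real project; the Alpine tag is just an example):

    docker run --rm -v "$PWD":/src -w /src alpine:3.17 sh -c \
      'apk add --no-cache build-base && gcc -static -O2 -o myapp myapp.c'
    file myapp   # reports "statically linked" - no glibc symbol-version surprises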

We've invested a lot of time in this. For most developers I totally understand "fuck it."


I have had a lot of success with building against an old glibc version and then bundling all required libraries except system ones (glibc, libGL, libX11, etc). Anything optional needs to be loaded with dlopen/dlsym, but for many things that detail is already handled by libraries like SDL. That's generally the common solution for games (well, it used to be at least; today most just build against the Steam runtime and call it a day). It's really not that different from Windows IMO, except that the set of base system libs that you can rely on to be backwards compatible and don't need to bundle is smaller.

You can still package that distro-agnostic build up into distro-specific package formats to make updates easier for users, without needing a zillion different build roots just for Linux.
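
The bundling half of that approach usually comes down to a tiny launcher script shipped next to the binary, something like this (names and layout are illustrative):

    #!/bin/sh
    # resolve the app's own install directory, prefer its bundled libs,
    # and fall back to the host for glibc/libGL/libX11 etc.
    HERE="$(dirname "$(readlink -f "$0")")"
    export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$HERE/bin/myapp" "$@"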


Gentoo uses packages shipped in deb format all the time; it's not just a Debian or Ubuntu thing.


Yes. Deal with it.


Building apt packages isn't the issue; libraries are. Debian ships very old libraries, forcing applications to run in Docker or bring their own copies of system libraries.

Look at Arch Linux: we just write a short AUR script and the package is integrated. Once the script is written, everyone can use it. This is possible because Arch always ships recent libraries.
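
That "short script" is a PKGBUILD; a skeleton one for a hypothetical "myapp" looks about like this:

    pkgname=myapp
    pkgver=1.2.3
    pkgrel=1
    pkgdesc="Example application"
    arch=('x86_64')
    url="https://example.com/myapp"
    license=('MIT')
    depends=('openssl')
    source=("$url/releases/$pkgname-$pkgver.tar.gz")
    sha256sums=('SKIP')

    build() {
      cd "$pkgname-$pkgver"
      make
    }

    package() {
      cd "$pkgname-$pkgver"
      make DESTDIR="$pkgdir" install
    }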


Conversely, `pacman -Syyuu` doesn't complete reliably without user intervention, which is far from ideal. I say this as an Arch user - the expectation that I'm going to manually attend to package upgrades in practice leads to me using outdated packages.


I'm genuinely curious about this, as I see people say it a lot but it's not my experience at all.

I have an Arch Linux desktop (KDE + AMD), I update it every few days, and it's always fine.

I also have a shell VM that runs IRC etc that often isn't updated in months, and I run `pacman -Syyu` and everything works fine.

I've never had the infamous 'Arch updates are unreliable' issue. Is it certain packages that are more prone to it, or something?


I haven't had it often but I've had stuff break sufficiently badly that I couldn't boot to a graphical environment before.

I agree with the other posters that a lot of the stuff that breaks is in the AUR, but practically most users rely on the AUR, so if the AUR is prone to breakage, so is the typical user's Arch Linux environment.

I'll also freely concede that part of this stems from the common practice of treating the AUR and pacman packages interchangeably through helpers, but that's a practice that's almost necessary for reasonable use and is present on the official wiki.

I think ultimately Arch is just a UX which was designed in a simpler time, when users could reasonably expect to account for all the packages they installed.


>> but I've had stuff break sufficiently badly that I couldn't boot to a graphical environment before.

Is there some "restore to previous state" (Windows-style) feature for that case? Because I'm too afraid to install Arch on my only working PC.


It's probably because you run updates every couple of days that you don't have the issue. For me, I have a laptop that can go untouched for up to a month, and it's not fun catching up on a month's worth of updates. Lots of packages have been replaced, I usually have to futz with the keyring, etc.

Also worth pointing out - the fact that you can so easily perform a partial update with pacman and totally break your system is infuriating to me. If an upgrade fails, it should revert to a cached package database. Otherwise an update will fail, the user will go to install something else, and all of a sudden nothing can link to libcrypt because you installed ntpd, which upgraded all of your auth stuff after an OpenSSL vuln was discovered, and everything is hosed.
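
For reference, the difference between the supported and the foot-gun invocations (somepkg being any package):

    pacman -Syu            # supported: sync databases, then upgrade everything
    pacman -Sy somepkg     # unsupported: new database + old system = breakage
    pacman -Syu somepkg    # fine: full upgrade plus the new package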


> If an upgrade fails, it should revert to a cached package database.

I'm prettttty sure it does now? When did you run into this problem? It definitely used to be a problem, a long time ago (I ran into it once when I ran out of disk space mid-upgrade), but I think system updates are atomic now.


Happened to me a few months ago (whenever that big SSL 0-day was announced).

I use a 3rd party repo for ZFS drivers, which gets checked for compatibility with each new kernel release by the maintainers, so ZFS frequently stops me from upgrading, and crucially it stops me after I've already fetched the new package databases.

Running pacman -Sy and then installing an individual package isn't supported and it's understandable why.

Running pacman -Syu and having it break sticks you into this limbo where if you install anything before finishing the upgrade, you risk shooting yourself in the foot.


Interesting! How does the ZFS upgrade "stop" the package upgrade process, I wonder? Might be worth reporting a bug, maybe pacman can handle that kind of failure better, or maybe the ZFS package could be changed to fail more gracefully. I think the pacman devs would agree leaving the system in a partial-upgrade state is a bad thing to do.


It's not a bug, it's intended behavior.

The official release of ZFS on Linux, I believe, only supports kernel 5.x. Since Arch is always on the cutting edge, the repo maintainers need to manually test the driver with each new kernel release before pushing it out to the world. They stop you from borking your system in the interim by having a strict version requirement for Linux on the zfs package. Pacman only does that check, though, once the -Sy operation has finished.
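
Presumably the pin is just a strict versioned dependency, something like this in the package's PKGBUILD (versions purely illustrative, not the real repo metadata):

    depends=('zfs-utils=2.1.9' 'linux=5.19.9.arch1-1')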


Ohhh, I see what you're saying. Yeah, that is a tricky corner case :-/


I do have a shell VM (accessed via SSH & xpra) that I update much less frequently and still don't have issues.

I think the main reason I have fewer issues compared to other comments here is I try to minimise my use of the AUR, and don't install third-party repos.


> I've never had the infamous 'Arch updates are unreliable' issue. Is it certain packages that are more prone to it, or something?

Liferea is currently broken as of the latest update 5 days ago due to https://github.com/lwindolf/liferea/issues/1217

I also suffered from the grub issue mentioned here elsewhere, and had to rescue my laptop after it became unbootable (though it was very simple to Google the problem and fix it). I don't think using grub is a particularly unusual package choice.

I also had to downgrade Samba and wait a couple of weeks for a fix to a bug it had mounting password protected shares. Again, Samba isn't exactly an exotic package.

Also had to downgrade something in my audio stack (I forget if it was pipewire, pulseaudio or a kernel issue that was the root cause) and wait a couple of weeks for a fix due to bluetooth issues.

There's also the occasional update that requires manual intervention.

All of that said, I don't think I cumulatively spent all that much time on maintaining my Arch systems as all these errors are spread across a long period of time and didn't take that long to handle. I probably spent a lot more time and nerves on Ubuntu as I don't think I've actually ever had a dist-upgrade work flawlessly and each time takes a lot of effort and mental energy.


Those are all upstream bugs, not Arch's. That's the trade-off when you're running bleeding edge, but you can't blame Arch Linux for packaging a buggy application.


I'm not really blaming anyone; I'm thankful for Arch, I love it, and I recommend it to other people who ask (well, at least those that can handle it :)).

> Those are all upstream bugs, not Arch's.

IIRC the grub issue was in the update script, so it may have been an Arch bug rather than an upstream bug?

Still, it would be nice if, when a package is known to be broken due to an upstream bug, it got rolled back, so that once the breakage is known, no one else updates into a broken state. That would save some time over each person individually updating to a broken state, debugging for a bit, downgrading the broken package, and then paying attention not to re-update it each time they update the system until the problem is resolved.

But again, not trying to complain or assign blame, I was just responding to a question in a parent comment.


Ehem: https://old.reddit.com/r/EndeavourOS/comments/wygfds/full_tr...

EndeavourOS is basically vanilla Arch with an installer; I just posted their thread because they were open and transparent about the issue while the Arch sub did very little.

When I raised this with an Arch dev, I was told they purposely shipped grub's master branch (not a release) because they didn't feel like backporting a security fix, and that if I couldn't fix a grub issue with no working computer I probably didn't have any business using Arch.

Ok then. Went and downloaded PopOS from a friend's house and haven't touched Arch since.


I may be misreading this comment, but it feels a little stuck up. Arch is created for Arch developers. This is well-known and they have never hidden it. The wiki states as much somewhere in the installation guide. That the distribution turned out to be useful to many others is purely coincidental. If you tried it and decided you're not one of them, no need to pour shit on the devs as if they owed you something and didn't fulfill their part of the bargain.


I agree, it is a little stuck up, on the part of the elitist devs.

If this were a little hobbyist distro, I'd agree with you, but this is one of the most popular distros out there. There is a certain expectation of competency and quality.

I would argue that shipping the master branch of a bootloader isn't competent when there are stable releases available, and not addressing the issue when literally thousands of users are having the problem isn't very respectful of your userbase.

FWIW, I did eventually fix it, but the EndeavourOS folks were much more understanding and helpful with the issue. The arrogance I saw on r/arch was insane and off-putting.


I think it depends on when folks were using Arch. I've been using it since 2007, and back then before the "conf.d" configuration file convention became common, it was very frequent that you would need to manually merge your local config files with the upstream one. If you screwed up or forgot, your system would be some variety of hosed. It's become much, much less common in the past ~10 years. I'd hazard a guess I only have to manually attend to an update once or twice a year now, where it used to be almost daily.


Yeah, `pacman -Syyu` works fine most of the time, except when you use a lot of AUR `devel` packages, which can break because they often do not have strict dependency definitions.


AUR packages are not part of the distribution though, so that's understandable - they're explicitly user contributed packages and are very much "here there be dragons" and "if it breaks, you get to keep both pieces".


Agree


You might enjoy Void Linux - its package management story is similar to Arch, except that updates invariably work.


Without having looked closely at how to build either Snap or Flatpak packages: if you can statically link or bundle libraries with your snap/Flatpak packages, what stops you from doing the same in a .deb?

It's just a convention not to, no? IIRC the Steam .deb comes with pretty much everything statically linked. Works fine.


It's called stability and it's a good thing. And no, nobody is forced to use Docker.


But what's the point? You install Debian stable, where every library is around two years old. Now you want to run SomeApplication (Blender, Gimp, video games), as that's what you actually use your computer for.

If you wanted to run a 2 year old version of SomeApplication, that would work just fine. Is that what you really want? Is that what most users want?

If you instead install a snap/Flatpak of the latest version of SomeApplication, you are also installing new libraries instead of the good old stable libraries your distro provides. So what's the point then?


> that's what you actually use your computer for

No, you use Debian stable for servers, embedded, and anything production-related that has to be stable.

You use Debian Sid for your playground or hobbies and it will be just as up to date as every rolling distro.


Ubuntu has a much more aggressive six-month release cycle; it's not nearly as outdated as Debian sometimes can be.


But that means I have to update my OS and my application when I want a new version instead of just my application. Say what you want about Windows, but I can download and run the latest version of all my applications on Windows 7 without it missing a beat.


Yes, because either you, the user, must track down .NET Framework X.Y (less of a problem now that they've stabilized), or the application packs all of its dependencies and shoves them into \Program Files or \Windows\System32 until it's eventually 30 GB with 1000 copies of msvcpXXX.dll.

That's not that different from the distribution model of Flatpaks/AppImage (or APK, or .app on MacOS/iOS). It's more that, traditionally, Linux packaging solutions try to only have ONE copy of glibc or any other library, and packages are recompiled from source so the symbols resolve. Something which isn't an option on Windows, as a generality.


I don't think Windows is a good comparison, since major upgrades for it tend to be vastly more invasive than major upgrades of the average Linux distribution. Debian 6→7, ten years ago, was a really massive infrastructure upgrade, but ever since it's been pretty smooth sailing. Ubuntu is a bit bumpier, but it's still only on the level of the "major feature upgrades (that totally aren't service packs because we don't want to extend your warranty)" that Windows 10 and newer get every 6 months.

And Windows 7 is extremely old and only works with new software because, and as long as, developers go the extra mile to make their software work with its old APIs. Valve recently announced that they'll drop support for it next year, and other companies will follow soon. It's not too much different from the situation of, say, Debian Oldstable or Ubuntu LTS: Outdated, but popular enough that people tend to put in the effort anyway.


Hackers can also run their applications on your Windows 7 installations without missing a beat.


Is that true?


This puts too much unnecessary burden on developers and maintainers. It doesn't make sense to adapt software to tens of distributions (and then to different OSes like FreeBSD).

Apt should be used only for system software (like the init system and system libraries), not for applications.

Also, apt allows packages to run scripts as root during installation, and this is not secure.


There's a Pareto effect, of course. If you release Debian and Redhat packages, you cover 80% of the user base. It's not hard to maintain if you compare it to building MSI installers. Further, the process of generating packages for derivatives of Debian or Red Hat can be mostly automated, so user coverage is bound to grow if your package is popular.

As for root requirements, what you are asking for is (non-privileged) user-installable packages - packages that install only to the user account. This feature doesn't exist, but it'd be a much saner approach than snap/flatpak.


> If you release Debian and Redhat packages, you cover 80% of the user base.

That means you need three builds to cover Ubuntu and its derivatives (20.04, 22.04, 22.10), two to cover Debian (10 and 11), three to cover RHEL (7, 8, 9), and a few more if you want to support Fedora. If you want to support multiple architectures you get at least twice as many builds.

No matter how you put it, that's a lot more work than maintaining a simple flatpak manifest, which allows you to target all those platforms with one automated build per architecture. You also get the benefit of being placed in the app store, not being held back by whatever API is the lowest common denominator between all those platforms, and not having to clutter your code base with `ifdef`s; if you want your app to use an API which was only added in GLib 2.74, then you can just do that and it'll work.
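
For a sense of scale, the build side can be as small as this (app id and manifest name are hypothetical):

    flatpak-builder --repo=repo --force-clean build-dir org.example.App.yaml
    flatpak build-bundle repo app.flatpak org.example.App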


I go to cool app website, I click download, I get a '.deb' file, I click .deb file, it opens some installer GUI (instead of Ark because I'm lucky today), I click install, I get error, I google error, I find cryptic commands that modify my system in weird ways just to get the app to install, I install app, I open app, it crashes, I open firefox, I get error "error: libX.so.1.2 not found", I can no longer google the error.

The system package manager is convenient when it works, because it's already there. But that's about it. Using it to install any random apps is a recipe for disaster, it leads to fragmentation since everyone on a different distro uses different commands/workarounds to fuck up their systems in different ways when trying to install poorly packaged software.

If everyone just used Debian, we wouldn't need Flatpak, but obviously that's not the case. Whenever you find a "linux app" that's packaged for distroA, but you're on distroB, there's a chance that it will work. That is 100% luck and coincidence, because most linux distros just so happen to ship the same family of system software/libs, sometimes even with the same versions.

Rather than leave it up to the undefined behavior lottery, a standardized non-system packaging format can guarantee that things will work across any distro. That's better for everyone involved: users, sysadmins, distro maintainers, and developers. Whether Flatpak is that format idk, but IMO it's the best overall out of the three main contenders (Flatpak, Snap, AppImage)


> It's not hard to maintain, if you compare to building msi installers.

I don’t think building MSI installers is especially difficult; it’s kind of a set-it-and-forget-it thing. Configure Wix or Inno Setup once and then you’re good to go (on every Windows machine, no need to worry about multiple distros).


Snaps/Flatpaks/other binary-containerised stuff is one of the major reasons why I abandoned Linux distributions after 22-odd years (1999-2022) and am now a happy Apple user (the others being the abandonment of X11 in favour of Wayland, and a switch to laptop-first usage, where Linux still has horrible power consumption).

Linux distros have a history of abandoning useful, well-understood technology for fads that then cause power users all kinds of headaches later on (e.g. by cluttering the output of useful tools like mount). Eventually, power users just lose the will to adapt, and move on.

Could I have just switched distros, or compiled my own stuff? Sure. But I am no longer a student. I am become a mid-40s guy who is becoming aware his lifetime is not unlimited and can be spent better than learning the ropes of a new system.

In the time of Jenkins et al., the burden of creating distro-specific packages is negligible.


> Snaps/Flatpaks/other binary-containerised stuff is one of the major reasons why I abandoned Linux distributions after 22-odd years (1999-2022) and am now a happy Apple user (the others being the abandonment of X11 in favour of Wayland, and a switch to laptop-first usage, where Linux still has horrible power consumption).

Flatpaks/Snaps are almost exactly the same, conceptually, as apps on MacOS. Go look inside something in /Applications/... The .app is a folder, with all of its deps.

> Linux distros have a history of abandoning useful, well-understood technology for fads that then cause power users all kinds of headaches later on (e.g. by cluttering the output of useful tools like mount). Eventually, power users just lose the will to adapt, and move on.

"Power users" have a history of complaining about distro maintainers abandoning useful, well-understood techology because the "power users" don't actually understand the technology, nor do the understand the enormous headaches it causes distro maintainers to put a sane face on 30 year old design philosophies and continue moving forward.

The goal of distro maintainers is to STOP investing endless man hours in trying to make X work with high DPI displays, scaling, high refresh rates, a fundamentally insecure design model, and so on. The goal of systemd is/was for distro maintainers to STOP wasting endless man hours on edge cases because sysvinit didn't have reasonable mechanisms for "if anything in this dependency chain is restarted or changed, re-evaluate it" or "don't try to start this service if the mount isn't available so systems stop hanging on startup if some NFS server isn't available" or whatever.

> In the time of Jenkins et al., the burden of creating distro-specific packages is negligible.

Under the assumption that they are ABI stable, which is a bad one. Otherwise, it's the same old kaleidoscope of mapping out whether it's `libfoo-dev` or `libfoo-devel` for build requirements, whether it's a distro with a completely different naming system, dissecting the ELF header or reversing depchains in Python/Node/whatever to try to find deps and what THOSE are named in some distro, or whether they're even packaged at all, then building for every possible supported version of RHEL, Debian, Ubuntu, Fedora, SuSE, and down the line.

This is why "power users" aren't an important user model. Arch, Gentoo, Nix, and others exist if you want to be a "power user". Otherwise, extremely naive assumptions about the amount of effort expended by distro/package maintainers hand-wave the complexity away with "duh, just put it in Jenkins."

Flatpak/Snap are essentially a re-invention of putting things in /opt and using LD_LIBRARY_PATH or chrooting. Disk space is much less of a concern than it was before, and sacrificing a little space to AVOID the mess of shit above is a good investment.


Mac and iPhone apps don’t automatically update unless you tell them to.

The goal of systemd is to make everyone do everything the way Lennart Poettering likes it on his personal laptop. Perhaps some of it is nice but also some of it is not nice. And his holier and smarter than thou attitude is off putting and rightly so.

And don’t handwave away wasting my resources just so you can avoid work. That’s how we end up with Microsoft Teams.


The goal of systemd as an *init system* is not the same thing as the goal of some of the systemd umbrella projects, and they shouldn't be conflated. systemd as an init system is leaps and bounds ahead of sysvinit, openrc, and upstart for distro maintainers, large-scale sysadmins, etc. No more need for supervisord, random scripts to flag things off and on as part of a VPN connection, or convoluted "meta" scripts which carefully restart 5 different services in the same order via a huge mess of shell.

That said, no, Lennart did not/does not do things that way on his personal laptop. His position is that users shouldn't need to know how to configure dnsmasq to have a local caching DNS server, that 99% of the options for dhcpcd aren't used by 99% of users (who are perfectly happy to simply get an address in a fraction of the time), that most users don't need to know how to configure /etc/sysconfig/network-scripts/ifcfg-* or /etc/network/interfaces/* for their use case.

If you do, you can disable those things. You can think this is a good opinion or a bad opinion, but at least he's pushing towards some kind of solution which isn't "RTFM". If you think his ideas are bad, propose new ones. Start a project which does it better. "Just don't change anything" is not a meaningful or productive way to design software or operating systems.


Some interesting gaslighting here: complaints about the forced systemd ecosystem can be deflected by pointing at the init system. Which, just like the rest of the ecosystem, improves some parts and deteriorates others, like the absolutely worthless journaling log replacement.

Anyway I’m not about to engage the systemd evangelization task force, thanks. Good luck elsewhere.


> This is why "power users" aren't an important user model. Arch, Gentoo, Nix, and others exist if you want to be a "power user". Otherwise, extremely naive assumptions about the amount of effort expended by distro/package maintainers hand-wave the complexity away with "duh, just put it in Jenkins."

Surely distros could be based on top of Nix with carefully curated sets of packages with less effort than it takes to package everything for Debian.


And the advantage of doing this over letting application authors maintain their own packages and deptrees with flatpaks/appimage/whatever is...?

They could be based on top of Nix. In practice, it's more likely they'd be based on top of rpm-ostree. But that doesn't do anything to close the gap between the wild west of copr/ppa/aur and "get your application accepted into mainline repos" for someone who wants to distribute their app.


Application authors can still maintain their own Nix packages.


Translating from Nix to another packaging ecosystem usually entails removing specificity. This is why going in the opposite direction is harder. So I suspect we can provide escape hatches from Nix into Debian/RPM/Arch/Gentoo so that one would only have to maintain one fully specific package, but get easy translators for the other ecosystems.


It worked fine? Did you ever apt install nextcloud-client? Then find out (after pulling your hair out) that it doesn't work because it is some ancient (and I mean years old) version?

And this is just one example. Many devs outright warn you about package managers' versions on their websites.


But exactly this example got fixed! I maintained "unofficial" Debian packages for quite some time, but since Debian bullseye came out `sudo apt install nextcloud-desktop` just works :-)


Thank you for doing God's work.


When Let's Encrypt changed the installation instructions to heavily recommend Snaps, I was quite disappointed. I'd been using their apt package for years already, across my fleet, and to have them suddenly change tack and all but disavow their apt repo made me question my choice in them for the first time.

Would their apt repo suddenly disappear? Probably not, but who knows.


Thankfully you can use any ACME client you want. You don't need to use certbot!


They don't work fine when they're months and sometimes years behind the current versions. It's a huge maintenance burden for everyone - Debian package maintainers, upstream maintainers fielding tons of confused user questions, etc.


This is what I like, as a user. When it works, and it usually does, it's pretty slick. I hear people saying it only works when things are new and fresh but when you try to install old packages or new packages on older systems, things break. I guess that's true, and that's unfortunate. But things break for me using Snap/Flatpak anyway, new or old. And I find it's easier to debug distro packages breaking than application images.

The developer experience on Linux I find is much better than Windows and macOS. But putting my 'user' hat on, the experience on Linux in general is still pretty poor, despite all the progress that's been made over the decades. And package management is one example of this. I don't think it's fair to ask users to work with a new paradigm of maintaining software, even for a "free" OS, and despite how hard it is on developers to maintain distro packages.


Moving away from Debian packages would probably be better. Packaging a .deb is an absolute chore, with a bunch of tools and configuration files, while packaging for Arch or NixOS is almost trivial by comparison.


Flatpaks have permissions, so I'd argue they can lead to better security.


When you break your fancy apt architecture beyond repair, aka "You have held broken packages" (whatever that means), you will be very thankful for snaps and flatpaks.


That's a bug in apt... There should be an apt fix-my-system command that just looks at the state of everything and figures out how to get everything back to a working system.

And make sure it keeps a log of everything it did so it can be rolled back if it doesn't work as promised.


Aptitude does attempt a total fix-my-system. I have tried it several times. The suggested fix is always: remove everything and then reinstall the broken package. It does not even try to reinstall the packages it has just destroyed.


apt -f install


I never broke a Debian install beyond repair, and I've used it for a couple of decades. Even when doing some really off the charts stuff, including powering off a system mid dist-upgrade. Apt is really really solid and well documented/community supported.


Then community-solve this:

    $ sudo aptitude install wine-stable
--- 5 pages of useless garbage removed and then:

    62)     wine32:i386 [Not Installed]

    Leave the following dependencies unresolved:
    65)     wine64 recommends wine32 (= 3.0-1ubuntu1)

    Accept this solution? [Y/n/q/?] 
In other words, the suggested solution is: do not install wine.


    sergio@sergio-laptop:~ > sudo apt-get install wine
    Reading package lists... Done
    Building dependency tree... Done
    Reading state information... Done
    The following additional packages will be installed:
        fonts-liberation fonts-wine libcapi20-3 libodbc2 libosmesa6 libwine libz-mingw-w64 wine64
    Suggested packages:
        odbc-postgresql tdsodbc ttf-mscorefonts-installer q4wine winbind winetricks playonlinux wine-binfmt dosbox exe-thumbnailer | kio-extras wine64-preloader
    Recommended packages:
       wine32
    The following NEW packages will be installed:
       fonts-liberation fonts-wine libcapi20-3 libodbc2 libosmesa6 libwine libz-mingw-w64 wine wine64
    0 upgraded, 9 newly installed, 0 to remove and 15 not upgraded.
    Need to get 105 MB of archives.
    After this operation, 700 MB of additional disk space will be used.
    Do you want to continue? [Y/n] 
No need to overcomplicate things by going multi-arch.


I found a better example. The only solution aptitude ever gives is not to install paprefs. Please solve that.

    $ sudo aptitude install paprefs
    Keep the following packages at their current version:
    ...
    11)     paprefs [Not Installed]                            
    ...
    Accept this solution? [Y/n/q/?]


Don't know your install, but Debian by default uses Pipewire, not Pulseaudio. If you are using Pipewire, indeed you can't install a Pulseaudio application. That's expected.

You can replace pipewire with pulseaudio. It's doable, just not trivial enough for an HN comment. In general, you'd have to force removal of pipewire without removing all dependent packages, install pulseaudio and then have apt check that all dependencies are ok. Then, having Pulseaudio in the system, you can install paprefs, a Pulseaudio application.

As I said, it's a solid system, rather impossible to break into an unrecoverable state.


But do you then get Wine that is capable of running 32-bit Windows binaries? Or can Wine do it nowadays without?


There's been work on that since 8.0. The current version is 8.4, but I haven't been following, so I don't know the status. There's an HN thread about the 8.0 release with 201 comments.

https://news.ycombinator.com/item?id=34505239


Mir, Unity, now Snap. Ubuntu has a track record of wanting to go it alone.

But, I'm all for competition, long may it continue. The only real negative here is that some apps will only release Snaps, others will only release Flatpaks and people will end up having to just revert to copr/AUR like before.

Choosing a distro is basically choosing a DE and package manager these days anyway - the lack of a single unified packaging format that works everywhere is more frustrating than an alternative DE or windowing system.


To put things into context, snaps were created before Flatpak: in a sense, RedHat wanted to go at it "alone" (or at least separately from Canonical) with xdg-app/Flatpak. These ideas were already present even before those times and outside of both RH/Canonical, so it was more a question of who committed first.

Basically, Ubuntu phones were using click packages (predecessor to snaps) back in 2011 and 2012, with snapcraft shipped in January 2013, whereas xdg-app (later renamed to Flatpak) was started in December 2014.

Another thing to consider is that Canonical is orders of magnitude smaller than RedHat, and they do have a problem getting the community involved, but part of that is the size and limited time their developers have.

Now, even as a long time Ubuntu fan, I'll probably be switching to Debian just because I dislike the snap upgrade model.


And let's not forget an even more prominent example: Upstart by Canonical, which RedHat replaced with systemd.

Basically, Canonical will start a project first, but RedHat specifically will look for ways not to join them, and start their own project instead.

Part of that is certainly due to Canonical itself, but I can't help but think a lot of it is RH making a call and then throwing more developers at something.


Upstart’s CLA was a non-starter.

Canonical has quite a history of the behavior you're dinging Red Hat for: Mir, Unity, LXD, Juju/Charms, Launchpad, and I'm sure I'm forgetting several.

Also its Orwellianly named "Harmony" effort to popularize CLAs, because Canonical has long sought to control upstream technologies - and consistently failed, because they do not play well with others.


Just to add to those wondering, CLA = Contributor License Agreement.


Ah, yes. Thank you - sorry for assuming on that one. I tend to get lazy when replying on my phone...


Sure, I have acknowledged there are plenty of missteps by Canonical too.

Though I don't think CLAs themselves are the problem (the FSF requires them for GNU projects too); it's the lack of commitment not to switch to a non-free license.

Unity happened as GNOME 3.0 already went in an entirely different direction (mostly led by RH engineers) from 2.x series and as Canonical simply couldn't influence GNOME design. With a paradigm shift one way or another, it was a sensible move.

Launchpad was created as a tool to develop Ubuntu and free software: there was nothing else (and there still isn't) quite like it. Sure, it took a while to get it open sourced, but lack of contributions afterwards kinda proved the point that that wasn't really important (I mean, GitHub wasn't and still isn't).

Mir/LXD/Juju were attempts to improve on the incumbents (Wayland/Docker mostly).


The reason why Lennart decided to start systemd is publicly documented: the 2 facts that upstart had a serious design flaw requiring large changes, and that Canonical required copyright assignment for those changes. CLAs that unilaterally benefit a single for-profit business entity are generally seen as problematic.

Read this thread:

https://web.archive.org/web/20140928104327/https://plus.goog...

Here, Upstart author and former Canonical employee Scott James Remnant wrote:

Had the CLA not been in place, the result of the LF Collab discussions would have almost certainly been contributions of patches from +Kay Sievers and Lennart (after all, we'd all worked together on things like udev, and got along) that would have fixed all those design issues, etc.

But the CLA prevented them from doing that (I won't sign the CLA myself, which is one reason I don't contribute since leaving Canonical - so I hold no grudges here), so history happened differently. After our April 2010 meeting, Lennart went away and wrote systemd, which was released in July 2010 if memory serves.


snap's original sin was the moat of the server being closed source and the client /really/ wanting to talk only to Canonical's instance. The latter is shared by Docker, in a way.


I don't use Docker anymore (Podman FTW), but that is unfair criticism of Docker.

You can use any container registry (including self-hosted) with Docker and it will work. Last I checked, you cannot use any other repo at all with snap without recompiling it to add support for your snap repo.


By "in a way" I've meant the encroachment of the default namespace. Much less smaller sin than snap's, but then docker is much bigger.


It's so upsetting that Canonical will continually come up with cool tech, try to keep complete control over it, and ideally make it so it long-term becomes the only option, which people understandably don't go along with. Had they learned that in the space they're in an open approach works better, we could still be using Mir, Upstart, and most importantly Unity now. God I miss Unity.


Isn't the server software proprietary?

https://en.wikipedia.org/wiki/Snap_(software)

So I don't see how RedHat or any other FOSS distributor could have been happy with Snaps.


Flatpak development actually started much earlier than that. The first version happened in 2007 when it was known as "Glick".

https://github.com/flatpak/flatpak/wiki/Flatpak's-History


And AppImage was already a thing in 2004. Too bad that everyone has to reinvent the wheel when it comes to packaging instead of building on the existing solutions.


I thought flatpak was a response to Ubuntu not open-sourcing the snap store code.


Sure, but I don't think it would have been much effort to build a snap store server with the client and package format open source.

IOW, if RH wanted to "join in", there was a cheaper path forward.


You don't want to be a sharecropper; using a format you can't meaningfully influence for something as key as application installation is an obvious non-starter for any serious distribution.


Click was preceded by Alex Larsson's glick project; see this page from October 2007.

https://web.archive.org/web/20071012194830/http://www.gnome....

There was also glick2 in 2011.

https://web.archive.org/web/20111022070435/http://people.gno...

You may have heard of Alex Larsson as author of xdg-app.

I think the oldest related project was Klik, started in 2004 (and later renamed to AppImage).

This is his blog about the history of Flatpak.

https://blogs.gnome.org/alexl/2018/06/20/flatpak-a-history/

Also, the reason why Canonical have "a problem getting the community involved" is that they put a KEEP OUT sign on every one of their projects in the form of a CLA requiring copyright assignment.


> they do have a problem getting the community involved, but part of that is the size and limited time their developers have.

A tiny, inconsequential part. The bigger one is their refusal to give a shit about usage outside of Ubuntu with Canonical-controlled servers, and their insistence on CLAs. Canonical has no one but themselves to blame for Red Hat continuously winning the community over to their solutions, even when they were late to the party.


> Mir, Unity, now Snap. Ubuntu has a track record of wanting to go it alone.

This. Also bzr. They seem to want to control their projects completely, and so even when they have good tech they lose out to more open, community-developed equivalents that build wide engagement and momentum.

I honestly don't understand it, you would have thought they would have learned by now that they don't have the engineering resources to do everything by themselves.

Compare that to Red Hat, who always try (and sometimes even succeed!) at developing projects with the community and are far more successful at getting their projects adopted (I know people don't like them, but you can't deny they are effective at it).


> I honestly don't understand it, you would have thought they would have learned by now

The simple answer is that the company culture really, really wants to be "the Apple of Linux", with all that that entails. Whereas RedHat wants to be the Linux of Linux: they've learnt how the open-source game really works and they play it every day.


Bzr is a great counterexample.

Bzr "lost" because git had GitHub, whereas Launchpad was one too many things and slow to optimize for modern sensibilities.

(And Linux used a different VCS before git, so that didn't matter in adoption)

Imagine a world without GitHub, and I don't think git would be our go-to VCS. Though maybe not Bazaar either; there are things like Mercurial too.


Git already had momentum before GitHub became popular. And yes, it is absolutely Linux and other high-profile projects that ensured its success in the OSS world. Claiming that Linux having used a different VCS before git means that Linux's use of git doesn't matter is really odd when git was developed for the Linux project.


It sure had momentum. As did a bunch of other distributed VCSes. If you were a party to voting on what VCS to switch to for some of those high-profile projects, I'd very much like to hear about it.

In GNOME, a decision was delayed and bzr and git were pretty evenly matched.

Linux had previously used BitKeeper, but that didn't make BitKeeper "win out", just like it didn't for Git. Sure, Git wouldn't have existed if there wasn't a need for it in Linux.

I am only pointing out that it was GitHub that helped popularize arguably the worst UX among DVCSes: I don't hear people say "your free software contributions portfolio" — they'll just say "your GitHub profile".


bzr lost because it was poorly-architected, infuriatingly slow, and kept changing its repository storage format trying (and failing) to narrow the gap with Mercurial and Git performance. Or, at least that's why I gave up on it, well before GitHub really took off and crushed the remaining competition.

For my own sanity I began avoiding Canonical's software years ago, but to me they always built stuff with shiny UI that demoed well but with performance seemingly a distant afterthought. Their software aspirations always seemed much larger than the engineering resources/chops/time they were willing to invest.


Sure, that's a fair point as well (though bzr came out of GNU arch, which didn't originate at Canonical, and it was finally redesigned into something good at Canonical — not a knock on arch either, it was simply early).

The question then becomes: why not Mercurial which still had a better UX than git itself?

My point is that git won because of GitHub, despite lots of suckiness that remains to this day (it obviously has good things as well).


Another way to look at this situation is that Canonical comes up with innovative solutions that are reasonably well engineered out of the box, but they are rejected just because they are from Canonical.

I'm struggling to find a way to characterize the difference between Red Hat/IBM's and Canonical's approaches to the community. The most succinct I can come up with is that Canonical releases projects and assumes that it is the one responsible for their creation; Red Hat releases rough ideas and code. There also seems to be a heavy political/disinformation campaign going on tearing down any solutions by Canonical.

In either case, none of us can resolve the conflict. It's a pissing contest between Canonical and IBM/Red Hat. I will keep choosing solutions that let me get my job done and get paid, which is all that matters.


At an old job, we used probably hundreds of hardware and software vendors. I never had to deal with any of them directly, but I often spoke with those who did. There were complaints about all of them I'm sure, but the only ones that inspired bitch sessions over a drink were Oracle and Canonical. I'm told that both were just thoroughly unpleasant to deal with.


I don't think it is a pissing contest between them; they can both happily exist in the same world. It's just interesting to see the difference in approach and try to figure out why one seems more successful than the other.

I think you're right that Canonical creates and releases projects and assumes they are in charge of them, but I disagree about Red Hat (honestly not sure what you mean by "rough ideas and code"). I think they tend to see what's already out there and then throw their weight behind it; only if there isn't anything do they create their own, and even then they are more open about how the project runs. That difference means Red Hat gets more momentum behind its projects, and that is what counts. (Of course RH can throw more engineers at stuff as well, and that also helps a lot.)

It's not some sort of conspiracy; nothing Canonical has ever done has attracted the same amount of hate as systemd. It's just a difference in approach.


What I mean by rough ideas and code is simple: is a project something complete you can take and just use, or is it a bag of parts? xcp-ng is a take-it-and-use-it project. KVM is a bag of parts.

My experience with Red Hat is that it's frequently IKEA-level assembly required. Canonical projects tend to be "read the docs and just use it". Although there are some exceptions: for example, a couple of years ago cloud-init was not documented well enough for my taste. Took a second look just now and found new documentation that may revise my opinion.


> they are rejected just because they are from canonical.

Or rather because they're proprietary, often closed-source, like Snap server.


Exactly. Canonical's Snap Store service is closed source and the Snap client is designed to only interface with Canonical's proprietary service. It's not "disinformation" to point out that Snap is a locked-down product controlled by Canonical, while most other packaging solutions for Linux are fully free and open source on both the client and server side. Canonical's one-sided approach to interacting with the Linux community will only encourage Linux users to reject Ubuntu and adopt distros with more sensible defaults.


And even if they are open source, the development is not (initial development often closed completely, later development requiring CLAs) and the projects only care about Canonical's use of them to the point where even building them on other distros is often far from trivial.


> Choosing a distro is basically choosing a DE and package manager these days anyway

What distribution doesn't support at least half a dozen desktop environments these days?


>What distribution doesn't support at least half a dozen desktop environments these days?

Most usually come out of the box with official support for only 2 DEs. Of course they can theoretically support every one out there if you manually install them.


Usually, for all major distributions, you get the following in the official repositories:

- Gnome

- KDE

- Xfce

- Cinnamon

- Mate

- LXDE/LXQt

- Several *box stacking window managers, assorted tiling window managers, and glue scripts

That's a fairly solid base of "support" even if the distributions don't provide live CDs with any particular setup preinstalled.

The only exception I've seen are the "we took an existing distro, threw away half the repository, patched 2 packages and slapped our own logo on top" wannabe hipster distributions that last an entire two years before folding due to being pointless.


You make it sound like so much work, "manually installing".

It's as simple as

     apt-get install $DE


Try that command on Arch, Fedora or OpenSUSE and let me know how it works.

Also, having multiple DEs in parallel rarely plays well with most distros. That's why they usually give you downloads with one or two options already set up and tested.


> Try that command on Arch, Fedora or OpenSUSE and let me know how it works.

Using apt, yum, pacman or the equivalent is still just a single command. Your implication was that "manual installation" is extra work, when it is a single command on all the desktop distros.

Saying that "executing a single command" is too much manual work is simply delusional.

> Also, having multiple DEs in parallels rarely plays well with most distros.

Nonsense. I'm currently running Plasma, which was not the default. I've installed so many in the past on this machine that I lost track of them.

I've switched DEs and window managers multiple times on this machine, with no problems.


They all have wikis you can read. And running DEs in parallel is a different problem from simply switching between two.


Competition of implementations is good, but competition of standards less so. My biggest problem with Ubuntuland alternatives is that they expect the world to pay the maintenance cost for their NIH syndrome. Mir as a Wayland compositor is fine. Mir as something that every toolkit needs to implement is a giant waste of everyone's time. The same is true for Snap: if you could make one package that works with both Flatpak and Canonical's implementation, I would be a lot less opposed. But you can't, because Canonical doesn't care how much extra work everyone else has to do as long as they get their way. Thankfully, so far they haven't gotten it in the end - I expect Snap to follow.


I've had some success with Nix. Still, I would generally only recommend it for development (even though I use NixOS).


Also a Nix/NixOS user/contributor. I feel like Nix+nixpkgs could become the universal packaging solution, adapted to build deb, rpm, etc. packages.

Through nix-env it even already has the idea (albeit discouraged) of imperative package management like apt.

Having spent a while with NixOS, running containers for user applications seems like using a cannon to kill a fly. The "simple" alternative is to drop the FHS altogether - which containerization is kind of doing in a roundabout way by isolating the app filesystem from the system FHS.

As for it being for developers only... I get that perspective. Debian/Ubuntu packaging is also hard, AUR packaging has its quirks. A lot of this is hidden behind the package manager, wheras with NixOS it is more obvious.

The killer idea for NixOS would be to make package management for Joe Q. Public as easy to use as apt. Tools like MyNixOS[1] are emerging which might bridge that gap.

[1] https://mynixos.com


The one advantage Flatpak provides for me over Nix is containerisation. Not the bullet-proof kind that would let you safely run malware, but the "reasonable" kind, which stops apps from writing to any directory they like just like that (even a mere chroot level of "security"/isolation would be fine for me).

When there's a package manager / runtime that does both then I'm extremely interested.


Looking things up, someone has linked the Flatpak containerisation tech (bubblewrap) into the Nix store: https://github.com/fgaz/nix-bubblewrap

It looks... somewhat abandoned, but I'd wager it still works today. Failing that, setting up a shell alias to launch a regular binary in bubblewrap isn't too hard either.
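
For instance, a small shell function gets you most of the way (paths and flags are just one example setup; you'd mkdir ~/sandbox first):

    jailed() {
        # read-only root, private /tmp, throwaway $HOME, network still shared
        bwrap --ro-bind / / --dev /dev --proc /proc --tmpfs /tmp \
              --bind "$HOME/sandbox" "$HOME" --unshare-all --share-net "$@"
    }
    jailed someapp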


It's a long standing goal of mine to put together a distro with the system management aspect of NixOS, the isolation of bubblewrap and the lightness of Alpine. I'm gonna start this project when I have time™.


Why do you see it as more frustrating?


For users who want to install something like Slack and realise that while their distro supports Flatpak, only a Snap is available, it's definitely a source of frustration.


It speaks volumes to the success of Snap that they’re now moving to ensure Flatpak isn’t part of the default desktop experience.

The only outcome this leads to is monetisation attempts through their store and other OS integrations to try and turn free users into a source of revenue. If you’re using Ubuntu for desktop I’d urge you to start exploring alternatives.


Great, now stop shipping snaps.


Yeah, lost a day this week to bloody snaps.


Is there a single user who likes snaps?


I started using a snap to run Emacs 28 on my Ubuntu 22.04.2 LTS system. The snap is maintained by a Canonical employee, and I like it better than any of the other ways I have used to get more bleeding-edge versions of software in the past, such as using a random Personal Package Archive (PPA) or building from source myself.

Would I rather have the deb package be up-to-date? Sure. But when I've used distributions that try to stick closer to the bleeding edge everywhere, I've had bad experiences with stuff breaking. This lets me keep a stable, well-tested distribution for everything except for the one package I want to be newer.

I can't compare to Flatpak or AppImage because I've never used them.


Forced auto updates and a closed store mean I won't even try to like snaps. Those are the top two reasons I moved off Windows.


While I don't go out of my way to use them, I've never, ever encountered any of the problems stated by others. And I'm not being selective with my devices at all (Asus laptops, HP mini PC).

I really wonder how i can have such a different experience to theirs apart from really esoteric parts/apps. Not saying it's not happening.

While i'd prefer just using apt, they let me work without being a pain so i don't really care.


They're better than Flatpak for CLI applications, I guess, since Flatpak doesn't aim to support those.

Is there another option for that besides Snap, or Docker, which is a bit too complicated to set up? (That's not rhetorical; I would like to know if there is.)


Another option https://appimage.org/


Those are actually kind of nice.


Me. They provide solutions that work. There have been some teething pains, but for the most part non-issues. It's at the point where I'm considering learning how to build snaps for private products I deliver.

I think the argument over Flatpak versus snaps may be expressed in technological terms, but in reality it's just your damn ego. Let it go, it's really not worth arguing over. Use what solves your work problem and then go have a life away from computers.


> it's just your damn ego.

It's really not. First, those technical problems are real problems that will create user-visible problems, ex. forced updates. Second, it has problems that are already user-facing, ex. startup times.


You are only proving my point. Seriously, most users don't notice and don't care. Heck, I know enough to notice and I still don't care, because it does not interfere with my ability to get my job done. Any mentioned technical problems are implementation issues, not design. They will be solved in 2-ish years or less.


I've used it to install Nextcloud Hub and it was nice actually.


https://fosstodon.org/@wimpy/109908489437633387

@bluesabre@floss.social "[…] Ultimately, each of the current flavor leads agreed to the change. […]"

@wimpy@fosstodon.org "@bluesabre Did we agree? I think we complied with the requested change. You and I both played our part in ensuring this was clearly and openly communicated."

- - -

https://nelsonaloysio.medium.com/installing-ubuntus-snap-on-...

> […] As Ubuntu’s snap requires access to the root file system in order to create a symlink in /snap to /var/lib/snapd/snap, successfully installing it requires a few extra steps. Besides, $HOME directory in Silverblue defaults to the more traditional path /var/home/$USER, and since snap expects it to be on its modern location /home/$USER, this must be worked around as well. […]


Moved to Fedora because of the mess around snaps, and I'm not going to look back.


Fedora + flatpak + everything transparently running in bubblewrap automatically has been a very nice experience.

Snaps drove me off Ubuntu, and I'm glad I landed on Fedora.


Fedora drove me off twice now, because I had bad luck both times with release updates. I hadn't even configured the system much, but several things were broken after the release upgrade, the most notable being the file manager crashing on open. Since Fedora bumps its version twice a year, in my experience you don't get much time to run a smooth system.


Fedora supports two releases at a time; the stability is in N-1.

I try to give the new release about a month to bake


Yeah, me too. I don't like to adopt early, don't have the energy, I'll just let them figure it out.


Ouch, and still some woes? The latest (37) wasn't too monumental if memory serves. 38 is being tested now, I'm not ready!


Yeah, this last time I went from a pristine Fedora 35 to 36. There was some error upon login, I couldn't open the file manager, and the Night Light feature didn't work afterwards.


If Red Hat has one thing going for them, it's that they make an incredibly cohesive system. You really get the sense that Fedora was designed to be a single usable thing top to bottom, rather than some base packages and a grab bag of random applications that run wild.

Really is a joy to use.


I've been setting up the new Fedora 38 sway spin (Wayland + sway wm out of the box) and am really impressed. I've been an Ubuntu derivative user for a long time but Fedora is great so far.


I've switched to Fedora KDE spin, and I've been loving it. KDE recently got native window tiling support which has been nice. It's not nearly as powerful at tiling as Sway, but it fits the bill for my use.


Same here, left the Ubuntu ecosystem when they ignored community feedback on forced auto-updates. It's been amazing.

P.S. Bonus for people using fedora who "discover" Silverblue :)


Today VLC didn't start. No errors, no reason. Just didn't start.

Turns out snapd.apparmor, whatever that is, wasn't running. (I ran `vlc` on the CLI to figure it out.)
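
For anyone else hitting this, a quick sketch of what to check (assuming systemd; snapd.apparmor is the unit that loads the AppArmor profiles for snaps):

    systemctl status snapd.apparmor.service      # did the unit fail?
    sudo systemctl restart snapd.apparmor.service
    journalctl -u snapd.apparmor.service         # logs, if it keeps failing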

I love snaps, they're so convenient for the end user...


I wish there was a solution like flatpak or snap that does make things simpler, not more complex.

To begin with: It would be nice if the data of the containerized applications were stored in one place and one place only. Each application should simply be a single directory in /snaps/ or something.

At first I thought snap would be like that. But no. When I did some tests, the data of a snap seemed to be splattered across various places on the file system.


Flatpak does this. It stores it in `~/.var/app/`



Well if the app works with documents I'd like to have them in ~/Documents, not in /snaps/


I think they're referring to this:

  /usr/share/bash-completion/completions/snap
  /usr/bin/snap
  /home/izkata/snap
  /var/snap
  /snap


The solution used by Linux Mint seems apt:

    cd /etc/apt/preferences.d/
    cat nosnap.pref
    # To prevent repository packages from triggering the installation of Snap,
    # this file forbids snapd from being installed by APT.
    # For more information: https://linuxmint-user-guide.readthedocs.io/en/latest/snap.html

    Package: snapd
    Pin: release a=*
    Pin-Priority: -10
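
If you go this route, a quick sanity check (not from the Mint docs, just a useful habit) is to confirm the pin is active; with the -10 priority, apt should refuse to pick a candidate from the repos:

    apt policy snapd    # should report the -10 pin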


Reminder that if you don't like snaps and prefer flatpaks, it's pretty easy to migrate using: https://github.com/popey/unsnap


I like the project status

> Let's say it's "Pre-alpha", as in "It kinda works on my computer".


What I really want is for Nix or something like Nix to become standard.

We can very easily mimic flatpak and snap by wrapping a nix closure in bwrap.

You can have the choice to sandbox it or not with Nix. And you can easily compose it with other software.
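
A rough sketch of the idea, using `hello` as a stand-in package: compute the closure with nix-store and bind only those store paths into the sandbox:

    # build something small and collect its runtime closure
    app=$(nix-build '<nixpkgs>' -A hello --no-out-link)

    # bind each store path in the closure read-only into the sandbox
    args=()
    for p in $(nix-store -qR "$app"); do
        args+=(--ro-bind "$p" "$p")
    done

    bwrap "${args[@]}" --proc /proc --dev /dev --unshare-all "$app/bin/hello"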


They could do with improving the behaviour of snaps first before removing alternatives. The updates are currently awkward as they don't seem to work if the application is running which can be a problem with something like Firefox which I have running for days at a time. It's also annoying that it's gone from using simple "apt" commands to keep the machine up-to-date, to also needing a "snap refresh".

I choose to use Ubuntu at work for a bunch of stuff, but snaps are making me consider whether it's worth migrating over to Debian instead.


I gave up on snap for Firefox; I still use snap for other stuff on a server. The issue I had with the snap version of Firefox is that its Downloads feature is crippled.

As an example, I wanted to download some pictures from a webpage, and for each picture I was forced to select the destination folder, because each time it defaulted to some snap-related folder (I forget which).

The browser feature where it remembers that downloads from site A go in folder Fa and downloads from site B go in folder Fb is a big time saver. I got the Firefox tar.gz file and am using that now.


I did. It's been a few months, and I am not going back to Ubuntu. I find Debian no different from Ubuntu. (I do not use GNOME, but rather stick with XFCE).


> The updates are currently awkward as they don't seem to work if the application is running

The fix for this is currently being tested: https://bugs.launchpad.net/snapd/+bug/1980271

> before removing alternatives

Alternatives remain available for install. They weren't removed.


That issue is related, but not quite what I meant. I would want the app to be updated in the background whilst it is running (which works fine for APT installed packages), so just closing it and re-opening it would get the newer version. That bug you linked to is the issue that after getting a visual prompt about a newer version of Firefox being available, the "snap refresh" isn't run immediately after closing it down. Having to close an app to update it reminds me of the pain of Windows updates.

You're technically correct (the best kind) about them not removing alternatives, but you have to do some manual intervention to get them back, so I consider it being removed when compared to the previous behaviour of having them available by default.


> I would want the app to be updated in the background whilst it is running (which works fine for APT installed packages), so just closing it and re-opening it would get the newer version.

The Firefox deb (such as in 20.04) became unusable and tabs crashed when the deb was updated without a restart.

It's really down to each individual app as to whether it will work being updated in the background or not.

AIUI the new snap implementation gets everything ready in the background, so the update on closing it and re-opening it is quick.


> The Firefox deb (such as in 20.04) became unusable and tabs crashed when the deb was updated without a restart.

I never encountered that for any previous version of Ubuntu.

> AIUI the new snap implementation gets everything ready in the background, so the update on closing it and re-opening it is quick.

I hope they get it working soon as my experience is that "snap refresh" does nothing whilst Firefox is running and doesn't even notify you that there's updates available.


It's insane how long it takes them to fix bugs in their auto-update mechanism they've forced on everyone. Things like that were reported years and years ago. They fixed the data corruption issue with their auto-update mechanism, but that too existed for years.

They also still haven't fixed that disgusting ~/snap folder.

It's very obvious by now how little Canonical cares about their users.


> It's insane how long it takes them to fix bugs in their auto-update mechanism they've forced on everyone.

You can disable auto-updates with eg. "snap refresh --hold firefox". See: https://snapcraft.io/docs/keeping-snaps-up-to-date#heading--...

Though not taking security updates for Firefox seems like a very dangerous thing to do.
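
For reference, per the docs linked above (snapd 2.58+), the hold can be temporary or indefinite:

    snap refresh --hold=72h firefox    # pause updates for a while
    snap refresh --hold firefox        # or indefinitely
    snap refresh --unhold firefox      # resume updates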


That command works on snapd 2.58+; even Ubuntu 22.10 still ships 2.57.5 unless you've got extra channels enabled.

It's a very new addition, coming after people had been pissed for years. I suspect most people would be fine without hold if the auto-update were seamless. Years ago it literally unmounted applications' storage abruptly, sessions and databases still open... Not well thought out.


snapd 2.58 is available as a recommended update on Ubuntu going all the way back to 18.04. A standard "apt update && apt upgrade" will give it to you, unless you've gone out of your way to turn updates off.


apt install firefox still works?


I think it does; why wouldn't it work? That's installing it via apt, not Flatpak.


It does not work as one would expect. Instead of installing Firefox itself, it installs some sort of script that installs Firefox from snap.


It hasn't 'worked' since Ubuntu 21.10. Ubuntu overrides your apt install and installs the snap version instead for programs that have snaps.


I think you now need to add a PPA in order to get new versions or ESR of FF on Ubuntu.
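
If I remember right, the Mozilla Team PPA covers this (worth double-checking; for non-ESR Firefox from the same PPA you'd also need an apt pin to outrank the snap transition package):

    sudo add-apt-repository ppa:mozillateam/ppa
    sudo apt update
    sudo apt install firefox-esr   # distinct package name, so the snap stub doesn't hijack it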


It was expected, they're doing a semi walled garden with this snaps stuff it seems. I jumped ship back to regular Debian after the snapd stuff was foisted on users. Not looked back since.


I'm not sure if Ubuntu realizes that the entire Linux ecosystem is moving away from them because of this flatpak vs snap situation.

I'm sure their enterprise side is fine, but their consumer side is disappearing rapidly.


I don't think the Linux ecosystem likes Flatpak that much more than Snap. They are both pretty terrible.


People love flatpak and hate snap, actually.


Indeed! The temperature generally favors Flatpak, no doubt. Personally... I tolerate it.

It's useful for things I can't manage otherwise. Though I'll admit I'm in an incredibly small niche -- I maintain RPMs on COPR for fun... and at work.

I part with an anecdote! We weren't too careful on some Ubuntu systems and ended up with critical networking services in Snap.

The saving grace was: these systems are completely offline, so it couldn't sneak in updates. Note: this usually shouldn't be a big worry, but we have some debt.


Over AppImage or APT, really? Do people ever go "great, it's not in my distro but there's a FlatPak available, just as good"?

Not saying you're wrong, but this is surprising to me, and not my own reaction as a full-time Linux user.


I'm saying flatpak over snap, that's all.


I think it should remove both - Flatpak and Snap. Stick to apt, and improve it if you wish.

> This adds fuel to the fire that Canonical is doing this largely to further its own interests.

What a surprise: companies (excluding rare exceptions) exist to make money, not to give it away. The interests of customers are taken into account only as far as they help increase income and satisfy legal obligations.


Canonical had a booth at CES this year, just a few rows over from mine. Every time I walked by it was totally deserted, just an employee or two looking at their phones.

It was rather cathartic to see.


You could use Pop_OS, which is an Ubuntu derivative with flatpak instead of snap.


In 5 years I predict an Arch-based SteamOS derivative will be the most common desktop Linux for personal users. It will have the backing of solid engineering from a company that couldn't care less about being opinionated regarding the desktop user space.


The more struggle Linux users have with snaps, flatpaks, appimages, and whatever their native package management system's quirks are, the more they will appreciate switching to NixOS one day, which IMHO is "the only sane Linux" as it's already definitively solved this problem. Join me in shaking my head at all this from the finish line...

And honestly, I get it. I avoided it for years (it's 20 years old now!!) mostly because the nix language and words like "derivations" scared me*, until a year ago after hopping distros for quite a while (hmmm Elementary, Pop_OS, Ubuntu, Manjaro, Arch...) and after a month where I had both a bricking AND a seemingly-unsolvable interdependency build problem, I got mad and decided to take a deep breath and check out https://nixos.org/

* here, let me immediately solve that fear: Nix interactive language tour, it's like JSON with syntax sugar and pure functions: https://nixcloud.io/tour/ . And a "derivation"? That's basically a "lockfile" for a particular package, if you know what that is (and you probably do). Except it existed before the concept of "lockfiles", so it got that name.


As long as they stop shipping snaps by default as well and go back to stuff that 'just worked' instead of 'stuff that works in interesting and novel ways'.


Reminder that ZorinOS exists. It's Ubuntu, but better: https://zorin.com/os


I would've appreciated some background on how this differs from the next Linux distro, and what its relation is to Ubuntu removing Flatpak.


Beautiful UI and themes, based on Ubuntu, built-in support for Windows software through PlayOnLinux and Wine, software store supports apt, snap and flatpak, you can use whichever method you like.

https://help.zorin.com/docs/apps-games/install-apps/


I'm not sure which major distro doesn't have "built-in support for Windows software through PlayOnLinux and Wine"


The ones that don't come with Wine and PlayOnLinux preinstalled?


Some links I'd suggest reading:

https://www.forbes.com/sites/jasonevangelho/2022/01/17/what-...

https://www.techrepublic.com/article/zorinos-16-is-exactly-w...

https://www.zdnet.com/article/zorin-os-puts-on-a-master-clas...

Copy-pasting my comment from reddit:

- Interface is amazing. Possibly the most polished and modern UI in any distro.

- Nvidia drivers provided by default.

- Biggest App store library in any Ubuntu-based distro. Comes with Ubuntu + Zorin + Flatpak + Snap repositories.

- Made for newbies and pros alike.

- Wine pre-installed, so you can even use some Windows programs without doing any extra configuration.

- Has great hardware compatibility and is always on the latest LTS kernel.

- Great performance.

- Extremely stable because it's based on Ubuntu.

- Works great for gaming.

- Has extra proprietary drivers pre-installed for example, Intel WiFi chipsets.

- Multiple layouts, Windows like shortcuts.

- Good amount of customization.

- Since it's based on Ubuntu, most articles and tutorials available online also will apply to ZorinOS.

- Did I mention that the UI is very polished?

ZorinOS is a no-brainer choice if you want a 'Just Works™' system that also looks highly polished. Ubuntu's focus is not the user but the corporate market; just compare their home pages and you'll see what I'm talking about. It's Ubuntu but better, so why would you use Ubuntu ;)


Thanks! Yeah, it does sound great for someone who wants a Linux that "just works". That was the original selling point of Ubuntu as well, but this seems to take it to another level for less technically savvy people who want a Linux that just works out of the box.


Like many, I have had my fair share of frustrations with snaps. Already considering moving away from Ubuntu, but not sure where to go next. Is Debian a good option? I use Linux both personally and professionally, so while I do like to tinker with new stuff, I also need some stability. I've used rolling-release distros in the past and it's something I'd like to try again. Maybe Manjaro?


I've moved from Ubuntu to Fedora and it's been more stable while also being more up-to-date, without rolling-release headaches. Strongly recommend it.


I gave Fedora a try some years ago (4-5?) and had a couple of issues with Nvidia drivers. What's the status these days? I also have a long history with debs, so going to RPM is a mystery for me :)


Also, to add: Fedora has a 6-month release cycle, and each version is supported for about 13 months, I think.

But during this time they regularly update packages. My kernel is always the latest version.

But I trust it because Fedora has a massive automated update-testing system. Every package is thoroughly tested for regressions and other issues. See https://bodhi.fedoraproject.org/ to look for yourself.

It is also integrated into their bug tracker and other systems, so it's a very well-oiled machine.

It's been rock-solid stable for me and I've been running it since 35. And upgrades are super easy.

There is also COPR, their AUR/PPA hybrid, which lets users set up their own repos but build using Fedora's learnings and systems. It's pretty cool.


It's vastly better nowadays.

Also with `dnf` RPMs are as easy as DEBs. And some of the commands are similar.

I find dnf is even smarter and better at managing dependencies.

I only ever run 2-3 commands: `dnf install`, `dnf update` and `dnf remove`. Update handles the repo refresh and actually updating packages, and to force-refresh repos you just add `--refresh` to the command. Otherwise it refreshes repos on its own every few hours (just the repo metadata, not installing updates).

Fedora is a breath of fresh air after decades of Ubuntu and then Manjaro. I wouldn't go back, and I used Ubuntu from 5.04 till 22.04.


And dnf5 is around the corner making the "slow" dnf issue a thing of the past!


Nvidia is still Nvidia -- with newer kernels you may occasionally hit build problems with their drivers, and Fedora amplifies this by using such recent kernels.

There are better packages/attempts at handling it, but I still find myself switching to an LTS kernel if I have Nvidia or ZFS involved.


Don’t take this as a recommendation, but after using Ubuntu for 15 years I ended up switching to NixOS. Obviously it’s a different beast, but with all I’ve learned over the years it’s now a decent fit. The reduction in mutable state has been a breath of fresh air.


I stopped using Linux 6 years ago when I realized it had become glorified desktop customization software for me. Now I use Windows, and on the extremely rare occasions that I need Linux, I use WSL2. But for the most part I do everything on Windows with PowerShell.


I only use Windows for a flight simulator; everything else on it is extremely frustrating for me and feels like a battle against the OS. That may not be the experience of many, but for me Linux stays out of my way or, rather, lets me define what the "way" is. Wouldn't switch for anything :)


Flatpaks are better than snaps IMO, but both are only good for certain apps due to their largely half-baked implementations.


I see why software vendors don't want to have to target specific distributions if they can help it - it's expensive. I assume snaps and flatpaks try to mitigate that problem.

I wouldn't want it to be a very common solution really because it's not that efficient and means you get updates for security when that vendor feels like it.

I'm happiest with MOST software being in the distribution.

Anyhow, I know there are lots of ways to look down on Arch/Pacman (Artix in my case - dinit instead of systemd), but I have to say that it is simply much more enjoyable to use than Ubuntu or Fedora or even Debian IMO. Perhaps one day some terrible thing will happen to make me regret it, but I just cannot imagine going back to the horrible burdens of those distros: upgrading from one version to another (especially on Fedora - groan), SELinux, snap Firefox, etc.


Are Ubuntu devs just not reading the room?


Oh they are, but they're intentionally ignoring the feedback. Instead of listening to hundreds of users (a ridiculously high amount for a bug report) they've said (paraphrasing) that they have "many more users to cater to".


These organizations just demonstrate utter disdain for their users. It's not just Canonical. It's almost every company that's been around for more than a couple of decades. Microsoft, Google and Mozilla are also like this. They just don't care what their users want and they treat them like children who need to be spanked when they don't like it. I personally end all relationships I have with any organization that does this.


I wonder if these developments won't eventually result in a split, when at some point Ubuntu would rather not keep up with Debian packages or talk things through upstream. Perhaps when there are some fundamental differences of direction at the packaging level.


I like flatpaks much better than snaps. I wish they would implement an equivalent to snap's classic confinement, so that apps like VS Code could interact easily with the default system shell, libraries, executables, etc. That feels like a big hole in Flatpak.
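
Not equivalent to classic confinement, but partial workarounds exist today. A sketch (com.visualstudio.code is the Flathub id; flatpak-spawn needs the org.freedesktop.Flatpak talk permission):

    # widen one app's sandbox to see the host filesystem
    flatpak override --user --filesystem=host com.visualstudio.code

    # from inside a sandbox with the right permission, run a command on the host
    flatpak-spawn --host bash -lc 'make'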


Can't stand Snap, Flatpak and AppImage. They're the Node.js of the software distribution world - a horrendously inefficient way to distribute software. DEBs and RPMs via APT and DNF are far better, imho.


Yep. The notion that they're listening to anyone, anywhere is marketecture at best.

I was outspoken in my interview about snaps. Of course this is the path to success in a monoculture.


It's surprising to me how Canonical keeps "losing" when positing competing technologies (e.g.: Upstart, Mir, Unity, and now Snap)


There is a common theme to all of them: Canonical retains exclusive control over the development by requiring assignment of copyright of all contributions to Canonical.

The obvious consequence is that anybody who doesn't get paid by Canonical and wants to work in the area decides to contribute to the community based CLA-free competitor project (systemd, Wayland, Gnome/KDE/..., Flatpak), which eventually becomes more powerful/flexible/reliable due to lots of companies and individuals contributing.

(The oldest of the CLA projects was the "Bazaar" distributed version control system, almost completely forgotten now)


When Ubuntu shoved LibreOffice into a snap, it took forever to start and crashed randomly. That was the day I wrote off Ubuntu.


> Ubuntu is prioritizing deb and Snap, its default packaging technologies, while no longer providing a competitor by default. This is described as an effort to provide consistency and simplicity for users.

It does! It'd be even simpler and more consistent if they dropped snaps and only used debs!


Personally, I've just not had that many problems with snaps. Just one, really, with Firefox: it was crashing every time I used the File API; removing the snap file cache fixed the problem. I don't have enough experience with Flatpak to compare.


>Another potential concern may be that Canonical could be using this decision to force package upstreams to offer a Snap version or face not being easily available in the default Ubuntu installation.

There we are.


I didn't even know Ubuntu shipped flatpak preinstalled. It's not like they are removing it from the repositories; if you want flatpak, install it. I don't see the issue.


Moved to Debian a while ago because of the mess with Unity and snaps, etc…

A shame, since Ubuntu got me into the Linux desktop back then, but in the end they keep screwing things up needlessly.


I didn't, but my next OS won't be Ubuntu for the same reasons.


I mean, you could always just do `sudo apt install flatpak`
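
Right, plus the Flathub remote for it to be useful; this is the standard two-step setup documented at flathub.org:

    sudo apt install flatpak
    flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo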


Not installing software by default doesn't seem that big of a deal to me.

I mean, the first thing I do is install Emacs, which should, but doesn't, come pre-packaged with Linux. vim, sure. gedit, OK.


If you are looking for a Linux distro, here is my recommendation. If you have the freedom, pick something that is independent and community-driven, like Arch Linux, or at least community-oriented, even if it depends on another distro's packages/infra, like Linux Mint or Elementary.

It's not some tinfoil stuff. Community-oriented and community-driven distros (Arch is entirely community-driven) actually listen to the community. This snap-versus-flatpak mess wouldn't happen there in the first place.

Use Linux Mint if you don't have time to set it up initially. Otherwise use Arch Linux if you have the initial time and patience to read. Arch is not scary, and the community isn't bad. The wall of text is intimidating and needs a couple of days the first time through, but it is just about RTFM.

Flatpak and Snap are unnecessary abstractions which add a hell of a lot of bloat, and for what? To make a point-release distro work like a rolling release. Nothing else. So just use a rolling-release distribution like Arch or openSUSE Tumbleweed (which I haven't used but have been hearing good things about, especially since last week here on HN).

But but, rolling release is bleeding edge and unstable!

NO! Rolling releases ship the stable versions of packages, not development streams.

We seriously need to clear up this confusion between a Linux distro's stable, testing and unstable repos, and upstream's development branches. These are two completely different things.

When, for example, Firefox v222222.1 with a serious security patch sits in your distro's testing branch, it means it has not yet been tested with YOUR distro - the distro that did some hacks to make it work because it froze your package for a point release - *AND NOT BECAUSE THE UPSTREAM DEVELOPMENT BRANCH HAS A BUG.* The majority of the time it is already fixed upstream. And even if it is a genuinely new bug, your package needs to be an important one, otherwise you will have to wait until your next point-release update or version bump before receiving the fix. This backporting and picky patching on point releases creates a huge amount of unnecessary overhead for upstream, because the hacks/patches differ on each distro.

Use any rolling release - Arch, openSUSE Tumbleweed, PCLinuxOS, Void - or a point release that uses its unstable/testing branch (whichever is closest to upstream's stable versions), like Debian unstable.

These recommendations are for desktop users, NOT servers. FYI.


Too late. Snap, flatpak and command line ads made me switch to Debian testing. Not going back. Shuttlewho?


So, for a developer, what's currently the best hassle-free desktop Linux Distribution?


I personally would recommend Fedora. It's not rolling release like Arch so packages might be a bit behind, but with Red Hat using Fedora as their staging distro it's usually not very far behind and occasionally even ahead. And you don't have to worry about updates breaking everything. It's also one of the big distros so it's well supported


It's been a long time since I used an RPM-based distro. Ubuntu got me pretty hooked on apt-get. I guess I should let go of a preference based on nothing :)


Any of them besides Ubuntu that isn't some opinionated special-feature distro.

Just pick one: Arch, Debian, CentOS, Fedora, any niche little distro. And if you're opinionated about something, there's Devuan, Alpine, Void, NixOS.

About the only ones I don't recommend anymore are Ubuntu and Manjaro.


There's no easy answer; the Linux desktop world is more fractured today than ever. I use and like Ubuntu; Debian 11 has had driver issues for me, and I don't like RPM distros. For Ubuntu, I had to uninstall snapd and pin it so it never gets installed. For Firefox, I downloaded it, unpacked it, and use its Help menu to update it. I don't use Chromium because Ubuntu only ships it as a snap now; I use Chrome.


PopOS, SuSE maybe


Never checked out PopOS before, thanks for the hint!


I have been really impressed with PopOS so far.


EndeavourOS is the easiest way to use Arch imo. It's a pretty painless installation and gives you a working desktop environment out of the box, and it seems to be pretty good at detecting hardware and installing drivers too.


openSUSE Tumbleweed: a rolling release that has snapshots, up-to-date software, the Open Build Service, and YaST.

Fedora.

Debian, maybe Debian Sid.


We should have never climbed down the Slackware tree.


The Linux desktop stack fragmentation continues.


That is just Ubuntu doing Ubuntu things. Like most of their other endeavors, like Upstart or Unity, they will eventually get bored with snap and adopt the most common tool instead.


Just to be clear, Canonical are often the first ones with a solution to a problem (Unity, Mir, Upstart, Snap and LXD predate the alternatives trying to solve the same problems), but they usually lose the ensuing "war" for adoption due to varying degrees of bad strategy, Red Hat being more popular, poor technical choices, or just bad luck.


Often? Only in the case of Upstart and LXD (I'm not familiar with the latter's history).

Here's Mark Shuttleworth in 2010:

> The next major transition for Unity will be to deliver it on Wayland, the OpenGL-based display management system.

> We came to the conclusion that any such effort would only create a hard split in the world which wasn’t worth the cost of having done it. There are issues with Wayland, but they seem to be solvable, we’d rather be part of solving them than chasing a better alternative. So Wayland it is.

https://www.markshuttleworth.com/archives/551

Then in 2013, Unity 8 was released with Mir.

Snap was largely developed in parallel with Flatpak (initially called xdg-app).

There was a talk about the plans at GUADEC 2013:

https://www.superlectures.com/guadec2013/sandboxed-applicati...


My point still stands.


This feels like .deb vs .rpm all over again.


Is there anything good about snaps?


What difference does it make?


Article is paywalled.



There's no workaround as far as I know. This MSN (!) article covers the same story, though it's almost certainly lower quality.

https://www.msn.com/en-us/news/technology/ubuntu-flavors-to-...



