One thing I will not do is willingly allow somebody else a way to deploy and execute code on my computer without my say-so (which is exactly what snap is).
After reading the whole thread at https://forum.snapcraft.io/t/disabling-automatic-refresh-for... and seeing Gustavo Niemeyer's arrogance (we know better than you when you should be applying updates) I will be voting with my feet and will be installing Pop!_OS instead of Ubuntu, and if snapd is present I will remove it.
The stated goal of Niemeyer, to have users use updated software, would have been fulfilled in my case if I had a way to see what updates would be applied beforehand, instead of the updates being force-installed.
Lengthy dialog with Niemeyer in the forum thread seems to have been a waste of time for all the people who participated trying to convince him to allow disabling of force-installed updates so I suggest you do the same as me and vote with your feet!
Changing a paradigm usually involves pushing the envelope and breaking some existing assumptions; systemd is everybody's favorite example of that in the Linux world. The root of this issue with snaps is the trade-off between built-in security and user control. Some points to consider:
1. Browsers like FF and Chromium [on Windows] simply self-update, and disabling that requires configuration. So there is at least some precedent for taking the position that user applications should just update themselves. Server apps are more complex and are a strong argument counter to the existing behavior, as is the fact that many apps cannot be refreshed without user impact.
2. Ubuntu, since 16.04 LTS, ships with unattended-upgrades enabled, which means that for debian packages the default behavior is already auto-updating (although automated reboots are not enabled by default, as that would be crazy for the general purpose case). That feels like the correct default, too, given the risk of running code exposed to exploitable, public CVEs — and how reluctant users (like my dad and my wife!) are to click on "Install now" in the update-manager dialog.
3. Debian package updates run as root. Snap updates run in userspace, and confined. So in principle the risk exposure for snap updates is much smaller. And snaps do have an auto-rollback mechanism for failed updates [a]. Counter to that argument is the fact that snaps are meant to be under third-party control, and that there is no clear mechanism to separate security patches vs updates which you get with the debian pocket mechanism (i.e. focal-security vs focal-updates).
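For reference, both mechanisms mentioned above are visible in plain configuration on a stock Ubuntu install; a sketch (paths and release names are from 20.04, adjust as needed):

```shell
# unattended-upgrades (point 2) is driven by these flags; "1" = enabled:
cat /etc/apt/apt.conf.d/20auto-upgrades

# The pocket separation (point 3) is just distinct apt sources, e.g. in
# /etc/apt/sources.list:
#   deb http://archive.ubuntu.com/ubuntu focal-security main   # security fixes only
#   deb http://archive.ubuntu.com/ubuntu focal-updates  main   # broader bug fixes
```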
The lack of any official [b] means of user control over the snap auto-update mechanism feels wrong to many of us, including me. And while we may seem somewhat opaque in these debates, the feedback we get in threads like this one (and the snapcraft.io one) actually feeds into our decision making. So please do keep pushing on this topic and we'll do our part internally.
[a] See https://kyrofa.com/posts/snap-updates-automatic-rollbacks for a detailed guide on this topic. Of course, if your snap refreshes and you hate the new version, downgrading isn't always possible.
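For what it's worth, snapd does keep the previous revision on disk (that's what the automatic-rollback mechanism uses), so a manual rollback often works even when the store offers no downgrade path; "chromium" here is just an example snap:

```shell
# Revert to the previously installed revision, if it is still cached locally:
sudo snap revert chromium

# Show which revision is now active:
snap list chromium
```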
[b] There are ways to, hmmm, control auto-updating (i.e. refresh.metered, refresh.hold) if you really want to; that thread has a few. That doesn't help the debate, but I'm sharing in case someone has a technical need for it.
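Concretely, the knobs mentioned in that thread look like this. I'm sketching from memory of the snapd docs, and the exact limits (e.g. a hold was capped at roughly 60 days at the time) may have changed:

```shell
# Hold refreshes while on a metered connection:
sudo snap set system refresh.metered=hold

# Postpone all refreshes until a fixed date (RFC 3339 timestamp):
sudo snap set system refresh.hold=2020-07-01T00:00:00-00:00

# Or constrain refreshes to a window, e.g. last Friday of the month, 23:00-01:00:
sudo snap set system refresh.timer=fri5,23:00-01:00
```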
You know just as well as I do that if you criticize Snap within the company, you get fired. Especially if Mark overhears you. There is no room for criticism. You either drink the koolaid or you shut up. So, no, sorry, we're going to keep pumping out Snap and those who don't fall in line will just fall out of the company. This is how we've always done these things, despite it failing repeatedly, and Snap is no exception. Actually, Snap is in particular no exception, given how hard it's being pushed by top-level management.
[An aside from the main point of this comment: your point 3 is nonsense, and any security guy will tell you the same. For packages that the main sudo-ing user executes, sandboxed or not, there is effectively no difference between that and root. Snap's sandbox is alpha quality at best, and major platform hurdles remain before it can do anything remotely useful. Say no to auto-updating snap backdoors. Please. There's a reason why Linux has thrived with its traditional, vetted-by-distros package managers.]
First, has anybody actually been fired for criticizing snaps? Your comment seems to imply through hyperbole that we don't debate inside Canonical, but in my experience that simply isn't true. In fact, I've seen a lot more intense IC to CEO debate in Canonical than anywhere else I've worked. It's not always super constructive debate, but I don't know how much better it is in any relatively small organization with the broad impact Canonical has.
Second, debate and reflection is how these positions get refined. An idea starts out crazy and radical -- "let's make an OS which costs zero and which anybody on the planet can figure out how to use!", or "Launchpad will only support bzr", or yet again "upstart, not systemd" -- but over time it evolves towards a place of greater consensus. So I don't think we're at the destination for snaps yet; in fact, if these blog posts are only coming out now, it signals we are rather early in the journey.
Finally, I can understand creating a throwaway account to disclose something you're not comfortable with at your workplace, but it's not cool -- nor constructive, civil or all sorts of C words -- to create one and open with "you're simply not telling the truth". C'mon, I'm your coworker.
Separately... "In fact, I've seen a lot more intense IC to CEO debate in Canonical than anywhere else I've worked."
As an outsider, this makes me wonder if the CEO is too involved in day-to-day operations. (And overriding the work of those with more expertise than himself.)
But meanwhile, I'm curious about point 3 as you seem to have facts that I lack -- when a confined snap refresh runs through snapd, is the upgrade payload not executed entirely in userspace within the sandbox? I haven't looked at the code, but my understanding of the model is that the snap can only modify its own writable areas (and do stuff like add a symlink to /snap/bin, though that's also limited). So a snap update couldn't, for instance, modify arbitrary files, nor read restricted ones. Whereas a dpkg install can do anything as root. Can you help clarify?
What this tells me is that I am not the kind of user you are targeting. I don't either need or want any such trade-off. I'm knowledgeable enough to make my own decisions about security; I don't need a third party to do it for me. So if your distro will end up insisting that I cede any control over what software runs on my machine to a third party, it's a nonstarter as far as I am concerned.
That may or may not change your overall strategy, and I emphasize that it's your decision either way and I would not have a problem if your response is simply: "Well, we are targeting a particular kind of user and that's just the way it is." (I would simply go find another distro to run.) But I think you need to be clear about what kind of users you are targeting and what kinds of control you expect those users to give up to third parties, so users know what they are getting into.
I also think that this idea of having third parties control the security of your computer is against the basic Unix/Linux philosophy, because the whole point of running Linux or some other variety of Unix is to opt out of the walled gardens and third-party controls that other operating systems that I won't name force users to accept. So if you're going that route, IMO you're going to have a tough time explaining to users why they shouldn't just go ahead and run one of those other operating systems instead.
There is a whole other set of users who would prefer that Ubuntu keep out of their way the administrative actions they trust the machine to handle. They are probably more silent because they are mostly OK with how things work. And I'll say that I'm not comfortable with everything about snaps, so it's more of a spectrum than binary acceptance.
Snaps were invented because we had a problem. We wanted to make it easier for software publishers -- think games, desktop tools, browsers -- to deliver their software to Ubuntu users, but debs were both too hard and too powerful to make that a viable proposition. In the old days, people relied on archive.canonical.com, from which debs like Skype were published, but it was unsustainable and led to poorly packaged and out-of-date software.
Regarding third parties, in the end, nobody in the community reads through the source code of every patch applied to binaries in their systems. There is a degree of assumed trust in any update. I accept that trusting Canonical is one thing and trusting third-party publishers is another, but given the motivation I describe in the previous paragraph, the decision to use snaps wasn't made in a vacuum.
Which still exists, if you want to go and check out the pool of third-party binaries. All of those installed as root!
Is there somewhere--a blog post on Canonical's website, perhaps--that explains this in more detail? Off the top of my head I'm having a hard time seeing how snaps improve the situation, unless it's simply the "dependency hell" problem.
> nobody in the community reads through the source code of every patch applied to binaries in their systems
That's true, but irrelevant to the concerns of users like me. We aren't saying we insist on controlling when all updates are applied to our systems because we want to ensure that we have time to read the source code. We're just saying we insist on controlling when all updates are applied to our systems, period.
> There is a degree of assumed trust in any update.
Trusting that, for example, a particular update doesn't include malware is one thing. Trusting a third party to control when an update is applied to my system is something quite different. It seems to me that you are trying to conflate the two, which might be fine for some set of users (and, again, if that's simply the only set of users you are targeting, that's your call), but is not fine for me, nor I suspect for other users with similar concerns to mine.
> All of those installed as root!
Doesn't any update get installed as root? I certainly don't want system-level binaries installed to my user account; that would be a huge breach of security since my user would then have write access to them.
Yes, functionality and workflows around installation and updates are still insufficient for many use cases, but that could have been ironed out given enough time.
But what you messed up badly, IMO, was to force-migrate packages to Snap in an LTS release before it was ready.
Had you waited until Ubuntu 20.10, I'd have been more forgiving. But you (collectively) were so eager to get this in before the window closed for another two years.
If you had made Snap a compelling product, even LTS users might have voluntarily migrated to snaps once they saw how good it was. Now you've kind of pulled off the opposite.
Sadly the ship has sailed: Both in general, since you've pushed so heavily for Snap in a LTS release when it simply wasn't ready yet. And for me personally, where the forced installation of Snaps by some debs (notably Chromium) broke my trust significantly enough that I turned my back on Ubuntu after over a decade.
Not only is the Chromium Snap dog slow, it also can't see my NFS shares. So the snap version is objectively worse, at least for now.
But if I install a deb, I expect to get a deb. You don't want to offer it anymore, fine, take it out of the repo. But sneakily migrating me to a snap, and not even notifying me, is just trust-breaking.
We need to modify the age old saying:
"Computers do what you tell them to do, not what you want them to"
By applying the following patch:
"Computers do what you tell them to do, not what you want them to do - except Ubuntu 20.04 LTS"
Do these options currently exist?
In general, you can opt out of snaps entirely and `apt-get remove snapd`, but you'll miss out on potentially critical components that are only available via snaps.
I don't use it, but I see the advantages of such packaging. Auto-updating software (FOSS or blob) brings in its own barrel of concerns. In the event of something 'critical', a message alerting the user, who can then research and decide, is far preferable.
As a former Gentoo user, this whole Snap concept is a huge deal-breaker. I wouldn't consider Ubuntu for anything other than some thin-client/smart terminal/kiosk usage, and I'd still be wary of what gets pushed and what sort of potential holes might get opened.
> but you'll miss out on potentially critical components that are only available via snaps
REALLY shitty move.
Until the snap maintainer of Qt decides to upgrade you out of version compliance with your compositor and... Well, hope your kiosk doesn't do anything critical...
I understand the argument with browsers, but that does not justify the default on ubuntu-server.
LXD being Snap-only really feels like force-feeding Snap.
sudo apt-get autoremove snapd
sudo apt-mark hold snapd
If it doesn't work sufficiently well when I'm ready to upgrade from 16.04 LTS, then I'm switching to Debian. (I'd intended to be on 18.04 LTS by now, but life doesn't always cooperate and I haven't found time to risk a day or two of squashing upgrade-induced bugs.)
I might want the latest Chrome and Firefox. But I don’t necessarily want the latest update to other applications.
At the risk of reigniting this particular flamewar, systemd changed that paradigm for the worse, using flimsy arguments that don't hold much water to anyone who knows better. Comparing snap to systemd doesn't exactly do the former a whole lot of favors.
> Browsers like FF and Chromium [on Windows] simply self-update
If I was okay with this being the norm then I'd still be using Windows.
> Ubuntu, since 16.04 LTS, ships with unattended-upgrades enabled
> and how reluctant users (like my dad and my wife!) are to click on "Install now" in the update-manager dialog.
Your dad and your wife are reluctant for good reason. Having experienced first-hand how prone Windows updates are to break things, and now seeing an admission here that y'all want to make Ubuntu more like Windows, this doesn't bode well in the slightest for a good user experience.
Like, one of the main points I make to people to convince them to at least try a Linux distro is "well unlike Windows it won't shove updates down your throat, and the updates are quick and easy and painless anyway". And then here comes Canonical wrecking the former assumption (and who knows how it's impacting the latter, but I don't exactly have high hopes).
Not that it matters much since I've given up on Ubuntu (in favor of openSUSE) in my recommendations to others (and switched to Slackware for my own use) ever since God-Emperor Mark chose to double down on the Amazon Lens instead of listening to His users. Every once in a while I take a look at Ubuntu again, hoping that maybe Canonical's figured things out, but always come away disappointed. It's always a bummer, given that my first distro was Ubuntu 7.10; every once in a while I'll fire that one up in a VM and remind myself what Ubuntu used to be, before it seemingly became a soulless husk of a distro.
But I digress.
Updates to non-server applications can also have a big user impact and there needs to be a way to avoid/delay them when you know you won't have time to deal with any potential fallout.
I could remove some packages, like ubuntu-report or unattended-upgrades, but some seemed to be intertwined with other packages in a (purposeful?) labyrinth of nested dependencies.
They made themselves critical and uninstalling would break or cripple other fundamental system components.
Some I disabled in the config files, like apport, motd_news and kerneloops. Some I disabled and masked in systemd; for others, like snapd and whoopsie/whoopsie-preferences, I had to do:
dpkg -L snapd |
while read -r f; do
  [ -f "$f" ] && cat /dev/null > "$f"
done
Is it developers from commercial software vendors changing jobs and solving the problems the same way they solved them for other corporate customers? Or is it marketing carefully plotting a release by release path to dominance? Or is it people who truly believe that having a viable market for linux software will be good?
I mean, there might be some truths - people lag with their updates, people don't defend their privacy, and people would like to pay for software but have no avenue to do so.
But accepting those truths and unilaterally forcing "solutions" might find linux is a different sort of animal.
It's not like you're paying them for support.
And it seems like you're modifying the OS in such a way that might break compatibility.
Also, commercial software? Linux is commercial software. Red Hat and Canonical.
It’s the same thing MS did with Windows 10. You buy the product, but have to pay again if you want any semblance of control. Us normal users are now test subjects for the real customers. Look how non-enterprise Office 365 customers are on a monthly cadence for forced updates and the expensive plans get SAC or better.
I'm not sure whether you're being ironic, but they're doing just that.
With regards to updates, it seems that 20.04 continues in the mold of Pop OS 18.04: You get periodic notifications that updates are available, and can go to the Pop Shop to install them. If there are multiple apps receiving updates, you can review and install them piecemeal (although OS/library updates are bundled together as one item in the UI).
The idea of using it for my servers feels weird at first (when I think of Pop OS the UI comes to mind) but after thinking about it, it is a really solid OS and there’s no reason I can think of that it wouldn’t work.
(For server apps, the auto-update mechanism has a really painful consequence, which is that for clustered apps you have a built-in race condition that might kill your cluster)
You can do it in a privileged container, afaik, though.
OMG. Is this real? This is the exact reason I use Linux instead of Windows 10 or macOS. I am not a grandma who can't stay up to date on tech news. At the least there should be a toggle for power users. But no, you can only defer it. Am I the only one who doesn't like it when your already slow internet slows down even further? It feels like hell when you are working.
I am not upgrading to this. I have been using Arch Linux as my personal OS. Maybe I should look into Debian for my VMs.
And just read this thread. Is this how they treat their users? Even Reddit is better than this.
> Even on metered connections, snaps auto-update anyway after some time.
I recently switched back to Linux myself, but there are certain utilities and conveniences and options in Windows that Linux distros don't yet provide, and Ubuntu definitely is not meant to be lightweight in any sense of the term. Is switching to something else an option at this point?
An experienced user could probably find a nicely tuned arch / manjaro setup to work better for them than Ubuntu, but if someone is just first getting their toes wet and learning, Ubuntu isn't a bad recommendation for a first go-round.
Can I mark all my network connections as metered?
And I'm pretty sure NM lets you mark network connections as metered, e.g.
nmcli connection modify $CONNECTION connection.metered yes
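And if you want every connection metered, not just one, a small loop over NetworkManager's connection list should do it (an untested sketch; assumes `nmcli` is available and connection names contain nothing exotic):

```shell
# Mark every NetworkManager connection as metered:
nmcli -t -f NAME connection show | while IFS= read -r c; do
  nmcli connection modify "$c" connection.metered yes
done
```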
Canonical should be up front about this type of information.
Except for the ones Microsoft thinks are really important to push to you.
I hope Canonical fixes this immediately. I'm not eager to spend time re-researching the market for a suitable OS.
Another huge differentiator for Ubuntu over Windows was that I didn't think the OS vendor was trying to seize control of my computer. Canonical jeopardized that trust with this choice. I truly hope they take steps to restore it. I don't want the added work of switching OSs.
I tried to disable it from systemd, but it had some weird way of relaunching itself.
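The relaunching is most likely socket activation: stopping `snapd.service` alone leaves `snapd.socket` in place, and the next `snap` invocation starts the daemon right back up. A sketch of a more thorough disable:

```shell
# Stop and disable both the daemon and its activation socket:
sudo systemctl disable --now snapd.service snapd.socket

# Mask them so nothing can start them indirectly:
sudo systemctl mask snapd.service snapd.socket
```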
just do a web search on disabling snapd to see how many people want to do this.
> GNOME Calculator was put on the ISO as a snap to help us test the whole “seeding snaps” process, not because it was a fast-moving, CVE-prone application. Chromium, Firefox and LibreOffice fall more into that category.
Ok so the whole snap thing comes down to updating browsers. Is this for real? I want the web, not the browser to change daily, or to consume more bandwidth than my www usage :)
The problem is that there is no way to have a browser that only pushes updates for security fixes. They're always mixed in with changes to the UI that force people to re-learn workflows.
> If you don't want new features ("more free stuff!"), use an LTS version that only includes the security updates?
There is no such thing. I run Ubuntu 16.04 LTS on all my computers at home and I'm posting this on, IIRC, the fifth or sixth new version of Firefox I've had to accept (and that's only counting major version changes), because, as noted above, there was no way to just get the security updates and leave out the others.
Yes there is, it's called Firefox ESR.
At least when I encountered it a few years ago, if you go to Help -> About Firefox in the menu, it'll check for updates in the background, download the most recent version, and upgrade itself the next time you restart the browser.
And yes, this was on Ubuntu.
So the capability is there, even if it's not on generally.
Watching this snap thing play out, and in the past, watching Mir, Unity, and Amazon Lens, has provided steady confirmation that I've made the right decision to stay away from Ubuntu.
I left Ubuntu almost ten years ago, after five years of using it, when they switched to Unity instead of GNOME 2. I replaced it with Linux Mint and haven't looked back. This whole snap thing looks like the latest weird decision by Canonical to make their faithful users leave :/
Debian's problem is that its stodgy update policy means 'Stable' is still on kernel 4.19, things like WireGuard require a simple but odd procedure to have apt pull packages from newer releases, and most of the copy/pasteable examples out there assume Ubuntu and its versions/customizations of critical infrastructure packages.
IMHO, the stodgy updates make it a perfect candidate for server software. Personally, my Debian know-how makes it great for my desktop, and it has not failed for my use case: development, sysadmin, browsers, Steam (or any other games releasing Linux versions).
That's not a good idea, as it breaks the assurance that Debian Stable provides. Using the backports repository is the recommended approach if you need a newer version of some clearly-defined piece of software. It will pull the newer dependencies it requires from backports, while still relying on stock-provided packages as far as practicable.
I tried it. Long story short, now I'm on Sid.
It is true that the first-presented installer ISO images on Debian's downloads page lack the worst proprietary drivers, but another couple of clicks take you to images with them included. So, worst case, you find that the image you have lacks such a needed driver, and you use another image. In practice, I just start with the latter, and have not encountered hardware not covered. For the absolute newest equipment, a "testing" installer may be the right version to use.
The Debian download pages provide installer images for all needs. I have not needed to look at secondary sites, which also exist for specialized needs.
On my older 2011-era laptop, that's the wifi and wired network that need those drivers. It's a bit of a pain.
Not really. Debian also offers one with all the firmware included but explicitly labels it "unofficial" (though very much official in practice and hosted on debian servers).
The "pain" is thus literally to click on another download link.
I think it's an over-zealous position from Debian not to redistribute firmware. Even systems that are very strict about licensing, like OpenBSD, redistribute firmware, because they have some common sense.
OTOH I believe it's a position fully aligned with their ethical standpoint. Equating common sense with your personal preference isn't very gracious.
If you want something that's less zealous about respecting (and eschewing) stupid licensing, but is more zealous about randomly upgrading all your software packages unexpectedly, there's always Ubuntu.
What exactly do you achieve by refusing to load it? Are you more free in one case and not the other?
you could perhaps offer to update that page to remove the ambiguities you believe exist.
For all intents and purposes, firmware is like a key or a password you must supply to the device to make it work. The driver, which is indeed open-source, just says: "here, device, is the firmware you need". That's it. You are not achieving anything useful at all by making people go through some ceremony to download it separately. Maybe they just want to send the signal that people should buy devices where the firmware is already burned into ROM or ASIC or whatever?
Firmware is typically copyrighted, large, obfuscated, and executable on your system.
A password is a string that you can examine and offers no intrinsic threat - either exploit, or legal.
As per the link I provided to you, Debian's policy is that free firmware is shipped in the distribution -- non-free firmware requires you to add the 'non-free' and/or 'contrib' components to your repository lists.
There is no need to wildly speculate about the motivations of the Debian team -- eg 'send a signal people should buy certain devices' -- when their motivation is explicitly stated.
The DFSG dictates that non-free software will not be part of the standard distribution. But they've made it easy to pull those files in (as above) via a one-word addition to one line of your sources.list file.
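For anyone unfamiliar, that one-line addition looks like this ("buster" was the stable release at the time; substitute yours):

```
# /etc/apt/sources.list -- append "contrib non-free" to the components:
deb http://deb.debian.org/debian buster main contrib non-free
deb http://security.debian.org/debian-security buster/updates main contrib non-free
```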
Time to check out Debian!
*Edited. I can't type on phones.
Pop! is Ubuntu-based, so no idea of the situation with all these other problems, but it intrigues me because they're doing tiling windows first-class.
My understanding of Void is that it doesn't use snaps or systemd, making the system as a whole significantly easier to understand, and simply sounds much much closer to what I want out of a computer (and much like 8.04 was when I first switched to Ubuntu).
Because if so, I'm sticking with 18.04.
Stability and predictability are their main strength. Each release is supported for a veeeeery long time.
I mean... as long as you don't use EPEL, stability and predictability are pretty much a given.
I've been using EPEL on some machines and haven't had that many problems, though.
Not sure about win10, but macOS won't autoupdate apps if you turned it off.
If the app is not from an app store, it's up to the devs to offer an option to (auto-)update. Most apps allow you to turn auto-update off (in fact I can't think of a single one without this option).
> I've used both Windows and OSX for my professional work and while Windows is the worst offender when it comes to automatic updates, OSX is pretty horrible as well. At least with Windows you can expect some sort of backwards compatibility, while on OSX, one day you have to upgrade your entire OS, otherwise Notes or some stupid application won't launch.
I used to run OS X some time ago, back when even Windows supported turning off auto-updates. These days I am seeing GitHub issues saying people can't use brew, clang, etc. because there is an update pending. And most of the time the updates are just huge (even compared to Windows).
Is this not true? Can you put off OS updates for some time (a few hours is enough for me) and keep using Xcode, brew, etc.?
I stayed on Mojave for months after Catalina was released, and I had a MAS app that broke compatibility with the same company's own (abandoned) self-hosted server software, so I just didn't update it. I've since resorted to running that single app (and the abandoned server app) in a High Sierra VM.
The only version issues I know of that sound like what that other person referenced is:
If you update eg iOS to a new major version, sometimes iCloud-linked apps will say they need to upgrade something for new functionality (Notes specifically did this at least once in the last couple of years and iCloud Drive did it a few years ago).
But that is (a) not forced and (b) you’re told exactly what will happen (ie that older macs/iPhones won’t be able to use iCloud until they update too).
Some third party apps will set minimum required OS version (ie to use a new framework or api) but that doesn’t sound like what the other post was talking about?
Every macOS app from Google auto-updates. You can turn it off if you have hacker skills. The average user can't do it.
I'm currently on Debian with KDE, but I think I might need to move to a rolling release distro due to some issues with SMB/CIFS (that have already been fixed in newest builds of KDE) that probably won't be fixed in Debian until the next release.
Maybe I should start looking at distros in general-- but Ubuntu is definitely out of the picture.
Backwards compatibility is a positive as long as it's secure. This makes me hesitant about what is going on. Auto-updates: good. No way to block them: not sure.
If this is of concern to you, why are you using snaps? And why Ubuntu? What's the value-add over Debian?
People will never buy Linux, but they might buy computers with Linux at some point, just like they have bought phones with it. Android had things the iPhone didn't have, at a much cheaper price point, and thus Linux-based phones are everywhere. I wish Steam boxes had taken off.
The idea that you "misjudged who actually uses Linux" was based on the assumption that you think a product should generally cater to its users. If instead you think most of the users should leave, then okay, that's a valid opinion, it's just surprising.
That said, I don't think newbie-friendly and power-user-friendly must be at odds with each other. If you can figure out what the sensible defaults are, and provide simple toggles to customize things, you can cater to newbies, average users, and power users alike.
Yes, but more importantly it's insecure. The ease of typo-squatting is a real problem.
Not that I don't see your point: a curated list like the repositories is preferable to a system where anyone can claim any name, but I am not sure that this extrapolates to the statement that "it's insecure" as a whole.
Out of interest (I don't use Ubuntu/snaps myself), is that really the case? Can I actually publish <insert popular package> without any checks and, once I've got half a million users by repackaging the deb file as a snap, add some subtle malware? There is no review process or anything?
Sweet strawman argument dude. "That's bad, but this other unrelated thing is also bad, so the first thing can't be THAT bad in comparison". For real though, this is like a picture perfect example. I may put it on the Wikipedia page.
This is already possible with every other distribution method. If you host your own debs, then you can easily get them to do whatever you want. Even relying on the main archive isn't great - apt is typically delivered over HTTP to make mirroring easier, for example.
This is a large misunderstanding of how it works. You can't MITM millions of servers around the world just because they use HTTP for downloading their apt archives.
It verifies the cryptographic signatures. That's why you need to "apt-key add" when you add a custom repository. It doesn't rely on the transport method for integrity.
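To make that concrete, adding a third-party repository means explicitly trusting its signing key; a sketch using the era-appropriate apt-key flow (example.com is a placeholder, not a real repo):

```shell
# Import the publisher's signing key into apt's keyring:
wget -qO- https://example.com/repo/key.gpg | sudo apt-key add -

# Register the repository:
echo "deb https://example.com/repo stable main" | \
  sudo tee /etc/apt/sources.list.d/example.list

# apt update verifies the signed Release file against the keyring and
# fails loudly on a signature mismatch, regardless of HTTP vs HTTPS:
sudo apt update
```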
> [Typo-squatting] is already possible with every other distribution method.
No, the counterpoint we're talking about is apt. In apt, not anyone can just register any package name. My question was whether that's really a thing in snap.
> If you host your own debs, then you can easily get them to do whatever you want.
I'm not quite sure what you're trying to say here. Why would I host my own deb files (in the first place, but even if I did) only to hack myself? I could just install the modified deb files directly or modify the files on-disk, no hosting needed?
Actually, a bug allowing exactly that was present in the last three Ubuntu LTS versions: https://justi.cz/security/2019/01/22/apt-rce.html https://usn.ubuntu.com/3863-1/
If they used HTTPS then an attacker would have to control the mirror instead of being able to perform the attack as a MITM.
Also, using HTTP allows someone in the middle to learn what software is installed on a server, which, while not critical if everything is kept up to date, still leaks some information.
For the 'which software is installed' argument (confidentiality in addition to integrity), I agree, but your first link actually argues this:
> the privacy gains [of using https] are minimal, because the sizes of packages are well-known
Yes, they can. Yes, distributions maintain archives and are able to decide what to include. But there's nothing special about apt per se that prevents name-squatting.
If you create a well-used PPA, for example, you could easily add extra packages, e.g. "firefix", later on. The next time someone runs `apt update`, they'll be exposed to the new package.
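One partial defence against this: apt can tell you which archive a candidate package would come from before you install it. "firefix" here is the hypothetical squatted name from the example above:

```
apt-cache policy firefix     # shows the candidate version and its origin archive
apt-cache madison firefox    # lists available versions and which repo provides each
```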
> My question was whether that's really a thing in snap.
It looks like that from the outside, but in practice that's not possible. All snap package names are tightly managed by Canonical staff.
> I'm not quite sure what you're trying to say here.
Lots of software vendors host their own archives. It's very difficult to get new packages into distros, and they're often updated infrequently once they're there. For example, the OSGeo project distributes its suite of packages this way.
I find it strange that we trust as much as we do on our computers. Would you let 1000 people walk through your house unsupervised and unannounced? Would you expect everything to be ok after? Yet so many apps are created either directly or more likely indirectly via its dependencies by 1000s of people. Each one of those people has to be trusted. That's insane IMO. And as we get more and more connected the incentives to do evil rise.
From say 1993 to 2010 I mostly didn't care that Windows wasn't sandboxed. Now every app I download is trying to spy on everything I do either for marketing directly or for analytics which is then shared with "business partners" who then share it with others.
I wouldn't have to worry as much about a random library doing something bad if it weren't possible for it to do something bad.
By the time you are deciding whether to update Candy Crush or whatever application, you can't actually check all of its dependencies. You can postpone updates, or only update manually, but what difference does it make?
If testing of updates is important, then shouldn't you be running something that's imaged or managed by something like Ansible?
The author of this article claims it's too difficult to find Flatpak apps and that the Ubuntu software center prioritizes Snaps over .deb. Are platforms never allowed to migrate to a new standard? Why is it Canonical's fault that authors of individual applications have yet to migrate to Snaps?
If we all agree that on the whole auto-updating software is generally better and more secure than manually updated software, why not single out the applications that haven't migrated instead of blaming the whole standard?
Maybe I'm just naive and not doing the advanced super-user stuff these Snap haters are doing, but from a distance this resembles the systemd vs. init controversy, one in which, IMHO, Linux super users seemed unusually attached to an older standard for not always clear reasons. Snaps offer real benefits: maybe instead of complaining that 'this sucks', users could offer constructive criticism about how to improve the new standard.
Just my opinion tho.
I prefer my own package manager (pacman) over snaps for the following reasons:
1) I like to upgrade on my own schedule. I use my computer for work, and I cannot have things break in the middle of the day or in the morning, just as I get started with work. I usually save upgrades for when I have fewer things to do, so that in case stuff breaks, I can spend time fixing it. This has happened twice during the last two years, or something like that. One time Firefox got broken (or rather, the version upgrade broke an extension I use), and the second time some API changes in neovim broke a couple of plugins. If this breakage just happened by itself, it would take out the two most common tools I use on a day-to-day basis.
I've used both Windows and OSX for my professional work and while Windows is the worst offender when it comes to automatic updates, OSX is pretty horrible as well. At least with Windows you can expect some sort of backwards compatibility, while on OSX, one day you have to upgrade your entire OS, otherwise Notes or some stupid application won't launch. On the other hand, Windows eventually forces you to upgrade no matter if you like it or not. So both of them suck equally, but in different ways.
2) snap seems to create mountpoints for the applications and never removes them. When trying snap apps I always end up with a bunch of pollution in my environment. It could be that I'm using snap/snapd wrong, but it left a sour taste in my mouth, as I saw snap as something that wants to solve a problem that has existed for a long time. Instead, they look a bit amateurish because of this.
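On the mountpoint pollution specifically: snapd keeps superseded revisions around, each loop-mounted as its own squashfs. A cleanup sketch; note that the "disabled" marker is an assumption about the English output format of `snap list --all`:

```shell
# Filter "snap list --all" output down to superseded (disabled) revisions.
# Columns assumed: Name Version Rev Tracking Publisher Notes.
disabled_revisions() {
  awk '/disabled/ {print $1, $3}'
}

# Usage (requires snapd; removes each lingering revision and its mount):
#   snap list --all | disabled_revisions | while read -r name rev; do
#     sudo snap remove "$name" --revision="$rev"
#   done
```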
Or that new updates sometimes break things and that's a hassle. Of course they do and it is.
The problem is if you want to distribute an important security update, what do you do? Ask everyone nicely to upgrade? How? Again, what % of users will manually update their software? Not a lot.
For #2, that seems like a resolvable problem that can be brought to Canonical. I'd prefer to see auto-updates fine-tuned rather than have super users immediately dismiss the idea in general.
It's just my opinion but I think the greater good of the Linux community is served by auto updates, even if occasionally it means an update to an individual application has a bug here or there.
Maybe this doesn't apply to you but I wonder how much of the Linux community just doesn't like change. Sometimes Canonical stuff does crazy stuff (Mir?) but auto-updates seem like a noble principle worth attempting to adopt.
The thing is, I do not give a rats ass what the maintainer of the software wants.
I’m sure they’d like their software to be patched quickly on all the PC’s that use it, and more power to them. But I do not want their decision to patch something to affect my system unless I explicitly tell it to, period.
If I do want my software to update automatically, I’ll enable that. Just don’t force it onto me.
What the application author wants isn't all that matters. It's the user's system, so they install what they want when they want to.
Ensuring that the user can easily install important updates while preserving the overall order of the system is the job of the disto maintainers for the distro the user has chosen. This is the whole point of distros. The alternative, where each individual application author has free rein to jam their app into the system without coordination, gets you the kind of mess Windows has.
It is "fine" to make auto updates the default.
It is "not fine" to make it the only option.
Have some respect for your userbase. Who was the bonehead that made this call? So stupid.
Edit: Apparently you can set a preferred schedule for the updates (from another comment)? That's still one more thing I need to think about, that I shouldn't have to. Just make it optional and everyone is happy.
Even I recognize that this doesn’t make sense in the Linux world, though. Ubuntu is trying to be something it’s not—they’re trying to appeal to a new demographic, and, in doing so, driving away their existing users.
Even with my stance on auto-updating, snaps are a problem for me because I mostly use Linux in the context of servers. Like it or not, that’s where Linux has the largest market share; Android aside, Linux’s consumer market share is negligible.
In that context, snaps have problems:
- I can’t have my servers updating on their own. Security updates rarely break things, but most other updates need testing.
- I use auto-scaling. That means servers need to come up quickly when load increases. If a bunch of new servers come online and all decide to update, that’s worse than no servers coming online.
- I don’t want or need a sandbox. In a cloud environment, the server is the sandbox.
- Environments and server states need to be reproducible for testing and auditing. If I’m doing a post-mortem, I need the software on the relevant image to be in the exact same state as when the problem occurred.
- Performance is critical. I’m already paying AWS for sandboxing in the form of many small EC2 instances; I don’t need the additional overhead of snaps. I’m not working with bare metal.
All of these issues could be resolved, and I wouldn’t object to this experiment running in a non-LTS or desktop-only release. But it is truly an experiment: snaps aren’t ready for prime time. My options are to pay Canonical for extended support for old software, wait it out and hope the issues are sorted before I stop receiving security updates, or switch to something like Alpine or Debian.
Keep in mind, this is coming from someone who loves automatic updates, generally prefers systemd, rather liked Unity, and didn’t see what all the fuss was with Upstart and Mir:
- systemd works pretty damn well and is a big improvement, although it has its hiccups
- Unity was fine. It looked nice out-of-the-box and wasn’t a resource hog.
- Upstart usually worked well enough, though it sometimes had reliability issues.
- Mir never really saw the light of day, so it didn’t matter.
Snaps are where I draw the line. They might be the future, but they’re not ready for the present. And that’s not for lack of trying on my part—I had no trouble embracing Upstart and later systemd.
"Greater good"? You sound like an ambassador for Red Star OS. :)
I have this vague feeling snap is here to stay but I don't like it.
You can arrange snapd to update on your preferred schedule. What you cannot do is defer updates forever.
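Concretely, the knobs snapd exposes look like this (syntax per snapd's documented refresh settings; note that snapd caps how far out the hold can be pushed, so this is a deferral, not an opt-out):

```
# Choose when refreshes may run (e.g. Friday night):
sudo snap set system refresh.timer=fri,23:00-01:00
# Or postpone refreshes until a given RFC 3339 timestamp:
sudo snap set system refresh.hold=2020-07-01T00:00:00+00:00
```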
Says who? AFAIK the person who paid my hardware (me) is who is in command.
At the same time, it's also perfectly valid to instead, express what you want and don't want, and generally try to fix something that has broken or correct the aim of something that has veered off course.
Approving of the new change is also valid for that matter.
The only invalid thing is telling whichever camp you're not in that they can leave if they don't like it.
Remember, this is a change, not just the way something has always been.
How about this: if you like the idea of everything packaged in the form of snaps, and all those snaps updating themselves outside of your control, you can just go find, or create, some new distro that works that way, instead of changing one that already exists and forcing all its existing users to either accept the change or move, to accommodate a change you like?
"Take it or leave it" are not the only options, and it says something unflattering about anyone who tries to suggest that they are.
Updating everything has always been one click in Ubuntu (and I'm sure there's an option to have it go automatically).
> The author of this article claims it's too difficult to find Flatpak apps and that the Ubuntu software center prioritizes Snaps over .deb. Are platforms never allowed to migrate to a new standard? Why is it Canonical's fault that authors of individual applications have yet to migrate Snaps?
Churn is bad, and having to migrate your application is burdensome. Maybe the benefits justify it, but what are those benefits supposed to be?
> Maybe I'm just naive and not doing advanced super user stuff these Snap haters are doing but from a distance to me this resembles the systemd vs init controversy. One which, IMHO, Linux super users seemed unusually attached to an older standard for not always clear reasons. Snaps offer real benefits: maybe instead of complaining that 'this sucks' users could offer constructive criticism about how to improve the new standard.
I hate systemd because it breaks a bunch of stuff but I'm still forced to use it. So far that's been my experience of snap as well (specifically, it breaks Japanese input for some applications).
What are those "real benefits"? You've only talked about auto-update, which was already working fine thank you very much. Snap, like systemd, seems to be more a case of https://www.jwz.org/doc/cadt.html than something that actually makes my system better.
(I don’t know if you’ve tried MX Linux BTW; Debian derivative without systemd by default)
I will never defend pulseaudio though, that’s a horrible mess.
I know that there is a separate command that can be used to tell systemd to allow a program to live. I know that there are systemd libraries that an executable can link against in order to opt out of the new behavior. These do not matter, because they show that systemd is willing to break existing programs, and to break specified conventions. Systemd developers cannot be trusted to provide a foundation to build upon.
I know that this default setting can be overridden at the distribution level, or at the system level. This doesn't matter, because it shows that systemd developers do not know how to choose appropriate defaults, and that any changes that are made in systemd need to be continually monitored for stupidity.
Maybe this is just me being soured by a very poor first impression of systemd, but I haven't seen anything since to dissuade me from this impression.
Is that still the default? That’s horrific.
At this point, my standard .bashrc includes a check of whether systemd is running, and whether this absurd setting is set, so at least I will get some warning, and can either fix it or complain to the sysadmin.
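A sketch of the kind of .bashrc check described above. The config path and the exact `KillUserProcesses` key are assumptions about where systemd-logind keeps this setting:

```shell
# Warn at shell startup if logind is configured to kill background
# processes when the session ends (the setting complained about above).
warn_kill_user_processes() {
  local conf="${1:-/etc/systemd/logind.conf}"
  if [ -r "$conf" ] && grep -Eq '^[[:space:]]*KillUserProcesses[[:space:]]*=[[:space:]]*(yes|true|1)' "$conf"; then
    echo "warning: logind is configured to kill background processes on logout"
  fi
}
warn_kill_user_processes
```

Note this only reads the config file; a commented-out `#KillUserProcesses=no` (the usual shipped default) produces no warning.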
> ExecStart=/bin/sh -c "/usr/bin/foo > /var/foo/bar 2>&1"
As for networking, it's not like you have to buy in to systemd-networkd, systemd-resolved et al - or am I missing something?
What I've definitely had issues with is the way networking services are configured in recent releases of Debian, but that's mostly from several of the network subsystems being in different degrees of weird limbo with "the new way" and "the old" interfering with each other. For example how resolv.conf is managed. And the whole back-and-forth with network names. Come to think of it, it's a bit reminiscent of snap/deb in Ubuntu 20.04 ;)
And what percentage of users do that? From your experience in software in general how much of the general population manually updates their software? It's almost always a low number and that creates problems. A different set of problems than new updates that cause bugs, but IMHO worse ones.
> Maybe the benefits justify it, but what are those benefits supposed to be?
Security, compatibility, uniformity. Not having to support 18 different versions.
> I hate systemd because it breaks a bunch of stuff but I'm still forced to use it.
Exactly what stuff does it break? And are those things more important than the benefits of systemd?
Do applications break after auto-updates? Of course they do, and that matters to Ubuntu users because they have to fix it manually. Given the choice, I would rather choose when to upgrade so I could set aside time for fixes.
Ubuntu users are not windows 10 users. Why treat them in the same way?
It's great that you are diligent about updating your software regularly. But how many people do? If you agree that it's a low percentage, then why is it not better for the ecosystem as a whole to improve security and compatibility?
How often Ubuntu users who turn auto-update off actually update manually is something that can be researched. It's disappointing how many developers just assume the worst based on imagination and ego, then justify taking away control from users whenever they can get away with it, as a safety measure.
Many more examples.
And the fact that they first did it in an LTS release seriously jeopardises the trust I have in Canonical. This is a deprecation which may have significant undetermined consequences. This is exactly the kind of change they should have introduced in 19.04 and then used the ensuing year to iron out any issues before 20.04 LTS.
What this tells me, however, is that for Canonical their business interests are placed above ensuring the stability of the LTS release, and that’s extremely disappointing to say the least.
You are correct that this appears to be EXACTLY like the systemd debate.
Snap HOPES to provide an easier environment for developers to target and thus a richer ecosystem for users to enjoy. This, like trickle-down economics, probably isn't real. Like literally every other time Canonical decided to go their own way, they will provide an inferior option that isn't taken up outside their own ecosystem before eventually giving up and joining the crowd. Unless it attracts highly hypothetical new developers to Linux, it offers nothing but downsides to users.
- It's tied to a closed-source server run by and solely controlled by Canonical, with no ability to add software channels, unlike virtually every other major software distribution model for Linux. This means not only could Canonical exercise undue control over how their users use software on their platform, but others, including repressive governments, could force it to do so on their behalf.
- Users may only install the most recent version of software and will be updated to the most recent as soon as it comes out.
-- This means that if devs push a buggy version, you are stuck with it until it's fixed. If it isn't fixed for months, you just can't use the software. Bugs that affect everyone will probably get fixed immediately. Bugs that affect niche features or a smaller number of users are liable to go unfixed for longer. See the bugs that stay open for years at a time.
-- If a developer gets compromised, the ability to push updates to every user as soon as their machine is online means that a substantial portion of the user base can be hit within minutes, and almost all of it within hours. If a new version instead had to be pulled in and then distributed at irregular intervals, it would take at least weeks for a compromised version to reach most users. This would give users/packagers/distros/developers time to realize what is going on before all users are affected.
- For some reason, they are slow to start.
- They waste users' bandwidth and storage, even when one or the other is dear.
- You end up with 17 different apps carrying 17 different versions of a dependency, 16 of which have known security vulnerabilities, because apps don't use system libraries that get updates.
Do you think it will be pushed to the background with time or will Canonical manage to push it through?
Canonical could barely get its own users to prefer snaps over the alternatives, which is why they are forcing it onto them.
Canonical is the one pushing for snaps, and they own the centralized Snap store. On most other distros snap support is either non-existent or much less than for package manager or even flatpak. Let me turn that question around. Why should individual application developers have to package their apps as snaps, which are mostly just used on ubuntu?
> Maybe I'm just naive and not doing advanced super user stuff these Snap haters are doing
Most of the complaints here have been about the automatic updates (and specifically that you can't disable it, not that they are on by default). But, personally, I am more concerned with the fact that snap apps run slower. Snap uses squashfs for the program and any associated files, and squashfs is not designed to be fast, it's designed to store a file system in a small amount of (usually read-only) space, such as on a Live CD. Besides slower startup times, snaps also take up more space on disk (which may not be a huge issue for most people), and more time to download (a bigger issue) since each snap has to include its own copy of all its dependencies.
The containerization of snaps provides some security benefits, but there are also a couple security concerns with snaps:
- Unlike the official apt repos, the snap store is not curated. It is much easier to put malicious software on the snap store than get in the official apt repos.
- Since all dependencies are bundled with the snap, if there is a vulnerability in a common dependency, such as libc or openssl, then instead of updating a single package on your system to get a fix, you need to update all of the snaps. And you are dependent on the maintainers of all of those snaps to watch for such vulnerabilities and make sure their dependencies are kept up-to-date.
We certainly don't all agree on that. It really depends on the situation. And if that's what you want, you had it already with unattended-upgrades. I really prefer to manage what updates, how, when, and under what conditions myself.
What’s next, forced unscheduled reboots a la Windows 10?
I get that some people prefer a less secure ecosystem and never want to update their software. But it seems like the greater community is better served by auto updates.
The solution to that would be to fix unattended-upgrades and ship it as working by default, with an easy-to-use script to remove it. I bet that would have been orders of magnitude easier than developing snap.
But that would have kept the onus of packaging, testing, and delivering updates on Ubuntu. Instead, with snaps, they can offload all that to upstream developers. That is really the endgame here. Snap is a play for developers, not for users.
Ubuntu are saying to developers "if you build a snap, you don't have to worry ever again about distro differences! And you can update anything you need, at will!" and in exchange Ubuntu get to reduce their support costs. Win-win, right? And it is... except for power-users, who will get autoupdates shoved down their throats and their mount tables polluted up the wazoo. But nobody in Ubuntu ever cared about power-users on the desktop, really, so no news there.
But that's exactly what apt does. I last thought about updating Firefox (to use a more fitting example in the context of FOSS) around the same time as I thought about updating GIMP: not that I can remember.
Every. Fucking. Week. Because it breaks my session every update thanks to the inane switch to snap.
FWIW Snap isn't a requirement to do that. You can set Ubuntu to update .deb packaged software automatically.
Snap is more than just an update system. Even if snap's only concern was auto-updating, it would still carry a set of implementation decisions regarding auto-updating and it's irresponsible to rhetorically treat criticism/praise of an implementation as inseparable from the concept in general.
>from a distance to me this resembles the systemd vs init controversy.
Yes, people are again playing fast and loose with the distinction between features, and holistic analysis of systems that implement those features.
Basically a snap package is a container; that means an image of an operating system just to run one piece of software. Just this idea should be considered stupid: it's like saying every piece of software should be distributed in a Docker container. It's a great way to waste disk space, and also RAM, since shared libraries are no longer really shared...
You can have software that updates automatically with debs too; where is the problem? Unattended-upgrades has existed for decades: you install it, and it updates all your packages automatically.
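For reference, enabling it is two lines of apt configuration; the file name below is the conventional one on Ubuntu:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```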
You can even have proprietary software packaged in .deb packages, why not, if for Canonical that is a concern. You can even have software that runs in a container packaged as a .deb package, why not?
Snap has no real purpose to me.
Flatpak is something that makes more sense, since it aims at providing a way to package software for multiple distributions; it doesn't really need a runtime or a daemon like snap does, but builds application bundles that you double-click and run, without installing them.
Secondly, snap works on pretty much all major distributions just like flatpak (for a list go here https://snapcraft.io/docs/installing-snapd).
One big advantage that snap has over flatpak is the "--classic" option to allow non-sandboxed applications given that some applications are hard to ship completely sandboxed without getting into some serious usability issues.
This is not true. Snaps are just squashfs images, there's nothing fancy there. No deduplication or anything. You're thinking of Flatpaks with OSTree, which does do this.
I've only glanced at the docs, but Flatpak looks very complicated, with lots of infrastructure and things that can go wrong; use Git to extract the app into a local repository with hard links to resources? It sounds like typical Linux centralized overcomplication.
Snaps may be slow, there may be a lot of machinations going on to make it happen, but at the end of the day it's just a file. You have the file, your program runs. That's a big advantage.
The biggest concern I have with Snap is that it’s hardcoded to a store controlled by Canonical. The store itself is closed source. Snaps can be side loaded, but doing so is a huge pain. Snap also requires you sign a CLA with Canonical, allowing them to relicense the ecosystem however they see fit.
As long as they depend on exactly the same versions, right? This seems unlikely to happen by chance, without someone there to actively coordinate the versions, so there will still be substantial duplication.
We have long since arrived at a point where it's much more sensible to sandbox every application with a majority of its dependencies: fewer things break, fewer compatibility problems, easier updates, greater reliability.
All major operating systems have done this now, Windows, Mac, etc. There is no turning back now.
Is it browser-/Electron-based apps that need constant updates? Then the developers really should consider their choices; why would I download a webapp along with a whole browser runtime repeatedly rather than simply run the app from their website, especially when the target environment is also sandboxed like a browser? That simply doesn't make sense. At a certain point, after over 25 years of attempting to shoehorn the web into an app delivery platform, things get absurd.
It's true though that shared libs have caused more trouble than worth, and are the root of this mess. But the solution is simply to not use them and just ship statically linked binaries instead rather than put a layer of abstraction over them. Even on DOS/Windows back in the day users were able to download an .EXE.
I've always said that if your updates are such a benefit, then surely users would almost never even need to turn them off without a good reason, so why not give them the option? Most of the time this happens, it is because a company is doing it to maintain their platform, at the expense of their users.
Don't say it is just for safety. Why is it easier to install an outdated kernel than an outdated web browser?
Operating systems and toolchains are FRAGILE. If I have a computer doing anything important, I have to be vigilant about keeping rolling images of it. Updates break things all the time, if you are doing more than just the basics.
I travel a lot. Sometimes I have to reschedule a flight from a 2G cellular connection and can't share that bandwidth with updates. I have computers that run proprietary CNC machines, use specialized musical hardware, or need to have ancient toolchains to build highly specialized software (like J2ME and other embedded toolchains) for internal use for some of my clients. This self serving evergreen mentality is filled with contradictions. Like that I can't use an insecure version of SSL to fix a SCADA device on a secured closed network because they would be insecure, but nobody has a problem with me using telnet or HTTP, without any warnings whatsoever.
It's my computer, I don't have to justify why I want to say no to updates! We should not even be having this conversation about why it is not ok for Google or Microsoft to make permanent changes to my data when I have said no.
Yes, I probably should make more backups, and I have had to become way more careful about that. But the response to a lot of botched updates is just to blame users for trusting them and not making backups. I shouldn't have to worry about data loss or loss of functionality from updates; it used to be unheard of for updates not to have built-in rollback functionality.
And don't get me started on Google. They are the worst offender. And what bothers me even more is that they lie about why they are doing it. They've treated their users as unwilling beta testers for years. They installed a persistent menu bar widget without my consent on my Mac, which you could either hide, or disable using obscure, hard-to-discover flags.
It is only because I got fed up and made Chrome.app immutable and completely removed and locked their Keystone updater, that my Mac wasn't rendered unbootable by Google's recent involuntary update.
This should tell you everything you need to know. They have such hubris that they are not just modifying their users' computers, but making it nearly impossible for the average user to say no.
If companies were truly being honest and stood behind their updates, then there would be a clearly labeled and discoverable checkbox to disable updates, like what OSX has. I'm fine with putting that checkbox behind a bunch of scary warnings, and having the OS check back to make sure you really want to keep updates disabled. But what Google and Microsoft are doing with updates is blatantly dishonest and immoral.
Of course I can - and do - use the deb version, but it's just one of the critical-always-on things that can creep onto a system as a Snap. For example, LXD is moving with Snap as the default way to distribute on Ubuntu.
If we agree that auto-updates on the whole improve security for the platform, why is that not a goal worth pursuing? Why are application devs totally blameless in releasing buggy software?
In an ideal world, you could get away with pushing all updates automatically, but I for one would rather not have my production server get totalled because of a botched update.
*edit to fix question mark
Microsoft are a vast organisation and do a huge amount of testing before pushing updates, yet time after time, there are reports of serious issues with updates.
As a software developer myself, I completely understand the desire to have consumers running the latest version - but I also recognise that real world users have different workloads, different levels of acceptable risk, and different consequences when things go wrong.
I think automatic updates probably are the best thing for most desktop users, and for some server users but certainly not for everyone all of the time - I don't even mind if auto update is the default, but make it clear and give people a config option where they can control updates themselves!
On Linux, Chrome does not auto-update as it does on Windows or Mac. It installs an apt or yum repository, and then it is updated together with other packages, when YOU run the update using apt/yum/dnf/whatever frontend you use.
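The repo definition the Chrome .deb drops on a Debian-family system looks like this (treat the exact line as approximate):

```
# /etc/apt/sources.list.d/google-chrome.list
deb [arch=amd64] https://dl.google.com/linux/chrome/deb/ stable main
```

From that point on, Chrome updates arrive only when you run `apt update && apt upgrade` yourself (or via unattended-upgrades if you've enabled it).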
I don't know who all remembers having to develop websites compatible with outdated versions of Internet Explorer but I do and still have nightmares about it.
But it's estimated there are 1 billion users of Chrome. One billion! How wide of an attack surface does that present? Or how much of a nightmare would it be if they were all on wildly different versions?
I get that this specific bug may have caused problems for you. But if I had to choose between security and compatibility for 1 billion software users vs an occasional image orientation bug, I'll choose the former, personally.
This is a disingenuous characterization of my argument. The image orientation bug was a simple example. Further, why can't there be some kind of compromise where security updates are automatically applied and feature updates are not (of course I understand that the line can get blurry)?
Lastly, in a philosophical sense, I don't want to cede control of my machine to a third party. Automatically updating apps removes the chance for me to consent to changes and puts me at the mercy of a third party. It removes my ability to make an informed choice.
And yes, you can't cordon off security updates from everything else. They accidentally break other things. But this is a general problem of software development.
If you're using a third-party's software, aren't you already consenting to their "control"? How much control do you have over someone else's software?
For open source you can read the source and decide to install or change. You can limit permissions by assigning to different user groups. You can disallow firewall access. You can choose to use selinux and have additional restrictions.
Automatically updating anything introduces risk.
My phone living on an older version of chrome is included in that number. It won't be updated.
You are looking at 20 million users at most, and most of those are just running a server. You'd be lucky if even 1% are affected. Forced updates don't really close that loop, and in the worst cases they break things for the more experienced computer users.
Not the greatest tradeoff.
Is it really "doing whatever it wants" or just updating itself? Doing whatever it wants seems like a much broader range of activities.
Crashing or freezing is a problem, but isn't all software susceptible to those bugs? What if Chrome was crashing or freezing on other people's computers and the latest update fixed it for them?
The problem with your preferred methodology of opting in to updates is that 90% of users won't do it, which leads to security and compatibility nightmares.
I get that an update can cause problems. But to say that all auto-updating is terrible and it just breaks things and software is doing "whatever it wants" in the background seems like an exaggeration and misses the larger benefits.
Microsoft itself ended up developing a somewhat more nuanced understanding of auto-updates than the current Canonical standard: business clients have various ways to override auto-updates. Surely Canonical can improve upon this standard rather than learning the same lessons the hard way.
And from a developer's point of view, I can attest that if I'm to package and publish something for Linux, I'd certainly use snap or Flatpak (or both) before deb or rpm, so long as they're not affecting me in some major negative way. I assure the reader this is not a minority opinion, and the software catalog is only going to grow because of it. This is our way out of PPAs and dependency problems (among other things) for publishing up-to-date, out-of-distro software. I have no problem with a particular set of users not wanting the latest version if they know what they're doing, but that doesn't negate the benefits this kind of system brings.
We've seen this play out with other new and needed Linux systems before and that's OK.
I think it's interesting that they are pushing snaps so hard on this LTS release, though. I always thought of the LTS versions as getting stuck with older versions and being very strict about updates not breaking things. Perhaps with snap sandboxing this should work more smoothly? Personally, as long as my system works I don't care. If the software gets updated, that's fine with me.
I installed VSCode from the .deb package on the VSCode website and it automatically added the update repository so that it auto updates via apt.
"Installing the .deb package will automatically install the apt repository and signing key to enable auto-updating using the system's package manager"
Discord doesn't, though.