I blew up my Ubuntu install and switched back to Debian. I haven't missed Ubuntu at all. I am resigned to the fact that if I care about a particular piece of software because it's the reason I use a computer (go, Emacs, Node, etc.) then I just have to maintain it myself. There simply isn't a good way right now. And you know what? It's fine. Everything is configured exactly the way I like, and it will never change unless I change it.
Now keep in mind we can't blame squashfs here. It was developed for use on NOR flash chips smaller than 16 MiB in embedded devices, likely connected over SPI - the underlying flash and interface are so unbelievably slow that no amount of compression or terrible kernel code would ever show up in any kind of benchmark. Using it on super fast desktop machines with storage that rivals RAM bandwidth and latency is just the opposite of what it was developed for.
The selected squashfs parameters are the speed issue.
Exactly. I recently had to replace Chromium with Google Chrome just because the Ubuntu maintainers thought it'd be a good idea to replace the Chromium apt package with an installer that pulls in the snap…
Good thing Firefox is my main browser and I only use Chromium/Chrome for testing (and whenever websites forget that there's more than just one browser to test for), otherwise I would have long ditched Ubuntu, too.
I've had this glyph issue for over a year. In chromium, Signal, some ebook app, and several other snaps.
I've tried many things. But gave up. Snap is not 'one layer of indirection' too much. It's hundreds of them. There's chroot, some VM, a virtual GNOME, containerisation, weird users and permissions. And so on.
This complexity made me conclude that snap is a bad solution (to a real problem). Not the glyph issue, but the fact that I cannot fix it, is, for me, the reason to conclude it has failed, or will fail.
It sets out to solve a couple of problems in an app-focused way. As with any packager that bundles dependencies, it introduces a few dozen more in the process.
Why this isn't more of a deal breaker I have no freaking idea.
There's more info here:
s/available to/allocated by/
All of the raging discussions in this thread would be totally absent if Canonical had taken the time to make apps installed with snaps fast. Unfortunately, these days, the "make it work, make it right, make it fast" mantra seems to stop at the "make it work". At least the 20.04 release seems to be at that stage.
Congratulations, you just got a bunch of users who are going to avoid updates even more because you are going to make everything slower with your shiny new release.
Genuinely curious, why would you not just keep using Arch?
I got sick of it after six years or so and moved to Linux Mint. (This was before Manjaro was widely visible.) Been on Mint ever since: it's a better Ubuntu than Ubuntu, and a better Windows than Windows (for ordinary uses).
Note that Mint 20, although based on Ubuntu 20.04, has removed snap from the base install. `apt install chromium-browser` takes you to a web page explaining why.
I will say, Linux Mint is my recommended Debian based distro for desktop use, it really is a better Ubuntu than Ubuntu.
And as someone living with OCD (the real thing) the whole Snap thing just makes me so anxious. And like you experienced, it is soooo slow. I cannot even go back to Windows because for some reason the wifi on my laptop does not work well with it.
I do not care about bleeding edge anymore, but I am a casual user mostly doing genetic research, so Debian Buster with Cinnamon is, to me, the best distro on Linux today.
Don’t do this to yourself! Debian is not going to give you bleeding edge. But there are plenty of distros that can. Despite being a meme, Arch Linux is one of the best distros available, and has been for years. Node and Go are usually updated within hours of upstream, while the core system remains stable. If you’re looking for something more modern, Solus has been gaining popularity and also has relatively up to date packages.
Debian is great for servers.
1. You're using the latest upstream, after it has passed through Arch's testing repository. This means you won't have to install software from somewhere else just because the repos are outdated. (Nightly builds of software are different.)
2. Sometimes you have to do a full system upgrade to install or fix something, and you don't have much say in when that happens. Typically you need to do this preemptively about once a week so you don't get interrupted by required manual intervention at a bad time. Yes, you will occasionally need manual intervention, but you won't spend much time on it. It's far less work than getting up-to-date software running on Debian stable.
Arch will give you small issues every now and then, but it gives you the tools to fix them and makes it easy to do things that are much more difficult on other distros.
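The weekly routine described above boils down to something like this (a sketch; `checkupdates` comes from the pacman-contrib package, and you should still glance at the Arch news page before big upgrades):

    checkupdates      # list pending updates without touching the live system
    sudo pacman -Syu  # full system upgrade; avoid partial -Sy upgrades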
I like what they are trying to do but it just did not work for me.
Arch is hard to figure out too, with its wiki-based docs, but at least it's advertised as unfriendly.
I see the value of file system and dependency isolation, but it shouldn't result in such significantly degraded performance.
Sadly I noticed a while ago that Windows began to outrun Ubuntu on my older machines; it doesn’t seem like Canonical really cares about performance anymore.
IIRC Shuttleworth took a more governance and less hands-on role after that, and Ubuntu started to focus on server and enterprise support.
Is there anything special about getting nix working well with other package managers? I'll be honest, the main thing keeping me from digging into it further are the docs and the syntax. I can never tell if I'm reading about nix-the-package-manager, nix-the-language, or nix-the-operating-system; and looking at the syntax makes me understand what most programmers feel when they look at lisp.
Sorry to hear you had issues on Ubuntu. It's hard to say how you might improve your experience without knowing more details. The nix forum is probably a good place to get support for that sort of thing.
It only clicked for me when I started using NixOS (on my primary laptop in my case) rather than just Nix.
I think the biggest challenge using NixOS vs Ubuntu is that if you've got some weird, obscure piece of software you need to get working, there's a better chance that someone has already figured that out on Ubuntu, and you might have to do the work to get it running on NixOS yourself.
On the other hand I've found contributing to Nix easier and less intimidating than contributing to Ubuntu. To add a package to Nix you just open a PR in the nixpkgs repo on github. I've found the community to be friendly and helpful.
I use a lot of LXD containers for when I just want to play around with something in a non-Nix environment.
Oh and I love being able to run `nix-shell -p <package>` to use a package without "installing" it.
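A tiny sketch of what that looks like in practice (`hello` and `jq` are just example package names from nixpkgs):

    nix-shell -p hello --run hello   # run a package once without installing it
    nix-shell -p python3 jq          # drop into a shell with both tools on PATH; gone when you exit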
The TL;DR is that Ansible is given a description for some part of the system, then squints at that part and tries to make it match the description. This means that it doesn't touch anything that you haven't described, and if you stop describing something it doesn't go away (unless you explicitly tell Ansible to remove it). So your Ansible configs end up unintentionally depending on the state of the system, and the state of your system depends on the Ansible configs you have applied in the past.
NixOS is logically much more like building a fresh VM image every time you apply the configuration. Anything not in the configuration is effectively gone (it is still on the filesystem, but the name is a cryptographic hash so no one can use it by accident). This makes the configs way more reproducible. It also means that I can apply a config to any system and end up with a functional replica that has no traces of the previous system (other than mutable state, which Nix doesn't really manage).
Nix has other advantages such as easy rollbacks (which is just a bit more convenient than checking out an old config and applying it manually) and the ability to have many versions of a library/config/package without conflicts or any special work required if you need that.
I wrote a blog post a while ago that tries to go a bit more into detail over what I just described https://kevincox.ca/2015/12/13/nixos-managed-system/
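To make the rollback point concrete, on NixOS it's roughly this (a sketch, run with root privileges):

    nixos-rebuild switch              # build and activate the current configuration
    nixos-rebuild switch --rollback   # switch back to the previous generation
    nix-env --list-generations --profile /nix/var/nix/profiles/system   # see what you can roll back to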
That they clobbered the apt install just to push Snap forward left a bad taste in my mouth.
I've had to install Chrome to try a thing or two (so not as a daily driver) and I haven't noticed anything weird during use. I've used JetBrains' PyCharm daily for a few months and no problem there either (although that is a "classic" snap, not sure if it matters).
I used Debian, and it seems to be gaining detractors from Ubuntu. My only question is... what made you switch and stick with Debian?
The rough edges I've found: no automatic updater or security updates in testing, so I just run apt update/upgrade once a day. An initial problem with the video driver not working because it requires non-free firmware; the solution is just to add Debian non-free (it would have been helpful if a warning had been given during installation). Firefox/Thunderbird are still on an old long-term-support release: I've been installing tarballs manually for now.
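For anyone hitting the same firmware problem, the fix amounts to something like this (the exact firmware package depends on your hardware, so treat the last line as an example):

    # /etc/apt/sources.list: add contrib and non-free to the existing entry
    deb http://deb.debian.org/debian testing main contrib non-free
    # then install the missing firmware package for your device
    sudo apt update && sudo apt install firmware-misc-nonfree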
What are the biggest drawbacks? I know you said you don't miss Ubuntu at all, but is there anything which is causing you pain because it works differently or is just missing?
And it's yet another way to do an end run around repositories; instead you will sooner or later get an app-store-like environment that can be controlled by some entity. These large companies should stop fucking around with Linux, it was fine the way it was. Just fix the bugs and leave the rest to the application developers.
There are two models.
The first model is the traditional distribution model. The distribution curates and integrates software, picking (generally) one version of everything and releasing it all together. Users do not get featureful software updates except when they upgrade to a new distribution release - all at once.
The downside of this model is that developers who want to ship their new software or feature updates without waiting for a new distribution release get stuck in dependency hell and have to operate outside the normal packaging system. Same for users who want to consume this. Third party apt repositories and similar efforts are all fundamentally hacks and generally all end up breaking users' systems sooner or later. Often this is discovered only on a subsequent distribution upgrade, and users are unable to attribute distribution upgrade failures to the hacked third party software installation they did months or years ago.
The second model is the bundled model. Ship all the dependencies bundled in the software itself. That's what Snaps (and Flatpaks, and AppImage) do - same as iOS and Android. This allows one build to work on all distributions and distribution releases. They can be installed and removed without leaving cruft or a broken system behind. They allow security sandboxing.
All your objections seem to be criticizing the bundled model itself, rather than anything about Snaps themselves except that they use the bundled model.
If you don't like the bundled model, then you can carry on using Ubuntu 20.04 without Snaps just fine.
> If you don't like the bundled model, then you can carry on using Ubuntu 20.04 without Snaps just fine.
No, you can't. Ubuntu has started replacing apt packages with snaps. So if you want to install those packages (such as Chromium) you now have to use the snap.
New distribution releases always add and remove packages. Chromium has been removed (as a deb). It is no longer packaged as a deb because it's a rolling release upstream and it was too much work to backport a featureful rolling release into debs.
The same is also somewhat true if first-class tools such as Software (as in the "store" GUI) begin to push snap packages as the first thing, because then you will, by default, need to go through additional steps to find the packages that you want, and which used to be available as the first choice.
You can still do things, of course, but it might be that you need to start getting around the snap-centric design choices more often, and at that point it doesn't come at no usability cost to you anymore.
I don't actually use Ubuntu at the moment, so I don't know how much that is the case now, but if it is (or if it seems like it's going that way), I understand the frustration.
Well, if you don't like it, you can always fork it. That's the beauty of open source. /sarcasm
Seriously though, all out replacing stuff with Snaps doesn't seem like the right move.
Considering that (as other posters have mentioned) deb packages are released by the vendor in this case it feels like a flimsy excuse.
You can almost always install via deb if you want instead.
Slowly boiling the frog is really effective.
I'd wager they will require deb packages to be signed with a certain Canonical key that they will use strictly for basic system packages. Everything else will be a sandboxed Snap, left to third-parties to maintain, distributed through a Canonical Appstore that enables payments.
Maybe they will give you an option to "root" the system, and if you use it you'll lose any right to support or updates from Canonical.
Snap is fundamentally a commercial play to reduce support costs and enable an appstore.
I really don't see what benefit that would provide them over other distros, though, and I'm not sure why they would choose to close down their system.
Fundamentally Linux works so well because of the free software movement and I don't see any app maintainers choosing to charge a fee for their software if they aren't already.
If anything can go away, it's Ubuntu desktop.
Snap may stay, since it is the de facto standard in the IoT world.
The fourth model: multiple versions of dependencies live on the same system, with an environment constructed for each application. Deduplication works.
The bundled model has its value (just like VMs), but it really only shines when the native package management is inadequate. If you don't like the bundled model, switch to Arch Linux or NixOS.
So different systems switch between different methods as their perceived value of the different use cases changes, which upsets people who weigh the tradeoffs differently.
If you have the perfect solution I'm all ears.
I experienced this lately when my Rust-compiled binary used too modern a libc version for the aging Docker container environment we used for deployment, which forced me to use another Docker container for local development -- which obviously isn't ideal and removes the 100% reproducibility promise.
The first is pre-internet, because updates cost money, and because pre-internet, security issues weren't common.
The second is now: change one thing every day, always run the latest code, with automated testing to make the latest code always work. It also means you don't need separate security and feature branches.
People hate change, and linux people have the ability to say no and do their own thing.
Python has easy_install, pip, anaconda and wheel, plus virtualenv to isolate packages. Node has npm and yarn (is it resolved?). Ruby's gem defines versions in code, bundler in Gemfile and Gemfile.lock; vendor/, rvm gemsets, and BUNDLE_PATH isolate packages. Even developers can't find the right answer.
Because it matters?
Package management is the main differentiator, I love pacman, I love PKGBUILD and makepkg.
Ubuntu is more stable in theory, but I've encountered broken packages (like hex editors with an incorrect hard-coded temp path, causing it to be unable to overwrite files).
1. query package info with pacman:
$ pacman -Qi glibc
Depends On : linux-api-headers>=4.10 tzdata filesystem
2. install with `pacman -U linux-4.15.8-1-x86_64.pkg.tar.xz`
3. keep a package from being upgraded with IgnorePkg in /etc/pacman.conf
I had problems both with Ubuntu and Arch Linux updates. At least I can fix Arch Linux issues, Ubuntu felt broken.
Ukuu worked well for me to get newer kernels without issue, but kvm/virtualbox were totally borked, just as my hardware support was stable.
While it's true you technically can, at some point you have to ask yourself why you're going against the grain instead of picking a different distro (or not upgrading to 20.04, at least). Ubuntu wants you to use snaps. You can sidestep this, but you should ask yourself if you shouldn't just switch distros.
I just recently upgraded from Ubuntu 16.04 to 18.04. I don't see the reason to upgrade to 20.04 as long as the software I need works and I keep getting security fixes. Once this LTS gets EOL'd, I'll see what the current deal is with Ubuntu and seriously consider switching to another distro.
I regret not taking the plunge and installing Manjaro against the will of our IT during onboarding (they don't forbid it, they simply can't promise good support if you don't install Ubuntu). I am sure I could have found a way to install the 2-3 corporate VPN / spy / monitoring agents my employer requires on the machines they issue for employees.
But anything I've needed of Manjaro, I always got. Granted, that's anecdotal evidence, obviously -- I haven't tried running games, for example.
Not really. The platform, Ubuntu in this case, has clearly signaled that they want to move from debs to snaps. And apart from the technical benefits of snaps (sandboxing) you also get the drawbacks (they are slow, updates are forced, and there is only one source for them). What you propose is fighting the platform you are staying on, and that's not a great place to be. Far better to move to platform more aligned with your choices as a user.
P.S.: If anyone is looking for quick and dirty suggestions, there's pop!_OS which is quite close to Ubuntu so there won't be many changes to the user experience. In my case it was Fedora. The linux ecosystem is diverse enough to offer many choices.
Servers are all deb, which is the main source of their income. If anything will go away, it's Ubuntu Desktop.
Snap may stay, since it is the standard in the IoT world.
When I was writing apps for distros, I had the opposite problem. Every single release of GNOME and Ubuntu would break PyGTK in some way, so my software, which used zero new features in the OS, would require at least some modifications and force me to maintain many versions. Finally, Gtk changed something that broke things in a fundamental way (something in Gtk TreeView, I forget what exactly) and I simply gave up maintaining the software rather than suffer through figuring out how to rewrite everything again.
This blog post and many of the comments are claiming that you can't just use Ubuntu without snaps anymore, as they are being forced upon users.
Disclaimer: I never used snap and don't use Ubuntu.
Flatpak, as well as snaps through classic confinement, allows the developers to "escape" the sandbox because they know that they don't have all the permissions required to provide feature parity with deb/rpm packages. Another reason this is needed is that application developers are not writing their applications with flatpak compatibility in mind. However, flatpak is going in the right direction.
Mobile operating systems have proven the value of sandboxing apps.
> entirely voluntary
Fedora Silverblue would like to disagree. And in any case all parties know it's still in flux, but the stable parts are stable in my experience. I would not want something like Snap or otherwise immature forced down my throat.
Note: I'm not making a value judgement about flatpak's sandboxing, merely describing it to the best of my knowledge.
The people configuring the sandbox should be packagers that you trust. The upstream developers might provide some recommendations to the packagers, but if it's obvious that an application shouldn't need a permission, it shouldn't have it.
But permissions can't save you from an actually malicious app. Constrain it from accessing the camera and it will still be using your device to host pornography. Constrain it from accessing the filesystem and it will still run up your electric bill mining bitcoin. You either need to trust the developer or you need to get the app through someone you trust to have audited it for you.
I’ve not been impressed at all.
Occasionally I need to replug my headset in at the beginning of the meeting to get my voice audio working, but I'm not sure if that's actually an OS issue or not. Either way, it's never taken more than a couple seconds.
Visual Studio Code is a regular package, just because its use case is not really suited for a sandbox (access to system binaries, libraries, etc.), but I honestly haven't even tested its Flatpak version.
The other day I pushed an update to a flatpak-packaged app, and guess what, it's available to all Linux users. Packaging has become incredibly easier for third parties with these kinds of technologies.
I also am a strong believer that the future for sane desktop PCs is that every program (except the most fundamental core services) of a desktop OS should be sandboxed by default, with basically no permission to access any local files or communicate with any other program/service.
It would need some MAJOR changes, slowly, step by step. And I had hoped that with snap & flatpak we would be slowly transitioning there. But it doesn't really look that way anymore, to be honest.
(PS: It can be done with a reasonable UX, without requiring the user to configure some magic access rules or anything, but it's tricky to get right and won't be fully backward compatible. Often the changes just need to go into the GUI toolkit (Qt, GTK), so it should be possible.)
Linux has had some of the best software security through MAC (SELinux, Apparmor) and cgroups. The problem is that there is no culture in free software to actually write specifications at the point of development, it generally falls on distros / maintainers to try to sort out MAC profiles or cgroups restrictions on a per-package basis.
That is why the packagers largely went with the Snap/Flatpak route of saying "screw doing the grunt work, here's a total sandbox with all the libraries built in."
It would be great if we could convince the whole ecosystem to start provisioning access specifications for libraries and binaries so upstream could start building apparmor / selinux profiles from provided files rather than having to do learning mode auditing that drove distro maintainers to not even bother.
For instance: the Pinebook Pro just got gles3 support via upstream mesa, but all the flatpaks with mesa haven't or won't update with any alacrity.
Users are left having to abandon sandboxing in order to get necessary updates.
As one example, I was surprised to find that the tor browser doesn't have an arm64 build :o
Thus far, it's my favourite laptop purchase ever. Beats the feeling I got with the IBM X series or the Ti Book. It's light, zippy enough, great Linux hardware support, and totally silent.
Still, it's not just graphics drivers: there are the oft-cited security patches, but also features for user-facing interfaces; i.e., an update to a URL parsing library might add additional codepage support transparently to an app that uses it.
Might be nice for hardcore fans, but good luck supporting anything there.
If the sandboxed app package model is what you desire then there's already a great and popular Linux distro for you: Android.
Nothing about "based on the AUR". Because it is a liability. It is insecure; you have to check the PKGBUILD on each install and update.
It was easier to build it and not use flatpak.
The number one and two attack vectors have always been tricking users into installing malware and attacking old insecure software.
Distributing virtually all software via app stores substantially solves acquiring safe software and ensuring it receives updates.
Defense in depth is virtuous, but Linux is already more secure than Windows in the ways that actually count, and unlike MS they are actually positioned to sandbox software in the future because it's all already mostly coming from app stores.
How are you going to sandbox graphical apps without knowing about (and having capabilities around) the system by which a containerized app would communicate with your OS’s graphics subsystem?
I mean, if you’re not going to run any X11-client graphical apps, it should probably be optional to have an X11 server installed; but either way, you’ll need the X11 wire-protocol libraries (“xorg-common” in most package repos) for the sandbox to link in.
Wayland is the way forward.
> you’ll need the X11 wire-protocol libraries (“xorg-common” in most package repos) for the sandbox to link in.
Today, runtimes do contain client X11 libs. However, nothing in flatpak requires it, and it is possible to phase them out in future releases of the runtimes.
Wayland has been "The Way Forward(tm)" for 10 years now.
That may be. But nobody in RedHat/Canonical/etc. believes that enough to put sufficient manpower on it to make it true.
It is the default display server in RHEL 8. If that is not believing in it enough, I don't even want to know what would be sufficient to prove otherwise.
Linux is one of the most secure platforms to run web applications on, however, because more man hours than I can comprehend were spent hardening that use case.
All of those hardening measures can transfer over to the Linux desktop use case.
For example, seccomp, cgroups and MAC can all be used to harden a Linux server, and they can also be used to harden the Linux desktop. It's just that no one has thrown the same billions of dollars at desktop Linux that were thrown at solving web application security.
If you really wanted to, you could run a lot of your software in unprivileged containers secured with seccomp.
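As a rough sketch of what stock tools already offer (this is not what Snap does under the hood, the exact set of options an app will tolerate varies, and /usr/bin/some-app is a placeholder):

    # run a command as a throwaway unprivileged user in a locked-down transient unit;
    # SystemCallFilter is the seccomp part, the rest are namespace/privilege knobs
    sudo systemd-run --pty \
      -p DynamicUser=yes \
      -p PrivateTmp=yes \
      -p ProtectHome=yes \
      -p NoNewPrivileges=yes \
      -p SystemCallFilter=@system-service \
      /usr/bin/some-app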
We've come full circle, because Snap does run software in unprivileged containers.
You can do very similar, but if it's a gui app, or has specific system dependencies there will be issues to work around.
I actually think that’s exactly what we should do. Containers are an over-engineered solution for a problem that never needed to exist.
Dynamic linking doesn’t work unless you can live inside a distro maintainer’s special bubble for all your software. If you can exist in that bubble, great—I really like Debian for certain use-cases—but if you can't, the benefits of dynamic linking everything are clearly outweighed by the drawbacks.
Good luck patching that security vulnerability in all those static binaries without proper dependency tracking ;). Not that I am on a particular side of the fence, both have their downsides.
To me the problem is package managers from the '90s that use a single global namespace, only allow UID 0 to install packages, and do not really provide reusable components.
Modern packaging systems like Nix and Guix allow users to install packages. Packages are non-conflicting, since they do not use a global namespace (so, you can have multiple versions or different build options). They provide a language and library that allows third-parties to define their own packages.
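A quick illustration of the "no UID 0 required" point (a sketch; `hello` is just an example attribute from nixpkgs):

    nix-env -iA nixpkgs.hello    # installs into the user's own profile, no root needed
    nix-env --list-generations   # every change is a new generation
    nix-env --rollback           # undo the last change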
Not to say that they are the final say in packaging, but there is clearly a lot of room for innovation.
Snap and Flatpak are copying the packaging model of macOS, iOS, and Android. This is a perfectly legitimate approach (and IMO the execution of Flatpak is far better). But it is not for everyone -- e.g. if you prefer a more traditional Unix environment.
The big one, which surprisingly places still manage to fumble due to poor process controls or simple mistakes, is that you have to restart all running processes that use the library after you update it.
I actually prefer to deploy static builds of critical services for this reason, because you already have to know that you're running version 1 build 5 everywhere -- and if everything is build 5, then they all have the fix. You don't also have to check if the process was started after May 5th.
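For the dynamic case, a rough way to spot processes still running the old code (noisy: it matches any deleted mapping, not just libraries) is to look for "(deleted)" entries in their maps:

    sudo grep -l '(deleted)' /proc/[0-9]*/maps 2>/dev/null | cut -d/ -f3 | sort -un   # PIDs still mapping deleted files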
Well, there is nothing that says you can't have proper dependency tracking just because something is statically linked. The infrastructure isn't currently there [x], but it definitely is something that languages and language package managers could work out with each other and provide.
[x] But can be built, now that more and more languages have access to language package managers with proper dependency tracking. One way would be to create a standard for how to query a binary for what it depends on. Then a computer could have a central database of the dependencies of static binaries that is installed.
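Some ecosystems already do a version of this, which is part of why I think the central database idea is buildable; Go, for example, embeds module metadata in every binary it compiles (the path below is a placeholder):

    go version -m /path/to/some-go-binary   # prints the module and dependency versions baked into the binary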
Didn't say so. It is just easier with dynamic linking, because you can see what libraries (and versions) a binary is linked against.
> But can be built, now that more and more languages have access to language package managers with proper dependency tracking.
Actually, approaches such as Nix's buildRustCrate (where every transitive crate dependency is represented as a Nix derivation) + declarative package management offer this today.
But with curl | bash or traditional package managers, which are most widely used today, this kind of dependency tracking is hard/ad hoc.
But then a static C library is used and nobody knows where it came from. Even if you look at the Rust ecosystem, which generally does things well when it comes to dependency handling, crates are all over the place when it comes to native libraries. I have seen everything from crates that use a system library (or something discoverable via pkg-config), via crates that have the library sources as a git submodule and build them as part of the build-script, to crates that download precompiled library from some shared Dropbox link.
Another fun example from another language ecosystem: numpy uses OpenBLAS. They compile their binary wheels on CI. However, OpenBLAS itself is retrieved as a precompiled binary from another project. And the rabbit hole goes deeper: when OpenBLAS is built for macOS, a precompiled disk image is retrieved from yet another repository. This disk image is added to that repository, but comes from yet another place.
This is all sort of the opposite of the lessons to take from Reflections on Trusting Trust and the bootstrapping that the Guix folks try to do.
Anyway, with the mindset that most developers have, we will never have proper dependency tracking.
A few of these could be solved (esp. the PAM and GPU ones) by making the whole thing work over IPC. OpenGL is already a pain to get working in generic containers.
Why couldn't this work for a Linux-based OS? Honest question.
In Linux, there is no such authority and therefore no sharp line separating 'core OS' and 'external library'. It is just a conglomerate of the Linux kernel and independently developed tools and libraries (where each of them is more or less optional).
Some devs flat out refuse (and I am not debating if it's right or wrong) to package their app for every distro (even major ones: Ubuntu, Debian, Arch, RHEL/Fedora), so it's up to distro maintainers to package them, and users are always at the end of a line of other people packaging the apps (either through distro packages or sandboxed one-click installers).
My words were too strong, and user CJefferson https://news.ycombinator.com/item?id=24384206 is right to call me out on that.
I agree, sorry about that. Poor choice of words. I had a very specific example in mind but there's obviously a whole gamut of reasons for not packaging. My position is that we can't, nor should we, expect devs to package their apps for the distro we use. Also, it's not like app devs and distro/OS devs/maintainers are living in hermetic boxes and their code/apps never interact or evolve. Not editing it out, for context.
I tried packaging for Debian once and after 2 days I gave up -- I have neither the time nor the patience to do work for free for distros I don't use.
If you haven't seen it, I highly recommend looking at fpm for packaging. Unless you're doing something weird or need an obscure format, it is the tool you want.
I have been thinking about that. I disagree. It's always nitpicking o'clock on HN, and I specifically wrote "Some" and specifically wrote in brackets that I wasn't debating whether it's right or wrong. I agree that it should have been worded differently, but the facts remain, and it doesn't follow that there's “more than a whiff of entitlement“.
Not sure if you were commenting on the whole approach or just snap, but FTR, flatpak uses shared base layers which can be updated individually, so there's still an upside to dynamic linking.
Of course, half the point of containers is to “vendor” your dependencies — a container-image is the output of a release-management process. So the symbolic reference part of dynamic linking is an undesired goal here: the container is supposed to reference a specific version of its libraries.
But that reference can be just a reference. There’s nothing stopping container-images from being just the app, plus a deterministic formula for hard-linking together the rest of the environment that the app is going to run in, from a content-addressable store/cache that lives on the host.
With a design like this, you’d still only have one libimagemagick.so.6.0.1 on your system (or whatever), just hard-linked under a bunch of different chroots; and so all the containers that wanted to load that library at runtime, would be sharing their mmap(2) regions from the single on-disk copy of the file.
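A toy sketch of that formula (purely illustrative; the store layout and paths are made up, and hard links require everything to live on one filesystem):

    store=/var/lib/cas
    lib=libimagemagick.so.6.0.1
    hash=$(sha256sum "/usr/lib/$lib" | cut -d' ' -f1)
    install -D "/usr/lib/$lib" "$store/$hash"                    # populate the content-addressable store once
    mkdir -p /containers/app-a/rootfs/usr/lib
    ln "$store/$hash" "/containers/app-a/rootfs/usr/lib/$lib"    # hard-link it into each container's root
    # every chroot linking the same hash shares one on-disk copy, so the kernel
    # shares the mmap'd pages across all of them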
The primary issue with this approach is that if every program only sees its own version of the library anyway, there's no incentive to coordinate around library versions - you end up with tons of versions of everything anyway, maybe not one for every application but close to it.
Oh, I know :)
> there’s no incentive to coordinate
True, but it potentially works out anyway, for several reasons that end up covering most libraries:
• libraries that just don't change very often are going to be “coordinated on” by default.
• on large container hosts, the most common libs are not app-layer libs, but rather base-layer libs, e.g. libc, libm, libresolv, ncurses, libpam, etc. These are going to be common to anything that uses the same base image (e.g. Ubuntu 20.04). Although these do receive bug-fix updates, those updates will end up as updates to the base-layer image, which will in turn cause the downstream container-images to be rebuilt under many container hosts.
• Homogenous workloads! Right now, due to software-design choices, many container orchestrators won’t ensure library-sharing even between multiple running instances of the same container-image. We could fix this issue without fixing the rest of this, but designing a container-orchestrator architecture around DLL-sharing generally, would also coincidentally solve this specific instance of it.
Simple example. App A wants tensorflow 1.10, CUDA 8, and python 3.7. App B wants tensorflow 2.2, CUDA 10, and python 3.8. You want App A and B installed at the same time but the two versions of tensorflow are neither forward nor backward compatible. The two pythons will fight with each other for who gets to be "python3". How do you deal with this without containerization?
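For the Python/TensorFlow half of that, the classic non-container answer is one environment per app. A sketch only: the version pins are illustrative, app_a/app_b are placeholders, and CUDA still has to be solved separately, which is exactly where this stops scaling:

    python3.7 -m venv /opt/app-a && /opt/app-a/bin/pip install 'tensorflow==1.10.*'
    python3.8 -m venv /opt/app-b && /opt/app-b/bin/pip install 'tensorflow==2.2.*'
    /opt/app-a/bin/python -m app_a   # each app runs through its own interpreter,
    /opt/app-b/bin/python -m app_b   # so nothing fights over the system "python3"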
I don't think it violates the principles of open source at all, it's just making sure each application gets the exact versions of libraries it wants without messing up the rest of your system.
And that's the exact problem. Instead of solving it in the proper way you end up with kludge-on-kludge to paper this over.
Backwards compatibility is a great good, you let it go only if you absolutely have to rather than because you can upgrade stuff so easily.
> The two pythons will fight with each other for who gets to be "python3"
An even clearer case: obviously Python 3.8 should be backwards compatible with 3.7.
I think so too, but HN downvoted me to oblivion the last time I advocated for that. That's part of the problem, I guess, is that the dev community doesn't actually agree with 3.8 being backwards compatible with 3.7.
My understanding of semantic versioning is that:
- (x+1).0 and (x).0 don't necessarily need to be able to run code written for the other
- 3.(x-1) doesn't need to be able to run code written for 3.(x)
- 3.(x+1) should always run code written for 3.(x)
Hence, you should be able to point "python3" at the latest subversion of 3 that is available, continually upgrade from 3.6 to 3.7 to 3.8, and as long as you have a higher sub version of 3, you shouldn't break any code that is also written for an earlier subversion of 3. That's why it is supposed to be okay to have them all symlinked to "python3". If a package install candidate thinks the currently running "python3" isn't recent enough for the feature set it needs, it can request the dependency manager upgrade "python3" to the latest 3.(x+n) with the understanding of not breaking any other code on the machine.
Unfortunately that isn't true between 3.7 and 3.8. There are lots of cases where upgrading to 3.8 will break packages and that violates semantic versioning.
...irrelevant, I'm afraid.
Python doesn't use semantic versioning, so you can't really expect them to follow it. As GP insinuated, if you just pretend that 3.7 is 37, and 3.8 is 38, you'll pretty much be able to apply semver thinking, though.
Right, so because Python doesn't cooperate, we end up needing containerization, which is what I was trying to explain in GGGGP. Because apt will upgrade 3.7 to 3.8 and unfortunately break anything that was written for 3.7 (and vice versa).
An app needs to be able to say "I'm ok with python3>=3.7" and be fine if it gets 3.8, 3.9, or 3.20, if we want to be able to run it without a container. (And likewise for all its other dependencies besides python)
What's wrong with that? It's perfectly okay to say "I don't care".
I do release all my code under MIT because I care about attribution. I don't mind if people want to use it in commercial or closed source applications, nor if they want to modify it somehow.
I distribute code because that makes _me_ happy, not because I want to share an ideological statement about how others should distribute their code (or not).
The problem is rather, what it means in the long run. The point is, that this kind of only caring about for example attribution makes the ecosystem exploitable. It does not uphold ideals or enforce principles. Without upholding ideals and enforcing principles, how do we expect our principles to be followed in the future? If there is no legal obligation to do anything, which capitalistic (We need to maximize our profit! Ethical principles? Nah, come on ...) big company is willing to go the extra mile to respect the principles of some open source community, perhaps even at the expense of making more profit from a closed source solution? And I mean going the extra mile, without seeing it as an opportunity to use the very action as another means of promoting oneself. Simply going the extra mile, because it is the fair thing to do.
As I see it, as long as there is a chance to deviate from following the principles (no copyleft), someone somewhere will do so. Heck, even with such obligation to adhere to principles some people will deviate from the path. The tendency is always towards the wrong direction, if we do not enforce our principles of openness and such. It is an uphill battle. The whole ecosystem goes towards a not so open direction, by these initially small "missteps".
Especially when a big company with a lot of developers takes stuff and makes proprietary stuff out of it as its product, which usually initially has more functionality than its open source counterpart, users will quickly switch to the non-open proprietary version. They do so because they want that new shiny functionality immediately. The slightest inconvenience is sufficient to drive many users towards proprietary software. They do not know, nor do they often initially care, that they are using a non-open, non-free thing. Until there are enough users to create a bubble in which the open source ideas no longer exist. Then, however, the network effects are already strong. "But all my friends use X. No one uses Y. I cannot convince them all to switch from X to Y!"
Example: There are loaaads of at least open source (and some also free software) messengers out there. All people need to do is to use them. But the network effect and features like integration with (a)social media are so convenient for them, that they give up on their freedoms and use things like Whatsapp or Facebook.
MIT is beautiful in its concision, and reasonably reflects the "use however you want but don't blame me" legalese I used to custom craft before I found it.
Man, I get this now, esp. with AWS services and everyone recommending how "easy" it is to do x with y service, and why we should use it too and how it will magically solve all the problems… and I'm like: "no, I'm going to use this open source software that we'll have the code for and be able to tweak to do whatever we want and see how it all works (oh, and it's free to use), unless you are willing to hack around all the edge cases with y service yourself without me getting involved at all" and then that usually works, though I suspect once I leave, the costs of running infrastructure are going to go through the roof and no one else is going to have any clue as to why that's so (but more likely, they'll think it's impossible to have it any other way than paying for y)…
Moments like these are great opportunities for folks that just don't accept "the non-open proprietary" by default, but it's only an opportunity because most choose to accept "the non-open proprietary" by default… we all have to pay for the choices we make… some just want to pay more to not have to think about things… tradeoffs.
Honestly, it really depends on which stage of life your company is at, and the resources you can allocate to infrastructure work.
At the very beginning of my company, we did exactly what you mentioned here.
- Pay for a managed NAT gateway? No thanks, I can do the NAT myself with iptables on a cheaper EC2 instance (roughly the sketch below the list)
- Pay for a managed NFS? No thanks I can do it myself
- Pay for managed VPN? No thanks I can setup IPsec myself
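The NAT one, sketched (the interface name and instance id are placeholders, and you also need a route table entry pointing at the instance):

    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check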
With time though, as the company starts to gain money, and the number of users increase, we switched back to more managed services. The key here is that you want to refocus your infrastructures efforts on more business centric issues.
Also, most of the time, the effort spent to maintain a service grows exponentially with its scale. NFS is a good example here. Setting up a few NFS shares for 5-10 users is fine. Once you get 20+ NFS users, you're better off focusing on your real company product rather than spending months and money maintaining NFS yourself.
Though, saying this, even at a small-corp level there are still affordable proprietary solutions out there (not necessarily in AWS), but most people trend towards what's trendy…
This pulls users from the open source projects, and since contributions do not flow back to the open source project, it can quickly become obsolete in the eyes of most users. The principles of open source will live on in that open source project, which in the end (exaggerating) "no one uses", and it will become pointless. Most users are no longer protected by these great ideas or principles, because they will be sucked into the closed source swamp, because all their friends are there already.
I'm curious - are there any Linux distributions that contain nothing but fully static, self-contained binary executables ? As in, ships with no libraries ?
It's about getting dependencies with the app. I can't tell you the number of times I borked my OS install because I wanted a version of a single application with a feature newer than the year-or-two-old version in my distro's repository.
It helps to have both as options.
Clearly they are looking for a way to put some kind of proprietary dependency in Linux by propagating snap to other distros so they can then milk it for cash (e.g. charging a fee for access to the snap store to big publishers like Microsoft/Google). I don't think they realise the mainstream Linux users will hate it for exactly that reason.
Canonical has 443 employees according to its Wikipedia page. Is that large in this context? I don't really think so. Red Hat (13k employees) is large. Canonical isn't.
But perhaps base container images on scratch, not on ubuntu:latest ;)
There's a certain subset of developers who are against dynamic linking altogether, and they do have some convincing arguments that are worth reading.
I don't necessarily agree with them, but their arguments are worth acknowledging.
I do think that's why the distinction between "free software" (copyleft) and merely "open-source" matters.
If you look historically, "open-source" became a thing as a reaction to free software: it preserves the most visible benefits (source code in the open, modifiable by others) but treats these as purely a convenient workflow when working on code, whereas free software is more of a philosophy and so less likely to erode on principles.
Free software doesn't need to be copyleft, though. The MIT license, for example, is a free software license, even though it's not copyleft. Projects such as the various flavours of BSD can have pretty strong principles regarding their software distributions remaining free even though they don't prefer copyleft.
That is true, however speaking to many members of the FreeBSD community in particular, there seems to be a strong sentiment of this simply being a practical model of development, rather than a strong ideological stance. In fact a large portion seems to be rocking Macs, "cause it's BSD anyway", which to me does not seem particularly principled.
In fact they seem to take pride in completely closed systems being based on FreeBSD, like the PS4, Nintendo Switch etc.
In practice though, they don’t matter for a vast majority of users, and package managers are far less hassle. Maybe it’s time for the principles to change?