It uses rpm-ostree as a basis "to bring together a hybrid of image-like upgrade features (reliable replication, atomicity), with package-like flexibility (introspecting trees to find package sets, package layering, partial live updates)".
rpm-ostree itself is a layer above OSTree, which describes itself as a "git for operating system binaries" - "OSTree is a tool for managing bootable, immutable, versioned filesystem trees."
Yes, it can run Docker images; but so can CoreOS or any normal distribution, and they are not based on rpm-ostree at all.
There's currently an effort to make the command line utilities easier to use. There's also an effort to bring PackageKit to NixOS, which should make a few GUI package managers work.
I'm downloading the live image to have a look/play
In Nix, if you recompile the same version of a lib with new options, it gets a new path in the Nix store, whereas in GoboLinux it replaces the existing one.
I think there is an option for the GoboLinux Compile tool to behave more like Nix, but I have not explored its behavior much.
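As a rough illustration (a sketch assuming a machine with Nix installed; `hello` is just an example package, and the store hashes are placeholders), rebuilding the same version with different options produces a second, independent store path rather than replacing the first:

```shell
# Build stock GNU hello from nixpkgs:
nix-build '<nixpkgs>' -A hello
# -> /nix/store/<hash1>-hello-2.10

# Build the same version with an extra configure flag; the changed
# inputs hash to a different store path, so both results coexist:
nix-build -E 'with import <nixpkgs> {};
  hello.overrideAttrs (old: { configureFlags = [ "--disable-nls" ]; })'
# -> /nix/store/<hash2>-hello-2.10
```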
As a Schemer and GNU person it was easy for me to pick Guix. I've been contributing to Guix for a few years now. I find the tooling of Guix to be more accessible and more hackable.
In my experience packaging Java software for Guix, Guix is a little more principled; Nix will more readily accept binaries for bootstrapping (e.g. prebuilt jars).
Considering how different functional package management is from the alternatives, I think novices could pick either one; once they are used to functional package management, they can make a better-informed decision about switching to the other implementation.
For bioinformatics software, though, Guix is far ahead. It's used at our bioinfo institute and in big projects like GeneNetwork.
However, it is very simple to customise packages, including the kernel package, e.g. to apply patches, use different sources, or to exercise your right to disagree with the Linux libre upstream on what blobs should be deleted from the kernel.
That said, I consider freedom by default a feature and it works very well on most of the hardware I use (an exception is an on-board Radeon graphics chip in a desktop machine I don't use much).
Creating package variants is almost trivial; it's certainly no harder than, say, customising Emacs. Guix blurs the lines between user and maintainer, so using custom package definitions is a supported use-case. At work even our scientist users create custom packages in case they are not available in Guix upstream yet.
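For a flavour of how lightweight this is, Guix's package transformation options let you build a variant straight from the command line, without editing any system-wide definition (a sketch; `hello` and `guile-next` are just example package names):

```shell
# Build 'hello' from a locally patched tarball instead of the
# upstream source:
guix build hello --with-source=./hello-2.10-patched.tar.gz

# Rebuild a package against a different implementation of one of
# its inputs:
guix build guile-json --with-input=guile=guile-next
```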
> The promise: Sandboxing makes you immune to bad ISVs. The reality: It protects you from some things and not others.
Well yeah, but the same could be said of maintainers. Maintainers let things through all the time, and sometimes they cause problems (hello, Debian weak keys).
The reality is a bit complicated, and it boils down to something boring like: if the maintainers are better than upstream, maintainers are good, and if they are worse than upstream, then they are bad. But that is both tautological and vacuous, so it is not an especially useful insight.
The real insight is this: there just aren't enough maintainers to go around. Debian has 1500 contributors and 57,000 packages, which is 38x as many packages as people. Now maintainers have a lot of tooling to make that more tractable, and upstream developers have their time split between multiple packages too, and some packages are important enough to get the full-time attention of several maintainers. But do we really believe 1/38th of a person is judiciously considering how to overrule upstream's decisions, attentively investigating user bug reports, and so on?
More likely, the maintainers do not even claim to be doing that, because they have not packaged the software at all. I use lots of unpackaged software; until recently nobody was packaging Chromium for example. Java packaging is really unreliable. And so on.
Ultimately, Ubuntu Snap exists because there is a lot of unpackaged software, and a lot of software packaged poorly. It would be nice if we could wave a magic wand and get 38x more maintainers, but we cannot.
They also catch things. OpenBSD are particularly good at this, and Debian have a pretty good track record (exceptions noted) as well.
The most critical aspect of the package maintainer is that they don't have a horse in the race -- they're not representing the interests of the software developer, but of the users (or at least the OS). It's an additional step of independent review, a fresh set of eyes, and a set which operates outside the disciplinary scope of the developers, at least in theory. As Celine's Second Law (via Robert Anton Wilson) states: the truth can only emerge in a non-punishing situation -- and that's the situation an independent maintainer is in.
Yes, shoving square pegs into round holes gets problematic, non-free software doesn't package well, and there may be delays. But independent software packaging done right adds tremendous value.
It also avoids a huge range of costs: the annoyance, security, privacy, and surveillance deadweight losses of proprietary ISV packaging. I remember seeing this in the Microsoft world in the early 2000s, and being simply staggered at how bad the situation was. And yet precisely the same dynamics, with the same inevitable consequences, are now infesting the mobile world. Android is all but unusable as a consequence, and Apple have announced that virtually all pre-loaded software will be removable in the next generation of iOS.
It's the position of at least some Linux distros, most notably Debian (and many derivatives, at least in part) that the interests of the users come first, as expressed specifically in the Debian Social Contract, Debian Free Software Guidelines, and Debian Policy, and which are technically supported through packaging systems, bugtracking, and updates, which are the real secret sauce.
That's a lesson the technology community seems not to have learnt.
> until recently nobody was packaging Chromium for example.
To name one example, SlackBuilds.org has had Chromium since 2010, although admittedly that's not so long ago as the Debian weak key cockup, which was 2006.
Maybe your examples could use an upgrade to the latest stable version.
I don't mean anything personal by it. If I was maintaining 38 packages on my nights and weekends I'd do a bad job too.
But examples don't go out of date unless you can point to some force that took 30k unmaintainable packages and turned them into 50k maintainable ones. What specific advance do you believe improved the art of software maintenance by an order of magnitude?
But if you want to talk recent examples, we could talk about how nobody's packaging Swift.
Arch Linux packaging files have source history going back to then as well.
Chromium is also an example of a package that took a long time to appear in other distros, because it's really written with the app mindset. It copies and incompatibly customizes most of its dependencies. No one would consider that a great idea for ongoing maintenance and security patching for a typical project, but of course this is a google product-oriented thing with nearly a thousand well-above-average-skill developers assigned, so it's not a problem for them to manage surprisingly huge volumes of code and changes.
Web browsers like Chromium are also a good example of the kind of modern software which doesn't work well with the debian release model, because it's "unsupported and practically broken" in like half a year. That's not true for the "make" utility, or for gimp or inkscape or libreoffice or audacious or countless other useful applications and tools which are not browsers.
I really don't like the fact that modern browsers are a crazy-train and there's just no getting off it.
It looks much worse than it is, because most packages are really boring from a maintainer perspective.
For example, KDE releases a hundred or so applications. Some of them are probably very interesting to package, but for most packages, `cmake && make && make install` is just fine (maybe plus a certain KDE integration macro that your packaging tool already carries).
How recent is "recently"? Only Fedora and its downstream distros do not ship Chromium. Everyone else has been packaging it for a few years now.
And that's with both Canonical and Debian's resources involved in making sure that I get a relatively up to date base set of packages.
Never mind the issue of when your politics don't quite align with those of any distribution's maintainers as a whole.
(Note: edited to clarify 'bleeding edge ubuntu' as meaning the non-LTS release train)
But with bleeding edge, you lose control over this. It may be worth it for some users, and not for others. That's a trade-off, and I myself am on the maintained-package-repo, use-only-stable-releases side; thus I use FreeBSD. Debian and Ubuntu mainly provide stable server OSes, so they're on my side too. If you want bleeding edge software, you should migrate to an OS that provides it, though you should know that bleeding edge is called bleeding for a reason. An OS that has to release a version that will be stable for the coming five years simply cannot provide bleeding edge packages.
Wanting software more than every five years is hardly "bleeding edge".
But if you want a secure, stable OS that won't drop the eggs, and also a rolling release, well, nobody can do anything about that.
I just did a `snap install vlc` and got a recent version of vlc without having to upgrade my entire operating system.
I find functional package management to be a vast improvement over the packaging situation --- unlike incarnations of the latest bundling/appification trend such as Snappy or Docker.
Out of the 2500 or so packages installed on the computer I'm typing this on, I want about 10 of them to be bleeding edge. Maybe 20.
As far as I can see, Snaps/Flatpaks aim to change this, but at the expense of much less manual oversight and, at the very least, inefficiencies in terms of storage.
I'm working on a project (licensed under the GPLv3) and I have decided to be my own maintainer, i.e. building and distributing packaged versions of my project all by myself:
1. my project is young and unknown, I have no choice but to package it myself (I really like AUR by the way);
2. it lets me write and maintain step-by-step documentation;
3. it allows me to enforce a consistent interface for my project across operating systems, I know my project can be used the exact same way on any operating system I have packaged it for;
4. I have had bad experiences with maintainers having no idea what they are doing and completely breaking my software;
5. It allows me to build a full release pipeline including both source and binary releases.
The first point is sort of a big deal, having packages is important to drive adoption, therefore I need to invest my time in that. I'm not gonna wait for some random folks to package it a few years down the road (if that ever happens).
My work is not incompatible with downstream packaging, but I do intend to work closely with (or directly maintain) any downstream package.
I really feel like this is sort of a catch-22.
It doesn't even need a Makefile. They will create one for you.
(They need to package the 3870 other requested projects first, though https://www.debian.org/devel/wnpp/requested )
And hats off to you for going the extra mile for your project! Your PKGBUILD even looks quite good. Would you like some assistance in writing a -git pkgbuild for people who want to try the absolute newest commit?
I didn't know about Suse's OBS until this thread, I guess I need to look into it. I have been building my own CI/CD pipeline on top of Buildbot so far. It's a PITA, but leaves a lot of room for creativity, hopefully it will be rewarding down the road.
I don't know about the -git thing for PKGBUILD, what's the syntax/documentation?
Compare it to pacman's release-based pkgbuild:
The major differences are the "git" url in the sources, the provides/conflicts metadata and the dynamic pkgver() function.
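A minimal sketch of what that looks like in practice (hypothetical project name and URL; the three differences are marked in comments):

```shell
# Hypothetical myproject-git PKGBUILD sketch
pkgname=myproject-git
pkgver=r123.abc1234   # placeholder; overwritten by pkgver() at build time
pkgrel=1
arch=('x86_64')
url="https://example.com/myproject"
license=('GPL3')
makedepends=('git')
provides=('myproject')     # difference 1: satisfies deps on the release pkg
conflicts=('myproject')    # ...and can't coexist with it
source=('git+https://example.com/myproject.git')  # difference 2: git source
sha256sums=('SKIP')

pkgver() {                 # difference 3: dynamic version from the repo
  cd myproject
  # revision count + short hash, e.g. r123.abc1234
  printf "r%s.%s" "$(git rev-list --count HEAD)" \
                  "$(git rev-parse --short HEAD)"
}

build() {
  cd myproject
  make
}

package() {
  cd myproject
  make DESTDIR="$pkgdir" install
}
```

The Arch wiki's VCS package guidelines cover the details and naming conventions.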
1. I use Arch Linux for my desktop and I like it.
2. I hate docker's repository systems (and other upstream packaging)
But this article writer sure is putting the maintainers on a pedestal.
More fundamentally, the maintainer is the primary line of defence and interaction between users and developers. Maintainers shield developers from uninformed users, allowing the devs to write software with less support overhead. Non-bugs are caught and filtered out. A low-quality bug reported to the distribution's tracker often becomes a good bug when the maintainer reports it upstream.
Maintainers also shield users from developers, offering a layer of quality control: choosing a subset of the (subjectively) best software FOSS has to offer. Maintainers will disable features that they feel act in bad faith. Maintainers' greatest power is the ability to outright say "This is not good enough for our users."
Really? Then what about this poor sap who had the misfortune of releasing one buggy minor version, and is now forced to support that for the next 5 years even though the bug was fixed in the next version?
The maintainers actively tried to sabotage his attempts to tell people to upgrade to a new version in the name of 'consistency.'
Sure, consistency is great, but only if we (the maintainers) don't have to do any work (such as backporting). We offload all the supports to the developer and have him do support work for us. Then when he complains we will call him a whiner and laugh him off...
You're also missing the forest for the trees - if no one maintained xscreensaver packages for distros, then there would be considerably fewer installs of it. Centralised package management is a very powerful feature, and without it, you're reduced to the Windows style of installing software: hunt around on the web for it, try and figure out if you're getting it from the official source, try and figure out if the download is trustworthy, install it separately, hope it doesn't sideload malware, then maintain it personally as time goes on, and maybe even suffer every second tool having its own self-updating service phoning home and nagging you (plus you may not even be able to easily uninstall the software if the author didn't offer that option). It's easy to do that for just one bit of software, but it's a real chore to do it for all your software.
Packaging is a difficult problem - the 80% use case is easy to solve, but the edge cases really require elegant forethought. It also seems that every distro has a different idea of what that 20% is...
> Really? Then what about this poor sap who had the misfortune of releasing one buggy minor version
Can't you do a minor version upgrade? Even the stable distros do updates all the time, just nothing major.
Ah, poor maintainer. Releasing software with a self-destruct timer is not an acceptable way to "tell people to upgrade".
If desktop packages like XScreensaver are what makes your system unstable, then you have worse problems...
The base problem is not ISVs or maintainers, but that the rigidity of package managers forces maintainers to either use one version of a lib for the duration of the distro release, or play musical chairs with package names to get around conflicts.
Either option makes it hard for third parties to produce packages.
The likes of Nix allow multiple versions of a package to exist side by side, and ensure that each program gets the version it wants.
This allows a third party package to request a newer version of a lib than the distro provides, as it will not conflict with the distro provided packages while retaining the naming scheme.
Writing package expressions is usually very simple, especially since we can use package importers to automatically generate and update them.
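For instance, both ecosystems ship importer tools that turn upstream metadata into a first-draft package expression (the package names below are just examples):

```shell
# Guix: generate a Scheme package definition from PyPI metadata:
guix import pypi requests

# Nix ecosystem: cabal2nix generates a Nix expression for a
# Haskell package from its Cabal metadata:
cabal2nix cabal://aeson
```

The generated expression usually still needs a human pass (license check, missing inputs), but it removes most of the boilerplate.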
It's really frustrating when upstream turns around and says "actually, we provide packages -- not you". In fact, we've had cases where a certain project suggested that a reasonable compromise would be that we change the name of the package in our distribution to "reduce confusion to users". Wat.
However, I do understand their point somewhat. If you have users pinging your issue tracker and they're using distro-supplied packages, then you are getting a lot of noise. But the right way of reporting bugs is to complain to your distribution (we actually live and breathe the distro you're using!) and I feel like upstream projects should make this communication model more clear rather than complaining to distros about users submitting bugs in the wrong place.
EDIT: Thanks for the SUSE shoutout! OBS is pretty cool and the people developing it are pretty cool people too.
rm -rf $DIR
rm -rf /home/guest
Users can still install ISV-packaged software in distros. But ISV packages typically hurt the system by removing things like vital dependency tracking information, or replace files that other packages provide, or won't be tracked for security patches, or will be installed into non-standard paths.
I have been a package maintainer for 6.. no wait 8... no, 9 Linux distributions. I have created thousands of packages (easily 10,000). I have maintained forked distributions for corporate use. I have also developed dozens of individual software projects which all were designed to be packaged by maintainers, and were adopted into Linux distributions by somebody else.
And I'm telling you: anything other than maintainers packaging software is a fucking nightmare for everyone involved. We have a good system! Devs, make your software so it's easy to package. Maintainers, package it for your particular distro. Users, just use what's in the distro, OR follow the upstream's instructions on installing _at your peril_.
If MongoDB is a commodity I can gladly stick to the distro version. If I'm developing something of more cutting edge I want and will install the latest version from the ISV repo.
Yes, ISVs have the right to package their software. But as someone who is both a maintainer of a free software project and packages software for a distribution, I can tell you that I definitely do _not_ want to deal with packaging for every distribution (especially if it's some magical "universal" package). I've used enough distributions to recognise that they all have differences that you really can't "just package for every distribution with one setup".
Only by playing musical chairs with the package names to avoid collisions.
And that in turn complicates dependencies.
With functional package management every application and library gets its own namespace. Installing something is a matter of picking individual items from their own namespaces and creating a profile, the union of all outputs of all packages that are to be installed.
This makes it possible for users to install not only different major versions for selected applications but even variants of the same package (e.g. different configuration flags or different compiler).
At the bioinformatics institute we are using this to provide scientific software to cluster users. With Guix, users can create custom variants with very little effort without affecting the system state or other people's software profiles.
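Concretely, on a shared cluster this looks something like the following sketch (assumes Guix is installed; `samtools` and `bwa` are example packages from Guix's bioinformatics collection):

```shell
# Install into a personal profile; no root needed, no global state
# is touched, and other users' profiles are unaffected:
guix package -p ~/bioinfo-profile -i samtools bwa

# Activate the profile in the current shell:
GUIX_PROFILE="$HOME/bioinfo-profile"
source "$GUIX_PROFILE/etc/profile"
```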
There is still a lot that can be done by improving package management systems. Giving up by going the route of appification (like it's done with Snappy or when Docker is abused as a packaging tool) is a bad choice, in my opinion.
The priority for a regular desktop user is a nicely integrated system, and that is something that ISVs cannot deliver because they don't know what to integrate with.
For example, some folks hate the Windows way because it has drawbacks like a different installation look/feel/process for each application, there's no automatic updates (unless the application has it built-in, which most don't), there's no automatic dependencies (so each app installer has to bundle its own versions of things, which also don't auto-update), there's no single place to go to look for a program that does something you need, automated installations are hard, inconsistent, or non-existent, security issues are harder to deal with (some image library issues from many years ago are probably still in the wild, bundled into some old version of some application), ...
Please keep in mind that "just silly" is just perspective and priorities.
So all MS could do was release a scanner to look for faulty versions, and beg people to pester ISVs about getting updates.
With universal packages we would have universal config file paths. For example, apache2 on Debian-based systems stores documents in /var/www, while something like Arch stores them at /srv/httpd. A similar thing happens with `sites-available` across distros. Different distributions do things differently, which their users expect, and putting upstream developers in charge of that would make a huge mess, because it's more convenient for them to store configuration at one specific location across every distribution. Yes, that makes it a pain to write deployment scripts across different distros, but IMO consistency is key to a good distribution.
Another key point here is testing: different distributions aim for different things. Debian aims for maximum stability and uses quite dated packages that are known to work, while other distributions are always on the bleeding edge. Will upstream backport security patches for old packages? I really doubt they would want to maintain packages for years to come.
I'm not saying it would be all bad; there certainly are great developers who would benefit from universal packaging, but that will not always be the case.
The shopping lens was openly added as a marketed feature, and could be turned off with one action. You could, I suppose, class it as spyware, but it wasn't surreptitiously sideloaded - it was touted as a default feature. It didn't alter the browser, wasn't limited run, didn't nag you, and while I didn't use it much, it could hardly be called 'pummelling with advertising'.
It was an openly-acknowledged experiment that failed, and was removed.
> To ensure the operating system is stable and reliable, OS distributors do not make major updates to packages such as OpenSSL during the lifetime of each release.
Not entirely true. RHEL backports bug and security fixes continuously, and select feature enhancements are rolled out with point releases. For example, RHEL 7.2 updated mod_nss, and enabled TLS v1.1 and v1.2 in NSS (I'm not sure how that interacts with Nginx, if at all). I imagine RHEL 7.3 should be coming out any time now. Maybe it will help with this.
That would keep the security, reviews & processes while stopping the nonsense of requiring root, global state & the single versions that rpm/deb force on the user.
Quite often, SBos are maintained by folks who use (and even depend on) the software they are responsible for. As such, well-tested and sanely configured software tends to be highly valued by maintainers, and package responsibilities commonly change hands when a maintainer no longer uses what (s)he's responsible for.
The mechanics of an SBo are also dead simple, making package review and modification by the user straightforward. This means that rolling your own package is straightforward, enabling you to easily become a "dogfooding" maintainer yourself for any package not already in the repo. Few distributions allow ordinary users to close the official package loop with anywhere near the ease--both technically and bureaucratically--that SBos do.
SBos are not the Perfect Package model by any means. But in terms of "dogfooding", ISVs and big distributions (especially Canonical) could stand to learn a few things from a relatively unstructured community of volunteers with a taste for the KISS principle.
The reason we did this is that we had to support RHEL5/6 and old Solaris and AIX distros.
We struggled for several years (and I personally struggled for at least a year) with supporting RHEL5 and it simply failed. That distro came with ruby-1.8.5 and upgrading it to even 1.8.7 was a major PITA. The ruby-1.9.2 in RHEL6 was also buggy and crashy and segfaulty and we didn't remotely have the resources to attempt to debug it and patch it for RedHat.
The reality is that we just wouldn't have been able to support those old distros at all.
I think there's a myth here that the human resources to support packaging and debugging apps against distro-supplied libraries and supporting packages magically appear out of thin air. They do not. As those supporting libraries age, it consumes more and more time to support "back porting" onto aging versions of frameworks, which cannot be upgraded in the distro because of compatibility with other apps. That creates a large amount of serious shit work (seriously, nobody enjoys doing that) and open source doesn't make it magically appear.
I also think there's a misconception that users have chosen to use those versions of software. Largely they've chosen them because they need compatibility with a large body of their own custom in-house software, and their internal porting effort is costly in time and labor. Largely they don't care if you install an upgraded version of supporting libraries, so long as it doesn't impact any other running software and it's contained.
For another example of software that has been doing this for years, check out vagrant.
Admittedly, there are issues with this, since OpenSSL vulnerabilities require the release of new software versions to upgrade the embedded copy.
- Desktop Linux has a small user base. Further splitting it up with different packages for each distribution makes it further a non-target.
- You can only install software cleanly that is packaged by your distributor. No distributor packages software that costs money. Many distributors try to prevent you from installing software with licenses they don't approve.
- If your user is using a stable distribution they will have old software. The distribution might only push bug fixes if they are security related. Users will report bugs that are already fixed.
- Maintainers often have no idea what they're doing, because they aren't working on the project and there are rarely other developers that check their patches. Worst case this leads to something like the Debian random number generator bug.
Distribution packaging for user-facing programs has large drawbacks with marginal benefits, which can mostly be boiled down to not-invented-here syndrome.
Taking the analogy further, the App Store trend is analogous to the 17th amendment superseding the states' right to appoint senators. The relevant implication being that tribal power inevitably gravitates toward centralization on behalf of the "user."