PPAs are a thing of the past, I think. A much better way forward would be to implement a truly transactional and functional packaging system (a la nix/guix), where updates could be rolled back seamlessly. This would allow many interesting features, including aggressive update cycles and multiple versions of the same package installed in the same system.
It'd help Debian, which thanks to its social contract is a very nice distribution but has become somewhat stagnant due to overly bureaucratic procedures.
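As a rough sketch of the kind of rollback this would give you (nix commands shown; guix has close equivalents):

    # upgrade everything in the current profile
    nix-env -u

    # something broke? jump back to the previous generation
    nix-env --rollback

    # or list the generations and switch to a specific one (42 is illustrative)
    nix-env --list-generations
    nix-env --switch-generation 42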
PPAs aren't a replacement for the package management system. They're just another repo. A transactional packaging system would be cool, but it's a much bigger change than implementing PPAs.
They fulfill different goals. I've reached the stage where I don't always need the latest software—I'm more concerned with something stable that receives prompt security patches. For this reason, I've been sticking with the LTS Ubuntu releases rather than upgrading twice a year (though Kubuntu 15.04 has been awfully tempting for me). I guess I'm getting boring with "old" age—I built my systems following Linux From Scratch back in the day; now, I just want a system that works.
A transactional packaging system would definitely ease the pain of running newer software, but I don't want to take the time to try it—I'd rather stick with the release cycles provided by the distro as a whole. In this situation, PPAs are great for when I want to install software that isn't in the official repos (or is at an older version).
Both have the goal of making it easy to get newer software, or software from other sources, within the same management interface and without breaking all the other packages in the distro. But transactional packaging systems actually reach those goals, while PPAs maybe get there for a time, and always end up breaking something.
But if you are using PPAs to curate different lists of released software, so you can keep your stable software while everybody else moves to something newer, transactional packaging does not share that goal, although it does make the job of maintaining such a PPA much easier.
> A much better way forward would be to implement a truly transactional and functional packaging system (a la nix/guix), where updates could be rolled back seamlessly.
It doesn't work like nix/guix, but Ubuntu are doing what you describe with Click and Snappy. Click is already live on Ubuntu phones.
> PPAs are a thing of the past, I think. A much better way forward would be to implement a truly transactional and functional packaging system (a la nix/guix), where updates could be rolled back seamlessly.
Exactly, and this is what CoreOS does.
I see this as the role that containers fill. Systemd already supports what you describe via systemd-nspawn, which provides native containerization, and with btrfs it allows either persistent or ephemeral containers to be built off of a "template"[0].
Systemd is already able to run Docker images and even pull them directly from the Docker registry, so I'd rather see containers adopted as the new model of vendoring[1] rather than PPAs.
And since Debian already uses systemd now, this should be very easy to integrate.
[0] I put "template" in quotation marks because I'm using the term loosely here; nspawn uses the word template to refer to something different.
[1] Let's be frank: while PPAs can do more, 90% of the PPAs I was asked to add when I used Ubuntu were basically aimed at solving vendoring and/or versioning issues.
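A rough sketch of what that looks like in practice, assuming a btrfs-backed /var/lib/machines (container names and paths here are just illustrative):

    # build a minimal Debian tree to use as the template
    sudo debootstrap stable /var/lib/machines/debian-template

    # boot a throwaway copy-on-write copy; all changes vanish on exit
    sudo systemd-nspawn --ephemeral -b -D /var/lib/machines/debian-template

    # or clone it into a persistent container for a specific service
    sudo machinectl clone debian-template my-service
    sudo systemd-nspawn -b -M my-service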
> Systemd is already able to run Docker images and even pull them directly from the Docker registry, so I'd rather see containers adopted as the new model of vendoring[1] rather than PPAs.
After having dug into Docker containers for a few weeks to identify what we have to be aware of as we roll them out at work, I would rather not.
NAT overhead, asymmetric TCP routes, devicemapper inefficiencies, 400+ MB images, new toolsets for debugging, unsigned images (though at least the foundation to support them exists now)...
Until these bumps are sorted out I'm more than happy to deal with a few rare incompatibilities from package upgrades.
> After having dug into Docker containers for a few weeks to identify what we have to be aware of as we roll them out at work, I would rather not.
> NAT overhead, asymmetric TCP routes, devicemapper inefficiencies, 400+ MB images, new toolsets for debugging, unsigned images (though at least the foundation to support them exists now)...
You don't have to use Docker images (and systemd actually handles all of the issues you point out better than Docker anyway). I only mentioned Docker because it allows people to use existing Docker images without any extra work.
I know. It's very nice. That's what I use everyday on Arch. Still, I feel functional package managers buy you a few more advantages. The whole package tree doesn't need to be in sync, and package hashes mean you have pretty strong guarantees things won't break due to dependency mismatches.
I don't understand how nix/guix obviates the need for PPAs. PPAs let you offer a collection of packages, and updates, outside of the main repository, and allow users to opt in to those packages and updates, which would otherwise not be visible to them. It seems to me nix/guix is entirely orthogonal to that.
It would be cool to have some sort of Docker-style version control. Docker keeps all the versions in the same image; the newer layers just go on top. Rollback and version control would be really easy!
PC-BSD is doing this. They're using ZFS snapshots to get a copy-on-write version of your filesystem before an update, so if something goes wrong you can instantly roll it back (even boot into it from the boot loader).
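The underlying mechanism is roughly this (dataset and snapshot names are made up; PC-BSD wraps it in boot environments):

    # snapshot the root dataset before upgrading
    zfs snapshot tank/ROOT/default@pre-upgrade

    # if the upgrade goes badly, roll straight back to it
    zfs rollback tank/ROOT/default@pre-upgrade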
PPAs are how people build custom packages for Ubuntu, most often to support newer versions of software over what's available in the official package list.
This is particularly helpful when working with LTS (Long Term Support) versions. I have many servers that are still on 12.04. I need to install Node 0.12, but the official packages are stuck at 0.6. PPAs make this possible without having to build from source or otherwise hack your way around the system.
Typical use cases for me are to get the latest stable Nginx, PHP, HAProxy and similar popular packages without having to upgrade to a non-LTS release of Ubuntu.
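The workflow is basically three commands (the PPA name here is made up; substitute one you actually trust):

    # add the PPA, refresh the package lists, install the newer build
    sudo add-apt-repository ppa:someuser/node-0.12
    sudo apt-get update
    sudo apt-get install nodejs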
This would be really interesting to see on Debian!
These types of things are already available (and I would wager much more stable than the type of thing found in PPAs) through third-party repositories like http://Dotdeb.org
Honestly, I think PPAs' only real-world appeal will be for desktop Debian users.
No one with any rational thought process is going to use a PPA-sourced package in a server environment.
PPA support would just make it easier and safer to add repositories like http://dotdeb.org to your setup. I don't see how PPAs are any worse than manually adding entries to your sources.list file. They're almost the same thing, except the former is a bit more automated.
In addition to the main package repositories for your distro, you can add little repos that just have one or two packages in them. They get their own signing keys, so the author of a package that's not in the main repo can make their own little repo. It's a lot more convenient and a little more secure than downloading the package file from the dev's website.
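In practice it boils down to something like this (repo URL and key ID are made up):

    # drop a one-line apt source for the little repo
    echo 'deb http://repo.example.com/debian stable main' | \
        sudo tee /etc/apt/sources.list.d/example.list

    # import the repo's signing key, then refresh
    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 0xDEADBEEF
    sudo apt-get update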
This is by far the most annoying part of having to support random Ubuntu systems. PPA package quality has tended to be lower than that of official packages over the years: typically not on first install, but on the upgrade/uninstall/dependency-management paths that don't rear their heads until months later (everything appears to work at first! this is great!).
What could use some work is the documentation/startup instructions for how to run a Debian package repository effectively (my company has a company-wide repository for internal packages with quality guidelines and a decent amount of review, but this is not without issues). https://wiki.debian.org/HowToSetupADebianRepository exists, but there are ten different tools listed, of varying quality, and there really, really needs to be a "this is how you need to do it" method.
That said, I really do like the model of "submit source package, server handles the logistics of building it for N architectures".
Looks neat. My intranet is completely airgapped. Is your enterprise offering self-hosted? "Works with your infrastructure" on your webpage is very ambiguous.
Thanks for the link. Aptly looks like a nice tool.
Your approach totally works, but we typically run into the problem of "system administrator has thing that works for them on a couple systems", then it's rather difficult to get everyone using the same tool so that people can interoperate (or bikeshedding about which one has a nifty feature and changing tools every six months) -- or worse, breaking something because they didn't understand how the other tool's process was different.
If different teams want to build their own packages (using whatever toolchain they want, which results in a .deb) and have them hosted on the company repo server(s) - why should they care how the hosting works? Have them supply (usually an upload of some kind) the built (and if necessary, signed) package files, and their job is done.
I know it's hip and cool to treat "devops" as meaning "developers get their hands dirty with ops and we have no separation" but that's a ridiculous interpretation of the concept.
There is a definite and identifiable skillset involved in system and network administration (aka Ops, Infrastructure, or historically "IT department").
There is no reason developers can't learn these skills (I actually went the other way, studied Network Engineering, learnt dev later, now I do both) but it's stupid to assume that just because a developer can install vagrant, that he or she is qualified to run or make key decisions about core infrastructure and services.
See my other comment, but hosting's the easy part; managing packages from "packaged this afternoon, works on the dev's machine" to "tested and works for a couple thousand people, let's deploy this" is the challenge. The bad ways of doing this involve every team setting up their own repos (differently! ha!) to test things before pushing them up a level. This happens a lot, absent a defined structure. So do things like e-mailing packages around, two people bumping a version number to the same thing, and so on.
I completely agree that developers shouldn't be making ops decisions and have scars to prove it.
In an ideal world, we'd only be getting packages built on integration test passes, and automatically pushed from CI. I've got that for some cases, but it's still a manual push process for others.
Why do you run into that situation? Do your developers not have access to your internal repo? Emailing packages around sounds like a symptom of a process problem.
The way I use dpkg-scanpackages is to have a single, central repo machine, with independent repositories for different projects. Publishing a .deb is the same for any project - scp, then ssh and run make in that project's directory. I have a very short shell script called publish-deb which does exactly that.
It could be that I've just not bumped into problems which require them, but I've not found a project which offered features above the set this gives you where the trade-off is worth the complexity. "Everything works this very simple way, there's almost nothing to learn" is very persuasive.
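For reference, the whole thing amounts to roughly this (host and paths are made up):

    #!/bin/sh
    # publish-deb (sketch): copy the freshly built .deb up, rebuild the index
    scp "$1" repo.internal:/srv/repo/myproject/
    ssh repo.internal 'cd /srv/repo/myproject && make'
    # where the server-side Makefile's default target is essentially:
    #   dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz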
Yeah, I agree with you. I think PPAs are a nice feature for end users [who don't mind nuking their computer if things go awry] but a prepackaged way to create your own mirror of Debian and then substitute packages as-desired would be lovely.
[e.g. Use a Debian mirror to pull from if there isn't a local substitute]
It sounds like what you want is your own repo with either backported packages or packages not available in any Debian repo?
That can be (and is) done today - host the compiled packages in a reprepro repo, and add it as a source for apt.
If the package is a backport, apt will upgrade/install it, as it's a higher version than that in the official repo. If the package is from outside Debian, apt will upgrade/install it, as it can't find any other package with that name.
I realise you said "prepackaged", but the steps to get a basic reprepro repo set up, and the steps to (e.g.) pull a testing package and build it under stable, are documented in numerous places online - if the existing options are too complex for the audience you have in mind, I think building (and maintaining) their own packages is not a reality for that audience.
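For the record, the basic setup is only a handful of commands (codename, paths and URL are illustrative):

    # one-off: create conf/distributions describing the repo
    mkdir -p /srv/repo/conf
    printf '%s\n' 'Codename: jessie' 'Components: main' \
        'Architectures: amd64 source' 'SignWith: yes' > /srv/repo/conf/distributions

    # per package: pull the built .deb into the repo
    reprepro -b /srv/repo includedeb jessie mypackage_1.0_amd64.deb

    # client side: point apt at wherever the repo is served from
    echo 'deb http://repo.example.com/ jessie main' | \
        sudo tee /etc/apt/sources.list.d/internal.list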
Every time, I have to do manual steps for something that is a one-off [and let us be honest, I'm not going to automate configuring a deb repo since I only re-do it every few years].
Yes, it isn't /hard/, but that doesn't change the fact that it consumes time I could spend elsewhere if I had a standard ISO from Debian I could just spin up on a VM or a bare-metal server.
Atm, my major limiter is how much time I spend building/maintaining environments vs. programming.
Keeping repositories synced between versions of Debian and architectures is slightly harder but doable.
Best practices for handling uploads from individual developers, making sure all dependencies are included at the same time when a package is published is hard.
Running something like the official Debian unstable -> testing transition of packages (with QA/testing in that loop), and figuring out what rebuilds are necessary is very hard.
I never understood PPAs. Ever since /etc/apt/sources.list was split up into /etc/apt/sources.list.d, we've been able to manage repositories using files. Since we already manage the rest of our system files with packages, why not do the same for repos? Dump a sources.list.d entry, and possibly a GPG key, into a .deb package and give that to users, rather than instructions on how to enter non-standard URLs into a vendor-specific GUI.
http://chriswarbo.net/git/service-packs.git provides a GUI for creating "service packs": packages containing a repository, generated from a list of packages and their dependencies.
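A bare-bones version of such a package can be thrown together like this (names, URL and key are made up):

    # stage the files the package should drop onto the system
    mkdir -p myrepo/etc/apt/sources.list.d myrepo/etc/apt/trusted.gpg.d myrepo/DEBIAN
    echo 'deb http://repo.example.com/debian stable main' \
        > myrepo/etc/apt/sources.list.d/myrepo.list
    gpg --export 'My Repo Signing Key' > myrepo/etc/apt/trusted.gpg.d/myrepo.gpg

    # minimal control file, then build the .deb
    printf '%s\n' 'Package: myrepo-archive-keyring' 'Version: 1.0' \
        'Architecture: all' 'Maintainer: Admin <admin@example.com>' \
        'Description: apt source and signing key for repo.example.com' \
        > myrepo/DEBIAN/control
    dpkg-deb --build myrepo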
Since in this case your binary package would consist of only a text file, building a full-fledged deb package could be considered overkill.
Besides that, installing your binary package would give a signature verification error, requiring the user to manually install your GPG key. But Launchpad and apt-add-repository handle that automatically.
And that's just for users. Think about the difference between the steps involved for repo authors: they could set up and build a package just to install a text file...or they could go to Launchpad, click a few buttons, and have a PPA ready.
Well, it's been a while since I wrote my proof-of-concept, but I managed to get the steps down to:
1. "Open with gDebi"
2. Install package
3. Install "actual, desired package"
The package list can be refreshed automatically using "postpone" (which was new to Debian at the time, but is pretty widespread now).
Also, the "service pack" idea is even more powerful. The package contains the repo, along with a metapackage depending on the "actual, desired package(s)" for which the repo was generated. The repo is unpacked to disk, added to sources.list.d, then the metapackage is installed.
The original idea was to package the closure of a bunch of packages; e.g. we can say "include 'gimp', 'inkscape' and 'blender', but not 'ubuntu-desktop'"; that would generate a repo containing gimp, inkscape, blender and all of their dependencies, except for those implied by the existence of ubuntu-desktop. This repo would be packaged up, along with a metapackage depending on gimp, inkscape, blender and ubuntu-desktop. I also made an option to package up security updates.
A nice consequence is the ability to distribute all of the dependencies of a package, but they'll only be used as fallbacks; if newer versions are available, they'll be used instead.
Of course, updates can free up disk space as the need for fallbacks diminishes.
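The metapackage half is easy to sketch with something like equivs (not necessarily how my proof-of-concept built it; the names are made up):

    # one-file description of the metapackage the "service pack" carries
    printf '%s\n' 'Package: graphics-service-pack' 'Version: 1.0' \
        'Depends: gimp, inkscape, blender' \
        'Description: pulls in the packages this repo was generated around' \
        > graphics-pack.control
    equivs-build graphics-pack.control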
The suggestion to use `postpone` (by which I assume you mean `postpone --debian`?) to trigger automatic updates is very interesting, and led me to look into it further, only to discover it's been removed from Debian from Jessie onwards.
The message in the removal email mentions that dpkg triggers fulfil the same task now, have you ever tried/managed to use dpkg-triggers in the same way? It strikes me as likely being impossible as they're run by dpkg itself, which needs to exit before the update can begin...
At this stage it seems like a solution to do this would need to re-package `postpone` for >= Jessie (which isn't impossible, it just makes for one more thing to support).
> have you ever tried/managed to use dpkg-triggers in the same way? It strikes me as likely being impossible as they're run by dpkg itself, which needs to exit before the update can begin...
No, I've not looked at this problem for ~6 years. APTonCD achieves some of the goals I had, so when that became popular I abandoned my attempts.
PPAs as they stand are an antipattern. If Debian brings some order to the chaos of unsupported, incompatible and duplicated PPA repos then that would be a win on balance. But if they just bring that chaos to Debian then it certainly won't.
Care to elaborate (on both, i.e., what's the upstart issue and how a Debian vote relates to it)? It seems I missed all of that, but would be grateful to learn more.
Ubuntu created upstart to replace the classic init system; systemd came along later with the same goal. Debian voted to adopt systemd as their new init system instead of upstart, and so now Ubuntu, because of its Debian dependencies, will be dropping upstart and moving to systemd as well.
I guess I always sort of hoped they'd hold out against the tide of mediocrity sweeping the init systems of seemingly every distro save Gentoo these days.
The general idea of PPAs is that they are "personal". You don't need to set up anything to create your own repository. Just build the package, upload it to the PPA and you have a fully working APT repository.
You don't even have to build the package. You just create a source package and upload it with "dput" or similar. Then the PPA servers will build your package for multiple architectures (usually x86 and x86_64, but a Debian PPA might build for all Debian architectures, which are many more). It is also possible to build for multiple versions easily by changing the target distribution in the changelog file.
I've done the above both with and without a PPA for Ubuntu. Without a PPA you need to have separate virtual machines for each architecture (and if you don't have physical machines and/or hardware virtualization, this will be really really slow). And you also need a VM (or perhaps a chroot/pbuilder environment) for each distribution version, if you target more than one version.
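The upload itself is only a couple of commands (PPA and package names are made up):

    # build a signed, source-only upload and push it to the PPA builders
    debuild -S -sa
    dput ppa:someuser/my-ppa ../mypackage_1.0-1_source.changes

    # retargeting another release is just the distribution field in debian/changelog:
    #   mypackage (1.0-1~trusty1) trusty; urgency=medium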