
Debian elects new project leader, PPA support proposed - Iuz
http://distrowatch.com/weekly.php?issue=20150420#news
======
nextos
PPAs are a thing of the past, I think. A much better way forward would be to
implement a truly transactional and functional packaging system (a la
nix/guix), where updates could be rolled back seamlessly. This would allow
many interesting features, including aggressive update cycles and multiple
versions of the same package installed in the same system.
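
For anyone unfamiliar, this is roughly what that looks like with Nix today (Guix has the analogous `guix package --roll-back`); `hello` is just an example package:

```shell
nix-env -iA nixpkgs.hello        # each install/upgrade creates a new "generation"
nix-env --list-generations       # generations are immutable and kept around
nix-env --rollback               # atomically switch back to the previous one
```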

It'd help Debian, which thanks to its social contract is a very nice
distribution but has become somewhat stagnant due to overly bureaucratic
procedures.

~~~
chimeracoder
> PPAs are a thing of the past, I think. A much better way forward would be to
> implement a truly transactional and functional packaging system (a la
> nix/guix), where updates could be rolled back seamlessly.

Exactly, and this is what CoreOS does.

I see this as the role that containers fill. Systemd already supports what you
describe, via systemd-nspawn, which allows native containerization, and with
btrfs, which allows either persistent or ephemeral containers to be built off
of a "template"[0].

Systemd is already able to run Docker images and even pull them directly from
the Docker registry, so I'd rather see containers adopted as the new model of
vendoring[1] rather than PPAs.

And since Debian already uses systemd now, this should be very easy to
integrate.
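
A rough sketch of that workflow, assuming a systemd of this era (the pull-dkr verb was later removed from systemd, and the image/machine names here are placeholders):

```shell
# Pull a Docker image directly from the registry
# (pull-dkr existed around systemd 219; flags approximate)
machinectl pull-dkr --dkr-index-url=https://index.docker.io library/debian

# Boot an image as an nspawn container
systemd-nspawn -M library-debian -b

# On btrfs, machines are subvolumes, so "template"-style cloning is cheap
machinectl clone library-debian my-service   # persistent copy
systemd-nspawn -M my-service -x              # -x: ephemeral snapshot, discarded on exit
```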

[0] I put "template" in quotation marks because I'm using the term loosely
here; nspawn uses the word template to refer to something different.

[1] Let's be frank: while PPAs can do more, 90% of the PPAs I was asked to add
when I used Ubuntu were basically aimed at solving vendoring and/or versioning
issues.

~~~
falcolas
> Systemd is already able to run Docker images and even pull them directly
> from the Docker registry, so I'd rather see containers adopted as the new
> model of vendoring[1] rather than PPAs.

After having dug into Docker containers for a few weeks to identify what we
have to be aware of as we roll out to them at work, I would rather not.

NAT overhead, asymmetric TCP routes, devicemapper inefficiencies, 400+ MB
images, new toolsets for debugging, unsigned images (though at least the
foundation to support them exists now)...

Until these bumps are sorted out I'm more than happy to deal with a few rare
incompatibilities from package upgrades.

~~~
chimeracoder
> After having dug into Docker containers for a few weeks to identify what we
> have to be aware of as we roll out to them at work, I would rather not.

> NAT overhead, asymmetric TCP routes, devicemapper inefficiencies, 400+mb
> images, new toolsets for debugging, unsigned images (though at least the
> foundation to support them exists now)...

You don't have to use Docker images (and systemd actually handles all of the
issues you point out better than Docker anyway). I only mentioned Docker
because it allows people to use existing Docker images without any extra work.

------
ColinDabritz
PPA = Personal Package Archive

As someone who is unfamiliar with this term, can someone please summarize what
it means? What makes it appealing?

Thank you in advance.

~~~
jameskilton
PPAs are how people build custom packages for Ubuntu, most often to support
newer versions of software over what's available in the official package list.

This is particularly helpful when working with LTS (Long Term Support)
versions. I have many servers that are still on 12.04. I need to install Node
0.12, but the official packages are stuck at 0.6. PPAs make this possible
without having to build from source or otherwise hack your way around the
system.
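
For instance, with the chris-lea PPA, which carried newer Node builds at the time (PPA name from memory, so treat it as illustrative):

```shell
sudo add-apt-repository ppa:chris-lea/node.js   # adds the repo and its signing key
sudo apt-get update
sudo apt-get install nodejs                     # newer Node instead of the stock 0.6
```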

~~~
fideloper
Typical use cases for me are to get the latest stable Nginx, PHP, HAProxy and
similar popular packages without having to upgrade to a non-LTS release of
Ubuntu.

This would be really interesting to see on Debian!

~~~
stephenr
These types of things are already available (and I would wager _much_ more
stable than the kind of thing found in PPAs) through third-party repositories
like [http://Dotdeb.org](http://Dotdeb.org)

Honestly, I think PPAs' only real-world appeal will be for desktop Debian
users.

No one with any rational thought process is going to use a PPA sourced package
on a server environment.

edit: missing closing parenthesis.

~~~
reubenmorais
PPA support would just make it easier and safer to add repositories like
[http://dotdeb.org](http://dotdeb.org) to your setup. I don't see how PPAs are
any worse than manually adding entries to your sources.list file. They're
almost the same thing, except the former is a bit more automated.
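
A minimal sketch of what `add-apt-repository` effectively writes (using a temp directory here instead of /etc/apt so this is safe to run, and a made-up ppa:example/demo):

```shell
# Simulate the file add-apt-repository would drop into /etc/apt/sources.list.d
APTDIR=$(mktemp -d)
mkdir -p "$APTDIR/sources.list.d"
echo "deb http://ppa.launchpad.net/example/demo/ubuntu trusty main" \
    > "$APTDIR/sources.list.d/example-demo-trusty.list"

# add-apt-repository also imports the PPA's signing key, roughly:
#   apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <KEY_ID>

cat "$APTDIR/sources.list.d/example-demo-trusty.list"
```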

------
iguessthislldo
PPA support in Debian... please, yes.

~~~
structural
This is by far the most annoying part of having to support random Ubuntu
systems. PPA package quality has tended to be lower over the years, typically
not on first install but on the upgrade/uninstall/dependency-management paths
that don't rear their heads until months later (everything appears to work at
first! this is great!)

What could use some work is the documentation/startup instructions for how to
run a Debian package repository effectively (my company has a company-wide
repository for internal packages with quality guidelines and a decent amount
of review, but this is not without issues).
[https://wiki.debian.org/HowToSetupADebianRepository](https://wiki.debian.org/HowToSetupADebianRepository)
exists, but there's ten different tools listed of varying quality and there
really, really needs to be a "This is how you need to do it." method.
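
For what it's worth, of the tools on that wiki page, reprepro is a common choice for exactly this. A minimal sketch of its conf/distributions file (all values here are placeholders):

```
Origin: internal.example.com
Label: Internal packages
Codename: jessie
Architectures: amd64 i386 source
Components: main
SignWith: DEADBEEF
```

With that in place, `reprepro includedeb jessie foo_1.0_amd64.deb` adds a single .deb, and `reprepro include jessie foo_1.0-1_amd64.changes` handles a full upload.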

That said, I really do like the model of "submit source package, server
handles the logistics of building it for N architectures".

~~~
ice799
I built [https://packagecloud.io](https://packagecloud.io) to help make
creating, hosting, and installing APT (and other) repositories easier. Check
it out :D

~~~
structural
Looks neat. My intranet is completely airgapped. Is your enterprise offering
self-hosted? "Works with your infrastructure" on your webpage is very
ambiguous.

------
chriswarbo
I never understood PPAs. Ever since /etc/apt/sources.list was split up into
/etc/apt/sources.d, we've been able to manage repositories using files. Since
we already manage the rest of our system files with packages, why not do the
same for repos? Dump a sources.d entry, and possibly a GPG key, into a .deb
package and give that to users, rather than instructions on how to enter non-
standard URLs into a vendor-specific GUI.
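
A minimal sketch of such a package (all names and URLs here are placeholders; requires dpkg-deb, i.e. a Debian-ish host):

```shell
# Lay out a trivial package that only ships a sources.list.d entry
mkdir -p example-repo/DEBIAN example-repo/etc/apt/sources.list.d

cat > example-repo/DEBIAN/control <<'EOF'
Package: example-repo
Version: 1.0
Architecture: all
Maintainer: Example Maintainer <dev@example.com>
Description: APT source entry for the example repository
EOF

echo "deb http://repo.example.com/debian stable main" \
    > example-repo/etc/apt/sources.list.d/example.list

# (a real package would also ship the repo's GPG key)
dpkg-deb --build example-repo example-repo_1.0_all.deb
```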

In fact, I wrote tools to do this many years ago:

[https://gitorious.org/debian-repo-packager](https://gitorious.org/debian-
repo-packager) creates packages for a bunch of repos, including PPAs

[http://chriswarbo.net/git/service-
packs.git](http://chriswarbo.net/git/service-packs.git) provides a GUI for
creating "service packs": packages containing a repository, generated from a
list of packages and their dependencies.

~~~
alphapapa
That's a sensible approach. But consider the steps involved:

Your way:

1. Download deb package.
2. Locate downloaded deb package.
3. Install deb package.
4. Update package lists.
5. Install actual, desired package.

PPA way:

1. Add PPA.
2. Update package lists.
3. Install actual, desired package.

Since in this case your binary package would consist of only a text file,
building a full-fledged deb package could be considered overkill.

Besides that, installing your binary package would give a signature
verification error, requiring the user to manually install your GPG key. But
Launchpad and apt-add-repository handle that automatically.

And that's just for users. Think about the difference between the steps
involved for repo authors: they could setup and build a package just to
install a text file...or they could go to Launchpad, click a few buttons, and
have a PPA ready.

~~~
chriswarbo
> 1. Download deb package.

> 2. Locate downloaded deb package.

> 3. Install deb package.

> 4. Update package lists.

> 5. Install actual, desired package.

Well, it's been a while since I wrote my proof-of-concept, but I managed to
get the steps down to:

1\. "Open with gDebi"

2\. Install package

3\. Install "actual, desired package"

The package list can be refreshed automatically using "postpone" (which was
new to Debian at the time, but is pretty widespread now).

Also, the "service pack" idea is even more powerful. The package contains the
repo, along with a metapackage depending on the "actual, desired package(s)"
for which the repo was generated. The repo is unpacked to disk, added to
sources.list.d, then the metapackage is installed.

The original idea was to package the closure of a bunch of packages; eg. we
can say "include 'gimp', 'inkscape' and 'blender', but not 'ubuntu-desktop'";
that would generate a repo containing gimp, inkscape, blender and all of their
dependencies, except for those implied by the existence of ubuntu-desktop.
This repo would be packaged up, along with a metapackage depending on gimp,
inkscape, blender and ubuntu-desktop. I also made an option to package up
security updates.

A nice consequence is the ability to distribute all of the dependencies of a
package, but they'll only be used as fallbacks; if newer versions are
available, they'll be used instead.

Of course, updates can free up disk space as the need for fallbacks
diminishes.

~~~
stephenr
The suggestion to use `postpone` (by which I assume you mean `postpone
--debian`?) to trigger automatic updates is very interesting, and led me to
look into it further, only to discover it's been removed from Debian from
Jessie onwards.

The message in the removal email mentions that dpkg triggers now fulfil the
same task. Have you ever tried/managed to use dpkg triggers in the same way?
It strikes me as likely impossible, as they're run by dpkg itself, which
needs to exit before the update can begin...

At this stage it seems like a solution to do this would need to re-package
`postpone` for >= Jessie (which isn't impossible, it just makes for one more
thing to support).

~~~
chriswarbo
> have you ever tried/managed to use dpkg-triggers in the same way? It strikes
> me as likely being impossible as they're run by dpkg itself, which needs to
> exit before the update can begin...

No, I've not looked at this problem for ~6 years. APTonCD achieves some of the
goals I had, so when that became popular I abandoned my attempts.

------
yarrel
PPAs _as they stand_ are an antipattern. If Debian brings some order to the
chaos of unsupported, incompatible and duplicated PPA repos then that would be
a win on balance. But if they just bring that chaos to Debian then it
certainly won't.

------
na85
Rather upset to see Ubuntu moving away from Upstart.

~~~
pekk
That happened shortly after the infamous Debian vote

~~~
kleiba
Care to elaborate (on both, i.e., what's the upstart issue and how a Debian
vote relates to it)? It seems I missed all of that, but would be grateful to
learn more.

~~~
notatoad
Ubuntu created Upstart around the same time systemd was created, both meant to
replace the classic init system. Debian voted to adopt systemd as its new
init system instead of Upstart, and so now Ubuntu, because of its Debian
dependencies, will be dropping Upstart and moving to systemd as well.

~~~
rlpb
> Ubuntu created upstart around the same time systemd was created, both meant
> to replace the classic init system.

Upstart predates systemd by over four years. I just checked the initial
release dates on Wikipedia to confirm.

[https://en.wikipedia.org/wiki/Upstart](https://en.wikipedia.org/wiki/Upstart)
[August 24, 2006]

vs.

[https://en.wikipedia.org/wiki/Systemd](https://en.wikipedia.org/wiki/Systemd)
[30 March 2010]

------
shmerl
How are PPAs different from current unofficial Debian repositories?

~~~
lukaslalinsky
The general idea of PPAs is that they are "personal". You don't need to set up
anything to create your own repository. Just build the package, upload it to
the PPA and you have a fully working APT repository.

~~~
samuellb
You don't even have to build the package. You just create a source package and
upload it with "dput" or similar. Then the PPA servers will build your package
for multiple architectures (usually x86 and x86_64, but a Debian PPA might
build for all Debian architectures, which are many more). It is also possible
to build for multiple versions easily by changing the target distribution in
the changelog file.
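
The workflow described above, sketched with placeholder names (debuild and dput are the usual tools here):

```shell
# debian/changelog's first line names the target distribution, e.g.:
#   mypkg (1.0-1ppa1) trusty; urgency=medium
# Changing "trusty" retargets the build without touching anything else.

debuild -S -sa              # build and sign a source-only upload
dput ppa:myuser/myppa ../mypkg_1.0-1ppa1_source.changes
```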

I've done the above both with and without a PPA for Ubuntu. Without a PPA you
need to have separate virtual machines for each architecture (and if you don't
have physical machines and/or hardware virtualization, this will be really
really slow). And you also need a VM (or perhaps a chroot/pbuilder
environment) for each distribution version, if you target more than one
version.

