

Ask HN: How do you manage your *nix binary package updates? (first post. eek) - baconhigh

We run clusters of machines and whenever there's an update via USN / DSA or whatever I end up manually patching each cluster with cluster-ssh.

This is less than ideal, but seems to work.

What do you do?

Note: I'm talking about binary packages distributed by your OS: apt upgrades / rpms... not config files (Hi, puppet/chef), or deprec for capistrano style stuff.
======
aspitzer
I used to manage around 300 servers myself. The only way it was possible was
to have a completely stripped OS. All apps we used I installed under:

/apps/<appname>/<app version> example: /apps/perl/5.8.12 then I would symlink
/apps/perl/5.8.12 to /apps/perl/current

The profiles on the machine would add /apps/*/current/bin to the path. This
allowed upgrades and roll backs just by changing the symlink to the one I
wanted to be current. This also allowed me to push out versions of software
ahead of time, then just change the link when we were ready to use them.

Each machine would rsync /apps from a master distro nightly and of course I
could force it with a for i in `cat hosts.list`...
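The symlink-switch scheme above can be sketched roughly like this (paths and the `activate` helper are illustrative, and a local `./apps-demo` directory stands in for `/apps` so the sketch is runnable without root):

```shell
#!/bin/sh
# Sketch of the /apps/<appname>/<version> layout described above.
# Install versions side by side; "current" is just a symlink you flip.
set -e
APPS=${APPS:-./apps-demo}   # /apps in the original post

# Activate (or roll back to) a given version by repointing "current".
# ln -sfn replaces an existing symlink atomically enough for this purpose.
activate() {
    app=$1 version=$2
    ln -sfn "$APPS/$app/$version" "$APPS/$app/current"
}

# Example: stage perl 5.8.12 ahead of time, then switch when ready.
mkdir -p "$APPS/perl/5.8.12/bin"
activate perl 5.8.12
```

Rollback is the same operation pointed at the old version, and the nightly distribution is just an rsync of the whole tree, e.g. `for i in $(cat hosts.list); do rsync -a --delete /apps/ "$i:/apps/"; done`.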

~~~
locci
This sounds a lot like what gobolinux is doing [1] with its filesystem/package
manager. Did that approach ever cause you more trouble than it was worth?

Making the filesystem the actual package database rather than a snapshot of
what should be installed is quite tempting.

[1]<http://gobolinux.org/index.php?page=at_a_glance>

------
rdtsc
Create a custom repository of your packages and point all the machines in your
cluster to it? If it is CentOS or RHEL just add a new repo to
/etc/yum.repos.d. If you are upgrading system repos then just set the priority
of your repo to be higher. Of course this implies that you have successfully
rpm-fied your packages. We did that to all our packages and configuration.
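A repo file along these lines is what this looks like in practice (the repo name, URL, and priority value are illustrative; `priority=` requires the yum priorities plugin, where a lower number wins):

```ini
# /etc/yum.repos.d/internal.repo -- names and baseurl are illustrative
[internal]
name=Internal package repository
baseurl=http://repo.example.com/centos/$releasever/$basearch/
enabled=1
gpgcheck=1
priority=10
```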

~~~
TuaAmin13
If you run the Red Hat family (Fedora, CentOS, SL, whatever else) you could
also run Spacewalk. Machines periodically check in and get the config files
and packages that they need. If you configure it to do so you can also push.

At my last job we ran Spacewalk on our Koji box. Koji will let you build
packages that you can then put in your repo to yum install.

~~~
mikemaccana
+1. Yum and apt and the like are awesome for individual machines, but
pull-based. Spacewalk and RHN are centralized push, which you need for any
decent number of machines.

------
briandoll
I was at devopsdays in Mountain View CA a week ago and one of the panel
discussions was on package management. I was a bit surprised to see it as a
topic, as there are numerous known ways to solve the problem.

It turns out, that's exactly the problem. The topic was hotly debated and yes,
there are endless possible ways to distribute packages across a large
environment.

I think choosing a pattern involves deciding for yourself how secure/auditable
you need your environment to be and how tightly you want to couple your
deployment process to your current architecture (i.e. some package managers
only work on some systems, etc.). That will narrow your choices down to a
handful, and then you get to dig into the implementation details and decide
from there.

------
bretthoerner
Isn't this what people run their own apt mirrors for?

~~~
BCM43
Yep. At my work we have an apt repository, push packages to it, then issue
an apt-get update on all the machines.

~~~
drivebyacct2
Hopefully an upgrade too, as an update alone wouldn't do anything.

~~~
drivebyacct2
So, do you guys not know what you're talking about, or is there some
legitimate reason for downvoting this? Please, by all means, `update` to your
heart's content with aptitude. It won't do a damn thing, but OKAY.

~~~
BCM43
Yes, I meant upgrade. I mistyped. I assume the reason for the down-voting is
that it was a petty point and it was clear what I meant.

~~~
drivebyacct2
For what it's worth, I meant it as a tongue-in-cheek sort of thing. I wasn't
really criticizing. Sometimes I come on HN and am more conversational or
joking than I ought to be. Anyway, I figured you meant as much, sorry!

------
Prometheu5
Perhaps Murder would work? See:
[http://engineering.twitter.com/2010/07/murder-fast-datacente...](http://engineering.twitter.com/2010/07/murder-fast-datacenter-code-deploys.html)

------
dkarl
We manage our own dists. We package our own software as .debs, so everything
gets managed the same way. All security updates, release deployments,
rollbacks, etc., are managed with apt-get. I don't exactly understand pinning,
but it's also important to how we manage packages.
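For the curious, pinning roughly means telling apt which source to prefer when the same package exists in several repos. A sketch of a preferences file (the hostname is illustrative, not from the post):

```
# /etc/apt/preferences -- prefer packages from our internal repo,
# fall back to the distribution's stable release for everything else
Package: *
Pin: origin repo.example.com
Pin-Priority: 900

Package: *
Pin: release a=stable
Pin-Priority: 500
```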

Depending on the release and the kind of server we're deploying to, we may do
them all in one night or in batches of a few hundred over a week. All our
boxes install security updates regularly (because we promptly add security
updates to our dists).

------
bcl
I run a squid caching proxy and direct all my (Fedora or RHEL) systems to use
it, along with a non-mirrorlist repository that is fairly close to me. I'm not
totally sure what your question is, though.

If you want to tightly control the packages that get updated then there isn't
much to be done other than manually managing your own repo. Although if it is
this big of a concern you should probably be running a distribution that is
less volatile than Ubuntu. Say RHEL, CentOS, or Debian.
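Pointing yum through a squid cache is a one-line client change (host and port here are illustrative; a fixed `baseurl` repo rather than a mirrorlist is what makes the cache actually hit):

```ini
# /etc/yum.conf on each client -- route package downloads via the cache
[main]
proxy=http://squid.internal:3128
```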

------
nodata
You can do two things.

1\. Mirror a yum repository plus updates. For each day create a hardlinked
directory of updates named reponame.YYYYMMDD. On the hosts you wish to update
sed -i to the new day in your yum repo config file and run yum -y upgrade.
When you know it works, do it on the other hosts too.

2\. Use a configuration management tool like Puppet.

For your sized infrastructure you probably want the first.
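The hardlink-snapshot idea in step 1 might look like the following (local paths are illustrative; `cp -al` is GNU cp, and a hardlinked copy costs almost no disk space while freezing that day's repo state):

```shell
#!/bin/sh
# Sketch of step 1 above: snapshot the updates mirror under a dated name.
set -e
MIRROR=${MIRROR:-./mirror}
TODAY=$(date +%Y%m%d)

mkdir -p "$MIRROR/updates"
cp -al "$MIRROR/updates" "$MIRROR/updates.$TODAY"   # hardlinked snapshot

# On a host you want to update, repoint its repo config at the snapshot:
#   sed -i "s/updates\.[0-9]\{8\}/updates.$TODAY/" /etc/yum.repos.d/updates.repo
#   yum -y upgrade
echo "$MIRROR/updates.$TODAY"
```

Once a canary host has upgraded cleanly from the dated snapshot, you run the same sed on the rest of the fleet.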

~~~
baconhigh
Already using puppet, and it's not yum; we're on debian/ubuntu ;) (thanks though)

------
bastiat
Corporate repository to which official packages migrate upon certification.
This usually takes about 24 hours.

------
rjh29
I've heard good things about MCollective for distributed package updates:

<http://www.puppetlabs.com/mcollective>

