The most important information is missing - did the attackers get the package signing key? The comment "the window for package compromise was very very small" is a bit ambiguous.
That the security of package integrity is so fragile should greatly concern all of us. Almost all big Linux distributions rely on downloading compiled packages signed by a single key. If the build system is compromised, your Linux distribution of choice will happily give you trojaned packages that you install as root. If only the signing key is compromised, the attacker additionally needs a man-in-the-middle position, but mounting that attack is easier because most repositories aren't served over TLS.
The situation can be improved in many ways. By making the build process deterministic, many parties can compile the same package and compare the results, alerting the community if one of the build systems reports a different checksum. Package managers like apt and yum should also be extended with the ability to require signatures from multiple parties.
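A rough sketch of what that cross-checking could look like (the builder hostnames and package name are just placeholders):

    # download the "same" package from two independently operated builders
    curl -s https://builder-a.example/pool/ghc_7.8.3-1_amd64.deb | sha256sum
    curl -s https://builder-b.example/pool/ghc_7.8.3-1_amd64.deb | sha256sum
    # with a deterministic build, both commands must print the same hash;
    # a mismatch means at least one build system produced a different binary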
The Tor Project, Debian and Fedora have started working on the first problem, but I don't know of any efforts to support multi-sig:
https://blog.torproject.org/blog/deterministic-builds-part-o...
https://blog.torproject.org/blog/deterministic-builds-part-t...
https://wiki.debian.org/ReproducibleBuilds
https://securityblog.redhat.com/2013/09/18/reproducible-buil...
Nix already solved the first problem a while ago, though its adoption is hindered by the fact that it would essentially deprecate the entire infrastructure that actually makes a distribution a distribution, and is therefore not aligned with the existing distributions' best interests -- despite their widespread claims of wanting standardization.
Nix is simply amazing, and there IS a distribution (NixOS[1]) which ships it. The big problem, of course, is keeping packages current.
Nix and SmartOS[2] are both really important in my book. Nobody's talking about them as distributions, but they both have intense potential to completely change not just server administration (which is what they solve right now) but also the desktop experience. It would look like the Windows desktop experience, where they try to hide C: from you and just give you a set of folders shared across your applications: Documents, Desktop Icons, Pictures. When every application occupies its own universe, the OS itself behaves just the way the "window" metaphor suggests; moreover, you can start to seamlessly include VMs and get Windows applications living next to OSX ones. The only cost is hard-drive space, but hard-drive space can be shared if we can package the software and share packages with the same checksums.
Eventually, I hope that people will just assume that an application "comes with" its operating environment.
Hopefully/presumably the gpg-signing key wasn't on the box (no reason for it to be) -- so the only way to compromise the debs would/should be by uploading/using a new gpg-key -- which apt would warn the user about.
Now, if anyone followed only part of the instructions:
Repository keys
To avoid warnings, you might want to install
the key used to sign these repositories:
GET http://deb.haskell.org/deb.haskell.org.gpg-key | \
apt-key add -
Unless they paid attention to:
The key is signed using a key from the Debian keyring,
in case you want to verify it first.
This is of course backwards: it should list instructions for verifying the key before instructions on how to add it to apt...
Getting the gpg-key via http and not verifying it is batshit crazy anyway. I'm perfectly fine with anchoring trust in Debian's keyring -- after all, you implicitly trust that key with root access to all servers running apt.
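Something like this is roughly what "verify first" could look like, assuming the debian-keyring package is installed (the key ID below is a placeholder, not the real one):

    wget -O repo-key.asc http://deb.haskell.org/deb.haskell.org.gpg-key
    # import into a throwaway keyring, then check that the certification
    # actually comes from a key in the Debian keyring
    gpg --no-default-keyring --keyring ./tmp.gpg --import repo-key.asc
    gpg --no-default-keyring --keyring ./tmp.gpg \
        --keyring /usr/share/keyrings/debian-keyring.gpg --check-sigs 0xDEADBEEF
    # only if --check-sigs shows a good signature from the expected Debian key:
    sudo apt-key add repo-key.asc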
Sadly, Debian's support for third-party repositories that are trusted by the user (and whose key, if not software, Debian can vouch for) is still somewhat sketchy -- it'd be nice if there was an "apt-get-me-a-key-for-this-url-only-if-signed-by-the-debian-keyring <apt-url>" command. That would of course imply that Debian becomes a bit of a CA for apt repositories -- which it already is in the case of deb.haskell.org.
I've been looking at hosting some apt packages in a custom repo and am truly impressed/horrified by the layers of complexity necessary & suggested to create one. It boggles my mind why an apt host has to be more than a static list of files that are hosted on S3, raw github, etc. Running and maintaining a full blown server just to host some files seems crazy.
You can host a deb/apt repo as a static list of files in S3. Did that myself before, using reprepro to generate it, then pushed it up to S3. Nothing fancy, but it worked fine. Not sure where you got the idea it was more complicated.
I've also done it with apt-ftparchive and a simple script. Very bare bones, but easy enough.
Was looking at trying aptly next time I needed one. It looked interesting.
Yeah, the problem is getting to that point. Check out the docs on setting up an apt repo: https://wiki.debian.org/HowToSetupADebianRepository -- good luck figuring out the easy way of using reprepro and S3 from that page.
The sad part is it's even easier than that, if your goal is hosting like two packages. Stick a bunch of debs in a directory, make sure your default GPG key is reasonable, then run `apt-ftparchive packages . > Packages`, `apt-ftparchive release . > Release`, and `gpg -ab < Release > Release.gpg`. Then toss the whole thing on a web server that supports plain HTTP. Honestly I bet GitHub Pages is good enough.
The syntax for sources.list is then "deb http://path/to/your/directory ./". Users can pick up your key by piping it into "sudo apt-key add -", or they can "sudo apt-key adv --keyserver... --recv-key..." with the usual GPG options (and check the fingerprint the same way).
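Pulling the above together, the whole "repo" is just this (paths and URLs are obviously made up):

    cd /var/www/debs                        # a directory full of .deb files
    apt-ftparchive packages . > Packages
    apt-ftparchive release . > Release
    gpg -ab < Release > Release.gpg         # detached, armored signature
    # client side:
    #   echo 'deb http://example.org/debs ./' | sudo tee /etc/apt/sources.list.d/example.list
    #   wget -O - http://example.org/example.gpg-key | sudo apt-key add -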
The problem is that most serious users will quickly want tooling to do things like keep track of which versions of which packages are in the archive, not have two versions of the same package, support multiple distros/releases, support separate "test" and "production" areas, etc. And that's where the complexity shows up. (Also why you never see "./" sources.list lines in real life.)
But for "Hey, I built these three packages", it's kind of perfect.
Let me just remind everyone that wiki.debian.org is, well, a wiki. I'm on a mobile right now, so trying to consolidate the apt/repo pages, trimming the obviously (too) old information, and adding a more visible "this is how simple this stuff can be, even with proper gpg-signing and http support" is a little too cumbersome. But I've been intending for a while now to help out a bit -- the wiki is somewhat neglected (compared to, say, the excellent Arch Linux wiki). Mostly I think the Debian wiki is in need of some pretty mundane wiki clean-up: consolidation, simplification and modernization (concentrate on documenting current stable and testing).
But more important than me actually doing that is convincing all of you to update the wiki whenever you encounter errors or missing information! :-)
(No affiliation with the Debian project other than being a long-time user of Debian)
I feel appropriately chastened now. :-) But that page is huge, and while apt-ftparchive is in fact listed halfway down, it's marked as deprecated in two places. And the thought of doing a general cleanup on that page (which it could use) is a bit more daunting than just adding some docs, as is the thought of arguing about just how deprecated "deprecated" really is.
Tried to create an account on the wiki, got error 919 and a message to contact the debian-www mailing list. Just hoping the list isn't members only now.
If anyone wants to collaborate on fixing that apt page then my email is in my profile. It should only take about an hour to clean up.
It should exist, and it should be the one you want to use to sign the archive. If not, make another keypair and ... figure out how the GPG command line works. :)
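If you do need a fresh key, it's basically just the following (interactive prompts ask for identity and passphrase):

    gpg --gen-key                                # answer the prompts
    gpg --list-secret-keys --keyid-format long   # note the key ID you'll sign the archive with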
Aptly is a dream to use compared to the others, generally makes sense, and is well documented. Big thumbs up for aptly from me.
I started out with a reprepro archive, which is an abandoned project (or at least it had been for a couple of years as of early last year; edit: looks like I am in error, there are commits dating to last August -- I must have been looking at the wrong repo). The biggest problem with reprepro is that you can have only one version of each package in the archive - so no rollbacks.
But yeah, though you need software to calculate the index files, once they're done, an apt archive is just a static website, easily pushable to s3. Of course, if you want a private repo, it gets a bit more difficult...
You should check out reprepro [1]. The configuration is really easy compared to other solutions [2], and once you've set it up, it's just one command to add or remove packages. The hardest part is generating a GPG key.
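For reference, a minimal reprepro setup is roughly this (codename, key ID and package name are placeholders):

    mkdir -p myrepo/conf
    cat > myrepo/conf/distributions <<'EOF'
    Origin: example.org
    Label: Example repo
    Codename: wheezy
    Architectures: amd64 source
    Components: main
    SignWith: 0xDEADBEEF
    EOF
    # adding a package regenerates and re-signs the indexes in one go
    reprepro -b myrepo includedeb wheezy hello_1.0-1_amd64.deb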
One note: it's not quite true that you can't throw it on S3 -- they're just static files. This is how the mirroring infrastructure works, after all.
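A sketch of what that can look like with the AWS CLI (bucket name and paths are made up):

    # push the generated archive to a public bucket; it's all static files
    aws s3 sync ./myrepo s3://my-apt-bucket/debian --acl public-read
    # clients then just point sources.list at the bucket's HTTP endpoint, e.g.
    #   deb http://my-apt-bucket.s3.amazonaws.com/debian wheezy main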
If you have questions about setting up an apt repo, feel free to reach out!
I can totally relate, that's why we built https://packagecloud.io to provide a service for securely and easily hosting debian and rpm files. We also provide chef and puppet cookbooks to make integration as easy as possible. Give it a try!
For apt repositories, you might be interested in http://www.aptly.info/, especially if you want to host it on S3, as it integrates very well. As others have mentioned, reprepro isn't that tough to use either, but moving it to S3 instead of somewhere else basically amounted to using an S3 URI. It also has some other features that might be handy, e.g. versioning/snapshots and serving your repos locally for testing.
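To give an idea, publishing to S3 with aptly looks roughly like this (the names below are placeholders, and the endpoint has to be defined under S3PublishEndpoints in ~/.aptly.conf first):

    aptly repo create -distribution=wheezy -component=main myrepo
    aptly repo add myrepo hello_1.0-1_amd64.deb
    # "mybucket" refers to an S3PublishEndpoints entry in ~/.aptly.conf
    aptly publish repo myrepo s3:mybucket: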
In any case, I find the process of actually creating the packages far more arduous than setting up the repo...
I forgot to also mention -- apt was really made to service big repositories and big installations (like the main Debian repo). What works for a large, stable, heavily-trafficked site isn't necessarily right for a small one, so small repos are second-class citizens.
Such a waste... they probably ran some crappy spam/DoS drone on it, when binaries from that host are presumably pulled and executed inside more or less every major financial institution on the planet.
I"m pretty sure most banks are using RHEL or the like, rather than Debian. At least for production environments, something about about enterprise support contracts and the like.
That's the whole point of the GP's comment. You can have all the security-related features you want baked into your language, but if you distribute it in a way that can be compromised, that's all for naught.
Likely an IDS such as Snort or Suricata to detect compromised hosting clients. And possibly something to measure unusual traffic volume, or traffic to suspicious regions or hosts.
They were probably spamming someone else and choking some network uplink at their provider, which is why the provider itself told them about the possible compromise.
Either that or killing other resources and raising very simple alerts (disk, CPU, RAM, Ethernet ports, etc).
For the record, the hosted files were daily builds of HEAD. Despite the domain name, these packages did not have "official status" anywhere and were not distributed outside of that site to our knowledge.
In fact there are very few users that we know of -- to the point that when the site went down, there were no reports filed or complaints made.
That said, when the service gets up and running again, the key will indeed need to be replaced.
It can be because it was targeted or it can be because it's connected to the internet. Any vulnerable server will do for a spammer or botnet herder (they use hijacked servers to control the bots).