
Deb.haskell.org Security Breach - corndoge
https://status.haskell.org/
======
kfreds
The most important information is missing - did the attackers get the package
signing key? The comment "the window for package compromise was very very
small" is a bit ambiguous.

That the security of package integrity is so fragile should greatly concern
all of us. Almost all big Linux distributions rely on downloading compiled
packages signed by a single key. If the build system is compromised your Linux
distribution of choice will happily give you trojaned packages that you
install as root. If the signing key is compromised the attacker additionally
needs a man-in-the-middle position, but mounting that attack is easier than it
should be because most repositories aren't served over TLS.

The situation can be improved in many ways. By making the build process
deterministic many parties can compile the same package and compare the
result, alerting the community if one of the build systems reports a different
checksum. Package managers like apt and yum should be extended with the
ability to rely on signatures from multiple parties.
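The multi-party idea is simple enough to sketch in a few lines of shell. Everything here is illustrative (the 2-of-3 threshold, the file, and the builder checksums are made up; in practice the checksums would be fetched from independent builders over separate channels):

```shell
# Sketch: only trust a package if a quorum of independent builders
# published the same checksum for it.
printf 'package contents' > mypkg.deb        # stand-in for a downloaded .deb

local_sum=$(sha256sum mypkg.deb | cut -d' ' -f1)

# Checksums as reported by three hypothetical independent builders
# (here all identical, simulating agreement).
builder_sums="$local_sum
$local_sum
$local_sum"

# Count how many builders agree with what we downloaded.
agree=$(printf '%s\n' "$builder_sums" | grep -c -x "$local_sum")

# Require at least 2 of 3 builders to agree before installing.
if [ "$agree" -ge 2 ]; then
    echo "OK: $agree/3 builders agree; safe to install"
else
    echo "REJECT: only $agree/3 builders agree" >&2
fi
```

A multi-sig-aware apt or yum would do essentially this before unpacking anything.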

The Tor Project, Debian and Fedora have started working on the first problem,
but I don't know of any efforts to support multi-sig.

[https://blog.torproject.org/blog/deterministic-builds-part-one-cyberwar-and-global-compromise](https://blog.torproject.org/blog/deterministic-builds-part-one-cyberwar-and-global-compromise)

[https://blog.torproject.org/blog/deterministic-builds-part-two-technical-details](https://blog.torproject.org/blog/deterministic-builds-part-two-technical-details)

[https://wiki.debian.org/ReproducibleBuilds](https://wiki.debian.org/ReproducibleBuilds)

[https://securityblog.redhat.com/2013/09/18/reproducible-builds-for-fedora/](https://securityblog.redhat.com/2013/09/18/reproducible-builds-for-fedora/)

~~~
vezzy-fnord
Nix already solved the first problem a while ago, though its adoption is
hindered by the fact that it would essentially deprecate the entire
infrastructure that actually makes a distribution a distribution, and is
therefore against the incumbents' interests. This is in spite of widespread
claims of wanting standardization.

~~~
drostie
Nix is simply amazing, and there IS a distribution (NixOS[1]) which ships it.
The big problem, of course, is keeping packages _current_.

Nix and SmartOS[2] are both really important in my book. Nobody's talking
about them as distributions but they both have intense potential to completely
change not just server-administration (which is what they solve right now) but
having a nice desktop experience. It would look like the Windows desktop
experience, where they try to hide C: from you and just give you a set of
shared-across-your-applications folders: Documents, Desktop Icons, Pictures.
When every application occupies its own universe, then the OS itself behaves
just the way that the "window" metaphor suggests; moreover you can start to
seamlessly include VMs and get Windows applications living next to OS X ones.
The only cost is hard-drive space, and even that can be shared if packages are
built reproducibly, so that identical packages have identical checksums.

Eventually, I hope that people will just assume that an application "comes
with" its operating environment.

1\. [http://nixos.org/](http://nixos.org/)

2\. [https://smartos.org/](https://smartos.org/)

~~~
vezzy-fnord
My point was concerning the mainstream distributions adopting it. I'm well
aware of NixOS.

------
tdicola
I've been looking at hosting some apt packages in a custom repo and am truly
impressed/horrified by the layers of complexity necessary & suggested to
create one. It boggles my mind why an apt host has to be more than a static
list of files that are hosted on S3, raw github, etc. Running and maintaining
a full blown server just to host some files seems crazy.

~~~
eikenberry
You can host a deb/apt repo as a static list of files in S3. Did that myself:
used reprepro to generate it, then pushed it up to S3. Nothing fancy,
but it worked fine. Not sure where you got the idea it was more complicated.

I've also done it using apt-ftparchive using a simple script. Very bare bones,
but easy enough.

Was looking at trying aptly next time I needed one. It looked interesting.

[https://packages.debian.org/jessie/reprepro](https://packages.debian.org/jessie/reprepro)

[https://packages.debian.org/jessie/apt-utils](https://packages.debian.org/jessie/apt-utils)

[https://packages.debian.org/jessie/aptly](https://packages.debian.org/jessie/aptly)

[edit to include all links]
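For anyone curious how small the reprepro side really is, here's a minimal sketch of its `conf/distributions` file (the codename, architecture and component values are just examples):

```
# conf/distributions (minimal sketch)
Codename: jessie
Architectures: amd64
Components: main
SignWith: yes
```

With that in place, `reprepro -b . includedeb jessie foo.deb` adds a package and regenerates the signed indexes, and the resulting tree can be pushed up with something like `aws s3 sync . s3://my-bucket/repo` (bucket name hypothetical).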

~~~
tdicola
Yeah the problem is getting to that point. Check out the docs on setting up an
apt repo:
[https://wiki.debian.org/HowToSetupADebianRepository](https://wiki.debian.org/HowToSetupADebianRepository)
Good luck figuring out the easy way of using reprepro and S3 from that page.

~~~
geofft
The sad part is it's even easier than that, if your goal is hosting like two
packages. Stick a bunch of debs in a directory, make sure your default GPG key
is reasonable, then run `apt-ftparchive packages . > Packages`,
`apt-ftparchive release . > Release`, and `gpg -ab < Release > Release.gpg`. Then
toss the whole thing on a web server that supports plain HTTP. Honestly I bet
GitHub Pages is good enough.

The syntax for sources.list is then "deb
[http://path/to/your/directory](http://path/to/your/directory) ./". Users can
pick up your key by piping it into "sudo apt-key add -", or they can "sudo
apt-key adv --keyserver... --recv-key..." with the usual GPG options (and
check the fingerprint the same way).

The problem is that most serious users will quickly want tooling to do things
like keep track of which versions of which packages are in the archive, not
have two versions of the same package, support multiple distros/releases,
support separate "test" and "production" areas, etc. And that's where the
complexity shows up. (Also why you never see "./" sources.list lines in real
life.)

But for "Hey, I built these three packages", it's kind of perfect.

~~~
e12e
Let me just remind everyone that wiki.debian.org is, well, _a wiki_. I'm on a
mobile right now, so trying to consolidate the apt/repo pages, trim the
obviously (too) old information, and add a more visible "this is how
simple this stuff can be, even with proper gpg-signing and http support"
section is a little too cumbersome. But I've been intending for a while now to
help out a bit -- the wiki is somewhat neglected (compared to, say, the
excellent Arch Linux wiki). Mostly I think the Debian wiki is in need of some
pretty mundane wiki clean-up: consolidation, simplification and modernization
(concentrate on documenting current stable and testing).

But more important than me actually doing that is convincing all of you to
update the wiki whenever you encounter errors or missing information! :-)

(No affiliation with the Debian project other than being a long-time user of
Debian.)

~~~
voltagex_
Tried to create an account on the wiki, got error 919 and a message to contact
the debian-www mailing list. Just hoping the list isn't members only now.

If anyone wants to collaborate on fixing that apt page then my email is in my
profile. It should only take about an hour to clean up.

~~~
e12e
I just signed up without any problems. Did you try again?

~~~
voltagex_
Yeah, got a reply that indicated I'd been caught in a spam filter (at
$EMPLOYER, behind a proxy). All sorted now.

------
_wmd
Such a waste... they probably ran some crappy spam/DoS drone on it, when
binaries from that host are probably pulled and executed inside more or less
every major financial institution on the planet.

~~~
intortus
That's what they'd like you to think.

~~~
voltagex_
Excel runs the world, right?

~~~
dippyskoodlez
Can confirm. It runs Sprint.

------
glibgil
Where's your type system now, Haskell?

~~~
Buttons840
Is this deb package repository built on Haskell technologies? I don't think
Haskell's type system is involved here.

~~~
jacquesm
That's the whole point of the GP's comment. You can have all the security-
related features you want baked into your language, but if you distribute it
in a way that can be compromised, that's all for naught.

------
49i4
Curious what monitoring system they have to detect things like that.

~~~
meowface
Looks like the hosting provider picked up on it.

Likely an IDS such as Snort or Suricata to detect compromised hosting clients.
And possibly something to measure unusual traffic volume, or traffic to
suspicious regions or hosts.

~~~
dmix
Also, a HIDS such as OSSEC could detect tampering with the operating system,
such as a rootkit installing a kernel module or modifying a binary.
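The core trick behind that kind of file-integrity monitoring is simple enough to show in a few lines. A toy sketch of the baseline-and-recheck idea (this is not OSSEC itself, which also watches logs, rootkit signatures, and much more; the files here are stand-ins):

```shell
# Record a baseline of checksums for the binaries we care about,
# then re-check later and flag anything that changed.
mkdir -p demo-bin
printf 'original' > demo-bin/tool            # stand-in for a system binary

sha256sum demo-bin/* > baseline.txt          # take the baseline

printf 'tampered' > demo-bin/tool            # simulate an attacker modifying it

# --quiet suppresses per-file OK lines; mismatches are still reported.
if sha256sum -c --quiet baseline.txt 2>/dev/null; then
    echo "no tampering detected"
else
    echo "ALERT: file changed since baseline"
fi
```

A real HIDS keeps the baseline somewhere the attacker can't rewrite it, which is the hard part.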

------
sclv
For the record, the hosted files were daily builds of HEAD. Despite the domain
name, these packages did not have "official status" anywhere and were not
distributed outside of that site to our knowledge.

In fact there are very few users that we know of -- to the point that when the
site went down, there were no reports filed or complaints made.

That said, when the service gets up and running again, the key will indeed
need to be replaced.

------
hoodoof
Why? Why do systems like this get hacked?

~~~
andrewchambers
Well, this site distributes binaries that package managers install on people's
systems. That's a good way to spread a virus.

