
Distribution packages considered insecure - g1n016399
https://statuscode.ch/2016/02/distribution-packages-considered-insecure/
======
antonios
I've always considered the Debian model of support (freeze the world and try
to support everything) to be wrong and swimming against the tide. I believe
that the only ones capable of supporting a package of a given complexity are
the authors themselves, and the distros should just handle packaging (and
minimal patching on top if necessary). If the author drops support, then
either be _extremely_ confident you have an adequately strong team to fork and
maintain the package... or just drop it.

In my humble opinion, the FreeBSD ports model is better in that regard. That's
also why I try to use pkgsrc for various packages when maintaining systems
running LTS Linux distros.

~~~
SXX
One reason authors can't support packages is that their views may not match
Debian's, which is why there are external maintainers. E.g., we have an open
source game engine with a launcher that downloads mods, and Debian's privacy
policies don't allow it to check for mod updates by default.

There are also authors who don't care to support the ancient library versions
present in all distributions, or who simply don't care about anything except
static linking. Some people don't use Linux at all, even if their project does
support Linux.

~~~
bbrazil
As an example, "ancient" for many projects means two years.

That's when enterprises might start considering moving onto that version, so
the incentives aren't really aligned here.

~~~
_yy
> That's when enterprises might start considering moving onto that version, so
> the incentives aren't really aligned here.

Easier said than done in many environments. RHEL 6, released in 2010, is still
supported until 2020!

------
calpaterson
This is a real problem. It is a total joke to suggest that people move
important systems to Archlinux though. Basically as an administrator you need
to be aware of security advisories against the OS you run. This is already
true because you need to know when to upgrade packages. If you are running
Debian you should sign up for debian-security-announce:

[https://lists.debian.org/debian-security-announce/](https://lists.debian.org/debian-security-announce/)

There are equivalent mailing lists for CentOS. It would probably be helpful to
have better tooling to warn about or flag packages with known vulnerabilities.

I'm not an expert on Debian's internal processes, but sometimes it seems that
Debian adds packages to the distribution without a clear plan for how to fix
security issues in them. Sometimes upstream maintainers seem hellbent on
making it impossible to offer long-term support for software. Elasticsearch is
a case in point:

[https://www.debian.org/security/2015/dsa-3389](https://www.debian.org/security/2015/dsa-3389)

~~~
mbrock
Lazyweb question. Is there a way for me to easily keep track of important
upstream security announcements for some given set of software—say,
ElasticSearch, Rails, nginx, and Linux? Like I could check those check boxes
and get an RSS feed, or something?

~~~
reidrac
Related to open source, you probably want to track the oss-security mailing
list [1]; or, if you're part of an organization relying on open source
software, you should have someone looking at that list.

I think it is a good complement to the security advisories of your Linux
distributor (Debian, CentOS, whatever), but it isn't free, as you'll need time
to keep an eye on it.

1: [http://oss-security.openwall.org/](http://oss-security.openwall.org/)

~~~
mbrock
Thanks. Regarding cost... someone should sell curated security feeds to
startups.

There must be many dozens of engineers who monitor mailing lists for issues in
(let's say) ElasticSearch. They could be incentivized to share their findings.

Then if one source tags a certain CVE as significant, I get a ping. If several
sources tag it, the ping gets more urgent. Eventually I feel scared enough
that I upgrade.

~~~
tie_
There was something made to ease exactly that pain... sadly it's no longer
updated: [http://www.websecuritywatch.com/](http://www.websecuritywatch.com/)

------
r0muald
> But can you really be sure that people that do this stuff as an hobby can
> deliver this in the quality that you expect and require? Let’s be honest
> here, probably not.

This must be the most short-sighted description of Debian developers I have
ever seen, because 1) most do packaging as part of their paid job, and 2)
apparently millions of people in the world believe that, yes, these people do
a fine job at maintaining those packages. Not to mention those who develop the
software they package, or those who do academic research on software packaging
and dependency resolution. But insulting Debian has become commonplace
nowadays.

~~~
_yy
The author also seriously recommends Arch Linux and Tumbleweed as
alternatives. Anyone who has ever maintained a server in an enterprise
production environment knows how valuable the kind of stability the author
criticizes is. Not everyone lives in a DevOps environment where you can "move
fast and break things".

He does have a point about unmaintained packages, though. Debian probably
shouldn't package things it cannot support over the lifetime of the release.
Why would I ever want to use a two-year-old Wordpress version? I never
understood why those kinds of applications are packaged at all.

Most sysadmins deal with this by including a third-party repository for
applications like Elasticsearch (and many others), often provided by the
authors themselves, which takes care of security and version updates. This
eliminates most of the concerns: you get a rock-solid operating system and an
up-to-date application on top of it. Web applications like Wordpress or
Owncloud aren't usually deployed from distro packages at all.

~~~
acdimalev
Countering one of your points, yet bolstering your overall statement: even in
a DevOps environment, packaging is important. Like a system image or any other
build artifact, a distribution package provides consistency and repeatability.

If I had to wait for a rebuild of the entire operating system to test
deployment of a single application, how much confidence could I possibly gain
in that application's deployment process?

Also... it seems the author is suggesting that newer software always has
fewer bugs. Hmm...

------
skarap
Well, you have to choose something. You can't have security, automated
updates to the latest version, and stability all in the same place. You're
either in the Debian/RHEL world, with old kernels, old libraries and old
userspace tools that miss a lot of fresh features, or in the npm/pip/curl|bash
world, where you have the latest version of everything all the time.

In the former case you can do `apt-get upgrade`/`yum update` and be almost
sure that everything will continue working, but - no - you can't have PHP 7.

In the latter case you either use npm-shrinkwrap-like tools to install the
exact same version of everything every single time, or play Russian roulette
with new dependency versions. And - just in case you didn't notice - when you
pin a package to a specific version you no longer receive security upgrades
for it. And let's be honest - you have a lot of those "^1.0.1", "~0.10.29",
"^0.3.1" things in your package.json/Berksfile/... And for almost any package
"^0.3.1" is effectively the same as "0.3.1", because the next version will
obviously be "1.0.0" and 0.3.X won't be receiving any more updates.
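
To make those range semantics concrete, here is a quick sketch using the
`semver` CLI that ships with the node-semver package (the version numbers are
illustrative):

    $ npm install -g semver
    $ semver -r "^1.0.1" 1.0.2 1.1.0 2.0.0   # caret on 1.x: minor+patch
    1.0.2
    1.1.0
    $ semver -r "^0.3.1" 0.3.2 0.4.0 1.0.0   # caret on 0.x: patches only
    0.3.2

So once upstream stops cutting 0.3.X releases, "^0.3.1" behaves like a hard
pin.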

It's obvious that no single distribution will be able to package the insanely
large number of packages from all the different sources, let alone backport
patches. So you either limit yourself to the stuff available in your
distribution, or you're on your own with updates (including security ones).

As for packages updating themselves, sometimes it's a good thing, sometimes
it isn't. I bet a wordpress installation which can't overwrite itself (because
it is owned by root) and doesn't allow executing user-uploaded .php files will
be much more secure than one which has full access to itself.

P.S. No amount of tooling can solve this problem. If you're using version X
of package A and then find out that there is a security vulnerability in
version X which is fixed in version Y, and version Y is not fully compatible
with version X (changed API, config file, anything else in a
backwards-incompatible way), you're semi-screwed. You will have to handle that
situation manually.

~~~
stephenr
I personally find that Debian stable for most packages, plus vendor/packager
repos for specific software (e.g. Percona, Varnish, Dotdeb, NodeSource, etc.),
works very well.

~~~
_yy
This is exactly what any company using Debian in production does.

------
the_ancient
This seems to be less a complaint about packages and more a debate over
rolling vs non-rolling distros.

The author seems to take the opinion that rolling is always better for
security because he selected a few "web" packages and found vulnerabilities in
the shipping versions.

Of course, the people running mission-critical enterprise applications
probably are not running phpmyadmin on that server, so they couldn't care less
whether the repo for their stable version of Debian contains that older
software.

I agree it is a problem, but the solution to that problem cannot simply be
rolling all the time.

I use Arch on my desktop system, but I would never run my employer's
mission-critical database on Arch... I need stability and security, not simply
security.

Ultimately I think we need a better way to define the core OS vs "user
applications".

Things like glibc and the kernel are clearly core OS; things like phpmyadmin
are user applications. Where it gets grey is databases, webservers, etc. Do
you label them a core product or a user app? If I were running a
mission-critical application I would not want my MariaDB system just upgraded
to the latest version, with new features and possibly breaking (removing)
older features that my app may still need.

Enterprises move slowly. I still have enterprise applications that require
Java 6; I have some things that still run only on Windows 2000. The idea that
rolling to the latest version of software all the time is a workable plan
highlights the author's ignorance of how enterprises actually work.

~~~
kchoudhu
The FreeBSD folks have made a pretty good separation between "base" (what you
call Core OS) and "ports" (what you call user applications). It works
wonderfully.

This isn't a new observation, and I'm kind of surprised it hasn't made it into
mainline linux packaging philosophy yet.

~~~
the_ancient
My knowledge of FreeBSD is limited, but from what I understand this still
does not resolve the issue.

While they have separated out the base OS, they have not resolved the issue of
applications that can be mission-critical and thus warrant being frozen like
the base OS is. Things like apache, mysql, etc.

If you treat them like user apps then they will be updated when maybe I did
not want them to be, but if you treat them like the base OS then you have the
problem the author outlines.

Further, my understanding is that the ports system only supports the latest
version of FreeBSD, meaning that if you want to use ports you always have to
update to the latest version when it is released.

I have CentOS and RHEL servers running version 5; if they were FreeBSD I would
not be able to use such a two-release-old version of FreeBSD with ports, I
would have to upgrade.

~~~
kchoudhu
>> Things like apache, mysql, etc etc etc.

That's well beyond the remit of the OS developers, who are only responsible
for FreeBSD, but the ports team has happily provided us with `pkg lock` to
prevent changes to ports once they are installed.
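
For instance (a small sketch on a FreeBSD box; the package name is just an
example):

    # pin mysql57-server at its current version; pkg upgrade will skip it
    pkg lock mysql57-server
    # list everything currently locked
    pkg lock -l
    # release the lock when you're ready to upgrade
    pkg unlock mysql57-server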

>> my understanding is the ports system only supports the latest version of
FreeBSD

I believe the ports tree will be compiled for a given branch until the
underlying branch is EOL. FreeBSD 8, for example, had binary packages built
from 2009(?) until the branch was EOL'd last year. That's six years of support
for 3rd-party software, after which you can maintain/build your own pkg
repository.

I understand the desire for longer environment lifecycles in FreeBSD (I hate
the continuous upgrade cycle myself), but without a company stepping up and
providing long-term commercial support, I don't see how a volunteer project as
small as FreeBSD can do more for longer.

~~~
the_ancient
>> That's well beyond the remit of the OS developers who are only responsible
for FreeBSD

That is exactly the point the author is making by declaring that "packages are
not secure": they are not maintained by the OS, but are distributed as
"official", thus giving some users the illusion that they are supported or "in
the remit" of the OS developers.

------
jldugger
> Did you know that for example Wordpress comes with automatic updates

I grant that Debian is not great at webapps. OTOH, giving an application
write access to itself is inherently risky. It's too bad shared webhosting and
its limitations have so warped the PHP application community.

An alternative model: the web app has r-x on the app files, and an
app-specific admin user has rwx and runs the check-and-update script on a
regular basis.
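
A minimal sketch of that split (user names, paths, and the update script are
all hypothetical; WordPress is just the running example in this thread):

    # app files owned by a dedicated admin user; the web server group
    # (www-data) gets read/execute only, so the app cannot rewrite itself
    chown -R wpadmin:www-data /var/www/example-app
    find /var/www/example-app -type d -exec chmod 750 {} +
    find /var/www/example-app -type f -exec chmod 640 {} +

    # the update check runs as the admin user, not the web server,
    # e.g. in /etc/cron.d/example-app (update-app.sh is hypothetical)
    0 4 * * * wpadmin /usr/local/bin/update-app.sh /var/www/example-app

Directories that must stay writable (e.g. an uploads directory) would get an
explicit exception rather than making the whole tree writable.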

~~~
rmccue
WP has the ability to use (S)FTP to update the files rather than direct file
access.

That said, it's intended for the long-tail of sites. If you know how to run an
update script, you're already more advanced than a fairly big chunk of the WP
userbase, so you probably should be doing updates that way. Many
professionally developed sites will be using source control, so autoupdates
are disabled anyway.

~~~
_yy
> WP has the ability to use (S)FTP to update the files rather than direct file
> access.

Well that doesn't help at all with security.

------
goodplay
I think that labeling point-release distribution package managers as insecure
because they are sub-optimal for a certain use case (updating web
applications) seems a tad excessive.

Having software go through a decent distribution's packaging process not only
provides stability guarantees, but also off-loads many tasks that you would
otherwise have to perform on a per-project/per-developer basis.

I'm perfectly satisfied with the trade-offs.

------
hannob
The author makes some important points, but there is a cruel irony: he's a
main developer of Owncloud, which in terms of security updates is a huge
problem.

Owncloud has its own update mechanism, which unfortunately usually doesn't
work in the real world (it breaks if you have any reasonable timeout for your
PHP applications, which every normal webhoster has). There are likely
countless Owncloud installations with known security issues whose users tried
to update but couldn't. (The alternative is a manual update on the command
line, but given the target audience of Owncloud it's safe to assume that many
of its users aren't capable of doing that.)

------
fibo
It is true that the stable branch has dated version numbers, probably most of
the time for a good reason (e.g. long-term support).

On the other hand, I adhere to Slackware's vanilla philosophy, and with
slackbuilds you can always have fresh, up-to-date software.

Well, if you do this you should have a dev/test/prod chain for your servers;
otherwise you update at your own risk.

In my last upgrade I went from Ubuntu 12/Node 0.10 to Ubuntu 14/Node 4, but
nginx also changed, and even logrotate going from 3.7.8 to 3.8.7 introduced a
few modifications that broke my configuration files.

Upgrading is not that easy.

I would like to share a project of mine that brings the vanilla philosophy to
every distro. You have a script to build your software and install it locally;
so, for example, you have version z.y.x installed on your system and you want
to install z.y.(x+1), released yesterday.

Normally you download the tarball, bla bla bla, and launch make install. Most
of the time you follow really similar steps, so you can put them in a script
and launch

    .software_install Foo

if the version is the same as in the README.md, or even

    .software_install Foo 1.2.3

to install a specific version. It is really easy to add new software to the
list. You can also package your software to avoid compile time on other hosts
(test and prod). Give it a try, I think it can be useful to many system
administrators and developers:

[http://g14n.info/dotsoftware](http://g14n.info/dotsoftware)

~~~
_yy
Your project reminds me of 0install.

------
dozzie
Yup, the guy still doesn't have a clue what he's talking about.

[https://news.ycombinator.com/item?id=11095783](https://news.ycombinator.com/item?id=11095783)

~~~
cwyers
The author brought evidence -- CVEs that aren't getting fixed in Debian repos.
What's your counter-evidence? Letting software keep up with upstream is as
insecure as upstream, no more or less. What's the evidence that pinning
upstream to some older version and backporting some (not all) security patches
and bug fixes is more secure?

~~~
danieldk
_The author brought evidence -- CVEs that aren't getting fixed in Debian
repos. What's your counter-evidence?_

Just poking around for two minutes finds counter-evidence. Let's take the two
critical Wordpress vulnerabilities. He uses an archive.org page of the
Wordpress package from February 7. If he had actually looked in the changelog
of the Wordpress package for e.g. Jessie, he would have seen that these issues
were fixed the day before in stable and two days before in sid:

[http://metadata.ftp-master.debian.org/changelogs/main/w/wordpress/wordpress_4.1+dfsg-1+deb8u8_changelog](http://metadata.ftp-master.debian.org/changelogs/main/w/wordpress/wordpress_4.1+dfsg-1+deb8u8_changelog)

According to Red Hat, the CVEs were announced on February 4:

[https://bugzilla.redhat.com/show_bug.cgi?id=1305471](https://bugzilla.redhat.com/show_bug.cgi?id=1305471)

So I guess it depends on your definition of "aren't getting fixed". But the
way the author writes it, it seems that issues linger for weeks or months,
which does not seem to be true.

~~~
LukasReschke
The archive.org links were captured at the creation of the blog post (which
was not necessarily the release date ;-) - I needed to find some time to write
it).

So Wordpress is an interesting example, because the CVE assignment date has
nothing to do with the release date of the patches. Wordpress doesn't request
the CVEs on its own.

So we're still at a 4-5 day delay
([https://wordpress.org/news/2016/02/wordpress-4-4-2-security-and-maintenance-release/](https://wordpress.org/news/2016/02/wordpress-4-4-2-security-and-maintenance-release/))
for security fixes for web-facing software. This is still far worse than just
enabling automatic updates in Wordpress. I wouldn't have much of a problem if
this were a locally exploitable vuln, but web software is usually exploitable
via the web.

When it comes to web software, I believe it's unacceptable to add any
additional delay. (Sure, those bugs were not that severe, but as other
examples in the blog post show, the problem of delayed or never-updated
packages is inherent.)

~~~
LeonidasXIV
> This is still far worse than just enabling automatic updates in Wordpress.

A webapp updating itself, having write access to its own files (I see zero
privilege separation in [1]), and getting updates from a hopefully
uncompromised source (see Linux Mint lately) - that's just asking for trouble.
I trust the Debian mirror infrastructure, with signed packages that are
updated by a privileged system user, way more.

[1]:
[https://codex.wordpress.org/Configuring_Automatic_Background_Updates](https://codex.wordpress.org/Configuring_Automatic_Background_Updates)

------
dochtman
I think Debian's long release cycles don't make much sense in this day and
age. To me, a rolling release model makes much more sense, especially in this
world where security updates are being done constantly, and are generally
focused on the more modern branches.

For enterprises where software is part of the core business, keeping up with
updates on a regular basis is probably better than doing large upgrades every
once in a while (for one thing, tracking down regressions is a lot easier when
you don't have to search the entire haystack).

~~~
koja86
By using the release designation instead of toy codenames ("testing" instead
of "Stretch") in the apt config files, I get pretty close to a rolling
release. Using that and apt pinning, I am totally happy running a
stable/testing/unstable/experimental system with the majority of packages from
testing on my desktop. I am rather conservative and would avoid anything
bleeding-edge on production systems, so using stable with security updates
(and stable-backports if needed) would be my choice there. YMMV though.
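
A minimal sketch of that setup (the mirror URL and the priorities are
illustrative):

    # /etc/apt/sources.list - track suites by designation, not codename
    deb http://httpredir.debian.org/debian testing main
    deb http://httpredir.debian.org/debian unstable main

    # /etc/apt/preferences - prefer testing, allow cherry-picking via
    # `apt-get -t unstable install foo`
    Package: *
    Pin: release a=testing
    Pin-Priority: 900

    Package: *
    Pin: release a=unstable
    Pin-Priority: 300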

~~~
ckuehl
Just want to emphasize (and this is not directed specifically at you): you
almost certainly shouldn't run Debian testing on anything that is
public-facing. Packages migrate from unstable to testing only after some days,
and only if no high-priority bugs are filed against them in the meantime.

If a security patch is uploaded to unstable today, you won't get it in testing
for a few days, and possibly many more if the migration gets blocked.

[https://www.debian.org/devel/testing](https://www.debian.org/devel/testing)

~~~
g1n016399
You can install security updates from unstable using the output of debsecan:

    apt-get -t unstable install $(debsecan --suite sid --format packages --only-fixed)

~~~
ckuehl
Just keep in mind that unstable is not guaranteed to get security fixes
promptly, either. The Debian Security Team only handles supported releases.

The Security Team FAQ is a good read:
[https://www.debian.org/security/faq#unstable](https://www.debian.org/security/faq#unstable)

It's quite explicit in saying that if security is important to you, then you
should run a supported release.

------
takluyver
The conclusion I take from this is that distros need to be a lot more
selective in what they package. If packagers can't reliably backport security
fixes for the several years that a distro release is supported, they shouldn't
create that expectation by putting the package in.

~~~
stsp
OpenBSD purged many webapps from its ports tree one or two releases ago for
this reason (too many security problems to keep up with). Most such packages
didn't do much more than extract .php files to somewhere in /var/www anyway.
Users of such apps are now expected to do this themselves and keep track of
updates.

------
PMan74
A bit meta, but articles that use the passive voice, such as "Blah considered
insecure", annoy me. Considered insecure by whom? The author? Then why not
just say "Distribution packages _are_ insecure"? Sure, the phrasing makes it
sound like there is some consensus here, but that does not actually seem to be
the case.

~~~
avian
"x considered y" is such a common template for titles of articles in computer
science that it in fact has its own Wikipedia page:

[https://en.wikipedia.org/wiki/Considered_harmful](https://en.wikipedia.org/wiki/Considered_harmful)

~~~
PMan74
Ha, from the wiki, it looks like I'm not the only one rubbed up the wrong way
by this phrase:

[http://meyerweb.com/eric/comment/chech.html](http://meyerweb.com/eric/comment/chech.html)

------
HerrMonnezza
Well, I can agree with the concerns. But what are the alternatives? What
GNU/Linux distributions do (and Debian in particular) looks to me like the
least of all possible evils.

------
raintrees
How does RHEL rate in regard to this issue? Isn't that one of Red Hat's
selling points?

~~~
danieldk
Red Hat seems to be very swift when it comes to security updates. They provide
reports on this web page:

[https://www.redhat.com/security/data/metrics/](https://www.redhat.com/security/data/metrics/)

Nearly all the vulnerabilities were apparently fixed within one day. This is
the report of RHEL 7:

[https://www.redhat.com/security/data/metrics/summary-rhel7-critical.html](https://www.redhat.com/security/data/metrics/summary-rhel7-critical.html)

One thing to note is that RHEL comes with far fewer packages than Debian (e.g.
Wordpress and phpmyadmin don't seem to be in the main repository), so you may
have to rely on slower third-party repositories.

~~~
cpitman
When you actually have a subscription for RHEL, you can also get email
notifications whenever security fixes, bug fixes, and/or feature enhancements
are released for packages that are installed on your servers. There is some
configuration you can do to pick what events you get notified for (like bug
fixes only).

And if you find yourself managing an enterprise environment, Red Hat Satellite
(basically on-premise RHN + promotion/environment/configuration management)
tracks what errata are applied to or required by each system and can report on
what the gaps are.

I work for Red Hat, and making it easy to support Linux is definitely our
bread and butter.

~~~
snuxoll
And if you're like me and too cheap to pay for a Red Hat subscription, the
CentOS-announce mailing list also announces all the Red Hat errata - you
should absolutely be subscribed to this list if you run CentOS in production.

------
berdario
I thought about this issue some time ago:

[http://security.stackexchange.com/questions/109026/security-updates-foss-upstream-policies-how-are-they-chosen](http://security.stackexchange.com/questions/109026/security-updates-foss-upstream-policies-how-are-they-chosen)

I agree that there's a distinction to be made between core/base and user-
applications/ports (as mentioned elsewhere in this thread)...

Ultimately it's all software, and the distinction is fuzzy: e.g. the kernel
won't easily break backwards compatibility, while databases, interpreters,
etc. will... but it's not something you can easily measure without being
vigilant about every change.

I think that an important distinction (at least for Ubuntu) is between the
main and universe repositories: I'd expect these problems to happen in
universe and to be mostly absent from main.

From this point of view, a good choice would be to rely completely on main,
and to weigh the pros and cons when deciding whether to use the repositories
to manage the applications/libraries/dependencies for your actual service.
(I'd probably define all of them with Nix, but that's not a panacea.)

The problem is that even main isn't guaranteed to keep up with all the
security updates: in some cases updates aren't prepared and shipped because
the default configuration is not vulnerable (but obviously a sysadmin could
change that) and it's not worth the effort. Or, as in the Python 2.7.9 case: a
security update can most often be applied as a standalone patch, but if the
changes are overarching and not easily distilled into a patch, the update will
be too expensive/risky and won't be done.

------
teekert
Oh yes, this is a big problem. I still run into things like installing the
ownCloud client on Ubuntu and then wondering why it doesn't work (it's years
old). Recently I found out that python3-pandas on the Raspberry Pi is version
0.14.something, which has annoyances that have long since been solved. If you
install Drush (Drupal Shell) from Ubuntu 14.04, you get version 5.10! (They
are at 8.1.) Arch Linux is already much better, but it breaks things
occasionally, and it was kicked off of Digital Ocean, sadly.

I really hope Ubuntu Snappy packages, which are essentially the same as the
Arch User Repository but more secure, if I understand correctly, will solve
this messy misery.

~~~
g1n016399
Based on what you have said it sounds like you want Ubuntu's rolling release,
which is upgraded daily.

Ubuntu Snappy packages are based on the normal Ubuntu packages so they will be
at the same versions.

~~~
teekert
My understanding was that Snappy packages can be descriptions that point to
e.g. GitHub to get the latest versions. Hmm, I'd never heard of Ubuntu
rolling, and it is not at the VPS providers I know. But I will check it out!

------
DyslexicAtheist
The fact that we haven't solved this yet is puzzling. It seems we have
actually gone backwards in some ways over the last decade, with languages
implementing several broken distribution packaging systems that compete
against each other. Not even to mention the horrible practice of piping random
_sh1t_ from the web with curl/wget into a shell.

It would be more or less trivial to implement a wget wrapper that downloads
the .sig and validates it. Hardly anyone these days understands or cares,
which seems to be the bigger issue (not a technical problem but a people
problem, as _Gerald M. Weinberg_ would say).
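
Something like this minimal sketch (the function name and the detached-.sig
naming convention are assumptions; the signer's key must already be in the
local keyring):

    fetch_verified() {
        url="$1"
        file="${url##*/}"
        # fetch the artifact and its detached signature
        wget -q "$url" "$url.sig" || return 1
        # discard the file if the signature doesn't check out
        if ! gpg --verify "$file.sig" "$file"; then
            rm -f "$file" "$file.sig"
            return 1
        fi
        echo "verified: $file"
    }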

------
nwah1
Regarding the issue of verifiable builds, everyone should read this:

[http://0pointer.net/blog/projects/stateless.html](http://0pointer.net/blog/projects/stateless.html)

------
knweiss
The job of the distributions would be much easier if more software projects
would a) consistently use semantic versioning (MAJOR.MINOR.PATCH, see
[http://semver.org](http://semver.org) for details) and b) explicitly and
officially announce when they no longer support a given MAJOR.MINOR branch.

The latter should be a signal for the distribution to upgrade to a newer,
supported upstream version instead of (halfheartedly) trying to support the
software themselves.

------
_yy
Nah, that's why packaging _web apps_ with a stable Linux distribution is a
notoriously bad idea and should be discouraged.

Debian (and other distros) do a perfectly fine job updating the actual
operating system.

------
newman314
This. I agree with this.

Freezing the world does not work, particularly with certain packages. For
example, I still see lots of usage of old versions of openssl (0.9.8) and
openssh.

If we just made sure to cover these two packages and their related
dependencies/affected applications alone, that would go a long way toward
covering attack vectors (obviously not comprehensive).

------
carapace
Light grey, thin, sans-serif body text basically means you HATE your readers.

------
akerro
That's why we have `pkg audit` on BSD.
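
For example (a FreeBSD sketch; -F fetches a fresh copy of the vulnerability
database before reporting):

    # download the latest vulnerability database, then list installed
    # packages with known vulnerabilities
    pkg audit -F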

------
unixhero
Considered harmful is considered harmful

