It is not the software writers' fault that your distro can't be arsed to keep its package system up-to-date.
Even the unstable branch is routinely multiple versions behind on software.
The idea of linking end-user software versioning to the operating system version itself was always a dumb idea, but has become even more absurd over time. No other operating system but Linux (and possibly some BSDs) does this to the extent that the distro model does.
I'm not limited to awkward work-around manual install methods just because the version of Notepad++ linked to Windows 8.1 hasn't been updated since it was released. The whole scenario is absurd.
Yes, package repos are nice, but not when it means I'm perpetually multiple versions behind on common software just because a handful of nerds are trying to do the job of Github and Sourceforge combined, instead of just building an easier method of installing and updating third-party software.
Debian guarantees that when you install their distro, things are going to work and are reasonably secure. The tradeoff of having all the software in the distro checked by people who have tested that everything works smoothly is that package updates get delayed.
But that is what a distro offers you: someone has done the work of packaging and testing, so you can install things and everything works.
I think that if more software would be offered statically compiled with all the libraries (like many apps in OSX), then we would be able to try the latest release of Gimp when it's released, instead of having to waste time trying to compile from source or waiting for being included in the next release of the distro.
I can't even remember how many hours I've wasted trying to compile a new version of some program, only to discover the infinite tree of dependencies, the newer versions of existing libraries required, the build-prefix tweaking, etc. Most of that time could have been saved.
* increased storage space
* increased memory usage
* increased downtime for updates (since more files have to be updated)
* increased bandwidth usage (total size of downloads)
* potentially increased security risks
Our systems need less downtime and better security; not the opposite, which is what static linking brings.
That's why Solaris, as an example, does not provide static archives for almost any of the components that are provided, especially libc.
it will decrease memory usage (as most loaders load the whole library, even when just one function is used)
more things to upgrade, yes
more bandwidth, yes (not much if one uses binary diffs)
potentially increased security risks, yes.
although shared libraries are bigger security risks
i made an acc just to reply to this.
why do people never understand static linking ?
Most OS linkers already do efficient loading of only the relevant portions of a shared library since they typically mmap the library file.
In aggregate, static linking the same dependency for multiple programs will increase memory usage as well despite your assertions to the contrary since the pages will not generally be shared (yes, I'm aware some OS have page dedup or compression, etc., but I'm talking about the general case here).
more things to upgrade, yes
more bandwidth, yes (not much if one uses binary diffs)
A binary diff of 100 programs all linking to the same library will still be larger than the diffs for a single library.
In addition, updating 100 binaries, even with diffs, still takes longer than updating a single library, which means increased downtime during system updates.
potentially increased security risks, yes. although shared libraries are bigger security risks
Code is code; how is shared linking a greater risk? Have any data to back that up?
i made an acc just to reply to this. why do people never understand static linking ?
I understand static linking quite well, since my day job is part of a commercial OS team that coincidentally pioneered dynamic linking technology in UNIX.
So let's not base our arguments on assumptions about knowledge just because we disagree.
a page is 4 kB (on x86 at least).
a function is typically, what, 60 bytes? 120 maybe? 240?
some functions use other functions (fopen()->_open() in libc) so you can have more than two pages
maybe you are thinking about swapping (or not even lazy loading in the block layer) ?
code has to be in RAM, everything else is insecure
>In aggregate, static linking the same dependency for multiple programs will increase memory usage as well despite your assertions to the contrary since the pages will not generally be shared (yes, I'm aware some OS have page dedup or compression, etc., but I'm talking about the general case here).
not pages, functions (and/or variable objects).
when you make a .a static library on unices you are making an ar(1) archive out of .o object files.
when the linker links the final binary it opens that .a archive and pulls out only the relevant .o objects
so "statically linking to properly(!) made libraries" will reduce overall memory usage, unless a lot of programs use that (small) library (rarely the case)
OSX linker does the right thing with their libc.
in that, out of the many optimized memcpy() variants, it will only load the one that is fast on the particular cpu
GNU linker loads them all
>Code is code; how is shared linking a greater risk? Have any data to back that up?
data to back it up ?
do you have any data to back up anything you said ?
LD_PRELOAD can be used to replace functions and the user/admin would not notice it.
while the only security flaw with static linking is an administrative mistake
>I understand static linking quite well, since my day job is part of a commercial OS team that coincidentally pioneered dynamic linking technology in UNIX.
>So let's not base our arguments on assumptions about knowledge just because we disagree.
so go ask them about it
But when a new version of the software is released and I think it would benefit me right now, I'd trade all those downsides for being able to use that software immediately rather than waiting for it to be included in my distro (as happened to me with Gimp some years ago).
I have plenty of memory, storage, and bandwidth, but not so much time to compile it by hand.
I have to admit that this is just a workaround while we find a better software release and dependency management system.
I don't understand what static vs. dynamic linking has to do with waiting for Gimp to be included in your distro.
No one is suggesting you have to do that, but someone will have to do that, and it does require resources (time, people, hardware, etc.).
If by system, you're referring to technology, then I disagree.
In other words, the technological constraints of today's package management systems are not the primary issue, rather it is how the software itself is being developed and managed and the available resources to do so.
Ultimately though, most of this all comes about because the underlying systems that applications depend upon in a typical Linux distribution are not properly release engineered. They don't properly version the shared libraries, they don't carefully avoid incompatible changes in interfaces, and they don't have sufficient regression testing.
So to me, the right answer is to fix the root of the problem, not paper over it by pretending there isn't one by essentially embedding copies of every dependency into an application.
In short, start encouraging developers to reduce their dependency chains, properly release manage their software, ensure that core system components offer stable interfaces, and provide timely updates.
Whereas with static linking, you can directly see the effect that any library you use has on the binary size. This encourages library and application devs alike to be more judicious with their dependencies.
Architecting an operating system and its packages as a psychological forcing function to improve software development is not a worthwhile tradeoff for the efficiency, security, or reliability of a system.
Who cares about 20 megs here or there if there is only one shared library instance? Who cares about bringing in a tree of 50 dependencies for this console program if the user probably has them installed already? And thus we end up with the modern Linux desktop, where nearly everything depends on a gigabyte or more of dependencies.
I find it amusing that someone is arguing for static linking as a way to make it obvious how bloated programs are, since usually the counter-argument to dynamic linking from many is "who cares about disk space!".
There are plenty of programs that are distributed essentially statically-linked today, that still have 50 dependencies or more -- there is no data that I am aware of to support the idea that dynamic vs. static linking would encourage developers to reduce their dependencies.
This is a dubious argument at best; you can still easily see the size of your program today even if it's dynamically linked simply by looking at top, or by using the appropriate developer tools.
I can empathize with the frustrations that some feel in dealing with some operating systems and applications, but it is not due to a technological issue -- it is due to cultural and process issues present not only in the development of the software they rely on but in the distributions that provide the software themselves.
"papering over" those issues by static linking only makes the problem worse -- not better. This is a cultural and process issue that cannot be addressed solely through technological means.
Also, I want to be clear that I am not arguing against static linking completely; I'm arguing for dynamic linking (or more generally, shared objects) for those dependencies that are shared between a sufficient number of components and have sufficiently stable interfaces to share them.
By all means, if a component has a sufficiently unstable interface, then by definition it is not appropriate for sharing between components, and at that point whether you statically or dynamically link the dependency is moot, since you can have private dynamically-linked dependencies just as easily as statically linking them into binaries.
Software is just files on disk; rather than statically linking (and thus duplicating) libraries, we can use references to the libraries (dynamic linking). The issue then is that the user has to install the software and its dependencies.
AppFS solves this by not having an install step. All files in all AppFS packages in all the world show up in the filesystem if you look for them. They are then cached to disk if you try to use them.
So if you had a package X that depended on library Y you simply run the binary you want from X and any libraries from Y will get used.
Since AppFS is a global namespace using HTTP, it has very similar semantics to the WWW, just with symlinks or ELF NEEDED entries instead of hyperlinks.
So you, as a developer can publish software that depends on the software another developer publishes, and the user can fetch the software directly from you without there ever being an install step.
In reality that of course doesn't happen, so programs linking statically or including their own versions of shared libraries never get security updates for the included libraries.
A couple of years back Microsoft discovered some kind of issue with their redistributable dlls.
They patched Office etc, but could only offer a scanner that would check each and every dll to see if it was of a vulnerable version. And asked users to pester third party software providers for updates if the scanner found any.
New versions of software also fix old stability and security bugs.
If software became more secure with age, then the older your version of gnome screensaver, the more secure it would be ( https://www.jwz.org/blog/2015/04/i-told-you-so-again/ ).
Sometimes new versions of software fix bugs, sometimes they improve stability, sometimes they introduce new bugs, sometimes they decrease performance, sometimes they become bloatware, and sometimes several of these things happen in any combination.
> No other operating system but Linux (and possibly some
> BSDs) does this to the extent that the distro model does.
> Even the unstable branch is routinely multiple versions
> behind on software.
And this same pattern repeated itself so often where programming tools were concerned that I eventually just gave up on using apt-get for developing. I would wind up manually installing most languages, and then sometimes having to fight with conflicts because some package or other had already installed an out-of-date version. So now I either had to overwrite the package-installed version and risk breaking something, or just run it locally from some directory in /home.
Contrast to Windows or OS X, where all I have to do to run Racket is go to the website, download the latest version, and possibly run an automated installer. On OS X, I just have to drag one icon onto another icon, and if the app's been set up right this even handles my command-line paths for me.
Seriously, this is 2016. We have the internet now. We do not need to load every piece of software one would ever need onto a CD anymore like it's 1994 and the only reliable source of Linux software is Walnut Creek CD-ROMs.
It is a perfectly reasonable expected use case that someone desiring a piece of software should be able to just go and download the most recent version, install it, and should need arise, uninstall it just as cleanly.
Nowhere else is that process as broken as it is on Linux, especially under Debian's hopelessly slow update process.
The software distribution and management scheme you are advocating for sounds like a security nightmare. Administrators are already bad enough with applying vendor updates. Can you imagine what things would be like if administrators had to track all of the different upstream security announcements for everything they installed?
2) I am describing how literally every other major OS works. That is how Windows works. That is how OS X works. There is nothing impractical or intractable about this problem. Mainstream software has been solving it for decades.
That's it. Ubuntu releases every six months; it's not upstream's problem that an up-to-date version of your stable distro is using the program released in September 2014 (17 months ago). As XScreenSaver 5.32 was released in November 2014 and Debian stable was released in April 2015, you have to wonder why, in those five months, it was not 5.32 that was packaged, or even 5.31, but 5.30, which had been released seven months before.
Does anyone remember July 2002 to June 2005, almost three years between Debian releases? I do. You can point fingers for why that happened, but it makes little sense to point them at the upstreams who make your distribution possible.
At the end of the day, the message jwz put in the code is crystal clear - if you are unhappy with the message and don't plan to keep your distro somewhat up to date, he'd prefer you just rip out xscreensaver from the distro and use an alternative. It doesn't make sense to compare him to a terrorist setting bombs as someone did in the thread.
It goes back to the initial reply. For all the gnashing of teeth and finger-pointing and name-calling and effort to deal with this and other issues due to old packages - if Debian channeled all of that energy and effort into a shorter, saner release cycle, it would be much more beneficial to everyone. If stable Debian was as current as the last release of Ubuntu or Fedora or OpenSuse etc., this discussion would have never taken place.
P.S. One good reason to download XScreenSaver 5.34 is the additional Android functionality. XScreenSaver 5.35 will possibly have much-expanded Android functionality when it is released (or if not, then 5.36 or one soon after). If you have Android Studio and an Android device (or emulator), compile it and give it a whirl. Once it gets into better shape for users, an APK will probably be put up, and possibly even a Google Play app. Send an e-mail to the folks in the android/README with any bugs, questions, comments, patches, etc. Probably best to e-mail first before doing the work on major patches.
The whole point of Stable is that it doesn't change often. As a Debian user, I don't want to upgrade my servers every six months. I'm fine with stale packages (with backported security patches). There's a reason why Ubuntu, despite its standard release cycle, also maintains LTS releases for both servers and desktops for five years; but adopting that approach requires much more effort than just switching the cycle as you're suggesting.
Also, another advantage of longer cycles is that Testing has more time to mature, which is why a Stable release is - in my experience - much more stable than an Ubuntu release.
So, yeah, I'm fine with having to compile XScreensaver myself. But I choose Debian because it's not like Ubuntu or Fedora.
edit: found it:
I am as close to certain as I can be that there is no action a user can take on their input devices that will cause the current Xlib-based lock dialog in xscreensaver to unlock. That's because it's a small amount of code that I have stared at and tested for a very long time. It is a small enough piece of code that I (believe I) know every possible path through it.
Introduce N layers of widget library, general text field handling, compose processing, input methods, I18N... and all bets are off. Who knows what bugs wait lurking in there; who knows which particular combinations of which libraries are a security-bug timebomb.
Let me put that another way:
The GTK and GNOME libraries have never been security-audited to the extent that their maintainers would be willing to make the claim, "under no circumstances will this library ever crash."
One can, within a reasonable doubt, make that claim about libc, or even about Xlib, but not about anything the size of GTK. It's just too big to be sure. This is not a criticism of GTK or GNOME or their authors: it's simply a truth about any piece of software of that size.
Not to mention that I can still write a keylogger that bypasses jwz's xscreensaver. 
GTK (at least the 2.x series, I don't know if that's changed in 3.x) uses libx11. There's a good chance that, if there's a major flaw in libx11 which can be exploited, a GTK-based program is vulnerable to it. GTK is pretty massive, so it likely introduces issues of its own.
E.g. this bug: https://bugzilla.gnome.org/show_bug.cgi?id=722106 in GTK triggered this problem: https://bugzilla.redhat.com/show_bug.cgi?id=1064695 .
> Not to mention that I can still write a keylogger that bypasses jwz's xscreensaver. 
You can write a keylogger that bypasses pretty much anything that's X-based.
To, uh, to put it bluntly, ditching X11 for something saner would be the correct approach. Stacking stuff on top of X11 makes the problem worse; not stacking anything leaves it pretty bad. I'm not overly optimistic about Wayland, but I guess we'll have to wait and see :-).
Anyways, if i am reading the CVE right it is about how X11 can be provoked into maxing out its stack. Possibly annoying for those trying to use X11, but not something that leaks data.
The more i see "security" discussed on HN, the more i feel that there needs to be a ranking system. All too often it seems like people are operating with a binary definition of security, putting anything with a CVE in the insecure bin.
Edit: Just to add a bit more detail. xscreensaver locks the screen by putting a window across it that grabs ownership of the input devices. Thus if it crashes, the window goes away and the inputs are released.
I read about that bug a while ago. It was due to mishandling a strange X11-specific corner case; to be fair, that's the kind of stuff that is certainly more aptly handled (in the X11 world) in a toolkit (X11-specific corner cases are like 30% of the reason why, as soon as alternatives to Xlib were available, everyone embraced them and cried tears of joy). That would suggest that not relying on toolkits opens xscreensaver's locker to other issues that are more aptly handled in a toolkit.
However, if you look over the bugs that Gnome, Cinnamon or Unity's wrappers had... I'm inclined to think that there are a lot more trivial problems (like the one I linked in my other comment, here : https://news.ycombinator.com/item?id=11412688 ) in Gnome's a thousand and one libraries than there are cornercases in X11.
Upstream has fixed lots of bugs, critical crashes, and CVEs, and Debian stable takes weeks or months to backport them, and even then only the CVEs. Not even fixing major usability bugs.
I’ve seen users on versions so old that nothing is able to interoperate with them anymore, and the users are unable to compile from source themselves.
Currently, the solution I’ve seen from several such upstream sources was to offer a ppa or custom apt repository, and to let users install that.
The usability of Debian – especially stable and oldstable – is severely hurt by maintainers not backporting these things, and it adds a lot of work for upstream.
Especially for fast-moving things – like compilers, programming languages, networking software – this is a severe issue, and Debian just ignores it, and patches away the warnings upstream might have added as well.
If Debian wishes to remove the warnings from Upstream, they should actually backport all bugfixes themselves – as they claim they do.
Btw, I won’t be able to answer to comments on this for another hour, as I’ve exhausted my hourly limit on HN comments – as always.
It doesn't work at all and the bug is fixed upstream, but the fix only goes to unstable.
Original link is to post #84 in the conversation.
My worst experience with their repos was with logstash having a bug where it would annoyingly install logstash-web with an auto-start. But..... the package had a typo in its startup script and caused the JVM to restart over and over and over. This was at an HFT shop, so sure enough.. I got a call the next morning that all trading was stopped until it was fixed.
I don't understand how these sorts of issues happen for so long on such a popular distro...
Arch is nice and all, but holy crap does AUR need some better standards for what's acceptable. After spending an asinine amount of time trying to figure out why every 5th package won't build, with devs closing bug reports with "Did you try $BASICTROUBLESHOOTING"... I'm pretty much done.
(this isn't out of frustration, I read early on that it was for sharing preliminary packages and discarded all expectations)
Debian has never had the goal of being the latest shiny thing on the block. And that's why people keep coming back to it. Sure, over the years I've dabbled with RedHat, Ubuntu, Mint, and so on, but then repeatedly I rediscover that, oh yeah, security and stability actually matter.
Debian Stable cuts the right balance for me by incorporating the latest security patches but not the latest features/bugs. This is a heck of a lot more work for the Debian maintainers than simply rubber stamping whatever the upstream software developers release, but it's proven worth it.
My only disappointment in Debian here is that they didn't catch this time bomb and disarm it preemptively.
See for example the patch in Slackware:
> patches/packages/xscreensaver-5.34-i486-1_slack14.1.txz: Upgraded.
I promised jwz that I'd keep this updated in -stable when I removed (against
his wishes) the nag screen that complains if a year has passed since that
version was released. So, here's the latest one.
Personally I don't get the point of screensavers at all, since we have monitors that can be sent into sleep/standby mode when they're not in use.
On topic though, nobody wants to develop the way distributions want you to - basically maintain branches of every release you make for the lifetime of the distros where you backport bug fixes but not feature additions. Very little software, even libraries, is ever developed like that, and it is part of the reason why Linux desktops are a Mess™. And practically, you often cannot even do this. Your feature additions will optimize code that your bug fixes interact with, and trying to keep them separate is an exercise in masochism.
The TLDR answer is the sooner we can get a community software repo with open signups that developers push new releases to that supports appstream / xdg-app infrastructure the better. Distributions can still freeze the world and maintain all the stability they want, but we need to also let developers take responsibility for their own software on desktop releases.
I'm pretty sure that's not how Debian wants to handle software. The whole point of Stable is that packages won't get any new updates, with the single exception of security fixes. Regular bugs are not supposed to be fixed, because doing so introduces uncertainty - you might be introducing new bugs or changing expected behavior.
*nix already has a mechanism called the soname that allows versioned libs to live side by side. Gobolinux makes good use of it, along with symlinking, to allow multiple versions to be installed side by side.
You find similar, tho more elaborate, systems in place in NixOS/Guix.
xdg-app on the other hand comes out of the RPM/DEB camp, where there can only be one canonical version of every package installed at any one time. So to get around that they devise a system where you basically build a new system for each and every "package".
What that effectively does is replace dependency "hell" with dll hell (hello Windows).
There are two independent circumstances to that, though. Either the developer does not deserve downstream trust because they abuse their users by breaking their APIs, ABIs, or UX's without due notice or process. This is how things were in the early 90s when Debian first started and the original crop of distributions emerged - the mindset was, of course as a distributor I have to assume responsibility for packaging and integrating all the software I ship, and thus of course I cannot track the individual releases of everything I include while making sure everything still works. That is insane, and no software distributor in the world does that.
The other circumstance is when developers do deserve trust but are not given it, as is the case in Debian, or why KDE needed the Neon project. Or when they want to assume the responsibility on their own, for which no Linux distro provides adequate means of directly supporting users.
Thus they are reluctant to adopt a new version of something, because the dependency chain may force an update of a large part of an installed system.
A package manager that can handle multiple version frees them from this worry, without the admin of a system having to trawl every "container" if a lib is found to be vulnerable (a very real possibility with the likes of xdg-app, and somewhat akin to Windows "dll hell").
Arch, for example, has clang3.5 and clang available because 3.7 was an ABI break. It has gstreamer 0.10 and 1.0 as separate packages because there was an API break. These projects respect these design constraints, should be trustworthy, and notify downstream of breaking issues like that well in advance so packagers can avoid repackaging upgrades expecting smooth transitions. But it's unfair to users and developers to apply a carte blanche policy of freezing everything because some bad apples cannot maintain ABI/API stability.
The idea that the freeze should be done without carefully taking responsibility for it is probably a pretty stupid norm, so I can understand the lack of patience.
Debian can, and should, freeze their software and provide support for what they package. But the developer should both be able to ask Debian to remove it, and to provide it themselves. But not in the traditional insanely bad and broken Windows "search the Internet for random binaries and use those" method. We have the capability, fairly easily, to provide a community repository that crosses distros and lets developers directly ship and update their own software, we just need to provide the capacity. Tech like appstream is how you enable it.
I think it's fine if this doesn't meet the wishes of how upstream wants to manage releases. Part of the point of free software is that we have a right to adapt software to serve our own needs, without needing permission of the original authors. Rolling-style releases, which get updated whenever upstream wants to push an update, are a nice option to have. But a Debian-style stable release model is also a nice option to have. So I'd like both to exist, and wouldn't want upstream package authors to be able to veto the existence of stable releases.
I personally hope none of my software ends up in Debian Stable, because if it did anything broken in it at the point of freeze would haunt me for half a decade.
But I mean, there's no actual obligation to update software. Lots of redistributors don't update to the latest version for many reasons. Apple ships old GNU utilities because they don't like GPLv3. FreeBSD ships an old version of OpenBSD's pf firewall because their patched version is hard to forward-port. OpenBSD ships a truly ancient gcc for a mixture of technical and license reasons. And Debian updates software on a fixed stable-release cycle. I suppose it's fair that upstream can complain about this: GNU and jwz don't have to be happy that people are shipping old versions of their software. But people are also probably not going to stop doing it.
1. jwz is sorta famous and cool, I think. I've heard of him before this (and I'm not exactly hip!).
2. Debian is well known, too! /understatement
3. jwz put a timebomb in xscreensaver.
4. The timebomb displays a message if current date > source code date + 18 months.
5. The timebomb has been there for at least three years. See https://github.com/Zygo/xscreensaver/blame/88cfe534a698a0562... (Unofficial repo)
6. I'm with Debian on this one. Sorry, jwz.
7. I really, really, really like xscreensavers and want it to stay in Debian! :(
I also think there's another bug with xscreensaver where it will capture and "hang on" to your keyboard after you've unlocked the machine. Maybe it's time to switch...
You sure you are running the latest version?
That is the origin of this message: Debian (and perhaps other distros) would fail to push xscreensaver updates in a timely manner, resulting in JWZ getting emails about issues he had long since fixed.
This seems to stem from a policy that only security issues (resulting in a CVE being published) will be processed, while usability issues are left in place until the next major stable release is rolled out.
I can see two reasons for this.
A: the thinking around stable is heavily server oriented; excluding everything but CVE fixes reduces the risk of production breakages. This even though Debian is a generic distro that can be molded into desktop or server usage depending on what gets installed.
B: the rigidity of traditional package management does not allow a piecemeal updating process, because updating a package removes the old package in the process. Thus if you want to bump up the xscreensaver version, and it hard-depends on a newer lib somewhere in the dependency chain, you end up with an unresolvable conflict unless you also update everything else that depends on the same lib.
Of course, then I run into your B problem, where there isn't a really good way of installing your own versions that aren't in the repos. I was able to blacklist the kernel from updating itself, somehow, so I really just need to figure out how to do that again.
That isn't actually the policy, for example:
Anyway, my sane period ended a while ago; it's full-on NixOS now. Why are people still using OSes that are not the future?
I had my Arch system running for over a year, upgraded at least once a week. Only once did I need the rescue disk, and that was my own fault (some um... "clever" pacman command I dreamed up ended up uninstalling some pretty important things like all of base).
I still don't agree with the unbootable bit though. That hasn't happened on any of my systems (two Arch machines, two Arch-based Antergos machines) in over a year without it being my own fault tinkering with something. I don't know man, maybe we're running completely different hardware or something.
And the problem that the bugreport is about is that Stable would not get bugfix updates, resulting in JWZ getting emails for long fixed bugs.
Thus he put in the "obsolete version" warning to get Stable users to pester Debian maintainers.
If there's something you need that's outdated, a fix is as trivial as opening an issue on GitHub, or sending a Pull Request with the revision and SHA256 bumped.
I've found BSD ports pretty similar (mainly pkgsrc is the one I use). Some are very actively maintained, and others are much less actively maintained. Resources are limited, so pkgsrc maintenance happens only to the extent that someone makes it happen.
BTW: i guess this is an argument for having feature releases and bugfix-only releases. Or at the very least make it easy for distros to backport bugfixes (though i guess most will argue to give distros the middle finger and pull everything from git or equivalent).
Debian, always providing fun...