“This version of XScreenSaver is very old. Please upgrade” (debian.org)
84 points by ashitlerferad on Apr 2, 2016 | 99 comments



Frankly, this is a very good example of why I inevitably give up on using Debian.

It is not the software writers' fault that your distro can't be arsed to keep its package system up-to-date.

Even the unstable branch is routinely multiple versions behind on software.

Linking end-user software versioning to the operating system version was always a dumb idea, and it has only become more absurd over time. No other operating system but Linux (and possibly some BSDs) does this to the extent that the distro model does.

I'm not limited to awkward manual-install workarounds just because the version of Notepad++ linked to Windows 8.1 hasn't been updated since release. The whole scenario is absurd.

Yes, package repos are nice, but not when it means I'm perpetually multiple versions behind on common software just because a handful of nerds are trying to do the job of Github and Sourceforge combined, instead of just building an easier method of installing and updating third-party software.


Maybe distros like Debian are more stability/security oriented than feature oriented. New versions of software often contain new features, which may introduce new bugs.

Debian guarantees that when you install their distro, things are going to work and are reasonably secure. The tradeoff of having all the software in the distro checked by people who have tested that everything works smoothly is that package updates are delayed.

But that is the whole idea of a distro: someone took on the work of packaging and testing, so you can install things and everything works.

I think that if more software were offered statically compiled with all its libraries (like many apps on OS X), we would be able to try the latest release of Gimp when it comes out, instead of wasting time compiling from source or waiting for it to be included in the next release of the distro.


Spot on. I've always been puzzled why some software just can't come statically compiled. I suppose not all apps can be distributed like that, but most of them can.

I can't even remember how many hours I've wasted trying to compile a new version of some program, only to discover the infinite tree of dependencies, the newer versions of existing libraries required, the build-prefix tweaking, etc. Most of that time could have been saved.


Because statically linking everything has several negative consequences:

  * increased storage space
  * increased memory usage
  * increased downtime for updates (since more files have
    to be updated)
  * increased bandwidth usage (total size of download
    for update)
  * potentially increased security risks
In short, trading off all of the above simply to avoid proper release engineering and simplify dependency management is the wrong answer.

Our systems need less downtime and better security; not the opposite, which is what static linking brings.

That's why Solaris, as an example, does not provide static archives for almost any of the components it ships, especially libc.


statically linking to properly(!) made libraries will only maybe increase storage space

it will decrease memory usage (as most loaders load the whole library, even when just one function is used)

more things to upgrade, yes

more bandwidth, yes (not much if one uses binary diffs)

potentially increased security risks, yes. although shared libraries are bigger security risks

i made an acc just to reply to this. why do people never understand static linking?


> statically linking to properly(!) made libraries will only maybe increase storage space
> it will decrease memory usage (as most loaders load the whole library, even when just one function is used)

Most OS linkers already do efficient loading of only the relevant portions of a shared library since they typically mmap the library file.

In aggregate, static linking the same dependency for multiple programs will increase memory usage as well despite your assertions to the contrary since the pages will not generally be shared (yes, I'm aware some OS have page dedup or compression, etc., but I'm talking about the general case here).
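
On Linux you can see that sharing directly. A quick sketch (Linux-specific and purely illustrative):

  /* maps.c - print this process's libc mappings. Every dynamically
     linked process maps the same libc file; the kernel backs the
     read-only/executable pages with a single copy in the page cache. */
  #include <stdio.h>
  #include <string.h>

  int main(void) {
      char line[512];
      FILE *f = fopen("/proc/self/maps", "r");
      if (!f) return 1;
      while (fgets(line, sizeof line, f))
          if (strstr(line, "libc"))   /* the mmap'd shared-library pages */
              fputs(line, stdout);
      fclose(f);
      return 0;
  }

Run it from two different shells: the addresses may differ, but both processes map the same libc inode, and the r-x pages are the same physical memory.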

> more things to upgrade, yes
> more bandwidth, yes (not much if one uses binary diffs)

A binary diff of 100 programs all linking to the same library will still be larger than the diffs for a single library.

In addition, updating 100 binaries, even with diffs, still takes longer than updating a single library, which means increased downtime during system updates.

> potentially increased security risks, yes. although shared libraries are bigger security risks

Code is code; how is shared linking a greater risk? Have any data to back that up?

> i made an acc just to reply to this. why do people never understand static linking?

I understand static linking quite well, since my day job is part of a commercial OS team that coincidentally pioneered dynamic linking technology in UNIX.

So let's not base our arguments on assumptions about knowledge just because we disagree.


>Most OS linkers already do efficient loading of only the relevant portions of a shared library since they typically mmap the library file.

a page is 4kB (on x86 at least). a function is typically, what, 60 bytes? 120 maybe? 240? some functions use other functions (fopen()->_open() in libc) so you can have more than two pages

maybe you are thinking about swapping (or not even lazy loading in the block layer)? code has to be in RAM, everything else is insecure

>In aggregate, static linking the same dependency for multiple programs will increase memory usage as well despite your assertions to the contrary since the pages will not generally be shared (yes, I'm aware some OS have page dedup or compression, etc., but I'm talking about the general case here).

not pages, functions (and/or variable objects). when you make a .a on unices you are making an ar(1) archive out of .o object files. when the linker is linking the final binary it opens that .a archive and pulls out only the relevant .o objects

so "statically linking to properly(!) made libraries" will reduce overall memory usage, unless a lot of programs use that (small) library (rarely the case)

the OSX linker does the right thing with their libc, in that out of the many optimized memcpy()'s it will only load the one that is fast on the particular cpu

GNU linker loads them all

>Code is code; how is shared linking a greater risk? Have any data to back that up?

data to back it up? do you have any data to back up anything you said?

LD_PRELOAD can be used to replace functions and the user/admin would not notice it, while the only security flaw with static linking is an administrative mistake
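
a minimal sketch of that LD_PRELOAD problem (illustrative, not a real exploit):

  /* spy.c - silently interpose a libc function in any dynamically
     linked program. build: cc -shared -fPIC -o spy.so spy.c -ldl */
  #define _GNU_SOURCE
  #include <stdio.h>
  #include <dlfcn.h>

  int puts(const char *s) {
      static int (*real_puts)(const char *);
      if (!real_puts)   /* look up the real symbol hiding behind ours */
          real_puts = (int (*)(const char *))dlsym(RTLD_NEXT, "puts");
      fprintf(stderr, "[intercepted] %s\n", s);
      return real_puts(s);
  }

  $ LD_PRELOAD=$PWD/spy.so ./any-dynamically-linked-binary

a statically linked binary never consults the dynamic loader, so this particular trick does not apply to it.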

>I understand static linking quite well, since my day job is part of a commercial OS team that coincidentally pioneered dynamic linking technology in UNIX.

>So let's not base our arguments on assumptions about knowledge just because we disagree.

so go ask them about it


Thank you for enumerating those downsides. Clearly I'd avoid statically linked software if I had the option to use `apt-get install` to get that software.

But when a new version of the software is released and I think it would benefit me right now, I'd trade all those downsides for being able to use that software immediately rather than waiting for it to be included in my distro (as happened to me with Gimp some years ago).

I have plenty of memory, storage and bandwidth, but not so much time to compile it by hand.

I have to admit that this is just a workaround while we find a better software release and dependency management system.


> But when a new version of the software is released and I think it would benefit me right now, I'd trade all those downsides for being able to use that software immediately rather than waiting for it to be included in my distro (as happened to me with Gimp some years ago).

I don't understand what static vs. dynamic linking has to do with waiting for Gimp to be included in your distro.

> I have plenty of memory, storage and bandwidth, but not so much time to compile it by hand.

No one is suggesting you have to do that, but someone will have to do that, and it does require resources (time, people, hardware, etc.).

> I have to admit that this is just a workaround while we find a better software release and dependency management system.

If by system, you're referring to technology, then I disagree.

In other words, the technological constraints of today's package management systems are not the primary issue, rather it is how the software itself is being developed and managed and the available resources to do so.

Ultimately though, most of this all comes about because the underlying systems that applications depend upon in a typical Linux distribution are not properly release engineered. They don't properly version the shared libraries, they don't carefully avoid incompatible changes in interfaces, and they don't have sufficient regression testing.

So to me, the right answer is to fix the root of the problem, not paper over it by essentially embedding copies of every dependency into an application and pretending the problem isn't there.

In short, start encouraging developers to reduce their dependency chains, properly release manage their software, ensure that core system components offer stable interfaces, and provide timely updates.


A tradeoff that few consider is that dynamic linking can lessen the motivation of library developers to reduce code bloat and complex dependency trees. Who cares about 20 megs here or there if there is only one shared library instance? Who cares about bringing in a tree of 50 dependencies for this console program if the user probably has them installed already? And thus we end up with the modern Linux desktop, where nearly everything depends on a gigabyte or more of dependencies.

Whereas with static linking, you can directly see the effect that any library you use has on the binary size. This encourages library and application devs alike to be more judicious with their dependencies.


> A tradeoff that few consider is that dynamic linking can lessen the motivation of library developers to reduce code bloat and complex dependency trees.

Architecting an operating system and its packages as a psychological forcing function to improve software development is not a worthwhile tradeoff for the efficiency, security, or reliability of a system.

> Who cares about 20 megs here or there if there is only one shared library instance? Who cares about bringing in a tree of 50 dependencies for this console program if the user probably has them installed already? And thus we end up with the modern Linux desktop, where nearly everything depends on a gigabyte or more of dependencies.

I find it amusing that someone is arguing for static linking as a way to make it obvious how bloated programs are, since usually the counter-argument to dynamic linking from many is "who cares about disk space!".

There are plenty of programs that are distributed essentially statically-linked today, that still have 50 dependencies or more -- there is no data that I am aware of to support the idea that dynamic vs. static linking would encourage developers to reduce their dependencies.

> Whereas with static linking, you can directly see the effect that any library you use has on the binary size. This encourages library and application devs alike to be more judicious with their dependencies.

This is a dubious argument at best; you can still easily see the size of your program today even if it's dynamically linked simply by looking at top, or by using the appropriate developer tools.

I can empathize with the frustrations that some feel in dealing with some operating systems and applications, but it is not due to a technological issue -- it is due to cultural and process issues present not only in the development of the software they rely on but in the distributions that provide the software themselves.

"papering over" those issues by static linking only makes the problem worse -- not better. This a cultural and process issue that cannot be addressed solely through technological means.

Also, I want to be clear that I am not arguing against static linking completely; I'm arguing for dynamic linking (or more generally, shared objects) for those dependencies shared between a sufficient number of components that have sufficiently stable interfaces to share them.

By all means, if there is a component with a sufficiently unstable interface, then by definition, it is not appropriate for sharing between components, and at that point whether you statically or dynamically link the dependency is moot, since you can have private dynamically-linked dependencies just as easily as statically linking them into binaries.


AppFS ( http://appfs.rkeene.org/ ) aims to solve the packaging problem in a way that is very similar to this line of thinking.

Software is just files on disk. Rather than statically linking (and thus duplicating) libraries, we can use references to them (dynamic linking); the issue is that the user then has to install the software and its dependencies.

AppFS solves this by not having an install step. All files in all AppFS packages in all the world show up in the filesystem if you look for them. They are then cached to disk if you try to use them.

So if you had a package X that depended on library Y you simply run the binary you want from X and any libraries from Y will get used.

Since AppFS is a global namespace using HTTP, it has very similar semantics to the WWW, just with symlinks or ELF NEEDED entries instead of hyperlinks.

So you, as a developer can publish software that depends on the software another developer publishes, and the user can fetch the software directly from you without there ever being an install step.


Because any (possibly security-relevant) update to a library would mean that all software linking it statically has to be rebuilt.

In reality that of course doesn't happen, so programs linking statically or including their own versions of shared libraries never get security updates for the included libraries.


If you want the perfect example of that problem, look to Windows.

A couple of years back Microsoft discovered some kind of issue with their redistributable DLLs.

They patched Office etc., but could only offer a scanner that would check each and every DLL to see if it was a vulnerable version. And they asked users to pester third-party software providers for updates if the scanner found any.


I wouldn't move from a package system to an "everything has to be statically compiled" system. But it would be a nice option to have when most of the software in your distro is OK but you want to run some app without upgrading your whole distro/OS.


In the same way as you upgrade a library, you can just upgrade your software...


> Maybe distros like Debian are more stability/security oriented than feature oriented. New versions of software often contain new features, which may introduce new bugs.

New versions of software also fix old stability and security bugs.

If software became more secure with age, then the older your version of gnome screensaver, the more secure it would be ( https://www.jwz.org/blog/2015/04/i-told-you-so-again/ ).


> New versions of software also fix old stability and security bugs.

Sometimes new versions of software fix bugs, sometimes they improve stability, sometimes they introduce new bugs, sometimes they decrease performance, sometimes they become bloatware, and sometimes several of these things happen in combination.


Debian doesn't guarantee you anything. "stable" isn't bug free, what you're getting is a lack of newness. "stale" would be a better choice of word. It's not necessarily better, it's just not changing.


  > No other operating system but Linux (and possibly some 
  > BSDs) does this to the extent that the distro model does.
I am not sure this is the case. What other operating systems except Linux and the BSDs offer users a choice for which XYZ should be used? Is the window manager functionality in Windows Vista not provided by a specific version of that software component? I think the version of OpenSSH in OSX is tied to the version of OSX? You don't recognize this in non-Linux+BSDs because you don't have a choice, you use whatever version of XYZ comes with your OS.

  > Even the unstable branch is routinely multiple versions 
  > behind on software.
Respectfully, I think this is a bit of an exaggeration. Maybe this is true for esoteric packages but I am curious to hear why you think this. Unstable has the latest Xscreensaver at the moment. There was a one day lag between the release of 5.34 and a packaged version for unstable.


At the moment, the Racket package in the current stable release of Debian is 6.1. That is an 8-month-old package. Even unstable is only on 6.3, which itself is almost 6 months old. The last time I used Debian was at most a year ago, and at that time the stable package for Racket was still 5.3, which came out in 2013.

And this same pattern repeated itself so often where programming tools were concerned that I eventually just gave up on using apt-get for developing and would wind up manually installing most languages, and then sometimes having to fight with conflicts because some package or other had already installed an out-of-date version, so now I either had to overwrite the package-installed version and risk breaking something, or just run it locally from some directory in /home.

Contrast to Windows or OS X, where all I have to do to run Racket is go to the website, download the latest version, and possibly run an automated installer. On OS X, I just have to drag one icon onto another icon, and if the app's been set up right this even handles my command-line paths for me.

Seriously, this is 2016. We have the internet now. We do not need to load every piece of software one would ever need onto a CD anymore like it's 1994 and the only reliable source of Linux software is Walnut Creek CD-ROMs.

It is a perfectly reasonable expected use case that someone desiring a piece of software should be able to just go and download the most recent version, install it, and should need arise, uninstall it just as cleanly.

Nowhere else is that process as broken as it is on Linux, especially under Debian's hopelessly slow update process.


Correct, racket version 6.3 is 6 months old. But 6.4 came out two months ago; that seems to be the relevant age statistic. That seems a little disingenuous of you?

The software distribution and management scheme you are advocating for sounds like a security nightmare. Administrators are already bad enough with applying vendor updates. Can you imagine what things would be like if administrators had to track all of the different upstream security announcements for everything they installed?


1) No, it's not disingenuous, those are the package versions included in the present Debian branches. Yes, 6.4 is two months old. So why the fuck is it still not even included in unstable, and why do I need a PPA for it on Ubuntu?

2) I am describing how literally every other major OS works. That is how Windows works. That is how OS X works. There is nothing impractical or intractable about this problem. Mainstream software has been solving it for decades.


> It is not the software writers' fault that your distro can't be arsed to keep its package system up-to-date.

That's it. Ubuntu releases every six months; it's not upstream's problem that an up-to-date version of your stable distro is using a program released in September 2014 (17 months ago). As XScreenSaver 5.32 was released in November 2014 and Debian stable was released in April 2015, you have to wonder why, in those five months, it was not 5.32 that was shipped, or even 5.31, but 5.30, which had been released seven months before.

Does anyone remember July 2002 to June 2005, almost three years between Debian releases? I do. You can point fingers for why that happened, but it makes little sense to point them at the upstreams who make your distribution possible.

At the end of the day, the message jwz put in the code is crystal clear - if you are unhappy with the message and don't plan to keep your distro somewhat up to date, he'd prefer you just rip out xscreensaver from the distro and use an alternative. It doesn't make sense to compare him to a terrorist setting bombs as someone did in the thread.

It goes back to the initial reply. For all the gnashing of teeth and finger-pointing and name-calling and effort to deal with this and other issues due to old packages - if Debian channeled all of that energy and effort into a shorter, saner release cycle, it would be much more beneficial to everyone. If stable Debian was as current as the last release of Ubuntu or Fedora or OpenSuse etc., this discussion would have never taken place.

P.S. One good reason to download XScreenSaver 5.34 is the additional Android functionality. XScreenSaver 5.35 will possibly have much-expanded Android functionality when it is released (or if not, then 5.36 or one soon after). If you have Android Studio and an Android device (or emulator), compile it and give it a whirl. Once it gets into better shape for users, an APK will probably be put up, and possibly even a Google Play app. Send an e-mail to the folks in the android/README with any bugs, questions, comments, patches, etc. Probably best to e-mail first before doing the work on major patches.


> if Debian channeled all of that energy and effort into a shorter, saner release cycle, it would be much more beneficial to everyone.

The whole point of Stable is that it doesn't change often. As a Debian user, I don't want to upgrade my servers every six months. I'm fine with stale packages (with backported security patches). There's a reason why Ubuntu, despite its standard release cycle, also maintains LTS releases for both servers and desktops for five years; adopting that approach requires much more effort than just shortening the cycle as you're suggesting.

Also, another advantage of longer cycles is that Testing has more time to mature, which is why a Stable release is - in my experience - much more stable than an Ubuntu release.

So, yeah, I'm fine with having to compile XScreensaver myself. But I choose Debian because it's not like Ubuntu or Fedora.


It's called 'unstable' for a reason. New stuff breaks. Often.


This is the worst attitude to have about software


What, having beta versions?


That any new release is not to be trusted.


I can't find it on his website at the moment, but he has an excellent explanation of why gnome-screensaver is inherently insecure. If I remember correctly it boils down to something like: `nobody can guarantee that gnome-screensaver is secure, because it relies on GTK, which nobody can prove or guarantee is 100% secure, because there's too much code to check.`

edit: found it:

I am as close to certain as I can be that there is no action a user can take on their input devices that will cause the current Xlib-based lock dialog in xscreensaver to unlock. That's because it's a small amount of code that I have stared at and tested for a very long time. It is a small enough piece of code that I (believe I) know every possible path through it.

Introduce N layers of widget library, general text field handling, compose processing, input methods, I18N... and all bets are off. Who knows what bugs wait lurking in there; who knows which particular combinations of which libraries are a security-bug timebomb.

Let me put that another way:

The GTK and GNOME libraries have never been security-audited to the extent that their maintainers would be willing to make the claim, "under no circumstances will this library ever crash."

One can, within a reasonable doubt, make that claim about libc, or even about Xlib, but not about anything the size of GTK. It's just too big to be sure. This is not a criticism of GTK or GNOME or their authors: it's simply a truth about any piece of software of that size.


But why can he vouch that libX11 is any more secure? The library that runs complex input method code on every key-press, that has had CVEs in it? [0] [1]

[0] https://cgit.freedesktop.org/xorg/lib/libX11/tree/modules/im... [1] https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2013-20...

Not to mention that I can still write a keylogger that bypasses jwz's xscreensaver. [2]

[2] https://github.com/magcius/keylog


> But why can he vouch that libX11 is any more secure? The library that runs complex input method code on every key-press, that has had CVEs in it? [0] [1]

GTK (at least the 2.x series, I don't know if that's changed in 3.x) uses libx11. There's a good chance that, if there's a major flaw in libx11 which can be exploited, a GTK-based program is vulnerable to it. GTK is pretty massive, so it likely introduces issues of its own.

E.g. this bug: https://bugzilla.gnome.org/show_bug.cgi?id=722106 in GTK triggered this problem: https://bugzilla.redhat.com/show_bug.cgi?id=1064695 .

> Not to mention that I can still write a keylogger that bypasses jwz's xscreensaver. [2]

You can write a keylogger that bypasses pretty much anything that's X-based.

To, uh, to put it bluntly, ditching X11 for something saner would be the correct approach. Stacking stuff on top of X11 makes the problem worse; not stacking anything leaves it pretty bad. I'm not overly optimistic about Wayland, but I guess we'll have to wait and see :-).


Didn't know that screen locks were supposed to defeat keyloggers. I thought they existed to stop people from walking up to an unattended computer and messing around.

Anyways, if i am reading the CVE right it is about how X11 can be provoked into maxing out its stack. Possibly annoying for those trying to use X11, but not something that leaks data.

The more i see "security" discussed on HN, the more i feel that there needs to be a ranking system. All too often it seems like people are operating with a binary definition of security, putting anything with a CVE in the insecure bin.


I believe this blog post http://blog.martin-graesslin.com/blog/2015/01/why-screen-loc... is relevant to the conversation and the key logger part (but I am out of my league to comment about it).


Then again he got egg on his face recently because the xscreensaver lock screen would crash if the display output was switched (or some such).


Was that a security issue or just a crash?


It would lead to the screen becoming unlocked, iirc.

Edit: Just to add a bit more detail. xscreensaver locks the screen by putting a window across it that grabs ownership of the input devices. Thus if it crashes, the window goes away and the inputs are released.
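
A bare-bones sketch of that model (not xscreensaver's actual code, just the Xlib mechanism as I understand it):

  /* locker.c - the X11 locking model in miniature: cover the screen,
     grab the inputs. If this process dies, the server drops the
     grabs and the desktop is exposed. Build: cc locker.c -lX11 */
  #include <X11/Xlib.h>

  int main(void) {
      Display *dpy = XOpenDisplay(NULL);
      if (!dpy) return 1;
      int scr = DefaultScreen(dpy);
      Window w = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0,
                                     DisplayWidth(dpy, scr),
                                     DisplayHeight(dpy, scr), 0, 0, 0);
      XMapRaised(dpy, w);
      XGrabKeyboard(dpy, w, False, GrabModeAsync, GrabModeAsync, CurrentTime);
      XGrabPointer(dpy, w, False, 0, GrabModeAsync, GrabModeAsync,
                   w, None, CurrentTime);
      for (;;) {               /* password prompt and verification elided */
          XEvent ev;
          XNextEvent(dpy, &ev);
      }
  }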


I think that's basically how all the X11 screenlockers work, sadly.


Yes. But xscreensaver only uses straight Xlib to paint its UI elements. JWZ's argument is that this reduces the chance of bugs vs. using the likes of GTK or Qt to draw password prompts etc.


Looking at bug tracker activity so far would suggest that it does, but of course, the stats might be skewed by far more people using a Gnome/Unity/Cinnamon/whatever-ified version of xscreensaver than the vanilla one.

I read about that bug a while ago. It was due to mishandling a strange X11-specific cornercase; to be fair, that's the kind of stuff that is certainly more aptly handled (in the X11 world) by a toolkit (X11-specific cornercases are like 30% of the reason why, as soon as alternatives to Xlib were available, everyone embraced them and cried tears of joy). That would suggest that not relying on toolkits opens xscreensaver's locker to exactly the issues toolkits exist to handle.

However, if you look over the bugs that Gnome, Cinnamon or Unity's wrappers had... I'm inclined to think that there are a lot more trivial problems (like the one I linked in my other comment, here : https://news.ycombinator.com/item?id=11412688 ) in Gnome's a thousand and one libraries than there are cornercases in X11.


I don't think he meant to say it was flawless, but I'm sure you agree it reduces the attack surface by a great deal.


I’ve seen similar issues with several (!) Debian packages again and again.

Upstream has fixed lots of bugs, critical crashes, and CVEs, and Debian stable takes weeks or months to backport them, and even then only the CVEs, without even fixing major usability bugs.

I’ve seen users on versions so old where nothing is able it interoperate with them anymore, and the users unable to compile from source themselves.

Currently, the solution I’ve seen from several such upstream sources was to offer a ppa or custom apt repository, and to let users install that.

The usability of Debian – especially stable and oldstable – is severely hurt by maintainers not backporting these things, and it adds a lot of work for upstream.

Especially for fast-moving things – like compilers, programming languages, networking software – this is a severe issue, and Debian just ignores it, patching away the warnings upstream might have added as well.

________________________

If Debian wishes to remove the warnings from Upstream, they should actually backport all bugfixes themselves – as they claim they do.

________________________

Btw, I won’t be able to answer to comments on this for another hour, as I’ve exhausted my hourly limit on HN comments – as always.


Here is another example:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=811273

https://bugs.launchpad.net/ubuntu/+source/nethogs/+bug/15433...

It doesn't work at all; the bug is fixed upstream, but the fix only goes to unstable.


If you want a specific bug fixed in stable, there are avenues to doing that:

https://www.debian.org/doc/manuals/developers-reference/pkgs...


Google Cache, as the Debian tracker is getting hit hard: https://webcache.googleusercontent.com/search?q=cache:UoVCY4...

Original link is to post #84 in the conversation.


This is a good example of why I do use Debian.

Debian has never had the goal of being the latest shiny thing on the block. And that's why people keep coming back to it. Sure, over the years I've dabbled with RedHat, Ubuntu, Mint, and so on, but then repeatedly I rediscover that, oh yeah, security and stability actually matter.

Debian Stable cuts the right balance for me by incorporating the latest security patches but not the latest features/bugs. This is a heck of a lot more work for the Debian maintainers than simply rubber stamping whatever the upstream software developers release, but it's proven worth it.

My only disappointment in Debian here is that they didn't catch this time bomb and disarm it preemptively.


Man, it's really weird to see this after just installing Debian, having used Arch for about a year. And sure enough, that message popped up; I tried to update it and the repos were outdated. Brother..

My worst experience with their repos was logstash having a bug where it would annoyingly install logstash-web with an auto-start. But the package had a typo in its startup script that caused the JVM to restart over and over and over. This was at an HFT shop, so sure enough, I got a call the next morning that all trading was stopped until it was fixed.

I don't understand how these sorts of issues happen for so long on such a popular distro...

*Arch is nice and all, but holy crap does the AUR need some better standards for what's acceptable. After spending an asinine amount of time trying to figure out why every 5th package won't build, with devs closing bug reports with "Did you try $BASICTROUBLESHOOTING".. I'm pretty much done.


I've only been using Arch for a few months, but my mental model of AUR is that I shouldn't expect anything to work.

(this isn't out of frustration, I read early on that it was for sharing preliminary packages and discarded all expectations)


Debian does not have a logstash or logstash-web package. When complaining about "their repos" did you mean elastic.co's repos?


Here in Argentina it's common to find people who work dealing with customers or in the street who are very rude and kind of sociopathic. From what I have read in threads like this one, software developers who have to deal with many users/developers, like Linus Torvalds, Theo de Raadt, and Jamie Zawinski, end up suffering the same symptom.


It is also a classic help desk issue.


I'm glad I read the bug discussion before this discussion.

1. jwz is sorta famous and cool, I think. I've heard of him before this (and I'm not exactly hip!).

2. Debian is well known, too! /understatement

3. jwz put a timebomb in xscreensaver.

4. The timebomb displays a message if current date > source code date + 18 months (see the sketch after this list).

5. The timebomb has been there for at least three years. See https://github.com/Zygo/xscreensaver/blame/88cfe534a698a0562... (Unofficial repo)

6. I'm with Debian on this one. Sorry, jwz.

7. I really, really, really like xscreensavers and want it to stay in Debian! :(
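
For the curious, point 4 boils down to something like this sketch (my paraphrase with illustrative constants, not the verbatim source):

  /* Nag if the build is older than ~18 months. The real code derives
     the release date from the version; this epoch is made up. */
  #include <time.h>

  #define RELEASE_DATE    1388534400L          /* illustrative timestamp */
  #define EIGHTEEN_MONTHS (18L * 30 * 24 * 60 * 60)

  int version_is_stale(void) {
      return time(NULL) > RELEASE_DATE + EIGHTEEN_MONTHS;
  }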


And at least OpenSuSE and Slackware just patched the warning out already -- probably after getting hit by it in the past.

See for example the patch in Slackware: https://slackbuilds.org/mirror/slackware/slackware-current/s...


Oh wow, I didn't expect that from Slackware... I hadn't noticed that the message is gone, I guess. I can't help but think it is kind of rude to keep using xscreensaver but not honor the request of its author. It's purely the principle of the thing, because I agree that the message is ugly. That's not in true Slackware spirit imho (in the sense that this is a less-than-necessary patch). I am a little bit disappointed. :(


Yes, the nag text was removed, but at least xscreensaver gets updates, which was the whole point.

> patches/packages/xscreensaver-5.34-i486-1_slack14.1.txz: Upgraded. I promised jwz that I'd keep this updated in -stable when I removed (against his wishes) the nag screen that complains if a year has passed since that version was released. So, here's the latest one.

Personally I don't get the point of screensavers at all, since we have monitors that can be sent into sleep/standby mode when they're not used.


it is a lock screen that prompts for a password on wake

https://www.jwz.org/xscreensaver/toolkits.html


I don't (ever) link my own psychotic blog rants on HN (not because I value anything close to a reputation, but because I don't want to inflict my stupid on others) but I wasted an hour last evening on an exceedingly long tirade fairly congruent with this debacle.[1]

On topic though, nobody wants to develop the way distributions want you to - basically maintain branches of every release you make for the lifetime of the distros where you backport bug fixes but not feature additions. Very little software, even libraries, is ever developed like that, and it is part of the reason why Linux desktops are a Mess™. And practically, you often cannot even do this. Your feature additions will optimize code that your bug fixes interact with, and trying to keep them separate is an exercise in masochism.

The TLDR answer is that the sooner we can get a community software repo with open signups, one that developers push new releases to and that supports appstream / xdg-app infrastructure, the better. Distributions can still freeze the world and maintain all the stability they want, but we need to also let developers take responsibility for their own software on desktop releases.

https://zannyland.wordpress.com/2016/04/02/software-rants-23...


> On topic though, nobody wants to develop the way distributions want you to - basically maintain branches of every release you make for the lifetime of the distros where you backport bug fixes but not feature additions.

I'm pretty sure that's not how Debian wants to handle software. The whole point of Stable is that packages won't get any new updates, with the single exception of security fixes. Regular bugs are not supposed to be fixed, because doing so introduces uncertainty - you might be introducing new bugs or changing expected behavior.


Having used Gobolinux for some years now, i find the xdg-app thinking a case of shooting twee twee birds with AA guns.

*nix already has a mechanism called sonames in place that allows versioned libs to live side by side. Gobolinux makes good use of it, along with symlinking, to support multiple installed versions.
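
Roughly how that works, for those unfamiliar (illustrative names and versions):

  $ cc -shared -fPIC -Wl,-soname,libfoo.so.1 -o libfoo.so.1.0.0 foo.c
  $ cc -shared -fPIC -Wl,-soname,libfoo.so.2 -o libfoo.so.2.0.0 foo.c
  $ ln -s libfoo.so.1.0.0 libfoo.so.1   # old ABI keeps serving old binaries
  $ ln -s libfoo.so.2.0.0 libfoo.so.2   # new ABI for newly linked binaries

A binary records the soname it was linked against as its NEEDED entry, so both versions coexist and each program loads the one it was built for.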

You find similar, though more elaborate, systems in place in NixOS/Guix.

xdg-app on the other hand comes out of the RPM/DEB camp, where there can be only one canonical version of every package installed at any one time. So to get around that, they devise a system where you basically build a new system for each and every "package".

What that effectively does is replace dependency "hell" with DLL hell (hello Windows).


Different versions of the same libraries for different applications is still the same problem. Fundamentally developers are writing software that distributors and integrators do not want to actually ship.

There are two independent circumstances here, though. Either the developer does not deserve downstream trust, because they abuse their users by breaking their APIs, ABIs, or UXs without due notice or process. This is how things were in the early 90s when Debian first started and the original crop of distributions emerged - the mindset was: of course, as a distributor, I have to assume responsibility for packaging and integrating all the software I ship, and thus of course I cannot track the individual releases of everything I include while making sure everything still works. That is insane, and no software distributor in the world does that.

The other circumstance is when developers do deserve trust but are not given it, as is the case in Debian, or why KDE needed the Neon project. Or when they want to assume the responsibility on their own, for which no Linux distro provides adequate means to directly support users.


From where i sit, what seems to block most distributors from upgrading is the rigidity of their traditional package managers when dealing with multiple versions.

Thus they are reluctant to adopt a new version of something, because the dependency chain may force an update of a large part of an installed system.

A package manager that can handle multiple versions frees them from this worry, without the admin of a system having to trawl every "container" if a lib is found to be vulnerable (a very real possibility with the likes of xdg-app, and somewhat akin to Windows' "DLL hell").


In general, if new versions of libraries are breaking the ABI or API, the library developer needs to be thoroughly scolded. We are well capable, and have the necessary workflows, to create ABI- and API-stable feature releases. If vendors choose not to use these practices to create stable software, we need to criticize them and work to replace developers who don't respect their users enough not to break everything on every release.

Arch, for example, has clang3.5 and clang available because 3.7 was an ABI break. It has gstreamer 0.10 and 1.0 as separate packages because it was an API break. These projects respect these design constraints and should be trustworthy, and they notify downstream of breaking issues like that well in advance so packagers can avoid repackaging upgrades expecting smooth transitions. But it's unfair to users and developers to use a carte blanche policy of freezing everything because some bad apples cannot maintain ABI/API stability.


A new version is about more than APIs and ABIs. It may introduce a new feature, or it may fix a bug that previously needed a nasty workaround or made something practically impossible while present.


jwz here seems to be arguing that distros shouldn't be allowed to freeze his software, even if they do it themselves and maintain the branch themselves. He wants only current versions of his software shipped, and old versions disabled once they reach a certain age. I don't think this is an entirely reasonable demand, at least for open-source software.


More charitably, he thinks the distro choice to freeze something should result in 0 headaches for him.

The idea that the freeze should be done without carefully taking responsibility for it is probably a pretty stupid norm, so I can understand the lack of patience.


It should be perfectly reasonable; it is just that within the infrastructure of projects like Debian and RHEL there are no mechanisms for a developer to assume responsibility for their own software. The disconnect of both behavior and authority in pushing new releases vs. distributing new releases is a problem, particularly for user-facing applications and collections like KDE or VLC.

Debian can, and should, freeze their software and provide support for what they package. But the developer should both be able to ask Debian to remove it, and to provide it themselves. But not in the traditional insanely bad and broken Windows "search the Internet for random binaries and use those" method. We have the capability, fairly easily, to provide a community repository that crosses distros and lets developers directly ship and update their own software, we just need to provide the capacity. Tech like appstream is how you enable it.


As a Debian user, that isn't really what I'm looking for. I like Debian's release management. For a stable Debian release, I don't want upstream being allowed to either: 1) push major feature and API changes, or 2) remove the software entirely. Then I'd have to deal with every random upstream's release model, whereas today I understand and trust Debian's synchronized release model.

I think it's fine if this doesn't meet the wishes of how upstream wants to manage releases. Part of the point of free software is that we have a right to adapt software to serve our own needs, without needing permission of the original authors. Rolling-style releases, which get updated whenever upstream wants to push an update, are a nice option to have. But a Debian-style stable release model is also a nice option to have. So I'd like both to exist, and wouldn't want upstream package authors to be able to veto the existence of stable releases.


It's not about vetoing; it is about Debian shipping broken software that is never updated, and it drives developers mad to constantly get the same bug report for the same broken feature that was fixed years ago upstream.

I personally hope none of my software ends up in Debian Stable, because if it did, anything broken in it at the point of freeze would haunt me for half a decade.


Well if upstream doesn't want Debian to ship broken software in a stable version, there's a solution: upstream should improve their quality control and not ship broken software in the first place. ;)

But I mean, there's no actual obligation to update software. Lots of redistributors don't update to the latest version for many reasons. Apple ships old GNU utilities because they don't like GPLv3. FreeBSD ships an old version of OpenBSD's pf firewall because their patched version is hard to forward-port. OpenBSD ships a truly ancient gcc for a mixture of technical and license reasons. And Debian updates software on a fixed stable-release cycle. I suppose it's fair that upstream can complain about this: GNU and jwz don't have to be happy that people are shipping old versions of their software. But people are also probably not going to stop doing it.



Huh. Ran into this behavior yesterday, coincidentally.

I also think there's another bug with xscreensaver where it will capture and "hang on" to your keyboard after you've unlocked the machine. Maybe it's time to switch...


> I also think there's another bug with xscreensaver where it will capture and "hang on" to your keyboard after you've unlocked the machine.

You sure you are running the latest version?

That is the origin of this message: Debian (and perhaps other distros) would fail to push xscreensaver updates in a timely manner, resulting in JWZ getting emails about issues he had long since fixed.

This seems to stem from a policy that only security issues (resulting in a CVE being published) will be processed, while usability issues are left in place until the next major stable release is rolled out.

I can see two reasons for this.

A: that the thinking around stable is heavily server oriented, so excluding anything but CVE fixes reduces the risk of production breakages. This even though Debian is a generic distro that can be molded into desktop or server usage depending on what gets installed.

B: that the rigidity of traditional package management does not allow a piecemeal updating process, because updating a package removes the old one. Thus if you want to bump the xscreensaver version, and it hard-depends on a newer lib somewhere in the dependency chain, you end up with an unresolvable conflict unless you also update everything else that depends on that lib.


Oh, maybe I should update to the latest xscreensaver by hand, then. Thanks.

Of course, then I run into your B problem, where there isn't a really good way of installing your own versions that aren't in the repos. I was able to blacklist the kernel from updating itself, somehow, so I really just need to figure out how to do that again.
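
If it's the same mechanism I'm thinking of, apt can hold a package - assuming a stock Debian apt, something like (from memory, so double-check the man page):

  $ sudo apt-mark hold xscreensaver     # apt-get upgrade now skips it
  $ apt-mark showhold                   # verify
  $ sudo apt-mark unhold xscreensaver   # undo later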


> only security issues ... will be processed

That isn't actually the policy, for example:

https://www.debian.org/News/2016/20160402


> This update mainly adds corrections for security problems to the stable release, along with a few adjustments for serious problems.


... to another distro that actually ships up to date software?


Not a chance! Debian is the only sane Linux distro. Xubuntu would be alright if they ever decide to switch to systemd.


Fair enough, I only use insane distros. I quit Debian because I wanted to dev in Ruby and it wouldn't let me. Then I used Archlinux for a while, but having to use rescue mode every time I did 'pacman -Syu' got a little old, so I switched to a sane OS for a while. Seriously, aren't both Ubuntu and CentOS much saner than Debian?

Anyway, my sane period ended a while ago; it's full-on NixOS now. Why are people still using OSes that are not the future?


Having to rescue your system after every upgrade is either an exaggeration or you do weird things with your system. If the latter, quit that.

I had my Arch system running for over a year, upgraded at least once a week. Only once did I need the rescue disk, and that was my own fault (some um... "clever" pacman command I dreamed up ended up uninstalling some pretty important things like all of base).


It's a joke about how Archlinux works. If you do a "pacman -Syu" without reading the changelogs there is a good chance your system won't boot. You just can't be negligent. They might change the location of libc, they might switch to systemd, they change whatever they need to to be the coolest operating system on the block, and that's why I love them. But if you weren't hip to it, your system won't boot.


Arch isn't a distro for people who take issue with reading up on what they're actually doing to their system with upgrades, package installations, and configuration. That's why they have one of the most amazing wikis around. It's expected that you read all of the things.

I still don't agree with the unbootable bit though. That hasn't happened on any of my systems (two Arch machines, two Arch-based Antergos machines) in over a year without it being my own fault tinkering with something. I don't know man, maybe we're running completely different hardware or something.


Please note that I ran Arch for over 5 years, upgrading perhaps once or twice a year, rebooting only at power failures and at particularly bad security issues. The hardware was a Pentium 4. This was years ago (when I was in college).


That's basically guaranteed to break your system on Arch. Don't do that - it's your fault when it breaks like that.


> they might switch to systemd

I remember that day. I was an Arch user at the time. For me, everything went without a hitch, but I know a couple of my friends had a lot of grief over the switch, especially since they had no idea the switch was happening until after things broke.


Keep in mind that Ubuntu and its offshoots are based on Debian Unstable, not Stable.

And the problem the bug report is about is that Stable does not get bugfix updates, resulting in JWZ getting emails about long-since-fixed bugs.

Thus he put in the "obsolete version" warning to get Stable users to pester Debian maintainers.


If distrowatch can be trusted, they have done so as of 15.04...


The current version of Zawinski is very old.


I would recommend looking at NixOS. We're not perfect either, but we generally do a much better job at keeping our shit up to date:

https://github.com/NixOS/nixpkgs/blob/master/pkgs/misc/scree...

If there's something you need that's outdated, a fix is as trivial as opening an issue on GitHub, or sending a Pull Request with the revision and SHA256 bumped.


Another response to this issue:

https://mjg59.dreamwidth.org/41085.html


I'm more on the BSD side, so can someone give an explanation of why bug fixes are not being backported? I understand long-term stable, but I thought that was more an API thing.


The Debian policy states that all software in stable stays at the same version, with patches backported from upstream. I'm not sure it really makes sense as a general policy; at least I'm pretty sure there could be exceptions for some packages with few dependencies, such as xscreensaver. Slackware 14.0, which is way older than Jessie and even older than Wheezy, comes with the latest xscreensaver, 5.34, in its update stream.


There's no policy against backporting bugfixes, it just doesn't always happen due to lack of developers. If upstream doesn't provide backported bugfixes, it requires someone else to volunteer the time/resources to do it. For some packages, companies sponsor long-term maintenance branches, or particularly interested volunteers take it upon themselves to do it. For other packages, nobody steps up to do the work, so it doesn't happen.

I've found BSD ports pretty similar (mainly pkgsrc is the one I use). Some are very actively maintained, and others are much less actively maintained. Resources are limited, so pkgsrc maintenance happens only to the extent that someone makes it happen.


Oh yeah, depending on which one, it's a bit slow on the BSD side also. I was just looking at why you wouldn't backport bug fixes as a matter of policy, but I'll take your comment as truth, although the conversation in the thread seems a bit odd on that point. Thanks.


Heh, looks like it's another popcorn moment.

BTW: i guess this is an argument for having feature releases and bugfix-only releases. Or at the very least make it easy for distros to backport bugfixes (though i guess most will argue to give distros the middle finger and pull everything from git or equivalent).


> For the record, the timebomb for 5.34 will go off on 2017-04-01, ie, shortly after stretch's expected release date.

Debian, always providing the fun...


After skimming the thread, I'm tempted to say: tell jwz "tough luck" and keep xscreensaver, with the message patched out, just out of spite. After all, he put it under a permissive license, so he should not be surprised that people actually use that license to do whatever they want.


He admits as much in the post, so you don't really have any cause for spite. Respecting his request would improve everyone's experience.


Respecting his request would mean removing xscreensaver which would be bad for users.


I don't see why a package maintainer would patch out the warning instead of just packaging the new version. But if it comes to that, it's better for the users to get xscreensaver from a more reliable source than that maintainer.



