
Long term support considered harmful - luu
http://www.tedunangst.com/flak/post/long-term-support-considered-harmful
======
thaumaturgy
> _Frequent upgrades amortize the cost and ensure that regressions are caught
> early. No one upgrade is likely to end in disaster because there simply
> isn’t enough change for that to happen._

Oh, how I wish this were true.

For what it's worth, it's pretty true as far as OpenBSD is concerned, in my
experience. But OpenBSD is the exception here, not the rule. Everywhere else,
developers all seem to have embraced "break early, break often".

Eventually you get burned. For me, it was a routine should-have-been-minor web
server update where one of the packages I relied on suddenly became
unsupported and _every single hosted site stopped working_. Since there's no
way to roll back server upgrades, I had a marathon night involving building a
new server stack and migrating all hosted sites there by 8 a.m.

But you can't yell at anybody when that happens, because the answer's always
the same: it's not the developers' fault.

Who really believes sysadmins wouldn't update everything all the time if they
could? Old, dodgy, out-of-date servers exist exactly because updates are
butthole-puckering, because everyone's been burned at least once by a "minor"
update, and because once the damage is done, undoing it is horrifyingly
difficult.

~~~
edofic
"Since there's no way to roll back server upgrades"

There is, if you run a modern filesystem like ZFS or btrfs. You just take a
cheap snapshot before upgrading (this can be automated) and roll back if there
are problems. It even works with LVM.

~~~
KaiserPro
Sadly, ZFS isn't as widely available as I'd like, but you can use LVM to
provide snapshots.

It's not as friendly as ZFS snapshots, but it's at least available in CentOS 5:
[https://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/snapshot_command.html](https://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/snapshot_command.html)

(Sadly, some things need long term support.)

~~~
lsc
Rolling back an LVM snapshot involves dd'ing the data off the snapshot and
onto whatever you want your production disk to be (or just running off the
snapshot forever, which has... performance consequences with LVM).

Yes, LVM snapshots exist, but they are of limited utility compared to ZFS and
the like.

I've been experimenting with CentOS 6 and ZFS on Linux; so far it looks pretty
good. It handles failing consumer-grade hard drives vastly better than LVM on
md, and snapshots are inexpensive.

~~~
b101010
You can also roll back LVM snapshots using the merge option:

[https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/snapshot_merge.html](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/snapshot_merge.html)
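A sketch of that approach, with hypothetical volume-group and LV names (`vg0`, `root`), again as uncalled functions; note that merging into a mounted root LV is deferred until its next activation (typically a reboot):

```shell
# LVM snapshot-and-merge rollback. "vg0" and "root" are hypothetical names.
take_snapshot() {
    # Reserve 5G of copy-on-write space for changes made after the snapshot.
    lvcreate --size 5G --snapshot --name pre_upgrade /dev/vg0/root
}

merge_back() {
    # Copies the snapshot's preserved blocks back over the origin LV.
    # Requires the dm snapshot-merge target (RHEL/CentOS 6+).
    lvconvert --merge /dev/vg0/pre_upgrade
}
```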

~~~
lsc
nice. I had not seen that before... that makes it a lot more useful.

------
chollida1
> A one year support window isn’t too short; it’s too long.

Wow, I'm in very strong disagreement with the author on this point.

I don't think I could find a single person, technical or otherwise, who didn't
use the same operating system for more than a year. In fact, for almost
everyone who isn't technically inclined, the default is to upgrade their OS
only when they get a new computer.

What about every single car produced today? I doubt you'd want to upgrade them
every 6 months.

As an Ubuntu user, I find that I cross my fingers every time I run sudo apt-get
upgrade. About half the time I get broken builds, and once a year my OS will
just flat out crash from it, or fail to reboot in VirtualBox.

This view, while idealistic, is just so laughable that I can't begin to
believe the author is serious.

I've worked with some mission-critical systems (stock exchanges). They take
more than a year planning a big OS upgrade. To turn around and tell them that
they should be doing this every 6 months is just so out of touch with reality
that I don't know how to respond to the author :(

Many companies won't install a new piece of software until it's been proven
out in production for 6 months to a year by someone else...

~~~
lomnakkus
From a different perspective...

... perhaps if we were to actually get serious about doing small/incremental
updates all the time, we'd get better at actually doing them without incident.

I think one of the major obstacles is that the software stack has become
ridiculously (and hideously) complex over time. If we could converge towards a
world more like what's described here[1], then I think we'd be in a better
place overall. (Granted, the video is talking about deploying services, but
AFAICT there's little _essential_ difference between that and a modern GUI
system with all its DBUS interfaces and whatnot.)

[1]
[https://www.youtube.com/watch?v=GaHzdqFithc](https://www.youtube.com/watch?v=GaHzdqFithc)

~~~
vinceguidry
Small, incremental upgrades are something I'm finding essential to maintaining
sanity as the only dev at my company doing what I do. Waiting to upgrade only
increases the pain. Now, instead of having one potential issue to troubleshoot
every now and again, I have dozens when I finally do buck up and upgrade.

So I get in the habit of upgrading the dependencies of all the apps I work on,
every time I work on them. Issues happen, but only to one dependency at a
time. It's manageable.

What I would love is to eventually have a CI server do it for me. Every single
day, it would run bundle update, run the tests, and deploy unless there's a
problem. If there is a problem, it drops me an email with the trace and fixing
it becomes part of my morning routine.
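A rough sketch of such a job, using the tools the comment names (bundler, rake, Capistrano); the app directory, admin address, and deploy stage are placeholders:

```shell
# Nightly dependency-update job: update, test, deploy, or mail the failure.
# APP_DIR and ADMIN are placeholders; run nightly_update from cron once a day.
APP_DIR=/srv/myapp
ADMIN=me@example.com

nightly_update() {
    local log=/tmp/nightly-update.log
    cd "$APP_DIR" || return 1
    if { bundle update && bundle exec rake test; } >"$log" 2>&1; then
        bundle exec cap production deploy
    else
        # Send the trace so fixing it becomes part of the morning routine.
        mail -s "nightly bundle update failed" "$ADMIN" < "$log"
    fi
}
```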

If subtler problems surface this way, then I've discovered an oversight in my
test coverage, or an overly complicated architecture that I need to remove
dependencies from.

I'll probably implement this sometime this year. I'm thinking I'll want to
redo deployment instead of relying on Capistrano, then finally growing my own
CI solution. I'm slowly moving away from big monolithic apps to smaller,
homegrown solutions that do only what I want them to do. I've already
reimplemented provisioning and configuration management. I believe in DevOps
as code.

~~~
sarciszewski
> What I would love is to eventually have a CI server do it for me. Every
> single day, it would run bundle update, run the tests, and deploy unless
> there's a problem. If there is a problem, it drops me an email with the
> trace and fixing it becomes part of my morning routine.

Agreed. A while back, there was a security fix in the default php5-fpm
configuration (0666 -> 0660, I believe), and I skipped it when updating. My
website went down until I learned to change the owner of the process. And this
wasn't even a _dist-upgrade_.

Sometimes manual intervention is required.
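The manual intervention in question amounts to telling FPM which user may use the socket once it is no longer world-writable. A hypothetical sketch, wrapped in an uncalled function; the pool file path, user name, and sed edits are all illustrative and vary by distro:

```shell
# Tighten the php5-fpm socket (0666 -> 0660) and grant the web server access
# via ownership. Path and user name are illustrative; varies by distro.
fix_fpm_socket() {
    conf=/etc/php5/fpm/pool.d/www.conf
    sed -i -e 's/^;*listen\.mode.*/listen.mode = 0660/' \
           -e 's/^;*listen\.owner.*/listen.owner = www-data/' \
           -e 's/^;*listen\.group.*/listen.group = www-data/' "$conf"
    service php5-fpm restart
}
```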

In my case, I run updates almost every day. If I had to wade through 6 months
of backlogged updates, I would have wasted a lot more time identifying where
it broke. Smaller feedback loops are a win.

------
fidotron
Upvoted for some interesting points, but it's quite wrong, for the simple
reason that on a commercial timescale one year is nothing. There's a reason
Microsoft ends up being dragged into supporting old OS versions: running an
operating system is just a means to an end, which hopefully is doing something
useful. "Upgrades" have, understandably, become synonymous with unnecessary
pain and breaking things.

Maybe OpenBSD really is more rigorous about quality control, but (as an
example) if you were to just accept every Ubuntu update it wants to install
you'd be wasting a significant amount of time just ensuring your system works
properly.

~~~
deathanatos
> on a commercial timescale one year is nothing.

I've worked in commercial environments where common invocations of `tar` did
not work, because `tar` was a _decade_ out of date. I had to learn tools and
habits that had died before I even started programming. It wasn't pleasant. Do
not underestimate the age of some environments, or their stubbornness about
upgrading.

Recently, I've helped migrate software from Ubuntu Precise to Trusty, and the
number of differences makes things mildly frustrating. We don't get to just run
one or the other, and we can't just drop everything and move to Trusty. We
have to continue to support the old while we build support for the new,
briefly support both, transition to the new, then tear out the support for the
old. It's a lot of work when the changes are huge, but much more manageable
when I can take the changes piecemeal (it's one if statement, as opposed to
many, that I need to manage at any given time). That's in a production
environment.

I run Gentoo at home. I much prefer its rolling releases to Ubuntu and Debian,
which I ran alongside and before it, respectively. Things break every now and
then. It's a tad annoying, but it gets fixed sooner or later.

------
craigds
"Considered Harmful" Essays Considered Harmful:
[http://meyerweb.com/eric/comment/chech.html](http://meyerweb.com/eric/comment/chech.html)

I think the OP is being a bit naïve if they expect all users to upgrade to a
new OS every year. Upgrades of Ubuntu and OS X are usually quite painful
endeavours, fraught with UI breakage and new bugs to solve, and there's
usually little incentive for users to upgrade to every 6-monthly release.

IMHO Ubuntu has a good two-pronged approach: short-term support for most
releases, but an LTS every couple of years for those who don't want to handle
the pain of upgrading all the time.

Anecdotally, even upgrading to stay up-to-date with the LTS versions can be
difficult. A company I deal with is currently scrambling to ditch Ubuntu 10.04
before it loses support in April this year. That's 5 years old now. Companies
don't upgrade for fun.

------
zokier
Personally I think a 6 month release cycle is the worst of both worlds. Either
give me LTS and I'll do that one big upgrade every few years, or give me
rolling release where any breakage should be relatively localized and minor.
But with a 6 month release cycle you have to constantly be upgrading, and the
upgrades are still big "break the world" upgrades.

------
learnstats2
>Now on the one hand, this forces users to upgrade at least once per year.

The problem with this article is this assumption, which is not true in the
slightest.

There's no force that makes users upgrade: some users will upgrade when
they're told; some will wait 5 or more years, and may have good reason to.

> Nothing kills a bug report faster than “My network card worked in 4.4 but
> stopped working in 5.6.” Developers aren’t going to bisect five years of
> changes; you get to do that yourself.

This statement can translate into _don't ever upgrade_. Is that really the
goal?

~~~
sarciszewski
No. Ideally, the goal is that users will upgrade as soon as a fix is
available.

~~~
LukeB_UK
The author makes it sound like the developers won't care about the bug because
the user waited from 4.4 until 5.6 to upgrade.

In this case the user is better off not upgrading if the developers aren't
going to do anything about it.

~~~
sarciszewski
No, in this case, the user should have been upgrading this whole time, not
putting it off for 3+ years.

Downvote me all you want. Tough love is what's needed here. Hiding my comments
isn't going to make people's negligence any less _their_ fault.

~~~
obsurveyor
Their network card still would have broken at some point and they'd be in the
same bind they are in now: A non-working critical part and at the mercy of the
developers to identify and fix the bug.

~~~
ams6110
Exactly this thing happened to me in an OpenBSD upgrade. My network card no
longer worked. That prompted me to actually read the release notes, and I
quickly found the reason.

------
kbenson
Every time this topic comes up, we have people talking about really different
things without clearly explaining what they are referring to, which causes
confusion and leaves people not understanding each other's reasoning.

I've read the following arguments already, but with portions of them left
implicit. See if you can work out why the people are talking at cross
purposes:

\- "I generate VM server images for all my needs, and deploy to virtualization
infrastructure behind load balancers to handle services. I treat the OS like
an application, like the rest of the stack, and all my data is abstracted to a
data layer. I just generate a new image with patches and test it, then deploy
it if it works in my test suite."

\- "I have hundreds of servers in dozens of roles with different software
needs, and I need them to be _secure_ and _stable_ in a timely fashion, and I
can't spend multiple weeks achieving that. Long term support and back-patching
allows me the time to plan needed large changes in infrastructure without
having to spend all my time managing updates, software changes and
configuration changes multiple steps down the dependency graph."

\- "I have a few servers with a few roles, or many servers with one or two
roles, and I can manage frequent updates just fine, and it allows me to take
advantage of the newest features, get security updates immediately as the
software authors fix it, and I don't have to worry about end of life of
software."

\- "I have an application stack with multiple dependencies, and I just make
sure to update my stack as I make changes to the software. I would love/have
set up continuous integration software to build and test everything as I go,
so I know if it works or not before taking it live."

As someone who's been in all of these situations at different points, often
several at once, let me be clear: _Unless you have an argument that addresses
ALL of these situations, you really haven't thought through the issue._

~~~
hawleyal
I think the author is advocating that those scenarios should converge. His
argument is that the second scenario is an anti-pattern and should be
corrected to be more in line with the others. It might be a pipe dream, but
it's still his perspective.

~~~
kbenson
They serve different needs. The second scenario may well be better served by
moving towards the first, but that's not a quick project, especially when
uptime and reliability are important. Even then, for the third scenario a full
VM infrastructure may be overkill. This whole topic can be summed up as:
"Examine your needs, examine your options, make the best choice you can at the
time, and try to make your life easier by leaving pathways to change your
choice with the least problems based on expected future needs."

Just because someone _looks_ to be doing something similar to you doesn't mean
their needs and constraints aren't quite different. I think this is the big
thing most people miss.

------
slasaus
It's funny that just today, before I found out about the glibc vulnerability,
I was reconsidering whether I really want to upgrade my Ubuntu 10.04 mail and
web servers to Ubuntu 14.04 LTS. I was prompted by reading some bad things
about 14.04 [1]. I've looked at the M:Tier binpatches and package upgrades for
OpenBSD, looked at FreeBSD, and at Debian with its experimental LTS, but
eventually I'm still in favor of the 5-year support for Ubuntu Server.

I have bad experiences with upgrading production machines in-place, whether
OpenBSD, Ubuntu or Debian, and always install a new machine (VPS) side-by-
side, which is really the only stress-free, guaranteed way to go in my
opinion. Having to do this only once every 5 years is a lot nicer than at
least once a year. Ubuntu's good security backports and minimal breakage
(automatic security upgrades on my Ubuntu servers have been working almost
flawlessly [2]) make for the lowest-maintenance, most stable and secure setup
I can imagine.

OpenBSD having only one year of support, no binpatches for the kernel, and no
stable security fixes for packages is the reason I only use it for anything
that can be done by the base system (backup host and nameserver). OpenSMTPD
looks very promising, but I would need supported amavisd packages; the same
goes for httpd, which needs PHP in my setup. Despite its limited use for me, I
still love OpenBSD and the mindset that stewards it. If only they had longer
support and binpatches for kernel and packages :)

[1] [https://tim.siosm.fr/blog/2014/04/25/why-not-ubuntu-14.04-lts/](https://tim.siosm.fr/blog/2014/04/25/why-not-ubuntu-14.04-lts/)

[2] The mail config was overwritten once after an automatic security update of
dovecot in 10.04; I quickly recovered it with etckeeper (/etc in git).
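For reference, a recovery like the one in [2] looks roughly like the following, assuming etckeeper's default git backend (the dovecot file path is illustrative):

```shell
# Restore a config file that an automatic update overwrote, using
# etckeeper's git history of /etc. The dovecot path is illustrative.
recover_config() {
    cd /etc || return 1
    git log --oneline -- dovecot/dovecot.conf      # find the pre-update commit
    git checkout HEAD~1 -- dovecot/dovecot.conf    # restore that version
    etckeeper commit "restore dovecot.conf after auto-update"
}
```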

~~~
tghw
Thanks for the tip on etckeeper! I've been wanting something like that for a
while.

------
allendoerfer
His reasoning makes sense but, as all developers tend to do, he spots an error
in a system and wants to switch to another or invent a new one. You know the
xkcd.

Often the solution is just to work together and fix the old one. For me the
logical fix would be to patch all kinds of malicious, undefined or out-of-spec
behaviors in LTS releases in short cycles, regardless of whether the developer
thinks it is security-critical or not. To make this more feasible you could
either pay for it or use a minimal base system separated from user space. Both
exist today.

~~~
JoachimSchipper
That's both a lot of work and a lot of churn, though - people choose LTS
because they _don 't_ want things to change!

~~~
allendoerfer
In a sense, things do not change; they stop being changed from an assumed
state.

------
Sanddancer
There are a lot of problems with this argument, especially when you get into
the ports section. For example, the version of Go shipped in 5.4, 5.5, and 5.6
is different in all three releases, which means you're going to have to
rebuild packages, run regression tests, etc. every six months to make sure
some new feature doesn't run into your code, or some bug isn't introduced.
Similarly, in the case of the base install, you are still going to have to run
regression tests there to ensure that any deprecated or removed features don't
impact your workflow.

These things are expensive from a development and ops perspective, which is
why most commercial software vendors tend to stick to releasing for a few
platforms they know are going to be supported for longer than a year or two.
Yes, OpenBSD upgrades are mostly painless, but still, bugs can and do happen,
and expecting everyone to be able to just drop everything and rebuild is,
plain and simple, unreasonable.

------
jerf
In theory, I agree. I've often likened code development to muscles: you get
good at what you do, and what you don't do atrophies. If you push changes
every hour, you get very good at making sure that works. If you push them
every three months, you get good at pushing them every three months, and panic
when something requires you to move within days. And so on. This applies to
all sorts of development process aspects (test a lot, and you get good at
testing; fail to test, and it becomes impossible to test later, etc.).

In practice, neither what tedunangst nor I think matters, because once you
ship something to a customer in any way, they're not going to upgrade it.
Offer automated upgrades and they'll demand a way to turn them off. It's not
just developer boxes we have to worry about, unfortunately.

------
CrLf
Forcing users to upgrade frequently has one outcome, and one outcome only:
they will stop upgrading, period. If you give users only two choices, they
will pick the one that requires less effort.

There is more to life than keeping up with the constant upgrade treadmill.
It's already bad as it is. Some Linux distributions are on a 3-year support
cycle, which is too short. If you have many servers, especially many servers
running a lot of different things, upgrading every 3 years means having people
doing nothing but upgrades.

Once you fix your applications to support the changes in the next OS release,
that release is almost out of support...

~~~
wmf
_they will stop upgrading, period_

Has that been observed with Chrome, ChromeOS, iOS, etc.?

~~~
CrLf
That's an apples-to-oranges comparison. To upgrade Chrome, a consumer depends
only on himself. To upgrade a server OS, the "user" (the sysadmin) depends on
a number of other people with different priorities. And that's _if_ the people
he depends on are still with the company...

I've been faced innumerable times with impossible upgrades (nobody cares about
upgrading besides you, so nobody cares to fix whatever is preventing the
upgrade). And even when the upgrade is possible, it often takes years.

That same user who upgrades Chrome on his laptop is still stuck with IE7 at
work, because his company relies on it to pay his salary.

------
gtirloni
Oh please, all it would have taken for this "GHOST" vulnerability not to
happen was for someone at glibc to make the right call about how to treat a
buffer overflow in a function dealing with external data. They didn't; so
what? The lesson here is that Red Hat and other companies should do additional
review if they are shipping something branded as super secure and stable.

Long term support is great. People screw up sometimes. Additionally, it's
about time core software components get more attention from companies with the
big bucks (and that are profiting from it).

~~~
vezzy-fnord
Personally I'm of the idealistic belief that glibc is really starting to show
its corrosion, and should be replaced by a clean-room, standards-compliant and
elegant design like musl:
[http://wiki.musl-libc.org/wiki/Functional_differences_from_glibc](http://wiki.musl-libc.org/wiki/Functional_differences_from_glibc)

Of course, this would be a massive change for the Linux ecosystem at large. It
is my hope that projects like Sabotage Linux will eventually push most major
software to become musl-compatible, though.

The source of the GHOST bug was in NSS, which musl lacks by design.

~~~
GregBuchholz
Speaking of corrosion, maybe we should be building systems with a language
which avoids most of the problems of C.

~~~
Sanddancer
I think the OpenBSD team has more than definitively shown that C isn't the
problem; programmers' attitudes are the problem. While I disagree with them on
the premise of upgrading early and often, I do agree with them on active
reviews, active documentation, and enforcing code hygiene, which prevents the
sort of problems that proponents of other languages harp on about when talking
about C.

~~~
MaulingMonkey
> I think the OpenBSD team has more than definitively shown that C isn't the
> problem; programmers' attitudes are the problem.

To err is human, no matter what your attitude. While having the right attitude
can help, this line of argument, that it's the programmer's fault and not his
tools, is part of the very attitude that causes problems!

Any decent programmer embraces the fact that they'll make mistakes, and beyond
simply trying to improve, they'll also embrace tools that help them
compensate: static analyzers, fuzzing frameworks, safer programming languages.

Perhaps they'll choose tradeoffs that sacrifice some of these options. I
certainly do, coding in C++ all day, a language with at least 40 references to
"undefined behavior" in the '03 standard alone. But that doesn't mean those
tools aren't worth considering.

> I do agree with them on active reviews, active documentation, and enforcing
> code hygiene, which prevents the sort of problems that proponents of other
> languages harp on about when talking about C.

I see zero inherent reason to rely on extreme vigilance by programmers to
catch errors that better tooling can catch, with the occasional shared blind
spot where 5 programmers all miss an uninitialized variable.

You can use these practices to help compensate for C's weaknesses if that's a
tradeoff you need to make, but that's quite an opportunity cost: it leaves
programmers with far less time to catch other logic bugs and ship features.

There's a reason nobody preaches about how it's going to be the year of the
OpenBSD desktop (and that's not a knock at OpenBSD!)

Vigilance also requires the right knowledge. A coworker recently revealed to
me he'd only recently learned that C++ member variables will generally have
unspecified values if you don't initialize them. He thought other threads were
to blame for garbage values.

Every now and then, I check to see if any of the compilers we're using have
added a warning I can enable that will catch leaving members uninitialized
after a constructor completes. So I can enable it. And configure it as an
error. So everyone who builds the code knows there's a problem. Immediately.

So far, no dice.

------
zdw
The downside of this is that OpenBSD is pretty difficult to upgrade in a clean
manner, given its "uncompress the tarball over /, then clean stuff up
manually" method.

I really like OpenBSD, but it desperately needs a modern package management
system. This is frankly my largest sticking point with it: it's faster and
much cleaner in most cases to wipe, reinstall, and then apply configuration
than to try to upgrade.

~~~
jkot
How about Debian with a FreeBSD kernel?

~~~
elektronjunge
The new pkg tool in FreeBSD is pretty good and meshes nicely with the ports
tree. The marginal improvements apt offers for third-party-built packages
aren't really worth giving up the rest of FreeBSD for. It also doesn't help
OpenBSD, which really needs a new package manager.

------
cesarb
A problem with the lack of long term support is that you have to keep moving.

The most important part of "long term support" distributions is that breaking
changes are kept to a minimum. If a "long term support" distribution releases
an update to, for instance, glibc, you can expect that applying that update
will change nothing that other parts of your software stack might depend on.

The dependencies might be subtle; for instance, a new version of a database
server might have optimized its query planner, which happened to make one
particular query your software does a couple of percent slower, which led to
it using a few more seconds to do its processing, enough to push it over the
timeout limit for a different part of the system. So the only sane way to
avoid breaking changes is to avoid all changes.

The opposite would be, as advocated in this post, frequent upgrades. "Frequent
upgrades amortize the cost and ensure that regressions are caught early", but
that means you are dealing with upgrades and regressions all the time. You
arrive at work in the morning, planning to write a new feature; but an upgrade
has just arrived, and it needs a few changes to your project. You develop,
test, and deploy these changes; in the meantime, another upgrade has just
arrived, needing more changes, to something you had changed just a few days
ago. The day ends, and you didn't even start developing the new feature. You
spend more time chasing the upgrade stream than doing productive work.

Long term releases "batch" the changes. When several changes affect one part
of your software, you only have to deal with them once. Sometimes, you can
even discard that part of your system and do something else, while with a
continuous change stream, you might have wasted time adjusting little by
little.

A somewhat relevant post from Joel on Software:
[http://www.joelonsoftware.com/articles/fog0000000339.html](http://www.joelonsoftware.com/articles/fog0000000339.html)

------
ak217
Spoken like someone who has never had to maintain a large production system.

This notion is handily disproved by the market, though. Red Hat and Canonical
enjoy commercial success and mindshare precisely because they provide stable
long-term support platforms on which others can build software. Most of
Canonical's worth is actually embodied in their commitment to LTS releases,
which provide a sweet spot between stability and the glacial pace of RHEL.

------
zokier
Isn't a 6 month release cycle with support for the past 2 releases essentially
the Fedora release model too? From what I've heard (and experienced), that
hasn't been massively successful.

> No one upgrade is likely to end in disaster because there simply isn’t
> enough change for that to happen.

That might be true for OpenBSD, but in Linux land the rate of change certainly
is great enough to cause major breakage even at 6 month cycle.

------
jrochkind1
The only thing that would make this realistic is if open source developers
prioritized backwards compatibility very highly.

Of course, there will always be some bugs anyway. And it varies across
language (or other open source affinity) communities. But in many communities,
developers aren't really even _trying_ to ensure backwards compatibility.

If developers committed to backwards compatibility across successive versions
for X number of years, then you could update to the latest release for that
number of years while worrying less about breaking things, rather than needing
to freeze the versions you use for that number of years.

Yes, this would significantly increase developer hours on open source
projects. Nothing comes for free.

But wouldn't those developer hours be better spent centrally, on ensuring
backwards compatibility in the projects themselves, as opposed to every
developer of every consumer of the project having to deal with backwards
incompatibility on every upgrade? Or every distro having to backport patches?
(This latter point is more debatable, which is exactly why we have the status
quo.)

------
jacquesm
There is a cost to upgrading and there is a cost to staying behind. As soon as
the cost to stay behind outweighs the cost to upgrade I upgrade, but no
sooner. Otherwise I'd just be throwing away money and time. The bigger problem
here is that the cost to upgrade is only known after the fact. What should be
a routine job can easily spiral out of control into a marathon of misery.

------
contingencies
Reading the title, I assumed this was a rant about how proper ops processes
have replaced the need for LTS releases and the demonstrably bad practices
they can be seen to encourage. IMHO, that would have been a more interesting
and valid point.

The current post is a straw man argument for a few reasons.

First, because vulnerabilities will be exploited whether the window of
availability is 1 day, 100 days, or 1000 days (like this one) and likewise,
upgrades will be missed, for various reasons, whether or not they are
encouraged.

Second, the author spuriously implies that a 6-month sliding upgrade window
("one size fits all") is superior, easier, and more appropriate for everyone,
through a combination of broad assumption and vague hand-waving about
developer time efficiency. Rubbish.

If you want newer code, then ASAP is the go, and BSD-style monolithic releases
are _bad_... something closer to the versionless-OS ideal of constant,
incremental, package-wise release processes such as Gentoo's _portage_ (BSD-
inspired!), where you can even install a _-9999_ (_git HEAD_) version of many
packages, would be ideal. Obviously, with such an approach, stability caveats
must be considered (as they are). Thus, both FreeBSD and OpenBSD actually
represent a middle ground between Gentoo and commercial Linux vendor LTS, with
OS-wide release processes incorporating some greater collective package and
package-interoperability testing, combined with the package-wise release of
things through the ports trees.

_What's the real ideal here?_ A general purpose means to build working,
tested, maintainable, secure systems with minimal effort for a broad audience.

_How to get there?_ Clearly, not by whinging about the external release window
frequencies of volunteer-based projects, as any frequency still produces bugs,
and no process is perfect. The answer, I believe, is time-honored: careful
design for failure and minimalism, and good process. Test. Measure. Remove
surplus features and components. Iterate.

------
IshKebab
Can we have a ban on "X considered harmful" titles? Have some originality.

------
Silhouette
This is not an argument against long term support. It is an argument against
using OpenBSD for any project you need to work for more than one year.

------
reality_czech
Yes, we should all step forward into the wonderful world of running brand new
software all the time in production. I'm sure it will all be much more secure
than relying on year old software that is being actively maintained.

Is the date on this one supposed to be April 1st?

------
jeffdavis
PostgreSQL has a 5-year support window.

I'm curious what negative consequences the author thinks PostgreSQL suffers as
a result of that policy.

~~~
silvestrov
It's a lot easier with a 5-year window for PostgreSQL: kernel developers have
to support all kinds of weird semi-buggy hardware that they can't buy in their
country, they have to make the kernel work with badly written GPU drivers, and
they have to deal with interrupt-level code, which is notoriously difficult to
debug.

~~~
jeffdavis
The author's primary example for the "LTS is bad" claim was glibc, which is
not a kernel.

Are you saying the "LTS is bad" claim only applies to kernels and other things
very close to the hardware (closer than a database)?

------
coherentpony
"[Useful popular thing] considered harmful."

