
Server Retired After 18 Years Running on FreeBSD 2.2.1 and a Pentium 200MHz - joshbaptiste
http://www.theregister.co.uk/2016/01/14/server_retired_after_18_years_and_ten_months_beat_that_readers/
======
krylon
I remember a discussion on a FreeBSD mailing list, around 2003-2004, where
people bragged about the impressive (though in comparison to this headline,
puny) uptimes of a few years.

One of the developers remarked that while he was proud the system he worked on
could deliver such uptimes, an uptime of, say, three years on a server also
meant that a) its hardware was kind of dated and b) it had not received kernel
updates (and probably no other updates, either) for just as long. (Which might
be okay if your system is well tucked away behind a good firewall, but is kind
of insane if it is directly reachable from the Internet.)

Still, that is really impressive.

~~~
sagischwarz
I always wondered how people got to these uptimes. I have to reboot my Linux
box at least once every other week to install updates.

~~~
hyperknot
You only need to restart when there is a kernel update, and the frequency of
kernel updates depends heavily on the distro used. Debian stable, for example,
although it ships ancient versions of packages, is a great OS for such a use
case, as kernel upgrades are really infrequent. Have a look at the changelog
frequency of Squeeze [1] or Wheezy [2].

[1] [http://metadata.ftp-
master.debian.org/changelogs/main/l/linu...](http://metadata.ftp-
master.debian.org/changelogs/main/l/linux-latest-2.6/linux-
latest-2.6_29_changelog) [2] [http://metadata.ftp-
master.debian.org/changelogs/main/l/linu...](http://metadata.ftp-
master.debian.org/changelogs/main/l/linux-latest/linux-latest_46_changelog)
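If you're unsure whether your box is one that actually needs a reboot, a rough check is to compare the running kernel against the newest one installed on disk. This is a hedged sketch, not what apt itself does (some Debian-derived systems also drop a `/var/run/reboot-required` flag on kernel upgrades), and it assumes each installed kernel leaves a directory under `/lib/modules`:

```shell
#!/bin/sh
# Rough "is a reboot pending?" heuristic: compare the running kernel
# version against the newest kernel installed under /lib/modules.
reboot_pending() {
    running=$(uname -r)
    newest=$(ls /lib/modules 2>/dev/null | sort -V | tail -n 1)
    if [ -n "$newest" ] && [ "$running" != "$newest" ]; then
        echo "reboot pending: running $running, newest installed $newest"
    else
        echo "kernel up to date: $running"
    fi
}
reboot_pending
```

On a long-lived Debian stable box this will say "up to date" for months at a stretch, which is exactly why those uptimes are achievable.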

~~~
aroch
If you update a central library (e.g. OpenSSL), you'll have to restart other
programs in order to flush the in-memory copies they're still using. If you're
running a Debian server, two packages worth including in your base install are
debian-goodies and needrestart: the former bundles a very helpful little
script called "checkrestart", and the latter is an updated, systemd-compatible
equivalent. Both use `lsof` under the hood to determine when and why package
updates require a restart to take full effect.
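For a feel of what those tools figure out under the hood, here is a minimal sketch that scans `/proc/<pid>/maps` for shared libraries whose on-disk file has since been deleted or replaced. The real checkrestart/needrestart are considerably more thorough (they also map processes back to packages and services), so treat this as an illustration only:

```shell
#!/bin/sh
# List processes that still have a deleted (i.e. since-upgraded) shared
# library mapped. Reading /proc/<pid>/maps directly avoids depending on
# lsof's output format; permission errors for other users' processes
# are silently skipped.
stale_lib_procs() {
    for p in /proc/[0-9]*; do
        if grep -q '\.so.*(deleted)' "$p/maps" 2>/dev/null; then
            printf '%s\t%s\n' "${p#/proc/}" "$(cat "$p/comm" 2>/dev/null)"
        fi
    done
    return 0
}
stale_lib_procs
```

Any pid it prints is running code that no longer matches what's on disk, which is the usual post-upgrade restart candidate list.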

~~~
bgray
But do you? You really only need to restart the processes using those
packages. Strictly speaking, only a kernel update (specifically a security
update; bug fixes may not be important) requires a full reboot.

~~~
jerf
Yes, you can restart all processes using SSL.

However, I've often been in situations where I reboot anyhow, because
rebooting means I'm 100% confident the old code is gone, whereas if I try to
get clever and avoid the restart, I'm significantly less confident. Depending
on how hard it is to validate the security bug, that can be a problem.

Plus, for much of the past 20 years, for many computers, if you're going to
restart all services anyway, adding a reboot as the last step doesn't add all
that much to the downtime. Server-class hardware often has things that make
that untrue (stupid RAID cards), but for everything else you were often only
adding, say, 25% for the actual reboot.

~~~
ultramancool
You don't need to be 100% confident the old code is gone - just 100% confident
the old code is no longer exposed to the network - check your sockstat/netstat
and call it a day.
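On a Linux box, that check might look something like the sketch below: take the pids behind listening TCP sockets (as reported by `ss`, the modern `netstat` replacement; run it as root to see other users' processes) and flag any that still map a deleted library. On FreeBSD you'd start from `sockstat -l` instead. Rough illustration, not a hardened audit:

```shell
#!/bin/sh
# Flag network-exposed processes that still map old (deleted) library
# code: cross-reference listening-socket pids from `ss` against
# /proc/<pid>/maps entries marked "(deleted)".
exposed_stale() {
    ss -tlnp 2>/dev/null | grep -o 'pid=[0-9]*' | cut -d= -f2 | sort -u |
    while read -r pid; do
        grep -q '(deleted)' "/proc/$pid/maps" 2>/dev/null &&
            echo "still running old code: $(cat "/proc/$pid/comm" 2>/dev/null) (pid $pid)"
    done
    return 0
}
exposed_stale
```

If it prints nothing, everything listening on the network was started after the upgrade, which is the "call it a day" condition.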

------
ljosa
The authoritative DNS server for pvv.ntnu.no is still a MicroVAX II from the
late 1980s. It runs an (up-to-date, I think) NetBSD. Logging in by SSH takes
several minutes, even with SSH v1.

~~~
olemartinorg
Sounds just like the kind of half-crazy stuff NTNU students do when you give
them some time to tinker. I heard a story from Samfundet where they were
displeased with the speed of their key/value database for some payment
processing stuff, so they replaced it with BIND.

Even though the thing is old, I'm guessing the uptime is not that impressive?

~~~
CaptSpify
Do you have a source for that? I'd be interested in seeing how they did it

~~~
hucker
He's probably referring to itkacl, which we use for authorization/ACLs (not
payments). The README (and the code) can be found here:
[https://git.sesse.net/?p=itkacl;a=blob;f=README;h=3dcb059f30...](https://git.sesse.net/?p=itkacl;a=blob;f=README;h=3dcb059f306140838f22dd6fd8541a5a9e5a26b8;hb=HEAD)

------
icebraining
[http://lists.ausnog.net/pipermail/ausnog/2016-January/034052...](http://lists.ausnog.net/pipermail/ausnog/2016-January/034052.html)

~~~
codemusings
Newsworthy! Jokes aside, I just can't get over the writing style of The
Register:

    
    
      "The post, made by a chap named Ross..."
    
      "The drive's a Seagate, for those of you..."

~~~
oneeyedpigeon
a) It's British, which might explain some of the language and humour (with a
"u").

b) I keep forgetting about 'El Reg', but whenever I return there, I enjoy my
visit. It has a strong, unique tone which I find a breath of fresh air amongst
all the dull, neutral news reporting out there.

~~~
laumars
_> It's British, which might explain some of the language and humour_

It's better explained by simply describing the site as a tabloid. Tabloids
aside, British news reporting isn't typically that informal.

 _> I keep forgetting about 'El Reg', but whenever I return there, I enjoy my
visit. It has a strong, unique tone which I find a breath of fresh air amongst
all the dull, neutral news reporting out there._

Personally I find the site appalling. It's up there with The Daily Mail
for rewriting other people's articles. Granted, this is basically how 99% of
reporting happens these days, but these kinds of tabloids often miss the
entire crux of the original piece they're plagiarizing due to their
editorialisation. For example, their articles on OpenSSL parallel the Daily
Mail's absurd anti-immigration propaganda. And often, again like the Daily
Mail, El Reg will completely misreport a story just for the sake of having a
clickbait headline.

Frankly, I'd rather have dull, neutral news reporting than biased
misinformation.

~~~
pjc50
Andrew Orlowski's pro-copyright cheerleading was what turned me off it. He can
be very nasty about people he disagrees with.

~~~
teh_klev
Yeah, I'm not a huge fan either and find him somewhat objectionable.
Orlowski's the only one who used to turn off commenting on his articles, which
is half the fun of the Reg, though he seems to do this less now. But every
publication, in print or online, is going to have a writer you just never
get on with intellectually.

There are plenty of other good contributors, such as Duncan Campbell [0] and
Alistair Dabbs [1], who more than make up for the Orlowski deficit.

[0]:
[http://www.theregister.co.uk/Author/3066](http://www.theregister.co.uk/Author/3066)

[1]:
[http://www.theregister.co.uk/Author/2802](http://www.theregister.co.uk/Author/2802)

------
crishoj
In fairness, from the article it's not actually clear whether the server
literally had an uptime (as reported by the OS) of 18 years, or whether it had
simply been in constant service (modulo power cuts) for 18 years.

~~~
orblivion
The article said "24/7".

~~~
crishoj
Your average electrical utility also provides electricity "24/7".

~~~
digi_owl
Any proper server should be on some kind of UPS, no?

~~~
Thrymr
Most UPS batteries won't last anywhere close to 18 years, either. Can you hot-
swap your UPS battery as well?

~~~
Symbiote
Maybe a dual PSU, with each connected to a different UPS.

Or some kind of bypass switch.

Or a really fancy UPS, such as might be used in a hospital or laboratory.

------
theandrewbailey
Ars Technica had an article a few years back about a machine that was up for
16 years. Had pics, too! [http://arstechnica.com/information-
technology/2013/03/epic-u...](http://arstechnica.com/information-
technology/2013/03/epic-uptime-achievement-can-you-beat-16-years/)

------
keithpeter
Having read and enjoyed this thread and the later follow-up thread on The
Register, I was struck by the number of commenters who could not clearly
remember the dates/machine types or who posted anachronistic descriptions.

People here forging ahead with innovative hardware: why not just record brief
details of dates and setups in the back of a diary or something? In 30 years'
time, you'll be able to start threads like this!

------
geggam
I tossed out a similar system not too long ago:

A Pentium Pro 180MHz running OpenBSD with 64MB RAM and a Perl BBS averaging
around 10k hits/day.

It wasn't worth the electricity to run that thing. It still worked when I put
it out on the corner.

~~~
unethical_ban
It belongs in a museum!

------
SEJeff
This was an old RHEL4 external dns server I ran at $last_job:

[http://www.digitalprognosis.com/pics/bye-bye-
uptime.png](http://www.digitalprognosis.com/pics/bye-bye-uptime.png)

I was sad that we had to shut it down, but we were migrating our primary colo
to another city and were going to retire all of the hardware. I'd been
manually backporting BIND fixes, building my own version, and had to do some
config tweaks when Dan Kaminsky released his DNS vulns to the world.

It is always a sad day to retire an old server like that, but 18 years... What
a winner!

Edit:

But 1158 days for an old Dell 1750 running RHEL4 isn't too bad, considering it
serviced all kinds of external DNS requests for the firm. Its secondary didn't
have the same uptime, due to constant power issues in the backup datacenter
and incompetent people managing the UPS.

~~~
dozzie
At my previous job I had a server for a big, commercial version control system
(an SCM, as the people selling those call them) that was running RHEL4 and had
a similar uptime. I remember my team celebrating the nice round number of 1024
days of uptime.

------
rogeryu
Almost as impressive is the fact that in 18 years, the electricity had no
downtime.

~~~
kuschku
It's not too uncommon to have such uptimes for electricity.

The last blackout I personally saw was in 2006, the famous one that killed
half of Europe's grid.

With an average yearly downtime of 17 minutes, it's not hard to find places
that have had no downtime for decades.

~~~
frik
Half of Europe? Probably not.

~~~
kuschku
Yes, half of Europe.

> Twenty eight seconds later, an electrical blackout had cascaded across
> Europe extending from Poland in the north-east, to the Benelux countries and
> France in the west, through to Portugal, Spain and Morocco in the south-
> west, and across to Greece and the Balkans in the south-east.

[https://en.wikipedia.org/wiki/2006_European_blackout](https://en.wikipedia.org/wiki/2006_European_blackout)

There were even some TV documentaries about it, and in some areas it continued
for days.

~~~
frik
Parts of several countries were affected; other parts had no outage. Depending
on your viewpoint, the blackouts a few years earlier in Italy and the US were
more widespread.

------
jedberg
I used to run the FreeBSD box for sendmail.org. When I left that job in 2001
it had already been running for 2+ years.

Considering that the datacenter it was in is now the Dropbox office, I'm
guessing it had to be shut down and moved at some point, but 2+ years seemed
like a really long time even then!

FreeBSD is just really good at lasting forever.

~~~
rconti
I don't get the "even then" part of your statement. If anything, big uptimes
are a thing of an older, more monolithic era when patching was less common. It
was extraordinarily common to have multi-year uptimes on important servers,
whereas these days they seem far less common.

In my experience, the past decade has been a time of trying to be more
rigorous than ever about regular patching, routine reboots, and respinning VMs
to ensure that your provisioning systems work as intended, so you never again
end up with these monolithic irreplaceable systems.

These days, the only time I find servers with big uptimes is when they've been
neglected -- they're some old bastard child of some former employee or the
ancient rickety crap some department is too afraid to touch... And even then,
it doesn't raise an eyebrow until it's 1000+ days.

~~~
jedberg
Because even 15 years ago it was rare for a server to have such a high uptime:
you usually had to unplug and move it every once in a while, and hardware
still didn't last for years without needing replacement.

------
wazoox
I've always had many Unix machines with high uptimes around. My home PC
(Linux) typically reboots 2 or 3 times a year. My office DNS server currently
has 411 days of uptime and is the best of my bunch ATM.

In 2002 I had installed, on the machines under my guard, a program that
reported uptime to some website. One of my machines, an SGI Indy workstation,
had a high uptime, about 2 years. Then a new intern came, and we installed him
next to the Indy. Unfortunately, his feet pulled some cables under the desk,
unplugged the Indy, and dashed my hopes of a record :)

------
emcrazyone
Oh man, I have them so beat! I have a Slackware Linux box with similar specs:
a 200MHz Pentium, 32MB of RAM, and I think an old 10GB Barracuda 80-pin SCSI
drive connected to an Adaptec PCI SCSI card. Every so often the hard disk
starts making a high-pitched noise, but it throws no errors and the noise goes
away after a few minutes. It sits on a UPS, and I probably have an uptime of a
few years on it now. It has been running nearly 24/7 since 1996! It was only
powered off when I needed to move the box between a home office and a few
rented offices over the years.

When it was in the basement of my home office, I would sometimes hear its
disks whine as I was working out (lifting weights and such). It was even in my
basement through parties in my early bootstrap years.

I originally bought it to run WinNT 4.0 for a new company a friend of mine and
I bootstrapped. I would guess I put Slackware on it a couple of years later.
It's running a 2.0 Linux kernel. It's not exposed to the public Internet.

It used to be the local Samba, DHCP, and DNS server for the company. I
eventually upgraded to new hardware and kept this server around for redundant
backups. I develop software, so copies of my Git repositories find their way
onto this box each night. It is in no way relied upon, other than out of
convenience when another server is down or being upgraded, etc.

At one point the box was in the basement of my home when a small amount of
water got onto the basement floor, and because the box sat just high enough on
rubber feet, there was no damage. Occasionally I go back there and pull the
cobwebs off it.

There is no SSL on it. We still telnet into it or access the SMB shares for
nostalgia. It's sort of a joke in the office these days to see how long it
will last, or whether it will simply outlast us.

~~~
autoreleasepool
Where do you all live where the power doesn't go out a couple times a year?
I'm from South Florida and I'm jealous. Tropical climates are no fun for
always-on computers. I guess I don't need to worry about static discharge when
I build my boxes, so that's a plus.

~~~
emcrazyone
Michigan - and yes, power is awful in certain parts, especially in
subdivisions where much of the power cabling is above ground and strung
through trees. So bad that I was considering a whole-house generator at one
point, but I eventually moved to an industrial park for the expanding
business. I also run equally old Tripp Lite UPSes. They run off 220V. Every
couple of years we have to replace the batteries in them; other than the
batteries, they still work.

The UPSes pretty much keep things running for a couple of hours. Most of my
mission-critical hosting servers are in data centers these days, and for the
office, our servers sit on the UPSes, where at most we lose power maybe an
hour every couple of years in the office park, and much more often in a
residential area.

When that large blackout of 2003 occurred, we were down for a couple of days.
[https://en.wikipedia.org/wiki/Northeast_blackout_of_2003](https://en.wikipedia.org/wiki/Northeast_blackout_of_2003)

------
MikeNomad
Great run for all-original equipment. I worked at Shell's Westhollow Research
Center in the mid-90s. We handled the nightmare of standardizing the desktop
space (for the first time ever).

A lab was decommissioning an instrument controller that had been running non-
stop since they had first spun it up, fresh out of the packing box, a decade
previous.

And they had never backed up any of the data. Sure, the solution was the
pretty straightforward use of a stack of floppies. It was still pretty nerve-
wracking having a bunch of high-powered research scientists watching over my
shoulder, "making sure" I got all their research data off the machine they
were too smart to ever back up themselves. Good times.

------
iolothebard
Anyone running old machinery that needed DOS drivers would likely have old
computers still in service. I remember working on base and seeing 386s/486s in
an aircraft hangar area that were so covered in grime I was astounded they
were still used.

------
metaguri
I set up a FreeBSD box at a computer shop I worked at in high school. We had a
T1 and static IP, so I set up some routes to make it internet accessible (my
boss wanted to use it to host pictures for his eBay transactions).

I set it up, threw it under a table in the corner with nothing but a power and
ethernet cable, and moved on.

I was surprised when 5 years later he called to ask why it had stopped
working. I told him where it was, he rebooted it, and it came back.

(My memory was a little fuzzy but I probably set it up in 2001-2002 and it ran
until at least 2007-2008)

------
gtrubetskoy
Funny, in today's world, the uptime on my Linux (virtual) box is several times
greater than that of the MacBook within which it's hosted.

------
webkike
It will be given the highest honor a sys admin can give a piece of hardware:
casual reference to it as "what a box" in the future.

------
koolba
Is there a way to track uptime across kexec [1] restarts? That way you could
differentiate between a hard reboot and a "soft" one (e.g. an automated kernel
upgrade). Having a system like that working for 18 years would be insane!

[1]:
[https://en.wikipedia.org/wiki/Kexec](https://en.wikipedia.org/wiki/Kexec)
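There's no built-in counter for this as far as I know, but one userspace workaround: have the kexec wrapper drop a flag file just before the soft reboot, and have a boot-time hook reset a "cold boot" timestamp only when that flag is absent. A sketch, with made-up paths:

```shell
#!/bin/sh
# Boot-time hook: track "time since last cold boot" across kexec soft
# reboots. The kexec wrapper touches $dir/kexec-pending just before
# `kexec -e`; if the flag survives into the next boot it was a soft
# reboot and we keep the epoch, otherwise we reset it.
soft_uptime_boot_hook() {
    dir=${1:-/var/lib/softuptime}        # hypothetical state directory
    mkdir -p "$dir"
    if [ -e "$dir/kexec-pending" ]; then
        rm -f "$dir/kexec-pending"       # kexec soft reboot: keep the epoch
    else
        date +%s > "$dir/epoch"          # hard reboot / first boot: reset
    fi
    echo "seconds since last cold boot: $(( $(date +%s) - $(cat "$dir/epoch") ))"
}

# The kexec wrapper would do, just before `kexec -e`:
#   touch /var/lib/softuptime/kexec-pending
soft_uptime_boot_hook /tmp/softuptime-demo
```

The filesystem survives a kexec while /proc/uptime does not, which is exactly the asymmetry this trick exploits.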

------
vondur
Reminds me of the old NetWare servers we used to have running file services
and print queues for a few computer labs at a university I worked at. NetWare
was really stable, and we only restarted them when some of the hard disks in
the RAID array were dying.

------
deutronium
Impressive! I made a silly kernel module that 'fakes' your uptime by patching
the kernel.

[https://www.anfractuosity.com/projects/uptime/](https://www.anfractuosity.com/projects/uptime/)
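For context on why that works: uptime(1) and friends essentially pretty-print the first field of /proc/uptime (seconds since boot), so patching the kernel's view of that one counter fools every userspace tool at once. A quick way to see the raw value:

```shell
# Print the raw boot counter the way uptime(1) would summarise it:
# field 1 of /proc/uptime is seconds since boot (field 2 is idle time).
awk '{ printf "up %d days, %d hours\n", $1 / 86400, ($1 % 86400) / 3600 }' /proc/uptime
```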

------
meesterdude
This is beautiful to me; its ROI is off the charts by any reasonable
expectation. Keeping it cool certainly helped, and having it serve a role that
could even exist for 18 years is another important factor.

------
NDizzle
18 years is a really good run. I had some white-box Cisco networking equipment
that had a 10-year uptime. I shut it down when we closed the office it was in.

------
bechampion
A photo of the uptime or something like that would've been nice... I believe
him, though.

------
grabcocque
And now its watch is ended.

------
ommunist
Well, I have a TI-92 graphing calculator that's still working OK, since 1995.

~~~
Cyberdog
Has it been graphing quadratic equations the whole time, though? :P

~~~
ommunist
It was running Win95.asm

------
mnw21cam
That's nothing.
[https://news.ycombinator.com/item?id=10952016](https://news.ycombinator.com/item?id=10952016)

~~~
oneeyedpigeon
"Some have had disks replaced, the odd battery renewed, ..."

Reminds me of the old gag: "I've had that broom for 14 years. It's only had 2
new heads and 3 new handles."

At least the server in the original story stayed original.

~~~
_Codemonkeyism
This was thought of by the Greeks, it's called 'Theseus paradox'

[https://en.wikipedia.org/wiki/Ship_of_Theseus](https://en.wikipedia.org/wiki/Ship_of_Theseus)

~~~
saalweachter
Even better (especially when talking about things like teleportation), the
Roosevelt Birthplace:
[https://en.wikipedia.org/wiki/Theodore_Roosevelt_Birthplace_...](https://en.wikipedia.org/wiki/Theodore_Roosevelt_Birthplace_National_Historic_Site)

It was torn down in 1916 and then a replica was built in the same place in
1919.

