
Ancient Linux servers: The blighted slum houses of the Internet - smacktoward
http://arstechnica.com/security/2014/03/ancient-linux-servers-the-blighted-slum-houses-of-the-internet/
======
keithpeter
Enterprise Linux and version numbering[1]: I'm a bit worried by quotes like

 _"The mass infection works against servers running version 2.6 of the Linux
operating system kernel, some using releases from 2007 or earlier, Lee said."_

 _Some_ servers running 2.6.x kernels are as secure as anything out there!

[1]
[https://access.redhat.com/site/security/updates/backporting/](https://access.redhat.com/site/security/updates/backporting/)

~~~
duaneb
Why is that worrying? I don't read that as implying linux 2.6 is insecure,
just the terribly outdated, vulnerable builds of it they were running.

~~~
mpyne
The problem is that a blanket claim like "2.6.9 is vulnerable" might not
actually be true. Enterprise Linux vendors may well be running 2.6.9, but with
the relevant security patches backported.
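For RPM-based enterprise distros, the way to see whether a fix was backported is the package changelog rather than the version string. A minimal sketch, assuming an RPM-based system; the package and CVE names are just examples:

```shell
# Returns success if the package changelog mentions the given CVE, i.e.
# the fix was backported even though the version number never moved.
has_backported_fix() {
  # $1 = package name, $2 = CVE id
  rpm -q --changelog "$1" 2>/dev/null | grep -qi "$2"
}

# On a real RHEL/CentOS host, something like:
#   has_backported_fix kernel CVE-2009-2692 && echo "patched despite the 2.6.x version"
```

This is why "running kernel 2.6.9" tells you almost nothing on its own.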

------
bananas
This is quite funny because the other day I was asked by a friend to look at
the box serving his company site just on a whim. I've never looked at or
discussed the tech he was using. Turns out he paid a company for "managed
service" in 2004. They were crap so he stopped paying for the managed bit in
2006 but paid the £90/month for the 1U rack space. He was running IMAP and a
static site. IMAP stopped working one day so he moved it to hosted Exchange
and shrugged it off. The static site was fine and has been updated via FTP
since by his design agency who don't know arse from elbow.

So the first thing I did was nmap it. SSH, HTTP, SMTP, IMAP open. Fair enough.

SSH'ed in. Some interesting surprises in there. 10 year old DL360 G2 with one
dead disk in the array, running CentOS 3. CPU pegged at 100% by just about
every bit of crap that could work its way in through the ancient OpenSSH build
(and presumably the cgi-bin and the IMAP server, cyrus).

Sooooo... FTP'ed all the files off it, fired up a Digital Ocean box and locked
it down, uploaded the static site to it, changed the A record in his DNS and
shut the old box down.
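Before powering off the old box, it's worth confirming the A record actually cut over. A small sketch, assuming `dig` is available; the hostname and IP are placeholders:

```shell
# Succeeds once the A record resolves to the new server's address,
# so the old box can be shut down without stranding visitors.
cutover_done() {
  # $1 = hostname, $2 = new server IP
  dig +short A "$1" | grep -qx "$2"
}

# cutover_done www.example.com 203.0.113.10 && echo "safe to power off the old box"
```

Remember DNS caches: wait out the old record's TTL before trusting the result.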

I know this will only postpone the problem for another couple of years, but
what can you do with people like this? All I got was a "meh" and a crate of
lager for my troubles (granted, it was 45 mins' work, but...)

~~~
sentenza
What... what hosting provider will not notice this? Shouldn't their abuse mail
address be glowing red?

~~~
dredmorbius
Do you have any idea of how many abuse addresses are /dev/nulled?

Sadly: black-hat takedown pages are sometimes your best sources of
information. A few weeks back, a site I visit periodically was pwned via what
appears to have been a WordPress bug, based on similarities between it and
other sites listed by the black-hat hacker. I fired off mail to the standard
contact addresses for the site (abuse, webmaster, postmaster), all of which
bounced. The listed WHOIS contact actually worked (an argument _against_
domains by proxy), but I also reached out to the upstream hosting provider,
who had several other hosts similarly taken down, though other providers were
also affected.

Sadly, and from my own experience, monitoring abuse for smaller shops (1-2 man
admin team, far too many fires going on) is often a very, very low priority.

------
incision
When I read this I can't help but think about how often I run across people in
development or design who don't seem to see the difficulty in, or the value
of, system administration.

It's pretty trivial to turn up systems that work in the most basic sense, but
given enough time or load things start to break down.

The skill isn't in making things work; it's in making them work efficiently
and securely, automating what will keep them in that state, and planning ahead
to make it as easy as possible to recover, replace, extend, and transition
away from them in the future.

~~~
derefr
Are you sure that's what sysadmins themselves do? In practice, the "sysadmin"
seems to be "that guy who does all the stuff that we don't know how to
automate."

The people who _do_ the automation--if it's repeatable, instead of per-site
one-off stuff--are just called developers. E.g., the Docker developers, or the
Erlang developers.

~~~
dredmorbius
It's difficult to automate: vendor relations, software and package assessment,
configuration tuning, troubleshooting, hardware replacement, PCI / SOX / other
audit processes, pager response, documentation authoring, procedures
development (and refinement), user education (especially on matters affecting
security and stability), and what I tend to call technical anthropology:
digging into the dim mists of ancient history (anything more than 6 months
ago, but in some cases stretching back decades) to figure out _why_ a particular
design decision was made (often being "hell, seems to work, make it so"), and
what might explode spectacularly if it's changed.

But yes, your competent sysadmin is _also_ automating the living fuck-all out
of as much as possible. Using standard methods and tools and very clear
documentation so as not to create _yet more_ technical debt for the next guy.

And if you're classifying your devs by language, you've got other problems as
well.

~~~
derefr
Not by language, no; I meant "Erlang developers" as in "the fellows at
Ericsson who make my life easier by repeatably automating distributed
failover, at least when you're using their platform." You'd call these
fellows, and the Docker ones, and the fellows responsible for writing Chef and
Puppet and so on, sysadmins?

I guess I mean more by "automation" than you do, though. Paying some other
company to do things like software and package assessment for you, so you just
have to consume their customized software distribution and can let the system
update whenever it likes, is "automation." Buying standard hardware that you
can find pre-tuned configurations for is "automation."

Effectively, anything that moves the company away from maintaining a "system",
and closer to being just a tiny core of people who run one tiny thing on top
of a large set of frozen, well-known components is "automation."

~~~
dredmorbius
_a tiny core of people who run one tiny thing on top of a large set of frozen,
well-known components_

Your error is in assuming this is possible. Change is a constant. Integrating
change with existing systems is the role of the sysadmin.

------
ChuckMcM
This is an interesting problem. Too many people just ignore their machine once
it "works", leaving it to the wild, or their attention drifts and they never
look at it again. The painful part is when previously "good" internet sites
start injecting sophisticated malware, unbeknownst to the owner.

------
foohbarbaz
I am running a box built with Fedora Core 4 (2007 vintage). Never patch any
systems. Why would I?

If I am running a service facing the internet, it's custom built and patches
would do it no good. Why would I wait for a vendor to release a patch? If a
service is external, I watch for vulnerabilities and rebuild ASAP, before any
patches are even out. Besides, 90% of the time my custom build isn't even
vulnerable to a particular problem.

If I am NOT running a service, why would I care about patches for it?

Why would I wholesale patch a server anyway? If somebody breaks in and gets a
local shell, all is lost anyway. If they are not in, they are dealing with
externally facing services only; see above. There is a specific, counted
number of daemons on every machine.
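The premise that every machine has a specific, countable set of exposed daemons can at least be verified directly. A sketch using iproute2's `ss`; the parsing assumes its default column layout:

```shell
# Reads 'ss -tln'-style output on stdin and prints the distinct local
# ports with a listener -- the externally facing attack surface.
listening_ports() {
  awk 'NR > 1 {n = split($4, a, ":"); print a[n]}' | sort -un
}

# On the box itself:
#   ss -tln | listening_ports
```

Anything that shows up here and isn't meant to be public is a daemon the "I only patch what's exposed" argument has silently stopped covering.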

This whole patch-update thing is misguided and for people that want assurances
and no responsibility.

~~~
brownbat
As a security researcher, this approach just confounds me.

I've never had an update break my system, and if someone pushed updates that
were broken, I wouldn't trust any old versions of their software any more than
the current one.

And we keep finding that people don't update and miss critical
vulnerabilities. There may be some admins out there that can independently
track and patch every known vulnerability... but that seems like an impossible
task for a box with any nontrivial amount of software on it.

And a lot of vulnerabilities aren't widely released. Updates sometimes
coincidentally break zero days that were never publicly revealed.

I remember the world where everyone stubbornly refused to move off early
versions of IE. Massive problem for security. The Chrome team looked at that and made
the call to move to automatic updates. I'm still pretty convinced that's a
better world.

If you want to run a small box that barely faces the internet, where you
constantly write your own patches in parallel with the primary software
developers while also researching and patching new vulnerabilities before
they are deployed, go for it... but if that becomes the industry norm, I
consider it extremely harmful.

Maybe you can pull that off, but most people are not nearly that cool.

~~~
cpncrunch
I think you misunderstand. It's not that people are pushing out crap updates.
Rather, the problem is that when you update one thing on linux you usually end
up having to update 100 other things.

I'm in a similar position to the OP, in that I don't generally update linux
systems. The problem is that there is no way to simply 'update everything' in
linux (at least, not in CentOS). yum update certainly doesn't do it: on
CentOS 5.5 it only gets you php 5.1.x. To get a newer version you have to
update it manually or bodge yum.
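One way to see the ceiling the stock repos impose is to ask yum what it can actually offer. A sketch; the awk parsing assumes yum's default `yum list` column layout, and the package name is just an example:

```shell
# Prints the newest version the configured repos will give for a package.
# On stock CentOS 5 this stays at php 5.1.x no matter how often you update.
available_version() {
  # $1 = package name
  yum list available "$1" 2>/dev/null | awk -v p="$1" '$1 ~ "^"p {print $2; exit}'
}

# available_version php    # e.g. a 5.1.x release on a stock CentOS 5.5 box
```

Getting past that ceiling means third-party repos or building from source, which is exactly the "bodge" being described.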

Then the problem is that many newer packages require a newer glibc or
whatever, and that is something that can break your entire system very easily.

I think the root of the problem is that linux isn't very easy to update,
unlike Windows.

As long as your linux system is well locked down and you regularly keep an eye
on it, I don't see a problem with not updating regularly.

~~~
brownbat
That makes a lot of sense, thanks. I had always sort of seen Linux as easier
to update, since it's a single command, but you're right... that command
doesn't necessarily get you all the way. Things are going to vary from distro
to distro, and none of them will necessarily roll in the bleeding edge version
of whatever thing you want the day it launches. And then, custom code is vital
on a lot of machines for a lot of applications, and it will introduce its own
dependencies.

That said, these factors really complicate security advice on patch
management. If customers could be trusted to lock things down and keep an eye
on them, that would be a much better world. And I'm sure a lot of admins out
there are more than capable, but I worry about the Dunning-Kruger effect
catching some admins off guard.

But ultimately, this is just a battle of emphasis more than disagreement. The
answer isn't "everyone should always patch everything," it just depends on a
lot of factors.

------
NoodleIncident
Would it be feasible to write a virus that only exploits vulnerabilities more
than, say, 18 months old, to either encourage or force sysadmins to upgrade?

If you get access, then you can look for ssh keys and whatnot to figure out
who to send form letters to. "Your server is vulnerable to x, y, and z: please
upgrade a, b, and c." It would be more fun if the virus tried to upgrade a, b,
and c itself, though. Even if it fails, a broken server that used to process
credit cards is better than a vulnerable one, right?

~~~
bediger4000
Back in 1996, I heard two different people complain that since Microsoft
convinced anti-virus companies to support Windows 95, and drop support for
Windows 3.11, new viruses for Windows 3.11 were forcing them to upgrade to
Windows 95. At least one of these folks was perfectly happy with Windows 3.11,
had a huge stash of Windows 3.11 games on CDs and just plain didn't want to
upgrade. He saw the whole thing as a charade to force people to shell out for
upgrades.

~~~
mikeash
This is playing out again pretty much word-for-word with the termination of
support for Windows XP.

~~~
justin66
It would have been a more valid complaint for someone compelled to upgrade
from Win3.1 to Windows 95.

At this stage, Windows XP users are like Japanese soldiers stranded on an
island, growing old not knowing they lost the war. Except without the excuse
of being on an island.

~~~
anigbrowl
I run Win7 on my main machine but I have a laptop that still runs XP (and which
isn't connected to the internet, in general), because I have a large piece of
audio hardware whose driver support stopped with XP but which is otherwise
fine, and which I have no desire to replace ahead of time. I'm not alone in
this; while XP is far from ideal as an audio production environment, later
versions of Windows suffer from really terrible MIDI (control protocol for
synthesizers) drivers with unreliable timing.

Of course I could get a Mac, but I'd really rather not switch platforms or buy
a bunch of extra expensive hardware for this one purpose. I could also run
Linux, but until very recently the selection of music production software has
been poor.

That said, I don't have any complaint with MS about their decision to abandon
the platform; I think they've done a good job in supporting it this long. I do
wish they'd rethink their driver priorities, though: MIDI has been around for
over 30 years, and being 7-bit at ~32kbps it can't be that hard to get decent
realtime performance.

~~~
bananas
I don't know what you're doing but I have rock solid midi timing from Windows
7 via a £5 generic MIDI/USB cable to my Korg Triton Studio. I'm using Renoise
as a DAW.

XP was crappy: it hit the disk constantly and hung.

~~~
anigbrowl
It's a problem on multi-output USB interfaces for a lot of people. I suspect
system configuration issues but I don't think you should have to take a
barebones system approach to get decent timing. I mainly sequence in hardware
so I haven't gone to great lengths to get to the bottom of it.

FYI: [http://www.gearslutz.com/board/electronic-music-instruments-...](http://www.gearslutz.com/board/electronic-music-instruments-electronic-music-production/693456-windows-7s-midi-out-jitter-disaster.html)
although you'll have to wade through some non-techie voodoo interpretations if
you peruse the whole thread. Also, bear in mind that this is a particularly
fussy demographic. I sometimes think if you are that obsessed then maybe MIDI
isn't the protocol for you, but OTOH there isn't much in the way of OSC
hardware on the market :-/ It's notable that there's a bit of a CV renaissance
underway though, a lot of people prize timing/resolution over versatility.

~~~
bananas
Depends how they implemented MIDI and the timers. If it's Win32 MIDI API stuff
with Win32 timers, the messaging isn't real-time, so you will get jitter.
That's going to be worse on Vista and above due to a number of issues.

If it uses an ASIO MIDI back end it'll miss all the problematic message loops.

I only use SYSEX and use the Triton Studio sequencer though, as it's the most
awesome thing ever made by man.

------
Zaephyr
A browser add-in that checks whether a site's OS and language runtime are
sufficiently patched, and warns or blocks loading when they aren't, might be a
useful security mechanism.
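A first, very rough signal such an add-in could use is the HTTP Server header, when sites expose one (many suppress it). A parsing sketch; the banner patterns are just examples of long-dead release lines:

```shell
# Reads HTTP response headers on stdin; succeeds when the Server banner
# advertises an ancient Apache or PHP line.
outdated_server() {
  grep -qiE '^Server:.*(Apache/1\.|Apache/2\.0|PHP/4\.)'
}

# curl -sI http://example.com | outdated_server && echo "warn before loading"
```

Banner checks are easy to spoof and miss backported fixes (see the Red Hat discussion above), so at best this flags the obviously neglected, not the actually secure.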

