

Securing an Ubuntu Server - chaosmachine
http://www.andrewault.net/2010/05/17/securing-an-ubuntu-server/

======
zdw
I tend to find that minimalist OSes, such as plain Debian installed with no
frills, or OpenBSD, are better candidates for security-sensitive machines.

Some of the utilities mentioned like Tiger haven't been updated in years.

No mention of chroot/jails/zones/etc., which can go a long way toward
nullifying intrusions when they happen, or of integrity tools built into the
system (using package managers to check for changed binaries, or checksum
tools like tripwire/AIDE/radmind).
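
The package-manager/checksum approach mentioned here can be sketched in
miniature like this (the paths are arbitrary examples, and `debsums` is a
separate package on Debian/Ubuntu, so that line is only a pointer):

```shell
# Tripwire/AIDE-style check in miniature: record checksums of known-good
# binaries, then verify them later (ideally from read-only media).
sha256sum /bin/ls /bin/cat > /tmp/baseline.sha256

# Verify nothing has changed; each line should report OK.
sha256sum --check /tmp/baseline.sha256

# On Debian/Ubuntu the package-manager route is similar in spirit:
#   debsums --changed        # needs the 'debsums' package installed
```

A real deployment would keep the baseline somewhere an intruder can't rewrite
it, which is most of what tools like AIDE add on top of this idea.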

While it does have some good points, it also strikes me as "cargo cult system
administration", specifically "Following a security recipe":
<http://blog.lastinfirstout.net/2009/11/cargo-cult-system-administration.html>

~~~
viraptor
Getting a minimal distro also lets you install only the things that are
needed, instead of uninstalling a lot of stuff that came in by default.

Additionally - if you are serious about monitoring, you should ship the logs
to an external host in real time, so they can't be deleted or tampered with.
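
With rsyslog (Ubuntu's default syslog daemon), remote shipping is a one-line
config; the filename and hostname below are placeholders:

```
# /etc/rsyslog.d/50-remote.conf  (filename is arbitrary)
# A single @ forwards over UDP, a double @@ over TCP.
*.*  @@loghost.example.com:514
```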

Running services in jails is definitely a good idea. Configuring AppArmor or
SELinux could be a good idea too, or even switching to a grsec kernel. For
consistency checking, I prefer Samhain.

~~~
krakensden
I thought Ubuntu had AppArmor on and configured by default for many/most of
the popular services (Apache etc). I often see 'updating AppArmor profile' log
messages in system updates, in any case.

~~~
viraptor
Only as long as you use the default paths and the profiles are enforced...
which should be the case for most people. But I think it's worth mentioning in
case you have a specific application which you reverse-proxy, or install
nginx, or ....

------
dedward
Securing a server is relatively easy - what people need to do is look past the
individual server and secure the system as a whole.

Some day something in the servers or network you are responsible for will
probably be breached in some fashion. You won't expect it. You'll find out.
You'll panic.

Think past securing the perimeter and think about what happens when that
fails... because it may. Are you using SSH keys internally to walk around
systems? Are they secure? What happens if someone accidentally leaves a test
account open somewhere and somehow, through dumb luck, ends up with root
access to your kickstart server or whatever config management stuff you have?

Now - what will you do when you have to start from scratch? Everyone likes to
say once a system is compromised you have to format - but that becomes a
daunting task if you're talking about hundreds of systems including the
systems used to manage those systems. Do you nuke everything from orbit? Can
your business afford the downtime?

You need recovery procedures as much as you need security... and you need
auditing procedures to ensure that your security and recovery procedures
remain valid... and you need auditing procedures to ensure those auditing
procedures are followed.

Complexity also breeds problems... keeping systems simple also makes them
easier to manage.

------
moe
Articles like this are amusing but also somewhat disturbing.

Ubuntu is based on Debian unstable. It was created to escape the slow Debian
upgrade cycles and get more recent packages onto the desktop.

So far, so good. But why are we now baking a server distro out of a desktop
distro that is based on the _unstable_ branch of a server distro? And who
would put that on a server and try to "secure" it?

~~~
DarkShikari
_stable_ means "doesn't change very often". It does not mean "less buggy".

In my experience, Debian "stable" packages tend to be literally covered in
bugs and security holes due to being years old. Only a few of the most popular
packages get fixes backported.

~~~
moe
_Debian "stable" packages tend to be literally covered in bugs and security
holes_

Would you mind backing this up?

 _Only a few of the most popular packages get fixes backported._

Debian must have many popular packages then, considering lenny has accumulated
security backports for 1529 packages in amd64 alone.

They take security pretty[1] seriously[2], and considering it's all done by
volunteers, it doesn't seem nice to spread FUD about their work without any
data to back up your claims.

    
    
      [1] http://www.debian.org/security/
      [2] http://security-tracker.debian.org/tracker

~~~
moe
It's interesting to see this particular comment voted down without any
replies...

------
bryanh
I go by the trifecta:

* Disable root login.

* Change SSH port.

* Install fail2ban or denyhosts.
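
The first two items are sshd_config edits and the third is a package install.
A sketch of what that looks like on Ubuntu (the port number is an arbitrary
example):

```
# /etc/ssh/sshd_config
PermitRootLogin no
Port 2222            # any unused non-standard port; allow it in the firewall too

# then install the brute-force blocker and restart sshd:
#   sudo apt-get install fail2ban
#   sudo service ssh restart
```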

~~~
chanri
What if you can't disable root login (i.e. it's a cloud server)?

~~~
steve19
Restrict root login to a particular IP address:

    AllowUsers root@112.113.114.115

I use AllowUsers to ensure only the users I want can log in over SSH.

~~~
spicyj
But what if your IP address changes?

~~~
cuu508
Root login shouldn't be your normal entrance. For maintenance, use a
sudo-capable regular user account with public-key authentication. If some
software requires root login and you cannot do anything about it, enable root
login and allow it only from specific IPs.
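
On Ubuntu that key-based setup can be sketched like this (the key path and the
`deploy` username are made up for illustration):

```shell
# Generate a key pair for the regular account (path is illustrative)
ssh-keygen -t rsa -b 2048 -f /tmp/admin_key -N "" -q

# Then copy the public half to the server account and lock sshd down, e.g.:
#   ssh-copy-id -i /tmp/admin_key.pub deploy@server   # 'deploy' is made up
# and in /etc/ssh/sshd_config:
#   PermitRootLogin no
#   PasswordAuthentication no
```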

------
lamnk
For a firewall (frontend for iptables) I recommend csf:
<http://www.configserver.com/cp/csf.html>

Features: stateful inspection, protection from different types of attack,
ability to ban port scanning attempts, ban brute force logins for various
services (ssh, ftp, ...), numerous configuration options but very easy to
configure with excellent inline documentation. For more info see the link
above.

~~~
SwellJoe
iptables already has all those features. I find "frontend" scripts like CSF
confusing and distracting, because they throw in everything under the sun,
including stuff that might make no sense in my particular use case.

~~~
lamnk
Yeah, if you know the TCP/IP networking stack and iptables well enough. Most
people do not. That's why sysadmins exist ;-)

Otherwise, can iptables ban those who failed more than 10 SSH logins?

~~~
SwellJoe
"Otherwise, can iptables ban those who failed more than 10 SSH logins?"

Yes, sort of, though it's not "more than 10", it's "drop packets that look
suspiciously like an automated attack", which I think is actually cooler
because it never outright bans anyone but it makes it impossible to run an
effective brute force attack. You'd use the "--state NEW" option to determine
whether the connection is a new one or an established one. If someone connects
over and over again to ssh (or any login-able service, really) within a short
time you can drop them. Rules would look something like this:

    
    
      # Drop a new SSH connection whose source was seen in the last 15 seconds
      iptables -A INPUT -p tcp -i eth0 -m state --state NEW --dport 22 -m recent --update --seconds 15 -j DROP

      # Otherwise record the source address and accept the connection
      iptables -A INPUT -p tcp -i eth0 -m state --state NEW --dport 22 -m recent --set -j ACCEPT
    

Assuming, of course, that you're already accepting ESTABLISHED connections
above those rules.
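
For completeness, that ESTABLISHED rule would look something like the
following (a sketch; applying it needs root, and it must sit above the
rate-limiting rules):

```
# Accept packets belonging to already-established connections first,
# so ongoing sessions are never caught by the rate limit below.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
```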

iptables is astonishingly powerful and flexible, and it's usually pretty easy
to google up the right recipes, if you aren't quite sure of the incantations.
It can be a little intimidating, but it more than repays you for the effort.
When I did network consulting I was always surprised when I came upon a
network where they had a Linux router, web server, mail server, etc., and then
a Cisco PIX firewall sitting in front of it. Once again, it's just needless
complexity, when the Linux box could do everything the PIX does (and possibly
more, in the case of the low end PIX that I usually see).

Since you have professed iptables ignorance...are you sure CSF is doing
anything sensible in your deployment? By that, I mean, do you have any idea
what your firewall rules are actually doing and if they are effective for what
you think they are effective for? I'm always a bit wary when I come upon a
network where the people maintaining it have no idea what their systems are
doing or how they work. While CSF may be a net positive, if the trend is
toward avoiding knowledge, it's a dangerous direction to go in. I'm all for
simplifying, and sometimes tools make things simpler. But, as I said, in my
experience the "pile of shell" firewall scripts complexify things rather than
simplify them.

------
muppetman
You should also look at the grsecurity kernel patch:

<http://grsecurity.net>

It can really help to harden a system by making it much, much harder to
exploit buffer overflows, making it impossible for users to see any processes
other than their own etc. It's a great kernel patch and I highly recommend it.

------
sucuri2
I don't get why people recommend chkrootkit + fail2ban/denyhosts.

I would recommend OSSEC to anyone looking for a serious host-based IDS (it
does everything those tools do plus a lot more, and is very lightweight).

Link: <http://www.ossec.net>

~~~
dedward
I found that, while it looked great, the mailing engine tended to fail
silently on every install I tried (granted, this was a couple of years ago,
but I tried various distributions and hardware) - which was fairly critical.

DenyHosts, while not as sophisticated by a long shot, is dead easy to install
and takes care of the SSH brute-force issue.

------
zokier
nmapping localhost doesn't sound very effective.

~~~
getsat
netstat -tap | grep LISTEN

~~~
beala
Or you could just use the 'listen' flag ;)

sudo netstat -tpl
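
On systems without net-tools installed, `ss` from iproute2 (not mentioned in
the thread, but a drop-in equivalent) gives the same view:

```shell
# t = TCP, l = listening sockets, n = numeric ports, p = owning process
ss -tlnp
```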

~~~
getsat
Today I learned. Thanks!

------
asdfor
The first step in securing an Ubuntu server is switching to Debian (this
sounds a bit trollish, but oh well)

