Securing an Ubuntu Server (andrewault.net)
168 points by chaosmachine on Mar 12, 2011 | 52 comments

I tend to find that minimalist OSes, such as plain Debian installed with no frills, or OpenBSD, are better candidates for security-sensitive machines.

Some of the utilities mentioned like Tiger haven't been updated in years.

No mention of chroot/jails/zones/etc., which can go a long way toward nullifying intrusions when they happen, or of integrity tools built into the system (using package managers to check for changed binaries, or checksum tools like Tripwire/AIDE/radmind).
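On Debian-derived systems, the package-manager approach mentioned above can be sketched like this (debsums and AIDE are in the standard repos; exact paths and flags may vary by release, and AIDE may need its new database copied into place before the first check):

```shell
# Verify installed files against the checksums shipped in their packages
sudo apt-get install debsums
sudo debsums -c          # prints files whose checksums differ

# Or keep an independent baseline with AIDE
sudo apt-get install aide
sudo aideinit            # build the initial database
sudo aide --check        # later: report files that changed since the baseline
```

Note that a checksum database stored on the compromised host can itself be tampered with, which is one argument for tools like radmind that keep the baseline on a separate server.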

While it does have some good points, it also strikes me as "cargo cult system administration", specifically "Following a security recipe": http://blog.lastinfirstout.net/2009/11/cargo-cult-system-adm...

Starting from a minimal distro also gives you the option to install only the things you need, instead of uninstalling a lot of stuff that came in by default.

Additionally - if you are serious about monitoring, you should ship your logs to an external host in real time, so they can't be deleted or tampered with.
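With rsyslog (Ubuntu's default syslog daemon in recent releases), forwarding everything to a remote collector is one line; the hostname here is a placeholder:

```
# /etc/rsyslog.d/remote.conf -- forward all logs to an external host
# (@@ = TCP, single @ = UDP; loghost.example.com is a placeholder)
*.* @@loghost.example.com:514
```

The collector should of course be locked down at least as tightly as the hosts feeding it.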

Running services in jails is definitely a good idea. Additionally, configuring AppArmor and SELinux could be a good idea too. Or even switching to a grsec kernel. For consistency checking, I prefer Samhain.

I thought Ubuntu had AppArmor on and configured by default for many/most of the popular services (Apache etc). I often see 'updating AppArmor profile' log messages in system updates, in any case.

Only as long as you use the default paths and enforce the profiles... which should be the default for most people. But I think it's worth mentioning in case you have a specific application which you reverse-proxy, or install nginx, or ....

Isn't Ubuntu Server a fairly minimal install (not OpenBSD-minimal, but still)?

The Ubuntu Minimal CD is one of the leanest distro install options available.


That's just a small Ubuntu CD which downloads packages on demand (instead of keeping them on the CD).

If you really want a minimal distro, install FreeBSD or build a Debian root using debootstrap and just the daemons you need. I have a Debian machine (acting as a firewall) which is ridiculously parsimonious -- 'ps ax' fits on a single page and most of that is just kernel threads.
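The debootstrap route described above can be sketched as follows (the release name, mirror URL, and target path are all illustrative, and this needs root):

```shell
# Build a minimal Debian root filesystem containing only the base system
sudo debootstrap --variant=minbase squeeze /mnt/target http://ftp.debian.org/debian

# Then chroot in and add only the daemons you actually need
sudo chroot /mnt/target apt-get install openssh-server
```

The `--variant=minbase` flag skips even the "important" priority packages, leaving you with little beyond dpkg, apt, and a shell.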

In defence of the article, it has some good ideas. On the other hand it completely overlooks what people should be doing to secure their systems, regardless of OS.

People should look at the role of the system and the information assets contained therein, then determine (if possible) what types of attack and attacker might be able to get in, and how - that tells you what you need to do.

If you're not able to do the latter, but can do the former then you're still able to secure your system. A classic example would be a firewall. Your firewall contains no information assets beyond your ruleset. Your firewall however can be used to open up internal access to your network.

We can then say, "We believe our attackers will mainly come from the outside". How will they get in? We don't know. What can we do to stop them from accessing the firewall? Put a rule in to drop all traffic with a destination of the firewall's external IP address or network interface. Providing your routing's set up correctly, you've now put in an appropriate countermeasure.
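The countermeasure described above might look like this in iptables (the interface name is an assumption, and you'd want ACCEPT rules for management access before the final DROP):

```shell
# Allow replies to conversations the firewall itself initiated
iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

# Drop everything else addressed to the firewall's external interface;
# forwarded traffic is unaffected, since it traverses the FORWARD chain
iptables -A INPUT -i eth0 -j DROP
```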

Rinse, repeat for all of your Internet-facing assets, hosts, services and applications. It sounds like a lot, but it gets quicker the more you do it. Most unlaunched startups with relatively simple setups could do this in a day.

Here's another example. Our Apache server is accessible from the Internet.

Who can access it? Everyone.

Does everyone need to access it? Yes - we're using it to host our application.

Who is going to attack this? We don't know, could be someone skilled, could be someone not skilled.

How will they get in? An Apache exploit, an Apache misconfiguration, something further up the stack (like our app or PHP).

What do we do about it? We maintain Apache versions (to address the Apache exploit issue as best as we can).

We make sure all unnecessary Apache modules are disabled and that our Apache setup is as minimal as possible, then look at the directives on the Apache docs and change the settings to be as effective as possible (without breaking our app of course) - this addresses the misconfiguration issue. If we have to switch something on we now know the effect of that and if concerned can ask HN, monitor it or both.
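For example, a few of the stock directives worth reviewing (names taken from the Apache docs; check them against your version before relying on this):

```
# Illustrative hardening fragment for httpd.conf / apache2.conf
ServerTokens Prod        # don't advertise module versions in the Server header
ServerSignature Off      # no version banner on generated error pages
TraceEnable Off          # disable TRACE (cross-site tracing vector)
```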

As for something further up the stack, let's repeat the process on the way up until we're comfortable with what we have.

If you're writing down your reasoning and your results along the way this will help you get up to speed - it also means you can write your own hardening guide, or even integrate it into fabric scripts to save time in the future. You'll also massively increase your understanding of the software you're using in the process, so it's definitely worth doing.

Agreed. If running a server, I'd choose Debian or Slackware over Ubuntu any day.

Securing a server is relatively easy - what people need to look past is securing the system as a whole.

Some day something in the servers or network you are responsible for will probably be breached in some fashion. You won't expect it. You'll find out. You'll panic.

Think past securing the perimeter and think about what happens when that fails... because it may. Are you using SSH keys internally to walk around systems? Are they secure? What happens if someone accidentally leaves a test account open somewhere and somehow, through dumb luck, ends up with root access to your kickstart server or whatever config management stuff you have?

Now - what will you do when you have to start from scratch? Everyone likes to say once a system is compromised you have to format - but that becomes a daunting task if you're talking about hundreds of systems including the systems used to manage those systems. Do you nuke everything from orbit? Can your business afford the downtime?

You need recovery procedures as much as you need security... and you need auditing procedures to ensure that your security and recovery procedures remain valid.... and you need auditing procedures to ensure those auditing procedures are followed.

Complexity also breeds problems... keeping systems simple also makes them easier to manage.

Articles like this are amusing but also somewhat disturbing.

Ubuntu is based on Debian unstable. It was created to escape the slow Debian upgrade cycles and get more recent packages on the desktop.

So far, so good. But why are we now baking a server distro out of a desktop distro that is based on the unstable branch of a server distro? And who would put that on a server and try to "secure" it?

Maybe Debian's philosophy and approach is not the only valid one for security. Or people want to choose different trade-offs than the Debian maintainers do.

stable means "doesn't change very often". It does not mean "less buggy".

In my experience, Debian "stable" packages tend to be literally covered in bugs and security holes due to being years old. Only a few of the most popular packages get fixes backported.

Debian "stable" packages tend to be literally covered in bugs and security holes

Would you mind backing this up?

Only a few of the most popular packages get fixes backported.

Debian must have many popular packages then, considering lenny has accumulated security backports for 1529 packages in amd64 alone.

They take security pretty[1] seriously[2] and considering it's all done by volunteers it doesn't seem nice to spread FUD about their work without any data to verify your claims.

  [1] http://www.debian.org/security/
  [2] http://security-tracker.debian.org/tracker

It's interesting to see this particular comment voted down without any replies...

I go by the trifecta:

* Disable root login.

* Change the SSH port.

* Install fail2ban or denyhosts.

Actually, the trifecta would be:

1. Update your software.

2. Use strong passwords, or public keys, for all accounts, regardless of how unimportant they seem. A strong password is one with letters, numbers, and possibly special characters, is 8 or more characters in length, and is changed periodically.

3. Don't run unnecessary services or services with a poor recent security history.

Disabling the root login is a legit choice, though if someone figures out the password for your sudo-capable account, you're equally screwed, so use strong passwords always.

Changing the ssh port only stops the people who aren't targeting you specifically, and even those people aren't always stopped. It is trivial to scan a host to find where ssh is running, regardless of the port you put it on.
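For instance, a service/version scan identifies ssh wherever it listens (the target here is a placeholder):

```shell
# -sV probes each open port and fingerprints the service banner;
# -p- scans all 65535 TCP ports, so a moved sshd is found anyway
nmap -sV -p- host.example.com
```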

fail2ban or denyhosts are OK for preventing brute force attacks, but OpenSSH and most security-sensitive software already have timeouts, among other measures for preventing brute forcing. That makes fail2ban more of a nuisance-prevention measure (you see fewer brute force attacks in your log reports, making it easier to spot the more dangerous threats) than a core security measure.

If you performed your three steps, but ran an insecure version of any service (particularly those that have elevated privileges), you would still be easily exploited. fail2ban or denyhosts would prevent most services from being exploited due to weak passwords, so that's probably not a hole you'd have. And, changing the ssh port does nothing for you if your system hasn't been updated and is running exploitable services.

I used to do security cleanup and post-mortems and data recovery as a contractor for folks who'd had systems exploited. 90% of them were due to running old versions of software, and the rest were due to weak passwords (which may or may not have been prevented by fail2ban; in a couple of cases the exploiter used the webmail client to brute force the weak passwords...most people don't include the webmail client logs in their security considerations, and thus wouldn't configure fail2ban to ban attempts on the webmail client).

I'm not saying your advice is not useful, just that it's something you do after you get your core processes in order.

Edit: Formatting.

Great advice! You sound like you have a lot of experience in server security.

What is your recommendation on strong passwords? Also, should all passwords on a system be different (login passwords for different servers, email passwords, etc...), and if so, how do you keep track of all of them?


Use SHA1_Pass for all of your passwords. It's awesome. Never store, type or forget a password again. Full disclosure, I'm the author.

This is a good way to go about it. I use SuperGenPass for Chrome (the bookmarklet version is exploitable by websites you visit) for websites, and keys for most ssh purposes. I have a couple of strong memorized passwords for situations where keys and SuperGenPass aren't convenient.

I'm totally on board with having one long/strong passphrase memorized and using it to generate strong, seemingly random passwords algorithmically, and I assume that's what SHA1_Pass does. So, yes, using SHA1_Pass is probably a good way to go about it.

I'm a bit conflicted by this.

On the one hand, the hex password in the screenshot, while quite long, has only 16 possible characters. The Base64 password has only 64 possible characters to choose from (for each position) and must end with an equals sign.

The number of possible characters for each position in the original sentence is quite high (94), but in a sentence the actual likelihood of a wide range of combinations being used is quite low (unless you deliberately use obfuscation).

It's a very interesting piece of software and presents a very interesting question, which I guess is this:

For a given sentence, which of Base64, Base32, hex, and the sentence itself makes for the most permutations required to crack? If the answer is Base64, Base32, or hex, then your tool is helping. If the answer is the sentence, then your tool is impeding. I suspect (but haven't done the maths) that for purely single-case alphanumeric passwords it'll be Base64, but for mixed-case alphanumeric with punctuation it'll be the sentence. Anyone care to do the sums to lock this down?
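A rough back-of-the-envelope for the question above, treating each form as a uniformly random string over its alphabet (an upper bound; real sentences are far less uniform):

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Upper-bound entropy (in bits) of a uniformly random string."""
    return length * math.log2(alphabet_size)

# A SHA-1 digest carries at most 160 bits no matter how it is encoded:
print(round(entropy_bits(16, 40)))  # hex: 40 chars over 16 symbols -> 160
print(round(entropy_bits(64, 27)))  # base64: ~27 chars -> ~162, capped at 160 by the hash
# A 30-character sentence over the 94 printable ASCII symbols:
print(round(entropy_bits(94, 30)))  # ~197 -- if (and only if) chosen uniformly
```

So the sentence wins on paper, but only under the uniformity assumption the parent comment questions; since every encoding of the digest carries the same 160 bits, the encoding choice itself is mostly irrelevant.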

I think you're missing the point of something like SHA1_pass: A different passphrase for every site. In the case of SuperGenPass, it hashes the site with the passphrase, making a unique passphrase for every domain. In the case of SHA1_pass, I would do something like, "My wacky passphrase 123 facebook.com" and "My wacky passphrase 123 google.com", etc. if I were to use it.

The sentence is only a piece of the hashed value, while some unique thing about what you're logging into is the rest of it. So, using "My wacky passphrase 123 facebook.com" as my password directly on facebook.com would mean that anyone with malicious intent and access to facebook.com code could easily figure out that every website where I have an account is "My wacky passphrase 123 sitename.tld". Strong password failure. The one-way hashed version of that has no meaning to the sites I log in to.

So, original sentence has very low security value, while a hashed version of it (assuming a unique piece for every site or service) has very high security value, even if the actual password generated is less strong than the original sentence from a purely "number of possibilities" perspective.

Of course, if you always use the exact same passphrase, and thus the same resulting password, your math would make sense... but in either case, an exploit is far more likely to come from people behind one of the sites you use sniffing your password than from a brute force attack.
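The per-site scheme described above can be sketched in a few lines of Python. This is illustrative only: SHA1_Pass's exact input format, encoding, and truncation may differ, and SHA-1 appears here only because it is the tool's namesake (a modern design would prefer a slow KDF such as scrypt):

```python
import base64
import hashlib

def derive_password(passphrase: str, site: str) -> str:
    """Derive a per-site password from one memorized passphrase.

    Sketch of the idea, not SHA1_Pass's actual algorithm: hash the
    passphrase plus a per-site tag, then encode the digest.
    """
    digest = hashlib.sha1(f"{passphrase} {site}".encode()).digest()
    # Base64-encode the 20-byte digest and trim to a usable length
    return base64.b64encode(digest).decode()[:16]

# Same passphrase, different sites -> unrelated-looking passwords
fb = derive_password("My wacky passphrase 123", "facebook.com")
gg = derive_password("My wacky passphrase 123", "google.com")
print(fb != gg)  # True
```

A site that captures one derived password learns nothing useful about the passwords for other sites, which is the whole point.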

As I understand it, SHA1_pass does the following (please correct me if I'm wrong or missing anything out):

* Takes a user supplied passphrase

* Makes a SHA-1 hash of the supplied passphrase

* Encodes the resulting hash in a variety of ways

I don't see where a different passphrase for every site comes in. You seem to be saying that you would append the site if you were to use it - you wouldn't need a tool like SHA1_pass to do that though.

I guess where I'm coming from is that I don't see what SHA1_pass does that provides any benefit over something like 1password or password gorilla, both of which can generate random passwords for arbitrary accounts.

Following your example, if I obtain your password on site A, then I get a hex|base32|base64 representation of a SHA-1 hash. I then put this into something like this (http://www.golubev.com/hashgpu.htm) and crack the SHA-1. I notice your algorithm for creating passwords and do the same. I'm now exactly where I would be if you weren't using your approach for a password on every site.

I appreciate that the SHA-1 element acts as an interesting intermediary, but your method for generating the password is predictable. I think a randomised SHA-1 might be better.

Author again. It should be used exactly as SwellJoe described. The hash of "My Awesome password for Facebook!!!" should only be used on facebook.com. "My Awesome password for Twitter!!!" and so on.

The benefit of SHA1_Pass is that you never store, synchronize or backup passwords ever again. It's free, completely open-source and anyone can implement it and other software can be used to generate the hashes. Some of the password storage managers are not that way.

Spot on advice, and a valid rebuttal. My trifecta was more in the spirit of securing against hundreds of brute force attempts, but I can't really knock your excellent advice.

That's a terrible trifecta.

* Disable root login

If you have root login enabled by default, you're using the wrong distribution. This is 2011, not 1996.

* Change the SSH port

Why are you doing this? What will you achieve by it? In what way does this reduce your overall risk of compromise, and by whom?

* Install fail2ban or denyhosts

That's the only thing out of your trifecta that should be in there. Aside from that, here's why your trifecta isn't a good move:

You're still using password authentication for SSH. Switch to public key authentication. This, combined with fail2ban or denyhosts means bots are not getting in. Period.

You aren't addressing your attack surface area - look at what information assets are exposed to the wider world and how. Once you know what your information assets are and how they can be accessed (either legitimately or unscrupulously) from the outside world you know what you have to protect and how people can get to it. That way you can focus on real countermeasures.

What about your applications? Have you checked your code to make sure you're validating input correctly? How are you protecting against SQL injection or Cross-Site Scripting? How are you handling state?

I put to you an alternative trifecta:

* Know your assets

* Know your threats

* Use appropriate measures to protect your assets from your threats.

"* Change the SSH port"

"Why are you doing this? What will you achieve by it?"

Changing the port does not improve security. It does, however:

- dramatically reduce the noise associated with the fleet of password-guessing bots that hit open SSH servers daily.

- make it reasonable to assume that a password-guessing attempt is specifically targeting your server, and therefore worth escalation and follow-up.

Signal-to-noise ratio. Less noise makes it possible to discover the signal.

> - dramatically reduce the noise associated with the fleet of password-guessing bots that hit open SSH servers daily.

But if you're already using fail2ban or denyhosts (as suggested in the trifecta) then you won't get that much noise anyway, and if you're only using public key auth then the noise from password guessing bots doesn't matter anyway.

> - make it reasonable to assume that a password-guessing attempt is specifically targeting your server, and therefore worth escalation and follow-up.

Unfortunately a failed authentication attempt regardless of port isn't enough to conclude that it's a targeted attack. Plenty of bots port scan common ports before running the tools to make sure they're attacking the right service. In fact some bots can do full portscans of hosts (although this is rare as it's quicker to scan for the attacks you have built in, thus you get more attack attempts in less time) - usually this is done to build a database of services, so that they can be exploited later when a new vulnerability comes out.

Regardless, as you say it doesn't improve security, so there's no reason for it to be in any security-related trifecta.

You're missing a really important ssh setting: disable password-based authentication!

However, this still won't protect you from vulnerabilities in your ssh daemon itself. Optimally, you would use Single Packet Authorization.
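The settings discussed in this subthread all live in sshd_config; a fragment might look like this (option names are from the OpenSSH man page; restart sshd after editing, and keep a working session open in case you lock yourself out):

```
# /etc/ssh/sshd_config fragment
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no   # also disable keyboard-interactive
PubkeyAuthentication yes
```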

Wow, I didn't know of fail2ban and denyhosts.

I actually have a home-made script similar to fail2ban.

Yeah, I've written a number of very long, very gruelling scripts that ultimately were all replaced by denyhosts.

On the flip side, denyhosts only works on sshd (to my knowledge) -- so the scripts I wrote to monitor tornado log files and block all the random attack vectors are still worth having around.

denyhosts is configurable enough that I see no reason why it couldn't also work for other services. You'd probably have to hand-configure it (with a regex for each log type), though.

That said, I've only ever used it for ssh.

I wrote a similar script around 2003 or so, way before fail2ban was ever created, when PF was first introduced. Since then I've just kept using it, since it works particularly well for what I want it to do.

What if you can't disable root login (i.e. it's a cloud server)?

There's no reason you can't disable root logins on a cloud server. I do it every time.

This is where firewalls come in handy...

Disabling remote root login isn't as big a deal as it used to be - as others have said, if someone gets your administrative account you're pretty screwed anyway - and if they get your password for sudo, it's the same thing.

Other than that, it really depends on what you mean by cloud.

Restrict root login from a particular IP address

AllowUsers root@

I use AllowUsers to ensure only users I want can login from SSH.

But what if your IP address changes?

Root login shouldn't be your normal entrance. For maintenance, use a sudo-capable regular user account with public key authentication. If some software requires root login and you cannot do anything about it, enable root login and allow it only from specific IPs.
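The restriction mentioned above is an sshd_config pattern; the address and username here are placeholders (192.0.2.0/24 is the reserved documentation range):

```
# /etc/ssh/sshd_config -- root only from one known address,
# your regular account ('admin' here) from anywhere
AllowUsers root@192.0.2.10 admin
```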

Why can't you disable root login on a cloud server?

For a firewall (frontend for iptables) I recommend csf: http://www.configserver.com/cp/csf.html

Features: stateful inspection, protection from different types of attack, ability to ban port scanning attempts, ban brute force logins for various services (ssh, ftp, ...), numerous configuration options but very easy to configure with excellent inline documentation. For more info see the link above.

iptables already has all those features. I find "frontend" scripts like CSF confusing and distracting, because they throw in everything under the sun, including stuff that might make no sense in my particular use case.

Yeah, if you know the TCP/IP networking stack and iptables well enough. For most people, they do not. That's why sysadmins exist ;-)

Otherwise, can iptables ban those who failed more than 10 SSH logins?

"Otherwise, can iptables ban those who failed more than 10 SSH logins?"

Yes, sort of, though it's not "more than 10", it's "drop packets that look suspiciously like an automated attack", which I think is actually cooler because it never outright bans anyone but it makes it impossible to run an effective brute force attack. You'd use the "--state NEW" option to determine whether the connection is a new one or an established one. If someone connects over and over again to ssh (or any login-able service, really) within a short time you can drop them. Rules would look something like this:

  iptables -A INPUT -p tcp -i eth0 -m state --state NEW --dport 22 -m recent --update --seconds 15 -j DROP

  iptables -A INPUT -p tcp -i eth0 -m state --state NEW --dport 22 -m recent --set -j ACCEPT

Assuming, of course, that you're already accepting ESTABLISHED connections above those rules.

iptables is astonishingly powerful and flexible, and it's usually pretty easy to google up the right recipes, if you aren't quite sure of the incantations. It can be a little intimidating, but it more than repays you for the effort. When I did network consulting I was always surprised when I came upon a network where they had a Linux router, web server, mail server, etc., and then a Cisco PIX firewall sitting in front of it. Once again, it's just needless complexity, when the Linux box could do everything the PIX does (and possibly more, in the case of the low end PIX that I usually see).

Since you have professed iptables ignorance...are you sure CSF is doing anything sensible in your deployment? By that, I mean, do you have any idea what your firewall rules are actually doing and if they are effective for what you think they are effective for? I'm always a bit wary when I come upon a network where the people maintaining it have no idea what their systems are doing or how they work. While CSF may be a net positive, if the trend is toward avoiding knowledge, it's a dangerous direction to go in. I'm all for simplifying, and sometimes tools make things simpler. But, as I said, in my experience the "pile of shell" firewall scripts complexify things rather than simplify them.

You should also look at the grsecurity kernel patch.


It can really help to harden a system by making it much, much harder to exploit buffer overflows, making it impossible for users to see any processes other than their own etc. It's a great kernel patch and I highly recommend it.

I don't get why people recommend chkrootkit + fail2ban/denyhosts.

I would recommend OSSEC to anyone looking for a serious host-based IDS (it does everything those tools do plus a lot more, and is very lightweight).

Link: http://www.ossec.net

I found that, while it looked great, on every install I tried (granted, this was a couple of years ago, but I tried various distributions and hardware) the mailing engine tended to fail silently, which was fairly critical.

Denyhosts, while not as sophisticated by a long shot, is dead easy to install and takes care of the SSH brute-force issue.

nmapping localhost doesn't sound very effective.

netstat -tap | grep LISTEN

Or you could just use the 'listen' flag ;)

sudo netstat -tpl

Today I learned. Thanks!

The first step on securing an Ubuntu server is switching to Debian (this sounds a bit trollish but oh well)

