Securing a Linux Server (spenserj.com)
304 points by shawndumas on Sept 14, 2013 | 140 comments

This falls a bit short.

You shouldn't just update, you should update regularly or better yet set up unattended upgrades[1]. Especially for your hobby projects or personal server because odds are that you won't always have the time to act on every security advisory. (Subscribe here[2] to at least hear about them.) Also, if something breaks once in a blue moon, it's not that big a deal.
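On Debian/Ubuntu, a minimal unattended-upgrades setup looks roughly like this (a sketch based on the Ubuntu wiki page linked below; file names can differ by release):

```shell
# Install and enable automatic security upgrades
sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
# Which origins get upgraded automatically (e.g. security-only) is then
# controlled in /etc/apt/apt.conf.d/50unattended-upgrades
```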

Fail2ban is fairly heavy, and only very recently gained IPv6 support (which means the version in your distro's repo may not have it). You can achieve similar results with something like

    -A INPUT -p tcp -m tcp --dport 22 -m recent --update --seconds 180 --hitcount 4 --rttl --name SSH --rsource -j LOG --log-prefix "ssh brute force: "
    -A INPUT -p tcp -m tcp --dport 22 -m recent --update --seconds 180 --hitcount 4 --rttl --name SSH --rsource -j DROP
    -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource -j ACCEPT
but if you disable password logins, no one's going to brute force their way in.
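For reference, disabling password logins is a couple of directives in sshd_config (a sketch; keep an existing session open while you test the change so you don't lock yourself out):

```shell
# /etc/ssh/sshd_config
#   PasswordAuthentication no
#   ChallengeResponseAuthentication no
# then reload sshd:
sudo service ssh reload
```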

[1] https://help.ubuntu.com/community/AutomaticSecurityUpdates

[2] http://www.ubuntu.com/usn/

As a seasoned system administrator, I strongly advise against unattended or automatic upgrades. In a Linux or FLOSS ecosystem, components are tightly coupled but loosely developed. glibc, say, underpins nearly every server application, yet glibc and those applications are not regression-tested against one another. That can lead to incompatibilities, downtime, performance issues, and in edge cases even data loss. So please don't.

Your concerns are not an issue with a distro like Debian where the only upgrades are for backported security fixes (or, when an announced point release comes around, critical bug fixes). I worked for several years at a shop with dozens of Debian servers, and nightly unattended upgrades very rarely caused issues, and never caused anything truly serious.

> Your concerns are not an issue with a distro like Debian

Ten years ago, I had a Debian server completely wedge itself on an update. glibc was hosed, practically nothing would run, and I couldn't untangle the mess. Had it been unattended, downtime would have been even longer than the hours it took to rebuild the box from scratch.

This was not the first or last time such an event would occur, merely the most severe.

Updates break things. Anyone who claims otherwise is either incredibly lucky, or incredibly inexperienced.

But if you can't flip a switch to deploy an identical server and/or restore from a backup image, aren't you in deep shit anyway? Your failure plan is to rebuild the box from scratch?

For a one-off budget server in 2002-2003? Yes, yes it is.

For a highly-available mission-critical infrastructure in 2013, the failure plan is also to rebuild from scratch, because "failure" means redundant and backup systems have exploded. This is most likely to occur when you automatically roll out untested changes to your infrastructure.

In either case, you won't be sleeping tonight. Or possibly tomorrow night.

Debian is very stable. I recommend running "safe-upgrade" nightly, while keeping an eye out for when you need a "dist-upgrade" (typically new kernel images, and other things that should be tested, e.g. that the server actually boots; don't forget to reboot to actually use the new kernel).

I'm not sure how good the Ubuntu LTS releases are, but Debian has always been great both at keeping a stable system up and at keeping the upgrade to a new version as painless as possible.

In the typical case, you only need dist-upgrade for upgrading to a new Debian release (which is every 2 years or so and is big enough news you'll probably hear about it). When there's a security update for the kernel, a regular upgrade/safe-upgrade[1] is sufficient to get it. Of course you must schedule a reboot yourself.
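Scheduling that nightly upgrade can be as simple as a cron entry (illustrative; on Debian/Ubuntu the unattended-upgrades package handles this more carefully):

```shell
# /etc/cron.d/nightly-upgrade -- illustrative fragment
0 4 * * * root apt-get update -qq && apt-get -y upgrade >> /var/log/nightly-upgrade.log 2>&1
```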

[1] "upgrade" if using apt-get, "safe-upgrade" if using aptitude

Ah -- I usually use "apt-get safe-upgrade" for scheduled upgrades, and aptitude for all manual work.

So I had the effect right, but the commands mixed up :-)

While you are absolutely correct, no one is regression-testing the updates coming into their project box/gameserver/etc., even if they perform the apt-get upgrade manually.

Might as well set it up automatically and stay protected <shrugs>

I prefer to be actively administrating my server, so that I know if something does go awry. If I'm asleep, it may be a few hours before I realize that an update broke something. On top of that, I clone my servers and test upgrades in a development environment as much as possible, before allowing an update to go live. As long as you're on top of the updates, a few days between automatic and manual shouldn't have much effect.

Another really nifty way to achieve pretty much the same is to use iptables' built-in limit module like so:

  -A INPUT -p tcp -m tcp --dport 22 -m state --state RELATED,ESTABLISHED -j ACCEPT
  -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m limit --limit 3/min --limit-burst 4 -j ACCEPT
  -A INPUT -p tcp -m tcp --dport 22 -j DROP
Limit-burst lets that many packets through before the rate limit kicks in; once the burst is exhausted, new connections are only accepted at the configured rate, so everything beyond the first 7 packets arriving within a minute is effectively dropped. (You could also use 3/hour, for example, if you think you need it.)

The limit module is a beautiful thing, since you can also use it to limit log messages generated for events etc., using the same syntax, like so:

  -A INPUT -m limit --limit 1/hour --limit-burst 3 -j LOG --log-prefix "iptables denied: "

Awesome, now I can DoS your SSH with only 3 requests per minute.

You can DoS your own IP out of being allowed in... but not mine.

The `limit` module operates system-wide; one must use the `hashlimit` module to differentiate based on IP.
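A per-source-IP version of the earlier rules, using hashlimit, might look like this (a sketch; check the option names against your iptables version):

```shell
# Rate-limit NEW SSH connections per source IP instead of globally
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
  -m hashlimit --hashlimit-name ssh --hashlimit-mode srcip \
  --hashlimit-upto 3/min --hashlimit-burst 4 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j DROP
```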

Both limit and spindritf's suggestion will mitigate the attack, but they won't notify you about it. In most circumstances, I'd prefer to skip on the thousands of notifications per day, however I sometimes like to know about every detail on a server, and Fail2ban gives me that level of control, without the need to tail a log file.

Security is a tradeoff. In this case, unattended upgrades lead to increased downtime.

(Why? Because you didn't read the caution note that upstream sent along to you, that's why. You just woke up to the alert saying that your system is down.)

If you have a reasonable environment, you should have alpha or qa servers that look just like production, except that they aren't visible to the outside. Always test changes there before upgrading on your production machines.

> Security is a tradeoff. In this case, unattended upgrades lead to increased downtime.

Agreed. Indeed, the parent explicitly shows one possible trade-off:

> Especially for your hobby projects or personal server because odds are that you won't always have the time to act on every security advisory.

I find https://wiki.ubuntu.com/UncomplicatedFirewall to be much friendlier to use when setting up firewalls.

The big problem I have is securing my private keys. I use multiple devices, and haven't found a secure and convenient way to share the keys across them. I'd love ideas.

You shouldn't try to share them, IMO; generate a key for each device instead. Yes, it may be painful to copy five public keys onto each new server, but the day your laptop gets stolen, you don't have to change the private key on every device: simply remove the laptop's key.
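Generating and deploying a per-device key is quick (file names here are illustrative):

```shell
# On each device, generate its own keypair
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_laptop
# Push the public half to each server's authorized_keys
ssh-copy-id -i ~/.ssh/id_rsa_laptop.pub user@server
# If the laptop is stolen, remove only its line from
# ~/.ssh/authorized_keys on each server.
```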

If you want some kind of centralization, you could add your keys to only a single server, then add that server's key to the others. You log into the first server, then use it to log into the others. But that would be dangerous: if the master server crashes, you can't access the other servers anymore. Redundancy is good in that case.

I agree that key-for-each-device is probably best practice.

It just doesn't work for me, though. I frequently need to log onto servers from new client computers, and the workflow of connecting from a trusted device, generating a new key for the new device, and then using the new device just isn't practical.

I've spent a lot of time thinking about this.

I think the biggest threat to security on my servers is brute forcing of ssh passwords. The best solution for that is key based authentication. (I deploy fail2ban as well though)

There is a non-zero threat of my devices being lost/stolen/accessed with my keys on them.

So key based authentication is the way to go, but I just can't get around the practical aspects.

In the end I often rely on long, random passwords kept in a KeePass database I sync (along with Fail2Ban). I've considered keeping my private keys in that too, and to be honest I think that isn't a terrible solution. It is certainly less than ideal, but probably mildly more secure than my current method (ie: really a private key shared across devices is acting as a very long password)

"I frequently need to log onto servers from new client computers, and the workflow of connect from trusted device, generate new key for new device then use new device just isn't practical."

Agree. But wouldn't it make sense then to generate unique keys for all of your devices (still) and then have a key which is your "travel" key and change that periodically?

I'll second that.

I keep my travel key in a pen drive, and in practice I change it shortly after every usage. But that's because I don't use it a lot, you'll probably want to change it every few weeks.

Also, if you don't trust the machines you are using to connect, no time frame is small enough. If you really need to connect from them, use a VM (best) or a guest account (not as good) with access to just the stuff you need.

That's a good compromise.

I just thought of enabling 2 factor authentication instead of plain password auth, which I think is probably reasonable too

Was about to suggest 2-factor auth as well. At least for the "first" machine, where the SSH key for the others is.

Your private key should be encrypted with a strong password.

If you follow that best practice, you can stop worrying about your laptop being stolen, and start worrying about five dollar wrenches. (http://xkcd.com/538/)
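Adding (or changing) the passphrase on an existing key is one command (path is illustrative):

```shell
# Encrypt an existing private key with a (new) passphrase
ssh-keygen -p -f ~/.ssh/id_rsa
```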

Better way to reduce the need for copying keys around would be setting up a private CA, and signing the individual keys with it. Of course you need to keep the CA key secure, I'd strongly suggest removable media or preferably a real HSM.

Can you explain how this would work?

How do I use a private CA to generate(?) keys for ssh authentication?

https://blog.habets.se/2011/07/OpenSSH-certificates hopefully explains how to set up CA for SSH
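The gist of the approach (key names illustrative; requires OpenSSH 5.4 or newer):

```shell
# 1. Create the CA keypair; keep ca_key offline or on removable media
ssh-keygen -f ca_key
# 2. Sign a user's public key with the CA (produces id_rsa-cert.pub,
#    which the user keeps next to the private key)
ssh-keygen -s ca_key -I alice@laptop -n alice -V +52w id_rsa.pub
# 3. On every server, trust the CA once via sshd_config:
#      TrustedUserCAKeys /etc/ssh/ca_key.pub
```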

Thanks for this tip, I can't believe I missed the announcement about this feature of SSH, but this is gold.

The answer is don't: use an unique key for each device.

But this doesn't solve the problem that you still have to connect to each host to revoke the lost key. (Then again, you have the same problem with one key, so it's no worse.)

I fall on the side of an SSH key identifies a person, not a device. That's what host keys are for.

What's always been missing, though, is a decent way to keep track of all the places that might be set to authorize a particular key, so you can revoke it if you think it's been compromised (which should mean "is under brute force attack to decrypt its AES encryption").

I suppose that with a DynDNS server and a smartphone app you could set up a web page that lets you request a URL to securely download your SSH key to new devices, but that's kind of the problem: there's just nothing out there that lets you keep a proper eye on how your web of trust works for this sort of thing.

On Windows I use KeePass 2 [1] with the KeeAgent [2] plugin. The keys will be protected by your master password, and it will act like pageant so most tools will automatically recognize and use the keys. You can configure it to automatically let all applications use the keys, or first prompt you before providing the key to the user.

You can keep the database file on an USB stick or even something like dropbox or owncloud [3].

[1] http://keepass.info/index.html

[2] http://keepass.info/plugins.html#keeagent

[3] http://owncloud.org/

I do the same, and have found it works beautifully. You need my master password, SSH Key, and my SSH Key's password before you can log into my server. And if you somehow manage to get all of that, you need my account password in order to modify any non-user files.

Put a passphrase on your private key and copy it over whatever insecure channel you'd like. ... or use ssh, why wouldn't you use ssh?

Another option is ferm (http://ferm.foo-projects.org/).

sudo, ssh certs, egress firewalling. Congrats - you've covered about 4 pages from the NSA's 200 page hardening benchmark. Off to a good start!


This isn't intended as a be-all-end-all guide to security, like the NSA aims for. Instead, view it as a quickstart guide for those first five minutes on a new server, or as a starting point for beginners that have no idea where to even look.


I doubt there's suddenly much appetite for the NSA-authored security guides anymore.

This is very lightweight.

There are many more steps used in more serious situations... chiefly, these come to mind: Validating installation media. Validating/upgrading firmware. Installing a minimalist kernel. Securing BIOS and IPMI. Taking an inventory of part numbers and identities to detect tampering at later stages. Locking down network access on the switch to the appropriate MAC address. Determining administrative access methodology and distributing appropriate keys or credentials. Testing.

Probably most servers today are automatically set up, and live in large scale server farms. They probably boot via PXE, using custom auto-provisioning code. Other than physically taking delivery of the unit and inspecting it and logging its presence in inventory, such machines are literally just plugged in and boot up. Probably most of them are preconfigured to PXE boot by the vendor. A strong setup will also override BIOS settings automatically upon boot, ensuring further boots have 100% pretested/tuned configurations.

I used to think that the Redhat security guides were good [1], but would also go through the NSA guide to securing Redhat [2]. Seems a little ironic nowadays.

Recently, we had Bryan Kennedy's "first 5 minutes" [3], linked from Drew Crawford's guide to "NSA proofing" email [4] (pretty good guide to securing a mail server).

[1] https://access.redhat.com/site/documentation/Red_Hat_Enterpr... [2] http://www.nsa.gov/ia/mitigation_guidance/security_configura... [3] http://plusbryan.com/my-first-5-minutes-on-a-server-or-essen... [4] http://sealedabstract.com/code/nsa-proof-your-e-mail-in-2-ho...

I've never understood the compulsion to restrict outbound traffic on an internet facing server that you do not intend to be used by other (untrusted) people.

If someone is good enough to own you with everything else locked down, they can change any firewall rules completely if they need to, or just tunnel out over an allowed port.

Creating a non-root user then giving them carte blanche sudo rights is similarly odd to me. I'd rather just use root and /etc/nologin (assuming no one else needed a login shell to run).

EDIT: Added paragraph about non-root users.

I've seen more compromised boxes than one can shake a stick at. There's all sorts of reasons that blocking egress is a great idea. Compromises are usually automated bots, and no, they're not smart enough to bring down iptables. Even if it's a human that's pwned you, it's frequently a stupid human, or a lazy human. It's just good practice to practice security in depth.

Or a non-privileged account is accessed and the kiddie just wanted to run an eggdrop bot. Plot foiled.

The most important word in your comment is 'intend'. Unauthorized use is never intended (other than in honeypots, but even there it is somewhat intended...).

Egress filtering is important, if you think the chances are large that a user account is compromised a filter table can help. But an even more effective egress filter is one that you run on the router/hardware firewall just upstream from your machines. After all an egress filter on the machine can be disabled by someone that has compromised that machine.

Exactly right re: doing it downstream (or in the case of virtual machines, outside of the guest).

I didn't mean 'intend' in the context of unauthorised use; I was trying to differentiate between a 'single user' (can't think of a better term, but I'm sure you know what I mean) application server and systems where you purposefully give shell access to other users but want to restrict egress.

Systems like that are only a privilege escalation away from being compromised and are much softer targets. Anything with shell access for multiple parties not known intimately to the owner of the box should be monitored with great zeal.

Even an unprivileged compromised PHP script can open sockets and send email. Is it your intent that your server be able to send spam?

Or maybe there's a vulnerability in your web app that grants people shell access. In that case you'll still want to lock down their limited normal-user privileges.

Yes, but even then you're not protecting much by allowing HTTP... said user would still be able to download attack tools locally and use them.

Agreed, it will still stop a script kiddie (do people even use that word anymore? Showing my age!) from running random crap on high ports, but that's about it.

Blocking outgoing traffic for all applications except the ones that need the Internet can make a difference, especially under Windows. For example, some applications like to phone home for no benefit to the user, and there is sometimes no option to turn it off.

The option to block traffic per-application in Linux is no longer included in iptables (I think it was the --cmd-owner switch of the owner match).

While I agree with your reasoning (and personally wouldn't bother with such a restriction), there is a good reason and it's that while they're likely good enough to get root, they might not be or simply can't for some reason. And if they can't, then that extra measure could be worth its cost in time and effort.

Yeah, I know what you mean; see my reply to the other similar comment :)

Not every automated script, or every available script used in an automated way, can flush your rules before sending things out.

You're not going to be 24/7 at the keyboard of the system, there are going to be 0days, or unreleased vulnerabilities, etc.

It's usually also worth checking whether you need to generate new SSH host keys - a lot of VPS providers use the same host keys on all their deployed instances, which isn't secure, and the longer you leave it the more of a pain it'll be to change. If "ls -lh /etc/ssh/" shows that the host keys predate when your system was provisioned, change them:

    rm /etc/ssh/ssh_host_*
    dpkg-reconfigure openssh-server
Also, on some VPS solutions I've come across you can't use passwd to change the root password permanently; management software outside of the VM changes it back on the next restart.

The first time I locked down a server I found iptables to be a little intimidating. If you're on Ubuntu then ufw is extremely easy to use - it's just a front end for iptables.


I made the mistake of accidentally setting the firewall too strict on a remote server, killing my ability to SSH. A neat little trick I found was to setup a scheduled task to kill the firewall in 5 minutes, and then restart it. If it's too restrictive and locks you out, wait 5 minutes. If you did it right, then kill the scheduled task.
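A sketch of that dead-man's-switch trick using at(1) (assumes atd is installed; paths and commands are illustrative):

```shell
# Save current rules, then schedule an automatic rollback in 5 minutes
sudo iptables-save > /tmp/rules.backup
echo 'iptables-restore < /tmp/rules.backup' | sudo at now + 5 minutes
# ...apply and test your new rules; if you can still get in,
# cancel the pending rollback job:
sudo atrm "$(atq | sort -n | head -1 | awk '{print $1}')"
```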

You can just use iptables-apply. It achieves the same thing and is built-in. If you don't confirm the changes after they are applied, because you locked yourself out, they will be rolled back.

This is in the article FWIW.

APF, which is basically just a simplified way of setting up IPTables rules, does this.



apt-get install apf-firewall

Install dome9 to remotely configure iptables, and never be locked out of your server again.

This is a good start. Run netstat -an to see what ports are open, and shut down the things that open them. Turn off xinetd if it's on, etc. There was a much more complete best-practices document that came through here earlier.

And then realize you left all ipv6 traffic open (?)

Having separate iptables for IPv6 is painful. Are there any means to address this?

Modern rule generators usually take this into account. Well, you may still need to maintain certain definitions separately (as far as I can see at http://www.fwbuilder.org/4.0/docs/users_guide5/working-with-...)

But homemade scripts and rulesets may or may not, depending on the implementation.

Simple. Disable IPv6 - it's probably not worth the tradeoff, and your network provider probably doesn't support it anyway.

Worst advice ever :-(.

Many ISPs (and hosters) do support IPv6, and it’s worthwhile to learn a bit about it. If you are running an internet-facing server, I’d say it’s part of being a good citizen to support the latest standards.

To address the parent question, the firewall rules actually need to be a bit different than for IPv4. As an example, IPv6 makes supporting ICMP (for ping and the like) mandatory.

I wrote a post with a simple ip6tables setup about 4 years ago (in German, though, and there might be better posts): http://blogs.noname-ev.de/commandline-tools/archives/74-ip6t...

netstat -an shows a lot of stuff. I think all you need here is

netstat -ntul

(Thanks "child", I did indeed initially not have the -u there. You should.)

Better yet, netstat -nltu (--numeric --listening --tcp --udp). You probably want to shut down anything unwanted that binds a UDP socket too.

I frequently use netstat -tulpen

.. which is a mnemonic trick, "tulpen" is "tulips" in German...

Which one are you referring to?

This article leaves IPv6 without a firewall.

Some feedback:

1) Start by the "lower layer"

If it's a physical server, start by reviewing all BIOS options, if it's virtual one you control, start by reviewing the host environment for the guest.

2) Follow from the bottom to the top

Next, review and secure the boot process (GRUB password, loaded kernel modules, kernel module options, kernel sysctl options).

Then review the services that are started before anything else is even running.

Review what is accessible in the system to "anybody".

Review what is accessible in the system to each "user". Many times a "user" ends up listening on a port exposed to the hostile network.

Review what the system is going to do over time: what is scheduled, which resources are assigned to which users (and their processes), and what is going to be logged.

4) The article does not limit or scrub any traffic

It's not that we forget about IPv6

It's that such an iptables setup, on any single-core VPS, is vulnerable to DoS from a single IP (no distributed attack needed to take it down). A SYN flood against port 22 could be enough.

To sum up:

When giving general security advice, let's make sure people actually end up with a secure system if they follow our tips, and not leave open things like broadcast ping from our host, or IPv6.
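On the "kernel sysctl options" point above, a few commonly recommended hardening knobs look like this (illustrative values; review each against your workload before applying):

```shell
# Append to /etc/sysctl.conf, then load the new values
cat <<'EOF' | sudo tee -a /etc/sysctl.conf
net.ipv4.tcp_syncookies = 1                  # resist SYN floods
net.ipv4.conf.all.rp_filter = 1              # drop obviously spoofed packets
net.ipv4.icmp_echo_ignore_broadcasts = 1     # ignore broadcast pings (smurf)
net.ipv4.conf.all.accept_redirects = 0       # ignore ICMP redirects
EOF
sudo sysctl -p
```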

I haven't enabled IPv6 on any of my servers yet, as I can say with 99.9% certainty that my visitors do not know what it is, let alone have it enabled. Additionally, this is intended as a quickstart or beginners guide, and obviously leaves quite a bit out. That being said, it is definitely something you want to consider, and I've added a followup to the post mentioning that.

The problem with IPv6 is not that it could be enabled... it's that it is enabled by default, in all major Linux distributions.

That being said, I've read your update, and that is the point that people looking for "recipes" should take away. If they are going to touch ssh or iptables, they need to be ready to explore the documented options (which are many), to validate that the changes are working, to dive into networking, etc.

Guides like this are tricky. They secure one facet of lots of things, and not always the things you actually want.

If securing a server were as straightforward as changing ssh settings and firewall rules, distro providers would do this sort of thing out-of-the-box, or at the least there would be a script circulating on github for doing this specific setup.

I've never been a fan of shunning/blocking IP addresses based on the number of wrong passwords. It's too easy to exploit as a denial-of-service attack, and can be used to lock out legitimate users.

Possibly use a non-standard SSH port, make sure you disable SSH protocol v1, apply a password policy, and allow password logins only for whitelisted users. Now you should be reasonably safe against brute force attacks and still have an accessible system.

If/when you disable root logins through SSH, try to have another way to log in as root: maybe a console/KVM switch, preferably with remote access through a secure network.

> It's too easy to exploit as a denial-of-service attack and can be used to lock out legitimate users.

Could you explain what the "easy" way is to DoS using fail2ban (or an equivalent system)?

There are still a lot of ISPs that don't follow BCP 38, so source address IP spoofing is easy enough.

SSH is TCP-based; you need the three-way handshake to establish a connection. Source address spoofing only gets you halfway there.

Unless your server's TCP stack has SYN-cookies enabled (and I think most do): http://www.jakoblell.com/blog/2013/08/13/quick-blind-tcp-con...

Law of unintended consequences strikes again.

What's wrong with passwords? The info is in your head, with a physical booklet secured at home as backup. Its only weakness is a keylogger. A secret key protected by a password doesn't give you more security. You can only connect from computers that have a copy of the key; if that computer is inaccessible or dead, or the private key is erased, you can't log in. It is also exposed to keyloggers. So I'm staying with passwords for now. It should be a secure password, of course. Maybe use private keys, with a password for backup recovery only.

Password authentication is prone to brute force. With public key auth, even if someone has your passphrase, they also need the private key to break into your server.

Indeed, there's always the risk of losing your private key. That's why it's useful to have several: I have one on my desktop, one on my laptop, and one on my mobile. If any gets compromised, I only have to delete one public key from authorized_keys and generate a new one. That's still open to some attacks, but it's way less dangerous than simply allowing anyone from anywhere to log in.

That's also better than allowing only a few IPs on port 22, because you'll never know where you will be, or what IP your laptop will have, when an emergency occurs.

> I have one on my desktop, one on my laptop and one on my mobile. If any get compromised, I only have to delete one public key in authorized keys and generate a new one.

Personally, I still don't like SSH keys. Yes, they are "more secure" in the sense of preventing others from breaking into your servers. But they are less reliable, in the sense that they increase your probability of being completely locked out when shit hits the fan.

Laptop, desktop, and mobile phone? If you can't imagine a situation where you lose access to all three at once, you either live a very different lifestyle than I do or have a poor imagination.

For me, the expected loss (i.e. probability * damage) of losing access to an important server because of some fire or flood is far greater than the expected loss from someone guessing the SSH port, my login, my password, and then the root password.

Well, maybe you live in some place especially prone to natural catastrophes :) I live in France, and I certainly have a lower chance of losing all devices at once than of having a single one compromised.

Fail2ban prevents brute-force connection attempts. One can also choose a password resistant to brute force by using a long string: a psalm verse, or a sentence from some poetry that is easy to remember. Or pick a password made of the first letter of each word of such a long sentence.

> A psalm verse, or a sentence from some poetry which is easy to remember.

I think you'd be astonished at how weak these are. That's part of the problem: it's hard, and not getting easier, to pick strong passwords.

> A secret key protected by password doesn't provide you more security.

It does. You can encrypt the key and then you have two-factor authentication. You need to have the key and know the password to use it.

I don't want to be picky, but this isn't two-factor authentication in the usual sense. Two-factor authentication uses two independent means/media to authenticate. Protecting the private key with a password is required to keep it from being exposed if it's stolen, but once it's stolen, brute-force password guessing is free. A brute-force attack against a login password can be detected and impaired; not so with a stolen private key.

You must have the key, and know the password. Two factors.

It might have been true if it weren't possible to get a copy of the plaintext key (e.g. from ssh-agent).

Actual two-factor authentication would require proof of access to the secret key plus a different login password.

Hmm, no mention of SELinux? It's a pain to get into, though. Many just turn it off to make its errors go away, when they should actually configure their machine. But it is very convoluted and hard to really get into; IRC helped a lot here.

Another suggestion I find missing is simply setting the SSH port to something other than 22. While many will correctly note that this comes close to security through obscurity, it does throw off some of the more mainstream attempts.

A third one is more web-server specific: if you have the option, make the server hide its version number in its responses.

A fourth one would be to mask all errors to a 404 on a production server, the less an attacker knows, the better.

A fifth actually relates to the person operating the server: know what permissions are, and what they allow/deny. Too many people "solve" issues by doing a "chmod 777" on a folder, which might have fixed one problem (in a very bad way) but probably set them up for a few nasty surprises down the road.
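Config fragments for a couple of the suggestions above (values illustrative):

```shell
# 2) Move SSH off port 22 -- in /etc/ssh/sshd_config:
#      Port 9922
# 3) Hide the web server's version string:
#      nginx  (nginx.conf):   server_tokens off;
#      Apache (apache2.conf): ServerTokens Prod
#                             ServerSignature Off
```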

I've been tweaking a post on building a solid LEMP stack that I hope to publish soon, which covers your last three points. While security through obscurity isn't a solution, it does help thwart a large portion of attacks, although it causes more headache than benefit in some cases (heavily restricted work networks, for instance). When it comes to webservers you don't run into issues like that, and the less information you provide, the better.

No mention of SELinux because it's more than painful to get into. No, I will actually say it: the SELinux documentation is shit.

I actually wanted to build some kind of "frontend"* for SELinux to make it easier to use, but every time I went back to it I got frustrated, and it never got past the idea stage.

* something more like what gitflow is to git than what SELinux Troubleshooting is to SELinux

Amazed that "Install grsecurity" isn't one of the key things listed here. Sure, you have to compile your own kernel, but the enhanced memory protections and many other hardening features make it an excellent addition to your security arsenal.


It's not just "you have to compile your own kernel". It's "you have to compile your kernel EVERY TIME there's a kernel update". If you forget one update and you leave an exploit on your system, you're screwed.

Having said that, I'm surprised Debian and Ubuntu don't have better support for grsecurity. If they'd provide up-to-date packages I'd switch over.

My dedicated server provider ships grsec enabled kernels by default. Admittedly, they tend to be a bit stale.

I'm not really "amazed", as that goes down a particular bunkerization route that resists a class of attackers you wouldn't be capable of holding off anyway if this article is useful to you in the first place. Properly configuring RBAC is probably slightly above much of the target audience.

"Install PaX" would be more accurate. Anyhow, the article says nothing special. fail2ban doesn't do anything to secure you, either.

The article also completely forgets about parametrizing the kernel (sysctl hardening).
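A few commonly recommended sysctl settings as an illustration of what's meant; this list is non-exhaustive, and each value should be checked against your workload before enabling:

```
# /etc/sysctl.d/99-hardening.conf
net.ipv4.conf.all.rp_filter = 1            # reverse-path (spoofing) filter
net.ipv4.conf.all.accept_redirects = 0     # ignore ICMP redirects
net.ipv4.conf.all.accept_source_route = 0  # ignore source-routed packets
net.ipv4.tcp_syncookies = 1                # SYN-flood mitigation
kernel.kptr_restrict = 1                   # hide kernel pointers in /proc
```

Apply with `sudo sysctl --system` (or `sysctl -p` on the specific file).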

Important caveat: fail2ban does not work for IPv6 connections yet (there are patches floating around though)

ty, i didn't know that was the case. are there other alternatives?

There is a quite straightforward patch[0], and since fail2ban is python, you can apply it on the installed package. Not the best for maintenance, so use your best judgement (and a few tricks to keep yourself safe and up to date), but it's certainly better than being exposed.

[0]: http://www.fail2ban.org/wiki/index.php/Fail2ban:Community_Po...

I also always change the SSH port to 9922. I haven't seen any failed login attempts so far.

I would use a privileged port for SSH (different from 22). If an attacker owns the process, they would still need root to open another listener, since binding to a port <1024 is privileged.

I wonder if it's ever happened that a hacker was able to pwn sshd only to be stopped by the lack of a local privilege escalation to root.

I've found CSF is a great tool for securing a server if you aren't an IPTables expert: http://configserver.com/cp/csf.html

Ah, I'll second CSF! Love it, it wraps so much up into a sweet little bunch of scripts.

This article is okay, nothing against the author but the topic of securing a host depends on many factors and it's so extensive at so many levels that this post doesn't even scratch the surface.

Why is it in frontpage?

Because most developers have ~0 such knowledge...

Would automated security updates be an appropriate item to include in this?

I thought the same, since the OP has already installed and configured outgoing emails, I think using apticron[0] would make a lot of sense.

[0] https://www.debian-administration.org/articles/491
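On Debian/Ubuntu the usual companion to apticron is unattended-upgrades, whose stock configuration restricts itself to the security pocket. Enabling the periodic run is two lines:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

The allowed origins (security-only by default) are defined in /etc/apt/apt.conf.d/50unattended-upgrades.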

Please do yourself a favor and install dome9.com instead of fail2ban. You'll have your SSH port closed, as well as all other non-public ports, and you won't rely on funky failed-login logic. As a bonus you'll have clean logs. I also saw a recommendation here to change the SSH port. Man, even kids today use nmap; it takes nothing to find your 'hidden' SSH.

Consider pimping this setup with an external WAF such as incapsula.com or CloudFlare.com. Combined with Dome9, no entity will be connecting directly to your server (or even know its IP).

Can I haz an Ansible playbook with good things?

If you allow some self-promotion: you can take a look at the ansible playbook repo I've been putting up (including some documentation and background in the doc directory): https://github.com/pjan/the-ansibles

In all honesty, the steps in the OP's article only provide basic security. Pointing the ansible playbooks at your server will enhance it further.

You should also consider changing the SSH port, but that's up to you (in which case you also need to change the firewall settings!)

Wow, let me say that if you're even kind of thinking about using Ansible in any capacity, you should star/fork that project. There's some very good use of vars going on, and I'm going to use some of these examples to improve my own playbooks. Thanks!

LIFS (Linux Iptables Firewall Script) is aimed at what UFW does but it also provides NAT, port forwarding and allows you to work with groups of hosts and services.


Where can I buy this as a service?

You could use Heroku, Amazon Elastic Beanstalk, or a myriad of similar services. You could also hire a consultant to secure the system for you.

The consultant route is a lot easier said than done; at least, if you want a good consultant.

Most truly competent sysadmins already have full-time employment by a product company or hosting provider. Most so-called sysadmin consultants are former web developers who taught themselves Linux and security, and don't know what they don't know.

What is the purpose of creating a new user and not using root, assuming SSH password auth is disabled and only I have the key? If I make a new user that has sudo, then an attacker who gets that user (as opposed to root) only has to run sudo anyway, right?

Sudo asks for your account's password, so it's a protection in case someone manages to get your private key (or manages to log in through an OpenSSH exploit).

There seems to be a lot of wasted breath in these comments about IPv6. Given that it's unlikely that your provider even supports IPv6 on their network, and that globally IPv6 traffic is nearly non-existent, I wouldn't spend too much time caring about it.

Less than 2% of traffic to Google is IPv6: http://www.google.com/ipv6/statistics.html Even the attacks CloudFlare sees are mostly DDOS: http://blog.cloudflare.com/ipv6-day-usage-attacks-rise Traffic through Akamai is minimal: http://www.akamai.com/ipv6

Maybe someday it makes sense to spend a lot of time around IPv6 defenses, but today is not that day.

Their breath is not wasted! This is an article about securing a Linux server, and explicitly talks about adding a firewall with iptables. If it will only defend against IPv4-enabled attackers, then the server operator is at risk of leaving the system open to compromise.

Learning about ip6tables or turning off ipv6 entirely if not needed would be a better discussion topic, but people here are right to care about a major oversight in the security notes presented.

Very useful for any Linode user. Compared to AWS, I find it hard to manage security on Linode boxes; it needs a concept of security groups like AWS's, configurable and manageable from the UI.

Is creating a new group really necessary? Doesn't ubuntu have the 'sudo' group for exactly that purpose? And others have 'wheel'.

Yes it does, and Debian has staff.

The group in Debian is also sudo. The staff group is used to manage /usr/local (and is going away in future Debian releases, since it's root-equivalent anyway).

Change the default SSH port and use ~/.ssh/config to predefine connection parameters. This way you can just type "ssh mybox".
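A minimal ~/.ssh/config entry along those lines; the host alias, address, port, user, and key path are all placeholders:

```
# ~/.ssh/config
Host mybox
    HostName 203.0.113.10       # or a DNS name
    Port 2222                   # your non-default SSH port
    User deploy
    IdentityFile ~/.ssh/id_mybox
```

After that, `ssh mybox` (and scp/rsync via ssh) pick up all of these settings automatically.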

  you’ll want to lock down SSH access entirely,
  and make sure that only you can get in
This sounds like Ubuntu is by default open to everybody via ssh. I find this hard to believe.

The whole article sounds like a lot of fud to me. For example, what benefit is there in creating a new user with sudo privileges instead of using root directly?

> For example, what benefit is there in creating a new user with sudo privileges instead of using root directly?

Users should run with as few privileges as possible. And with sudo, getting root requires two credentials: the SSH key and the account password.

Sudo logs all commands. The commands usable when using sudo can be limited, whereas root gives access to everything. Using sudo creates heightened awareness of risks.

Root is a high value target. Giving it a strong password is important, but locking it is better.
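As an illustration of limiting what sudo allows, a drop-in sudoers file can whitelist specific commands; the "deploy" user and command list here are made up:

```
# /etc/sudoers.d/deploy -- always edit with: visudo -f /etc/sudoers.d/deploy
# Allow the deploy user to restart nginx and read its logs, nothing else.
deploy ALL=(root) /usr/sbin/service nginx restart, /usr/bin/journalctl -u nginx
```

Listing exact arguments matters: a bare command path in sudoers permits any arguments, which for many tools (editors, pagers, anything with a shell escape) is effectively unrestricted root.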

> For example, what benefit is there in creating a new user with sudo privileges instead of using root directly?

Also, when a group of people all need access to the same servers, you don't want everyone SSHing in as root. It's much easier to manage team security when everyone has individual users and sudo rights that can be logged and audited.

Potentially that if the key is compromised, the holder still requires a password before gaining root access. But if your key is compromised, you probably have a bigger problem.

The .ssh folder and the files inside it should be given the lowest permissions possible.

Just to avoid confusion, I would have said "most restrictive", not "lowest". At first glance, "lowest" might be taken to mean most permissive.

port knocking to open ssh can come in handy in certain setups as well...
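The classic tool for that is knockd; a minimal sketch of its configuration (the knock sequence, timeout, and iptables command are illustrative and should be adapted):

```
# /etc/knockd.conf
[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    tcpflags    = syn
    command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
```

You'd pair this with a default-drop rule on port 22 and a matching close section (or a timed rule) so the hole doesn't stay open forever.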

Is there any chance of getting a guide of this for FreeBSD?

Changing the default SSH port should be mentioned right away.
