You shouldn't just update; you should update regularly, or better yet set up unattended upgrades. Especially for your hobby projects or personal server, because odds are that you won't always have the time to act on every security advisory. (Subscribe here to at least hear about them.) Also, if something breaks once in a blue moon, it's not that big a deal.
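On Debian/Ubuntu the setup is tiny; a minimal sketch:

apt-get install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades   # writes /etc/apt/apt.conf.d/20auto-upgrades for you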
Fail2ban is fairly heavy, and only very recently gained IPv6 support (which means the version in your distro's repo may not have it). You can achieve similar results with something like:
# log any source already recorded four or more times on port 22 in the last 180 seconds...
-A INPUT -p tcp -m tcp --dport 22 -m recent --update --seconds 180 --hitcount 4 --rttl --name SSH --rsource -j LOG --log-prefix "ssh brute force: "
# ...then drop it
-A INPUT -p tcp -m tcp --dport 22 -m recent --update --seconds 180 --hitcount 4 --rttl --name SSH --rsource -j DROP
# otherwise record the new connection attempt in the SSH list and accept it
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource -j ACCEPT
Ten years ago, I had a Debian server completely wedge itself on an update. glibc was hosed, practically nothing would run, and I couldn't untangle the mess. Had it been unattended, downtime would have been even longer than the hours it took to rebuild the box from scratch.
This was not the first or last time such an event would occur, merely the most severe.
Updates break things. Anyone who claims otherwise is either incredibly lucky, or incredibly inexperienced.
For a highly-available mission-critical infrastructure in 2013, the failure plan is also to rebuild from scratch, because "failure" means redundant and backup systems have exploded. This is most likely to occur when you automatically roll out untested changes to your infrastructure.
In either case, you won't be sleeping tonight. Or possibly tomorrow night.
I'm not sure how good the Ubuntu LTS releases are, but Debian has always been great at both keeping a stable system up and keeping the upgrade to a new version as painless as possible.
 "upgrade" if using apt-get, "safe-upgrade" if using aptitude
So I had the effect right, but the commands mixed up :-)
Might as well set it automatically and stay protected <shrugs>
# allow packets belonging to established SSH connections
-A INPUT -p tcp -m tcp --dport 22 -m state --state RELATED,ESTABLISHED -j ACCEPT
# allow new SSH connections at 3 per minute, with a burst of 4
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m limit --limit 3/min --limit-burst 4 -j ACCEPT
# drop everything beyond that
-A INPUT -p tcp -m tcp --dport 22 -j DROP
The limit module is a beautiful thing, since you can also use it to limit log messages generated for events etc., using the same syntax, like so:
-A INPUT -m limit --limit 1/hour --limit-burst 3 -j LOG --log-prefix "iptables denied: "
(Why? Because you didn't read the caution note that upstream sent along to you, that's why. You just woke up to the alert saying that your system is down.)
If you have a reasonable environment, you should have alpha or qa servers that look just like production, except that they aren't visible to the outside. Always test changes there before upgrading on your production machines.
Agreed. Indeed, the parent explicitly shows one possible trade-off:
Especially for your hobby projects or personal server because odds are that you won't always have the time to act on every security advisory.
The big problem I have is securing my private keys. I use multiple devices, and haven't found a convenient way to share the keys across devices securely. I'd love ideas.
If you want some kind of centralization, you could add your key to a single server only, then add that server's key to the others. You log in to the first server, then use it to log in to the others. But that would be dangerous: if the master server crashes, you can't access the other servers anymore. Redundancy is good in that case.
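A rough sketch of that layout, with made-up hostnames ("gate" is the master, "web1" one of the others):

ssh-keygen                   # on gate: the one keypair you maintain
ssh-copy-id admin@web1       # push gate's public key to each other server
ssh -t gate ssh web1         # from your device: hop through gate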
It just doesn't work for me though. I frequently need to log onto servers from new client computers, and the workflow of connect from a trusted device, generate a new key for the new device, then use the new device just isn't practical.
I've spent a lot of time thinking about this.
I think the biggest threat to security on my servers is brute forcing of ssh passwords. The best solution for that is key based authentication. (I deploy fail2ban as well though)
There is a non-zero threat of my devices being lost/stolen/accessed with my keys on them.
So key based authentication is the way to go, but I just can't get around the practical aspects.
In the end I often rely on long, random passwords kept in a KeePass database I sync (along with Fail2Ban). I've considered keeping my private keys in that too, and to be honest I think that isn't a terrible solution. It is certainly less than ideal, but probably mildly more secure than my current method (i.e., a private key shared across devices is really acting as a very long password).
Agreed. But wouldn't it make sense, then, to (still) generate unique keys for all of your devices, and then have a separate "travel" key that you change periodically?
I keep my travel key in a pen drive, and in practice I change it shortly after every usage. But that's because I don't use it a lot, you'll probably want to change it every few weeks.
Also, if you don't trust the machines you are using to connect, no time frame is small enough. If you really need to connect from them, use a VM (best) or a guest account (not as good) with access to just the stuff you need.
I just thought of enabling two-factor authentication instead of plain password auth, which I think is probably reasonable too.
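One common way to do that on a Debian-ish box is the google-authenticator PAM module; a sketch, not a full walkthrough:

apt-get install libpam-google-authenticator
google-authenticator        # run once per user to enroll a device
# then in /etc/pam.d/sshd:  auth required pam_google_authenticator.so
# and in sshd_config:       ChallengeResponseAuthentication yes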
If you follow that best practice, you can stop worrying about your laptop being stolen, and start worrying about five dollar wrenches. (http://xkcd.com/538/)
How do I use a private CA to generate(?) keys for ssh authentication?
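OpenSSH has had certificates built in since 5.4, which covers this; roughly (the file names here are just examples):

ssh-keygen -f ca                                        # generate the CA keypair
ssh-keygen -s ca -I alice -n alice -V +52w id_rsa.pub   # sign a user's public key
# on every server, point sshd at the CA in sshd_config:
#   TrustedUserCAKeys /etc/ssh/ca.pub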
But this doesn't solve the problem that you have to connect to each host to reject the lost key. (Then again, you have the same problem with one key, so it's no worse.)
What's always been missing, though, is a decent way to keep track of all the places that might be set to authorize a particular key, so you can revoke it if you think it's been compromised (which should mean "is under brute force attack to decrypt its AES encryption").
I suppose that with a DynDNS server and a smartphone app you could set up a web page that lets you request a URL to download your SSH key securely to new devices. But this is kind of the problem: there's just nothing out there that lets you keep a proper eye on how your web of trust works for this type of thing.
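Absent such a tool, the crude version is a loop over a host list you maintain by hand (the hostnames and key comment below are made up):

for h in web1 web2 db1; do
  ssh "$h" "sed -i '/alice@old-laptop/d' ~/.ssh/authorized_keys"   # drop the compromised key by its comment
done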
You can keep the database file on a USB stick, or even on something like Dropbox or ownCloud.
There are many more steps used in more serious situations... chiefly, these come to mind: Validating installation media. Validating/upgrading firmware. Installing a minimalist kernel. Securing BIOS and IPMI. Taking an inventory of part numbers and identities to detect tampering at later stages. Locking down network access on the switch to the appropriate MAC address. Determining administrative access methodology and distributing appropriate keys or credentials. Testing.
Probably most servers today are automatically set up, and live in large scale server farms. They probably boot via PXE, using custom auto-provisioning code. Other than physically taking delivery of the unit and inspecting it and logging its presence in inventory, such machines are literally just plugged in and boot up. Probably most of them are preconfigured to PXE boot by the vendor. A strong setup will also override BIOS settings automatically upon boot, ensuring further boots have 100% pretested/tuned configurations.
Recently, we had Bryan Kennedy's "first 5 minutes", linked from Drew Crawford's guide to "NSA-proofing" email (a pretty good guide to securing a mail server).
If someone is good enough to own you with everything else locked down, they can change any firewall rules completely if they need to, or just tunnel out over an allowed port.
Creating a non-root user and then giving them carte blanche sudo rights is similarly odd to me. I'd rather just use root and /etc/nologin (assuming no one else needs a login shell).
EDIT: Added paragraph about non-root users.
Egress filtering is important: if you think the chances are high that a user account will be compromised, a filter table can help. But an even more effective egress filter is one that runs on the router/hardware firewall just upstream from your machines. After all, an egress filter on the machine itself can be disabled by someone who has compromised that machine.
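For completeness, a minimal sketch of a host-level egress filter, assuming a box that only needs DNS and HTTP(S) outbound:

-A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A OUTPUT -p udp --dport 53 -j ACCEPT
-A OUTPUT -p tcp --dport 80 -j ACCEPT
-A OUTPUT -p tcp --dport 443 -j ACCEPT
-A OUTPUT -j DROP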
I didn't mean 'intend' in the context of unauthorised use; I was trying to differentiate between a 'single user' (can't think of a better term, but I'm sure you know what I mean) application server and systems where you purposefully give shell access to other users but want to restrict egress.
Agreed, it will still stop a script kiddie (do ppl even use that word anymore? Showing my age!) from running random crap on high ports, but that's about it.
The option to block traffic per application in Linux is no longer included in iptables (I think it was the --cmd-owner switch).
You're not going to be at the keyboard of the system 24/7; there are going to be 0-days, unreleased vulnerabilities, etc.
apt-get install apf-firewall
But homemade scripts and rulesets may or may not, depending on the implementation.
Many ISPs (and hosters) do support IPv6, and it’s worthwhile to learn a bit about it. If you are running an internet-facing server, I’d say it’s part of being a good citizen to support the latest standards.
To address the parent's question: the firewall rules actually need to be a bit different than for IPv4. As an example, IPv6 makes supporting ICMPv6 (for ping and the like) mandatory.
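A minimal ip6tables sketch along those lines; the ICMPv6 accept is the part you must not forget:

-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p ipv6-icmp -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -j DROP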
I wrote a post with a simple ip6tables setup about 4 years ago (in German, though, and there might be better posts):
(Thanks "child", I did indeed initially not have the -u there. You should.)
.. which is a mnemonic trick, "tulpen" is "tulips" in German...
A few pieces of feedback:
1) Start with the "lowest layer"
If it's a physical server, start by reviewing all the BIOS options; if it's a virtual one you control, start by reviewing the host environment for the guest.
2) Work from the bottom to the top
Continue by reviewing and securing the boot process (GRUB password, kernel modules loaded, kernel module options, kernel sysctl options; a sysctl sketch follows below).
Then review the services that get started before anything else is running.
Review what is accessible in the system to "anybody".
Review what is accessible in the system to each "user". Many times a "user" is going to be listening on a port exposed to the hostile network.
Review what the system is going to do over time: what is scheduled, which resources are assigned to which users (and their processes), and what is going to be logged.
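To illustrate the sysctl part mentioned in 2), a few commonly tightened knobs (illustrative values, not a definitive list):

# drop obviously spoofed packets
net.ipv4.conf.all.rp_filter = 1
# ignore broadcast pings
net.ipv4.icmp_echo_ignore_broadcasts = 1
# refuse ICMP redirects
net.ipv4.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
# hide kernel pointers from unprivileged processes
kernel.kptr_restrict = 2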
4) Does not limit or scrub any traffic
It's not that we forget about IPv6.
It's that such an iptables setup, on any single-core VPS, is vulnerable to DoS from a single IP (you don't need a distributed attack to take it down). A SYN flood against port 22 could be enough.
When giving general security advice, let's make sure that people who follow our tips end up with a secure system, and not leave things like broadcast ping to our host, or IPv6, wide open.
That being said, I've read your update, and that is the point that people looking for "recipes" should get: if they are going to touch ssh or iptables, they need to be ready to explore the documented options (which are many), to validate that the changes are working, to dive into networking, etc.
If securing a server were as straightforward as changing ssh settings and firewall rules, distro providers would do this sort of thing out-of-the-box, or at the least there would be a script circulating on github for doing this specific setup.
Discussed here: https://news.ycombinator.com/item?id=5316093
Discussed here: https://news.ycombinator.com/item?id=5361335
Possibly use a non-standard SSH port; make sure you disable SSH protocol v1 and apply a password policy; and allow password logins only for whitelisted users. Now you should be reasonably safe against brute-force attacks and still have a system that is accessible.
If/when you disable root logins through SSH, make sure you have another way to log in as root, such as a console/KVM switch, preferably with remote access through a secure network.
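In sshd_config terms that advice looks roughly like this ("alice" is a placeholder; the Match block is how you whitelist password users):

Port 2222                      # non-standard port, if you go that route
Protocol 2                     # disable SSHv1
PermitRootLogin no
PasswordAuthentication no
Match User alice
    PasswordAuthentication yes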
Could you explain the "easy" way to DoS using fail2ban (or an equivalent system)?
Law of unintended consequences strikes again.
Indeed, there's always the risk of losing your private key. That's why it's useful to have several: I have one on my desktop, one on my laptop, and one on my mobile. If any gets compromised, I only have to delete one public key from authorized_keys and generate a new one. That's still open to some attacks, but way less dangerous than simply allowing anyone from anywhere to log in.
That's also better than allowing only a few IPs on port 22, because you'll never know where you will be and what IP your laptop will have when an emergency occurs.
Personally, I still don't like SSH keys. Yes, they are "more secure" in the sense of preventing others from breaking into your servers. But they are less reliable in the sense that they increase your probability of being completely locked out when the shit hits the fan.
Laptop, desktop and mobile phone? If you can't imagine a situation where you lose access to all three at once, you either live a very different lifestyle than I do or have a poor imagination.
For me, the expected loss (i.e. probability * damage) of losing access to an important server because of some fire or flood is far greater than the expected loss of someone guessing the SSH port, my login, my password, and then the root password.
I think you'd be astonished at how weak these are. That's part of the problem: it's hard, and not getting easier, to pick strong passwords.
It does. You can encrypt the key, and then you have two-factor authentication: you need to have the key and know the password to use it.
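Adding (or changing) the passphrase on an existing key is a single standard flag:

ssh-keygen -p -f ~/.ssh/id_rsa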
Actual two-factor authentication would require proof of access to the secret key and a different login password.
Another suggestion missing here is setting the SSH port to something other than 22. While many will correctly notice that this comes close to security through obscurity, it does throw off some of the more mainstream attempts.
A third one is more web-server specific: if you have the option, make the server hide its version number in its responses (see the config sketch after this list).
A fourth one would be to mask all errors as a 404 on a production server; the less an attacker knows, the better.
A fifth actually concerns the person operating the server: know what permissions are, and what they allow/deny. Too many people "solve" issues by doing a "chmod 777" on a folder, which might have solved one problem (in a very bad way) but probably set them up for a few nasty surprises down the road.
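For the version-hiding and error-masking suggestions above, both major web servers have knobs; a sketch (the 404 remapping shown is nginx syntax):

# nginx
server_tokens off;
error_page 401 403 500 502 503 =404 /404.html;
# Apache
ServerTokens Prod
ServerSignature Off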
I actually wanted to build some kind of "frontend"* to SELinux to facilitate its usage, but every time I went back to it I got frustrated, and it never got beyond the idea stage.
* something more like what gitflow is to git than what SELinux Troubleshooting is to SELinux
Having said that, I'm surprised Debian and Ubuntu don't have better support for grsecurity. If they'd provide up-to-date packages I'd switch over.
Why is it on the front page?
Consider pimping this setup with an external WAF such as incapsula.com or CloudFlare.com. Combined with Dome9, there will be no entity connecting directly to your server (or even knowing its IP).
In all honesty, the steps in the OP's article only provide basic security. Pointing the Ansible playbooks at your server will enhance it further.
You should also consider changing the SSH port, but that's up to you (in which case you also need to change the firewall settings!)
Most truly competent sysadmins already have full-time employment by a product company or hosting provider. Most so-called sysadmin consultants are former web developers who taught themselves Linux and security, and don't know what they don't know.
Less than 2% of traffic to Google is IPv6:
Even the attacks CloudFlare sees are mostly DDoS: http://blog.cloudflare.com/ipv6-day-usage-attacks-rise
Traffic through Akamai is minimal:
Maybe someday it will make sense to spend a lot of time on IPv6 defenses, but today is not that day.
Learning about ip6tables, or turning off IPv6 entirely if it's not needed, would be a better discussion topic; but people here are right to care about a major oversight in the security notes presented.
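If you go the turn-it-off route, one common way is a pair of sysctls:

sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1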
you’ll want to lock down SSH access entirely,
and make sure that only you can get in
The whole article sounds like a lot of fud to me. For example, what benefit is there in creating a new user with sudo privileges instead of using root directly?
Users should run with as few privileges as possible. Logging in as root requires two sets of keys.
Sudo logs all commands. The commands usable when using sudo can be limited, whereas root gives access to everything. Using sudo creates heightened awareness of risks.
Root is a high value target. Giving it a strong password is important, but locking it is better (a one-liner; see below).
Also, when a group of people all need access to the same servers, you don't want everyone SSHing in as root. It's much easier to manage team security when everyone has individual users and sudo rights that can be logged and audited.
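Locking root, per the point above, is one command; it disables root's password while key-based login and sudo keep working:

passwd -l root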