My First 10 Minutes on a Server (codelitt.com)
1282 points by codelitt 316 days ago | 290 comments



This one is pretty decent but if you want the ultimate guide check out this one:

https://www.inversoft.com/guides/2016-guide-to-user-data-sec...

It covers 10x what all the other guides cover in terms of server and application security. It was posted a few weeks ago on HN but didn't make the front-page.


My worry here is that, in posting what seems to be a book, people just won't do it, because most of us don't have time unless this is a primary part of our jobs.

If a 10 minute guide gets users 90% of the way, then they're more likely to do it. And that's good enough to cover a majority of automated attacks.

Update: I take it back – they've provided scripts to run this stuff. I will explore these. Thanks for the link.


Back in the day, The Linux Documentation Project had a trove of hundreds of HOWTOs covering every facet of using Linux. By today's standards they seem like "books", but in reality they were step-by-step instructions for anything you could ever want to do in Linux. No digging through forums, no combing through man pages, no following broken outdated blog posts that didn't explain what you were doing. I find it sad that these are seen as detrimental today.


I feel like the Arch wiki is that thing today. It's a bit disorganised and things get outdated while nobody's looking, but there's almost anything you'd ever want in there.


It seems like there are a bunch of articles there, but they're more like notes on how to use a tool specifically with Arch rather than a guide for everyone.

Compare this page http://www.tldp.org/HOWTO/Quota.html to this page https://wiki.archlinux.org/index.php/Disk_quota . TLDP organized HOWTOs like mini-books: tables of contents, multiple authors, versioned releases, and of course, you could download them all and search through them by category. And they didn't assume things like the distribution setting up a bunch of the tools and the system for you, so you learned how the tools actually worked.


Sure, it's a different format. But you can always dig into man pages or other documentation to learn the details. Arch wiki is great for practical solutions - for example https://wiki.archlinux.org/index.php/HiDPI is the place to go to for hidpi display configuration. TLDP has not been seriously updated since hidpi even came out. Similar to https://wiki.archlinux.org/index.php/Power_management - there's no recent TLDP equivalent.

I appreciate TLDP for what it is and the details they go to. But Arch wiki can easily be read as a FAQ for all modern systems. And it will usually send you to other places for details (sometimes TLDP as well)


So are they outdated now?


A lot of the commands they documented have become obsolete or out of fashion.

E.g. Apache 1, BIND, sendmail, cgi-bin, ifconfig, etc.


Check out the arch linux wiki.


The problem isn't that it's long; the problem is that it's not navigable - there needs to be a table of contents. There's great stuff in here, but it's hard to sort out what I already know how to do from what I can actually use.


HTML version now has a TOC as well!


There is a table of contents in the PDF version.


Yeah, but it's a PDF. Anyway, why can't you generate a TOC into the HTML version from whatever format you're actually authoring in? Good tooling support should make this not hard.

EDIT: And now the HTML version has a TOC, too! Talk about immediate gratification -- kudos to whoever did that!


I wrote a tool the other day to help create a Table of Contents https://github.com/kaihendry/toc


Agreed - it is a bit long. But if you want to set something up from top to bottom it may be worth it. Cool - I saw the link to Github as well, looks like there is code to do much of what the article outlines. Awesome.


The hackthis application that is referenced in the guide is also in Github here:

https://github.com/inversoft/passport-js-example

It uses Ember, Node.js, Express, Sequelize, MySQL and Passport User Database (https://www.inversoft.com/products/user-database-sso).



>2048 bit RSA keys

Github recommends 4096 now, for what it's worth. [1]

>Pushing database backups offsite

This is a really bad idea and a good way to get owned. Database backups must be PULLED from the server, not pushed from it. Separately, you also need to test that you can restore from your backups periodically.
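The periodic restore test can itself be scripted. A minimal sketch using throwaway paths (all file names here are examples, not from the guide):

```shell
#!/bin/sh
# Sketch: prove a backup archive actually restores and matches the source.
set -e
mkdir -p /tmp/demo-src /tmp/demo-restore
echo "important data" > /tmp/demo-src/db.sql

# "Back up" the directory, then restore it somewhere else.
tar -czf /tmp/demo-backup.tar.gz -C /tmp/demo-src .
tar -xzf /tmp/demo-backup.tar.gz -C /tmp/demo-restore

# Fail loudly if the restored copy differs from the original.
diff /tmp/demo-src/db.sql /tmp/demo-restore/db.sql && echo "restore OK"
```

In practice you'd restore the dump into a scratch database and run a sanity query, but the principle — restore, then verify, on a schedule — is the same.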

There were a couple other things I disagree with, but they're in the realm of personal preference. It's also interesting that this guide uses Linode which is a hosting company known to have had quite egregious security issues in the past. [2]

[1] https://help.github.com/articles/generating-a-new-ssh-key-an...

[2] https://www.google.com/#q=linode+hack


I agree that pulling does appear to be more secure, but if someone breaches any of your servers undetected, you're pretty much hosed regardless.

Plus, the Inversoft guide specifically states that backups must be encrypted. I could put my backup ZIPs on a public Github repository and no one would be able to access the user data stored inside them. Therefore, it really doesn't matter if they are pushed or pulled.


If a script or cronjob or whatever on the server has write access to the repo, so does the hypothetical attacker.

I remember seeing a "hacking" website get wiped along with all of its backups about 15 years ago and it left a very strong impression on me regarding this issue.


If it is write access but not delete access (nor overwrite access) then there is no issue.

It is sometimes more secure to push because it requires no inbound connections or authorizations to the live machine.


That may be true, but with a pulled db backup and a good recovery strategy you'll be back up relatively quickly; without one, you'll be DOA.


>This is a really bad idea and a good way to get owned. Database backups must be PULLED

Just make sure your database server doesn't have permission to delete backups (e.g. have it POST a backup via HTTPS). There is nothing wrong with db-server-initiated backups.
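A sketch of that pattern: encrypt the dump locally, then upload it over HTTPS to an endpoint that only accepts writes. The host name, recipient key, and database name below are all hypothetical placeholders:

```shell
# Encrypt the dump before it leaves the machine, then push it to a
# write-only endpoint. The server never holds delete credentials.
pg_dump mydb \
  | gpg --encrypt --recipient backups@example.com \
  > "/tmp/mydb-$(date +%F).sql.gpg"

curl --fail -T "/tmp/mydb-$(date +%F).sql.gpg" \
  https://backup.example.com/upload/
```

The receiving side enforces the append-only property; a compromised server can upload garbage, but can't destroy history.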


This is still open to abuse in the form of your legitimate backup script being disabled and replaced by one uploading false backups. You wouldn't know anything has happened until you go to restore from one.

Of course, this is an edge case, but I believe setting up a pull-based backup system is still going to be less work than a write-only push system.


A hacked server can have the backup server pull false backups as well.

Of course, your servers shouldn't be SSHing to your backup servers, but that goes both ways.


If the backup machine is compromised, the attacker can corrupt your backups. Either way, if a machine is hacked you risk loss.


That's ridiculous. The same tricks can be played on a pull based system once you have compromised the DB server.


Why must they be PULLED?


If a server can push backups, it must necessarily have credentials for and write access to that backup repository. If the server is compromised, that write access could be used to attack the backup repository as well. Pulling backups from a server removes the need for credentials or access to the backup repository.
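A minimal sketch of the pull setup (host names and paths are illustrative): the DB server exposes a single read-only command through a forced-command SSH key, and the backup host pulls on a schedule.

```shell
# On the DB server, in ~backup/.ssh/authorized_keys: the backup host's key
# may only run one command, with no PTY or forwarding.
#
#   command="pg_dump mydb",no-pty,no-port-forwarding,no-agent-forwarding ssh-ed25519 AAAA... backup@backuphost

# On the backup host, a cron entry pulls the dump nightly:
#
#   0 3 * * * ssh backup@dbserver > /backups/mydb-$(date +\%F).sql
```

The DB server never learns any credential for the backup host, which is the point of the pull design.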


That is one valid way of solving this problem. Another option is to have a permission system that allows you to specify that the server can only append new data, but not delete or overwrite existing files. This is useful if you don't want the added complexity of having a separate "backup management" service, which could easily become a bottleneck if you are backing up a lot of data from multiple servers.

One way to do this with S3, for example, is to use an IAM role with only the "PutObject" permission, and enable object versioning for the bucket to prevent a compromised server from being able to delete data by overwriting existing files.
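Sketched with the AWS CLI (the bucket name and file are placeholders): the role's policy grants only s3:PutObject, and versioning keeps overwritten objects recoverable.

```shell
# Enable versioning so an overwrite creates a new version instead of
# destroying the old object:
aws s3api put-bucket-versioning \
  --bucket my-backup-bucket \
  --versioning-configuration Status=Enabled

# The instance role's IAM policy grants append-only access:
#   {
#     "Version": "2012-10-17",
#     "Statement": [{
#       "Effect": "Allow",
#       "Action": "s3:PutObject",
#       "Resource": "arn:aws:s3:::my-backup-bucket/*"
#     }]
#   }

# The server can now upload, but not list, read, or delete:
aws s3 cp backup.sql.gpg s3://my-backup-bucket/
```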


It's also easy to do this with Tarsnap. You can split out your host keys with tarsnap-keymgmt into separate read, write, and delete keys. Just take your delete key off of the host and store it somewhere secure.


Or better yet, create the key somewhere else and only upload the keys you want to have on the server. Otherwise there's a chance that the original tarsnap key is left in an unallocated block.
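Roughly (key file names are arbitrary): generate the master key on a trusted machine and hand the server only a write-only derivative.

```shell
# On a trusted machine, never on the server itself:
tarsnap-keygen --keyfile master.key \
  --user you@example.com --machine dbserver

# Derive a key that can only write archives (-w); omit -r (read)
# and -d (delete):
tarsnap-keymgmt --outkeyfile write-only.key -w master.key

# Copy only write-only.key to the server; keep master.key offline.
```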


Not an expert here, but this seems to me to be stripping away one layer of security. Kind of like saying that you can leave a key to your house lying around as long as the pantry door is locked - if your database server gets owned, they now have some kind of login to your backup server as opposed to no kind of login.

Maybe the extra layer doesn't add much security, but if it's a simple config change and it does add something, wouldn't it be worth doing?


Not a bad point - it's a trade-off.

You could make a similar point about the centralized backup management server - it needs to have "some kind of login" to all your production database servers, so if that host is compromised (even though it might only store encrypted copies of your backups), so are all your database servers if those privileges can be escalated. You could argue that one backup host is easier to secure than a complex system such as AWS, but then I would argue that it would probably be hard to beat the track record of S3/IAM. ;-)

Both approaches have their place. If you're dealing with a large number of hosts and a lot of data, the backup host will quickly become a bottleneck and you're probably better off with the append-only approach. If that's not a concern for you (i.e. you're not running into any bandwidth limits on the backup hosts) and you're willing to operate yet another service to avoid the risk of e.g. IAM privilege escalation, the other solution might be a better fit.


Doesn't entirely fix the problem, but using public/private key auth with a key created just for the backup cron job, the server can push a backup via scp/ssh to a dedicated user account which, on the destination end, is chrooted into a particular backup directory.
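On the destination, that can look something like this in /etc/ssh/sshd_config (the user name and path are examples). One caveat: plain scp needs binaries inside the chroot, which is why ForceCommand internal-sftp is the usual choice for chrooted transfer accounts; note also that the ChrootDirectory itself must be root-owned.

```
Match User dbbackup
    ChrootDirectory /srv/backups/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```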


Plus, if you're super paranoid, pulling doesn't require leaving any ports open and software listening on the repository.


But it requires leaving a port open on the DB server where an attacker can get a whole copy of your database, or destroy it.


Many database servers are not serving databases solely to themselves and have that port open as part of the normal course of operation.


Could you recommend another hosting service with better security practices than Linode?


I don't currently use them, but I've had good luck with Vultr and they haven't had any incidents like Linode AFAIK. There's also AWS, Azure, whatever Google's thing is called, and all the other big ones. I have no opinion on DigitalOcean even though I currently have a VPS with them since I'm not running anything remotely mission critical there.


I don't know enough about server security to critique most of that guide, but the Password Security section is clearly behind the times:

> Passwords should always be hashed using a strong, one-way hash algorithm. [...] hashed with an algorithm like SHA-256 7 times.

If they had simply written "just use bcrypt" they'd be much closer to 2016-era security practices.


BCrypt works fine, but I wouldn't say it is "2016-era security practices". It was written in 1999 and hasn't had as much scrutiny as SHA or Blowfish (although it is based on Blowfish).

Regardless, using a salted, multi-pass algorithm will keep everything nicely secured using nearly any hashing algorithm.

Remember the goal is not to crack one user's password using a brute force lookup table, it is to crack everyone's password.


BCrypt is apparently just fine in 2016: https://news.ycombinator.com/item?id=11120528.


It is in fact a 2016-era good security practice.

And it is designed to prevent what your last sentence says.


We are preparing one focussing on application development:

https://github.com/FallibleInc/security-guide-for-developers... (work in progress)


Great checklist.


Yes. We originally just put this together just as a primer for younger engineers to understand the basic steps. The guide you posted looks really good though. It looks like it covers some of the application side of things as well.


So many people get some of the most basic security wrong. It is great to see people trying to educate instead of just pointing out issues. Thanks for this work.


I love the fail2ban part of your guide! I totally need to try that.


That is great as well. Quite extensive.


Great guide!


> We don't even have a password for our root user. We'll want to select something random and complex.

So you're taking something secure by default -- no password means no login allowed, and making it less secure. And if you have hundreds of these servers, you'll need to rotate them whenever someone on the team leaves. This is painful.

Simple solution: leave root password blank, don't forget your sudo password. If you can't get in, use grub or a liveCD. Or tie auth to ldap or kerberos so you _can't_ forget. This is one area where Windows has a distinct advantage: AD more or less requires admins to think at the level of network of servers, and provides a baseline set of services always present.


It depends on your VPS, but many give a root password by default.

I do make sure later in the article that `/etc/ssh/sshd_config` does not allow root login:

    PermitRootLogin no
But you make a good point that a simple solution is just having no root password at all. If your VPS does have a root login by default, then I believe you can get rid of it with:

    sudo usermod -p '!' root
The best part about sharing things like this is getting all sorts of great info and input on things.


  sudo passwd -l


So it looks like from the `passwd` man page that `-l` locks it, but doesn't actually remove it completely? I wonder if there are any disadvantages to this.


Locking means it changes the password to a value that nothing could ever hash to (by putting a ! in front of it, for example). Unlocking just removes the !.

The security disadvantage is that if someone gets your /etc/shadow file they can still see the password hash, in case they wanted to try to crack it, but honestly at that point your server is likely compromised anyway.


Great! Thanks for the answer.


Not on (e.g.) BSD. I don't know if this is Linux-only, or more pervasive, but it's certainly not entirely standard. On NetBSD, FreeBSD, DragonFly, and OpenBSD, "-l" means "update only the local database", versus Kerberos.


Well, sure, but the usermod method I was remarking on isn't standard either, and it established a Linux context for this thread. I'm not sure there's a truly portable way to lock a password across SysV, Linux, and BSD systems.


The _effect_ of what you do (make an "impossible hash") is the way, as far as I know -- whether its vipw, passwd -l, whatever...

Regardless -- I'm not trying to diminish your solution, nor Linux; I glossed over the usermod example in the grandparent (didn't recognize it at all). Interestingly, as I look on my NetBSD system, there is a usermod(8) with a -p for already-hashed passwords. As I test it, though, it rejects '!' as an argument:

  # usermod -p ! jnk
  usermod: Can't modify user `jnk': invalid password: `!'


The last company I was at was more like this and had admins who were really anti LDAP...because...I have no clue.

You should have centralized auth whenever possible: LDAP servers with fallback, Shibboleth for SSO on web apps (or to implement ADFS integration or SSO with Slack or whatever).

I'd also recommend these steps being combined with Vagrant + your configuration management tools (Ansible, Chef, Puppet).

There are Vagrant + Linode, Vagrant + KVM/libvirt, Vagrant + Digital Ocean, etc. If you combine the two, you can get pretty close between local VirtualBox instances and production (you'll run into some issues with each provider's base box being a little different, but you can usually accommodate both your own box and your provider's).

This makes it a little easier to move your architecture from one provider to another.


This part struck me as odd, too, especially when you consider the justification: That if you lose access to your sudo account/password, you must have some other way to get into the system.

Backups should include everything required to rebuild every server in the company. No server should be 'too critical' to wipe and start again if required. So maybe I'm being too much of an armchair warrior here, but the reason I don't like this one is less about the actual security implications, and more because of what it says about the fragility of the overall setup.


>Backups should include everything required to rebuild every server in the company. No server should be 'too critical' to wipe and start again if required.

True. But compare the effort required for, say, fixing an accidentally misconfigured /etc/sudoers on a database server. You could argue that would never happen with proper testing, but shit happens, and having a particularly long root password in KeePass is a small price to pay to save that sort of headache.


You still can't login to the system. Root login is disabled as well as password authentication. The root password is just in case you lose your sudo password but it doesn't provide you a way to login if you don't have your key, passphrase to your key, access to the static IP/VPN and device with 2FA.


What if it is a VPS?


Boot from a rescue volume, or use the VPS provider's root password reset functionality.


Terminate and launch a new one? You have automation to provision your servers, right?


What about protecting against the physical host of the machine being compromised?


Agreed. Stopped reading when I got to that.


This is good advice on achieving the most secure SSH configuration: https://stribika.github.io/2015/01/04/secure-secure-shell.ht...

"My goal with this post here is to make NSA analysts sad."


Actually, I've improved on this somewhat by splitting configs to 6.5+ vs. older. Corrections welcome. At some point, I will get around to publishing it.

Configs

OpenSSH 6.5+ Server

  UsePrivilegeSeparation sandbox
  KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group14-sha1
  Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes128-ctr
  MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512,hmac-sha2-256

OpenSSH Server Legacy

  #UsePrivilegeSeparation yes
  KexAlgorithms diffie-hellman-group14-sha1
  Ciphers aes256-ctr,aes128-ctr
  MACs hmac-sha2-512,hmac-sha2-256

OpenSSH 6.5+ Client

  UseRoaming no
  IdentitiesOnly yes
  KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1
  HostKeyAlgorithms ssh-ed25519-cert-v01@openssh.com,ssh-ed25519,ssh-rsa-cert-v01@openssh.com,ssh-rsa
  Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes128-ctr
  MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha1-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-sha1

  Host *
      IdentityFile ~/.ssh/id_ed25519
      IdentityFile ~/.ssh/id_rsa
      HashKnownHosts yes
      VisualHostKey yes
      VerifyHostKeyDNS ask
      AddressFamily inet
      ForwardX11 no
      ForwardX11Trusted no

OpenSSH Client Legacy

  UseRoaming no
  IdentitiesOnly yes
  KexAlgorithms diffie-hellman-group14-sha1
  HostKeyAlgorithms ssh-rsa-cert-v01@openssh.com,ssh-rsa
  Ciphers aes256-ctr,aes128-ctr
  MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha1-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-sha1

  Host *
      IdentityFile ~/.ssh/id_rsa
      HashKnownHosts yes
      VisualHostKey yes
      VerifyHostKeyDNS ask
      AddressFamily inet
      ForwardX11 no
      ForwardX11Trusted no


Hahaha great quote.

I haven't seen this article before. It looks quite good. Thanks for posting it.


> I check our logwatch email every morning and thoroughly enjoy watching several hundreds (sometimes 1000s) of attempts at gaining access with little prevail.

This is something that actually bugs me a bit. These attacks are so common, getting emails like this every day contributes to alarm fatigue. (https://en.wikipedia.org/wiki/Alarm_fatigue)

I'd love to see the Linux nightly security scripts replaced with something that only sends out emails when there's a specific actionable event I need to pay attention to. Ideally in a way that can easily be aggregated over all the machines I manage.


Yep and this doesn't demonstrate anything about security. Showing brute force scanners trying out "root/letmein123" doesn't teach anyone the importance of good security, just the importance of not using super-common user/passes.

I cannot figure out why anyone would care or find anything useful in these logs. Change the port, call it a day. Getting worked up about random SSH attempts (or random HTTP "exploit" attempts) seems to be for admins with too much free time.


I used to read logwatch daily when I was at a small shop and only had two servers. It was really interesting to see the attack trends and IP blocks they came from.

It never gave me alarm fatigue because it's not an alarm. It's a log of something that has already passed. Most script kiddies are automatically banned by tools like fail2ban anyway. Seeing the data is pretty interesting.


Among the things I'll do early in system configuration is to reduce such notifications.

Things which should simply be activity logs are moved there. Conditions triggering notifications are tuned so they don't fire (fail2ban, rate limiting, firewall rules, ...). Makes life much more tractable.


> sudo ufw allow from {your-ip} to any port 22

I'm surprised nobody mentioned this is a great way to shoot yourself in the foot if you don't have a static IP.
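If you don't have a static IP, a couple of less brittle options (the CIDR below is just an example):

```shell
# Rate-limit instead of whitelisting: ufw's "limit" denies an IP that
# opens 6 or more connections within 30 seconds.
sudo ufw limit 22/tcp

# Or allow your ISP's whole block rather than a single address:
sudo ufw allow from 203.0.113.0/24 to any port 22
```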


Ooo. Fair point. I'll add that now and link to your comment.


Great! Thanks for the article, it makes a good reference when setting up VPS's.


I don't mean to sound flippant but why can't these "lock down your new box" tutorials just be a bash script? Shouldn't they be?


Hardening (along with any other server setup/configuration) should be implemented using some idempotent configuration management software (Chef, Puppet, Ansible, etc). It's 2016, there's no need to configure servers manually.


It depends.

I'm a security guy, so it would be embarrassing and possibly bad for my career if any of my servers got hacked, but I actually don't set up servers that often--it's not part of my job. These Web 2.0 configuration management solutions change pretty fast and don't care about backwards compatibility. So between my infrequent setups, my configuration scripts pretty much always break.

Contrast this with bash, which cares a whole lot about backwards compatibility, and it's a no brainer. I've got scripts where the only modifications I've made since 2005 were to add functionality or increase security, never to fix existing functionality that was broken by a change to the system. I'll take that over Chef or Ansible (both tools I've used) any day.


Which is why your hardening code (even if it's bash scripts) should have tests, using something like Test Kitchen. It will provision a VM (or cloud server, or docker container), execute your hardening scripts against them, and then run your integration tests to verify that the machine is hardened to your specifications.


Would be very interested to review server hardening Bash scripts. Have you published anything on Github or elsewhere?


I have, but I don't want to associate this HN account to my GitHub account, sorry. :/


throwaway github account then? :)


> there's no need to configure servers manually.

Other than to learn how, which is also why I'd provide text on how and not a bash script to just do it. Not everyone's needs are the exact same, and it's better to learn how to fish than be handed a line with one hooked. Maybe I have a strong reason not to go SSH keys only? I should at least understand how they work and why they're important so I can make the decision, wrong as many people think it may be.


Right, I was thinking myself... why not have a chef recipe do this, and walk through the systems as you do that.

Ideally the server comes up and applies chef, doing the needful to secure it. Hand-cooking a server is extremely painful.


I mention Ansible and other tools in the article. The idea, as others have mentioned, is to teach a man to fish. You're much better off with having an automated way to perform this, but you shouldn't ever run anything automated that you don't understand.

We'll release an Ansible Playbook over the next week or so that follows these steps.


See related (as inspiration, or an upstream):

  - https://github.com/openstack/openstack-ansible-security
  - https://github.com/geerlingguy/ansible-role-security


Thanks man! I'll take a look.


You are right about automating.

Here is an Ansible playbook I recently created: https://github.com/chhantyal/5minutes


"there's no need to configure servers manually" assuming you have a configuration you can trust. Where are these published? And do you trust them?


There's the argument that "just download and run this shell script as root" and "lock down your box" are fundamentally incompatible steps.

It's one of those cases where a tutorial explaining every change you're making and why you're making it really is pretty important.


Because Linux systems can vary widely. Even the same distro can have subtle - or even significant - differences from one hosting provider to another. So writing a shell script that was robust enough to "just work" would not only be complex, it would also likely suffer from readability issues (as so many complex shell scripts do), and thus many sysadmins might not want to run something they can't audit - which is quite sensible really. In fact you cannot even guarantee that Bash will be installed; that's just how annoyingly complicated this job can be.

As a side note, it's also sometimes better having a tutorial full of manual steps as that helps educate the "sysadmin" regarding best practices and some of the basics of Linux administration (if they weren't already familiar). That experience can be just as valuable as hardening the server itself.


I created an Ansible role to do the majority of the tweaks immediately: https://galaxy.ansible.com/geerlingguy/security/ (works on RHEL/CentOS/Fedora/Ubuntu/Debian/Arch).

You should really automate this process if you create more than one server, ever. It's fairly easy to get the basics, and if you don't automate it, I guarantee you'll miss one or more steps in the setup.


Because newcomers would then copy-paste the script without really understanding what's going on, and that lack of understanding will come back to bite when something breaks. Teach a man how to fish rather than giving him the fish and all that. Writing a bash script as a next step based on OP's article would be easy once you know what you are doing.


There probably are such scripts, but the educational value is bigger in the post format.

That said, if you do such things at scale, you'll likely have automated provisioning and configuration management systems in place anyway.

For example you could have images that have the static parts already pre-configured, and something like cloudinit for the ssh keys and/or passwords.

Or you provision the systems with foreman, and then use puppet for configuration management.


The Inversoft guide posted does have a corresponding GitHub repo. https://github.com/inversoft/2016-security-scripts

I think scripting all of this is great - but also agree that it is important to understand what is being performed.


You are supposed to understand the implications of all of these steps. If by accident you ever lock yourself out, you will need to know how you set it up. If you just recommend running a script, most people will not learn what they are doing.


Yes. Doing this 'manually' step by step lets you choose which things are really necessary, your configuration might be different, and forces you to learn WHAT it's doing rather than just running a script. I am not a huge fan of Arch Linux (prefer debian unstable), but this is the same principle as the Arch Linux installation process which doesn't have a monolithic installer but forces you to do the process manually step-by-step.


There was a project called Bastille which basically did exactly that: http://bastille-linux.sourceforge.net/

It would run through a series of questions about your use case to build a security policy, and then edit config files for you.

The problem is that it needs to be aware of all the different flavors of Linux it might be running on, so it's naturally fragile and requires lots of maintenance. Sadly, it hasn't been updated in a few years.


I would say the same reason we teach web developers what a linked list is. They might never use it (watch me eat my own boot here) but it's a fundamental principle upon which a lot of the tools they use are built.

It's good to know what's going on under the hood.


They could be, or you could take the route you won't regret later and use a configuration management system. Deutsche Telekom has open-sourced system-hardening cookbooks for Chef, Puppet, and Ansible.


Good to know. I did some searching and found them here: http://dev-sec.io/


Thanks for linking it - was on my phone.


My dotfiles repo includes an "initialize_debian_server.sh" that basically does all this. It's definitely doable.


just cut and paste all of the commands into a single script if you want one?


Sure! That way it's just a matter of doing

    su -
    curl -sS https://some.random.host/trust-me.sh | bash -
What could go wrong?


Not really. It's more like `./provision.sh your-new-host.example.org`

(Because why in the world would anyone want to type in things by hand? Laziness about any repetitive manual labor is the greatest virtue of any good sysadmin.)


Why do people install fail2ban then disable password based authentication entirely? I legitimately don't understand the purpose.

Also, they complain about log spam (from failed SSH attempts) this is one reason to move SSH to a different port. It does NOT increase security, but it DOES reduce log spam from bots trying for easy targets.


Fail2ban can be used to block IPs based on any log file, not just SSH - I believe the author uses an Apache log in the example.
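For example, a jail.local fragment watching an Apache log might look like this (paths and thresholds are illustrative; apache-auth is a filter that ships with fail2ban):

```ini
# /etc/fail2ban/jail.local (sketch)
[apache-auth]
enabled  = true
port     = http,https
filter   = apache-auth
logpath  = /var/log/apache2/error.log
maxretry = 5
bantime  = 3600
```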


It also has rules for mailers, and other utilities, which I often use just because they're nice and simple to work with.


I always change the ssh port to something other than 22. It has always seemed to work well for me for most automated attacks. Perhaps this is not advisable for some reason?


There is a slight security concern to binding it to a port above 1024 in that a non-privileged user can bind to that port and MITM.

Here is a good summary of the different options you have with ssh and choosing a port:

http://serverfault.com/questions/619898/should-i-change-the-...


From experience, running ssh on a non-default port has mostly reduced automated login attempts to 0, year after year.


fail2ban installed = less logspam, and never having to adjust default port with whatever tool is ssh'ing

moving ssh port = less logspam, but now you have to specify your ssh port in every tool that uses it, and it's one more thing that other staff have to remember


Have you tried it both ways? Some scripted attacks will try 22, find that it is open, then spam it with attacks. If it's not open, they will have to scan your ports before they can spam you with attacks.


Fail2ban? sshguard? unnecessary. Just disable ssh passwd auth (which generally is a good idea) -> done/done

If you don't like lognoise from ssh scanners (even if you disable passwd auth), move your sshd port to some random high port and make note of it in your ~/.ssh/config
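The ~/.ssh/config entry makes the non-standard port invisible day to day (host name, port, and user are examples):

```
Host myserver
    HostName myserver.example.com
    Port 22222
    User deploy
    IdentityFile ~/.ssh/id_ed25519
```

After that, `ssh myserver` (and scp, rsync, git over ssh) picks up the port automatically.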

Generally: if in doubt, take the more simple and elegant solution to a problem.
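As a concrete sketch of both suggestions (port 2222 and the host names are arbitrary examples), the server and client sides are each a few lines:

```
# /etc/ssh/sshd_config on the server (restart sshd afterwards)
Port 2222
PasswordAuthentication no
ChallengeResponseAuthentication no

# ~/.ssh/config on your workstation, so `ssh myserver` still just works
Host myserver
    HostName server.example.com
    Port 2222
```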


I agree that fail2ban for ssh seems unnecessary. But, it also provides monitoring for other services like http and common exploits.

I'd be interested in learning from the community if fail2ban adds much value. As I've looked into the service, it seems like simply running the latest security patches obviates the need for fail2ban.


I run a web server with some 50+ Wordpress installs on it. You better believe Fail2Ban is necessary. Without it all resources would be consumed by brute force attacks. If someone knows of a better way I would like to hear about it.


We use the WordFence module to block brute force attacks, seems to work fine.

I cannot believe that Wordpress still ships without basic rate limiting on its login form.


In my experience, better to let a webserver handle stopping bad traffic by whitelisting WordPress endpoints.

Parsing a url and returning 404 from nginx is cheap and scalable, and allows through legitimate traffic that may be sharing an IP (such as TOR).


I too am curious to know what the consensus is. People seem to fall into one of two camps: 1. it's unnecessary if you disable password login or 2. it's an easy thing to add as yet another safety net on top of sane service configuration and firewall rules.

I've taken some flak here for being in the #2 camp so I'm loathe to expose myself once again, but my thinking is that even with password authentication disabled, sshd is still vulnerable to DoS attacks. From the public internet or from other machines in my VPS provider's datacenter. The ssh and recidive fail2ban filters add some minimal defense against that.


If all you're worried about is keeping the low-level noise out of your logs and discouraging waste of resources on brute-force attacks (which won't work because of course password auth is disabled), I think it's sufficient to use the built-in iptables rate-limit feature on SYN packets.

UFW has a feature that will implement this with a limit of 6 SYN packets in 30 seconds from individual IPs.

    ufw limit ssh/tcp
This may not be ideal for some services like http without persistent connections, and if fail2ban provides extra exploit protection it could be worthwhile.
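For reference, the raw iptables version of that rate limit uses the "recent" match; roughly (a sketch, not battle-tested):

```shell
# Track new SSH connections per source IP, and drop a source that has
# opened more than 6 in the last 30 seconds (what `ufw limit ssh/tcp` does).
iptables -A INPUT -p tcp --dport 22 --syn -m recent --name SSH --set
iptables -A INPUT -p tcp --dport 22 --syn -m recent --name SSH \
    --update --seconds 30 --hitcount 6 -j DROP
iptables -A INPUT -p tcp --dport 22 --syn -j ACCEPT
```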


This is a much better answer than, "If you're using fail2ban for that reason, you don't know what you're doing," as someone said to me recently, so thank you. :-) I'll explore replacing fail2ban with this mechanism.


I don't think he means replacing fail2ban, but augmenting it with this.


Useful or not, it opens an extra attack vector on the server. Parsing logs is always a dangerous area from security perspective.


> move your sshd port

Sick of seeing this idiotic advice. Standard ports are standard for a reason. Perhaps one should also change their web and mail ports for "security".


But those ports are more or less supposed to be available to the general public. You keep those where they're expected to be. Any other port that "does not concern you, move along" is fair game IMHO.

Standards exist so you know where to find the things you're supposed to find, and how to talk to them.


He said:

> If you don't like lognoise from ssh scanners [...], move your sshd port [...]

It’s not about security as you claim.


I prefer the “Securing Debian Manual” – it’s an official manual from the Debian project.

https://www.debian.org/doc/manuals/securing-debian-howto/


Be aware fail2ban does not handle IPv6 at all with its default configuration on Debian/Ubuntu.

https://github.com/fail2ban/fail2ban/issues/1123


You're right. Took mwpmaybe's suggestion for the time being and will only accept IPv4 for ssh until that's fixed.


Indeed. I add this to sshd_config (with a note to remove it when this fail2ban issue is resolved):

  AddressFamily inet


Great tactical advice, but what a sad situation to be in. "Run this command, then run this command, then run this command ..."

There should be a single configuration file (or set of files) that declaratively describes the whole state of the machine. That way the exact situation of the server can be reviewed by just looking at files, instead of trying to poke and prod at the machine to see what commands have been run over the last X weeks.


There is a set of files that describes the state of the machine, it's called the filesystem. Anything less doesn't describe the whole machine. The 'poking and prodding' is just a convenient way of querying the very small parts of the filesystem that are relevant to that query.

That said, a script that pokes and prods the right places and reports a machine's 'security factor' and prompts improvements would be cool (and probably already exists).


> That said, a script that pokes and prods the right places and reports a machine's 'security factor' and prompts improvements would be cool (and probably already exists).

https://cisofy.com/lynis/


Since I changed the default SSH port of 22 to something else (like 4422), I no longer get any of these drive-by attacks and don't need fail2ban anymore.

I also like to set up a simple Monit configuration to alert me about high cpu usage or when the disk space is about to run out. Instead of emailing me these alerts (and also weekly reports) I've configured Monit to post them to my Slack team of 1.

https://peteris.rocks/blog/monit-configuration-with-slack/


You should still use fail2ban.

https://news.ycombinator.com/item?id=11854576


What's the reason for using a firewall?

Assuming that services which shouldn't be accessible to the outside only listen to localhost not the network (e.g. MySQL on a LAMP stack), isn't that sufficient?

(Honest question, I don't have much experience with sysadmin work.)


Ideally, you do both: bind your services to the correct interfaces and ports, and set firewall rules as a safety net. This prevents users (or exploits yielding forked processes) from listening on other ports (well, they can listen, but it will be pointless), and if a package update comes along that unexpectedly changes a service's listener configuration, you'll be protected. It also protects you from buggy or broken services that may provide bind/port options in their configuration but end up listening on all interfaces or random ports anyway.
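As a sketch of that safety net, a minimal default-deny baseline with ufw looks like this (the allowed ports are examples):

```shell
sudo ufw default deny incoming    # drop anything not explicitly allowed
sudo ufw default allow outgoing
sudo ufw allow 22/tcp             # ssh (adjust if you moved the port)
sudo ufw allow 443/tcp            # https
sudo ufw enable
```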


So there's a couple of reasons to add a firewall.

1. If an attacker gets unprivileged access it can slow them down (if properly configured) in getting new tools onto the system or adding a shell.

2. If a configuration error results in a service being started on a network accessible interface by accident the firewall gives you a bit of defence in depth protection against unauthorised connections to that server.

3. you can also use it for logging activity to feed into other systems.


>1. If an attacker gets unprivileged access it can slow them down (if properly configured) in getting new tools onto the system or adding a shell.

That only works if you have an outbound firewall. Which is very onerous - you'd either have to whitelist destinations (package repos, but what if you want to validate arbitrary certificate's CRLs?) or whitelist applications (but not wget etc.)


1) An attacker getting access will only be slowed down/detected if your firewall filters outgoing traffic, which practically no one does because of the inconvenience and maintenance costs. You also need to lock down outgoing traffic on ports 80/443, which is how many intrusions download their payloads and call home for instructions. If you do accept the cost and use an outgoing filter, it's quite effective at detecting and stopping attacks, and it is something I recommend for defending assets with high security demands or high risk.

2) As for configuration errors, it depends on what kind of practices you use as a sysadmin. Do you download and run random scripts found on blogs, use experimental versions, and not spend time reading manuals? Or are you someone who will only run Debian stable, has verbose settings in aptitude, and reads patch notes? It's been a long time (i.e., almost 20 years) since I last saw a program that allowed vulnerable interfaces to be accessible on the network without significant warnings in the manual, comments in the config file, and the readme. Projects and package maintainers have significantly stepped up their security practices, so by the time something reaches stable it should be mature enough that shooting yourself by accident is difficult.


Many services listen to all available interfaces in their default configuration. Many also auto-start right after installation. So additional layer of protection won't hurt.


Firewalls are required for compliance in IT in regulated environments that are common in the Fortune 500. Sometimes an IDS is necessary to supplement them.


Suppose someone gets access to the box. They shouldn't be able to curlbash http://evilscript.sh into the system.

So you really want to lock all outgoing and all incoming except for very specific channels and protocols to controlled endpoints.


Layered security. You always add redundant security so in case another layer fails you have a fallback. It's the better be safe than sorry version of infosec.


Honest question, why does nftables get so little love vs iptables?


Here's an Ansible role (I made it) that automates the steps described in the article: https://github.com/LucianU/ansible-secure.



I haven't checked out your repo, but I found myself chanting "Ansible FFS" while reading this post :) Well done.


Can somebody help me out with this question: the default config for unattended-upgrades seems not to enable reboots even if a reboot is required to activate the upgrades. Wouldn't that have made quite a few important upgrades in the last few years ineffective if the server never got rebooted?


Not sure if this is still true, but I've also seen cases where Ubuntu will happily continue to install kernel updates as they come down the pipe, right up until /boot is full of old kernels and ramdisks.


Yes. You should still keep your eye out on patches. If a big vulnerability gets patched requiring a reboot, you'll want to attend to the upgrades yourself.
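For those who do want automatic reboots, unattended-upgrades has an opt-in knob in its apt config on Debian/Ubuntu (the time value is an example):

```
// /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "02:00";
```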


Exactly. I run this from time to time:

   ansible -i ./inventory/whateves all -K -u deploy_dude -s -m shell -a 'stat /var/run/reboot-required'
At least on ubuntu, if that file is there, you need to reboot.


If you turn on email reporting you get an email tagged with [reboot required].


For those saying "why fail2ban?", fail2ban can be used for a great deal more than just watching the sshd log. You can activate fail2ban rules for apache and nginx which help significantly with small DDoS, turning spurious traffic/login attempts into iptables DROP rules. And a lot of other daemons.
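For illustration, enabling a non-ssh jail is just a stanza in /etc/fail2ban/jail.local; the nginx-http-auth filter ships with recent fail2ban versions, though filter names and log paths vary by distro:

```
[nginx-http-auth]
enabled  = true
port     = http,https
logpath  = /var/log/nginx/error.log
maxretry = 5
bantime  = 3600
```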


At least one log parsing tool I've seen in years past was vulnerable to log injection attacks. Hilarious proof of concept to own a box by way of PTR record.

I haven't checked to see whether fail2ban suffers from this model or not.


I'd be more curious to see a "My first 10 minutes on an Ubuntu desktop" version of the article.


Or alternatively: "My first 10 minutes on an Ubuntu desktop (for users who don't hate Unity, which includes myself)".

In my case: Change the Switch workspace keys from Ctrl+Alt+Arrow keys to Super+Arrow keys. Remove LibreOffice, install WPS Office. Remove Transmission, install Deluge. Install indicator-multiload and indicator-sound-switcher. Install Kodi. Install Steam.

A lot more stuff, but I have not written it down. =)


Yes, I see a lot of reaction around Unity. To be honest, I had to google it to find out what it was. I have always used Ubuntu Gnome, so I had no clue about Unity nor its issues.

Back to the topic here, I thought someone could outline a security checklist for an Ubuntu desktop to assess how secure the system is -- or how to make it secure from a clean install.

Edit: grammar


First time I've heard of WPS Office. Why do you prefer it over LibreOffice? Mobile support?


It's faster (C++ vs Java) and more compatible with the documents I have to open.

It also pleasantly surprised me once:

I received a Powerpoint presentation, and went to a customer meeting, where said presentation had to be presented.

I plugged the HDMI to VGA adapter for the VideoBeam to the laptop, started the presentation, and the presentation was running in the external display, while the laptop display was still showing the normal 'Powerpoint' view. I could load the web browser in the laptop display to check some things while the presentation was still running undisturbed and the speaker and the audience was happily unaware of it.

That level of professional use in software was something I did not come to expect in Ubuntu for any third party software. In fact I don't know if the other Office suites (including MS) have that feature. I guess they do, but still.

It is wonderful when everything just works as intended.



apt-get remove -y unity && apt-get install wmaker wmaker-data pcmanfm lxterminal

Should take less than 10 minutes and be way faster to use :)

(I actually use openbox and fbpanel, but WindowMaker is just too great to forget about)


> First we'll want to make sure that we are supporting IPv6

How does that help security?


That comment was in regards to ufw as it doesn't support adding IPv6 rules by default in commands in older Ubuntu.

If you don't enable IPv6 and the server supports it (odds are it does) all of the benefits of using ufw at all are totally ignored on IPv6.
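Concretely, on those older Ubuntu versions that means flipping one line in /etc/default/ufw and reloading:

```shell
sudo sed -i 's/^IPV6=no/IPV6=yes/' /etc/default/ufw   # enable IPv6 rule generation
sudo ufw disable && sudo ufw enable                   # reload so rules cover both families
```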


I'm finding that another important step is this one:

apt-get install etckeeper && cd /etc && etckeeper init

Keeps your /etc under version control so you know what kinds of configuration changes you've perpetrated.


I used to use this, but I found that it's significantly less useful than a git repo with my server-specific Chef cookbook in it.

Manually modifying servers was never a good idea; it's worse now, even with tools like this.


I suppose that's true, but in the field where I work (social science), servers are mostly spun up to scratch an immediate and idiosyncratic itch, so configuration tends to happen organically.

I agree that's probably not a good idea, and learning Pupchefsible is well worth the effort. In the meantime, though, there's at least some degree of reproducibility with etckeeper.


I do this stuff professionally, and I've learned the hard way that you either have a reproducible environment or you don't. etckeeper isn't reproducible. Actually rolling back with something like etckeeper is much, much more likely to break something (by deleting a config file used by a newer service, say) than to save you. If it did something like separate branches for each service or component within /etc I might be more sympathetic...but at that point you have half of a CM system already and might as well just go the rest of the way.

If you need reproducibility without a CM framework, keep backups of your machines.


Ha yeah. I wonder why it's not installed by default, it's such bliss (until NixOS becomes the de facto standard).


I have been meaning to write up a similar guide.

I would like to recommend using just iptables instead of ufw, I had a case on my vps where an update to ufw failed and then the firewall was not working.

With iptables, install the iptables-persistent package so rules are saved across restarts. Do not try to block entire country IP ranges, as this slows the machine down substantially.

fail2ban is great, I would recommend looking at some of your system logs to figure out new rules to add.


Ferm [1] is wonderful as an iptables frontend. Apart from making it simpler to read, it can avoid a lot of repetition with handy shortcuts such as ability to group things. E.g.:

    proto tcp dport (smtp ssmtp qmtp pop3 pop3s imap2 imap3 imaps) ACCEPT;
.. creates rules for each of the ports listed. You can use multiple groups in the same statement as well (let's say all the services above for some crazy reason also listened on udp - you'd just replace "tcp" with "(tcp udp)").

Being able to set variables is also fantastic. E.g.:

    @def $DEV_PRIVATE = (eth0 eth1);

[1] https://github.com/MaxKellermann/ferm


Also, FireHOL [1] is an interesting option. It also has a DSL for managing firewall rules. The custom service port definition is a little weird, but overall I like it better than ferm.

[1] https://firehol.org/


Seconding the recommendation to avoid ufw. I haven't actually used it or had a problem with it, but if you understand iptables then you don't need it, and if you don't understand iptables, you're better off just learning to use iptables directly so you can handle those cases that ufw doesn't support or clean up after it when it breaks.

I didn't know about iptables-persistent, but it's easy enough to just "iptables-save > /etc/network/iptables" once you're finished changing the config, and "iptables-restore < /etc/network/iptables" in /etc/rc.local. Pretty sure those binaries come with iptables itself, so they should always be available. (I've never dealt with a system that had iptables where they weren't.)


I recommend iptables-persistent because fail2ban adds rules dynamically, and it just makes things easier to handle with that aspect automated.


I didn't notice that iptables-persistent actually saves the currently configured rules periodically. That's both kinda neat and a little scary, and I'm not entirely sure I see much value in persisting dynamic rules; it seems like it'd be easy to end up with a long chain of stale rules that way. Still worth knowing about the automatic persistence, though.


Surely as rules are deleted they disappear from the saved copy?

Or is the issue that fail2ban et al don't remove stale rules?


It does, but I seem to remember it having trouble cleaning up after itself when abruptly terminated and restarted. It's been a few years, though, so that's probably no longer an issue.


Why don't they disable root logins with password period and only allow SSH key authentication?

Also, if you put a passphrase on your SSH key, does that mean you have to enter it every time you want to SSH to the server (in order to unlock the key), or does it stay cached in most SSH clients (ssh in the Mac terminal, PuTTY on Windows, etc.)?

Isn't watching failed logins kind of useless? I think it is more important to see what successful logins were made.


They do, don't they? Later down there is a section Enforce ssh key logins. However that makes activating the root account even stranger.

The thing with the ssh key passphrase doesn't make much sense to me. I think this is just about "keep your private key safe". On Linux, passwords like this (like gpg) get cached by the usual password input clients.

You are right about watching the successful logins first. He writes it is to raise awareness.


If you put a passphrase on your ssh key you only have to enter it when you initially add it to your ssh-agent.


yes, ssh-agent will let you enter the password once, and then it won't prompt you anymore.

(see https://help.github.com/articles/working-with-ssh-key-passph... )
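A minimal sketch of the flow (the key path is an example):

```shell
eval "$(ssh-agent -s)"        # start an agent for this shell session
ssh-add ~/.ssh/id_ed25519     # prompts for the passphrase a single time
ssh-add -l                    # list cached keys; later `ssh` calls reuse them
```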


> Isn't watching failed logins kind of useless? I think it is more important to see what successful logins were made.

Are you talking about Fail2Ban? I'm not an expert but my guess is that it's defense in depth. Rate limiting failed logins is useful in the case where you messed up somewhere else and have a service/login that is vulnerable to password guessing.

I would imagine that auditing successful logins is definitely useful but was left out of the guide for brevity.


It's a tradition to nitpick these kinds of lists. Here's my take.

>I generally agree with Bryan that you'll want to disable normal updates and only enable security updates.

Hmm, fairly certain that Ubuntu (and others) don't do major product updates or API-breaking updates via apt-get. You shouldn't have to worry about breaking anything if you use normal updates. This seems a bit too conservative to me and leads to problems down the line: being stuck on an ancient or bugged library and then having to do the update manually later, usually after wasting a couple of hours googling why $sexy_new_application isn't working right on that server.

He set up an email alert, but no SMTP server to actually send it. Also, OSSEC takes a few seconds to install and is much nicer than emailing full logs.

Lastly, fail2ban is becoming sysadmin snake-oil/fix-all. Its use is questionable in many circumstances. There's a real chance of being locked out of your own server with it. If people are recommending it, they should at the very least give noob-friendly instructions for whitelisting your own IP.


Not sure if others feel this way, but adding this line to sudoers never felt right to me...

deploy ALL=(ALL) ALL

I usually instead limit the deploy user to a smaller subset of commands e.g. the init.d script to control a service.

obviously if someone gained access to the deploy user we're probably SOL anyway... but it just seems safer... we have to log in as an ops user to install or update things on the boxes.
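For illustration, that restriction can live in a sudoers drop-in; "myapp" is a placeholder, and sudoers files should always be edited via visudo:

```
# /etc/sudoers.d/deploy -- service control only, not arbitrary commands
deploy ALL=(root) NOPASSWD: /etc/init.d/myapp start, /etc/init.d/myapp stop, /etc/init.d/myapp restart
```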


Someone on /r/netsec rightly pointed out that you shouldn't ever add a user directly to sudoers anyways. You should add them to the sudo or wheel group. I've since updated the article.

What I've described is more of a base, but according to the Principle of Least Privilege you could go even one step further and do what you're suggesting. You'd probably want to have a couple of users though: an admin user, a deploy user, and a maintain user, all with different privileges.


> you shouldn't ever add a user directly to sudoers anyways

What was the reason for that? I have the deploy user able to run a couple of individual commands without a sudo password (scripts that run canned updates, to be initiated from a buildserver), but I don't see how it would improve things to use a group instead that only holds that user.


I prefer to grant my person-user access to run any command as deploy, so I can kick off deployments from a remote machine using Ansible:

  - include: deploy.yml
    become: yes
    become_user: deploy
And grant separate access for my person-user to restart services as root. And use sudo groups.


I created a script that does almost everything automatically using Ansible and Ansible Vault : https://github.com/guillaumevincent/Ansible-My-First-5-Minut...


I always worry that adding 2FA could make your machine inaccessible if anything happens to google-authenticator in this case. Maybe it's a little bit of paranoia, but I don't like the idea of giving up control over my ability to log into my server.


That would be a perfectly reasonable worry if libpam-google-authenticator relied on Google's infrastructure, but despite the infelicitous name, it does not; it just implements the server side of TOTP. The authentication flow is identical to any other correct TOTP implementation, and you can use any compatible client; no integration with Google services or infrastructure is required. (In fact, I don't think it's even possible.)

Speaking of TOTP without Google, if you use iOS and find the Google Authenticator app unsatisfactory, try Authy. It's good stuff, and well worth a few bucks.


Hurricane Electric's network tools iOS app is free and has an OTP client built in, with iCloud Keychain sync.

OTP Auth is another excellent, free OTP client.

Edit: clarified both are free.


While I'm not in the market for an OTP client, this is the network tools app I've been looking for since I bought my first iPod Touch. If you have remote beer-buying support, point me to it.


You're welcome. I only discovered it myself recently when testing their IPv6 tunnel service.

no beer required :-)


I moved most of my 2FA/TOTP stuff to 1Password. Works great on the Apple watch, and it's one less app (I tried Authy, but I have this thing against using too many apps)


I've been thinking about starting to use a password manager (as I should've done long ago), and 1Password looks pretty good. It's pricey enough that I'd like to hear some pros and cons before I pull the trigger, though. How's it worked out for you? Where are the pain points?


FWIW, the desktop app has a free trial, and on top of that, the developer offers a 30-day money-back guarantee if you buy direct from their store. (Not sure about the mobile app, haven't used that.)

I use the desktop app every day and am generally happy with it. It has many thoughtful touches, like automatically clearing the clipboard a minute or so after you copy a password into it, so that you don't later paste your password into some other text field by accident.


There's also FreeOTP Authenticator from Red Hat, for both Android and iOS.

It's of course open source.


google-authenticator is a fancy name for an algorithm described in RFC 6238. It does not talk to servers or require any network access. Google is not involved in any way; they just made the algorithm popular and branded it.
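You can see it's all local: with the oath-toolkit package installed, the same RFC 6238 codes can be generated offline (the base32 secret here is a made-up example):

```shell
oathtool --totp --base32 "JBSWY3DPEHPK3PXP"   # prints the current 6-digit code; no network involved
```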


I didn't read the question as "what if Google discontinues authenticator" but more as "what if I use Google authenticator on my phone and it gets stolen and i can't log in any more"


it provides you with backup codes that you print and put somewhere safe


For protecting against brute-force login attempts, I use sshguard [1]

I really think this should be installed by default on distros like Ubuntu.

[1] http://www.sshguard.net/


I can't see any benefit, what am I missing? Put SSH on a port that's not 22 and done, no more mass scanning. The only thing SSHGuard has ever done for me is to lock me out when I was accidentally using the wrong key.


If SSH is on a non-standard port, it is still possible to brute-force access to the server. You will see fewer automated attempts but you are still vulnerable to a motivated attacker who port scans you and finds the SSH port. Such an attacker is less common than automated scans but is more of a threat. With Sshguard, you are no longer vulnerable to this type of attack at all, no matter which port you run SSH on.


But brute forcing any reasonable password or key is so far-fetched as to not be something to even consider.


If you can make hundreds of login attempts per second, and you can keep at it for days/weeks/years, you can get through some pretty big password dictionaries with lots of variants (e.g. password/p4ssword/passw0rd/p4ssw0rd/etc.).


Out of curiosity, how does it compare to fail2ban?


Seems more flexible, and has IPv6 support by default.


It also was more robust for me and simpler to configure.

Though I switched from sshguard to fail2ban, because after I started blocking attacks on ssh, script kiddies started to brute force passwords using other services (SMTP/IMAP/POP3 etc.)

Edit: looks like they added ability to monitor other services, I guess I'll re-evaluate it again.


One of the suggestions is to make sure your public key has the .pub extension, and they imply that if someone didn't include the extension they would be reprimanded - any reason for this in particular?


Having a reliable convention like that reduces the risk of someone accidentally copying their private key to a server.


> "You should never be logging on to a server as root."

Can someone explain me, let's say I disabled password logins and only allow login via a key, what are potential downsides of logging in as a root?


Well for one, every command you run has root privileges (instead of requiring sudo). Every process you spawn has root privileges. You're safer keeping your privileges limited and sudoing when required. Also you lose your real-life audit log if multiple people log in as root.


But when I tried using a regular user to run anything on a server, it still required me to use sudo, so the process will have root privileges anyway, no? So I end up typing sudo most of the time, because most processes do not work without root privileges.



Forgot: check that the server isn't backdoored. You'd be surprised how many providers add backdoors and monitoring systems you don't need (I'm looking at you, AWS guys).


I would be annoyed with a cryptic Audi password. I would prefer 'BatteryHorseStaple' passwords. Anything I can't remember gets written on a post-it note and put next to my screen with a note of what it is for. This is my behaviour, and the problem with cryptic passwords is that there are others like me: willing to keep a good password secret, but not willing to be so careful about a clumsy password that is easy for a machine to crack but impossible to remember.


Get a PW manager instead. No passwords should be stored in your head (because every one should be different) and they should be stored behind encryption -- definitely not plain text nor sticky notes.

http://keepass.info/download.html


It may be useful, at step 0, to check out the server and see basic server orientation. Which Linux is it (cat /etc/*-release)? How much ram and disk (htop, df)? How is the filesystem setup (mount)? What packages are already installed (dpkg -l)? What processes are running (ps aux, htop)? What did the last root, including me, do (history)? I also like to know where is the box physically, roughly (tracert, run locally).


My biggest concern with being on a VPS like Linode, once you're all done securing yourself and binding services to the local LAN IP, is an attack from within the network. The VPS you own is also accessible by others on the same subnet, contrary to what you might assume.

I'd love to see a ufw guide for whitelisting only your own internal IPs to be allowed access to any services for ultimate security.


Not sure what you mean but ufw by default blocks everything on your interface so other machines in the local subnet shouldn't have access.

If you want more security and no (or just a single) outgoing service, configure OpenVPN with TLS and put all your local services in a local subnet for your machine. Then not even a portscan can find anything.


Something like Ansible can help with this a ton. My playbooks are littered with tasks such as:

  - ufw: rule=allow direction=in port={{ redis_port }} src={{ hostvars[item]['ansible_ssh_host'] }}
    with_items: "{{ groups['jobservers'] }}"


Very useful, most of this stuff is pretty common for anyone who has done any regular sysadmin work, but definitely good to have a checklist.


echo "set background=dark" > /etc/vim/vimrc.local


You should use `>>` in order not to truncate the existing file if there is one.


For this and a hundred other things, I prefer to use a git home directory.


All users in your team are forced to use dark terminal backgrounds?


You can actually autodetect whether the terminal background is light or dark. For any xterm-compatible terminal, write '\x1b]11;?\x07' to the terminal, and it'll write back a string telling you the background color (for instance, '\x1b]11;rgb:0000/0000/0000\x07', which if written back would set the background color). If the color matches 'rgb:RRRR/GGGG/BBBB', compute the luminance of that color, and assume a dark background if <0.5 and light otherwise.


I didn't know about this, but from now on, when writing CLIs that use color, I'm going to take this into account!


Awesome; more tools should do that. Some caveats, though:

* You might not get a response from every terminal, so limit how long you wait.

* If you don't already have echo turned off, turn it off before sending the sequence, because otherwise it'll be visible as though the user typed it.

* You don't know that the color will use the "rgb:RRRR/GGGG/BBBB" format (a terminal can return anything XParseColor can understand); just read the string from the escape to the terminator, look for 'rgb:', and ignore formats you don't understand.

* To calculate whether a color is "light" or "dark", see https://en.wikipedia.org/wiki/Luma_%28video%29:

    dark = (0.299*red + 0.587*green + 0.114*blue) < 0.5;
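A rough bash sketch of that check, assuming the reply uses 4-digit hex components (one of several formats a terminal may legally return):

```shell
# Decide dark vs light from a reply body like "rgb:RRRR/GGGG/BBBB"
# (16-bit components; luma weights per ITU-R BT.601).
is_dark() {
  local reply=${1#rgb:}
  local r=$((16#${reply%%/*}))
  local rest=${reply#*/}
  local g=$((16#${rest%%/*}))
  local b=$((16#${rest#*/}))
  awk -v r="$r" -v g="$g" -v b="$b" \
    'BEGIN { exit !((0.299*r + 0.587*g + 0.114*b)/65535 < 0.5) }'
}

is_dark "rgb:0000/0000/0000" && echo dark || echo light   # black background -> dark
```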


Have you measured how long typical terminals take to respond?

Regarding the third point, it might be a good idea to just feed it to XParseColor and process it from there.


> Have you measured how long typical terminals take to respond?

Arbitrarily long. Consider that a user might run your application over SSH via a high-latency network connection. Better to just handle it asynchronously. Your input loop needs to watch for escape sequences anyway, so watch for that one and process it when or if you see it.

Sadly, that only works for interactive screen-oriented applications, not run-and-exit command-line applications that want to use color.

> Regarding the third point, it might be a good idea to just feed it to XParseColor and process it from there.

That assumes you have libX11 and an X Display available. The former is a heavy dependency for a CLI application, and the latter requires you to connect to the X server.

I'd suggest just manually handling the common case of "rgb:R/G/B" (where each component may use 1-4 digits and requires scaling accordingly), and then deal with anything else if your users actually encounter it in the wild.


I assume XParseColor would therefore hurt portability on OS X or Wayland Linux. I stand corrected.

I was speaking of this regarding screen-oriented termbox/curses applications, not run-and-exit applications.


Technically you don't need the root password; you can always do password recovery if you have access to the box. And how exactly did you lock yourself out of every account with sudo? Of course there's always "messed up my LDAP or general network settings, can't log in to fix them". There's nothing wrong with setting your root password to a random string and throwing it away, after verifying your sudo works, I guess.

I will admit to being lazy, and with full automation it's faster to spawn a new virtual image and let Ansible run its course than to do root password recovery, where you boot, tell the bootloader to make the init system /bin/sh, hand-edit /etc/shadow and /etc/passwd, and then reboot again, etc. I mean, I can set up a new image almost as fast as I can reboot an old image, and I set up images a lot more often than I do password recovery, so...

Scrap the ssh commentary and set up ssh company wide as per stribika plus or minus local modifications:

https://stribika.github.io/2015/01/04/secure-secure-shell.ht...

"On large scale, you'll be better off with a full automated setup using something like Ansible"

At ANY scale you're better off, unless you're experimenting or time isn't money. It'll take longer to document and test what you're doing by hand than to convince Ansible to do it for you, and if you don't document or test you're just doomed, so it's not like you can avoid that effort. With automation this is more like "first two minutes on a server", not ten.

Some people like to drop a .forward in root's homedir sending mail to your sysadmin mailing list or yourself. I THINK, but might be wrong, that if you do that you don't have to tell logwatch whom to email; it'll go to root and then forward to the right people. More than just logwatch assumes root@something.whatever exists as an email address.

You're missing setting up your centralized rsyslog or local equivalent, and your munin/nagios/zabbix or local equivalent... I still configure Zabbix by hand because I'm old-fashioned, but it's possible to automate that.

NTP is also missing. You can make Kerberos a very sad-faced puppy if time isn't synced. And it's easy to set up to point to local trusted servers.
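For example, pointing at internal servers is usually a two-line change (the server names below are hypothetical):

```
# /etc/ntp.conf -- or the equivalent "server" lines in chrony.conf
server ntp1.internal.example iburst
server ntp2.internal.example iburst
```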

(Note, a post that's nothing but complaining still means the linked article is at least 99.9% correct, it is a nicely written wide ranging TODO list)


> And how exactly did you lock yourself out of every account with sudo?

A single typo in /etc/sudoers or any /etc/sudoers.d file will lock you out of all sudo usage. visudo helps with that, but a single mistake (including in a sudoers.d file installed by a configuration management system or package) will lock you out.


Yes sir, and that's why the paranoid sysadmin SSHes in, sudo su's up, THEN runs Ansible and tests that it works before unleashing the Ansible (or Puppet) across the entire network.

Also if you do the "group auth" thing in sudoers, then you edit that file approximately once per employment and never touch sudoers again. Of course that abstracts the problem into "I deleted the wheel (or sudo, or ...) group on the LDAP server and now I can't sudo up to fix it". And that's why you make snapshot backups on the NAS, so you can roll back the image of the LDAP server (or whatever you use locally). (Edited to add: and don't do something dumb like use the LDAP image running on the OpenStack to authenticate logins into the OpenStack... that would be painful indeed.)


I prefer the lazy-sysadmin approach that avoids all that manual ssh/sudo by making puppet/etc. validate sudoers before updating it. You can syntax-check with visudo -c.
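The validate-before-install pattern is simple enough to do by hand too; a sketch (the file name and rule are hypothetical, and visudo may not exist on a dev box, hence the guard):

```shell
# Syntax-check a candidate sudoers drop-in before it goes anywhere near
# /etc/sudoers.d/. The rule below is a made-up example.
tmp=$(mktemp)
printf 'deploy ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart myapp\n' > "$tmp"
if ! command -v visudo >/dev/null 2>&1; then
    msg="visudo not available here"
elif visudo -c -f "$tmp" >/dev/null 2>&1; then
    msg="syntax OK"   # now: install -o root -g root -m 0440 "$tmp" /etc/sudoers.d/deploy
else
    msg="syntax error -- refusing to install"
fi
echo "$msg"
```

This is the same idea as putting `validate: 'visudo -cf %s'` on the Ansible task that templates the file.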


Thank you for the notes. I'm on the go right now, so sorry for the short reply, but I agree with what you've said. This was more of a primer, with the hope of quickly explaining the basic steps, and what they do, to younger engineers who don't have a ton of experience with them.

RE ssh note: That's the second time someone has linked it to me. It looks really good. I haven't had a chance to read it yet though. Will do.


I have a VPS; when I first got it, it had an additional user set up for some unknown reason. I didn't know it was there until my server was hacked by a bot. I'd suggest adding one step of checking the /home directory (and other places) to make sure no 'unknown' accounts have been set up.


Thanks for this article - very clear, well-motivated, and concise. I'm saving this for myself and others.


I guess I'm a pretty big noob, but why do people so strongly recommend password-protecting your private key? Losing it pretty much dooms you whether or not it's password protected. It might buy you a few hours or so to react and invalidate the public key, I guess...


If you've got a good password on your key, then nobody will be able to use it for years. It most certainly helps.


Years? How long a password would you need to make GPU cracking take years?


A very short one. Reasonable 10+ character passwords should remain out of reach for years.

Also, AFAIK there doesn't currently exist any very effective GPU cracking software for SSH passphrases.
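Back-of-envelope, assuming a truly random 10-character [A-Za-z0-9] password and an attacker who somehow manages 10^9 guesses per second (wildly optimistic against a key-derivation function; both numbers are assumptions):

```shell
# Average crack time = half the keyspace / guess rate.
awk 'BEGIN { printf "%.0f years\n", (62^10 / 2) / 1e9 / 86400 / 365 }'
```

That already lands in the double-digit years, and real KDF stretching cuts the feasible guess rate by several orders of magnitude.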


I went to look up the algorithm GPG uses to encrypt private keys, to help answer the original question, but couldn't find that information anywhere. Do you know what it is?


According to a quick Stack Exchange search, OpenSSL uses 3DES for encryption of private keys.


You've already given your own concrete time estimate - "hours" - how did you work that out?


An old article I read a while back about password cracking using EC2... let me look it up.


This is very similar to my "How To Set Up Your Linode For Maximum Awesomeness" guide:

http://feross.org/how-to-setup-your-linode/


If you open up access from/to port 80 or 443, you also open up access to all the trojans, spyware, telemetry, and auto-updaters created in the last ten years. You'll want to limit access per user and process.


It might be nice if there were some cloud-vendor-specific addendums. For example, on Rackspace you almost always want to install the monitoring daemon (it's actually fairly decent, with a small footprint).


Anyone remember Bastille Linux? https://help.ubuntu.com/community/BastilleLinux


Instead of using unattended-upgrade, I prefer to subscribe to mailing lists and see when there are new security updates.

One could combine that with something like rundeck where you run apt-get upgrade.


Why not make a certified, secured, best-practice, 99%-covered snapshot and share it as part of the one-click installs that most VPS providers offer nowadays?


I think 2FA is generally bad practice, and it's quite sad that it is ubiquitous in e.g. banking and that people try to shove it everywhere. It is analogous to password rules: 8-14 characters, numbers, capital letters and other signs, yet it is very rare that you can use a 40+ character passphrase. It gives a false sense of added security while being annoying at the same time, IMO. It is very common, for me at least, not to have access to my phone all the time, because I left it at home, in the car, etc. Not to mention that if you lose it (or someone steals it) you have a huge PITA to deal with.


2FA doesn't have to be annoying. Take a look at Yubikey devices as an example of how to do this right. The reality is that it is actually really useful at preventing some common attack vectors: password reuse, keyloggers, etc.

It's even better if you're using a hardware dongle that supports U2F (or can be used as a smartcard for SSH), because that can even prevent active MITM attacks.


No, my point is exactly that the 2 in 2FA is inherently annoying, because you need to have physical access to 2 different devices at the same time.

How does it prevent password reuse? You can use the same (weak) password to lock your phone and log in to your banking account (which is, again, false security). However, it could easily be circumvented by randomly generating secure passwords for users (which needs clever advertising, like 2FA got, because users prefer convenience otherwise). In this case your phone is a single point of failure. You could even argue it increases the attack surface.


> How does it prevent password reuse?

It does not prevent password reuse, it mitigates the risks of password reuse in that it adds the requirement of having physical access to a device, which is a show-stopper for most attackers.

If you're using a password manager with sufficiently complex passphrases, the biggest remaining risk factor are targeted malware attacks (something like a keylogger), which is something that typical SMS- or TOTP-App-based 2FA implementations won't help you with, fair enough. Implementations where certain security-sensitive activities require separate confirmation and where the details are transmitted through a separate channel would mitigate this attack to a certain degree as well. As an example, some banks in Europe provide their customers with card readers with a PIN pad that shows transaction details on a separate display. Banks routinely include transaction details in SMS-based TAN mechanisms, which works as well, but is obviously not quite as good.

> You could even argue it increases the attack surface.

How?


Meant in the general sense: more complexity, more opportunity for attacks and/or implementation bugs. For starters, you have a phone number associated with an account already. I would wager losing your phone is nearly impossible to prevent, while picking your passphrase is up to you. Losing your phone alone could compromise your security, and at the very least SMS leaks where you bank.

I'm aware I am a minority with this opinion, but I would be really grateful if I could at least opt out from phone based 2FA.


Is there a similar guide for Windows servers out there?



It would be useful to discuss what prevents the server from being rooted without a trace during the 10 minutes it takes to execute these steps.


Production boxes are not allowed to be plugged into the bare internet or a DMZ unless they were seasoned and tested on the LAN first, and the LAN allows no external traffic in (stateful firewall). If people on your own LAN are trying to pwn you, you've got bigger problems to solve before installing another box.

In the old days this was manually moving ethernet cables; nowadays this is changing which VLAN the virtual image talks to, or, if you use something like OpenStack that implements its own firewall at the virtualization level, you allow no external traffic in until the config and testing are done.

Also you need to verify your install media is not pwned, which means you need access to the md5sum of the media (and how do you know someone didn't MITM the correct md5sum?), and you need to verify your md5sum program isn't pwned, which means you need to verify your verification strategy isn't pwned... this gets recursive real quick.
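The first link in that chain is at least mechanical; for example, with a vendor-published checksum list (every file here is a stand-in, and sha256 is used instead of the long-broken md5):

```shell
# Verify "install media" against a checksum list. All paths are stand-ins.
d=$(mktemp -d)
echo "pretend this is the install media" > "$d/image.iso"
( cd "$d" && sha256sum image.iso > SHA256SUMS )   # normally fetched from the vendor, out of band
( cd "$d" && sha256sum -c SHA256SUMS )            # prints "image.iso: OK" on a match
```

None of which, as the comment says, helps if the checksum list itself, or sha256sum, was tampered with.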


In general a VPS should come up more or less up to date, with your SSH key pre-installed and no services running except SSH. It may permit password auth, but no accounts would have passwords.


I was going to ask the same thing. How do you stop someone from owning a new Linode between the time you start the server and the time you disable password authentication?


I build images locally in virtualbox/vmware player/kqemu/whatever and upload or copy the image over after they are configured minimally.


Hmmm ... why does root need a password? `sudo su`


> You'll only need this root password if you lose your sudo password.

Also, good to have in case the sudoers file is corrupted, e.g. via a bad edit, typo, etc.


    cd /etc
    git init
    git add -A    # "git add *" would miss dotfiles
    git commit -m "before i screwed up"


1.

    useradd -m deploy
2. "PasswordAuthentication no" probably won't work as you expect if UsePAM is on.
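To make point 2 concrete: with UsePAM on, PAM's keyboard-interactive path can still prompt for a password, so it has to be switched off as well. A sketch of the relevant sshd_config lines (OpenSSH; the option was later renamed KbdInteractiveAuthentication):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM yes
```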


May I suggest Ansible (or whatever other provisioning tool)? You can subtract 9+ minutes from the title...


Please don't. Provision once and snapshot, then deploy the snapshot.


What was wrong with 5 minutes? :-)


Nothing! (Except I doubt I can manually get it all done in 5 minutes =) ) Thanks for your great article.

We just ended up adapting your approach with a couple of modifications (like 2FA) and extending it to be more of a primer that explains the steps a bit more, so that the younger engineers understood what each step was doing. I found myself pointing them to your article, but then having to explain what was being performed and its purpose (not a bad thing - just a different audience).

As has been mentioned, in the real world, an Ansible Playbook should be performing these, but teach a man to fish, etc...


I love the article, and thanks for the credit.


Nice guide, better comments, leaving this here for later reference.


Sure makes me glad all that (and so much more) happens in the first negative 10 minutes on any server I deploy.

If you are doing this after your server has launched you are doing it wrong.


No production server should ever be manually configured.


How should this be done by, say, a small team of three with no SysAdmin?


Take a day and get familiar with Ansible; it will save you tons of time in the long run and simplify your dev/staging/prod environments for years to come.


Some subset of the team should learn the use of tools like cobbler and puppet.


Pretty useful, thanks


Don't just change SSH key requirements, also change SSH port.

Port 22 is possibly the most heavily scanned port around.
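If you do move it, change sshd_config and open the firewall for the new port before restarting sshd, or you lock yourself out (2222 below is an arbitrary example):

```
# /etc/ssh/sshd_config
Port 2222

# then, before restarting sshd:
#   ufw allow 2222/tcp
#   systemctl restart ssh
```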


Sigh ... "principal of least privilege"


I'd been using this article for a couple of years, back when I was a Linux server neophyte, and now some of these things seem obvious to me. A good article for total noobs.


Just realized I'm getting downvoted because I thought this was the first 5 minutes article. Whatever!



