1) Your first couple of minutes on a server should be spent installing a configuration management client, if your bootstrap policies somehow don't already install one.
2) Everything else listed in this document should be configured by a configuration management system.
3) "User account sync tools" have no place in a modern infrastructure; use your configuration management tool to (at the bare minimum) deploy /etc/passwd and /etc/sudoers across your infrastructure.
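As a hedged sketch of what "deploy /etc/sudoers from your CM tool" should involve, here's the validate-before-install half. The staged file, its contents, and the %sysadmin group are illustrative assumptions, not from the thread:

```shell
#!/bin/sh
# Sketch: never let a CM tool overwrite /etc/sudoers with a file that
# visudo hasn't syntax-checked first.
set -eu
STAGED=$(mktemp)
printf '%%sysadmin ALL=(ALL) ALL\n' > "$STAGED"
chmod 0440 "$STAGED"

# visudo -c -f checks the syntax of an arbitrary file without touching
# the live /etc/sudoers.
if command -v visudo >/dev/null 2>&1 && visudo -c -f "$STAGED"; then
  echo "syntax OK: safe to install as /etc/sudoers"
else
  echo "visudo check unavailable or failed; on a real host this gates the deploy"
fi
```

The same pattern applies to any file a typo can lock you out of: stage, validate, and only then move into place.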
4) You should not use shared/role accounts. The "incremental cost" is paid back immediately when someone leaves your organization; having to notify everyone of a changed password -- or having a password change cause any negative impact at all -- should not be a thing your company does.
It's always practical to use a configuration management tool. It's not always practical to use specific tools like Chef/Puppet.
Even a shell script that automates your standard install steps is better than doing it by hand: it ensures you don't forget any steps, can verify the state afterwards, and ensures you don't skip any of the verification steps either.
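A minimal sketch of that idea, with the apply and verify steps split out so neither gets skipped. The paths use a temp directory standing in for / purely for illustration:

```shell
#!/bin/sh
# Sketch: even a hand-rolled bootstrap script can separate "apply" from
# "verify". $ROOT is a temp dir standing in for the real filesystem.
set -eu
ROOT=$(mktemp -d)

apply() {
  mkdir -p "$ROOT/etc/ssh"
  printf 'PasswordAuthentication no\n' >> "$ROOT/etc/ssh/sshd_config"
}

verify() {
  # every apply step gets a matching check
  grep -q '^PasswordAuthentication no$' "$ROOT/etc/ssh/sshd_config"
}

apply
verify && echo "verified: password auth disabled"
```

Once a script is structured this way, re-running it on a half-configured box tells you what state the machine is actually in, which is most of what the heavier CM tools buy you.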
The problem is that most of the recommendations are bad.
Having an up-to-date system and only accepting security updates is a good policy. Fail2ban is a good tool (but it's a starting point; you should be doing other things to detect suspicious behavior).
The rest is just bad advice.
Having everyone log in using a single user account is a terrible idea. You can't audit who did what, ever. You have to remember to remove people from authorized_keys when they leave, and also make sure that they haven't left themselves a backdoor -- a cron job that reinstates the key, an extra user account, even just changing the root password to something else (how often do you actually check the root password on your boxes?).
User account management is a pain; that's why we have things like LDAP. Everyone has their own user account. You can audit who does what on every machine, and for stuff that requires root, sudo will log the things people do (of course, if you let people have root shells, that's harder). The only people who get access to a local account (and/or root, but I still think root should just have a random password that no one knows) are a few sysadmins. When someone leaves, you kill their account in the LDAP server.
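The per-person audit trail described above can come from a sudoers policy along these lines (a sketch; the group names, the myapp service unit, and the log path are illustrative assumptions):

```
# /etc/sudoers fragment -- always edit with visudo.
# Grants are per-group, so sudo's log lines name the individual who ran
# the command, never a shared account.
Defaults   logfile=/var/log/sudo.log
%sysadmin  ALL=(ALL) ALL
%deploy    ALL=(root) NOPASSWD: /usr/bin/systemctl restart myapp
```

Narrow NOPASSWD grants like the %deploy line are also what keep people out of root shells, which is what makes the log useful in the first place.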
Even better, if this is a possibility, put up a VPN, and only allow ssh access via the VPN (using a firewall). Tie the VPN login to LDAP (and don't let non-VPN-admins ssh directly to the VPN server), and then you can be sure that without a user account in LDAP, no one can log into your servers.
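The firewall half of that can be sketched as an nftables fragment. The 10.8.0.0/24 VPN client range and OpenVPN's default port 1194 are illustrative assumptions:

```
# /etc/nftables.conf (sketch): SSH reachable only from the VPN subnet.
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    udp dport 1194 accept                      # the VPN endpoint itself
    ip saddr 10.8.0.0/24 tcp dport 22 accept   # SSH only over the VPN
  }
}
```

With a default-drop input policy like this, port 22 simply isn't visible from the public internet at all.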
Blind-updating systems in production is a terrible idea. Things break in the open source world all the time when you do this. Never ever use unattended-upgrades. You just need to be on top of security updates. Period. No excuses.
You should never even need a "my first five minutes on a server" document in the first place. Rolling out a new server should be fully automated. By the first time you log into the server, it should be completely ready to go; better yet, it should be ready without you needing to log into it at all. This takes a small amount of up-front effort, and pays off immediately when you bring up your second server.
I agree on a lot of things, except LDAP. Of course, it sounds like the ideal way, but in practice, there's a matter of scale to take into account.
Frankly, you need a lot of developers and servers before the time invested in setting up an LDAP deployment, integrating it with logins, and the inevitable hours spent debugging why nobody can access anything anymore become more worthwhile than "just rsync/pssh into all servers and edit /etc/passwd".
Actually, if I had to do it, I'd use Chef to automate creation (and destruction) of user accounts over an LDAP setup any day. Chef can be a pain to learn and use, but any sort of LDAP is even worse.
I would probably stand up an LDAP server after I had 4 or 5 user accounts to deal with, or, more importantly, more than, say, 3 servers to deal with.
I think there's a lot of fear and hate surrounding LDAP, but that's mostly for historical reasons. LDAP has gotten a lot easier to set up. Even in 2009 a colleague and I set one up (using openldap) and had other machines authenticating off of it in an afternoon. It's gotten even easier than that since then.
And hell, you should be using Chef to set up your LDAP master and slave. So once you have the config correct once, you can bring up another machine without trouble when needed.
My friend and I are rebuilding an old, text-based browser RPG, with another friend who occasionally helps out with art and game content.
We're on LDAP, which we use to SSH into our ec2 servers, and which we use for authentication when we deploy using `git push production master` to a GlusterFS cluster. We're running our LDAP, application, and file servers on Gentoo. We can easily add new accounts, and we have it set up with group permissions (so the friend can deploy game content to test but not prod, for example).
I refuse to believe that LDAP is "too complicated" or "has to scale before it's useful", when a couple of guys can, in their free time, set it up for themselves. It's saved us a load of time in managing servers that would otherwise take away from the limited time we have to actually write code.
It's also a whole lot cleaner than a bunch of Chef scripts running a script across a fleet of servers; using Chef that way can too often be a crutch that avoids fixing the actual issue.
You are assuming that they have several developers. I bet it's one or two.
The article presents this as general advice and doesn't make any mention of a team size that's appropriate for the recommendations. Maybe this guy has only 1 or 2 developers working on boxes, but IMO for any number of people >1 you don't want shared accounts.
Also, you are assuming these servers are staying with the company, which may not be the case.
In many cases they will be, and, again, the article presents this as general advice. If a server is being configured for a client and sent out, then a general process of setting up local user accounts probably doesn't make sense anyway (unless you only have one client).
Agreed that LDAP isn't the most friendly of things to set up, but there are how-tos for common use-cases, and if you have any skill as a sysadmin, you can do it. As soon as it's set up, it's simple to maintain. Manually keeping accounts in sync across multiple machines is rarely simple. Maybe LDAP is overkill for a handful of hosts and user accounts, but if you expect to grow even a little, expect to need some kind of centralized user account system.
>Also, millions of servers were deployed before Chef/Puppet appeared. You can't tell they did wrong.
You're right -- many used cfengine. Still others used a custom 'config' rpm / deb that deployed all of these files everywhere.
Automated configuration makes sense not just for repeatability, but for auditability and documentation. Especially when you are going to 'hand the server over', the next admin should be able to know what you've changed.
Also, disallow password-based access to everything (use the keys, Luke.)
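Concretely, that's a couple of lines in sshd_config (followed by an sshd reload); these are standard OpenSSH directives:

```
# /etc/ssh/sshd_config -- key-only logins
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin prohibit-password
```

Keep an existing session open while you reload, so a mistake here doesn't lock you out.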
I guess it all depends on how often you bootstrap a new server. If you do it infrequently, keeping a configuration management tool updated is probably more work. I do it perhaps every two years and keep a similar list (I try to keep up to date if I change something in the server configuration) for that purpose.
It's a basic principle of security. Each account represents one person so that you have a full audit of who did what by watching the activity of a given user account. If everything is run as "devops" user for example, you have no idea who actually performed a given task. Was it Bill, or was it an automated job? PCI-DSS requirements also affect your model for user accounts (hint: shared users are often not compliant).
From the perspective of a sysadmin, this article has a lot of issues and it's inadvisable to follow its recommendations. Who doesn't use a hardware firewall? Who exposes ssh to the internet (requiring fail2ban) when a VPN server is much more secure and easier to use? Setting up an LDAP server is really easy and costs nothing. There's no excuse for shared accounts.
I, too, trust SSH more than any other software. But it is still worth adding an additional layer of security in front of SSH to help protect from exploits.
Let's say that, hypothetically, a 0-day exploit was discovered in SSH which allowed remote code execution. A script kiddie begins trawling the internet for publicly accessible SSH servers to attack.
Your servers allow SSH from anywhere on the internet, and are eventually discovered and exploited. Mine, which will only allow SSH connections from my VPN bastion host, are effectively invisible to the attacker and will not get exploited (by this particular script kiddie, at least).
Adding a VPN server in front of SSH won't protect you from an APT, but it will protect you from 99% of the random, automated attacks that take place.
Your outermost server is the one where you should be most worried about having vulnerabilities - if you have a VPN as the outer layer, that means the VPN server must be exposed to the public internet, and anyone who compromises it is in a pretty good position. And I'd rate the odds of a 0-day being found as higher for most VPN software than for SSH.
> anyone who compromises [the VPN server] is in a pretty good position.
Sure. But without a VPN, anyone who compromises even one of your other hosts is in the same position. It's a lot easier to audit a single-purpose VPN server for possible security issues than it is to audit all the application code running on the rest of your production systems.
> And I'd rate the odds of a 0-day being found as higher for most VPN software than for SSH.
I wouldn't. And even if you're right, getting a VPN login still doesn't get you anywhere. You still have to be able to ssh to the rest of the hosts. That's why we do security in layers.
Routing transparency. Sure, you can script a bunch of tunnels, but it's nice to handle routing at a lower layer. Having worked with both setups, I vastly prefer the VPN solution for ease of setup, use, and maintenance.
Also, different attack surfaces. Two layers of the same security measure (ssh) is, all else equal, not quite as good as two layers involving two different measures (VPN, ssh).
I have experience with many open source VPN servers. The purpose is a bit different -- we provide a VPN service to home users to encrypt their internet traffic. But the same problems should apply.
OpenVPN is the most compatible with a variety of clients. OpenVPN runs in userspace, so the clients for each OS and mobile platform interoperate well. The downside is, it does require a client program to be installed and configured. It's considered very secure, using SSL. Since it's userspace, moving large amounts of traffic means more context switching and higher cpu usage. Despite that, I've found it to be faster and more stable than the alternatives.
L2TP/IPSec is built in to most clients -- Windows, OS X, mobile. But every implementation is different and it's hard to configure a server to work with all of them. There are also more moving parts -- an IPSec server (openswan, strongswan, or racoon), an L2TP server (openl2tpd, xl2tpd), and a PPP server (pppd). IPSec seems to be a secure protocol but it's very complicated. I tend to distrust complicated security.
Pure IPSec has many of the problems of L2TP/IPSec, with the added problem of being difficult to configure on Windows and OS X.
PPTP is not performant or very secure. Other than the fact that almost every client supports it, I see no reason to use it for a new VPN.
Yes, I agree, I was just giving an example of how an additional layer can help protect against automated attacks, even for highly-secure services like SSH.
I also agree that SSH is less likely to have flaws than most VPN software. But on a properly configured bastion host, bypassing the VPN would just put you in a position where you can attack SSH. You would still need to bypass SSH to access production servers.
Port scanners do generally scan non-standard ports too, you know...
I don't think that exposing SSH to the internet is that bad, but your argument is not sound - requiring a VPN does add security, because if there happened to be a vulnerability in it that allowed access, all it would do is expose SSH on the machines (I'm assuming you have proper firewalls set up), which you are advocating making public in the first place.
Saying it adds no security is false, because you'd require an unpatched vulnerability both in your VPN server and in the SSH server simultaneously. A zero-day in one is possible, but in both at the same time is far, far less likely.
> Your servers allow SSH from anywhere on the internet, and are eventually discovered and exploited. Mine, which will only allow SSH connections from my VPN bastion host, are effectively invisible to the attacker and will not get exploited
So, just to see if I'm reading you right: you're using a VPN in the place of an SSH jump box, not making a judgement about the fitness or trust placed in your VPNd over your SSHd.
When someone advocates using a VPN, that doesn't mean not using SSH too. VPN + firewall just restricts who has the potential to try to SSH to you, and provides additional protection and central access control/management.
Agreed; anyone who has touched PCI DSS would agree you need to associate access with a human user. Shared accounts would not work: if you look at the security logs, they won't differentiate which key was used for that generic account.
If they have had access to a shared account, how much work are you willing to put in to verify there's nothing left in that shared account that, when executed by another user later, will quietly reinstate a key?
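A sketch of the kind of sweep that question implies. It runs against a seeded temp directory here so it's safe to execute anywhere; on a real box you'd point it at the shared account's home, and the key comment and cron line are fabricated examples of what to look for:

```shell
#!/bin/sh
# Sketch: a few of the places a departing user of a shared account could
# have left a foothold. $HOMEDIR is seeded with example data.
set -eu
HOMEDIR=$(mktemp -d)
mkdir -p "$HOMEDIR/.ssh"
printf 'ssh-ed25519 AAAA... alice@laptop\n' > "$HOMEDIR/.ssh/authorized_keys"
printf '@reboot /home/shared/.hidden/reinstate-key.sh\n' > "$HOMEDIR/crontab.dump"

echo "== keys that can still log in =="
cat "$HOMEDIR/.ssh/authorized_keys"
echo "== cron entries that could quietly reinstate one =="
grep -E '@reboot|curl|wget|\.sh' "$HOMEDIR/crontab.dump" || true
```

And that only covers the obvious hiding spots; shell rc files, at(1) jobs, and anything the account can write that other users execute all need the same treatment, which is the point: with individual accounts you just delete the account.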
A reason to have separate accounts is that not only do you terminate access, you also have an easier time ensuring that less of what that person had access to could have been compromised. (This of course goes right out the window if said person has sudo/su access, in which case you have a much harder time, but even then giving them individual accounts means your opportunity to audit becomes so much greater)
After all, it's not the honest person who'll never try to log in again that you're primarily trying to protect against. (In fact, for honest people a good security policy protects them by making them less likely to become suspects if/when something happens; when you leave an organisation, it's in your own interest to ensure you get locked out.) It's the person who might decide to try something later, or who might even be thinking about doing something before they leave.
But if you give people any access to any system surely this is a concern. As a software dev, maybe I've inserted something into one of the build scripts that quietly re-opens my backdoor to the source control server...
I haven't, but if you assume actually malicious users you're probably going to end up with something so locked down it's useless. Aren't you?
It's commonly stated that 9 out of 10 security threats come from employees or other insiders. You should assume malicious employees. Sooner or later you will hire the wrong person.
Now, you must also have a functioning system, and so you may take risks by leaving things more open than you would like if you don't have the resources to thoroughly lock everything down.
But wherever locking things down further costs you very little, you should take the opportunity. And elsewhere you should assess what level of protection you can afford. Ultimately it is a cost-benefit analysis. Many risks are not worth spending money protecting against. Others are vital.
But even disregarding malicious users: Individual user accounts is not just a protection against malicious users, but against careless users. When someone sets a password that gets guessed, you want to be in a position where exploiting that persons credentials is as hard as possible, and tracking down actions taken via the account is as easy as possible.
And yes, you could insert something into a build script. But if the build script is committed, and the commit was pushed from a named, individual account, you're now at the risk of going to jail. Creating deterrents is often a sufficient risk mitigation strategy to be acceptable.
Quite the contrary: You're probably going to end up with documented procedures for deploying software that are simplified, follow existing standards/best practices, and don't rely on complex stone soup build/init scripts concocted by inexperienced developers (and I've seen some doozies).
A developer is more likely to create better and more easily maintainable software if the target audience is assumed to be an ordinary user with no special system privileges. In my experience, when a developer has root and assumes everyone else does, deployment becomes a nightmare.
Not quite sure I understand what you're saying here, I'm not talking about the software being produced, but the systems used to produce it.
What I was trying to say was that there's not really any way for you (server admin guy) to know if I (software dev guy) have inserted something malicious into a script that all the other software folks run constantly (software build system, NOT server build/init script, NOT deployment script).
This is not about the end-user's privileges, or server set up, just how in a team-base software dev environment you're probably going to have to have a measure of trust for your employees.
I see, but I think the same principle applies, even in this narrow case. As a server admin guy or fellow software dev guy, I have to trust that any code you've written has been properly reviewed before checking it into a repository that I pull from. Fortunately, version control tools make this trivial, but you're right, the policy and infrastructure supporting it has to be in place, otherwise you're depending only on voluntary peer review.
Note that as an attacker, there's a high risk of exposure and identification in the scenario you describe, and that's a good thing. A well secured system shouldn't merely prevent attacks, it should also protect innocent users from suspicion (another reason why shared accounts are discouraged).
But they still would know the 'deploy' password needed for sudo access. And while you could be relatively sure that they couldn't get access, you still couldn't be completely sure since they did have sudo access to begin with. So, the best thing would be to change the shared password. That could be avoided with non-shared accounts.
Chef and Puppet are common choices. There are many others, but those two are modern, have large communities, and decent documentation. There's a decent chance someone has already open-sourced a cookbook/module for many of these items!
No, chef/puppet/etc are configuration management tools. They automate the manual typing of server setup commands, and then verify that the system stays that way.
They are not security tools. So you're on your own on what to actually tell the tools to do. "Install chef" is not a security tip. It's a repeatability tip, so you can get your system up to a known state repeatedly.
For the security side of things, you're back to figuring out what the right steps are, no matter how they're installed.
Can you provide an article as equally succinct as the OP's that provides this information? Your list is painfully devoid of anything of true value. Since it's not hard, and worth doing right, I imagine something should already be written.
This is an excellent request. Reading through all the comments here, it seems like a lot of people are feeling frustrated with the variety of information available and no clear way to discern what is "good".
I don't typically publish writings, but this seems like a good place to start. I'll write something up and post it here for the same critique that we've given Bryan :-)
That site is useless; the sample pages I looked at contained brief overview paragraphs and no real content. Many contained "todo" items.
So you criticised the original article, but when asked to provide information or advice of your own, merely came up with something entirely content-free. The original article provided succinct, useful advice, something you have failed to do.
Thanks! I really appreciate it. All too often we are quick to criticize someone for missing information without any willingness to back it up with better, correct information. The joke goes that the best way to get an answer is to publish the wrong one. =)
If Bastille were still in working order, I'd recommend it as a very good starting point for locking down a configuration. Actual configuration deployment setups I don't put much stock in, because I'm not managing enough machines for it to be worthwhile. I'd love to hear from someone who manages a large farm/cluster/VM hosting operation about a very clean and straightforward way to manage configurations. Last I looked, most setups were custom, or cobbled-together packages that didn't "hang" together very well.
This depends entirely upon your level of masochism and the kind of language you like scripting in, as literally anything will do. Even shell scripts. Even Makefiles. Whatever you're most comfortable with, just start writing out configs. (You will eventually come to hate your job if you allow non-programmers to script/program in your CM, but blah blah keep shipping blah blah.) Break it all out into a wide hierarchy so you can reuse some bits in other bits.
I'm a good developer, but setting up systems is something I do so infrequently that I always have to relearn it, and I am positive I could be doing things better. Do you have a link to an article that explains how to go about performing some of these ideas you mention?