shizcakes 496 days ago | link | parent

The premise of this thing is not good advice.

1) Your first couple of minutes on a server should be spent installing a configuration management client, if your bootstrap policies somehow don't already install one.

2) Everything else listed in this document should be configured by a configuration management system.

3) "User account sync tools" have no place in a modern infrastructure; you should use your configuration management tool to (at the bare minimum) deploy /etc/passwd and /etc/sudoers across your infrastructure.

4) You should not use shared/role accounts. The "incremental cost" is paid back immediately when someone leaves your organization; having to notify everyone of a changed password, or having a password change have any negative impact at all, should not be a thing your company does.

This stuff isn't hard. It's worth doing right.



raverbashing 496 days ago | link

You are right, this is a better path

But sometimes, and especially for servers that will be delivered to the customer afterwards, it's not practical to use a configuration management tool.

Also, millions of servers were deployed before Chef/Puppet appeared. You can't say they were all doing it wrong.

Also, Chef/Puppet-type solutions may be overkill for some tasks; Fabric, for example, takes care of the easier cases.

-----

vidarh 496 days ago | link

It's always practical to use a configuration management tool. It's not always practical to use specific tools like Chef/Puppet.

Even a shell script that automates your standard install steps is better than doing it by hand, because it ensures you don't forget any steps, and it can verify the state afterwards so you don't skip any of the verification either.
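A minimal sketch of what such a script might look like (the file name and settings below are placeholders, not any particular article's steps):

```shell
#!/bin/sh
# Idempotent helper: append a config line only if it is not already present,
# so the whole script can be re-run safely.
ensure_line() {
    grep -qxF "$2" "$1" 2>/dev/null || printf '%s\n' "$2" >> "$1"
}

conf=sshd_config.example   # placeholder; a real script would edit /etc/ssh/sshd_config
ensure_line "$conf" "PasswordAuthentication no"
ensure_line "$conf" "PermitRootLogin no"
ensure_line "$conf" "PasswordAuthentication no"   # re-running is a no-op

# Verification step: each setting should appear exactly once.
grep -cx "PasswordAuthentication no" "$conf"
```

Re-running it is safe, and the final grep doubles as the "verify the state afterwards" step.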

-----

raverbashing 496 days ago | link

Ok, great, that's the spirit

You can put all his recommendations in a shell script (as the easiest solution) and then run it. If you've ever done this more than once, you see the value in it.

-----

kelnos 496 days ago | link

The problem is that most of the recommendations are bad.

Having an up-to-date system and only accepting security updates is a good policy. Fail2ban is a good tool (but it's a starting point; you should be doing other things to detect suspicious behavior).
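For reference, a minimal fail2ban override might look like the fragment below (values are illustrative, and the jail name varies between fail2ban versions and distros):

```
# /etc/fail2ban/jail.local -- illustrative values only
[ssh]
enabled  = true
port     = ssh
filter   = sshd
maxretry = 5
bantime  = 3600
```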

The rest is just bad advice.

Having everyone log in using a single user account is a terrible idea. You can't audit who did what, ever. You have to remember to remove people from authorized_keys when they leave, and also make sure that they haven't left themselves a backdoor -- a cron job that reinstates the key, an extra user account, even just changing the root password to something else (how often do you actually check the root password on your boxes?).

User account management is a pain, so that's why we have things like LDAP. Everyone has their own user account. You can audit who does what on every machine, and for stuff that requires root, sudo will log the things people do (of course, if you let people have root shells, that's harder). The only people who get access to a local account (and/or root, but I still think root should just have a random password that no one knows) are a few sysadmins. When someone leaves, you kill their account in the LDAP server.
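As a sketch of the client side, pointing a machine at a central directory can be as small as an sssd configuration like this (the server name and base DN are placeholders):

```
# /etc/sssd/sssd.conf -- illustrative; ldap.example.com and the base DN are placeholders
[sssd]
services = nss, pam
domains = example

[domain/example]
id_provider = ldap
auth_provider = ldap
ldap_uri = ldaps://ldap.example.com
ldap_search_base = dc=example,dc=com
cache_credentials = true
```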

Even better, if this is a possibility, put up a VPN, and only allow ssh access via the VPN (using a firewall). Tie the VPN login to LDAP (and don't let non-VPN-admins ssh directly to the VPN server), and then you can be sure that without a user account in LDAP, no one can log into your servers.
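A firewall sketch of that policy, in iptables-restore form (with 10.8.0.0/24 standing in for the VPN subnet):

```
# Fragment for /etc/iptables/rules.v4 -- the VPN subnet below is a placeholder
-A INPUT -p tcp --dport 22 -s 10.8.0.0/24 -j ACCEPT
-A INPUT -p tcp --dport 22 -j DROP
```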

Blind-updating systems in production is a terrible idea. Things break in the open source world all the time when you do this. Never ever use unattended-upgrades. You just need to be on top of security updates. Period. No excuses.

You should never even need a "my first five minutes on a server" type thing anyway. Rolling out a new server should be fully automated. The first time you log into the server, it should be completely ready to go; in fact, it should be ready to go without you needing to log into it at all. This takes a small amount of up-front effort, and will pay off immediately when you bring up your second server.

-----

Wilya 496 days ago | link

I agree on a lot of things, except LDAP. Of course, it sounds like the ideal way, but in practice, there's a matter of scale to take into account.

Frankly, you need a lot of developers and servers before the time invested in setting up an LDAP deployment, integrating it with logins, and spending the inevitable hours debugging why nobody can access anything anymore becomes more worthwhile than "just rsync/pssh into all servers and edit /etc/passwd".

Actually, if I had to do it, I'd use Chef to automate the creation (and destruction) of user accounts over LDAP any day. Chef can be a pain to learn and use, but any sort of LDAP is even worse.

-----

kelnos 496 days ago | link

I would probably stand up an LDAP server after I had 4 or 5 user accounts to deal with, or, more importantly, more than, say, 3 servers to deal with.

I think there's a lot of fear and hate surrounding LDAP, but that's mostly for historical reasons. LDAP has gotten a lot easier to set up. Even in 2009 a colleague and I set one up (using openldap) and had other machines authenticating off of it in an afternoon. It's gotten even easier than that since then.

And hell, you should be using Chef to set up your LDAP master and slave. So once you have the config correct once, you can bring up another machine without trouble when needed.

-----

ajacksified 496 days ago | link

My friend and I are rebuilding an old, text-based browser RPG, and another friend occasionally helps out with art and game content.

We're on LDAP, which we use to SSH into our EC2 servers, and for authentication when we deploy using `git push production master` to a GlusterFS cluster. We're running our LDAP, application, and file servers on Gentoo. We can easily add new accounts, and we have it set up with group permissions (so the friend can deploy game content to test but not prod, for example).

I refuse to believe that LDAP is "too complicated" or "has to scale before it's useful", when a couple of guys can, in their free time, set it up for themselves. It's saved us a load of time in managing servers that would otherwise take away from the limited time we have to actually write code.

It's also a whole lot cleaner than a bunch of Chef scripts running across a pile of servers; using Chef can too often be a crutch that avoids fixing the actual issue.

-----

raverbashing 496 days ago | link

You are assuming that they have several developers. I bet it's one or two.

If you're dealing with more, let's say five, your suggestions become relevant.

Also, you are assuming these servers are staying with the company, which may not be the case.

"User account management is a pain, so that's why we have things like LDAP"

Which is a bag of hurt in itself.

-----

kelnos 496 days ago | link

> You are assuming that they have several developers. I bet it's one or two

The article presents this as general advice and doesn't make any mention of a team size that's appropriate for the recommendations. Maybe this guy has only 1 or 2 developers working on boxes, but IMO for any number of people >1 you don't want shared accounts.

> Also you are assuming this servers are staying with the company, which may not be the case

In many cases they will be, and, again, the article presents this as general advice. If a server is being configured for a client and sent out, then a general process of setting up local user accounts probably doesn't make sense anyway (unless you only have one client).

Agreed that LDAP isn't the most friendly of things to set up, but there are how-tos for common use-cases, and if you have any skill as a sysadmin, you can do it. As soon as it's set up, it's simple to maintain. Manually keeping accounts in sync across multiple machines is rarely simple. Maybe LDAP is overkill for a handful of hosts and user accounts, but if you expect to grow even a little, expect to need some kind of centralized user account system.

-----

brown9-2 496 days ago | link

If you only have one or two developers, then you have all the more reason to automate the boring and mundane tasks, since the time savings are even greater as a percentage of total time available!

-----

StavrosK 496 days ago | link

I just wrote an Ansible script to do these sorts of things yesterday. Now, whenever I get a new server, I just run the script with its IP, and boom, provisioned.
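A playbook along those lines might be sketched as follows (hostnames, packages, and tasks are placeholders, not the actual script):

```
# provision.yml -- illustrative only
- hosts: all
  user: root
  tasks:
    - name: install fail2ban
      apt: pkg=fail2ban state=present update_cache=yes

    - name: disable ssh password authentication
      lineinfile: dest=/etc/ssh/sshd_config
                  regexp="^#?PasswordAuthentication"
                  line="PasswordAuthentication no"
      notify: restart ssh

  handlers:
    - name: restart ssh
      service: name=ssh state=restarted
```

You can run it against a fresh box with something like `ansible-playbook -i '203.0.113.10,' provision.yml` (the trailing comma turns the single IP into an ad-hoc inventory).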

-----

aaronblohowiak 496 days ago | link

>Also, millions of servers were deployed before Chef/Puppet appeared. You can't tell they did wrong.

You're right -- many used cfengine. Still others used a custom 'config' rpm / deb that deployed all of these files everywhere.

Automated configuration makes sense not just for repeatability, but for auditability and documentation. Especially when you are going to 'hand the server over', the next admin should be able to know what you've changed.

Also, disallow password-based access to everything (use the keys, Luke.)

-----

lorenzfx 496 days ago | link

I guess it all depends on how often you bootstrap a new server. If you do it infrequently, keeping a configuration management tool updated is probably more work. I do it perhaps every two years, and keep a similar list (which I try to keep up to date when I change something in the server configuration) for that purpose.

-----

brown9-2 496 days ago | link

> Also, millions of servers were deployed before Chef/Puppet appeared. You can't tell they did wrong.

This line of thinking represents a logical fallacy - no one claimed that anything other than Chef or Puppet is "doing it wrong".

-----

dhimes 496 days ago | link

I would love to see a link to a counter-post of how to do it your way. Not generalities, but a specific guide.

-----

mef 496 days ago | link

For #4, wouldn't the only change when someone leaves the organization be to remove their key from authorized_keys for the shared account? Why would anyone else have to be updated?

-----

joelesalas 496 days ago | link

It's a basic principle of security. Each account represents one person so that you have a full audit of who did what by watching the activity of a given user account. If everything is run as "devops" user for example, you have no idea who actually performed a given task. Was it Bill, or was it an automated job? PCI-DSS requirements also affect your model for user accounts (hint: shared users are often not compliant).

From the perspective of a sysadmin, this article has a lot of issues and it's inadvisable to follow its recommendations. Who doesn't use a hardware firewall? Who exposes ssh to the internet (requiring fail2ban) when a VPN server is much more secure and easier to use? Setting up an LDAP server is really easy and costs nothing. There's no excuse for shared accounts.

-----

luser001 496 days ago | link

Why do you say a VPN server is more secure? Which one?

I, for one, trust ssh more than any other software wrt security, especially with password login disabled. Disclaimer: I am not a security expert.

-----

mryan 496 days ago | link

I, also, trust SSH more than any other software. But it is still worth adding an additional layer of security in front of SSH to help protect from exploits.

Let's say that, hypothetically, a 0-day exploit was discovered in SSH which allowed remote code execution. A script kiddie begins trawling the internet for publicly accessible SSH servers to attack.

Your servers allow SSH from anywhere on the internet, and are eventually discovered and exploited. Mine, which will only allow SSH connections from my VPN bastion host, are effectively invisible to the attacker and will not get exploited (by this particular script kiddie, at least).

Adding a VPN server in front of SSH won't protect you from an APT, but it will protect you from 99% of the random, automated attacks that take place.

-----

lmm 496 days ago | link

Your outermost server is the one where you should be most worried about having vulnerabilities - if you have a VPN as the outer layer that means the VPN server must be exposed to the public internet, and anyone who compromises it is in a pretty good position. And I'd rate the odds of a 0-day being found at higher for most VPN software than for SSH.

-----

kelnos 496 days ago | link

> anyone who compromises [the VPN server] is in a pretty good position.

Sure. But without a VPN, anyone who compromises even one of your other hosts is in the same position. It's a lot easier to audit a single-purpose VPN server for possible security issues than it is to audit all the application code running on the rest of your production systems.

> And I'd rate the odds of a 0-day being found at higher for most VPN software than for SSH.

I wouldn't. And even if you're right, getting a VPN login still doesn't get you anywhere. You still have to be able to ssh to the rest of the hosts. That's why we do security in layers.

-----

luser001 496 days ago | link

Ok, just in case it wasn't clear: I am asking about the advantages of having a VPN server as a bastion host instead of another SSH server.

You would ssh to the bastion host, and from there to internal hosts.

With the appropriate ssh config at the client end, the tunneling through the bastion can be scripted away (using the ProxyCommand directive and RSA keys).
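For reference, the client-side config being described might look like this (hostnames and the username are placeholders):

```
# ~/.ssh/config -- illustrative
Host bastion
    HostName bastion.example.com
    User alice
    IdentityFile ~/.ssh/id_rsa

# Tunnel to internal hosts transparently through the bastion
Host *.internal.example.com
    User alice
    ProxyCommand ssh -W %h:%p bastion
```

With that in place, `ssh web1.internal.example.com` hops through the bastion automatically.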

-----

kelnos 496 days ago | link

Routing transparency. Sure, you can script a bunch of tunnels, but it's nice to handle routing at a lower layer. Having worked with both setups, I vastly prefer the VPN solution for ease of setup, use, and maintenance.

Also, different attack surfaces. Two layers of the same security measure (ssh) is, all else equal, not quite as good as two layers involving two different measures (VPN, ssh).

-----

daemon13 496 days ago | link

Same question - which open source VPN solution would you recommend?

I liked the idea of adding VPN layer to SSH, so would like to get as much advice as possible :-)

-----

inyourtenement 496 days ago | link

I have experience with many open source VPN servers. The purpose is a bit different -- we provide a VPN service to home users to encrypt their internet traffic. But the same problems should apply.

OpenVPN is the most compatible with a variety of clients. OpenVPN runs in userspace, so the clients for each OS and mobile platform interoperate well. The downside is, it does require a client program to be installed and configured. It's considered very secure, using SSL. Since it's userspace, moving large amounts of traffic means more context switching and higher cpu usage. Despite that, I've found it to be faster and more stable than the alternatives.

L2TP/IPSec is built in to most clients -- Windows, OS X, mobile. But every implementation is different and it's hard to configure a server to work with all of them. There are also more moving parts -- an IPSec server (openswan, strongswan, or racoon), an L2TP server (openl2tpd, xl2tpd), and a PPP server (pppd). IPSec seems to be a secure protocol, but it's very complicated. I tend to distrust complicated security.

Pure IPSec has many of the problems of L2TP/IPSec, with the added problem of being difficult to configure on Windows and OS X.

PPTP is not performant or very secure. Other than the fact that almost every client supports it, I see no reason to use it for a new VPN.

-----

kelnos 496 days ago | link

I've had success with OpenVPN. I don't know that I'd specifically recommend it over other options, as I don't have much experience with anything else.

-----

mryan 496 days ago | link

Yes, I agree, I was just giving an example of how an additional layer can help protect against automated attacks, even for highly-secure services like SSH.

I also agree that SSH is less likely to have flaws than most VPN software. But on a properly configured bastion host, bypassing the VPN would just put you in a position where you can attack SSH. You would still need to bypass SSH to access production servers.

-----

daemon13 496 days ago | link

Which open source VPN solution would you recommend?

-----

vidarh 496 days ago | link

> and anyone who compromises it is in a pretty good position.

You are assuming the VPN host is trusted more than most people trust random servers on the internet.

-----

lmm 496 days ago | link

I am, just from my experience in real-life companies. It takes an awful lot of discipline to treat servers as if they were exposed to the public internet when you know full well that they're not.

-----

benedikt 496 days ago | link

This is just faulty logic.

openssh is one of the most secure projects. It's developed by the security obsessed (and I mean it in a kind way) folks at OpenBSD.

I, for one, am ready to place far more trust in openssh than in any VPN daemon. The most commonly used ones are proprietary.

What if there is a 0-day vuln (not exploit) for these VPN daemons? That's far more likely. "Securing" ssh with a VPN is just one step beyond security by obscurity.

If you are afraid of script-kiddies and scanners, let your sshd listen on a non-standard port.

-----

stephen_g 496 days ago | link

Port scanners do generally scan non-standard ports too, you know...

I don't think that exposing SSH to the internet is that bad, but your argument is not sound - requiring a VPN does add security, because if there happened to be a vulnerability in it that allowed access, all it would do is expose SSH on the machines (I'm assuming you have proper firewalls set up), which you are advocating making public in the first place.

Saying it adds no security is false, because you'd require an unpatched vulnerability both in your VPN server and in the SSH server simultaneously. A zero-day in one is possible, but in both at the same time is far, far less likely.

-----

sneak 496 days ago | link

> "Securing" ssh with a VPN is just one step beyond of security by obscurity.

You're not securing ssh with a vpn. You're adding another layer. ssh is still secured by all of ssh's existing protection.

ssh behind a vpn requires that someone compromise both the vpn _and_ the ssh service to gain access that, without a vpn, would require compromising only the ssh service.

-----

cbs 496 days ago | link

> Your servers allow SSH from anywhere on the internet, and are eventually discovered and exploited. Mine, which will only allow SSH connections from my VPN bastion host, are effectively invisible to the attacker and will not get exploited

So, just to see if I'm reading you right: you're using a VPN in the place of an SSH jump box, not making a judgement about the fitness or trust placed in your VPNd over your SSHd.

-----

rdl 496 days ago | link

When someone advocates using a VPN, that doesn't mean not using SSH too. VPN + firewall just restricts who has the potential to try to SSH to you, and provides additional protection and central access control/management.

-----

thomasknowles 496 days ago | link

Agreed; anyone who has touched PCI DSS would agree you need to associate access with a human user. Shared accounts would not work: if you look at the security logs, they won't differentiate between which keys were used for a generic account.

-----

waverider 496 days ago | link

It's a good idea to use fail2ban even if you use VPN.

-----

vidarh 496 days ago | link

If they have had access to a shared account, how much work are you willing to put in to verify that there's nothing in that shared account that will get executed by another user later that will quietly reinstate a key?

A reason to have separate accounts is that not only do you terminate access, you also have an easier time ensuring that less of what that person had access to could have been compromised. (This of course goes right out the window if said person has sudo/su access, in which case you have a much harder time, but even then giving them individual accounts means your opportunity to audit becomes so much greater)

After all, it's not the honest guy who'll never try to log in again that you're primarily trying to protect against (in fact, for honest people a good security policy is a protection, making them less likely to become suspects if/when something happens - it's in your own interest, when you leave an organisation, to ensure you get locked out). It's the guy who might decide to try something later, or who might even be thinking about doing something before they leave.

-----

Nursie 496 days ago | link

But if you give people any access to any system surely this is a concern. As a software dev, maybe I've inserted something into one of the build scripts that quietly re-opens my backdoor to the source control server...

I haven't, but if you assume actually malicious users you're probably going to end up with something so locked down it's useless. Aren't you?

-----

vidarh 496 days ago | link

It's commonly stated that 9 out of 10 security threats come from employees or other insiders. You should assume malicious employees. Sooner or later you will hire the wrong person.

Now, you must also have a functioning system, and so you may take risks by leaving things more open than you would like if you don't have the resources to thoroughly lock everything down.

But wherever locking things down further costs you very little, you should take the opportunity. And elsewhere you should assess what level of protection you can afford. Ultimately it is a cost-benefit analysis. Many risks are not worth spending money protecting against. Others are vital.

But even disregarding malicious users: individual user accounts are not just a protection against malicious users, but against careless ones. When someone sets a password that gets guessed, you want to be in a position where exploiting that person's credentials is as hard as possible, and tracking down actions taken via the account is as easy as possible.

And yes, you could insert something into a build script. But if the build script is committed, and the commit was pushed from a named, individual account, you're now at the risk of going to jail. Creating deterrents is often a sufficient risk mitigation strategy to be acceptable.

-----

jackalope 496 days ago | link

Quite the contrary: You're probably going to end up with documented procedures for deploying software that are simplified, follow existing standards/best practices, and don't rely on complex stone soup build/init scripts concocted by inexperienced developers (and I've seen some doozies).

A developer is more likely to create better and more easily maintainable software if the target audience is assumed to be an ordinary user with no special system privileges. In my experience, when a developer has root and assumes everyone else does, deployment becomes a nightmare.

-----

Nursie 496 days ago | link

Not quite sure I understand what you're saying here, I'm not talking about the software being produced, but the systems used to produce it.

What I was trying to say was that there's not really any way for you (server admin guy) to know if I (software dev guy) have inserted something malicious into a script that all the other software folks run constantly (software build system, NOT server build/init script, NOT deployment script).

This is not about the end user's privileges or server setup; it's about how, in a team-based software dev environment, you're probably going to have to have a measure of trust in your employees.

-----

jackalope 496 days ago | link

I see, but I think the same principle applies, even in this narrow case. As a server admin guy or fellow software dev guy, I have to trust that any code you've written has been properly reviewed before checking it into a repository that I pull from. Fortunately, version control tools make this trivial, but you're right, the policy and infrastructure supporting it has to be in place, otherwise you're depending only on voluntary peer review.

Note that as an attacker, there's a high risk of exposure and identification in the scenario you describe, and that's a good thing. A well secured system shouldn't merely prevent attacks, it should also protect innocent users from suspicion (another reason why shared accounts are discouraged).

-----

mbreese 496 days ago | link

But they still would know the 'deploy' password needed for sudo access. And while you could be relatively sure that they couldn't get access, you still couldn't be completely sure since they did have sudo access to begin with. So, the best thing would be to change the shared password. That could be avoided with non-shared accounts.

-----

luser001 496 days ago | link

Are you insinuating that the user could have used sudo access to install a backdoor of some sort? If so, changing the password won't stop them either. Am I missing something?

-----

wvenable 496 days ago | link

Is there a free and recommended configuration management system that does all this?

-----

shizcakes 496 days ago | link

Chef or Puppet are common choices. There are many others, but those two are modern, have large communities, and have decent documentation. There's a decent chance someone has already open-sourced a cookbook/module for many of these items!

-----

grey-area 496 days ago | link

On Linode, which he is using, there is one built in: they allow shell scripts to run on first boot, which could replay his commands as-is without many changes:

http://www.linode.com/stackscripts/

-----

plusbryan 496 days ago | link

Yes, stackscripts are a great way to take advantage of automation. :-)

-----

davidbanham 496 days ago | link

We use Chef, many people run Puppet.

http://www.opscode.com/chef/

https://puppetlabs.com/

-----

chrisbolt 496 days ago | link

Puppet, chef, cfengine, ansible, and salt are a few.

-----

elktea 496 days ago | link

Salt is wonderful.

-----

plasticgun 495 days ago | link

I just read your post out of context. Made me chuckle.

-----

buster 496 days ago | link

Voting for Ansible or Puppet here.

http://ansible.cc/

https://puppetlabs.com/

-----

cschneid 495 days ago | link

No, chef/puppet/etc are configuration management tools. They automate the manual typing of server setup commands, and then verify that the system stays that way.

They are not security tools. So you're on your own on what to actually tell the tools to do. "Install chef" is not a security tip. It's a repeatability tip, so you can get your system up to a known state repeatedly.

For the security side of things, you're back to figuring out what the right steps are, no matter how they're installed.

-----

Axsuul 496 days ago | link

Highly recommend Chef due to its Ruby DSL.

-----

chookrl 496 days ago | link

I recommend Chef - http://www.opscode.com/chef/

-----

jasonlotito 496 days ago | link

> This stuff isn't hard. It's worth doing right.

Can you provide an article as succinct as the OP's that provides this information? Your list is painfully devoid of anything of true value. Since it's not hard, and worth doing right, I imagine something should already be written.

-----

shizcakes 496 days ago | link

This is an excellent request. Reading through all the comments here, it seems like a lot of people are feeling frustrated with the variety of information available and no clear way to discern what is "good".

I don't typically publish writings, but this seems like a good place to start. I'll write something up and post it here for the same critique that we've given Bryan :-)

In the meantime, a decent source of generalized (not succinct) Ops-type knowledge can be found here: http://www.opsschool.org/en/latest/

-----

epo 496 days ago | link

That site is useless; the sample pages I looked at contained brief overview paragraphs and no real content. Many contained "todo" items.

So you criticised the original article, but when asked to provide information or advice of your own, merely came up with something entirely content-free. The original article provided succinct, useful advice, something you have failed to do.

-----

jasonlotito 496 days ago | link

Thanks! I really appreciate it. All too often we are quick to criticize someone for missing information without any willingness to back it up with better, correct information. The joke goes that the best way to get an answer is to publish the wrong one. =)

-----

npsimons 496 days ago | link

If Bastille were still in working order, I'd recommend it as a very good starting point for locking down a configuration. Actual configuration deployment setups I don't put much stock in, because I'm not managing enough machines for it to be worthwhile. I would love to hear from someone who manages a large farm/cluster/VM hosting setup about a very clean and straightforward way to manage configurations. Last I looked, most setups were custom, or cobbled-together packages that didn't "hang" together very well.

-----

peterwwillis 496 days ago | link

First, pick a CM package.

This depends entirely upon your level of masochism and the kind of language you like scripting in, as literally anything will do. Even shell scripts. Even Makefiles. Whatever you're most comfortable with, just start writing out configs. (You will eventually come to hate your job if you allow non-programmers to script/program in your CM, but blah blah keep shipping blah blah.) Break it all out into a wide hierarchy so you can reuse some bits in other bits.

Hey look, I wrote a crappy one! https://github.com/psypete/public-bin/tree/public-bin/src/si...

Next we implement the OP's comments.

Hey look, I already implemented it in my crappy CM tool! https://github.com/psypete/public-bin/tree/public-bin/src/si...

First push your CM tool and configs to the host:

  scp -r simplecfm-0.2 remote-host:

Then run the main config file which calls the others:

  ssh remote-host "cd simplecfm-0.2/ ; perl simplecfm examples/first-five-minutes/main.scfm"

Aaaaand you're done. Of course, I haven't tested these configs and something will probably break (most likely my crappy code), but you'll get the idea from looking at the examples.

-----

matthuggins 496 days ago | link

I'm a good developer, but setting up systems is something I do so infrequently that I always have to relearn it, and I am positive I could be doing things better. Do you have a link to an article that explains how to go about performing some of these ideas you mention?

-----



