
How I spend my first 5 minutes on a server - plusbryan
http://plusbryan.com/my-first-5-minutes-on-a-server-or-essential-security-for-linux-servers
======
shizcakes
The premise of this thing is not good advice.

1) Your first couple minutes on a server should be used to install a
configuration management client, if your bootstrap policies somehow don't
already install one.

2) Everything else listed in this document should be configured by a
configuration management system.

3) "User account sync tools" should have no place in a modern infrastructure,
you should use your configuration management tool to (at the bare minimum)
deploy /etc/passwd and /etc/sudoers across your infrastructure.

4) You should not use shared/role accounts. The "incremental cost" is paid
back immediately when someone leaves your organization; having to notify
everyone of a changed password, or having a password change have any negative
impact at all, should not be a thing your company does.

This stuff isn't hard. It's worth doing right.

~~~
wvenable
Is there a free and recommended configuration management system that does all
this?

~~~
shizcakes
Chef or Puppet are common choices. There are many others, but those two are
modern, have large communities, and good documentation. There's a decent
chance someone has already open sourced a cookbook/module for many of
these items!

------
networked
Beginner or not, you should probably use visudo [1] instead of

    
    
      vim /etc/sudoers
    

for the sanity checks that it provides, if nothing else. A botched edit of
/etc/sudoers that locks you (along with every other user) out of
administrative access is an unpleasant way to learn this.

[1] <http://linux.die.net/man/8/visudo>
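A minimal illustration of the check visudo provides (the staged file path here is just an example):

```shell
# Edit the live file safely: visudo locks /etc/sudoers, opens it in
# $EDITOR, and refuses to install a syntactically broken file.
EDITOR=vim visudo

# Or validate a staged copy before installing it yourself:
visudo -cf /tmp/sudoers.new && cp /tmp/sudoers.new /etc/sudoers
```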

~~~
Maxious
Similarly "ufw allow from {ipaddress-you-will-access-from} to any port 22"
sounds like a good way to accidentally lock yourself out unless you have an
out of band backup

~~~
andrewvc
FWIW I never understood UFW over straight IP tables, is it really easier to
read?

~~~
luser001
IMHO, yes. At least on Ubuntu, it's never been too clear to me how I should
save my rules so that they come back on startup.

The ufw man page is pretty decent.

~~~
leetrout
I've not tried anything complex with UFW so I still use iptables on my bastion
host that handles my vpn tap. It's not terribly complex to make rules come
back on startup (but probably more involved than one would hope).

For anyone else that followed the thread to this point- this advice on
bringing iptables back up on reboot worked for me
[http://rackerhacker.com/2009/11/16/automatically-loading-
ipt...](http://rackerhacker.com/2009/11/16/automatically-loading-iptables-on-
debianubuntu/) YMMV
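The linked approach boils down to roughly this (Debian/Ubuntu sketch; file and hook names are conventions, not requirements):

```shell
# Save the current ruleset, then restore it from a pre-up hook so the
# rules are in place before the interface comes up.
iptables-save > /etc/iptables.rules

cat > /etc/network/if-pre-up.d/iptables <<'EOF'
#!/bin/sh
/sbin/iptables-restore < /etc/iptables.rules
EOF
chmod +x /etc/network/if-pre-up.d/iptables
```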

~~~
hahainternet
This is how I run iptables on a sufficiently large network of machines.

The advice is not complete. IPv6 is real and really works most of the time
these days. Back up your ip6tables to a file too. I like /etc/firewall-4.conf
and /etc/firewall-6.conf but it's down to preference.

Know about iptables-apply too, lest you be caught unaware.
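In the same vein (file names per the preference above; `-t` is iptables-apply's rollback timeout in seconds):

```shell
iptables-save  > /etc/firewall-4.conf
ip6tables-save > /etc/firewall-6.conf

# Apply a new v4 ruleset; it rolls back automatically unless you
# confirm within the timeout, so a bad rule can't lock you out.
iptables-apply -t 60 /etc/firewall-4.conf
```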

------
josephkern
Firstly, a nice checklist. Easy actionable steps, repeatable, and pretty much
everything you need.

Secondly, you are about 4-5 hours away from learning puppet (or Chef) and
making this checklist into actual code.

Thirdly, you now have a checklist of items that you can use in a job interview
if you get the opportunity to gain a new-hire or an intern.

Lastly, good on you for submitting this to a peer-review on HN. We can be a
picky lot.

TL;DR Checklists are a good first step for building a proper config management
system.

~~~
laumars
We can be picky, but with good reason: security is an exact science and a
costly one to get wrong. There's so much conflicting and outright bad advice
posted online these days that sometimes it takes a picky community to help
clarify best practices.

For what it's worth, some of the advice given in that article was worth
mentioning (eg fail2ban, it's a great tool). But the shared account suggestion
was the complete opposite of how you should be managing user accounts as you
lose audit trails. And for that comment alone, I'd recommend people read that
article with a degree of scepticism before rushing onto any boxes they might
administrate.

That article has generated a lot of good discussion though. So even if just
indirectly, it's been a valuable contribution to HN.

~~~
josephkern
Picky is good! We never grow without constructive criticism. And a community of
practice may yield better results than an individual (at least for simple
things).

> I'd recommend people read that article with a degree of scepticism before
> rushing onto any boxes they might administrate.

Agreed. But the same could be said of everything; hackers and engineers,
empiricists both.

Some of the advice in the article would seem appropriate to someone who had
never worked with another senior engineer. No fault of the OP.

System engineering and security are as much a learned craft as a science; a
dialectic between sand castles (if you will).

------
zrail
The first five minutes on any of my servers involve giving it a name,
installing puppet and adding the server name to my central puppet config.

You seriously do this by hand for every server? That seems error prone and a
huge waste of time when tools like puppet and chef exist.

~~~
ams6110
Puppet and Chef are yet another thing to learn and maintain, if the guy is a
part-time admin with a lot of other responsibilities and a small number of
servers it may not be worth it.

~~~
dbarlett
I was in the same position - too many distinct environments for bash/Fabric,
too little time to learn Chef/Puppet/CFEngine. Ansible [1] seems like a good
compromise: you get the simplicity (runs over SSH) and host targeting of
Fabric with the declarative nature and idempotency of the more complex tools.
You can start with all-in-one "playbooks" [2], then split out tasks, handlers,
Jinja2 templates, files, and variables [3].

As an aside, I think the default fail2ban config is too loose and quiet.
Here's [4] an Ansible task file that configures it to be more aggressive and
send notification emails.

[1] <http://ansible.cc/>

[2] <https://gist.github.com/dbarlett/5079802>

[3] <https://github.com/fdavis/ansible-best-practices>

[4] <https://gist.github.com/dbarlett/5079715>

~~~
gingerlime
I'm using fabric quite frequently, and am trying to understand what makes
configuration management tools a much better choice. I'm currently using
fabric for everything from bootstrapping a new environment from scratch, via
restoring a snapshot from backups, to pushing code updates stored in git.
Perhaps I'm being really daft, but it always evades me why something as simple
as

    
    
        sudo("apt-get install -y <name your packages>")
    

needs to be replaced with

    
    
        - name: Install prerequisites for PPA management
          apt: pkg=$item state=present update_cache=yes
          with_items:
          - python-software-properties
          - software-properties-common
    

I do use a pretty homogeneous environment, which makes things simpler, but
this is a deliberate choice to avoid complexity. If I know all my hosts are
e.g. debian 6, then what makes ansible/chef/puppet so much better than fabric?

I'm not trying to be provocative or negative. I'm really trying to understand
the supposedly big difference between what's labelled a deployment tool, and
configuration management tools.

~~~
zargon
Deployment tools like fabric are imperative, and configuration management
tools are declarative. With configuration management, you define the final
state you want the server to be in, and it will do whatever is needed to get
it into that state. Some or all of the parts might already be done, and it
won't change the parts that are already correct (the declarative configuration
is idempotent).

Deployment tools just execute whatever script you hand them, and so the
scripts are either more brittle (server must be in a precise state beforehand
or it doesn't work right), or require more effort to duplicate the work that
the configuration management software does to only make the needed changes.

If your deployment scripts get complex, it's more difficult to see at a glance
what the end configuration is supposed to be.
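The difference shows up as all the state checks an imperative script has to carry itself. A rough sketch of what a declarative tool does for you implicitly (package, file and line contents are made-up examples):

```shell
# Imperative, made idempotent by hand: inspect current state, act only
# on the difference.
dpkg -s nginx >/dev/null 2>&1 || apt-get install -y nginx

line='AllowUsers deploy'
grep -qxF "$line" /etc/ssh/sshd_config || {
  echo "$line" >> /etc/ssh/sshd_config
  service ssh restart   # restart only when something actually changed
}
```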

~~~
gingerlime
Thanks for the clarification. I think I get the theory a bit better now.

I'm still struggling with seeing some practical examples of those differences.

For example, `apt-get` is pretty much idempotent already, no? If I run
`apt-get install -y <package>` 1000 times in a loop it won't install it 1000
times...

Fabric does give you the building blocks for those kind of checks elsewhere,
such as `exists`, `contains`, `append` (which is supposed to also be
idempotent) etc... I've designed my deployment / bootstrap scripts with fabric
to take this into account. It does add a little overhead, but nothing that
makes me feel I need a better tool... Maybe my deployment base is still
relatively small and homogeneous.

So it's true that I have to put in those checks myself, and that I don't have
a very easy way to discover what state a server is at.

I'll try to take another stab at one of those tools. Maybe things will sink in
when I actually use them. Thanks again for explaining.

~~~
nlew
If you're only doing one or two things, the value is a bit more vague. But
consider even the simplest interaction: the config file for a service should
look like X, and if it has to be changed, the service needs to be restarted
afterward. Oh and there are 10 config files, but you only want to restart the
service once if any of them changes, after they've all been changed. That's
not _hard_ , but it's already starting to look non-trivial.

And what if you want to have the same logic for several services? I guess you
abstract it out to a function. But then it turns out one of those services
doesn't have a restart command, and you have to do stop+start. And another
service won't start if you use restart while it's not running, so you have to
check if it's stopped and use start, otherwise use restart.
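Hand-rolled, the "restart once if any of the ten files changed" logic looks something like this (service and path names are made up):

```shell
# Copy each config file only if it differs, remember whether anything
# changed, and restart the service at most once at the end.
changed=0
for f in conf/*.conf; do
  dst="/etc/myservice/$(basename "$f")"
  if ! cmp -s "$f" "$dst"; then   # differs, or missing on the target
    cp "$f" "$dst"
    changed=1
  fi
done
[ "$changed" -eq 1 ] && service myservice restart
```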

It's much more than just whether the code is declarative or imperative, or
whether it's idempotent or not. An imperative tool can change your system from
known initial state A to desired state B. A declarative system can change it
from _whatever_ initial state it's in to desired state B, even if you never
considered it might be in that state.

~~~
derefr
When you want to make a configuration change to your servers, and they aren't
already in a "known state", why would you think any tool could put them _into_
a known state? When a computer has a virus, say, you don't think it's been put
into a "known state" after the anti-virus program gets done with it; the virus
may have done any number of things you might be unaware of, altered any number
of data or configuration files in subtle ways that the tool doesn't look for,
but implicitly relies upon.

Example: what if, say, one of the provisioning requirements is "make sure this
gem is installed", but on one of the servers /etc/gemrc has "install: --no-
rdoc --no-ri" in it? Now on one server, the docs are missing, while everywhere
else they're available. That sort of thing.

If you're already running on an IaaS like EC2, I think there's a simpler,
better way: rather than trying to get "unknown state" to "known state", why
not use the simplest known state of all-- _unprovisioned_? Write an imperative
script that reinitializes a freshly-provisioned IaaS node to your known state,
and then do a rolling reprovision, terminating old nodes and provisioning new
ones.

An equivalent comparison[1]: would you feel safe running an automated script
that would SSH into a production machine and "git pull" a checked-out repo
sitting on it, from whatever state it happens to be sitting at, up to
refs/heads/master, so as to deploy code from it? Could you _guarantee_ that
that repo hadn't been moved to some state where refs/heads/master isn't a
fast-forward commit? Or would you rather do a fresh "git clone"?

\---

[1] A _contrived_ comparison, though; although the former option is
unreliable, the latter is terribly inefficient, and they're both horrible for
keeping a .git directory inside of a directory that might very well be web-
accessible.

------
beagle3
1\. You should do "apt-get dist-upgrade" to get new kernel packages as well,
otherwise you are stuck on an old kernel. (You might want that. I prefer
updated kernel for the security, firefoxen, etc.). "apt-get upgrade" will only
update existing packages - but the kernel updates require new packages to be
installed.

2\. If you're on ubuntu, root already has no password, and your initial setup
user (whether it is called "deploy" or "kilroy") is in the sudoers file.

3\. Other things I install in the "5 minutes with server" are: htop molly-
guard screen git-core etckeeper

git-core because I prefer my etckeeper in git, but if you want it in bzr you
don't need git-core. INSTALL AND CONFIGURE ETCKEEPER AS SOON AS YOU CAN,
seriously. You need it. You'll thank me when you try to figure out when and
how something in /etc got borked. (You need to edit
/etc/etckeeper/etckeeper.conf if you use git. You need to do "etckeeper init"
and then "etckeeper commit" to establish the baseline.)

molly-guard stops you from rebooting the wrong server

screen (or alternatively tmux) lets you keep your session open through ssh
session disconnects (e.g. when moving from wifi to 3G, or between 3G towers
that give you different external IP). The most useful way to use screen is
"screen -xR" which also lets you share your session with someone else should
you need to.

~~~
richardwhiuk
Indeed dist-upgrade should be used - it isn't just kernel updates either.

The server should also be rebooted. Installing kernel updates does no good if
you never boot into them!

screen -x is equivalent to screen -xR afaict:)

I'd also add "@reboot screen" to crontab, which will recreate a session on
startup. In my bashrc I have:

    
    
      if [ "a$STY" == "a" ]; then screen -x; fi
    

Other useful things include actually setting up backups (duplicity is a useful
first step here), installing munin/nagios to monitor the new box.

Realistically if you are doing this more than once per blue moon, then you
should be using something like puppet to do this automagically.

~~~
beagle3
> screen -x is equivalent to screen -xR afaict:)

"screen -x" requires a session to already exist, whereas "screen -xR" will
join one if it already exists, but will create one if it does not. At least it
does in v4.00 which I use. If you have a "screen" call in your @reboot, you
already have a session, so they will work the same.

------
belorn
> No secure server is complete without a firewall.

Comments like those are why I normally point people to actual security experts
(like, say, Schneier), and why I recommend that new admins should ignore as
much as possible the practices chanted by the industry. A secure server does
not need a firewall. A firewall can be used to secure a server against a
specific threat, but that's it. The days of ping of death are behind us.

I would like to point out that following the article's guide and firewalling
away ICMP, you can end up with a lot of trouble. (see
<http://serverfault.com/questions/84963/why-not-block-icmp>). Some ICMP
messages are not blocked by default by ufw, so I'm unsure how damaging ufw is
when used like this.

At any rate, a firewall is a block. A fresh server install won't have
ports that need to be blocked. By putting up a firewall, there is nothing to
be gained. Before the firewall, the ports are closed. After adding the
firewall, the ports are closed. All that is gained is a hurdle the next time
one wants to install something like a monitor tool (like Munin), or a new
service.

It might be useful as a last line of defense against malware regarding
outgoing traffic. I am normally against that kind of thing however (as
focusing on the cause is better than the effect). At best, one can catch a
spam malware, but any botnet, web server, ddos or other type of malware is
untouched by the rules (ports 80 and 443 are allowed). If the server has email
sending configured so root messages can be sent, then the spam malware can use
that route and the firewall will just sit there.

So let's take a newly installed machine. What threats can be identified and
what risks are we trying to mitigate with the help of this firewall (as
specified by the article)? The only thing I can think of is either a Zero day
TCP/IP stack vulnerability (not a realistic threat), or that the admin doesn't
trust the other admins when they install new services. Yes, if an admin
installs a new email server and enables relaying to the whole world against
the explicit recommendation in bold font by the install wizard and the
configuration file, a firewall can block that admins' actions. Then again,
that same admin could just as well have disabled the firewall to "get the mail
to work", so I'm not sure it's a viable defense against bad admins.

~~~
oconnore
Schneier recommends use of a firewall:
[http://www.schneier.com/blog/archives/2004/12/safe_personal_...](http://www.schneier.com/blog/archives/2004/12/safe_personal_c.html)

You are correct that a firewall will not magically solve all your problems,
but it does help to protect against programs that open ports you didn't know
about.

Recommending against them doesn't make sense, and implying that they are only
useful to prevent TCP/IP zero day vulnerabilities is silly (especially since
the firewall likely wouldn't protect against that anyway).

~~~
cbr
That's from 2004 and is about firewalls on desktops, not servers.

~~~
oconnore
IP hasn't changed much since then, and I'm not sure server vs desktop is a
relevant distinction here.

------
jackalope
Maybe this is considered a preinstallation step, but the very first thing to
do is set the system time on the hardware, before you even boot the OS for the
first time. Then the first step after booting is to confirm the time and reset
it, if necessary. This is essential for accurate and usable logs, file times,
version control timestamps, etc.

It's also a good idea to ensure that sshd has fresh keys that are unique to
that machine. Hopefully, your images are installed without sshd keys,
otherwise you'll have multiple servers with the exact same keys, which is
considered bad practice. During initial configuration _before deployment_ ,
you might want to remove the keys so that sshd will create fresh ones when it
starts:

    
    
        rm -rf /etc/ssh/ssh_host_*
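On Debian/Ubuntu the replacement keys can then be generated immediately rather than waiting for first boot, for instance:

```shell
rm -f /etc/ssh/ssh_host_*
dpkg-reconfigure openssh-server   # recreates any missing host keys
```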

------
nhebb
Looking through the responses here, I'm hopeful that someone will launch a
"Sysacademy" variant for system administration training. There are tutorials
scattered around the web, but (at least for those of us who don't know where
to look), there doesn't seem to be one place that puts it all under one umbrella.

~~~
dbarlett
I have high hopes for Ops School (<https://ops-
school.readthedocs.org/en/latest/>).

~~~
manish_gill
Looks really awesome. Thanks! :)

------
bnegreve
> _vim /home/deploy/.ssh/authorized_keys [and] Add the contents of the
> id_rsa.pub on your local machine and any other public keys that you want to
> have access to this server to this file._

Cool trick: use ssh-copy-id <server> from the client machine.

From the man page:

    
    
        ssh-copy-id - install your public key in a remote machine's authorized_keys
    

It is much easier than editing the remote .ssh/authorized_keys file since
copying and pasting the key is error prone due to extra new lines typically
added by the terminal emulator.
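Typical invocation from the client (user and host are placeholders):

```shell
# Pick the key explicitly with -i; ssh-copy-id appends it to the
# remote ~/.ssh/authorized_keys, creating the file if needed.
ssh-copy-id -i ~/.ssh/id_rsa.pub deploy@server.example.com
```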

------
rdl
My big thing is making sure 1) the machine comes back up cleanly after a
reboot and 2) is current on all patches 3) is running as little as possible.

Also a big fan of externally verifying what ports are open, and making sure
the system is in monitoring, backup, config management systems. Config
management is kind of optional if you have a small number of servers which
don't duplicate configurations, though, and there's often no need to back up
the OS, but any data should be backed up automatically.

------
bcl
Don't forget:

    
    
      netstat -ntap | less
      ps aux | less
    

Also check to see what's enabled to run at boot time via whatever your flavor
uses.

Check for unusual daemons, ssh running on other ports (yes, the provider pre-
loaded systems with a back-door ssh without disclosing it to us).

This is especially important when you are taking over admin on a server you
didn't setup yourself. Other folks have weird ideas on how to admin things.
Like webmin for example...

I also like epylog for finding unexpected stuff in the logs.

~~~
ck2
_to see what's enabled to run at boot time_

what's good beyond this:

    
    
      chkconfig
      cat /etc/rc.local

~~~
majelix
chkconfig is redhat specific. I believe Debian variants use update-rc.d.

Generically (for Linux), take a look at /etc/init.d , /etc/inittab, or
/etc/rc?.d.
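For a quick look on either family (sysvinit-era commands, a sketch):

```shell
# Red Hat family: services switched on per runlevel
chkconfig --list | grep ':on'

# Debian family:
service --status-all     # +/-/? summary of init scripts
ls /etc/rc2.d/S*         # what starts in the default runlevel
```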

------
robomartin
I went through the article and then read every single post on this thread. I
am not a security expert so I won't even try to contribute except to say that
I see a lot of people offering criticism without taking the extra step of
explaining how they would go about hardening a fresh Linux install (or a pile-
o-servers in a rack, whatever is applicable).

It'd sure be nice for those of us who are not security experts to read
alternative approaches rather than, paraphrasing and not picking on anyone,
"using a firewall is dumb" or "blocking ssh is pointless".

I like isolated ideas such as using a script to completely automate the
provisioning of new boxes. Kind of a no-brainer if you ask me. The problem is
that such recommendations are not followed by something like "Here's the
script I use on Ubuntu 12.04 LTS".

How about it guys? Would you care to attempt to produce a canonical HN "How to
harden your server" reference?

Maybe one of the security experts on HN can start a repository on Github to
evolve a canonical script. I'm pretty much 100% Ubuntu 12.04 LTS, so it is my
hope that this is one of the platforms that is addressed.

I did some looking around and this is what I found (I am in no position to
evaluate the merits of any of these at anything beyond an intermediate level):

<https://github.com/bluedragonz/server-shield>

<https://github.com/eglimi/linux_hardening>

<http://www.cyberciti.biz/tips/linux-security.html>

<http://ubuntuforums.org/showthread.php?t=1002167>

[http://www.thefanclub.co.za/how-to/how-secure-
ubuntu-1204-lt...](http://www.thefanclub.co.za/how-to/how-secure-
ubuntu-1204-lts-server-part-1-basics)

[http://www.andrewault.net/2010/05/17/securing-an-ubuntu-
serv...](http://www.andrewault.net/2010/05/17/securing-an-ubuntu-server/)

<http://ubuntuforums.org/showthread.php?t=1919111>

<https://help.ubuntu.com/12.04/serverguide/security.html>

<http://www.sans.org/score/checklists/linuxchecklist.pdf>

[http://nvd.nist.gov/scap/content/stylesheet/scap-
rhel5-docum...](http://nvd.nist.gov/scap/content/stylesheet/scap-
rhel5-document.htm)

[http://blogs.csoonline.com/ubuntu_lts_vulnerability_scrub_ag...](http://blogs.csoonline.com/ubuntu_lts_vulnerability_scrub_against_national_vulnerability_database_nvd_nist_gov)

<http://ubuntuforums.org/showthread.php?t=510812>

~~~
belorn
You are right, and I share some of the blame here for not including constructive
suggestions in my comments. I do not have a script to offer, but I can walk
through the process I work by.

#1: A good password is a must. During installation, have a second computer
generate a good password and either memorize it, or GPG encrypt it somewhere
on the second computer. pwgen is decent at generating passwords.
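For instance (password length and output file name are arbitrary):

```shell
# One 20-character password with symbols, piped straight into a
# symmetrically encrypted file so it never sits on disk in the clear.
pwgen -sy 20 1 | gpg --symmetric --armor --output root-pass.asc
```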

#2: I fully agree with the article on automatic updates if it's a personal
computer. For others, one can have root mails sent to you if you are fast and
read mail daily. This is how many people read about vulnerabilities before
they reach the news.

#3: When installing a large package like a web service, I keep in mind the
long-term prospects of each project. I ask the questions: Is there a deb
package? Is it being maintained by a large group of independent developers? Is
it mentioned in discussions at Serverfault? Are there any recent updates? What
does the Wikipedia page have to say?

#4: Read the man page, and check any section labeled security. Some man pages
will say things like "we have this port open. It's completely insecure, and we
expect either the local network to be safe or that you use a firewall".
Though this has only happened once for me, it's still a good practice to check
the man page with new services.

#5: Avoid php themes/mods that require you to manually patch things. They
won't be updated by Ubuntu, so things will either end with you uninstalling
them or forgetting they exist and thus getting hacked. Sometimes Ubuntu will
just install over the mod, dealing with the issue for you.

Other than that, hardening depends on the use case. A wiki/forum will need some
form of anti-spam protection. A media center needs access control or a firewall
to only allow the local network. Unsecured protocols like nfs and nis need
something like ipsec. A mail server needs authenticated smtp.

~~~
robomartin
> have a second computer generate a good password

Why a second computer?

~~~
belorn
The Ubuntu installer does not ship with pwgen, and it's good to have the
password done right, right there while installing.

------
ck2
Your first 5 seconds should probably be

    
    
      w
      last
      dmesg

~~~
stevekemp
Add "free" and "top" there too.

~~~
cs02rm0
I've had df -h write out to the motd before for clients who insisted on tiny
slices of disk, no automated monitoring and no manual checking of disk space
even after it chewed itself up.

I'm not sure it helped, but it made me feel better.

------
belorn
It saddens me each time I see a security best practice guide that suggests
turning off ssh access for root. It's a very useful feature, and the security
industry should focus on the security problems rather than removing features
without thinking about the actual benefits of doing so.

Sysadmins with root access should be able to handle a random 8 character or
longer password, and that number is large enough for secure publicly
accessible ssh. If you're not a sysadmin or are unable to remember a random
password, try going with a passphrase like "correct horse battery staple". If
you have too many machines and thus can't remember all the passwords, then use
a locally stored cert protected by a somewhat larger master password.

To do some math to illustrate the security of a random 8 character long
password, someone would need to fill a 100/100 line for several hundred years
to get through them all. By that time you should have noticed the constantly
full connection, and be happy you haven't died of old age yet even after your
200th birthday.

However, 8 character passwords are not the suggested length by security
experts. They suggest using a 10 character long password, as that is also
secure in the case that your password hash somehow got lost.

For the several servers that I have, I have been more worried about the logs
from failed attempts than about anyone guessing the password. Logs wear on
the hard drive, so one might want to install fail2ban to lower the number of
writes. It also decreases the noise level in the server room.

~~~
drucken
You fail to understand that true security is not about passwords but
_control_.

Disabling remote root login,

1\. cheaply reduces attack surface

2\. necessitates assigning administration rights to specific persons and
roles.

3\. greatly increases command visibility and value of audit trails.

4\. combines extremely well with disabled terminal shell over SSH

5\. can reduce (unnecessary) system resource usage.

Etc.

There are very good reasons why this practice is standard. Unless you are
specifically creating a test or honeypot box of some kind, you would be
foolhardy to ignore it.

~~~
josephkern
You fail to understand that true security is not about passwords or control;
true security is about _risk mitigation_.

Control is an ignorant tyrant's last redoubt. Security is only about
identifying risks and dealing with those risks. All other measures are
worthless as actual security objectives.

I am not arguing with you in principle, only in particular. All of your points
are well taken, to reduce risk, but not to control.

------
ultimoo
Nice writeup. Although I prefer to automatically configure using Chef, I
didn't know of fail2ban and logwatch -- I will definitely look into those.

Also, since this post is for beginners, you should mention restarting
the sshd over the lish tty and not over the ssh ptty.

------
waverider
1\. I don't think it's a good idea to use the same account by multiple people.

2\. If you're not already using a configuration management tool (like
SaltStack, which is also a remote execution engine), adopt one; it will give
you:

\- a central point to manage all your servers

\- predictable configuration on all servers with the same role

\- a place to document configuration (and even its history if you git the
confs)

\- easy management of multiple users

3\. Use VPN and private services on private IP.

------
armored_mammal
So I'll agree with many of the commentators that several of the practices
suggested aren't 'ideal.' However, they are easy and possibly better than
having no 'practice' at all.

Just as an example, the shared user account with unique SSH keys per user.
Sure, it's obnoxious in some respects, but many of the criticisms I'm reading
in the comments like "but they could reinstate their access with a cron job
that re-adds their key when they leave" and such are silly - presumably those
who are using the shared account are developers/sysadmins with sudo
privileges. Regardless of whether they have a shared account they have
privileges to do whatever they feel like to the systems in question. Hence I'd
argue it's a fairly reasonable solution for the situation when you don't have
the time/resources to configure something more complex and you have to trust
all parties anyway.

I think there are two larger takeaways:

First - Managing multiple users across many servers and dev systems is not
easy enough, particularly for smaller organizations, and only gets worse when
you try to get more granular about who can do what.

Second - umm... no idea anymore. Forgot what was second. Automation is good?

------
petsos
I find it very bad practice to blindly pass -f to commands.

    
    
      chown deploy:deploy /home/deploy -Rf

~~~
X-Istence
"-Rf" not found... :P

flag arguments should go before any other arguments to be compatible with
most Unix systems.

------
matwizzle
I'd propose using OSSEC over logwatch & fail2ban. Ossec seems to be a bit of
an obscure tool, but a thoroughly functional one at that. Logwatch gives a bit
too much info at once to interpret properly, while OSSEC will only alert you
when something is actually up.

Ossec provides (among some other things):

* Log file monitoring, with severity levels, occurrence counting, time-based rules and auto-response. This means (for instance) you get to watch your auth.log for failed login attempts and after 10 failures fire a script to ban the offender, alert sysop by email or have hubot alert your sysops. Or whatever floats your boat.

* File integrity monitoring. Make sure no one's been mucking around with your files. It has real-time support (through inotify), but if you don't want to use that, make sure you store the database it keeps off-server for forensics if need be. Pro-tip: FIM and auto-updates are a tad unnerving.

* It can watch command output. You can use that to make sure the `netstat -tnap` output doesn't change, for instance.

* For larger/more compliant instances, it has a client<->server setup available.

~~~
daemon13
I have heard good words about OSSEC but never tried it because of its
perceived drawbacks:

- heavyweight;

- not actively developed.

Are my perceptions right? If no, how would you recommend to start using it?
Any good tutorials or other pointers?

~~~
matwizzle
The last release dates from 2012-11-19 - not the fastest of release cycles,
but it seems OK. Trend Micro is involved (and offers commercial support).

It doesn't feel heavyweight to me. It does start a bunch of daemons for all of
its processing and it has the client->server bit built right in. That may make
it feel heavyweight, I guess. But, you don't have to use the client->server
stuff if you don't want to. It'll still do all of its magic for you.

Ossec will do a lot out of the box. So, I suggest installing it with the
default ruleset and the active-response stuff turned off (the installer will
ask you). Then dive into the rules and ossec.conf (knowledge of regex is
required).

Documentation @ <http://www.ossec.net/doc/index.html>

------
magnetikonline
I wouldn't bother with fail2ban considering password based SSH logins are
disabled (which is good).

Since the author is using ufw to control iptables, better to just use "ufw
limit" rules for SSH port 22 to slow down the rate of any automated SSH bots
trying to give your server a workout.
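Concretely, that's one rule (ufw's built-in limit policy denies a source that opens 6 or more connections within 30 seconds):

```shell
# Allow SSH, but rate-limit it to slow down brute-force bots.
ufw limit 22/tcp
```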

~~~
danieldk
Indeed. I have always been a bit worried about such approaches, since they
parse log files and attackers have some control over what is written to log
files (user names and host names).

------
jedberg
To prevent having to log into root via remote console, set up a second backup
account with an ssh key and sudo access, and then put that key somewhere safe.

The chances of both that account and the deploy user being compromised at the
same time are slim.
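A sketch of such a backup account on an Ubuntu-style system (the account name and key file are illustrative; run as root):

```shell
# Create a break-glass account with its own key and sudo rights.
useradd -m -s /bin/bash rescue
mkdir -p /home/rescue/.ssh
# Paste a dedicated public key here, then store the private half offline.
cat rescue_key.pub > /home/rescue/.ssh/authorized_keys
chmod 700 /home/rescue/.ssh
chmod 600 /home/rescue/.ssh/authorized_keys
chown -R rescue:rescue /home/rescue/.ssh
usermod -aG sudo rescue   # 'sudo' group on Debian/Ubuntu; 'wheel' on RHEL
```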

~~~
plusbryan
That's a good alternative. At least for Linode, remote console access is
fairly easy, but some providers don't offer this feature.

------
KevinMS
I was wondering about this the other day, is

    
    
        PasswordAuthentication no
    

really that necessary if a strong password is used?

Honestly, being a private key screw-up away from never being able to log in
again scares me a little.

~~~
ziziyO
That's never the case. Any decent colo facility has some way for you to reboot
into single-user mode and remote in to it; IP KVM is one such tool.

------
shuzchen
I dunno about this guy, but apt-get update/apt-get upgrade takes upwards of 15
minutes on new installs (as does yum upgrade on RHEL-compatible systems).
Kinda eats into my first 5 minutes on a server.

~~~
cdjk
Look into apt-cacher-ng. I have a slow internet connection at home and it made
updates go a lot faster for multiple machines. Newer versions (in wheezy, not
squeeze) even work with yum.

------
jiggy2011
The guide recommends blocking SSH access from anything other than your own IP
address. The problem is that my IP address sometimes changes, at which point I
end up locked out totally.

So to get around this you either have to allow SSH from anywhere or you have
to use some remote KVM system. Most remote KVM systems seem to be based on
Java applets, which is not really something you want to enable on your
system.

So what is the best way to get around this? Just open up SSH and be diligent
about your SSH security? Implement port knocking?

~~~
plusbryan
This may not work for you, but I have a similar situation when I'm at home or
I'm travelling. To solve it I set up a VPN at the office (we use Meraki
hardware, so it was literally just a click) and connect to this first - then
connect to the server.

~~~
jiggy2011
I did consider that approach, but the closest thing I have to an office is my
home (with a dynamic IP) anyway!

I thought of setting up a VPN on a server, but all that would really do is
move the problem from securing SSH to securing VPN.

------
anderiv
I like the KISS aspect of this, though for accountability purposes, I prefer
each user to have their own account.

One correction for you: the sshd_config lines should have no "=" symbol.

~~~
plusbryan
Good catch! Fixed

------
itry
I use "adduser" instead of "useradd". It's more convenient, as it does the
remaining steps automatically for you. It comes with Ubuntu by default. Any
downsides?

~~~
kaoD
For some reason it's not working in the latest Arch build. Manual "useradd" is
such a pain in the ass...

------
Matsta
<https://github.com/Xeoncross/lowendscript>

I just use this bash script when setting up a new VPS. It pretty much takes
care of it all for you, plus it sets up an nginx/mysql stack already optimised
for low-RAM machines.

Also sets up 3proxy, which is handy for viewing Hulu/Netflix :)

------
alan_cx
Imagine you are a fairly normal Windows user or even sysadmin. Imagine you are
considering Linux to replace some task that a Windows server performs.

Now imagine the conclusion after reading this thread. As someone who can just
about get something useful done in Linux, this thread makes me want to never
use it again; it just looks too scary. Loads of disagreements, which seem to
have lots of dire consequences. OK, great discussion for deep geekery, but
scary as hell for normals.

Now, whenever I see those annoying posts from smug Linux users who jump in
every time a Win or Mac user highlights a problem, I have this discussion
to point to as to why Mac and Win users won't generally go near Linux.

Sorry chaps (and chapesses), I am on your side; Linux is a great thing, but
this has to be the worst advert for Linux ever.

~~~
marcosdumay
There is just one action in the post that will have dire consequences if
ignored: disabling SSH password authentication.

The rest are decisions that you don't need to rush into, and things you should
learn when they make sense. Be advised that "the proper way to configure a
server" is an elusive beast; you'll always be improving it, and it's never done.

------
philfreo
While I agree these practices seem pretty safe/standard, it's still enough
manual work that many people will just skip half of it out of laziness for
small projects.

Does anyone have good reusable Puppet (or bash scripts) published that they
use on all new servers?

~~~
vahe
Here's something I wrote very quickly (a bash script):
<https://github.com/vahek/vSetup> It does most of the stuff mentioned in the
post, but doesn't set up automatic updates or Logwatch.

~~~
harshreality
(to both corin and vahe): That kind of shell scripting is the horrorshow that
motivated configuration management tools* in the first place. Shell scripts
require a lot of added complexity to manage multiple heterogeneous servers, or
to be idempotent.

* Puppet, Chef, Ansible, Salt, CFEngine, etc.

~~~
vahe
I agree. However, there is no learning curve to get started using this script,
and it was put together in a few minutes.

------
twodayslate
Why install fail2ban and logwatch instead of csf?

~~~
davidandgoliath
Does it still have all of those root exploits?

~~~
twodayslate
I wasn't aware of any. If a malicious user has root access though, you are
fucked anyways. Please correct me if I am wrong.

------
thaumaturgy
A few gentle suggestions:

> _The days of passwords are over. You’ll enhance security and ease of use in
> one fell swoop by ditching those passwords and employing public key
> authentication for your user accounts._

ssh keys are better than passwords only because they contain (and require)
more information. On the other hand, if your dev's machine is lost or stolen
or compromised, so is your ssh key. This is especially a problem in
environments with a shared account with full access, as you have. So, it's
probably a good idea to make sure you're using a passphrase with your ssh key
(during ssh-keygen), unless you need a passwordless login for a shell script
or other automated remote system.

> _passwd deploy: Set a complex password - you can either store it somewhere
> secure or make it something memorable to the team. This is the password
> you'll use to sudo._

Not necessarily. Anybody with access to the "deploy" account can use "passwd"
to change its password to anything they like. (Edit: I'm wrong on this! passwd
does require your current password; I've just gotten used to doing it for
other accounts via sudo, which doesn't.) Changing the passwd on your own
account doesn't require sudo. For this reason, I think it's better to simply
give deploy nopasswd access to everything, and then delete and lock deploy's
password to prevent it from being used at all (passwd -d -l deploy). You'll
have effectively the same amount of security, but this way nobody will need to
remember or retrieve a complex password, and you'll prevent, say, some
accident in /etc/ssh/sshd_config from making deploy remotely accessible via a
password.
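In shell terms, that suggestion looks roughly like this (a sketch, assuming a Debian-style system with a sudoers.d drop-in directory; run as root):

```shell
# Give deploy passwordless sudo, then delete and lock its password
# so password auth can never be used for this account.
echo 'deploy ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/deploy
chmod 440 /etc/sudoers.d/deploy
passwd -d -l deploy
```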

You can do something better than this though, but it takes a little effort.
Deployment is often the same steps over and over again (an rsync or an
occasional apachectl graceful in my case). You can give the deploy user
nopasswd access to _only_ a shell script that's writable only by root; this
way, deploy can still do 90% or more of their job without ever being given
system administrator rights. You do have to be a little careful writing shell
scripts though -- $* and "$@" still trip me up once in a while.
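The restricted-script idea might look like this in sudoers (the script path is hypothetical; edit with visudo):

```
# deploy may run only this root-owned, root-writable deploy script
deploy ALL=(root) NOPASSWD: /usr/local/sbin/do-deploy.sh
```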

> _Logwatch is a daemon that monitors your logs and emails them to you. This
> is useful for tracking and detecting intrusion._

This seems of dubious security value to me -- probably better as a generic
sysadmin tool, so that you get annoyed by noisy logs and seek out and fix
minor problems instead of ignoring them. Thing is, if someone does get access
to your server, you pretty much can't trust it at all anymore. With services
like Linode, you're really better off just launching a clean new instance, re-
running your setup script (if you have one), and moving your data over.

I had to deal with the occasional intrusion in some pretty icky servers at an
ISP once upon a time. We used rkhunter for a while, but I learned pretty
quickly that successful attacks against Linux servers are plenty sophisticated
enough to alter all the basic tools that you would use to detect and remove
the rootkit.

There is one caveat: I've been playing around with the idea of setting up
rsyslogd to route syslog messages to mysql, and then using mysql replication
to have an up-to-the-second offsite copy. I'd combine that with Snoopy
(<https://github.com/a2o/snoopy>) or something similar. The point isn't to try
to clean up an intrusion, it's to see how the intrusion happened so that I
could close that hole. I haven't gotten around to setting this up yet, so I
can't say anything terribly smart about it.

Finally: if you're going to have a problem with unauthorized access to your
Linux or BSD server, it's _probably_ going to be via one of its services, not
via brute force ssh or anything similar. So, if you're concerned about this
kind of stuff (and if you're being paid to be a sysadmin, you have to be),
then you need to spend most of your attention making sure that your various
services are set up correctly
(apache/php/mysql/postfix/dovecot/spamassassin/etc. in my case).

~~~
rdl
I love ssh keys. However:

The big problem with ssh keys is not being able to _enforce_ ssh key
passphrases on users. From the server perspective, you have no idea if the
user has set up a passphrase. There are security standards which mandate
certain kinds of passwords (complexity) and are silent on asymmetric keys, so
you couldn't use keys in those environments.

The old solution was to do some post-login hack to require a password as well
(e.g. to su), or do a VPN (which could have multiple forms of auth) and then
ssh with keys after that, but the newest ssh (and I believe commercial ssh for
a long time) now supports requiring multiple authentications per login, so you
can do ssh key plus passphrase.
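With OpenSSH 6.2 or later, that requirement is a single sshd_config line (sketch; the comma-separated list means every listed method must succeed, in order):

```
# sshd_config: require a valid key AND the account password
AuthenticationMethods publickey,password
```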

There are also DLP/etc. reasons why ssh can be problematic in some
environments (i.e. where you're required to log/analyze actions taken by
users, particularly admin users). The solution there is to use a bastion host
and ssh in and then ssh out, with the user account locked down to log. SSH
Communications (the commercial ssh people) have an interesting ssh MITM box
which essentially does what all the SSL x509 MITM CA things do.

~~~
packetslave
SSH keys also never expire and are easy to copy/steal. Key-based auth is
officially discouraged at the day job (for laptop-to-desktop type things) for
exactly this reason.

~~~
linker3000
We throw an OTP (One time Password) into the mix using Yubikeys:

<http://www.yubico.com/products/yubikey-hardware/yubikey/>

~~~
thoughtpolice
Yep, I've been meaning to get one of these. I really dislike being unable to
enforce a passphrase on users' private keys.

I've also been using Duo Security as my new 2FA solution and I like it a lot
(it also has support for using Yubikeys to provide your OTP.)

------
kbuck
Why install fail2ban? You already have SSH password auth disabled, and you
only allow SSH connections from your office. Won't this just risk banning your
own office if someone's SSH client is misconfigured?

~~~
vacri
fail2ban is useful for things other than SSH - I've seen it deal handily with
people probing our asterisk server.

~~~
tuzakey
Agreed - you can set up fancy jails for people scanning other services too;
someone who probes SMTP/POP/IMAP doesn't need to hit SIP and SSH. Depending on
the scenario, you could choose to, say, block an entire netblock from hitting
SSH after a single offending IP probes a few services. Even a 10-minute jail
time will cause most attackers to give up and move along to their next victim
(unless you're being targeted.)
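A jail of that sort is only a few lines of jail.local. The filter name follows the asterisk filter shipped with recent fail2ban, but treat the paths and thresholds as a sketch:

```ini
[asterisk]
enabled  = true
filter   = asterisk
logpath  = /var/log/asterisk/messages
maxretry = 5
bantime  = 600   ; a 10-minute jail, as described above
```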

------
plainOldText
The first two things I prefer to do after I log in for the first time (on
ubuntu):

    
    
      > ufw enable
      > ufw default deny
    

This way, after I log in, I will not allow anyone else to connect to my
machine (I've had instances when by the time I changed my root password "bad
guys" had already tried to connect to my machine).

Of course after I do the server setup (which is usually a script that will
change ssh ports, install packages, etc) I will allow other services in ufw.

~~~
stevekemp
Once upon a time somebody wanted to give me access to a new system - they ran
"adduser steve", and set the password to be "steve".

Two hours later, when I read the mail, that account had already been
compromised. I knew it was a risk, but had no idea that it would happen so
_quickly_.

------
npsimons
_passwd

Change the root password to something long and complex._

And bam! Not even a full paragraph in and security fail. root login should be
disabled completely, and all use of privileges should be through sudo. Debian
sets this up for you automatically upon install if you supply an empty root
password. Of course, disabling root is just the beginning (and the first user
created needs to be locked down, as they are now essentially root).

~~~
marcosdumay
> root login should be disabled completely

Would you care to explain why?

~~~
npsimons
The foremost reason is that root is carte blanche to ruin a system. This may
be fine for a developer seat or even a desktop that is not critical
infrastructure (shared), but on a multi-user system (and this includes things
such as email and web servers), you really should have a "think twice" prompt,
along with logging of who did what and when (eg, sudo). Even if you leave root
as ssh key only login, then once an attacker has gained that key (or found a
remote exploit), they don't need a password; they're root. Set up
administrator users who are given a limited set of sudo commands, only allow
them to log in via SSH with keys, and require complex passwords for sudo, and
you have multiple layers of protection and logging. Nothing's perfect, but it has long
been recognized that root is a big gaping hole in UNIX security; that's why
things like SELinux and RBAC were created, and it's why Windows for so long
was so insecure (ie, the main user _was_ essentially passwordless root).

~~~
marcosdumay
The problem with sudo is that it's so easy to make a mistake in your command
(and in sudoers). That's a security flaw in itself.

Also, you can always sudo bash, or sudo su.

You have a point about that extra password. But not about SELinux and RBAC
being created because of that, nor in comparing the security of a system with
root to one where everybody is root.

All said, I'm still unconvinced. The logging isn't that useful (I've tried
it), and the extra password isn't relevant enough. Also, none of them will
deter a malicious user (Linux has very weak defenses against you grabbing your
own password).

~~~
npsimons
_Also, you can always sudo bash, or sudo su._

Not if you lock down sudo properly. If you're just doing "username ALL=(ALL)
ALL", then yes, you've got a big gaping hole. Even for my main administrator
accounts sudo is locked down to a specific list of commands. As for logging, I
get an email every time sudo is run on my systems. It's not built into sudo,
but there are multiple packages which take care of such things. You can even
set it up to do remote logging to an external source.
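Locked-down sudo of that kind usually relies on a command alias, along these lines (the commands listed are illustrative; edit with visudo):

```
# Admins get only service control and log reads, not a blanket ALL.
Cmnd_Alias ADMIN_CMDS = /usr/sbin/service, /usr/bin/less /var/log/*
%admins ALL=(root) ADMIN_CMDS
```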

As I understand RBAC, it was specifically created to break up the
responsibilities of root to different roles with privileges which were then
assigned based on different access control mechanisms.

 _Linux has very weak defenses against you grabbing your own password._

This sounds like a major hole which I'm sure the security community would love
to hear about; care to elaborate? Just to pick one example, how does one grab
a password entered over an ssh session?

------
bluesmoon
We do something similar, though we have a script that sets up the box. We use
linode as well, so the script deploys a new box with a complex root password
and my id_dsa.pub file. It also sets up /etc/skel and the profile files so
that useradd uses them later.

We don't use a single deploy user though, instead, each user with deploy perms
is in the sudo group.

------
giulivo
I think you're doing it wrong.

Having a _shared_ account, with sudo privileges and a common password doesn't
look smart.

Also, you're forcing devs to copy their public keys around.

I think you underestimate the benefits brought by a centralized system, like
LDAP, which would also allow you to manage permissions with more granularity.

------
dave1010uk
Couple of related questions:

1) Many people seem to be recommending Puppet / Chef. How many servers or
installs do you need before this is a good ROI? (Over using odd bash scripts
or cPanel/WHM)

2) Am I right in thinking kernel updates don't get applied until the server is
rebooted? If so, how / when do you manage this?

~~~
crucialfelix
I wasted several days trying to get a good puppet setup. In the end it still
needs more work, and the deploy system is complicated. I only have 4 boxes to
manage.

Ansible looks great. Much lighter weight but it can still scale up.

Puppet and Chef would both benefit from supporting single-run, lightweight
modes of usage.

~~~
pindi
Puppet does have a single run mode of usage:
<http://docs.puppetlabs.com/man/apply.html>

I use it to manage just 5 servers, with a Fabric script that rsyncs the
manifests up to the servers then runs `puppet apply` to apply the changes.
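That workflow is small enough to be a plain shell sketch (hostnames and paths are made up; a Fabric script would do the same loop):

```shell
#!/bin/sh
# Push manifests to each host and apply them locally -- no puppet master needed.
for host in web1.example.com web2.example.com; do
  rsync -az manifests/ "deploy@$host:/tmp/puppet-manifests/"
  ssh "deploy@$host" sudo puppet apply /tmp/puppet-manifests/site.pp
done
```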

~~~
crucialfelix
True, and that's exactly what I was doing: rsync and then run puppet apply.
That's just a lot of grinding if you only made one little typo.

And I see here I have capfiles and a system to bootstrap a new server - way
too much stuff for this scale (4 servers).

------
berlinbrown
It may seem like a waste of $20 a month if you aren't doing anything big with
it.

But you can set up a Linux virtual server on Linode and go through all of
those steps on your own, and also test out your setup.

<http://library.linode.com/>

~~~
voltagex_
EC2 is cheaper if you're only using the server for a few hours a month - i.e.
to learn, like I am.

------
fsniper
I do not recommend unattended upgrades. Every upgrade, without exception, has
the potential to break a working system or your applications. Test and deploy
would be wiser.

By the way, what the heck do developers do on production systems?

~~~
RobAley
Bear in mind that in a small company, developer == dev op == sys op == network
manager == tester == toilet cleaner; i.e., they do everything, dev or
production.

As regards unattended upgrades, again, in small companies the choice is
usually between them or no upgrades (no time for the test/deploy cycle across
all platforms in use, less frequently used systems get forgotten, and so on).
As the article points out, it's safest just to go for security updates only.
If a security update breaks things, then yes, your system is down until you
fix it. If a hacker breaches your system, your system is down until you
re-deploy it, and then you have to deal with the PR fallout of lost
credentials and data. It's only anecdotal, but I've had no problems with
unattended security updates on my Ubuntu boxes in 5-ish years. Security
breach attempts are a daily occurrence. It's nice if it's not an either/or
choice, but for many it is.

~~~
marcosdumay
If you don't have time to test, you have yet another reason to not automate
updates. It's better to run them only when you are around and have some time
to fix whatever goes wrong.

~~~
RobAley
We're talking security updates here, by the time you've found time to test,
your server may well be hacked.

------
frostiebot
Kinda concerned about the use of ufw with fail2ban without a mention that
you have to do a little more work for fail2ban to actually work with ufw
(otherwise fail2ban causes interesting issues and clobbers ufw's rules).

~~~
GaryGapinski
I haven't seen this problem on recent versions of Ubuntu (10.04, 12.04).
fail2ban bans will preempt regular ufw rules (as they should).

------
cdjk
I've never understood the point of fail2ban if you disable ssh password
authentication. Yeah, it might eliminate some spam in the logs, but if you
only allow key-based authentication that doesn't really matter.

~~~
petejansson
Keys can be brute-forced.

~~~
dchest
No.

~~~
drivebyacct2
Pretty much. Not to be a jerk, but it's a bit alarming from someone claiming
to do security consulting.

------
cbracco
This will get buried but I recently posted a line-by-line guide of my first
time setting up a VPS server, maybe it will help someone!
<http://cbracco.me/vps/>

------
plasma
Is there a web app that I can just control my servers from? I'd like to use
puppet/chef, but I read that it's not 100% working for Windows yet (I need
to manage both) and it sounds overly complicated.

------
berlinbrown
What do you do about all of those Chinese hackers hitting your sshd server?

I have 30 different IPs and fail2ban doesn't seem to ban them.

~~~
luser001
Just curious. Have you enabled password logins on your servers, or do they
only allow RSA key-based logins?

~~~
berlinbrown
I am new to this. I don't allow password-based SSH logins.

The hackers fail according to sshd, but logwatch shows a list of Chinese and
Russian attempts.

I was hoping my firewall would block them. I tried entering a block of IPs,
but some of the same ones are still connecting.

I don't know what this means:

    
    
      Illegal users from:
         undef: 20 times
         183.60.177.246: 7 times
         217.14.134.68: 7 times
         219.149.30.170: 6 times
    

------
serf
don't vim sudoers. the days of the password are not over. fail2ban is next to
useless with key auth.

------
sneak
This is terrible. Never let developers log into servers. The output from your
devs is commits.

------
niggler
How does UFW compare to the rhel firewall configuration utilities?

------
itry
Is there a benefit of using ufw instead of iptables?

~~~
cmpitg
It's easier to get started with UFW, and you don't need to learn much about
networking to be able to set up your firewall. I haven't checked this myself
(I prefer iptables and have never actually used UFW), but it's said that UFW
is a front-end for configuring iptables.

------
snambi
This is very useful.

------
vahe
I also recommend changing the default SSH port.

~~~
afhof
Don't do this. It adds almost no extra security and makes things hard for
routers that prioritize port 22 traffic as interactive.

~~~
kristofferR
Sure, it won't stop dedicated manual intrusion attempts, but it will prevent
a ton of automated bots from even trying to connect with common passwords
over SSH.

~~~
corin_
Which is irrelevant if you have any one of: strong passwords, no passwords,
fail2ban

~~~
lhnn
Which is relevant if you're one to actually look at your login attempt logs.

------
outside1234
My take on all this is that you should spend 0 minutes on a server and be
using a PaaS solution. There might be cases where you can't, but increasingly
I can't find them.

------
martinced
I honestly do not understand why you need a _deploy_ account with sudo access.

I much prefer a _deploy_ account which DOES NOT have sudo access. I add
firewall rules transparently redirecting 80/443 to the non-privileged ports
that the webapp is actually listening on. Hence there is no need to be _root_
/ sudo'ed for the _deploy_ account.
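The redirect itself is one nat-table rule per port (sketch; 8080/8443 are illustrative listener ports):

```shell
# Send privileged web ports to unprivileged ones so the app needn't run as root.
iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-ports 8080
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443
```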

You could then get a bit fancier and set the login shell for the _deploy_
account to /bin/false or something fancy (like a fake honeypot shell) to add a
bit of "defense in depth" in case someone exploiting a hole in your web server
manages to drop down to a shell. You'd then use another account to do the
(automated) deploy/start/stop/update/patch, whatever.

I'd also say that during the first five minutes you should set the default
firewalling rules to REJECT anything and then only whitelist what is actually
allowed.

~~~
shabble
> _I'd also say that during the first five minutes you should set the default
> firewalling rules to REJECT anything and then only whitelist what is
> actually allowed._

At which point your 'net connection blips and suddenly you wish you'd paid
extra for console access.

~~~
pavel_lishin
That's how I felt about the line where he recommends only allowing the deploy
user to connect from a certain number of white-listed IPs. That's great and
secure until you get a phone call while you're at the airport or on vacation,
and then you hope that you can SSH into a machine at the office just so you
can tunnel through from a white-listed IP.

~~~
shabble
Yeah, I've always felt nervous about doing whitelisting (and to some extent,
things like fail2ban) for reasons like that. Having a couple of backup hosts /
friends with shell-servers listed who you can bounce through might mitigate it
somewhat though, whilst still avoiding 90% of the portscans-from-(china|mars)
stuff.

One idea I had was to enable something annoying and kooky like port-knocking
or OTP pass/connection enabled 'backdoor entrance' for emergencies, but ended
up being too lazy, and realised it was just expanding the attack surface.

To my original point: we had an interesting setup for one network where
firewall changes were all made via a script which required the change to be
applied, and then confirmed after a short delay with a different command;
otherwise, after 5 minutes, the ruleset would revert to the previous
known-working one. It definitely saved some downtime and late-night DC trips.
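That confirm-or-revert trick can be sketched in a few lines of shell (an untested sketch with made-up paths; a real version needs locking and cleanup):

```shell
#!/bin/sh
# apply-fw.sh: apply new rules, revert automatically unless confirmed in time.
iptables-save > /root/fw.known-good
iptables-restore < "$1"            # apply the candidate ruleset
( sleep 300
  # if the operator never confirmed, roll back to the saved rules
  [ -f /root/fw.confirmed ] || iptables-restore < /root/fw.known-good
) &
# a companion confirm-fw.sh would simply: touch /root/fw.confirmed
```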

