

Things I set on new servers - Treffynnon
http://simonholywell.com/post/2013/04/three-things-i-set-on-new-servers.html

======
brokentone
This article is all HTTP-daemon related, which is a very small subset of server
config...

I don't want to devolve into all my own tips and then have arguments about
fail2ban, but I have one tweak to the TRACE/TRACK note: there are some
silly things people can do with any request other than POST/GET (there are also
silly things people can do with those, but that's primarily app security).

I disallow OPTIONS, HEAD, TRACE... everything, with this sort of an Apache
config:

    <LimitExcept POST GET>
        Require valid-user
    </LimitExcept>

The biggest place this can cause issues is with load balancers using a HEAD
request to check whether a server is up.
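
For what it's worth, another common way to whitelist methods (and keep HEAD
available for the balancer) is a mod_rewrite rule along these lines -- a
sketch, not my exact config:

    RewriteEngine On
    # reject any method other than GET, POST or HEAD with a 403
    RewriteCond %{REQUEST_METHOD} !^(GET|POST|HEAD)$
    RewriteRule .* - [F]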

~~~
apendleton
This is probably over-broad. HEAD is also used by browsers to determine
whether or not to re-request cached content; disabling it will still allow
pages to load properly, but will waste bandwidth and slow down page loads as
browsers re-request unchanged content.

OPTIONS is necessary if you want to offer APIs that support CORS:
<http://en.wikipedia.org/wiki/Cross-origin_resource_sharing>
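
For context, a CORS preflight is an OPTIONS round-trip roughly like this
(hostnames invented), so rejecting the verb outright breaks cross-origin API
clients:

    OPTIONS /api/widgets HTTP/1.1
    Host: api.example.com
    Origin: http://app.example.com
    Access-Control-Request-Method: PUT

    HTTP/1.1 200 OK
    Access-Control-Allow-Origin: http://app.example.com
    Access-Control-Allow-Methods: GET, POST, PUT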

~~~
ntoshev
Why would a browser use HEAD instead of conditional GET to re-request cached
content? It would require one more round-trip if a refresh is actually
required.

~~~
apendleton
I've definitely observed both behaviors in Firefox, and to be honest, I'm not
entirely sure why. I'll see what I can dig up...

~~~
mbreese
HEAD just returns the headers, not the content. So, it is a faster way to see
if content has been updated. If it has, then the client can request the
content with a GET.

~~~
apendleton
Right, the question was why not use a _conditional_ get, which either returns
content if the thing has been modified, or a 304 Not Modified and no content
if it hasn't -- this accomplishes the same thing as HEAD with only one
request. Above, though, I wasn't sure why one or the other wasn't always
used...
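
For reference, a conditional revalidation is a single round-trip along these
lines (URL and date invented):

    GET /logo.png HTTP/1.1
    Host: www.example.com
    If-Modified-Since: Wed, 27 Mar 2013 12:00:00 GMT

    HTTP/1.1 304 Not Modified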

I looked it up, though. It looks like you have to have an ETag to do a
conditional GET, so browsers use HEAD if the original response (the cached
one) didn't include one.

~~~
vsync
> I looked it up, though. It looks like you have to have an ETag to do a
> conditional GET

Nope, not the case. May I ask where you read that? It differs from my reading
of RFC2616 and my experience.

If the server gave you an ETag though you're supposed to include it in future
cache-conditional requests. Section 13.3.4
(<http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.3.4>)
describes the interaction between ETags and modification dates.

~~~
apendleton
A blog post that's probably not worth linking to, since apparently it was
wrong.

Am I correctly inferring that you need at least a Last-Modified in the
response to do a cache-conditional request, though?

~~~
sluukkonen
Either Last-Modified or ETag is sufficient. Most servers will add both,
though.

Edit: The difference is that Last-Modified allows clients to use a heuristic
to determine if the response should be cached for a certain duration (unless
explicit Cache-Control or Expires headers are used). The heuristic isn't
specified, but a 10% fraction of Date - Last-Modified is suggested, and in
fact, that is what e.g. Internet Explorer does.
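
A rough worked example (dates invented):

    Date:          Sat, 06 Apr 2013 12:00:00 GMT
    Last-Modified: Wed, 27 Mar 2013 12:00:00 GMT

    content is 10 days old, so the heuristic freshness lifetime is ~10% of that, i.e. about 1 day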

------
zalew
_> Hide your versions_

_> Another super simple, but often overlooked adjustment to make is to
prevent the server from broadcasting too much information about itself. Whilst
attackers may be able to source the information in other ways, the harder we
make it the more likely potential attackers are to give up and move on to a
softer target. It is similar to introducing yourself to someone and giving
them specific details about yourself such as "I rarely lock the back window
when I pop into town"._

newsflash: exploits will hit the vulnerability anyway, without asking about
your name and version.

could somebody explain to me why these _security by obscurity_ measures are
still popular? especially in an age when running bots hitting every public
facing piece of equipment is so cheap?

~~~
nisa
> especially in an age when running bots hitting every public facing piece of
> equipment is so cheap?

For me exactly that _is_ a reason to disable versions. If I were the
attacker I would probably do a

    SELECT hostname FROM scanresults WHERE webserver = 'vulnerable-version'

Besides that, a header like:

    Server: Apache/2.2.16 (Debian) DAV/2 SVN/1.6.12 PHP/5.3.19 mod_ssl/2.2.16 OpenSSL/0.9.8o
    X-Powered-By: PHP/5.3.19

...gives me a lot of clues for adjusting my exploit payload.

In practice it may be an obscure attack vector - who knows. But more
information always helps the attacker.

If there is a database somewhere (there is) - I don't want my specific
versions in there. Who knows what kind of exploit will be on the HN frontpage
tomorrow.

"Security by Obscurity" is a concept used by bad cryptographic ciphers. See
Kerckhoff's Principle. I don't see how this applies to server information
disclosure.
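
For reference, suppressing most of that banner takes only a couple of
directives (a sketch assuming Apache with mod_php):

    # Apache: send just "Server: Apache", no module/version list or error-page signature
    ServerTokens Prod
    ServerSignature Off

    ; php.ini: stop PHP adding the X-Powered-By header
    expose_php = Off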

~~~
mhurron
Exploits that would 'need' to know the version are scripted. They spew the
attempt anyway. If it works it works, if it doesn't the script moves on.

You don't really get anything by disabling the server from reporting its
version. On the other hand, it doesn't take any time to do it.

But don't think you gained anything by disabling it.

~~~
bigiain
I suspect there's some small measure of protection still - if there's a brand
new exploit available (say, the current WordPress W3 Total Cache and WP Super
Cache problems), I'd rather not have my sites rise to the top of the list of
addresses for the botnets to start probing - because I'd left the version
information publicly available (or made those sites trivially findable via a
googledork).

It won't protect me, but it might buy me a little extra time to respond...

------
tachion
The title is a bit misleading, as it mostly applies to web daemon
configuration rather than to the servers themselves. While it's a nice
addition to a default configuration, it is extremely narrow and not really
enough when it comes to securing a (web or any other) server, and a novice
reader can get the impression that this is all there is to it.

~~~
Treffynnon
I think I do a reasonable job of making that clear with the introductory
paragraph, but yes, it is a point that cannot be overstated. They are just
three little things that do not constitute a complete security policy.

------
columbo
My #1 install on any server is fail2ban, then it's server specific stuff.

~~~
euroclydon
If you're using SSH keys exclusively, what does fail2ban really buy you? The
HTTP monitoring sounds like it might be useful, but also might be an easy way
to reject the Googlebot and de-list your site.

~~~
jallmann
Fail2ban can be used to rate-limit nearly all services that have the potential
for abuse. I have it set up to track connection and message frequency, ban
based on message content (not following protocol, overly large, malformed,
etc.), and so forth.

Having a system like F2B is nice because it compartmentalizes abuse handling
and you can set up rules in one place for all your services, both user-facing
and not. Since the rules/actions are user defined, anything is possible --
I've had actions that send alerts to Twitter, a system that distributes bans
to hundreds of servers, and centralized logging that gives very good insight
into how users are poking around.
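
A minimal sketch of what such a rule pair looks like (the jail name, log path
and regex here are invented for illustration, not from my setup):

    # /etc/fail2ban/jail.local
    [myapp-abuse]
    enabled  = true
    port     = http,https
    filter   = myapp-abuse
    logpath  = /var/log/myapp/access.log
    findtime = 60
    maxretry = 20
    bantime  = 3600

    # /etc/fail2ban/filter.d/myapp-abuse.conf
    [Definition]
    failregex = ^<HOST> .* "(PROPFIND|CONNECT|TRACE)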

------
ook
Nothing

Automated network installer (eg cobbler) installs OS, which installs a
configuration management system (eg puppet or chef or ansible) which sets up
the server appropriately.

Done correctly, someone logging into a non-development server should be an
alertable "red flag".

Even for a development server you should use Veewee, Vagrant, BoxGrinder, etc.
to produce something consistent and repeatable.

"Editing a file in /etc directly 'by hand' should be an obscure art done to
teach internals or to scare children on Halloween." -@yesthattom

~~~
lifeguard
Ideal world, meet actual world.

~~~
ook
Automating infrastructure and treating it like code is a similar shift in
mindset to embracing test driven development for the first time.

It appears daunting, but once you get over the hump you can't imagine how you
ever survived without it.

If you have a mythical quiet Friday afternoon, install Vagrant and try to
replicate your manual setup steps for a new server, then share it with your
development team.

Even just having the steps required to set up a development environment
represented in re-usable versioned code is worthwhile.

The next time a new hire starts, that afternoon repays itself when they have a
fully working dev environment ready in less than an hour.

Going from that to doing this stuff in production is a lot of work, but you
get similar payoffs at every step as long as you're willing to invest a
little time.

~~~
lifeguard
You don't have to convince me it is good. In my entire career I have never
seen a company that manages even 50% of their servers this way. It has always
been a situation of engineering 'being too busy cutting wood to make better
saws'.

~~~
twic
The company where i work manages >90% of its servers that way. This company is
blessed with some extraordinarily bloody-minded sysadmins who made the time to
sharpen the saw in the face of mounting piles of wood to cut.

~~~
lifeguard
Your sysadmins manage themselves where you work?

------
lifeguard
1. install rkhunter

2. update it: # rkhunter --update

3. generate checksums of important files: # rkhunter --propupd

*NOTE: when normal system s/w updates are installed, some of the files watched by rkhunter may change and thus generate false warnings. rkhunter --propupd needs to be run again after such updates to refresh the checksums.
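
The periodic scan itself is then something along these lines (flags per the
rkhunter man page; distro packages typically ship a similar cron job):

    # run all checks non-interactively, reporting warnings only
    rkhunter --check --cronjob --report-warnings-only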

~~~
hostyle
What stops an attacker running rkhunter --propupd after he/she has installed
backdoors in a few of your binaries? I realise what rkhunter does (searches
for common backdoors), but I can't see what advantage the --propupd argument
adds.

~~~
lifeguard
It is useful if accounts other than root have been compromised, like the web
server's.

------
monokrome
How does an article like this actually get points on HN? It's called "Things I
set on new servers", but it should really be called "Things That I Configure
in Apache", and the suggestions aren't even anything all that interesting or
useful. Do people really even use Apache any more?

What?

------
nisa
Also check for this dangerous configuration bug if you are using FastCGI and
PHP (I'm not sure if this also applies to other FastCGI applications)

<https://nealpoole.com/blog/2011/04/setting-up-php-fastcgi-and-nginx-dont-trust-the-tutorials-check-your-configuration/>

------
hackmiester
When silencing Nginx's version number, what is the value in continuing to
supply the "Nginx" header to indicate which product it is?

~~~
apendleton
Two reasons, I think. One (as a sysadmin once explained it to me) is that
there's a certain degree of public good that comes from publicly supporting an
open-source project by advertising that you use it in your headers. Services
that aggregate web server market share (Netcraft, etc.) use the Server header
to build stats.

It's also not _that_ hard to fingerprint webservers (though not necessarily
their specific versions) without making use of the Server line by testing for
other subtle differences in behavior (see, for example,
<http://82.157.70.109/mirrorbooks/apachesecurity/0596007248/apachesc-CHP-2-SECT-3.html>). So on balance, hiding the version makes it hard to
single you out for vulnerabilities in specific versions, but hiding the server
name altogether doesn't really add much.

------
olalonde
Does anyone have a link to an article explaining TRACE attacks? I had never
heard of it (after 8 years of web development!).

Edit: Oops, should have Googled before commenting:
<https://www.owasp.org/index.php/Cross_Site_Tracing>
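
In short: TRACE echoes the request back as the response body, so paired with a
client-side injection it can expose headers (e.g. cookies) that scripts
normally can't read. Roughly (host and cookie invented):

    TRACE / HTTP/1.1
    Host: example.com
    Cookie: session=secret

    HTTP/1.1 200 OK
    Content-Type: message/http

    TRACE / HTTP/1.1
    Host: example.com
    Cookie: session=secret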

------
purephase
For Rails folks, you can accomplish a lot of this with the excellent Twitter
gem "secureheaders".

<https://github.com/twitter/secureheaders>

~~~
TallboyOne
Can anyone explain how to do this with nginx? I'm not using Rails, but I would
really like to have all of those headers. I'm not sure what's best though.
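
A rough nginx equivalent for a few of the same headers would be add_header
directives (the values here are just placeholders to adjust per site):

    # in the server (or http) block
    add_header X-Frame-Options           "SAMEORIGIN";
    add_header X-Content-Type-Options    "nosniff";
    add_header X-XSS-Protection          "1; mode=block";
    add_header Strict-Transport-Security "max-age=31536000";
    add_header Content-Security-Policy   "default-src 'self'";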

------
treve
What about a great default content security policy? :)

------
shanbady
In the examples you provided, I believe you meant to say `top != this` instead
of `top != self` as you had it. minor edit

------
Rickasaurus
Gotta wonder why these aren't defaults.

------
cabirum
PHP also uses a cookie named PHPSESSID to store session IDs. Use session.name
to specify a different one.
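
e.g. (the replacement name is arbitrary):

    ; php.ini
    session.name = SID

or ini_set('session.name', 'SID') at runtime, before session_start().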

~~~
laumars
That has no impact on security. If an attacker can read your cookies then it
doesn't matter whether your sessions are called PHPSESSID or WETTROUT; they're
still readable.

~~~
evan_
It only matters if you're trying to disguise the fact that you're using PHP,
as the article suggests.

~~~
laumars
Which again, doesn't add any additional security.

