Hacker News
Ask HN: How do you handle server security/intrusion monitoring?
35 points by ericd on Nov 22, 2009 | 20 comments
I've been wondering for a while now - how do owners of small webapps monitor intrusion attempts, scan for past successful intrusions, and prepare for the possibility of one?

Sometimes the effects of an intrusion are obvious (a blackhat takes down the site), but I'm more worried about the case where someone plants something on the system to monitor it and send info back out. How would you detect this?




A really skilled intruder won't leave any obvious clues behind if he gets in. You are likely to encounter this only if someone is deliberately targeting your server for some reason.

That said, most intrusion attempts are unsophisticated, analogous to a person walking around a parking lot trying car doors to see if any are unlocked.

Tools I've used (it's been a while so some of these may be dated) include logwatch to check for suspicious log activity, portsentry to watch for suspicious connection activity, and tripwire to spot modified system files.


Tripwire is useful for after-the-fact detection - at least as long as the attacker doesn't get root - but logwatch defaults to letting unrecognized entries through, which means that any change in log formatting by any program can silently defeat your "security".


Awesome, those tools sound great. I hadn't heard of anything like tripwire, but it's one of those things that I was hoping existed.


sudo apt-get install rkhunter chkrootkit logcheck monit

rkhunter and chkrootkit will both look for insecure configuration, permissions, and known rootkits. rkhunter will also maintain a database of the signatures of your system binaries. I assume there is overlap between the two. I don't really care, I run both.

logcheck will watch your logs and email you when something shows up in a log that doesn't match a configurable set of regular expressions. When I add a new piece of software to a system, logcheck goes berserk. I go through the emails I receive and add new regexes to ignore the harmless notifications until they finally taper off, leaving me with only the true exceptions.
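For reference, logcheck's ignore rules are plain extended-regex lines dropped into files under /etc/logcheck/ignore.d.server/ (one pattern per line, matched against whole syslog lines). A hypothetical rule to silence a harmless notification from a made-up app might look like:

```
# /etc/logcheck/ignore.d.server/myapp  (filename, app name and message are made up)
^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ myapp\[[0-9]+\]: cache warmed in [0-9]+ms$
```

The leading timestamp/hostname portion follows the convention used in logcheck's shipped rules, so the pattern only matches that exact message and nothing else.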

Look over the documentation for all three of these, and make sure that they're actually running. Personally, I don't want an email unless something is wrong. If I get an email every day, I'll start to ignore it. I don't think I'd notice if an email I was ignoring didn't show up, so I optimize for no output unless it's something I should read. I chase down the cause of every exception in my logs. No excuses. If it isn't an exception, I add a regex to logcheck.

I run monit because it is pretty easy to set up, and it sends me an email when a server process gets changed or restarted. It lets me enforce limits on a server process, so I can potentially get warning of an attack as it is happening. I suspect monit is nowhere near the most sophisticated tool for this purpose, but it meets my needs very well.

As for other basic security stuff... Don't use port 22 for ssh, use something much higher so that you won't get the continuous pounding on the front door that an ssh server on port 22 receives. Disable root login, as well as password login. Use certificates to log in to your machine. Configure servers to listen to 127.0.0.1 whenever you can. Install a software firewall to only allow access to http and ssh anyway.
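A minimal sshd_config sketch covering those SSH points (the port number is just an example, not a recommendation):

```
# /etc/ssh/sshd_config (fragment)
Port 22022                  # any high, unscanned port instead of 22
PermitRootLogin no
PasswordAuthentication no   # keys only
PubkeyAuthentication yes
```

Remember to restart sshd and test a new login in a second terminal before closing your existing session, or a typo here can lock you out.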

Unless you've done something stupid with your web app, you've made it incredibly difficult to impossible for script kiddies. If you piss off an elite black-hat, it's probably safe to assume they can own you, and you likely won't know it. So make good backups, with regular offline backups that can't get destroyed by a hacker with a particularly malicious bent.


1 - Use a minimal server base install. Start with a Linux/BSD install that has as little running as possible for your needs. I've found both ubuntu-server and FreeBSD work for me.

2 - Install things that you understand. I don't install large meta-packages. I also install most things from source: nginx, ruby, gems, postgresql. All these things make/install without much effort. When compiling, only include options you need.

3 - Lock down SSH. Login only with SSH keys, protocol 2. Also, change your sshd port to something less likely to be scanned. This way, your SSH log files won't have too much garbage in them and you actually can check them with little effort. You can find lots of tutorials on locking down ssh.
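A quick sketch of the key-based login setup, assuming a machine with an OpenSSH client; the file path and server name are illustrative (and ed25519 keys require a reasonably modern OpenSSH - RSA was the norm in 2009):

```shell
# generate a key pair; use a real passphrase in practice instead of -N ''
ssh-keygen -t ed25519 -f /tmp/demo_ed25519 -N '' -q
# then install the public half on the server, e.g.:
#   ssh-copy-id -i /tmp/demo_ed25519.pub user@yourserver
ls /tmp/demo_ed25519 /tmp/demo_ed25519.pub
```

Once the public key is in ~/.ssh/authorized_keys on the server and key login works, disable password authentication in sshd_config.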

4 - Document _everything_. If all else fails, your docs enable you to recreate your server environment within a few hours.

If you do all this correctly, when you upgrade your server (e.g. apt-get upgrade) you will only be upgrading basic distro stuff, mostly security updates.

oh yeah, _little_things_ like making sure your ruby/blub app processes are connecting to your db with an "app" id instead of an "admin" one is a choice decision ;).


The FreeBSD handbook goes over some good practices; treat security like the layers of an onion.

http://www.freebsd.org/doc/en/books/handbook/security-intro....

Some things we found useful

* don't have ssh bound to a public interface. You have to VPN in first and then ssh to the private IP. We only have 80 and 443 open on public interfaces

* portaudit is awesome. It keeps you up to date if there is an exploit for software that you run, not just "is there an update". portaudit will tell you if there is an unpatched vulnerability out there so you can make a decision on what to do.

* paranoia - don't trust anything. We run WP on a DMZed box that doesn't have access to anything, and mod_proxy all requests to this box from our edge servers. Jails are also great for this; give everything the least amount of permissions necessary to run

* monitor your servers for any weird CPU usage or bandwidth. Cacti is quite nice

You can also block all outbound traffic from your servers that isn't a response to an incoming request.
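On Linux, that outbound lockdown might look roughly like this (a sketch only; it assumes the conntrack match is available, and you would add explicit exceptions for anything the box legitimately initiates, such as package updates, before dropping the rest):

```shell
# allow replies to connections that clients initiated, plus DNS if the box resolves names
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
# then drop everything else leaving the machine
iptables -P OUTPUT DROP
```

A planted backdoor that tries to phone home then can't open new outbound connections, and its blocked attempts show up if you log the drops.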


* you use logs so you can be aware of suspicious events at network, system and app level

* you have scripts to analyze these logs and notify you if anything suspicious happens. This is faster, can be done more often and is less tedious for you

* you have some reference files somewhere so you can compare checksums


Logs are great for forensic investigation after one has been alerted to an exception; my question is more about expanding alert coverage to various other kinds of suspicious activity.

Do most people just make a strong password and trust that that keeps the baddies away?

I think one thing that would be interesting would be a utility that checks the checksums of files and contents of directories of various installed components against canonical versions stored elsewhere. Does this exist?
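It does exist (Tripwire is the classic example), and the core idea is just a checksum manifest, which can be sketched with sha256sum alone - the file names here are made up:

```shell
# record canonical checksums once; keep this manifest somewhere the server can't write to
mkdir -p /tmp/canon && printf 'v1\n' > /tmp/canon/app.conf
sha256sum /tmp/canon/app.conf > /tmp/canon.sha256

# later: re-verify; prints "<file>: OK" per file and exits non-zero on any mismatch
sha256sum -c /tmp/canon.sha256
```

The "stored elsewhere" part is the important bit: if the manifest lives on the compromised host, a root-level attacker can simply regenerate it.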


Check out http://OSSEC.net, it's a host-based intrusion detection system. It does everything you described and more. I am slowly working on a better web UI with real-time alerts.


Wow, that looks great. Any idea how it compares to the other tools mentioned? (tripwire, portsentry, logwatch)


There isn't one single solution that covers all angles. So yes, logs are useful, but they won't protect you on their own. And yes, they can help you discover an intrusion.

Regarding authentication, you could use public/private key auth for ssh. This is stronger than passwords. You could also use two-factor authentication (look at YubiKey for an example).

And yes, such a tool exists: for instance, Tripwire.


Good tip on the pub/priv key auth for ssh. Also on tripwire. Thanks!

Custom logs generated by your own app for things like login attempts, etc, are definitely useful for detecting issues. I was referring more to things like apache connection logs, and other logs generated by other programs - I would imagine that there's a huge range of things to look for.


When I say logs, I mean all sort of logs (network, system, app...). So that include auth logs, apache logs and so on.


Monitoring intrusion attempts is almost always a waste of time; secure the server instead.

Set up the server in a secure way, with many layers of defense. Any applications that accept data from the internet should be very robust. After that, add mitigating stuff like SQL permissions, a chroot jail for the web server, running as a totally unprivileged user (i.e. not nobody), and so on and so forth. Pick a secure kernel, too, and make sure that your software is compiled with exploit mitigation techniques (e.g. ASLR, W^X). Throw out anything that needs patching more often than once a year (once every ten years is a better goal). Don't pick a weak password and leave SSH password logins enabled.

If you are interested in detecting intrusions, a first layer may be a list of checksums of crucial files. This is built into my favourite OS, OpenBSD, but others can use Tripwire, AIDE or any of the other such programs. Note that an attacker with root privileges can typically bypass this.

EDIT: Look at mod_security, too, if you use Apache and cannot trust the web developer (which may be you). Just accept that it will, in the end, not save you.

EDIT2: And don't store passwords in plain text. Or MD5.


Is per-user salted MD5 not secure? I'm using SHA1 nowadays, but I have an old app using MD5.


There are no currently known cryptographic weaknesses in this application of MD5, and using salts is adequate protection against rainbow tables. But a weakish password can still be guessed far too quickly.

Using a proper key derivation function like OpenBSD's bcrypt - a deliberately slowed Blowfish-based hash, with a suitable number of rounds - John the Ripper tries about 60 passwords per second on my hardware. Against salted MD5, 3000. (Also see PBKDF2, which is another good solution.)

SHA1 isn't really that much better here, sadly - the problem is not cryptographic weaknesses, it's weak passwords and a function that can be evaluated too quickly.
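To see the difference between schemes concretely, `openssl passwd` can emit both the old md5crypt format and the much slower sha512crypt (bcrypt itself isn't available through openssl, and the `-6` option requires a reasonably recent OpenSSL; the salt and password here are dummies):

```shell
# $1$ = md5crypt: a single fast MD5-based pass, cheap to brute-force
openssl passwd -1 -salt demosalt hunter2
# $6$ = sha512crypt: thousands of rounds by default, far slower per guess
openssl passwd -6 -salt demosalt hunter2
```

Both outputs embed the salt and scheme identifier in the hash string itself, which is how login can verify a password later without storing any extra metadata.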


Heh... Basically, we don't.

Sad, really. It's mostly due to apathy and no mandate from higher-ups to pursue stronger security; in general our farm is only strong enough to keep generic attacks at bay. 0-days, man-in-the-middle, hell, even some SSL vulnerabilities are still open against us. We have a "security team", so we defer most of the security precautions to them and just keep our base software configured not to let anyone unauthorized in.

However, utilizing host-based as well as network-level intrusion detection tools, combined with kernel-level and userland security patches and mandatory access control policies, is the best all-around solution: grsecurity for kernel patches, an OS base with strong security in mind (many newer distros use things like stack-smashing-protection patches to glibc), Snort for network IDS, tripwire/portsentry/others for host-based IDS, and SELinux configured on all your hosts to further enforce your access requirements.

And of course, diligence: monitor security lists (particularly the one for your distro/kernel) and update immediately when a patch comes out.


Using OSSEC (open source).

Seriously, it puts rkhunter, logcheck and all these old tools to shame.

Check http://www.ossec.net

It monitors your logs, file changes, etc all by default, simple to install, etc.. I love it.


+1 for OSSEC. It is my preferred one for sure. I use that to monitor my Apache logs, Wordpress, SSH and a few more.


Hello,

I'm the technical co-founder of an IT security start-up.

Here are my thoughts on comprehensive security for a server exposed to the Internet (typically a web server in a data center; many of the tools mentioned are Linux ones, but the ideas are the same).

Sorry about the formatting; some day I'll get around to writing a blog article or something.

You need to consider many aspects of security.

In general, measures fall into one of the three classic categories: Protection, Detection, and Response & Recovery.

1) Protection means hardening your server (system and apps).

The three biggest avenues of exploitation from the outside are: weak passwords, out-of-date or unpatched applications or systems, and running unneeded services or unsafe versions when more secure alternatives exist.

These are the three main doors or external "attack vectors" into your network. If you protect these three doors you are far more secure than the average organization. There are other possible doors (for example, a misconfigured server, or a web form that can be exploited with an SQL injection), but these three are almost always present in exposed networks (in order of importance):

a) Weak passwords: login access with poor passwords (passwords that are words in a hackers' dictionary - a simple word, or a combination like 123456) is probably the single most exploited vulnerability on the Internet.

Solutions:

- Use strong passwords (most important). Also:

- Log access

- Filter the login access (in the firewall, based on IP origin for example).

- Use a password policy that enforces strong passwords

- Carry out your own password brute-force / dictionary attack.

b) Unpatched software. Exposed services (web, mail etc) and applications (like a web-based CRM application etc) that are not updated usually have well-known vulnerabilities that malicious hackers look for and have the tools to exploit.

Solutions:

- Update system and application software (ideally automatically, periodically)

- Subscribe to the software security newsletter (if it exists) or keep track of its development

- Periodically run an external vulnerability assessment

c) Unnecessary or unsecured services running. Exposed services or applications that are not used or needed are just more ways for intruders to get in. Sometimes organizations don't even know that they are there; in the past especially, some server installations would install unneeded services by default. The other side of this is running insecure applications when a similar but more secure solution exists. For example, an FTP server transmits all information (including passwords) in clear text over the network, so an encrypted solution like SFTP/SCP is preferred.

Solutions:

- Remove unnecessary software packages.

- Periodically run an external port scan (see nmap for example)

- Periodically run an external vulnerability assessment (see Nessus, Qualys, or have an independent third party do it).

- Look for safer alternatives to server software.

As other protection and hardening measures:

- Use a firewall to block by default all ports that are not in use, implement basic safety measures (for example, drop spoofed addresses: no connections from the outside pretending to be from an internal IP address) and rules to mitigate DoS (limit the maximum number of simultaneous connections from a particular IP address). Linux's netfilter (iptables) can do all this.
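In iptables terms that looks roughly like the following (a sketch, not a complete ruleset; the interface name, ports, connection limit, and address range are examples, and the conntrack and connlimit matches are assumed to be available):

```shell
iptables -P INPUT DROP                                        # default-deny everything inbound
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i eth0 -s 10.0.0.0/8 -j DROP               # spoofed "internal" source on public interface
iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j DROP  # crude per-IP DoS limit
iptables -A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT
```

With the policy set to DROP, anything not explicitly accepted (unused ports, over-limit clients, spoofed sources) is silently discarded.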

- Protect management login pages (including ssh) with port-knocking and/or source-based IP filtering.

- Web server (Apache): turn off its signatures, deny access by default and explicitly allow only what you serve, uninstall unused modules, limit upload/POST sizes, check with a benchmark tool (ab etc), check for vulnerabilities with specialized tools (nikto, others), consider mod_security, and optimize the web server (this helps in case of high traffic or DoS): enable compression etc (see YSlow).

- Security by obscurity: this is fine as long as you know what you're doing. For example, change the ssh port from 22 to something else as other people suggested.

Besides the server itself, you also want to protect its availability. Probably the simplest and cheapest approach is to have a backup server and use DNS failover (the hardest part is synchronizing databases).

2) Detection.

There are several general tools and ideas for intrusion detection:

- Logs. Logs are a sysadmin's best friend. There are auxiliary tools and whole systems to manage/archive logs, from parsers to complex apps like the ones offered by Intellitactics for example. Tools like Linux's logwatch can email periodic status summaries.

- Monitoring tools. Sudden unexplained big increases in CPU or bandwidth may indicate a security problem. Monitor your server from the inside (there are many tools for Linux: cacti, monit etc) as well as from the outside with an uptime server monitor (pingdom, mon.itor.us, wasitup.com etc) as well as a change monitor (ChangeDetection.com etc).

- Intrusion Detection Systems (IDS). In general I don't recommend using a network IDS (like Snort etc) for a single server, basically because you'll get all these alerts, you won't know what to do with them, and in the end you'll ignore them. Do install a host-based IDS such as a checksum application (tripwire or afick, for example).

- Rootkit detectors (rkhunter etc): they will detect basic exploits.

3) Recovery. This is arguably the most important aspect. For a single server it mostly means having a good backup strategy. I like to use VPSes and keep daily and weekly backups of the whole VPS snapshot, both in the data center and off-site. Recovery with a VPS is in principle (barring uncommitted db data etc) as simple as restoring its snapshot backup.

Also: I have licensed tools and experience in security assessments; I can do a quick check of your web server and give you some recommendations at no cost.



