
I like the Sensepost people, but can sum up this talk very succinctly:

* You need to firewall memcached (and/or, on single-host configurations, bind it exclusively to lo0) so people can't talk to it from the Internet (sketch after this list).

* If you survey the Internet, you'll find a lot of people who have not obeyed this rule.

* If you don't obey this rule, attackers can read and possibly alter most of your data; an exposed memcached is probably game over for your site.
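For the single-host case, a minimal sketch (standard memcached flags; 11211 is the default port):

    # bind memcached to loopback only, and disable the UDP listener
    memcached -l 127.0.0.1 -U 0 -d

    # belt and braces: drop non-local traffic to 11211 at the host firewall
    iptables -A INPUT -p tcp --dport 11211 ! -s 127.0.0.1 -j DROP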

The only ports you should be able to reach on anything in your data center are 443 and 80, plus 25 on your single mail relay if you have one, plus 22 (heavily filtered) on a single relay SSH server. If you can talk to more than that, you've probably done something very wrong.
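A default-deny host firewall to that effect, as an iptables sketch (on EC2 the equivalent is a tight security group; add the mail/SSH rules only on the relay hosts):

    # default-deny inbound; allow loopback and replies to our own connections
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # the only services the outside world should see
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT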




Absolutely, there's no doubt that it's an operations issue rather than a memcached issue and, when found, indicates poor ops standards (remote enumeration of a cache is a debug feature, apparently). With services moving out of private data centers and into the cloud, the security tools are different from the arsenal used for internal apps; if you're an app guy deploying on EC2, you don't have a network security guy who's responsible for protecting your services. It's now your job to configure EC2 to protect them, which is a small but significant difference.

In terms of scanning for open ports, I think the main takeaway was that in distributed apps running off shared infrastructure, the basics often aren't followed by under-pressure admins and devs. Memcached density was about 1 per 274 scanned addresses, higher than I'd have thought; it's apparently not an uncommon mistake.

Unfortunately it's often not good enough to point out open ports; one needs to demonstrate exploitability. The main thrust of the talk was to say "if you come across memcacheds, don't skip them, there's coolness there". E.g. an open memcached used by Django directly equates to remote code exec due to Python's pickle.
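To make that concrete, a minimal sketch of why writable cache + pickle = code exec (the command string is just an illustration):

    import os
    import pickle

    class Exploit(object):
        # pickle calls __reduce__ to learn how to rebuild the object,
        # so unpickling this calls os.system() with our command
        def __reduce__(self):
            return (os.system, ("id > /tmp/owned",))

    payload = pickle.dumps(Exploit())

    # An attacker with write access to the memcached plants `payload`
    # under a key the Django app will read back; when the cache backend
    # unpickles the value, the command runs on the app server.
    pickle.loads(payload)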

You're spot on with the final point; firewall firewall firewall.

[full disclosure: that's my name on the preso]


You missed the point where they found major sites wide open, extracted credentials from gowalla and bitly, and poked around PBS.

Bad gowalla, bitly, and PBS operations departments. Shame on you.


Personally I am so paranoid with SSH that I don't run it on the default port.

That alone lets you avoid most of the attackers who scan valid server IPs for SSH.

Now to put a honeypot on port 22.


Oh yes, the first thing I do is change the SSH port; it eliminates 99.99% of the auth attacks.
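For anyone following along, it's one line in /etc/ssh/sshd_config (2222 is an arbitrary example), then restart sshd:

    Port 2222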


Damn. Why didn't I ever think of that? I'm doing that tomorrow.


Disable root login over SSH, disable password logins, and you're good to go. If you want to get fancy, Google for SSH port knocking.
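Both are one-liners in /etc/ssh/sshd_config (reload sshd afterwards):

    PermitRootLogin no
    PasswordAuthentication no
    # optionally also, so PAM keyboard-interactive can't sneak passwords back in:
    # ChallengeResponseAuthentication no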


disable password logins

I can't emphasize this one enough. Unless you need to log in from a lot of different machines, there really isn't any excuse not to do this. It also has the bonus of making logins really easy, since you don't have to type a password.
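For the lazy, the whole key setup is two commands (ssh-copy-id ships with OpenSSH on most systems; user@yourserver is a placeholder):

    ssh-keygen -t rsa -b 4096    # generate a keypair on your local machine
    ssh-copy-id user@yourserver  # append the public key to the server's authorized_keys
    # then set PasswordAuthentication no in sshd_config, as above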


If you host your systems on a VPS service like linode or slicehost, then you have the backup of a web-based console in case you screw up royally and have an HD crash on the one machine your SSH key was on (for example).


Heh, port knocking looks cool, might do that just for the heck of it on a server that only I use.

Also, yes, along with changing the port, the only way it should be possible to get in is through keys.
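On the port-knocking idea: a sketch along the lines of knockd's man page example (the sequence and the iptables command are illustrative, not a recommendation):

    [openSSH]
        sequence    = 7000,8000,9000
        seq_timeout = 5
        command     = /sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
        tcpflags    = syn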


What happens when your 1 relay host goes down?


You restart it.



