Cache on Delivery (Mining random people's memcached) (slideshare.net)
85 points by jackowayed on Aug 5, 2010 | 29 comments



I like the Sensepost people, but I can sum up this talk very succinctly:

* You need to firewall memcached (and/or, on single-host configurations, bind it exclusively to lo0) so people can't talk to it from the Internet.

* If you survey the Internet, you'll find a lot of people who have not obeyed this rule.

* If you don't obey this rule, attackers can read and possibly alter most of your data; an exposed memcached is probably game-over for your site.
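To make that concrete: memcached speaks a plain-text, unauthenticated protocol, so enumerating an exposed instance takes a few lines. A rough sketch (hostname is a placeholder):

    import socket

    # Hypothetical target; any host with 11211 reachable from the internet.
    s = socket.create_connection(('cache.example.com', 11211), timeout=5)
    s.sendall(b'stats\r\n')          # plain-text protocol, no auth required
    print(s.recv(65536).decode())    # version, uptime, curr_items, ...
    s.close()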

The only ports you should be able to talk to on anything in your data center are 443 and 80, plus 25 on your single mail relay if you have one, plus 22 (heavily filtered) on a single SSH relay server. If you can talk to more than that, you've probably done something very wrong.


Absolutely, there's no doubt that it's an operations issue rather than a memcached issue and, when found, indicates poor ops standards (remote enumeration of a cache is a debug feature, apparently). With services moving out of private data centers and into the cloud, security tools are different from the arsenal used for internal apps; if you're an app guy deploying on EC2, you don't have a network security guy who's responsible for protecting your services. It's now your job to configure EC2 to protect your services, which is a small but significant difference. In terms of scanning for open ports, I think the main takeaway was that in distributed apps running off shared infrastructure, the basics often aren't followed by under-pressure admins and devs. Memcached density was about 1 per 274 scanned addresses, which was higher than I'd have thought; it's apparently a mistake that's not uncommon.

Unfortunately it's often not good enough to point out open ports; one needs to demonstrate exploitability. The main thrust of the talk was to say "if you come across memcacheds, don't skip them, there's coolness there". For example, an open memcached used by Django directly equates to remote code execution, due to Python's pickle.
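To illustrate the pickle point, here's a toy sketch (not the actual exploit from the talk): pickle lets an object name an arbitrary callable to invoke on load, so anything that unpickles attacker-controlled bytes, such as a cache entry you've overwritten, can be made to run a command:

    import pickle, os

    class Evil(object):
        def __reduce__(self):
            # tells pickle to call os.system('id') when the payload is loaded
            return (os.system, ('id',))

    payload = pickle.dumps(Evil())
    pickle.loads(payload)   # runs 'id'; imagine this happening in the app's cache layer

Django's cache layer pickles values by default, which is why writing to someone's open cache equates to code exec on whatever reads it back.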

You're spot on with the final point; firewall firewall firewall.

[fd: that's my name on the preso]


You missed the point where they found major sites wide open, extracted credentials for gowalla and bitly, and poked around PBS.

Bad gowalla, bitly, and PBS operations departments. Shame on you.


Personally I am so paranoid with SSH that I don't run it on the default port.

That alone allows you to avoid most attackers scanning for SSH on valid server IPs.

Now to put a honeypot on port 22.
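Something like this would do for a start; a toy sketch that just logs who pokes the old port (a real honeypot would at least present an SSH banner):

    import socket
    from datetime import datetime

    # Toy honeypot: log every connection attempt to the old SSH port.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('0.0.0.0', 22))   # binding to 22 needs root
    srv.listen(16)
    while True:
        conn, addr = srv.accept()
        print('%s connection from %s' % (datetime.now(), addr[0]))
        conn.close()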


Oh yes, the first thing I do is change the SSH port; it eliminates 99.99% of the auth attacks.


Damn. Why didn't I ever think of that? I'm doing that tomorrow.


Disable root login over SSH and disable password logins and you are good to go. If you want to get fancy, Google for SSH port knocking (sketch of a knock client below).
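The client side of knocking is trivial; a sketch, assuming a knockd-style daemon on the server watching for SYNs on a made-up sequence before opening the SSH port:

    import socket

    HOST = 'server.example.com'          # placeholder
    for port in (7000, 8000, 9000):      # hypothetical agreed knock sequence
        s = socket.socket()
        s.settimeout(0.5)
        try:
            s.connect((HOST, port))      # the SYN itself is the knock
        except socket.error:
            pass                         # ports are closed; that's expected
        finally:
            s.close()
    # the knock daemon should now open the (non-default) SSH port for our IP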


disable password logins

I can't emphasize this one enough. Unless you need to log in from a lot of different machines, there really isn't any excuse not to do this. It also has the bonus of making logins really easy, since you don't have to type a password.


If you host your systems on a VPS service like Linode or Slicehost, then you have the backup of a web-based console in case you screw up royally, or have a disk crash on the one machine your SSH key was on (for example).


Heh, port knocking looks cool, might do that just for the heck of it on a server that only I use.

Also, yes, along with changing the port, the only way it should be possible to get in is through keys.


What happens when your one relay host goes down?


You restart it.


I've come across unprotected Memcached deployments in a couple of pen-tests. On one engagement in particular, someone poking Memcached would be able to (temporarily) increase their account balance and even access CC numbers from recent transactions. :(

As far as I see it, this is one of the unintentional side effects of "hosting in the cloud". If you had co-located servers, you'd whack up a firewall and only allow your internal IPs to access non-HTTP ports. Alas, everyone now just spins up an EC2 image and palms it off to Amazon.


With EC2, you have to explicitly opt in to any ports being open to the public.


There you go, I stand corrected.

Are you able to make requests between instances on non-public ports? As someone else pointed out, memcached infrastructure typically won't sit on your local webserver.


Yes, it's pretty easy to set this up with security groups; you can restrict ports so they're only open to machines within another security group.

So let's say you've got memcached, MySQL and a bunch of webservers:

On the memcached security group you open 11211 to the webservers group.

On the db security group you open 3306 to the webservers group.

On the webserver group you open 80 and 443 to everyone.
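For reference, the whole thing is a few lines with boto (a sketch; the group names are made up, the web rules are open to the world, and the cache and db rules reference the webservers group rather than IPs):

    import boto

    conn = boto.connect_ec2()   # uses AWS credentials from the environment

    web = conn.create_security_group('webservers', 'public web tier')
    web.authorize('tcp', 80, 80, '0.0.0.0/0')    # HTTP from anywhere
    web.authorize('tcp', 443, 443, '0.0.0.0/0')  # HTTPS from anywhere

    mc = conn.create_security_group('memcached', 'cache tier')
    mc.authorize(ip_protocol='tcp', from_port=11211, to_port=11211, src_group=web)

    db = conn.create_security_group('db', 'database tier')
    db.authorize(ip_protocol='tcp', from_port=3306, to_port=3306, src_group=web)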


How can people not run a restrictive firewall on their externally-accessible boxes? EC2 even has this built in as a network-wide feature. Our production servers run with the following setup:

* block everything from outside the internal network

* open port 80 on the web server

* open port 22 on all boxes, but only allow key-based authentication. Oh, and only allow connections from an IP whitelist.

The restrictions on port 22 are probably a little overkill, I admit.


Restrictions on port 22 are NEVER overkill. In fact, you should run SSH on another port entirely.


If you're already using key authentication, that doesn't add much security. A port number is just a very short password shared by all users (16 bits at most): instead of logging in with "I'd like to log in, please initiate public-key authentication", you now have to log in with "I'd like to log in, the magic number is 2222, and please initiate public-key authentication", which isn't really any more secure.


Call me an engineer, but I don't understand why anyone would dismiss security through obscurity just because it's theoretically useless. If you can reduce the lifetime expected value of successful attacks on your system, it should be considered a win regardless of how you do it. (Of course, relying solely on security through obscurity is a sure-fire way to greatly increase said EV.)


It will deflect a lot of network-capacity-wasting opportunistic attempts if the masses don't even realise there's an SSH port there.


The blog entry on the topic is available at http://www.sensepost.com/blog/4873.html. Also, GoWalla & Bit.Ly were informed pre-publication, and fixed their stuff.


Memcached by default only binds to localhost. Were these people using it like that, and does that matter?


In most deployed configurations, you use memcache in a distributed fashion (so N machines can connect to a pool of caches), so you usually have them bind to a public interface; one that you stick behind at least one firewall.
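i.e. the client side typically looks like this (python-memcached as an example, with placeholder private-subnet IPs), which is why each cache node has to listen on a network-reachable interface:

    import memcache

    # App servers hash keys across a pool of cache hosts,
    # so each memcached must be reachable over the network.
    mc = memcache.Client(['10.0.1.10:11211', '10.0.1.11:11211'])
    mc.set('user:42:session', 'abc123', time=300)
    print(mc.get('user:42:session'))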

Exposing memcached (or riak or whatever) is operational malpractice.


It's definitely dumb, but people might be lulled into thinking it isn't necessary because the trend among other servers is to essentially include their own firewalling setup, with a default of blacklist-everything that you have to explicitly override by whitelisting IPs, or by having clients authenticate. For example, you used to rely on firewalls to make sure nobody was sending spam through your internal mail relays, but these days most MTAs handle that filtering themselves, taking whitelists of "relay from here is ok" IPs in their config files, even though in theory iptables/ipfw/etc. could handle that job just fine, and more efficiently. More relevantly, most RDBMSs have a similar setup: they won't just let any client talk to them, even when bound to a public IP.

For efficiency, though, memcached explicitly decided not to handle that kind of filtering itself, e.g. by having the memcached config file take a whitelist of client IPs allowed to connect (defaulting to none), or by including some kind of auth mechanism; instead it relies on the system firewall for that. That's a reasonable design decision, but an increasingly unusual one. Most servers these days see the system firewall as a backup level of defense, not the primary one, and aim to be secure even when unfirewalled.


I lulled


I can only speak for the current version (1.4.5), but it binds by default to all interfaces.

If a memcached were bound to a non-public interface, then it wouldn't be reachable from the internet (and you'd have to explicitly configure it that way). However, if you're running the cache and the app on separate machines, the cache will need to be reachable over the network. For poorly thought-out deployments, this means publicly reachable.


AFAIK one of the points in the slides was that, by default, memcached binds to all interfaces.


This was one of my favorite briefings at Blackhat.




