I like the Sensepost people, but can sum up this talk very succinctly:
* You need to firewall memcached (and/or, on single-host configurations, bind it exclusively to lo0) so people can't talk to it from the Internet.
* If you survey the Internet, you'll find a lot of people who have not obeyed this rule.
* If you don't obey this rule, attackers can read and possibly alter most of your data; an exposed memcached is probably game over for your site.
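As a minimal sketch of the bind-to-loopback point (the memcached flags are standard daemon options; the iptables rules assume a Linux host and are belt-and-braces on top of the bind):

```shell
# Listen on loopback only: -l sets the bind address, -p the port,
# -m the cache size in MB, -d daemonizes.
memcached -d -m 64 -l 127.0.0.1 -p 11211

# Drop any non-local traffic to the memcached port anyway (TCP and UDP)
iptables -A INPUT -p tcp --dport 11211 ! -s 127.0.0.1 -j DROP
iptables -A INPUT -p udp --dport 11211 ! -s 127.0.0.1 -j DROP
```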
The only things you should be able to talk to on anything in your data center are ports 443 and 80, plus 25 on your single mail relay if you have one, plus 22 (heavily filtered) on a single relay SSH server. If you can talk to more than that, you've probably done something very wrong.
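A default-deny policy along those lines might look like this (an illustrative iptables sketch, not a complete ruleset; the source range on the SSH rule is a documentation-only placeholder):

```shell
# Default-deny inbound; let established flows and loopback back in
iptables -P INPUT DROP
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT

# The only services the world should reach
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -A INPUT -p tcp --dport 80  -j ACCEPT

# 25 only on the single mail relay; 22 (heavily filtered) only on the
# SSH relay, e.g. restricted to known source addresses:
# iptables -A INPUT -p tcp --dport 22 -s 203.0.113.0/24 -j ACCEPT
```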
Absolutely, there's no doubt that it's an operations issue rather than a memcached issue, and, when found, it indicates poor ops standards (remote enumeration of a cache is apparently a debug feature). With services moving out of private data centers and into the cloud, the security tools are different from the arsenal used for internal apps; if you're an app guy deploying on EC2, you don't have a network security guy who's responsible for protecting your services. It's now your job to configure EC2 to protect your services, which is a small but significant difference. As for scanning for open ports, I think the main takeaway was that in distributed apps running off shared infrastructure, the basics often aren't followed by under-pressure admins and devs. Memcached density was about 1 per 274 scanned addresses, which was higher than I'd have thought; it's apparently not an uncommon mistake.
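On EC2, "it's now your job" mostly means security groups. A hedged sketch with the AWS CLI (the group IDs and the use of a separate app-server group are placeholders for illustration):

```shell
# Open the web ports to the world...
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 --cidr 0.0.0.0/0

# ...but memcached (11211) only from your own app-server security group,
# never from 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 11211 --source-group sg-0fedcba9876543210
```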
Unfortunately it's often not good enough to point out open ports; one needs to demonstrate exploitability. The main thrust of the talk was to say "if you come across memcacheds, don't skip them, there's coolness there". E.g. an open memcached used by Django directly equates to remote code execution, due to Python's pickle.
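To make the pickle point concrete: Django's memcached backend stores pickled values, and unpickling runs code of the pickler's choosing. An illustrative sketch (the `Rce` class and the benign `eval` payload are mine; a real attacker would write a hostile payload straight into the exposed cache and wait for the app to read it back):

```python
import pickle

class Rce:
    """Anything that unpickles this runs code chosen by whoever pickled it."""
    def __reduce__(self):
        # On load, pickle calls this callable with these args.
        # A real payload would use os.system or similar; eval of
        # harmless arithmetic keeps the demo safe.
        return (eval, ("40 + 2",))

payload = pickle.dumps(Rce())   # what an attacker would SET into the cache
print(pickle.loads(payload))    # the victim's GET-and-unpickle: prints 42
```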
You're spot on with the final point; firewall firewall firewall.
I can't emphasize this one enough. Unless you need to log in from a lot of different machines, there really isn't any excuse not to do this. It also has the bonus of making logins really easy, since you don't have to type a password.
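The usual recipe, for reference (standard OpenSSH commands; the user and host names are placeholders):

```shell
# Generate a key pair, then copy the public half to the server
ssh-keygen -t ed25519
ssh-copy-id user@example-host

# Then in the server's /etc/ssh/sshd_config, turn off password logins
# and reload sshd:
#   PasswordAuthentication no
```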
If you host your systems on a VPS service like linode or slicehost, then you have the backup of a web-based console in case you screw up royally and have an HD crash on the one machine your SSH key was on (for example).