Some of you might have heard about our security breach on Saturday. I just wanted to leave a quick note and clarify that this MongoHQ security breach was the method used to obtain our users' access tokens, which led to the wave of spam on Saturday.
This is a key final piece of our investigation which brings it back full circle. I'm very happy that we have been able to gain a full understanding and can be confident there is no backdoor.
I want to be clear that this is still our fault. If access tokens had been encrypted (which they are now), this would have been avoided. In addition, MongoHQ have provided great insights and have much more logging in place than we have ourselves. We're also increasing our logging significantly as a result.
I've updated our security breach blog post with this information. If you want to see the full set of events, take a read here: http://open.bufferapp.com/buffer-has-been-hacked-here-is-wha...
Let me know if you have any questions about this. I'll keep an eye on this thread.
Buffer security breach - October 26, 2013 
MongoHQ security breach - October 28, 2013 
I can understand that MongoHQ wanted to obtain the full picture here and not put other customers at risk by exposing this information before the situation was fully locked down.
I wouldn't be surprised if it was Buffer's investigation that tipped MongoHQ off to the breach.
When it comes to infrastructure as a service, it appears to me that there is an imbalance between the sensitivity of the information entrusted to external systems on one hand, and the standards such systems are held to on the other.
EDIT: case in point, on http://mongohq.com I can't find a single mention of the word "security". The language is all about ease of use, performance, scalability, disaster recovery, and low cost, but as far as I can tell not a single word is spent on what procedures are in place to safeguard your data from unwanted access.
EDIT: removed inflammatory comment.
Like everything in software it's about tradeoffs. Maybe they erred a little too far on one side of the curve, so let's learn from that. But it's unfair to expect startups to be in the same league as banks security-wise. Do you have any idea how much a good pentest costs?
It's a great trade-off for most early-stage companies, because managing databases is hard. I'd rather leave it to the experts who specialize only in managing databases. You and your product team have a thousand and one other things to think about other than managing your database. Your provider may end up making mistakes, but that's part of the risk you take.
Security breaches are a mess for everyone involved, and we're in relatively new territory here in the Infrastructure as a Service space. Overall, I have little doubt that IaaS is a good thing. As an industry, we'll learn and improve on how to deal with all things "security", but we're clearly not there yet.
Also, farming out a piece of infrastructure and having a secure stack don't have to be mutually exclusive.
Looks like the database is managed by MongoHQ. That's what OP is talking about.
That answers my question! (See https://news.ycombinator.com/item?id=6619265)
Kudos for how you guys handled this during these tough few days.
Thanks for the kind words :)
Or were you somehow able to identify which fields from your MongoHQ database were actually accessed by the hacker?
Buffer has been exceedingly fair with us, we are fully in favor of them giving customers all the information they have.
They waited until after MongoHQ made their own disclosure, and all evidence (including comments on the post) point to a fairly good working relationship between the two.
I'm sure both parties wish this hadn't happened, but I don't see any bus throwing...
I don't use MongoHQ for any projects right now (I evaluated their offering a year or so ago and opted to go with MongoLab, which I am still suing), but their response impressed me. There is indeed such a thing as screwing up properly, imho.
Kudos to the MongoHQ team, and best of luck in dealing with this mess.
this is either an unfortunate typo or a great backstory ...
What's interesting is that I think we've all been there at one point. You start building something, it gains momentum, then you get so bogged down with adding functionality and fixing bugs that you never get around to implementing the security features you know you should. At what point do you say "Okay, we have too much at risk to not correct this immediately"?
I mean COME ON, MAN.... there are mistakes, and then there's incompetence.
I mean, yeah, if the password was 'letme1n' that's one thing. Whereas if it's "at least 16 characters, mixed case, punctuation, and no English words", maybe that's another.
But, I would have thought at the very least, ssh with keys-only on the external-facing bastion host.
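For what it's worth, a keys-only bastion setup comes down to a few sshd directives. This is an illustrative fragment, not MongoHQ's actual configuration, and the usernames are placeholders:

```
# /etc/ssh/sshd_config on the bastion host (illustrative fragment)
PubkeyAuthentication yes
PasswordAuthentication no          # no password guessing possible
ChallengeResponseAuthentication no
PermitRootLogin no
AllowUsers deploy ops              # placeholder account names
```

With `PasswordAuthentication no`, brute-forcing the bastion over the network becomes pointless; attackers need a private key, not a guessable string.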
Heroku's lack of 2-factor auth has literally given me nightmares.
Other companies (looking at you Linode...) could stand to learn a thing or two from how MongoHQ is handling this. Extremely transparent, and explaining in very clear terms the steps they'll be taking to mitigate the breach and prevent future incidents.
All that being said... Ouch.
As a precaution, we took additional steps on behalf of our customers to invalidate the Amazon Web Services credentials we were storing for you (for the purposes of backups to S3). While this prevents the abuse of your AWS credentials by any malicious party, it may have resulted in additional unintended consequences for your AWS environment if you were utilizing the same AWS credentials for other purposes.
If you had S3 backups configured with MongoHQ, when possible, we have disabled the IAM access keys via AWS. In any case, please go to the AWS Management Console and regenerate any keys given to MongoHQ.
You should have a support issue open in your AWS portal now, if you need any help getting new keys for other apps. If you can't find it hit us up at firstname.lastname@example.org and we'll escalate.
As a company we are now trying to figure out how long our access tokens have been out in the wild.
You should look into using separate AWS keys for your DB backups and whatever it is your app uses those credentials for. This not only prevents any future availability issues because of key revocation, it also allows you to set fine grained permissions on your access keys limited just to what they're being used for.
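The "fine-grained permissions" part can be expressed as an IAM policy scoped to just the backup bucket. This is a generic sketch (the bucket name `example-db-backups` is a placeholder), not a policy MongoHQ actually requires:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-db-backups",
        "arn:aws:s3:::example-db-backups/*"
      ]
    }
  ]
}
```

A key with only this policy attached can write backups but can't read your other buckets, spin up EC2 instances, or touch billing, so revoking it later doesn't break anything else.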
Build your product correctly from the start and have a security policy early. Don't leave XSS in your website; don't just hack code together and throw it on a server. As a security guy I love seeing new startups' products, because so often the code is so freshly written and immature that it's asking for trouble. Measure twice, cut once.
On the other hand, collectively, startups losing everyone's credentials/data are burning the commons wrt people's willingness to use new products.
But then on hand #3, it's not as if BigCos have a much better record, so I guess this gets chalked up to the general 'why in the world do we release software this bad?', and the general answer is 'because people have been trained to accept it'. In the same way that if the car were invented today, it'd never pass our current health & safety standards.
So I guess my point is, software is terrible in general, and security is just one aspect of many. As long as we don't have a better way to write it, it'll keep being bad. It's not fair to expect startups to set a higher standard, when they're optimising along several other dimensions at the same time.
Startups have fewer assets than big companies. If you can't secure 1 webserver or you have a security breach with only 3 employees, there is something seriously wrong.
If you expect to one day have 10,000 servers you better believe I expect you to be able to handle 3 or 5 or 10 by yourself.
This is a serious question - the best that I'm aware of are blog posts, but every time someone writes a post about security, HN comments (et al) give contradictory advice.
Maybe it's high time we start taking "not screwing over your customers" into account when considering what makes a success.
To be clear, this breach had absolutely nothing to do with Mongo. It occurred via a web interface which could've been put in front of any hosted database.
I.e., what does the VPN buy you, specifically, on the employee side?
(I understand entirely what it buys you on the other side of the equation, such as a smaller attack surface; I'm just trying to understand why you would think having a VPN would have made this particular case more secure.)
Another benefit is that as your internal userbase changes, you can revoke access from a single point and be reasonably assured that you've mitigated risk, which is something you only get with individually-secured services if you have a reliable directory system.
A problem with individually-secured ops/support systems is that most 3rd party code is not ready to be securely deployed Internet-facing.
Both approaches are totally workable, but the VPN approach is easier.
This is essentially something you need to worry about anyway, for other attack reasons.
It's for this reason that pentesters learn quickly that the "make an arbitrary HTTP query from the target's own server" bug is usually sev:critical; for instance, in virtually any Fortune 500 network, that pivot gets you (with a little effort and 50 lines of code) to a JMX console somewhere, and from there code execution.
There's no good reason not to do both (ensuring that your internal services are authenticated reasonably and don't expose functionality or information pre-auth, AND setting up a VPN). But the VPN is the most valuable step.
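Concretely, the "do both" setup can be as simple as fronting the internal tool with a reverse proxy that only answers the VPN subnet. This is an illustrative nginx fragment; the hostname, VPN range `10.8.0.0/24`, and backend port are all assumptions:

```
# Illustrative: internal support UI reachable only from the VPN subnet.
server {
    listen 443 ssl;
    server_name support.internal.example.com;

    allow 10.8.0.0/24;   # VPN clients (assumed range)
    deny  all;           # everyone else gets 403 before auth is even tried

    location / {
        proxy_pass http://127.0.0.1:8080;  # the app still enforces its own login
    }
}
```

The app behind the proxy keeps its own authentication, so a VPN compromise alone still isn't game over.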
As tptacek mentions, breaching perimeter security from external is "game-over" in most cases.
I just wouldn't trust having something critical like "impersonate user" on the open internet - even if secured by https + 2fa.
Someone's updating their resume tonight.
Of course, if they do, then firing might be the way to go. First time mistake, second time incompetence.
"Employees may not use passwords that are used with any service outside of the company."
Zero enforceability (but some companies would probably ask for a list of outside passwords I'm sure), but in the face of a direct policy violation, 100% fireable, and can help a company in terms of liability.
In general though, in any security breach the most common way to pivot is via password re-use. You'll see this happen with many privileged employees at almost every company.
At a start-up, it most likely was not an engineering decision, but an agility trade-off. Get that product out the door now!
"Our support tool includes an 'impersonate' feature that enables MongoHQ employees to access our primary web UI as if they were a logged in customer"
Bastion servers, if properly firewalled, might be OK as a short-term solution. The concern there is: if you allow unfettered ssh (for example), is someone watching for the inevitable brute-forcing that will ensue?
I was very impressed with their customer service and would highly recommend them.
In speaking with New Relic, these keys cannot be regenerated.
Wondering if anybody could advise as to what security issues that might create? I presume it could potentially give access to some infrastructure information. Have emailed their support but waiting on a reply.
I'd like to chime in that, this circumstance notwithstanding, MongoHQ has been great during the time we've been a customer of theirs.
With the license key someone could configure an application to report to the account affiliated with the key. They would not be able to access the account or use the key to retrieve any personal information from the account.
When are companies going to realize that security is a critical aspect of their business? Why do people only act when there's some big security breach, having used trivially insecure systems before it?
Yesterday, I decided to go with MongoHQ for my current project and this morning I saw they got hacked. Irony aside, couple of months back I decided to check AppFog (PaaS) and the next day they were acquired. :)
I believe this is a great time to become a new user of MongoHQ - security has been tightened, experience has been built. The "storm is over" and the chances of a new "storm" in near future (with the measures these guys have taken) have decreased substantially.
I think they reacted pretty impressively, once the breach was detected.
As far as I understood, the biggest flaw in their operations was the fact that many or all internal/support systems were exposed publicly rather than behind a VPN.
Still, they were around for just about 2 years, which isn't that much time, after all.
Also, that "god mode" only makes things easier; if you're hiring a managed service, by definition someone must have access to manage the service.
The answer is: manage it yourself, effectively locking out any 3rd party.
I agree with parent that customer should be able to disable god mode on their account.
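A customer-controlled kill switch for impersonation is a small amount of code. This is a hypothetical sketch (none of these names come from MongoHQ's actual system): a per-account opt-out flag checked before impersonation, plus an audit trail of every use.

```python
# Hypothetical sketch: gate an internal "impersonate" feature on a
# per-account opt-out flag, and record every use in an audit log.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Account:
    name: str
    allow_impersonation: bool = True  # customer can flip this off

audit_log = []  # (timestamp, staff member, account) tuples

def impersonate(staff: str, account: Account) -> str:
    if not account.allow_impersonation:
        raise PermissionError(f"{account.name} has disabled impersonation")
    # Every impersonation leaves a timestamped trail for later review.
    audit_log.append((datetime.now(timezone.utc), staff, account.name))
    return f"session-as-{account.name}"
```

Even when a customer leaves the feature on, the audit log means a breach of the support tool at least produces evidence of what was accessed.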
Production hardware should be on a separate network/VLAN/whatever anyway.
Anyone using the password "password" or "password123" or other basic common passwords, plus passwords that are just dictionary words, will likely be cracked if the attackers make a dedicated effort no matter what setting is in use. In general though, cracking bcrypt hashes is much, much, much slower than cracking MD5 or SHA1 hashes.
So anyone using a somewhat decent password is likely safe from having it cracked. While with MD5 or SHA1, you generally need a fairly entropic password over at least 10 characters in length to remain safe (a password of totally random 10 characters still wouldn't be cracked with MD5 or SHA1, but it's rare that people use /dev/urandom output as a password).
Somebody would have to REALLY want to access your account and have SIGNIFICANT financial/computational resources to crack it. And that only gives them one password. Significant and independent work is required for each individual password.
On the other hand, there's nothing stopping a developer from doing something really dumb like setting bcrypt iterations to 1.
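The "work factor" knob is the whole point of bcrypt-style hashing. Since bcrypt isn't in Python's standard library, here's the same idea illustrated with PBKDF2 from `hashlib`: a tunable iteration count that makes every single guess expensive for an attacker while verification stays cheap enough for a login flow.

```python
# Illustrative sketch of a tunable password-hashing work factor,
# using stdlib PBKDF2 in place of bcrypt.
import hashlib
import os

def hash_password(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    # Higher iteration counts multiply the cost of every cracking attempt;
    # the dumb-but-possible mistake is setting this to 1.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)  # unique per user, stored alongside the hash
h = hash_password("correct horse battery staple", salt)
# Same password + salt + iterations -> same hash, so verification works,
# but an attacker must redo the full work for every candidate password.
assert h == hash_password("correct horse battery staple", salt)
```

The salt ensures each password must be attacked independently, which is exactly the "significant and independent work per password" property described above.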
I _really_ hope that doesn't turn out to be the case here.
          +---------------------+
          | AWS                 |
          |   +-------------+   |
          |   | VPC         |   |
+------+  |   |   +-----+   |   |
| You  +--------->| VPN |   |   |
+------+  |   |   +-----+   |   |
          |   +-------------+   |
          +---------------------+
It's a support system - do you want them to paste in their private key to login?