He didn't do this because he's an amazing sysadmin. He did it because he's competent, knows how to put a service on the Internet, and RTFM.
I mean. If you can connect to something and use it without having to authenticate yourself, wouldn't it naturally cross your mind to check that others can't do the same? It's just common sense.
This is simply first-time developers with no server experience, deploying their first project, who don't understand the first thing about having a server on the internet. Publicly exposing an open port for software that is innately designed to run on a private network has nothing to do with... whatever you want to call all these "exposé" pieces about "attacks".
You could write an article to explain the introductory concepts of system administration to newcomers to the industry. But to even mention the word "attack" when the only thing involved is an open port is... sigh.
I feel for newcomers who need to learn. I really do; the amount of information one needs to absorb to be even remotely competent is vast, and takes at least a few years to pick up (and then another decade or two to fine tune that knowledge). But this is not a situation where the maintainers of software packages are to blame for not educating their users, or insinuating that their products are not "secure out of the box". It has nothing to do with individual software packages, and everything to do with the very core aspects of having a computer connected to a public network.
Real companies with real data make some pretty elementary mistakes with regard to security. I'm a security tester, and the number of times I've gained access to systems deployed by real companies (companies that really paid money for an external security review) using things like default creds is quite high.
It's tempting to think that this is just an education issue and that once people know how to do security well, things will get better. Personally, though, my opinion after 16 years in security is that this isn't the case.
Effort spent on security is a trade-off with other things and in many cases people make the choice (either unconsciously or deliberately) not to prioritise it.
The insecure defaults were an issue, sure, but anyone installing a piece of software in production without at least reading up on the config options needs to find another job.
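For what it's worth, the fix is a couple of lines of config. On MongoDB 2.6+ with the YAML config format it's something along these lines (check the docs for your exact version):

```yaml
# mongod.conf
net:
  bindIp: 127.0.0.1        # listen on loopback only; add private IPs as needed
security:
  authorization: enabled   # require clients to authenticate
```

IIRC the localhost exception still lets you create the first admin user after enabling authorization, so you won't lock yourself out.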
We need good and consistent rules about this, and "well I was giving it away for free" isn't as clear a boundary as people will think it is.
I would imagine that if someone were to sue MongoDB Inc. today about the issue in the article, their first defence would be clear documentation that explained/recommended production guidelines.
I don't know if D-Link had similar, but then again legal systems sometimes produce weird results that don't seem rational.
I agree that we need good and consistent rules. In addition, we need DevOps/SysAdmins/SREs who are responsible enough to know what they are doing.
Carefully read the documentation instead of "quickly deploying", only to come back a year later to write a soppy "don't use MongoDB or XYZ because we didn't read the manual" post. :)
MongoDB is on 3.4 now, so I would rhetorically wonder why some people/companies are still on <=2.6.
If the data that is being ransomed is that important, it'll be a good lesson to those DB maintainers to upgrade and secure their stack.
This part seems to be glossed over but is a HUGE issue.
It sounds like several companies have tried to pay the ransom with varying levels of success... why are they not just restoring from backup? I can only assume they don't have backups. (!)
What is their DR plan if the server dies? Or someone accidentally pushes code that messes up the contents of the DB? Or someone tries to drop the development database but oops: they didn't notice they were connected to the production server?
Even if you're using a hosted service, what if they go down? Get hacked? Lock you out because of a billing dispute/TOS violation/DMCA takedown/accident? Hired bad sysadmins that didn't do backups (correctly)?
Not having backups of your data is inexcusable and just reeks of utter incompetence, and has nothing to do with configuration defaults or documentation.
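For anyone nodding along without backups: even a dumb nightly dump beats nothing. A sketch of what that could look like with cron and mongodump (the path, user, and schedule are illustrative, and an authenticated deployment would need credential flags; ideally you'd ship the dumps off the box afterwards too):

```
# /etc/cron.d/mongo-backup (illustrative)
# Nightly dump at 03:00, one dated directory per day
0 3 * * * backup mongodump --host 127.0.0.1 --out /var/backups/mongo/$(date +\%F)
```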
You still need to do external backups. You may have a lot of trust in the provider and do these less frequently, but you should still do them.
Had this happen to me once in the early 2000s: the company I worked for had a dedicated server at a colo facility. After several days of them not responding to phone/email/etc., their answering machine was changed to a message saying the SEC had seized all assets and had all the owners "under investigation" or something like that. We had external backups, so we immediately took the latest ones and got everything migrated to a new system in a new facility. The server stayed up for a few weeks after that, but then suddenly their whole IP space went offline. We never did get our server back.
Other than the cost, I recommend it for people who can afford it. Wonderful service that I was happy with for a long time.
I'm obviously not going to defend companies that don't have current backups (though this is practically everyone), and the importance of backups is always a great thing to emphasize, but in this case, the best option is to pay the ransom and get your stuff back.
Now I'm sorry, but MongoDB Inc. IS at fault as well, for not forcing developers to create credentials upon installation from the beginning. Any vendor that doesn't do that with its product isn't serious about security. Take WordPress: imagine it didn't force its users to create an admin account upon deployment; everybody would be mad about it. But somehow MongoDB got a pass all these years? Bullshit.
I hope this hack permanently damages the MongoDB brand.
There's a moral distinction between culpability and impact. There are profoundly stupid things that people do, yet still need protection from.
Those DB admins were incompetent by lots of measures, but their data still has value and its seizure is a public harm. It's the job of the rest of us (in this case, MongoDB's developers) to take reasonable steps to minimize the chance of that happening.
Secure defaults are a very reasonable precaution. MongoDB fucked up.
We help each other out in this society. So in this case, if you're a database developer with a good handle on deployment security, you don't put an insecure-by-default product in the hands of people who aren't. I genuinely can't understand why people are arguing to the contrary.
Even knives are sold in packaging that prevents them from cutting until the packaging is removed.
I agree, I don't think your job is done just because you wrote somewhere "pay attention to this".
But that's fine. At our place, our MySQL cookbook is maturing. This cookbook makes it really easy to say "hey, we need a MySQL master with these 2 databases". The only security overhead consists of generating 4 passwords in that case (and we're planning to automate that).
Once you've done that, you get a MySQL master with passwords, restricted root access, a local firewall, backups, and all kinds of good practices. It's secure because our cookbooks are good, and people use our cookbooks because they're easier than handling MySQL directly.
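The password-generation step we're planning to automate is basically a one-liner. A sketch in Ruby (the count of four and the length are just our conventions, not anything the cookbook mandates):

```ruby
require 'securerandom'

# Generate the four random credentials a new MySQL master needs
# (e.g. root, replication, and one app user per database).
passwords = Array.new(4) { SecureRandom.alphanumeric(32) }
```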
And that's a kind of cooperation I'm really happy about. Devs want services, and we provide easy ways to setup services in decently secure ways.
Do the community chef, ansible and puppet cookbooks use secure defaults for MongoDB? Asking because I have never used MongoDB.
Yes, but I've grown to dislike that term, though it's my current job title. There are a number of people with DevOps in their title who are yelling about Docker this and CD that, and AWS/RDS those.
That crowd is way too excited about certain tools and solutions for specific use cases, and with that narrow focus they sometimes drown out the real value of config management and of close cooperation between devs, ops, and the other people involved -- across a very large range of use cases.
> Do the community chef, ansible and puppet cookbooks use secure defaults for MongoDB? Asking because I have never used MongoDB.
I'm a Chef guy. The first Google result for "mongodb cookbook" doesn't set up authentication, but makes it easy to enable required authentication. The second cookbook result doesn't enable authorization by default either.
This makes sense, though. A community cookbook that manages MongoDB is supposed to support all of MongoDB's use cases, and it usually tries to mimic the application's default behavior. To maintain that, the Chef cookbooks for MongoDB don't enforce authorization.
However, if I were to implement a MongoDB cookbook for use at my place, I'd intentionally fail the Chef run if authentication is disabled and stop the MongoDB service in that case. This would be trivial in both cookbooks I looked at.
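A minimal sketch of that guard, assuming the rendered mongod.conf uses the YAML format (the function name and messages are made up; in a real cookbook you'd more likely check node attributes before rendering the template, and raising during the converge fails the run):

```ruby
require 'yaml'

# Refuse to proceed unless the rendered mongod.conf enables authorization.
# Called from a recipe, this raise aborts the Chef run before the
# service is (re)started.
def assert_mongo_auth!(conf_path)
  conf = YAML.safe_load(File.read(conf_path)) || {}
  authz = conf.dig('security', 'authorization')
  unless authz == 'enabled'
    raise "mongod config at #{conf_path} does not enable authorization; refusing to run"
  end
  true
end
```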
This is not exclusive to DevOps, though DevOps is a hot buzzword right now so lots of posers are flooding in. Most people, no matter their station or the prestige of their company, have no idea what they're doing. Knowing this is one of the most important things for dealing with the corporate world (I think I'm missing a couple of the other most important things, though...).
Different people try to cover this up in different ways. One popular way is acting like they always know about new cutting-edge tools sponsored by AppAmaGooBookSoft and that the next thing will finally be the Silver Bullet we've all been waiting for.
This impulse has brought us the prominence of node.js, MongoDB, the proliferation of "big data", and many other vastly-overdeployed niche products that have been ravenously and shamelessly misused by incompetent people trying to fake it through their careers. Our standards for this are, frankly, sad. It must have to do with a combination of a cultural bias against non-junior engineers outside of Java/.NET-land and putting completely unqualified MBAs in charge of tech (and yes, this is also applicable in startup world via VC proxies) -- but I digress.
Been hearing that song and dance re: docker and k8s for the last year at least, and boy is it ever tiring. Docker and k8s are both very niche, very immature products that greatly complicate system administration. They are missing features that you rely on and that you will have to implement horrifying hacks to work around. Why are you doing this to yourself, again? Oh, because AWS is too expensive and you want to consolidate (fake reason btw, real reason "because it makes people think I'm smart")? Yeah, about that...
It's fun to throw together a lab for experimentation, maybe hook up a weekend project, but no sane Real Company is going to be moving its stuff to all-containerized k8s any time in the next 2 years.
My current project at work? Converting all of our environments to docker/k8s...
There are devs who are unaware of the concept of authenticating access to an application?
Yeah, well, we're in the middle of a trend to act as though dedicated systems people are bad, pains in the arse, you shouldn't need or want them.
Come on, man, this is 2017. Get with the program.
You can't just demand that the user be smarter; you put sane defaults in place so they actually have to go to at least a small amount of effort to shoot themselves in the foot.
Besides, you should be using containers for everything anyways. If something happens to it, just throw it out and spin up a new one.
I wouldn't call this reinforcing behavior... I'd call it protecting your users. Common sense is something that comes with experience, and MongoDB is popular enough that it's likely to be in many web devs' first projects in production. Is it their fault they haven't developed this "common sense"? Yes and no. On the other hand, is vulnerable-by-default ever a good idea?
I am a former Sr. Systems Administrator, and Sr. Systems Engineer. My current title is Sr. Devops Engineer.
Pretty much everybody who knows their way around a compiler and administers systems could probably take this title and be fine, and then you're not suffering under the yoke of devops or whatever. Because when I hear "developer," I can't help but think of a feature engineer on the product side, for some reason. Same with that word creeping into "devops," and that leads to things like "full stack" engineering -- creating the impression that only a select few work with or consider "the whole stack," which is a scary subliminal message. Even if you're a developer simply changing the colors on the Facebook feed or something, you should always be thinking about the full scope of things and how your changes will be operated. (Your operations nerds will love you if you think about us ahead of time.)
As an aside, systems administrator with no software skills is a totally and completely fine career, held by a number of very smart people (including friends of mine, who still proudly text me every new year of uptime), and in a lot of these discussions I see them denigrated. Just remember that; I'm not saying anyone here is doing it.
I would expect anyone responsible for something to have competency in it or find someone who is to help. Titles are irrelevant.
Refusing to deploy a process into production without setting a password is more like refusing to drive over a bridge that's obviously missing chunks and crumbling -- that is, it should be obvious, even to the untrained eye, that there is a massive structural issue that makes it unsafe.
I will grant that some junior people may assume that local connections are coming in over a trusted socket and that TCP connections are required to hit an authentication process, but even this is kind of a stretch, and no one should be leaving production deployments for real companies in the hands of someone that inexperienced.
I will also grant that guilt-by-association is probably not appropriate here. There are a lot of good people at companies that have grossly incompetent and/or disorganized managerial superstructures that may have caused something like this to end up exposed to the internet. I have personally worked at multiple places where the daily comedy of errors we called "work" could've resulted in an unpassworded Mongo ending up on the public network.
Any way you slice it, driving over what is a clearly destroyed, ruined bridge is someone's fault, somewhere (and per regular management practice, the person least responsible for the fault will probably take the fall :) ). Let's just not pretend that this is an ordinary complication or oversight in the course of sysadmin.
Furthermore, running "apt-get install" usually isn't enough to get something exposed onto the public network, even for home users, whose routers block all incoming ports by default. Someone has to go in and explicitly open the traffic before something like Mongo gets exposed; there's no reason that labs, developer machines, etc., would be publicly accessible.
Mongo is hardly alone here; comparable services like MySQL, PostgreSQL, and Redis also install without a password by default. For MySQL, this is perhaps less noticeable because Debian/Ubuntu (and probably some other distros) prompt the user to set a new root password in a post-install script. For PostgreSQL, the default mode uses IDENT authentication, i.e., it's accessed by switching to the "postgres" user... and, like all other system users, including root, that user is passwordless by default.
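For the curious, the Debian-style PostgreSQL default looks roughly like this in pg_hba.conf (simplified; check your distro's actual file):

```
# pg_hba.conf (simplified Debian-style defaults)
# TYPE  DATABASE  USER      ADDRESS        METHOD
local   all       postgres                 peer   # OS user must match DB user
host    all       all       127.0.0.1/32   md5    # TCP clients need a password
```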
For Redis, it's passwordless by default, and I'm sure many installations have this same "vulnerability" in redis-land (there are also many installations with real vulnerabilities, because Ubuntu's redis-server package in 14.04 hasn't been patched), because people rarely think about Redis auth either. Just since last year, there's a feature called "protected mode" that requires the user to explicitly disable a flag if they intend to run a server that is a) bound to a non-loopback interface and b) not passworded (but it doesn't actually require anyone to set a password). That's a cool feature, but as users don't RTFM, I'm sure they either flip protected mode off without understanding what it does or just give up on Redis and install MongoDB instead. :)
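The relevant redis.conf directives, for reference (protected mode shipped in Redis 3.2; the password value is obviously a placeholder):

```
# redis.conf (Redis 3.2+)
bind 127.0.0.1          # listen on loopback only
protected-mode yes      # without a password, refuse non-loopback clients
requirepass use-a-long-random-password-here
```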
The point here is that pretty much everything you can install on a server is passwordless by default. I don't think the system of someone who never thinks about passwords would last very long at all. This is NOT a MongoDB-specific anomaly. Passwordless is the typical case for most services.
A feature like Redis's protected mode is a nice bonus, but it's not fair to lay responsibility for this at the feet of Mongo or to call it a "vulnerability", as the OP does. The responsibility lies with people who say "I'm going to put all of my company's data in here and push it live without thinking even a tiny bit about security."
Do a lot of people do that? Yes. Should MongoDB take more steps to protect such people from themselves? Maybe. But this is a very basic, very routine thing, which is easy to look up and rectify -- not some obscure anomaly buried under 600 pages of documentation or requiring a special compile-time flag. Mongo doesn't deserve the heat for people who never set up a password.
Having something listening on a public IP without any password protection is irresponsible, whether it's a database or a security camera. The last time I installed MySQL, I had to both set a root password and specify a socket to listen on; it took five seconds, and no documentation reading was needed, so it is possible to be both user friendly and have good security. With MySQL you also have to explicitly set which IPs a user can connect from.
Secure should always be the default! Making it not secure should require messing with config files, not the other way around. Most people will just stick to the defaults. There are plenty of examples where you do not need a password, for example opening your fridge or turning on your TV, both of which may be connected to the Internet. Being able to connect to them from your friend's place should ring a bell, but I don't think most people try that. Just like people don't regularly port-scan their networks.
This is not the case in the United States at all. In fact, it's the opposite. ISPs used to only offer modems that were intended to plug directly into one's computer, which would bridge to the computer's ethernet interface and expose it directly to the internet. Third-party routers would be placed in the middle to allow multiple computers to connect and they almost always include a firewall that has everything blocked by default.
Recently, ISPs have been bundling their modems into routers and providing customers with a single device that facilitates multiple computer access, wifi, etc. Every time I've encountered such a thing, it has had a firewall that blocked everything (except maybe a port for remote tech support) by default. This is by far the most common.
IPv6, as always, is a pipe dream, but when it becomes real, yes, we will not have the assumption of NAT to fall back on -- though it's pretty safe to assume that consumer-level hardware will continue to block all ports by default.
>The last time I installed MySQL, I had to both set a root password and specify a socket to listen on; it took five seconds, and no documentation reading was needed, so it is possible to be both user friendly and have good security. With MySQL you also have to explicitly set which IPs a user can connect from.
This is not a feature of MySQL, but a feature of the distribution you installed it on. It is true that MySQL's stock configs on some distros don't listen on all interfaces by default. IP-based security generally means that every user is wildcarded by host, and MySQL auto-generates a wildcard root account without a password, IIRC.
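Concretely, the distro-level part is usually just one line in the MySQL config (Debian/Ubuntu ship something like this by default; the file path varies by release):

```ini
# /etc/mysql/my.cnf (Debian-style default)
[mysqld]
bind-address = 127.0.0.1   # listen on loopback only
```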
>There are plenty of examples where you do not need a password, for example opening your fridge or turning on your TV, both being connected to the Internet.
MongoDB is not the equivalent of a fridge or TV. MongoDB is the equivalent of a power saw. If you can't be arsed to take basic precautions while using it, you really shouldn't be anywhere near it.
And while many TVs and fridges may have an annoying built-in wifi feature now, most people do _not_ think of these in the same context as server software. :)