
NewsBlur's founder here. I'll attempt to explain what's happening.

This situation is more of a script kiddie than a hacker. I'm in the process of moving everything on NewsBlur over to Docker containers in prep for the big redesign launching next week. It's been a great year of maintenance and I've enjoyed the fruits of Ansible + Docker for NewsBlur's 5 database servers (PostgreSQL, MongoDB, Redis, Elasticsearch, and soon ML models).

About two hours before this happened, I switched the MongoDB cluster over to the new servers. When I did that, I shut down the original primary in order to delete it in a few days when all was well. (Thank goodness I did that! It'll come in handy a few hours from now).

Turns out the ufw firewall I enabled and diligently kept on a strict allowlist with only my internal servers didn't work on a new server because of Docker. When I containerized MongoDB, Docker helpfully inserted an allow rule into iptables, opening up MongoDB to the world. So while my firewall was "active", doing a `sudo iptables -L | grep 27017` showed that MongoDB was open to the world. More info on SO[1].
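
If you want to check your own boxes: ufw's status won't show any of this, but Docker's own iptables chains and an external probe will. A rough sketch of the checks I wish I'd run (hostname and port are placeholders for your own setup):

    # on the host: ufw says "active", but the DOCKER chain that Docker manages tells the real story
    sudo ufw status verbose
    sudo iptables -L DOCKER -n -v | grep 27017

    # from a completely separate machine: does the port actually answer?
    nmap -Pn -p 27017 your-server.example.com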

To be honest, I'm a bit surprised it took over 3 hours from when I flipped the switch to when a script kiddie dropped NewsBlur's MongoDB collections, and ransomed about 250GB of data. I am now running a snapshot on that old primary, just in case it reconnects to a network and deletes everything. Once done, I'll boot it up, secondary it out, and be back in business. Let's hope my assumptions hold.

[1]: https://stackoverflow.com/questions/30383845/what-is-the-bes...




I think there are some good lessons here:

1. Even if you have one way to protect your database (e.g., firewall rules), you should have another. In this case, use a database password or (better) a client TLS certificate to authenticate traffic (see the sketch after this list). We're all human and we mess up. You should be designing systems that are graceful in response to your inevitable mistakes.

2. If you can afford another server/a hosting provider with VPC-like features, don't put your databases on the open internet. Run them in a private network (RFC 1918), behind a NAT and a load balancer/entrypoint that only routes to your internet-serving applications. Allow only those application servers to hit your databases. If you had done this, the attacker wouldn't have noticed your mistake, because all they could hit was your public server.

3. Keep regular backups. Oh, and test them! If you don't test your backups, you don't have backups - you have archives. In the GitLab data loss incident [0], they had 3 methods of data backup, but all of them failed. A regular test would have discovered this. Don't make that mistake.
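
To make (1) concrete for MongoDB, here's a minimal sketch of what the auth + client-TLS config might look like (assuming MongoDB 4.2+ option names; the bind address, paths, and CA are placeholders for your own setup):

    # /etc/mongod.conf (sketch)
    security:
      authorization: enabled            # require users/roles for every connection
    net:
      bindIp: 10.0.0.5                  # internal interface only, never 0.0.0.0
      tls:
        mode: requireTLS
        certificateKeyFile: /etc/ssl/mongod.pem
        CAFile: /etc/ssl/internal-ca.pem  # clients must present certs issued by this CA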

Good on you for sharing the events as they happen. I think people tend to be much more forgiving in response to openness. Don't freak out, and write a public postmortem when you're done.

[0]: https://about.gitlab.com/blog/2017/02/01/gitlab-dot-com-data...


Just want to yes and you.

In general, put everything in private subnets, and make sure the only way any traffic can get to a server is through a load balancer. There are very few reasons for a server to have its own public IP address, and using your load balancer as a chokepoint means you can set up layers and layers of redundancy to prevent traffic from ever being able to reach a database under your control.
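
As a concrete sketch of the "only the app tier can reach the database" part (AWS CLI; the security group IDs and port are placeholders):

    # allow Postgres traffic into the DB security group only from the app servers' group,
    # never from 0.0.0.0/0
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0aaaaaaaaaaaaaaaa \
      --protocol tcp --port 5432 \
      --source-group sg-0bbbbbbbbbbbbbbbb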

This holds true whether we're talking RDS DBs, something you've spun up in a K8s cluster, or something you're running on a vanilla compute box.

Assume you will screw up at some point and open a port that shouldn't be open. Ask yourself, "what's the impact here?" If you're OK, even with a fat-fingered port, great.

Assume you will have a dev deploy a DB without a password. "What's the impact here?" If you're OK, even without a password, great!

That isn't to say that the above two scenarios are something you should tolerate, but automation can help detect these sorts of issues and make it easy for you to resolve them. While that automation is running, you want to make sure you're not going to get owned.

Kudos to the OP for sharing their story. Always lots to learn on this front, and we can always get better at our cloud operations.


Defense in depth... VNets, TLS certs, RBAC etc.


Layer 4 and/or 7 proxies -> private networks -> services. Auth and encryption all the way on everything. IP whitelisting where suitable. Pretty much the standard way of building for the web where I live.


"Private subnet" is just one way to accomplish it - what you want is a firewall that drops inbound traffic by default, and private subnet/NAT is one common way to do it that does have some downsides (extra complexity - you need a NAT gateway now).


+1 on the private subnet.

You want to design your system so that if a critical network misconfiguration occurs you don't open yourself up - you simply stop working.

(Too many years chasing EMR clusters getting dropped onto the internet by users with AWS console access).


I agree with only the third of those. The other lessons I'd take would be:

1. Be cautious about trendy technologies that promise to make life easy - often they cut corners to do so, and often security is one of those corners

2. Use real authentication rather than network firewalling. Make your datastore TLS-only and require a valid client certificate to connect; that way it doesn't matter if it's exposed to the internet (indeed I'd argue you probably should expose every server to the internet - much like Chaos Monkey, it's counterintuitive but it forces you to build your systems with the right kind of resilience from day one)


Yeah, there's the whole "zero trust" possibility which I didn't mention. If you do authentication/authorization really well, you can stop doing (2). For things like databases, I think it's better to treat them as if they _could_ be exposed to the open internet, without actually doing so. It's generally not the case that anyone on the internet needs to query your DBs.

As described, you're _only_ relying on client TLS to protect your database. What if the TLS key leaks? So you're down to one layer again - one mistake and it's game over. So maybe you need Hashicorp Vault so you can have client certificates with very short validity periods. So do you expose Vault to the Internet, so you can fetch your client certificate to query the DB? What if Vault has a bug? And it's turtles all the way down ...

I love zero trust designs. But I think saying you can just slap client TLS on the problem and be done is laying the foundation for a repeat of this event - however you look at it, you want redundancy in your security.


All internal services should be protected, even on your home network.

If you have 100 devices on a network where everything is unprotected, that’s 100 different ways someone can try to get full access to 99 other devices.


At every layer of “breakage”, you are cutting down the chance for an attacker to access your system. While it’s possible the failures might cascade, it seems likely that these failure modes are pretty independent of each other.

You’re simply pointing out that no amount of layers are foolproof. The goal is not to reduce failure to 0% but to 0.001%.


I think you misunderstood me - I mean that you want multiple layers to your security, and you need to be careful to select layers that fail independently. If Vault has a sufficiently bad unauthenticated bug, then attackers can simply use it to request database credentials and query the DB, which is now on the open internet.

An easy way to get independent failures is to layer a private network with strong auth and firewall rules. While it's certainly possible to expose your DB to the internet safely - given sufficient protections - you won't get that with just a TLS key. And even if you try to implement the "obvious" additional layers here (Vault, right?), it's easy to inadvertently include design problems that reduce to "only one failure and the system is exposed."


My view is that for most cases the cost/benefit of multiple layers doesn't stack up. Given a fixed amount of available time and effort, you'll generally get better results by focusing that effort on making one really good layer - e.g. putting active monitoring in place so that you detect when your single layer breaks (whether that's an attacker from outside the network being able to connect to inside the network or an attacker with an expired certificate being able to connect to a live system) - than by spending half as much time each on two layers, IME.


Unless this is literally running on a single 1U host in some colo, there is no excuse for not having defense in depth. An old school DMZ if you are in a datacenter. A VPC if you are on the cloud. Then client certs for everything. Two factor for ssh. Auditing. These are straightforward to set up, with different options if you have time-but-not-money or money-but-not-time. If you have not-money-and-not-time then this is probably going to end badly no matter what.


> there is no excuse for not having defense in depth. An old school DMZ if you are in a datacenter. A VPC if you are on the cloud. Then client certs for everything. Two factor for ssh. Auditing.

Maybe there is no excuse, but literally every company I've worked for (including Fortune 500s) has been missing at least one item from your list. So "industry best practice" means committing less time and money than it would take to implement all those things properly (rightly or wrongly) and you need to triage and prioritise.


So the excuse is "my company doesn't take infosec seriously". Like every similar issue of "should do but don't" (testing, formal promotion processes, diversity, harassment response), you get to decide to tolerate it or get a job elsewhere. My experience is that the companies that take this stuff seriously also do a better job of converting my skills into cash, and as a result, pay me better.


Interesting. My experience is that companies that "move fast and break things" have been better at making money and paying me (whereas I personally lean too far in the perfectionist direction). Interesting businesses face a wide range of risks - competition, regulatory, market - and infosec is rarely the biggest one IME.


I just want to say that I think this is absolutely a fair approach for many systems, but it sounds so radical against the backdrop of perimeter security that you're catching undue flak. You need to take a lot of care to build and design the "one layer to rule them all" in a way that's ridiculously sound, but this can actually produce a strong design if you pull it off. I didn't give this advice because I looked at where NewsBlur is today and figured they had some learning curves to get over before they could make the right tradeoffs to do it safely. If they got that much wrong with their DB, they might not know how to design a sane zero-trust network, or even how to make rational decisions in that world.

You see this "authN/authZ above all else" line of thinking in Google's security design [0], with their ubiquitous login wall. For their employees, that login wall has extra hardening - you can't do regular password resets, and you need to possess a physical security key (which acts basically as a scaled-down single-purpose HSM) and pass a suite of posture checks on the device (proportional to the sensitivity of the protected resource) to pass through it.

Then they put this login wall in front of everything, even internal services, and that tends to be OK because the system "fails closed," and the only way to access their protected resources is via physical safe rooms deep in the Google offices in which elevated privileges may be obtained.

Putting all your systems on the Internet forces you to get the incentive structure right, and that can be useful in companies where "private networks" can serve as justification to weaken security - if your login page is Internet-exposed, then you simply have no choice but to make it strong enough to withstand the chaos of the Internet. There's significant merit in that.

[0]: https://sre.google/books/building-secure-reliable-systems/


> 2. Use real authentication rather than network firewalling.

This is in vogue but it’s wrong because it presents this as an either/or.

In reality, people are going to goof up an auth flag at some point, accidentally bind a service, or just run a service with a 0-day (a.k.a. everyone).

There is no reason to run a server accepting traffic from every IP if your clients are coming from known ranges.

People see the zero trust model and mistakenly think it means no network-level filtering. This is completely wrong and all of the big players still protect backend services with network ACLs on top of required auth.


In reality time and attention are limited resources. Does network-level filtering have a good RoI relative to putting the same effort into improving security at the service level? IMO no (at least not until you've reached a very high level of service security where you're hitting severely diminishing returns) - network-level security is necessarily at least one of a) crude b) complex enough that it becomes an attack surface itself - though reasonable people can disagree.


You are looking at it as network-level filtering.

I am looking at it as: make a private network, and then you have to explicitly add a gateway (usually a load balancer) that can reach it.

In short, don't design a system where you need to filter; design the system where you have to take explicit action to make something public.

This is easily done on something like AWS.


The very concept of a "private network" leads you down the path of making a security boundary that has far too large a surface area, IME. Either it becomes a big bag with all your hosts in it, and you write systems that trust all the requests they get (even if you know you "shouldn't") because you know only your systems are on the network, and then an attacker figures out how to make one of your systems make an arbitrary request and you get owned. Or you put each component on its own private network but then you have to open up every port you're actually listening on so that your components can talk to each other (and you probably automate that in your kubernetes/puppet/whatever setup, like what happened here with docker) and the private network does nothing.


An independent firewall accomplishes the same even if every server has a public IP.


Respectfully, all of the suggestions are good, as are your two. But, segregating your public traffic and private traffic is an excellent way to prevent these sorts of issues from happening. While it doesn't prevent someone who's compromised your network from accessing the DB, it means someone can't fat finger a port and open a DB to the public internet.


Well, your second point is his first point.

And his second point is table-stakes as far as I'm concerned. As others have said, do all of them. They are not hard to do.


What strikes me about your reply is that 1, 2, 3 are all absolutely basic table-stakes things. SQL injection mitigation level things.

Have we suddenly stopped teaching the basics?


You're downvoted but you're right - this is entry-level stuff and it seems to get routinely not taught or simply forgotten and ignored.

Everyone seems to want to worry about “nation state actors” and getting hit with novel 0-days when in fact missing the basic low-hanging fruit is likely to result in far more damage


I'd also suggest using another server/VPS to monitor your setup (availability, performance, ...), and including an open-ports check in the monitoring tools. Having an external check of your usual ports (those that should be closed) is a good way to find out when something is off.


Another takeaway, IMO, use infrastructure-as-code to define the components in your infrastructure, so something like this critical firewall configuration won't be missed


Docker will happily override your firewall rules even with infrastructure as code. We block the AWS/GCP metadata IP to avoid potential exploits using that to pivot, but Docker also manages to override that sometimes, so we have a cron job that re-applies it.
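
If it helps: the DOCKER-USER chain is documented as the one place Docker won't rewrite, so the block may hold there without the cron job (a sketch, untested against your exact setup):

    # drop container traffic to the cloud metadata endpoint; DOCKER-USER is evaluated
    # before Docker's own forwarding rules and isn't flushed when Docker restarts
    sudo iptables -I DOCKER-USER -d 169.254.169.254 -j DROP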


Another important missing point is not having the Docker daemon itself publicly exposed. Just reach Docker through an SSH tunnel (e.g. via rdocker) or a VPN.


Can I also add: don't run MongoDB in Docker containers. Use MongoDB Atlas or switch to AWS and DocumentDB with a proper VPC and a network-level firewall (security groups).

@conesus It's just not worth using discount hosting providers for this exact reason. Use a M(icrosoft), A(mazon), or G(oogle) cloud; yes, it costs more... but your time is worth it!


Why not run it in containers? What's the downsides?

I mean, if you can reliably run Kafka or Postgres in containers, what's sufficiently different about MongoDB?

Or are you talking about using Docker itself as the container host as opposed to K8s or ECS etc.?


I don’t get why the original response is downvoted. Of course you can run those in Docker containers, but it is generally suggested that you don’t. From my experience I would run stateless services in containers and persistent storage in VMs, dedicated servers, or cloud services.


People have been running database instances just fine on their own for decades, without the help of "big brother".

There are reasons why running a high performance database instance in containers is problematic, but security is not one of them - not any more so than application containers.

You just need to know what you are doing, it's not a black art.


Truth - I personally avoid doing so, but that’s only because I prefer not to introduce that extra layer of complexity into my stack due to lack of full understanding of the technology. There’s nothing stopping me from bothering to learn all the trade offs and pitfalls and doing so, but so far there hasn’t been enough of a compelling reason for me to go through that effort. I just go with the general advice “you usually don’t want to dockerize stateful applications” and leave it at that.


I was not referring to security, otherwise completely agree.


> it is generally suggested that you don’t.

What's the difference between a container and an instance in a cloud service?


Oooph, good luck. And when you have time please make Docker aware that this well known foot-gun has finally done serious harm. They have known and ignored for years that iptables/ufw on Linux is totally broken and wide open when using Docker: https://github.com/moby/moby/issues/4737


Glad someone else highlighted this old ticket.

I bet this, in combination with the extremely irresponsible decision to ship MongoDB without auth by default, has caused countless data leaks and destruction events. We just haven't heard about most of them. Elastic provides the same foot-gun.

Last year someone deleted almost 4000 open mongodb and elastic databases in what was called the Meow attack [1].

In my opinion it's as irresponsible as HW manufacturers shipping with default passwords, something which finally got the attention of regulators[2]. So I wouldn't be surprised if we at some point see some attempts to keep sw-developers accountable for what they give out.

I have for some time been using the data-oil analogy to describe where we are at. If data is the new oil, and a database is a tanker, then we are at the single hull tanker stage. We need double hulls, but just like the oil industry, the sw industry has little incentive to fix it themselves. I am hoping we get some regulation which improves the situation, because otherwise this will keep happening.

1: https://www.bleepingcomputer.com/news/security/new-meow-atta...

2: https://techcrunch.com/2018/10/05/california-passes-law-that...


Yeah, it seems like there's a weird inbetween phase when projects go from "awesome tool used and loved by some core people" to "this is the new normal, it's everywhere" where these issues get lost. I could see back in 2014 moby not really feeling like the quirks of ufw & iptables were its problem. But now in 2021 with how many millions of times docker run is used per day on machines all across the internet... it's just irresponsible security. It sucks, it isn't really docker's problem in the first place... but it's reality and someone needs to grab the hot potato and keep it from burning people for the good of us all.


I know the CICD code I wrote and manage at work launches approximately a million docker containers a day, so I suspect the total number of docker containers used per day is well into the billions.


Do tell where you would need a million Docker containers a day for a CICD pipeline... It's either many apps, or some very complicated pipelines.


We have a suite of about two thousand integration tests and we run each test in its own docker container for isolation purposes. Multiply by 500 jobs a day and you get to a million containers.


Or many builds of the same app. Think compatibility matrix testing: take one suite of tests and run them against every one of hundreds of permutations of versions of their dependencies, one container each.

Not GP, just hypothesizing. This is what Travis matrixes do, for example.


I agree, there is a huge difference between what one should expect from an early OSS project and what one should be able to expect from billion-dollar companies.

But it seems that without external pressure (regulatory?), it's not getting fixed at either end.


I spun up Mongo on a cloud VM a while back to assess viability/suitability, it was Meow'd within 30 seconds, absolutely insane.

I shut it down and moved on, we don't use Mongo to date.


Did you follow our guidelines? https://docs.mongodb.com/manual/administration/security-chec...

You must have compromised the binding to localhost in some way to allow this to happen as MongoDB only listens on localhost by default.


> You must have compromised the binding to localhost in some way to allow this to happen as MongoDB only listens on localhost by default.

Serious: Listening on localhost-only works in dev environments only. In production, it is not the norm to run the application on the same host as Mongo, especially given what a resource hog Mongo is. So, for practical purposes, listen-on-localhost is actually an obstacle that needs to be disabled first thing anyway.

You guys know this too, because you do exactly this (and a lot more) on that Atlas thing you guys love to upsell everyone and their grandmother on.

Honestly, it is telling that this is the only defence you could provide — that you listen on localhost — and not anything _actually_ secure in prod. Must come in handy when upselling Atlas, I guess, that your default configuration conveniently omits everything.

> Did you follow our guidelines? https://docs.mongodb.com/manual/administration/security-chec...

Snark: maybe they couldn't trust your guidelines because MongoDB the company is a known blatant liar[1].

1: https://twitter.com/jepsen_io/status/1255867265997844484


There wasn't a lot of information in your previous post. As I pointed out there are a comprehensive set of guidelines for enforcing security. Our defaults make it difficult to accidentally expose your data these days. However if you do add a MongoDB database to a public IP address we strongly encourage you to add a strong password. Better still do not expose your database on the public internet. Put it behind a firewall with auth enabled, secure it with a certificate and only allow access to named IP addresses.


> There wasn't a lot of information in your previous post.

As the first paragraph of my comment says: listen-on-localhost is untenable in production. Unless, of course, you guys seriously believe people should be running their production applications on the same host as mongo daemons. Honestly, I wouldn't be greatly surprised if you guys believe that.

> Our defaults make it difficult to ...

You have one (1) default that does that. Singular. None of your other defaults do that. And, as I've said above, that one (1) default is also useless, because it's one of the first things that need to be disabled in production anyway.

> However if you do add a MongoDB database to a public IP address we strongly encourage you to add a strong password. Better still do not expose your database on the public internet. Put it behind a firewall with auth enabled, secure it with a certificate and only allow access to named IP addresses.

Everybody knows this. You aren't adding anything new. Nobody's claiming MongoDB _cannot_ be secured. Everybody knows that it can be. The question, instead, is: why does every user of MongoDB even need to make it secure?!

I doubt you can answer that honestly, but plenty of us suspect we know it anyway: because MongoDB Inc. "cares" a lot more about developer experience, than it does about their data.


Postgres containers won't start unless a password is set. Be like postgres.
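
For reference, this is roughly how the official image behaves (the password value is a placeholder):

    # starts, because a superuser password is provided
    docker run -d --name pg -e POSTGRES_PASSWORD=change-me postgres:13

    # refuses to start: no POSTGRES_PASSWORD and no explicit opt-out
    # (you'd have to set something like POSTGRES_HOST_AUTH_METHOD=trust to get passwordless access)
    docker run -d --name pg-no-pass postgres:13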


Given the famed hatred MongoDB Inc. has for Postgres, I half suspect they might be doing this partly out of pure spite towards Postgres. "Eww, no. We won't do that. _Postgres_ does that."


> Listening on localhost-only works in dev environments only

You're awfully arrogant for someone who has no clue how to properly architect systems.

If you're using Kubernetes then it's very common to have a service mesh in a Production environment to enforce certain safeguards e.g. mutual TLS and provide circuit breaking, auditing, logging etc. In which case MongoDB would be running on localhost.

If you're not using Kubernetes then it's also common to have some form of middleware to achieve the same as above e.g. HAProxy, F5. Again, in which case MongoDB would be running on localhost.


So I need either 1. Kubernetes and a service mesh; or 2. HAProxy or F5; just to secure access to a database?! Especially when the database is already capable of TLS mutual auth?! Is this what your claim of skill in "how to properly architect systems" comes from?! Needless over-architecting?

Look, I've done my fair share of fronting services (including DBs) using TLS/SSH proxies and load balancers. When I needed them. But the question isn't whether any of that can be done or needs to be done. The question is: Why does MongoDB, which has all of this security support built in, not enable it out of the box?

And your answer to that is ... throw more stuff on top of it?! Are you seriously claiming that every production user of MongoDB should take on so much software surface area just to fix the broken defaults Mongo ships with?! This is worse than what even MongoDB Inc. does; at least they document how to enable security (lol, what a concept) and merely automatically blame the user for all lapses.

And no, k8s service meshes and LBs in front of DBs aren't nearly as common in production as you're claiming.


I've worked for a number of the Fortune 10, banks, telcos etc.

Everyone has put some sort of middleware between their applications and databases.

Your claim that no one is running databases on localhost is simply your ignorance.


> I've worked for a number of the Fortune 10, banks, telcos etc.

Are you seriously claiming that the only production users of DBs are "Fortune 10, banks, telcos"?! Or are you claiming that because those guys do something a certain way, everyone else must also do it like that?! This is a weird variation of the Argument to Authority, and even more flawed than the original.

> Everyone has put some sort of middleware between their applications and databases.

Really? "Everyone"? Or are you just generalising the state of the entire industry based off of your limited experience with a small number of players in it?

> Your claim that no one is running databases on localhost

I made no such claim. I said it's "not the norm" and that it's "untenable", not that "no one does it". It doesn't matter that there are a few examples you've seen that do; they're not representative of the entire industry.

On the other hand, for the vast majority of the industry who run databases in production, my claims hold.

The vast majority of the industry, that does not include "Fortune 10, banks, telcos".


I ran the Docker image, I'm not sure which docs I followed, it was a while ago, but in this case listening on localhost doesn't really apply.

Docker, as we know, will open exposed container ports to the world. That alone shouldn't be enough to have your instance compromised in less time than it takes to enter an iptables rule correctly, or read the guidelines.

I'm not trying to place blame, it was an exploratory endeavour anyway, but Meow existed because security in Mongo is a guideline and not a rule.

As someone who builds secure software solutions for a living it doesn't thrill me that security is often an "optional extra" (looking at you elastic).

If I asked our customers/users the same question you just asked me, and then followed it up with "You must have compromised...", I'd be in hot water.

A combination of factors contributed to us choosing not to use Mongo at the time, if we have such a need again, it will be considered without prejudice.


That "you must have compromised.." was in relation to the comment by pritambaral not your overall analysis.


It is demonstrably obvious this is a lie:

The first time I commented in this whole discussion was _in reply to_ your "you must have compromised ...".


> Docker, as we know will open exposed container ports to the world

Only if you choose to explicitly expose them.

In which case the fault is entirely with you.


> So I wouldn't be surprised if we at some point see some attempts to keep sw-developers accountable for what they give out.

Wow, the level of entitlement around open source is appalling. People download and run shit for free and get mad that they didn’t configure it right and want the government to go after the maintainers.

I can’t wait for the day where us open source devs have to contribute patches via pseudonyms and tor because they aren’t “government compliant”.


This isn't an entitlement issue; it's a public safety issue.

If some philanthropist were to give out free bicycles to everyone, but it turns out that unless you tighten a bolt one of the main support bars will likely snap and could even impale you, the government would rightfully go after them regardless of any "without warranty" disclaimer or EULA.

This is already well-trodden legal ground in the physical space; it's just a matter of bringing it into the virtual space.


It will kill open source. And a lot of other small company software development.


No it won't. The example GP has given didn't kill any industry either.

Open source projects seldom get popular overnight. Responsibilities grow as you get more popular, that is all...


Software is very different than making bicycles. Anyone can make an open source project in their attic and promote it. You cannot do that with most (or any) other of those other examples as it takes actual factories etc.

That is the GP example. If I make 1 bike by hand and put that somewhere on the street, and someone takes it and breaks their legs, that is not my fault. Nor should it be. If I sell that 1 bike, that can be another story.

It will kill open source imho, as if you, in your underwear in the attic, are responsible for some code you write and put on GitHub, you won't write that code anymore because of the risk; but I don't want to find out who is right here.

Responsibilities should befall the person deploying the software. Aka, if you write some software and throw it on GitHub, that should not do anything. But if I take that software and put it in my app or backend and it takes people's money because of some bug, then I am the one who is responsible. And this is already the case anyway. No changes are needed.

I think we agree though as I think we are talking about the person who writes and deploys/sells the software: they should indeed be responsible and already are.


Ok, so now it's the manufacturer's fault if you expose containers without authentication on the internet, and users are fine with just "believing" that their firewall works as they expect without even testing it: that's perfectly fine, it's the manufacturer's responsibility!

Except with GDPR, which makes you responsible for having regular security testing. I mean, just an nmap after changing your infra takes 2 minutes, and yet when you don't take those 2 minutes you find a way to blame the manufacturer. Don't you /know/ that bots are scanning the internet for open unauthenticated services, and you still want to be taken seriously? Back in the day every kid was doing it to host warez, you know.


I am definitely not arguing users should not be responsible for what they do. My point is that companies (especially billion dollar companies) can, and should be (more) secure by default.

I am also making the observation that this does not seem to get fixed by itself, rather it seems it needs regulation, which includes GDPR.


I’d argue that anyone that uses Mongo given those defaults is equally culpable


Mongo recruiters often reach out to me, I tell them that I'd never get that stain off my resume.


We simply stopped using Docker's control of the firewall. It also takes docker-proxy out of the equation so you have far more control and it's less resource intensive (docker starts a proxy instance per port exposed).
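
For anyone wanting to do the same, this is roughly what it looks like in /etc/docker/daemon.json (a sketch; note you then have to write your own NAT/forwarding rules for anything you actually want published):

    {
      "iptables": false,
      "userland-proxy": false
    }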

I've commented here in the past about my feelings on Docker's attitude towards real issues raised by users, the only way I can describe it succinctly is "contempt".

There are some serious flaws, with tickets that are the best part of a decade old now; one simply can't rely on them to do anything.

Stop using Docker or solve the issue yourself is my advice.


This massive PITA is why I switched to using Hetzner's upstream colo firewall instead of on-host iptables on my boxes there running Docker. It's lame and also it's not even stateful, but at least the garbage software coming out of Docker Inc these days can't accidentally open my system up to the world.

The userspace proxy used by Docker Swarm is still trashing my L3 client IPs though, yet another docker networking hassle.

Ultimately solely relying on an L3 firewall for access control is a bad idea, though. Your services should all have authentication turned on, even if they are only bound to localhost, for reasons which at this moment must be obvious.


It is 100% unfair to blame Docker for this foot-gun, especially because they cannot really do anything to fix it, because that's how the firewall works in the Linux kernel. Look: Podman has exactly the same issue when not running rootless.

The root cause is that IP packets going to the containers are not going through the INPUT chain of the "filter" table (they go through FORWARD), while various firewall projects like ufw or firewalld only provide convenient management of the INPUT chain, and, worse, don't even provide a good way to express the notion of "packets going to external port 27017 and then forwarded" (i.e. the equivalent of the --ctorigdst option provided by raw iptables).

Here are some options for container engines:

a) Use only a userspace proxy (which is what Docker does when configured with {"iptables": false}). This way, there is no packet forwarding, so packets go through the INPUT chain, just as expected by high-level firewall packages like ufw or firewalld. The major downside (which makes this method useless e.g. for containerized mail servers) is that the information about the source IP is lost, and there is no good way to fix this. I guess TPROXY can help here, but nobody uses it.

b) Use slirp4netns (which is what Podman does when running rootless). It has a really nice mode (available via "podman run --network=slirp4netns:port_handler=slirp4netns") where on the host side, there is only a userspace process listening (so that packets go through the INPUT chain), but inside the container, packets going out of tap0 have the correct source IP. The downside (actually a Podman limitation) is that you can't set up multiple containers communicating over internal IPs.

I would say that I am not really in favor of options (a) and (b) because of the overhead created by the proxy or by slirp4netns. If port forwarding can be done in the kernel (and it can, the only missing piece is --ctorigdst in high-level firewalls), it should be done in the kernel.

c) Document the situation better (e.g. I don't see the --ctorigdst option mentioned at all on https://docs.docker.com/network/iptables/), shift the blame to firewall authors so that they start creating a duplicate of each INPUT rule also in the FORWARD chain with --ctorigdst added as necessary (see the sketch after this list for what such a rule could look like).

d) Provide usable primitives (similar to Network Policies in Kubernetes) to control the firewall for containers, so that ufw or firewalld is not needed.
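
To make (c) concrete, the kind of rule a firewall front end would have to generate in the forwarding path looks roughly like this (27017 and the subnet are just example values):

    # match on the port the client originally dialled (before Docker's DNAT) and
    # drop anything that didn't come from the internal network
    iptables -I DOCKER-USER -p tcp -m conntrack --ctorigdstport 27017 ! -s 10.0.0.0/8 -j DROP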


My point is that this is a well-known issue with absolutely catastrophic results when you get it wrong. It would cost the docker app almost nothing to do a quick check on docker run to see if you're on a machine with iptables & ufw enabled, and then violently complain or fail in a very obvious way, like:

*** WARNING YOUR FIREWALL ISN'T WORKING!! RUN AGAIN WITH --my_firewall_is_broken_and_I_accept_the_risks OPTION TO CONTINUE!!! SEE THIS FOR MORE INFO: <hyperlink to docs> ***


Docker could also enumerate the IP addresses on the host, and alert if there are any that aren't in RFC1918 space.
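
A rough manual version of that check, assuming iproute2 and GNU grep (IPv6 and CGNAT ranges ignored for brevity):

    # list global IPv4 addresses and warn if any of them isn't RFC 1918 space
    ip -4 -o addr show scope global | awk '{print $4}' \
      | grep -vE '^(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.)' \
      && echo "WARNING: this host has a publicly routable IPv4 address"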


Horribly contrived - but you can run rootless docker with slirp4netns if you:

* Install slirp4netns

* DOCKERD_ROOTLESS_ROOTLESSKIT_PORT_DRIVER="slirp4netns"

* Upgrade to rootless docker 20.10.7 or higher (was bugged in 20.10.6. Yup.. bad.)


This has also hit me hard, when it opened up Mailcow's internal network, which isn't protected against being an open relay - creating an open relay and almost getting me banned by Hetzner. I try to avoid Docker when not absolutely necessary, just to keep my stack as simple as possible.


Yeah, configure `ufw default deny incoming` and Docker will sneakily bypass that and give the internet unfettered access to your ElasticSearch and MongoDB and whatever. And somehow, that's apparently "not a bug". It's one of the biggest footguns I know of, and I could not believe it was expected behavior when I first encountered it.

We have a strict policy that servers must not have a routable IP address at all, and must have an external firewall applied. It's a good idea in any case, but turns out it's absolutely necessary with Docker.


This actually got me a while ago but with redis and some script kiddy turning my dev server into a bitcoin miner.

Anyone else running docker and using iptables really needs to read this https://docs.docker.com/network/iptables/


The insidious thing is that there's no indication, failure, log or anything to tell you something is out of the ordinary either. It would be one thing if it just exploded and failed to run, but it's even worse that it silently interacts with iptables & ufw to allow all the traffic through. The exact opposite of what you want or intended.


Only reason I noticed was a command was taking slightly longer than normal and I checked htop and saw redis using 100% cpu. Absolutely insane this is the Docker default.


I’m kind of shocked this is even deemed acceptable architecture. You’d think docker wouldn’t even touch iptables unless explicitly told to.


It's how they can make containers feel like isolated little subnets without resorting to vxlan or other kernel-level stuff. It's a great development experience and I'd be sad to see it go. But.... it really needs to proactively detect and warn users. The issue has been known for many years. A quick little check and error out on startup if you're running on Ubuntu or have ufw enabled would probably save 99% of the pain people have had with it over the years.


It works fine when users create docker networks for their containers to communicate, which docker-compose does by default, instead of publishing ports on 0.0.0.0 like yolo


I would say that this is the expected behavior. Also, I don't think one should rely on firewalls in this way.


Relying on firewalls to do what firewalls do and have done and continue to do seems perfectly acceptable. Yes, your database should have authentication enabled too, but expecting ports to not be unexpectedly open is the entire point of firewalls.


Kind of what I said. Firewalls are for blocking unwanted traffic. It should not be used as a replacement for other security measures. "unexpectedly open", well, there I simply disagree.


So if you have a firewall set to block everything, and you run a docker container that listens to your global IP, you expect it to magic your firewall for you?


Yes. I would assume that any service platform or services for that matter may have open ports as default and that you should place it in a private network with a proxy in front of it.


Redis can be exploited to run executables ???


From antirez the guy who wrote redis's blog:

'The Redis security model is: “it’s totally insecure to let untrusted clients access the system, please protect it from the outside world yourself”.' -- http://antirez.com/news/96

That blog post also helpfully shows how to write your own key into .ssh/authorized_keys so you can log in as the redis user over ssh. From there, use your favourite local priv escalation bug to p0wn the box completely. (Or just run your cryptominer as the redis user...)

Note: that's about 5 years old now...


Holy shit! thanks for the details


Yeah - kinda crazy there's no auth by default AND eval is allowed. Pretty trivial for someone to have it download a script and run it pretty much with free rein.


Redis doesn't accept unauthenticated external connections by default for a while now, specifically to try and eliminate this footgun.

https://github.com/redis/redis/commit/edd4d555df57dc84265fdf...
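
For reference, the shipped defaults look roughly like this these days (redis.conf; exact wording varies by version):

    # only listen on loopback unless explicitly reconfigured
    bind 127.0.0.1 -::1
    # refuse external clients when no bind address and no password have been configured
    protected-mode yes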


I had an issue where I used the redis docker image and didn't understand docker networking properly so I set the network mode as host so my other container could connect. Not knowing this had exposed redis to the world unauthenticated (in about 2018).

Eventually a kind script set a password on redis which caused me to notice and fix this issue.


Interesting, this definitely happened with a more recent version... Wonder if there's some other exploit at play too (could also be the containerized version?)



It can execute Lua, I'm sure there's plenty of fun for hackers to have with this: https://redis.io/commands/eval


I guess most people have faced this. I got hit years ago too, but after a few seconds some internal monitoring system alerted me that new ports had been exposed.

I am not sure anymore how I solved it, but after some time (and none of the documented solutions working) I decided to let Docker do to iptables whatever it wants and used another firewall which filtered the traffic again after iptables was done "filtering" it.


Almost exactly the same thing happened to me, except with Selenium, and they were trying to log into PlayStation Network accounts.


Same exact thing happened to my test redis instance two years ago.


What is this black magic? Why is docker concerned with iptables at all?

And these docs read a lot like: “We are going to totally ignore any of your firewall rules unless you follow all these steps exactly and do a lot of manual work.”


I'm sorry that you have to go through this, it seems inevitable these days. However, while it's nice that you're sharing your analysis of the situation, you start off by downplaying the attack and calling them a script kiddie. If for example someone finds out they can brute-force Facebook's 6-digit password reset token because they didn't put any rate-limiting in[0], are they a hacker? Is there major skill involved in doing so or just a million-iteration loop to go through all the combinations? They received a $15,000 payout indicating that Facebook values their mistake seriously (although I'd say it's worth much more given it's a guaranteed full account takeover.) So regardless of what you think of the attack, easy or not, you still made a security mistake and you should admit that first and foremost rather than brushing it off as a "script kiddie situation".

Other than that, it's commendable that you have working backups and are responding calmly and with a plan. I hope you get everything back in working order smoothly :)

[0] https://www.theverge.com/2016/3/8/11179926/facebook-account-...


hackers start with a target and try to find a vulnerability. script kiddies start with a vulnerability and try to find sites vulnerable to it. it's not about the skill involved in making the exploit, it's about the effort around that.

the case you mention is more of a hacker feat because that exploit had to be crafted specifically for facebook. meanwhile in this case it was most probably someone who just continuously scans the IPv4 address space for open mongo instances and applies the same generic "exploit" to any it finds, in a fully automated process


" script kiddies start with a vulnerability and try to find sites vulnerable to it. it's not about the skill involved in making the exploit, it's about the effort around that."

The terms get clouded a bit - but in my definition a "script kiddie" is pretty much this: someone with not much skill (like a kid), but with their hands on some hacker tools/scripts - to find easy targets and feel powerful. And later on, try to make some money.

And they can make great effort in doing so - but they remain script kiddies. They don't really know how to hack.

Whether this was just a "script kiddie", I doubt. More a professional ransomware gang. But what op probably meant was, it was not a targeted attack.


I think the "script kiddie situation" comes from the part of trying to ransom the data


Put passwords on your production databases. Even if it's behind a firewall.


Yeah, my setup is: private network + whitelist + password.


The same thing happened to me a few years ago. I used DigitalOcean's Docker image and it had some message about UFW in motd, so I assumed it works with Docker. So I created a container with passwordless mongodb and it got wiped in a few hours.

And DO still have this in motd for newly created droplets:

  Welcome to DigitalOcean's 1-Click Docker Droplet.
  To keep this Droplet secure, the UFW firewall is enabled.
  All ports are BLOCKED except 22 (SSH), 2375 (Docker) and 2376 (Docker).
Full motd: https://pastebin.com/cdaecHU8

Though it links to https://do.co/3j6j3po and it mentions ufw problem:

> Note: The default firewall for the Docker One-Click is UFW, which is a front end to iptables. However, Docker modifies iptables directly to set up communication to and from containers. This means that UFW won’t give you a full picture of the firewall settings. You can override this behavior in Docker by adding --iptables=false to the Docker daemon.


So the makers know of the security issue, but still leave it in by default? That's bad. Either fix the issue, or put warnings all over the place that cannot be missed to inform the user.

This is just what another poster commented on, sacrificing security for ease of use.


Yeah, I reported it to DO support and suggested to add a warning, but seems like it was never added.

  DigitalOcean Support Thursday, March 15, 2018 9:53 PM
  Hello,
  
  Thank you very much for bringing this to our attention.
  
  I will create an internal escalation to our images team to review this. :)
  
  [...]


It seems that you are using DigitalOcean. They offer a cloud firewall [1], which sits in front of your droplets, and you can limit inbound ports to only the necessary ones (e.g. 80, 443, SSH).

I'm always using this with providers that support it, since I've managed to mistakenly open ports that should be private (either by misconfigured firewall, or due to the Docker "issue").

[1] https://docs.digitalocean.com/products/networking/firewalls/


That's a lot of blame being placed outwards there. It doesn't matter how script kiddie a person is if they got past your security.

Disappointing response, this.

What data got leaked? Please let haveibeenpwned.com know if your system leaked emails or worse.


I think his point was _precisely_ that it was a stupid mistake on his part, and not a 'master hacker' subverting his defences. But I agree it's odd that he doesn't seem to have heard of defence in depth, and relied entirely on one UFW rule as opposed to also setting a password on Mongo and/or (preferably) configuring firewalls on the cloud provider level.


Yeah, downplaying the guy who hacked you doesn't make it any better.

Actually it makes it worse, since your security is so bad that any "script kiddie" can hack into your system.


There are search engines for services exposed to the internet, like https://www.shodan.io/

If your mongoDB server is exposed to the Internet it will show up there. When that happens, it's only a matter of time until someone targets you.

You can write an alert that probes for sensitive services exposed to the Internet. In that way, if this happens again, you get an alert that you can use to detect the problem early.
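
A crude version of such an alert, run from a separate box on a cron schedule (the hostname, expected ports, and mail address are placeholders):

    # list open ports on the public host and mail if anything beyond 22/80/443 shows up
    nmap -Pn -p- --open my-server.example.com -oG - \
      | grep -oE '[0-9]+/open' \
      | grep -vE '^(22|80|443)/open$' \
      && echo "unexpected open port on my-server" | mail -s "port alert" you@example.com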

Also... use authentication for your database, it doesn't take much effort to do. With a secure password.


We have a service where we'll notify you if anything new gets exposed on one of your IPs:

https://monitor.shodan.io

If you have a membership (which is a one-time payment of $49 - no subscription necessary) then you can monitor up to 16 IPs. We have a lot of individuals that have configured monitoring for their home network just in case something accidentally gets exposed.


How does Shodan work? Like, how do they know if something is exposed to the internet?

Are they scanning networks 24/7?

I’m just a noob in security, so I'm still learning.


Here is an overview of what Shodan is:

https://help.shodan.io/the-basics/what-is-shodan

The scanning algorithm is mostly just this:

1. Generate a random IPv4 address

2. Select a random port from a list of ~2k ports

3. Check the random IP on the random port

4. Store the result of the check

5. GOTO 1

The above loop runs endlessly and because IPv4 is fairly small it doesn't take long to check everything.
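
In shell terms, the loop is roughly this (a toy sketch, not Shodan's actual code; ports.txt stands in for their ~2k-port list):

    while true; do
      ip="$((RANDOM%223+1)).$((RANDOM%256)).$((RANDOM%256)).$((RANDOM%256))"  # 1. random IPv4 (crudely)
      port=$(shuf -n 1 ports.txt)                                             # 2. random port from the list
      nmap -Pn -n -p "$port" "$ip" -oG - >> results.gnmap                     # 3./4. probe and store
    done                                                                      # 5. repeat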


If everyone somehow magically switched to the much larger IPv6 address space would that be a big problem for you?


We also crawl IPv6 but it's a very different and more complicated algorithm. We would still end up crawling a sizable chunk but there are obviously unknowns in how biased our dataset would be (i.e. we might index mostly cloud servers and fewer residential devices).


That's what I thought, you might have to maintain a map of ISPs and such but even then it'd be hard to find all the clients under them.


Is there a way to opt-out from the scans?


They do a monthly scan, with additional spot checks available on-demand:

https://help.shodan.io/the-basics/on-demand-scanning


We actually scan on average once a week. I used that language to be ultra conservative but I'll need to change it. For the past 8+ years we've been doing weekly scans.


In case you see this, the most interesting question I think I could possibly ask is, what does the current real-world impact of IPv6 appear to be practically speaking?

Abstractly and intuitively, IPv6's massiveness would seem to put an end to the interesting closed loop of address space vs backhaul capacity that has developed around v4. I can't help but wonder though - with for example some providers leasing out ginormous blocks of address space according to fairly predictable patterns (and customers just using the first v6 address that pops out - if at all), this makes me wonder if it'll be possible to steer v6 scans using a mix of statistics, machine learning, and Perl if statements :).

The other thing I'm idly curious about is how you actually scan on a regular basis. Broadly speaking about long-term viability, I guess the TL;DR probably boils down to coordination and careful nurturing of reputation similar to what the large-scale email providers maintain. But from a technical perspective, I do wonder if/how much things like peering, and BGP, and noise-cancelling routing (if you will), etc, come into the picture - and how big the links are :D

I would be very happy to coincidentally discover writeups touching on these questions anytime. Thanks for reading :)


They go through the entire ip address range scanning specific ports.


The TL;DR is that IPv4 means there's only 4 billion IP addresses, which modern 1-10Gbps backhaul links can ticker-tape through in a few minutes. IPv6's 2^128 address space (roughly 3.4 x 10^38 addresses) will sadly make broad scanning utterly infeasible going forward until users have 1Tbit connections or so :(

But for now we can do this with the v4 parts that are left: https://www.youtube.com/watch?v=nX9JXI4l3-E

Also, some crazy person good-haxed a bunch of routers and modems back in 2012 and made http://census2012.sourceforge.net/paper.html (without access to fast connections and before the advent of masscan and other straightforward tools, too). Of note is that the "Unallocated" grey areas in the analysis images are a curious illustration of how much less-full the IPv4 internet was just ~9 years ago.


In case anybody's interested, here's what the "hack" looks like:

    nbset:PRIMARY> show dbs
    READ__ME_TO_RECOVER_YOUR_DATA   0.000GB
    admin                           0.000GB
    local                          16.471GB
    newsblur                        0.718GB
    
    nbset:PRIMARY> use READ__ME_TO_RECOVER_YOUR_DATA
    switched to db READ__ME_TO_RECOVER_YOUR_DATA
    
    nbset:PRIMARY> show collections
    README
    system.profile
    
    nbset:PRIMARY> db.README.find()
    { "_id" : ObjectId("60d3e112ac48d82047aab95d"), "content" : "All your data is a backed up. You must pay 0.03 BTC to XXXXXXFTHISGUYXXXXXXX 48 hours for recover it. After 48 hours expiration we will leaked and exposed all your data. In case of refusal to pay, we will contact the General Data Protection Regulation, GDPR and notify them that you store user data in an open form and is not safe. Under the rules of the law, you face a heavy fine or arrest and your base dump will be dropped from our server! You can buy bitcoin here, does not take much time to buy https://localbitcoins.com or https://buy.moonpay.io/ After paying write to me in the mail with your DB IP: FTHISGUY@recoverme.one and you will receive a link to download your database dump." }


> you face a heavy fine or arrest

Heavy fine yes, but not arrest AFAIK. Anyway, this is a script programmed to scare the target.

Do you even store personal data inside that database?


From their Twitter feed: mongodb is just RSS feed data, personal data is in postgres and wasn’t accessible to the script kiddy


And would you take that statement at face value from a company that just left their docker based mongo instance Internet public? It’s safe to assume that your info has already been leaked, but situations like this are why that assumption is safe.


I suppose we'll find out if/when the data will be leaked as the hacker claims?


If you give out your personal information to, for example, newsblur- the odds are very, very good that this wasn’t the first time you’ve entrusted a company to protect your privacy, and whether you realize it yet or not- you have already been sorely disappointed.


There's something about this threat that really is awful. The legal extortion angle: we'll turn you over to the regulator if you don't give us money. Aside from the fact that they can take the money and report you to the regulator anyway, with complete impunity, it seems like the regulation needs to be revised in some way to take this very serious threat out of the hands of people who will abuse it.


This is just an another reason why user data should be dealt with very carefully, not a reason to nerf the legislation designed to dissuade people being careless.


Agree with user and customer data being handled with care, but I do not like seeing criminals using the law to further a criminal enterprise. That is problematic.


> Heavy fine yes but not arrest AFAIK.

Newsblur is an American org. GDPR is a foreign law that has no relevance to American firms lol.

<insert Saruman "you have no power here" meme>


> GDPR is a foreign law that has no relevance to American firms lol.

I couldn't agree more with the spirit of your comment, but sadly the reality may be somewhat more nuanced:

GDPR in the USA https://www.cookiebot.com/en/gdpr-usa/

"The GDPR has extra-territorial scope, which means that websites outside of the EU that process data of people inside the EU are obligated to comply with the GDPR. ... In fact, the very first GDPR enforcement was against a Canadian company... being a website in the US does not exempt you from GPDR compliance and the territorial distance will not protect you from its enforcement either."

Reminded me of:

CISA amendment would allow US to jail foreigners for crimes committed abroad https://www.theguardian.com/technology/2015/oct/22/cybersecu...


There's no sadly here; it's the opposite. In your world Facebook could still abuse Europeans' privacy.


In my world, I would not be committing a crime if I, someone who has never stepped foot in Asia, criticised the Chinese Govt.

https://www.axios.com/china-hong-kong-law-global-activism-ff...


In other news, a company selling a GDPR compliance service is trying to scare companies into buying their service. Shocking to see!

In reality, a US business with no EU presence only has to follow US laws. The only "enforcement" power the EU has would be to order the website to be blocked in the EU, and I'm pretty sure they can't even do that.


This is horrific. So the hacker is claiming to have a copy of our data. 0.03 BTC is less than $1000. Regardless of you being able to restore from backups, I assume you're paying the ransom to hopefully avoid the leak, right?


Paying a ransom marks you as will-pay. The price will keep rising till they find your limit.

The data is already leaked, let your users know what was leaked and recover from there.

See also: 80% of orgs that paid the ransom were hit again https://news.ycombinator.com/item?id=27552611


You misunderstand.

I paid Samuel and entrusted him with my data. Not too much, but enough for it to matter. When faced with a massive leak like this, he downplays everything, calls the hacker a "script kiddie" and calls this "good practice for what will be the first of many sleepless nights", looking at it only from a "service disruption" perspective.

So far we've gotten no indication of what's been leaked, whether it contains deleted feeds, or what he's doing to prevent the data from being leaked by the hacker, if anything. He's been solely focused on restoring the service and ignoring the leak. Compared to not having access to an RSS reader for some random period of time, the leak is orders of magnitude more serious to me, and I'd wager to most of Newsblur's customers.

I honestly don't care if paying a ransom or interacting with the hacker makes him more likely to be targeted in the future; his duty towards his customers was to keep their private data private, and not only did he fail at that, but he doesn't even seem to register it as his main priority. As far as I'm aware, if he allows the data to leak publicly, then there's no "recovering from there"; he's not getting any more of my money.


I'm on the same side of the argument as you, and indeed I believe I feel as strongly about it as you do. The brushing it off, the calling them script kiddies[1], and the general "well aw shucks, aren't I great for not deleting my copy of the data"[2] attitude about the whole thing grinds my gears too.

I'm saying whoever is ransoming the data already has the data, the data is out of Newsblur's control, therefore the data is already leaked.

The data leak is past tense. It has already happened, not will happen. No amount of money will undo that. If that means they've lost you as a customer, that's how it is.

What we now need to know is what data was leaked?

[1]: which, to be fair to Newsblur, they are; but what does it say when a script kiddie can hack you with something as basic as a missing firewall rule? Arguably, not knowing Docker's quirks but using it anyway is the same damn thing script kiddies do. Sys kiddie, if you will.

[2]: Why is that cause for celebration? Do you not have backups?


There is a material difference to users between a single attacker having (and possibly ignoring) a data dump, and that attacker publishing the dump publicly, or selling it to someone who plans to exploit its contents.

The attacker has offered to not publish if they are paid. Their word probably isn't worth much, but $1,000 seems like an affordable sum for a business to gamble on them being honest about it. And if Newsblur doesn't fix their security problems they'll be targeted again either way.

As someone who has a decade of data in Newsblur, if there's any chance that an affordable ransom will keep my data from spreading further I want Samuel to take it.


The fact that you believe paying the ransom is even an option shows that you really aren't even qualified to be discussing this topic. People with your mindset are a big part of the reason that ransomware is still going strong. The other big part is people who don't run their systems correctly in the first place.


Giving them $1000 confirms the value, allowing them to list the dump at a higher price than the usual $10-50 spammers would pay (each) for the email addresses alone



People used to break into systems because they were smart and curious, now we've got these fucking cockroaches holding people ransom.


This happened to us on a test DB as well, and from what we saw in the network traffic there was not much transferred, less than 1MB or so. So you can be fairly sure they have not stored your data and you will not be able to recover anything. Pretty much expected from someone who asks for so little.


0.03 BTC?! Someone is doing this for a lousy thousand dollars? Unbelievable.


> Unbelievable

Really? If they asked for 100K dollars they would probably not get paid. So, just hijack 100 servers and be fairly sure the victims will pay (since 1K dollars is "not" much if you are running a business).


Curious this got downvoted so quickly, maybe your "hacker" is among us now!

Have you contacted the relevant authorities?


What happened to your service is the security equivalent of the scholar's mate. It's important to be able to lose with dignity and move on.

Your adversary was unsophisticated but this incident was your fault.


"XXXXXXFTHISGUYXXXXXXX" is right lol


Use the new rootless mode and you won’t have issues with it inserting its rules above UFW.

You can then expose ports to a specific IP and use UFW to allow it.

Much cleaner than any UFW-docker hacks out there, and more secure.

https://docs.docker.com/engine/security/rootless/
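
A rough sketch of that setup, assuming the rootless install from the docs above (the 10.0.0.x addresses are placeholders):

    # one-time setup as an unprivileged user (per the rootless docs)
    dockerd-rootless-setuptool.sh install
    export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock

    # publish Mongo only on an internal address (10.0.0.5 is a placeholder)
    docker run -d --name mongo -p 10.0.0.5:27017:27017 mongo:4.4

    # then allow only the app server through ufw (10.0.0.10 is a placeholder)
    sudo ufw allow from 10.0.0.10 to any port 27017 proto tcp

Because the rootless daemon doesn't write iptables rules for you, the host firewall actually applies to the published port.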


The real workaround has always been to disable iptables and masq in the docker daemon and set up those things yourself, with your existing firewall. Binding your port to loopback works too, but that's more prone to accidents.
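
If you go that route, the daemon options are roughly the following; you then own all the NAT/forwarding rules yourself, so test it before pointing production at it:

    # /etc/docker/daemon.json
    {
      "iptables": false,
      "ip-masq": false
    }

    # then restart the daemon
    sudo systemctl restart docker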


I'm sorry to hear that you got bit by the Docker networking thing. It bit me twice in the past: once with a new server, and once when they changed the config format from envvars to JSON (we were disabling Docker's iptables nonsense).

Do you know if they simply encrypted the data in place or if they succeeded in exfiltrating a full copy?


Is there a colo or host that will run honeypots within your /24 and detect and automatically block malicious traffic sources like this?

Seems like it would be a dead simple detection system, and would be a huge value-add.

You could of course order an additional IP and do it yourself, too, but it seems like the colo doing it would be more efficient in terms of scale. (And they'd likely be more diligent in doing it, avoiding obvious DoS vectors like triggering the honeypot from AWS netblocks.)

Hetzner (my host) sends me nastygrams from their IDS when my box tries to connect to RFC 1918 space (which isn't even routable via them!) when running p2p software like ipfs. You'd think if they are willing to complain to customers about zero-impact stuff like that, they'd be willing to blackhole non-customers for malicious traffic with nonzero impact.


What kind of database auth did you have? Wouldn't they have had to access config files or related in order to obtain your passwords, usernames, etc?


I think MongoDB has access control disabled by default, so there is no default user or password.


Am I misunderstanding or do people launch their Mongo container without even MONGO_INITDB_ROOT_{USERNAME,PASSWORD}? It's clearly mentioned in the image README. Takes 15 seconds to set. I'd be incredibly concerned if anybody with more than a day of infrastructure experience did this, even worse on a production database.
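
For reference, per the image README it's just two environment variables; the entrypoint then creates that root user with authentication enabled (the name and secret here are placeholders, and nothing is published to the host):

    docker run -d --name mongo \
      -e MONGO_INITDB_ROOT_USERNAME=root \
      -e MONGO_INITDB_ROOT_PASSWORD='a-long-random-secret' \
      mongo:4.4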


How is this acceptable… requiring a password, even a weak one, might have at least bought some time in this situation.



Mongo is so insecure that it's commonplace to not bother with usernames and passwords and just firewall the hell out of it instead. Plus that's one more plaintext password you'll end up storing all over the place. Its default configuration requires no authentication.

Not saying it's a good practice but it's a common pattern I've seen.


> When I containerized MongoDB, Docker helpfully inserted an allow rule into iptables, opening up MongoDB to the world.

Yeah, I had the same problem. If you are not using orchestration engines like Swarm or Kubernetes, you can just avoid it by explicitly binding the port to your local interface (e.g. -p 127.0.0.1:27017:27017 instead of just -p 27017:27017).
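
A quick sketch of that, plus a check that only loopback ended up listening (image tag and the test host are placeholders):

    # publish the port on loopback only
    docker run -d --name mongo -p 127.0.0.1:27017:27017 mongo:4.4

    # verify: should show 127.0.0.1:27017, not 0.0.0.0:27017
    sudo ss -tlnp | grep 27017

    # and from a *different* machine this should fail to connect
    nc -vz your.public.ip.example 27017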


> Turns out the ufw firewall I enabled and diligently kept on a strict allowlist with only my internal servers didn't work on a new server because of Docker.

I’ve been got by this too.


Absolutely horrid that Docker still bypasses iptables / ufw by default.


Why do you call them "script kiddie" and not a hacker? IMO it's still a hacker even if the attack is not very sophisticated or even if you made a big security mistake.


No hacking skill or knowledge required to simply download and run someone else’s script. That’s why they are called “script kiddies”.


I think it’s fine. Script kiddie is a strict subset of “hacker” in the negative sense of the word “hacker”. It’s a way to convey to the reader the level of sophistication used in the attack by describing the hacker in this way.


Right, and there should be a sense of shame associated with being pwned by, for example, not setting a password on your public internet accessible (redis || postgres || mongo) instances. You didn’t get hacked, you let a child have their way with your application. Hence: script kiddie


Script kiddie is simply a hacker term; it has nothing to do with age or even skill.


> it has nothing to do with age…

It’s in the name, so it doesn’t seem worth disputing that, at least at the time the term was coined, it did have to do with age.

> or even skill

Skill is literally the defining feature.

What exactly do you mean by “it’s simply a hacker term” anyway? Are words just sounds we make with our mouths?


I can infer so many errors in the architecture that I wonder how this survived so long.

1. you put your DB on a server which is exposed to the internet.

2. you have no VIP/NAT in front of your systems.

3. you rely on iptables, while knowing some automated system is manipulating it.

3 hours? I'm surprised it took that long. I'd expect this infrastructure to be a script kiddies' party room within a few minutes.


As someone who has been running multiple services with millions of users for decades:

1. I need to be able to connect to my DB from anywhere.

2. No idea what that even means.

3. Don't know. Never even touched the firewall.

I have a PW on my DB and that's it. Why do I need more than that?


You need to:

- put the database in a virtual private cloud (VPC), an internal network

- set up a Virtual Private Network (VPN), also placed in the same VPC, through which developers can access the internal network

- set up at least two MongoDB users, one `readWrite` user that can connect from the internal network and one administrative user that can only connect from localhost (rough sketch below)

- set up a key-based SSH connection to the MongoDB instance, accessible only from the VPN

- set up Security Groups (firewall) to lock all the unused ports and IP origins out

That way you'll need a VPN key, an SSH key and the MongoDB admin user's credentials to fully compromise the database.
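
The MongoDB-user piece might look roughly like this (user names, secrets, the `appdb` database and the 10.0.0.0/24 network are all placeholders):

    # mongod.conf: require auth and bind only loopback + the internal interface
    security:
      authorization: enabled
    net:
      bindIp: 127.0.0.1,10.0.0.5

    // in the mongo shell, connected from localhost:
    use admin
    db.createUser({
      user: "dba",
      pwd: "long-random-secret",
      roles: [ { role: "root", db: "admin" } ],
      authenticationRestrictions: [ { clientSource: ["127.0.0.1"] } ]
    })

    use appdb
    db.createUser({
      user: "app",
      pwd: "another-long-secret",
      roles: [ { role: "readWrite", db: "appdb" } ],
      authenticationRestrictions: [ { clientSource: ["10.0.0.0/24"] } ]
    })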


> 1. I need to be able to connect to my DB from anywhere.

Understandable and reasonable. There are ways to achieve this without exposing the DB, but they take some more effort.

> 3. Don't know. Never even touched the firewall.

Firewalls are good because they add an extra layer of security. Personally I believe there should be a firewall on most servers (a firewall should be the default). But firewalls aren't magic; they are just one layer.

> I have a PW on my DB and that's it. Why do I need more than that?

Because this means you have zero margin for error. Either your PW auth works flawlessly and without bugs, or you are screwed.

Also, you expose one more endpoint that can potentially be a DDoS target.


> 1. I need to be able to connect to my DB from anywhere.

So do I. So I set up all my DBs with TLS mutual auth or equivalent. Even databases that don't support TLS natively (e.g., Redis 5 and below) get a TLS/SSH port-forward set up for them at the network boundary.
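
For the SSH flavour it's just a tunnel, roughly like this (hosts are placeholders, and Redis is assumed to listen only on loopback on the DB host):

    # forward local 6379 to Redis on the DB host, over SSH
    ssh -N -L 127.0.0.1:6379:127.0.0.1:6379 deploy@db.internal.example.com

    # then point the client at localhost as usual
    redis-cli -h 127.0.0.1 -p 6379 ping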

> I have a PW on my DB and that's it. Why do I need more than that?

If you're not using an MITM-proof connection (e.g., TLS or SSH), and you connect to your DB from a network that has me (maybe we're in the same coffee shop, maybe I'm working in your office, or maybe I just work at an ISP between you and your server), then I have your PW.


Why would you need to be able to connect to your database from anywhere?


Similar thing happened to me on side projects a couple years ago. A docker update (or something like that, I don't remember the details) rewrote iptables config and opened my mongodb to the world. I didn't notice and the whole thing got ransomed over and over...

Had some fun using mongodb but I don't think I'll ever use it again =/


I don't know what your monitoring setup is like, but I'd recommend something like Prometheus' blackbox-exporter to help alert when your network "changes".

I set up alerting rules so that certain subnets or servers suddenly becoming exposed to certain blackbox instances will wake people up!
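
Something along these lines, assuming a blackbox_exporter tcp_connect probe running from an external vantage point (the job name is a placeholder); the alarm condition is the probe succeeding against a port that should never be publicly reachable:

    # Prometheus alerting rule (sketch)
    - alert: DatabasePortPubliclyReachable
      expr: probe_success{job="blackbox-external-mongo"} == 1
      for: 2m
      labels:
        severity: page
      annotations:
        summary: "27017 answered from outside the private network"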


Docker is such a massively leaky abstraction :( Thanks for the detail. Newsblur is great.


Here's the fix to prevent docker messing with ufw rules:

https://github.com/chaifeng/ufw-docker


Is 3 hours a long time for an open server to be discovered? Do attackers just have a giant list of IPs that they constantly scan, so they instantly know when something is suddenly open to traffic?


Yes. If you publish a known-insecure service like MongoDB on the standard port, on a well-known VPS provider, you can expect it to be automatically compromised within hours if not minutes.

As others commented, scanning the whole Internet is not even a problem, so scanning a "limited" part where you are likely to see these services pop up is even less of one.

I think the takeaway is that you cannot hide in the masses on the Internet anymore; 10-20 years ago you could throw up an insecure server and it could be fine for a long time.

Nowadays you must assume someone will find and try to log in to your service, even if you put it on a non-standard port.

Also, if it's an HTTPS service, take note that when you get a certificate you will be announcing that domain to the whole world and publishing it to a searchable database (for example https://crt.sh/ ).


If you want to secure your on-premises MongoDB, we publish a checklist here: https://docs.mongodb.com/manual/administration/security-chec...

Better still, use MongoDB Atlas and get our best security practices baked in.


Check out masscan [0]. It’s extremely easy to scan IPv4 very rapidly and find targets in an automated fashion. It advertises scanning the internet in 5 minutes.

[0]: https://github.com/robertdavidgraham/masscan
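
For example, auditing your own ranges for the usual database ports (only scan address space you're authorized to scan; 203.0.113.0/24 is a documentation placeholder):

    masscan 203.0.113.0/24 -p27017,6379,5432,9200 --rate 10000 -oL open-ports.txt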


It's possible to check every IP on the Internet nowadays so you don't need to keep a list. You can just brute-force check every IPv4 address in less than an hour. I've written on the subject a few times:

https://blog.shodan.io/its-the-data-stupid/


ZMap claims to be able to scan the entire IPv4 space of the Internet in about 45mins. [0] There's no reason not to believe that claim, either.

With many people doing this, it is kind of surprising that it took so long for a vulnerable server to be discovered.

[0] https://zmap.io/


Been bitten by servers listening on 0.0.0.0 before. The nice thing about deploying on AWS is that this class of problem is avoided by security groups defined in code or YAML and managed in GitHub.
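
The same idea expressed via the CLI rather than CloudFormation/Terraform (the group IDs are placeholders): the DB security group only admits the app tier's security group, never 0.0.0.0/0.

    # sg-0123... is the DB tier group, sg-0fed... is the app tier group
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 27017 \
      --source-group sg-0fedcba9876543210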


In addition to the comments I see here, one more note: seems like a lot of change at one time.

* Ansible

* Docker

* Big redesign

* New database cluster

* New firewall config

I have found great benefit in breaking problems down into smaller parts, even though it sometimes causes extra work.


Good luck!

I appreciate your transparent account of the situation but what does it say about your company if your database got popped by a “script kiddie”?


Quite often people claim (for example, TalkTalk) that they were attacked by a nation state. It often later transpires that it was a 16-year-old from Croydon using a basic SQL injection vulnerability.

At least they're being honest up front. But yes, it does show naivety and inexperience with basic security.


It says more about Docker than anything else. This is an insane default setting, it's something that should have been fixed when it was first brought to their attention.

Computer security is hard enough without loaded footguns like these lying around.


Debian has no firewall rules by default. Up until recently it also had home directory permissions that were not good for multi-user systems. Both by design.

Defaults are often insecure but maximise interoperability or general usability. Look at Windows!


Yes, Debian - and Ubuntu, for that matter - have some pretty bad defaults in some places. Having users' homedirs UGO rwxr-xr-x is pretty bad.

The defaults should be secure, with explicit unlock steps for those who know their environment well enough to relax some restrictions.
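
On Debian/Ubuntu, tightening this is roughly a two-line fix (the username is a placeholder):

    # tighten an existing home directory
    chmod 750 /home/alice

    # and make adduser create private homes by default
    # in /etc/adduser.conf:
    DIR_MODE=0750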


Well, Docker CE comes with a huge Disclaimer of Warranty (https://github.com/docker/docker-ce/blob/master/LICENSE). I don't think we can complain. "I should have tested it before deploying to production" is the right thing to say.


I understand your point but how do you explain the complete absence of database security controls? That part is on Newsblur. Defense in depth is important!


Yes, they absolutely have some culpability, but making a change like this without alerting the administrator of the system is the rough equivalent of any process with 'root' privileges on any one of your servers suddenly executing an iptables command to allow all access. You'd only know about it because you got hacked.

Such drastic changes to the security model should only be made after explicit instruction.

The number of companies that operate without access controls between servers on the same segment is unfortunately quite large; database security controls are, again, more often than not left at their default settings, and those too are quite often insecure.

Defense in depth always has limited depth, though I totally agree that running a database without access controls is not the way to go.


Docker is a minor part of the issue.

The main problem is lack of basic security.


docker has so many of these footguns.


Completely unrelated: NewsBlur was the first rss service I paid for after google reader closed down. I used it intensely for a long time and have very fond memories. I especially liked that you had open-sourced the code and spent some time looking at the architecture.

I'm now self-hosting an RSS reader, but NewsBlur will always remain dear to my heart.


I ran into this iptables issue in 2017 while setting up Monica in a Docker container, and decided never to use Docker since.


I was caught out by this too[0]. I now have a fw script which runs automatically for demos etc.

[0] https://github.com/docker-library/redis/issues/259#issuecomm...


> Docker helpfully inserted an allow rule into iptables, opening up MongoDB to the world

Oh boi. I just knew(0) that default value was problematic.

Too bad this time it cost somebody real money.

(0): https://news.ycombinator.com/item?id=26678025


Once this is over, consider looking at network topology as a security mechanism in its own right. Professionally I try to operate a subnet hierarchy: public, intermediate, private, where there's no routing between public and private, and private has no internet connectivity.


Thank you for letting us know and being clear about it. Really like NewsBlur. Hope it comes back soon!


Did you have defaults or something set up for MongoDB users? Because even if it's open to the world, a strong password and fail2ban would stop this. Still dumb of Docker not to be clearer about it. Sometimes I wish BSD jails would become more popular.


Just want to say that I love Newsblur and have been using it for free for many years. Reading this today made me stop and think about how hard you must work on it, so today I will donate/subscribe!


This is just another reason to laugh at Docker's "enterprise production ready" offering. So many things I encountered in Docker are so _not_ enterprise at all.


The docker part bit me in the behind as well, had absolutely no idea it would circumvent ufw by design.

Angry at myself for not reading the docs carefully but who has time for that? :/


> This situation is more of a script kiddie than a hacker.

Is this a useful distinction? Define the sharp line between a "script kiddie" and a "hacker".

You're describing a person who detected your vulnerability and acted within a 3 hour window of it appearing and closing. If I were you I'd be more focused on reflection and not on putting labels on whoever hacked you.


I think I can imagine why you would need a combination of PostgreSQL, Elasticsearch, and Redis, but what problem does MongoDB solve for you?


You have backups, right?


My theory is that it's MongoDB that is behind the ransomware. Why else would they 1. not have auth protection, 2. open up the firewall!?


If you read what the author said and understood it, you would see that MongoDB didn't open up the firewall, and it does have auth protection, but the author chose not to enable it.


> When I containerized MongoDB, Docker helpfully inserted an allow rule into iptables, opening up MongoDB to the world.

This is crazy. Your network should have been on a private IP address space behind a firewall running static NAT exposing only ports 80 and 443 on a routable IP address. This is network architecture 101.


Don't pay; they have deleted your data and you're not getting it back. This non-targeted attack has been running since 2017 and is pretty well documented now, and nobody has ever reported getting their data back after paying. ALWAYS look for existing documentation about an attack you've been a victim of; it should be one of the first things you do, and especially before paying.

https://www.imperva.com/blog/ransomware-attacks-on-mysql-and...
https://www.itproportal.com/news/ransomware-attacks-on-mongo...
https://security.stackexchange.com/questions/237048/mongo-db...

Everybody falls for that, I mean, look at the BTC these guys made, it's crazy! Anyway, Docker uses the DOCKER-USER firewall chain:

https://docs.docker.com/network/iptables/

Example:

https://yourlabs.io/oss/yourlabs.docker/-/blob/master/tasks/...

People should really test their firewalls after setting it up.

Another thing: instead of using Ansible+Docker and exposing ports like that, use Ansible+Docker-Compose, so that the containers in a stack share their own private network; then you won't have to publish ports to make your services communicate (sketch below).

https://docs.ansible.com/ansible/latest/collections/communit...
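
A minimal compose sketch of that pattern (the image names, app port and connection string are placeholders); the app reaches Mongo by service name over the compose network, and no ports at all are published for the database:

    # docker-compose.yml
    version: "3.8"
    services:
      app:
        image: example/app:latest
        ports:
          - "127.0.0.1:8000:8000"
        environment:
          MONGO_URL: mongodb://db:27017/appdb
      db:
        image: mongo:4.4
        volumes:
          - dbdata:/data/db
    volumes:
      dbdata: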



