Unauthorized access to Docker Hub database (docker.com)
296 points by yskchu 52 days ago | 62 comments



I've found Docker's communication about this incident to be pretty poor.

1. The email they sent out didn't specify whether your account was included in the 5% of compromised users, or whether you had linked GitHub or Bitbucket accounts that they unlinked. The only signal seems to be that if you still have a linked GH/BB account, you're (probably?) ok.

2. They mention you should "check security logs to see if any unexpected actions have taken place" and linked to GH/BB security audit log pages, but I don't believe that's sufficient; you also need to check for rogue commits.

3. They haven't said when the breach occurred, so there's no way of knowing how far back to look. They "discovered" it on Thursday, and say it was a "brief period", but that's meaningless.

4. They downplayed it as "brief", "non-financial user data", and "less than 5%" of users. I care more about the integrity of source code and builds than any financial information I might have given to Docker.

I can sometimes forgive companies for breaches like this, if they own up to it and do an excellent job of communicating what happened, how, when, what the impact and mitigations are, both internally and for their customers. That was not the case here.

EDIT: they discovered the breach Thursday, but still haven't given a timeframe for when it may have first occurred.
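The rogue-commit check from point 2 can be sketched in shell (assuming git is installed; the date and the known-good email are placeholders for your own suspected window and team roster). It's demonstrated here in a throwaway repo; in a real audit you'd run the `git log` line in each repository you own:

```shell
# Build a throwaway repo so the example is self-contained.
set -eu
cd "$(mktemp -d)" && git init -q audit-demo && cd audit-demo
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "normal work"

# The actual audit: every commit on every branch since the suspected window,
# with author email, so unfamiliar authors (or odd timestamps) stand out.
git log --all --since="2019-04-20" --format="%h %ae %s"
```

This only surfaces suspicious authorship; signed commits (discussed below in the thread) are what let you verify it cryptographically.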


I also find it frustrating that they are not stating when exactly the breach occurred. The message implies that they know, given the "brief period" claim, but they are not explicitly stating one of the most important facts. No mention in the FAQ either.

I'm guessing that they are either not quite certain about the exact timing and duration, or that the brief period was actually embarrassingly long. Otherwise, that's one of the most important facts that anyone would communicate.


Yeah, the ambiguity of “brief” is what implies that they do not know with certainty. It seems most likely to mean “we think it happened at or after time n, but haven’t looked at everything yet, so the belief might change”.


Our whole lives are but brief glimpses on the grand scale of things. :)

This really highlights the value of locking down the base images used in our own pipelines.
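A minimal sketch of that lockdown as a Dockerfile fragment: pin the base image to a content digest rather than a mutable tag, so a tampered re-push of the tag can't reach your builds. The digest below is a placeholder; substitute the one reported by `docker pull` or `docker images --digests`.

```dockerfile
# Placeholder digest -- replace with the exact digest you have verified.
# The tag is kept for readability; the digest is what Docker actually uses.
FROM alpine:3.9@sha256:0000000000000000000000000000000000000000000000000000000000000000
```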


Oh, and if anyone with write access to your repositories had their GH/BB account linked to their DockerHub account, your source code could be compromised.

AFAIK the only way to check is to ask every person who has write access and hope they tell the truth.

Honestly, everyone should probably check their repos for recent activity / commits.

We need more fine-grained permissions for things like this. Principle of Least Authority, people.

Now is a good time to review your authorized applications (https://github.com/settings/installations), and if you're part of a GitHub organization I highly recommend setting up OAuth application restrictions: https://help.github.com/en/articles/about-oauth-app-access-r...


I haven't seen anyone mention that Docker Hub's automatic build integration requires either "Owner"-level permissions on your organization or "Admin"-level permissions on the individual repository. Based on the GitHub-side audit log, Docker Hub seems to be using this access to add deploy keys to your repository, but this isn't mentioned in the documentation (which is why we had to go spelunking in the audit log). And if you try to take a least-privilege approach and grant only the read-only access that Docker Hub should require, your GitHub repository will simply not appear in the list of available repositories when you try to configure an automatic build.

Lots of people may have exposed credentials to Docker Hub that can do much more than disclose proprietary source code.


Signed commits would be useful, right? I don’t know much about GitHub permissions, but I’ve skimmed GitLab’s. Why would Docker Hub have write access to the repo? Is there really no read-only access for repos?


I think it's possible, but I'm not sure what Docker Hub does. GitHub's OAuth tokens can't be restricted to read-only or to specific repos (https://stackoverflow.com/questions/26372417/github-oauth2-t...), but I think Deploy Keys can.
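A sketch of the deploy-key route (assuming OpenSSH; the key path and comment are arbitrary): generate a dedicated keypair and register only the public half as a read-only deploy key on the one repository that needs it, instead of granting an OAuth app account-wide "repo" scope.

```shell
set -eu
dir="$(mktemp -d)"
# A dedicated key per integration keeps revocation cheap and scoped.
ssh-keygen -q -t ed25519 -N "" -C "dockerhub-readonly-deploy" -f "$dir/deploy_key"
# Add this public key under the repo's "Deploy keys" settings page,
# leaving "Allow write access" unchecked.
cat "$dir/deploy_key.pub"
```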


They have a lot of scopes compared to GitLab (https://developer.github.com/apps/building-oauth-apps/unders...). I find it crazy that anyone would click authorize on a 3rd party app with the repo scope without having a way to (easily) identify unauthorized commits.


A lot of developers are in a hurry and interested mainly in getting their builds to work so they can move on to the thing they actually care about. They don't think about the potential consequences of the easiest thing they did to make it work.

This is how a sizable number of security incidents happen. The easiest thing to do is reckless, so people do it.


> They downplayed it as "brief", "non-financial user data", and "less than 5%" of users.

This caught my attention, as well; rubbed me the wrong way.


Companies seem to have realized, after Equifax, that security incidents are more of a public-relations issue than anything else. Just lock down as much info as you can, limit anyone from talking about anything, spin spin spin. No reason to be more proactive than that; you only get punished for it.


Where did they say it started Thursday? The blog post says it was discovered Thursday ...


Good point. I probably assumed it due to the "brief period" language, but you're right, that's highly ambiguous.

I've edited my comment.


> 2. They mention you should "check security logs to see if any unexpected actions have taken place" and linked to GH/BB security audit log pages, but I don't believe that's sufficient, you also need to check for rogue commits.

It does not seem obvious to me how one would go about determining with confidence that "no unexpected actions have taken place" in either of these venues, and the process of doing so does not seem trivial.

This is what makes it scary, indeed, and I agree the advice glosses over it as if it is obvious and easy how to do this.


I didn't even get an email from Docker but I still changed my password.

Funny enough, I only found out about the issue from seeing it on HN the other day.


Not surprising given their past outlook towards security.


Also, have they even disclosed the hash algorithm, or whether the passwords were salted?


I'm not even sure if their disclosure meets the requirements of the GDPR, considering the breach involved user data.


It would be helpful to include what specifically went wrong when there is a security incident at your company. Every failure should be a learning opportunity for others. Perhaps there should be some sort of safe harbor for disclosing security compromises as it benefits the greater community.

I can understand why you'd want to cover your ass in this type of situation. However, I think keeping these things secret leads to more harm over time as people brush off weaknesses in their own systems for lack of concrete examples of where it caused harm.

Was an employee careless with credentials? Was some service not updated? Was it a typical attack like a SQL injection that caused the leak? Having more real world info helps people model threats better.


I wouldn't take this disclosure as an indication that they won't do a public post-mortem. The latter takes time, and you want to get it right and be thorough; on the other hand, you want the disclosure to come as soon as possible for users' security.


Yeah, I would have liked to see them commit to a comprehensive post-mortem in the Q&A.


Have to bring this up again, but before this happened, Docker specifically blocked manual automated builds on Docker Hub and required that you use their app to link your account. Therefore, it will be near impossible to trust them after this.


What is a "manual automated build"?


Docker Hub used webhooks before to trigger builds from GitHub. One-way, with no compromise potential, unlike the app, which has write permissions.


Haha, good question. That didn't sound quite right.

It means essentially an automated build that kicks off on a Git commit, but that doesn't require app access to your repo or organization like they now require.


Manually triggered perhaps?


Why did they do that? / Why do you think they did that?


How does liability work in these cases? If you had a security breach due to this security incident, would it be Docker’s liability or yours? It’s probably in the terms and conditions, but I would think it’s your liability, since you can host your own registries and it’s your responsibility to act on Docker’s warnings (and I would think users expect and demand an abundance of caution). But what happens if Docker mischecked and sent you a false negative on being compromised? And what happens if a full post-mortem is released detailing gaps in security best practices between what Docker does and what you could do yourself?

This may well be a moot point; I think if you really wanted to be sure of what you were including in your code you would pull down tarballs and validate checksums for all dependencies before building on a secure network.
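That checksum discipline can be sketched with coreutils, shown here on a stand-in file rather than a real dependency tarball:

```shell
set -eu
cd "$(mktemp -d)"
printf 'pretend this is a vendored dependency' > dep.tar.gz

# Do this once and commit SHA256SUMS alongside your build scripts.
sha256sum dep.tar.gz > SHA256SUMS

# Do this in CI before every build: exits non-zero if the bytes ever change.
sha256sum -c SHA256SUMS
```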


This also does not seem to have affected "Docker Official Images"[1]:

> Q: Were any of the Docker Official Images impacted by this incident?

> No Official Images have been compromised. We have additional security measures in place for our Official Images including GPG signatures on git commits as well as Notary signing to ensure the integrity of each image.

[1]: https://docs.docker.com/docker-hub/official_images/


IANAL.

Typically Docker would only be held liable if misconduct can be proven. Incompetence is typically not enough (which is why e.g. Equifax is not liable for the damages following their hack).

I do think these laws need to tighten up for security-related incidents, but right now, it is what it is.


Equifax has been found liable in court(s):

[warning: autoplaying sound]

https://finance.yahoo.com/news/people-successfully-suing-equ...


From your link:

> ...the judge noted that Equifax had a duty to safeguard information, failed to heed warnings from the Department of Homeland Security, and “willfully” violated the Fair Credit Reporting Act and state regulations.

IMO, ignoring government warnings and violating regulations is much different than failing to stand up to an attack.

I would be very resistant to making "being hacked" a crime - in almost all cases, the hackee is the victim of an attack. If you feel the need for legal action, we should increase our "anti-hacker" laws and enforcement.

We don't fine banks for being robbed. It's the robbers fault something bad happened, not the bank's.

EDIT: formatting


> We don't fine banks for being robbed. It's the robbers fault something bad happened, not the bank's.

No, we don't fine banks for being robbed. However, if the bank had clearly insufficient security on their vault, was notified of this being a problem, and made zero efforts to fix the problem then yes they should be held liable.


Isn't that exactly what I stated in my comment?


> We don't fine banks for being robbed

The comparison seems specious - the customers of the bank don't lose their money when a bank is robbed. The security is for its own benefit.

Still, your point stands.


2 things that might be useful this week:

Implement PGP/GPG signed commits in your organization.

Learn how to create docker images from scratch. (my own very basic tutorial on this is here: https://write.as/aclarka2/create-a-centos-7-docker-image-fro... )
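A sketch of the first suggestion (git and gpg assumed; the key id is a placeholder, and the demo sandboxes HOME so it doesn't touch your real config; drop that line to apply it for real):

```shell
set -eu
export HOME="$(mktemp -d)"                            # demo sandbox only
git config --global user.signingkey ABCDEF0123456789  # placeholder: your key id
git config --global commit.gpgsign true               # sign every commit by default
git config --global commit.gpgsign                    # prints: true
# Teammates and CI can then audit history with: git log --show-signature
```

Pair this with branch protection that rejects unsigned commits, so a pushed rogue commit fails verification rather than blending in.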


Why is there no 2FA?

Why can't I use my Github account to login (which already has 2FA turned on)?


And why do we need to sign in with username and password to push an image!? I'd much rather use some type of access token (or even better, a certificate) to push; I hate using passwords in CI!


I don't understand why this isn't supported. We have several system users in our organisation to try to replicate this, but I would much rather use access tokens, certificates or API keys.


Yeah, we have also been forced to create a few "users" to push to our org account on Docker Hub. Luckily, we mainly use GitLab for private images (and some public ones), which gives us tokens, even temporary ones, to use directly in the CI scripts.

I have felt quite unsatisfied with the security of docker hub since I created my first account, but after this issue, I can say that I'm seriously scared for it.

From now on, I'll use my own Alpine base image from "scratch" instead of the one on hub ;P
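For reference, the GitLab flow mentioned above looks roughly like this as a CI config fragment (the job name is arbitrary; the variables are GitLab's predefined ones, and `gitlab-ci-token` is the username GitLab expects for job-token logins):

```yaml
push-image:
  image: docker:stable
  services:
    - docker:dind
  script:
    # Short-lived, job-scoped token instead of a stored human password.
    - echo "$CI_JOB_TOKEN" | docker login -u gitlab-ci-token --password-stdin "$CI_REGISTRY"
    - docker push "$CI_REGISTRY_IMAGE:latest"
```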


The race is on.

The first ones to implement a container hosting and building solution that is verifiable will dethrone Docker.

I hope Docker does this themselves, mainly because that would be the fastest route to this happening.


OBS[1] has supported building container images for several years.

Unfortunately the interface isn't great if you aren't used to it, but because it's integrated into the build system of the entire distribution you get automated rebuilds when your container image's dependencies change for free.

[Disclaimer: I work for SUSE and am an active openSUSE contributor.]

[1]: https://build.opensuse.org


I'm not sure what you're trying to say exactly, but I'm pretty sure that this already exists, and there are multiple solutions, including self-hosted registries. Check out Harbor:

https://goharbor.io


I'm not talking about private hosting of containers, I'm talking about an alternative to today's public registry of container images that is 100% verifiable so that when something like this happens it is possible to be certain that no one has tampered with any of the container images.


You mean something like posting signed hashes in a public place? So you can verify what you got is what you want?
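Roughly, yes. A self-contained sketch with openssl and coreutils, using a throwaway key in place of the publisher's real one: publish a digest manifest plus a detached signature next to the artifact, and consumers can verify both who signed it and that the bytes match.

```shell
set -eu
cd "$(mktemp -d)"
printf 'stand-in image bytes' > image.tar                  # stands in for an image

# Publisher side: keypair, digest manifest, detached signature.
openssl genpkey -algorithm RSA -out priv.pem 2>/dev/null   # publisher keeps this
openssl pkey -in priv.pem -pubout -out pub.pem             # published pubkey
sha256sum image.tar > MANIFEST                             # published manifest
openssl dgst -sha256 -sign priv.pem -out MANIFEST.sig MANIFEST

# Consumer side: check the signature, then check the content digest.
openssl dgst -sha256 -verify pub.pem -signature MANIFEST.sig MANIFEST
sha256sum -c MANIFEST
```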


Do you mean formal verification? Verifying such a vast web service may be nearly impossible, especially since you'd still have to rely on a database server, os, and kernel, the most popular of which are seemingly fundamentally incompatible with verification.



You can only have 100% verifiable container images if the Dockerfiles uploaded by users are reproducible (in the sense of https://reproducible-builds.org/). The vast majority probably aren't, and I'm not sure the Hub could reliably detect those that are.


I'm considering building a registry (manually to start with) using https://tahoe-lafs.org/trac/tahoe-lafs as a PoC to see if it could work. If it does, then that gets you verifiability, immutability, as well as the ability to make any image you want public without making other ones you own public. The only potential downside is that it might not work with the existing docker mechanisms for pulling images.


If you're looking for cryptographic signatures on container images, check out what my company Sylabs.io is doing. I wrote a comment on the other HN thread about this breach: https://news.ycombinator.com/item?id=19769590


I think when stuff like this happens there needs to be a public, technical, post-mortem. That way we can learn what went wrong and how we can protect ourselves from future breaches.


That is unlikely to ever happen...


In other industries it's done, and usually there's some regulation around it. It's possible that this will be added to our industry too, maybe via legislation, or maybe the big giants will come together, agree on a format and guidelines for responsible post-mortems on data breaches, and it will be adopted by others. I think it can happen, and I hope it does.


Immediately deleted my Docker Hub account, deleted my GitHub SSH keys, changed all my passwords.

I want Docker to succeed as a company but they just haven't made a compelling case for me to give them money yet. I guess they are focused on servicing larger companies.


> less than 5% of Hub users (x2)

That they know of. Not sure if I will keep highlighting this.



The compromised information was sensitive, but wasn't financial. So what WAS it? It included usernames and hashed passwords. But what ELSE?


Probably everything you have stored in Docker Hub.


Side question: Since docker had lots of read/write access to private repositories that might have included sensitive data, is this a GDPR incident for the customers that would need to be reported?

Technically, a customer didn't have a breach/leak that may have resulted in data being exfiltrated, but they also cannot rule it out, and as they've explicitly trusted docker, is that an event that should trigger a chain of official reports?


The reason the intruder didn't get access to any of Docker's financial systems is that they use NetSuite for all of that fun stuff. This screams to me (at least) that the intruder probably had access to more than what they are discussing.


"Q: Why did you delete my GitHub tokens before notifying me" - I feel offended



