- Quay.io offers scanning as a standard feature on all accounts including free open source accounts. This also includes notifications to external services like Slack. This is what it looks like when you ignore an image.
- The Kubernetes community has started automating scans of all of the containers that are maintained by that community to ensure that they are patched and bumped to the latest versions. A recent example.
The cool thing is that both of these systems use the Clair open source project to aggregate vulnerability data from the various distributions' security feeds. This all ties back to why we feel automated updates of distributed systems are so critical, and why CoreOS continues to push these concepts forward in CoreOS Tectonic.
I gave Clair a shout-out in the article, and I intend to add it as an optional scanner to Federacy.
Funny, I'm pretty sure we met before either of us started our existing ventures, when you came to the San Francisco DevOps meetups. :)
If you hit any issues with Clair, feel free to file an issue; a lot of folks have been helping maintain the project.
Some quick feedback on the post and your website: it says there is a quick command to scan my container images, but I couldn't find the command. I also signed up, but the confirmation URL was a 404, and the email came from "team" with the subject "Confirmation email".
In any case really happy to see more people digging into these problems and coming up with new solutions.
The command is a standard 'wget/bash' script that you will receive when you log in, but it's pretty simple to grok.
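To give a sense of its shape (this is only a sketch; the URL below is a placeholder, and the real, account-specific command appears after you log in):

    # Placeholder URL; the real command is shown on your account page after login
    wget -qO- https://example.com/your-account/scan.sh | bash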
I'll send you a new confirmation email soon.
I think another aspect that is missed is that using a vulnerable image doesn't necessarily mean you are at risk of being compromised; it depends on what other security layers you employ. This gets to the practical realities of security operations.
I didn't address the implications of software vulnerabilities with respect to other mitigation techniques, however, as that's far outside the scope of the article. I probably should at least add a second addendum, though. I'll work on this soon. Thanks!
I'm curious what makes you say such analysis and mitigation is easier with docker?
I don't know why it's easier to mitigate risks, though. Maybe just because it's easier to run the analysis.
Not sure I buy this. Sure, I can query the docker daemon for what images are running, but that's not enough to tell me which images are vulnerable. I still need to build something to actually scan the images.
Also, on any Linux host, I don't need a daemon to tell me about deployed software: the package manager can do just that, and the tool used for scanning in this article appears to just query the package manager, which would work just as well on any Linux host outside of Docker.
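For example, a rough sketch of what that host-side query looks like (exact commands depend on the distro):

    # Debian/Ubuntu: list installed packages with versions
    dpkg-query -W -f='${Package} ${Version}\n'

    # RHEL/CentOS: same idea via rpm
    rpm -qa --qf '%{NAME} %{VERSION}-%{RELEASE}\n'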
There are certainly flaws in this approach; it's one of the reasons we intend to support multiple scanners. We started with Vuls because Clair wasn't released yet and we wanted to support more than containers.
- Clair does static analysis.
- Vuls uses the package manager and changelogs.
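Rough sketches of that difference (hostnames and layer IDs are placeholders; the Clair call assumes a v1 API with the layer already submitted for analysis):

    # Clair: ask the API for vulnerabilities found via static analysis of a layer
    curl -s "http://clair.example.com:6060/v1/layers/<layer-sha>?vulnerabilities"

    # Vuls: scan the hosts defined in config.toml by querying their package managers
    vuls scan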
If you can query what images are running, you can tie that to a list of deployed software. Then you can compare that list against a database of known vulnerabilities; obviously, you'd do the same if you were assessing the host OS without Docker. What's easier is that you already have an API you can query for all of this.
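A minimal sketch of that idea, assuming Debian-based images (the package query would differ for other bases):

    # For every running container, dump its installed packages into one report
    for c in $(docker ps -q); do
      echo "== $c ($(docker inspect -f '{{.Config.Image}}' "$c")) =="
      docker exec "$c" dpkg-query -W -f='${Package} ${Version}\n'
    done > package-report.txt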
> Also, on any linux host, I don't need a daemon to tell me about deployed software - the package manager can do just that
But you need to get to each of these hosts somehow and get the data out of the package manager so a report can be prepared. This is the part that Docker makes easier when you're assessing what you have. Then there is also software that was not installed with the OS-supplied package system, because programmers somehow dislike those and work around them with virtualenv or the like.
> [...] the tool used for scanning in this article appears to just query the package manager, which would work just as well on any linux host outside of docker.
I haven't read the article, but most probably you're right.
This is why many people can get away with a minimal base image like Alpine: a tiny busybox shell provides enough features to run the application while still supporting some manual debugging with docker exec. It also avoids false positives like these, letting you more quickly find precisely what you need to upgrade when a new OpenSSL vulnerability is announced.
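To make that concrete, here's a rough comparison (counts vary by tag) of how many packages a scanner even has to look at:

    # Packages a scanner has to reason about in each base image
    docker run --rm debian:latest dpkg-query -W | wc -l   # dozens of packages
    docker run --rm alpine:latest apk info | wc -l        # typically around a dozen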
(Disclaimer: I work on Google Container Engine / Kubernetes).
The only exception is when people get access to the underlying container, whether you intended that or not. Then those vulnerable binaries can lead to a vulnerable container.
This is also why the subjectivity in CVE rating is such a significant problem.
If you can, I'd recommend this as a good practice to reduce these kinds of problems. The fundamental fact is that if you don't have a library installed, you can't be affected by a vulnerability in it. So the smaller your image, the fewer possible avenues you have for shipping vulnerable libs, and the less time you'll spend re-building images with updated packages.
However, I intend to validate this presumption in a future project.
Alpine is definitely one of the major points, as well as static binary images and some advice on Dockerfile configuration.
In order, I would opt for: static binary image, Alpine, then Debian. There are other choices like CoreOS, FreeBSD, etc., if you are comfortable moving away from Linux.
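As an illustration of the static binary option (a sketch assuming a Go program; the image name is a placeholder):

    # Build a statically linked binary, then package only that binary.
    # The resulting image contains no distro packages for a scanner to flag.
    CGO_ENABLED=0 go build -o app .
    cat > Dockerfile <<'EOF'
    FROM scratch
    COPY app /app
    ENTRYPOINT ["/app"]
    EOF
    docker build -t example/app .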
NixOS is another distro that looks interesting.
Do those without vulnerabilities use a CI/CD process which results in the container being auto-updated whenever there are new releases?
Not sure about the state of CI/CD in the image-building process; I assume it varies wildly. Two of the major points I'll address in my next posts are deprecation in Docker repositories and the lines of a Dockerfile that matter most for minimizing vulnerabilities.
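For what it's worth, the basic shape of that kind of auto-update can be as small as a scheduled rebuild that pulls the base image fresh each time (image name is a placeholder):

    # Run from cron or a CI schedule: --pull re-fetches the base image,
    # so upstream security fixes land in the rebuilt image.
    docker build --pull -t example/app:latest .
    docker push example/app:latest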
And maybe there's an opportunity for a Chrome extension that overlays an indicator when you're choosing a Docker image, so you can pick one that follows best practices like that.
This also applies to most of the AWS, DigitalOcean, etc. images I have seen. I'll be writing more about how to mitigate this in the next article.
Happy to help if you need a hand.
No-one's doing it --> specialists doing it --> everyone's doing it.
With the speed of modern development, everyone should ideally have a good handle on basic security practices, with a specialist team available for more niche requirements.
However, as you say, in the small-company space it's very hit or miss how much effort can be put into this kind of work.
The thing I'd say about services that do package vulnerability scanning is that they can be useful, but it's easy to get seduced by absolute-sounding numbers (e.g. "a CVSS 10? oh, that must be much worse than a 4").
Unfortunately, from what I've seen, scoring can be pretty arbitrary (e.g. https://raesene.github.io/blog/2014/11/17/want-to-improve-yo... )
Also, the problems there have been in the CVE space (http://www.theregister.co.uk/2016/05/25/mitre_fighter_deploy...) could reduce the efficacy of that kind of scanning if there are gaps where vulnerabilities are not being entered into the system.
All that's not to say there's no value in that kind of work; it's definitely a piece of the programme, but it's important to put it in the appropriate context :)
On top of this, the rating systems used by the different vendors/sources of vulnerabilities are quite different, and, as you mentioned, there's the implicit subjectivity... it's a mess. But a solvable one! That's what I'm working on.
Thanks for the links.