Don’t Docker themselves have a tool for this, Docker Scout? Pops up with how many known vulnerabilities are in each layer when you go to the page for a specific tag on Docker Hub.
I think it’s a somewhat new product so it may not be too widespread yet, but it seems to work pretty well from my admittedly uninformed perspective.
There are a number of image scanners which (IME) can produce different results from each other, although it's not always as clearly right/wrong as you might expect. Trivy (which this page is based on), Docker Scout, and Grype are three of the more common ones.
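If you want to compare them yourself, each has a one-line CLI entry point. A minimal sketch, assuming all three are installed locally (the image tag is arbitrary, and Scout ships as a Docker CLI plugin):

    # Scan the same image with three scanners and compare findings.
    trivy image nginx:1.25
    grype nginx:1.25
    docker scout cves nginx:1.25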
If this detects things that Docker misses, then it's a good product. Consider adding support for GitHub Actions so a PR can automatically kick off a scan. You'll see lots of repeat images, so cache appropriately. With an integration, I think you could charge a subscription for this tool.
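For the CI angle, Trivy's own exit-code and severity flags already make it easy to gate a PR; a hedged sketch of what the scan step might look like (image name and cache path are placeholders, and flag names can differ between Trivy versions):

    # Fail the pipeline if HIGH or CRITICAL findings exist; keep the
    # vulnerability DB cache on a persistent volume to handle repeat images.
    trivy image --exit-code 1 --severity HIGH,CRITICAL \
        --cache-dir /var/cache/trivy myorg/app:latest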
It looks great. My main concern with Docker Hub images is what else is in the image that shouldn't be there. Not necessarily CVE issues, but just downright malicious code.
How do you identify what is a safe Docker Hub image?
Surely it's not just reputation of the publisher of the image.
Docker is slowly improving tooling for reproducible builds. I'm working on a blog post presently about how to do it. Reproducible images allow others to audit the build. If you're worried about third-party build systems getting infected and injecting malware (unknown to the otherwise trustworthy publisher), this can help.
At the moment I'm rat-holing on apt package pinning, which doesn't work at all like I expected. Looking like I'm stuck between the Debian snapshot archive and vendoring .deb files (I don't like either).
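For anyone curious, the snapshot-archive route looks roughly like this inside a Debian-based image build (the snapshot timestamp and package version below are illustrative, not current):

    # Point apt at a frozen point-in-time mirror so every rebuild resolves
    # identical package versions.
    echo 'deb [check-valid-until=no] https://snapshot.debian.org/archive/debian/20240101T000000Z bookworm main' \
        > /etc/apt/sources.list
    apt-get update
    # Exact-version pin; against the regular archive this breaks as soon as
    # the version rotates out, which is the rat-hole described above.
    apt-get install -y curl=7.88.1-10+deb12u5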
Hi there, I'm from the Trivy team -- you can scan container images for misconfigurations, i.e. in the Dockerfile, with Trivy as well.
However, without the source being open, you cannot really check what anyone is up to -- so don't just use any container image from Docker Hub.
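A minimal sketch of the misconfiguration scan (exact behavior can vary across Trivy versions):

    # Checks Dockerfiles, Kubernetes YAML, Terraform, etc. in the given path.
    trivy config .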
For production, I'd recommend not using any image that isn't in the official base image set, which is maintained by Docker -- and if you're using Docker Hub, you already trust Docker :) There is also the "verified publisher" scheme, where Docker have done some verification of the publisher, so you may also want to trust those.
Outside of that, any image can have anything in it. (Docker do sometimes remove actively malicious images if they're notified of them.)
If you want an image similar to an existing one, you can often just read the dockerfile and create your own.
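Even when the Dockerfile isn't published, the layer metadata gives you a rough reconstruction to start from; a small sketch (the image name is just an example):

    # Print the command recorded for each layer of an image.
    docker history --no-trunc --format '{{.CreatedBy}}' nginx:1.25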
Scanning one package is one thing, but distributing what are essentially entire disk images for a functioning OS plus whatever software you’re after just never seemed appealing to me. It’s doubly laughable when it’s Java or something else with its own VM, because for some reason we never have enough abstractions.
I did like the idea of running an OS that is a purpose-built container host, and containers that are stripped as bare as possible, until I thought about it more and realized that that’s supposed to be what a normal OS does with normal software. Zones and jails and cgroups and LX(C|D) were good ideas that we needn’t have reinvented on top.
There are a number of web/SaaS based vulnerability scanners out there actually :) Snyk's SaaS does it, for example. Free to sign up, free to use. My company's SaaS also provides container scanning with Grype and Trivy. There was another free web-based tool I found a while ago too (forgot the name... will look for it).
It's worth noting that whilst vulnerability scanners are very useful, they're pretty easy to bypass if someone creates an image and doesn't want them to flag things up.
Security scanners are not supposed to be defensive tools -- they flag what they can find; evaluating the quality of the resources is still the responsibility of the user.
You should include seed phrase and private key detection. A few crypto protocols that publish public Docker images have been drained after accidentally committing keys to Docker Hub.
I think Trivy does that already [1]. I personally use trufflehog [2] to find secrets of all kinds. Unfortunately, these sorts of tools have false positives.
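For reference, both can be pointed at an image directly; a hedged sketch, since flag names shift between tool versions (the image name is a placeholder):

    # Trivy's built-in secret detection.
    trivy image --scanners secret myorg/app:latest
    # trufflehog scanning the image's layers for keys, tokens, etc.
    trufflehog docker --image myorg/app:latest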
It might sound like nitpicking but I find it dangerous to say vulnerability == CVE.
CVEs are one source of information about potential vulnerabilities but they are amongst the least reliable these days. I've heard them being called Curriculum Vitae Enhancer.
And Trivy itself uses more sources than just CVEs as well. With upcoming regulation like the Cyber Resilience Act we'll get even more sources of vulnerabilities.
I believe the future will be/should be one of distributed sources for this information. Every vendor might want to be their own authoritative source of vulnerabilities.
This is a long way of saying: Useful tool, congratulations on the launch! I'd suggest a change of name as you limit yourself with the current one.
(Not the author.) I’ve only really encountered CVEs when they have been mentioned on patches, usually Debian security updates.
Do vulnerabilities normally get a patch and are we expecting upcoming regulation to require the patches are installed? If action is required to be taken when vulns are published do we all have to just uninstall the thing until the bug gets fixed, lest we invalidate our corporate insurance policy?
Will I have to cease my current policy of running Trivy, reading the CVE output, and then declaring (and making a git commit saying) “while this stdlib library technically supports CORBA and our OS technically supports IPX, we don’t use CORBA or IPX… or networking… or this library… so I’m ignoring this!”
I'm not sure if you're replying to the correct thread?
All I'm saying is that all CVEs are supposed to be vulnerabilities but not all vulnerabilities have a CVE. So, the name artificially limits the scope of the product.
Trivy reports more than just CVEs
I have been tangentially involved in the Cyber Resilience Act (CRA). I'm more an interested party than a real expert.
We recently wrote a document on how we would like to approach our own vulnerability management process. It received a lot of comments and we'll gladly take more:
> Do vulnerabilities normally get a patch and are we expecting upcoming regulation to require the patches are installed?
Not all vulnerabilities get a patch; it's really up to the project. But this is what the CRA is about: in a commercial context it will require vendors to handle vulnerabilities, e.g. by providing patches. And depending on which industry you are in, you might also be required to install said patches.
> If action is required to be taken when vulns are published do we all have to just uninstall the thing until the bug gets fixed, lest we invalidate our corporate insurance policy?
I doubt it but it's a good idea to have a good overview of what you're running in your company. This extends to dependencies which might be included in things you're running. That is what SBOMs are meant for. They will be required in the future.
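Generating one is already a one-liner with Trivy; a minimal sketch (CycloneDX here, SPDX output is also supported; the image name is a placeholder):

    # Emit a CycloneDX SBOM for everything Trivy can identify in the image.
    trivy image --format cyclonedx --output sbom.cdx.json myorg/app:latest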
> Will I have to cease my current policy of running Trivy, reading the CVE output, and then declaring (and making a git commit saying) “while this stdlib library technically supports CORBA and our OS technically supports IPX, we don’t use CORBA or IPX… or networking… or this library… so I’m ignoring this!”
No. That is excellent! It will be formalized into a machine-readable format, currently often called a VEX statement: Vulnerability Exploitability eXchange. One popularish format for this is CSAF, but CycloneDX (an SBOM format) can also be used.
Having this in a machine-readable format makes it easier for users to consume the information.
The tooling for this is not great yet, which is what our document above is about.
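To make the "we don't use CORBA" commit message concrete: an OpenVEX statement of that kind looks roughly like the sketch below, and recent Trivy versions can consume it to suppress the finding (the CVE, package URL, and author are placeholders, and the --vex flag depends on the Trivy version):

    # A minimal "not affected" statement in OpenVEX form.
    cat > openvex.json <<'EOF'
    {
      "@context": "https://openvex.dev/ns/v0.2.0",
      "@id": "https://example.com/vex/2024-001",
      "author": "Example Security Team",
      "timestamp": "2024-01-01T00:00:00Z",
      "version": 1,
      "statements": [
        {
          "vulnerability": { "name": "CVE-2021-44228" },
          "products": [ { "@id": "pkg:deb/debian/example-lib@1.0.0" } ],
          "status": "not_affected",
          "justification": "vulnerable_code_not_in_execute_path"
        }
      ]
    }
    EOF
    # Feed the statement back into the scanner so the finding is filtered.
    trivy image --vex openvex.json myorg/app:latest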
Hope it helps.
Happy to chat about this if you're interested. Reach out if you like. Details should be in my profile.
The question on everyone's mind is whether or not regulators (FedRAMP, StateRAMP, etc.) will accept variances backed by a VEX statement. The posture for the vast majority of businesses subject to this kind of regulation is "fix every single CVE regardless of impact".
That's a good question and I encourage you to join the CISA Working Groups on VEX etc.
It is only indirect work but it might help steer things in the right direction.
I'm repeating myself but: CVEs are useless. Please don't use them as an equivalent for "vulnerabilities".
Regulators might take a few years to catch up but if I'm able to DDoS my competition with bogus vulnerabilities then people will do just that.
The CRA says that only "exploitable" vulnerabilities have to be fixed and it does accept VEX statements. So that's good.
I would have liked to see "exploited" instead of "exploitable" but it's better than nothing.
I love the hundreds of critical vulnerabilities on test libraries for nonsense like ReDoS or "if you write code that does something unsafe, it will do something unsafe."