
The most popular docker images each contain at least 30 vulnerabilities - vinnyglennon
https://snyk.io/blog/top-ten-most-popular-docker-images-each-contain-at-least-30-vulnerabilities/
======
DCKing
Although vulnerability scanners can be a useful tool, I find it very
troublesome that you can utter the sentence "this package contains XX
vulnerabilities, and that package contains YY vulnerabilities" _and then stop
talking_. You've provided barely any useful information!

The quantity of vulnerabilities in an image is not, by itself, very useful
information. A large number of vulnerabilities in a Docker image does not
necessarily imply that anything insecure is going on. Many people don't
realize that a vulnerability is usually defined as "has a CVE security
advisory", and that CVEs get assigned based on a worst-case evaluation of the
bug. As a result, having a CVE in your container tells you very little about
your actual exposure. In fact, most of the time you will find that a CVE in
some random utility doesn't matter: most CVEs in system packages don't apply
to most containers' threat models.

Why not? Because an attacker is very unlikely to be able to use
vulnerabilities in these system libraries or utilities. Those utilities are
usually not in active use in the first place, and even when they are, an
attacker is usually not in a position to trigger the vulnerable code.

Just as an example, a hypothetical outdated version of grep in one of these
containers can hypothetically contain many CVEs. But if your Docker service
doesn't use grep, then you would need to _manually run_ grep to be vulnerable.
And an attacker that is able to run grep in your Docker container has _already
owned you_ \- it doesn't make a difference that your grep is vulnerable! This
hypothetical vulnerable version of grep therefore makes no difference in the
security of your container, despite containing many CVEs.

It's the _quality_ of these vulnerabilities that matters. Can an attacker
actually exploit the vulnerabilities to do bad things? The answer for almost
all of these CVEs is "no". But that's not really the product Snyk sells -
Snyk sells a product that shows you as many vulnerabilities as possible. Every
vulnerability scanner company thinks it can provide the most business value
(and make the most money) by reporting as many vulnerabilities as it can. Such
a tool can certainly help you pinpoint the few vulnerabilities that are
exploitable, but that's where your own analysis comes in.

I'm not saying there's not a lot to improve in terms of container security.
There's a _whole bunch_ to improve there. But focusing on quantities like
"number of CVEs in an image" is not the solution - it's marketing.

~~~
kayfox
I work for a network hardware and security vendor, and it's utterly
disheartening how many customers come to us and don't actually care about the
impact of any of the vulnerabilities they ask us about; they just care about
the CVSS score, its PCI impact, and their often bizarre policy around them.
There's often less concern about actually doing something about security risks
and more concern about meeting compliance goals. Now, this may be biased
by who actually reaches out, but it is scary that big names have underlings
who don't know the first thing about some of the security issues they're
"investigating".

In another discussion the other day, I heard programming these days
compared to slowly transitioning out of the hunter-gatherer phase and into a
more structured society. From what I have seen this largely rings true: we are
still relying on software that is largely not engineered, but written with
loose engineering. The security industry seems to be in the same place, but
with more of a Wild West (as depicted in Westerns) feel to it. Some companies
and organizations have structured strategies for security, but even in large
organizations like Equifax there's still a kind of "go shoot the bad guys and
tie up the gate so the cattle don't get out" aspect to it, very ad hoc.

I am hoping the industry moves more towards engineering things, standardizing
interactions, characterizing software modules, etc., so that the security
industry can spend less time on wild goose chases when trying to figure out
how something is supposed to work and how the latest vulnerability applies to
it.

~~~
tracker1
It comes down to risk, cost, reward. If it costs you one developer 3 months to
build a utility used by 5 people in your company, but would cost 3 years for a
team of 5 to write an "engineered" version with security focus, it may never
happen.

It depends on need and risk.

~~~
flukus
> It comes down to risk, cost, reward.

In theory I agree that there are trade-offs like this, but in practice I
rarely see them applied properly. A small startup using Electron to build a
cross-platform app, for instance - I can see how that's a good trade-off. But
then you see multi-billion dollar companies with hundreds of devs and
millions of users building Electron apps when they could easily dedicate the
resources for native ones.

Security tends to be similar, giant (non-tech) companies with lots of
important data are the ones that optimize for cost the most and don't care
about the risk.

~~~
tracker1
I'm not sure I entirely agree... VS Code is really well supported across
platforms that otherwise might have been left behind. Contrast that with, say,
MS Teams, which doesn't have as broad support outside Windows/Mac; that irks
me more.

I'm a pretty big fan of Electron + Cordova for reaching a broader base of
users. As a user who prefers the same tools on Windows, Linux and Mac as much
as possible, I don't think it's a net bad.

There are a lot of things you get in the box with a browser-based platform
beyond the cross-platform support. Even reflows and alignment work
consistently with far less effort. CSS/styling is the same as in the browser,
which is very flexible and capable. Some may dislike JS, but it gets the job
done.

But on the flip side, I've seen people build an entire application
(executable) or service from what could be a simple script in any given
scripting language.

------
haroldp
I'm a little appalled at the general attitude here. "These issues are probably
nothing," is just not a good approach to security. "My app only exposes port
80 so I don't care about local exploits," is not a good approach to security.
"Docker images always have a bunch of junk installed that you don't actually
use," is not a good approach to security.

Am I crazy?

What does this tell us about Docker as an ecosystem? It's amazing tech, to be
sure, but I feel like a lot of projects are leaning on "just install the
docker image" to avoid the hassles of making flexible, compatible,
installable, readable software. If people out in the world can't install your
software because it's not compatible with a library they have updated to patch
a security vulnerability, then you will hear about it, and maybe get a patch.
If people just install your docker image... eh, it works, why bother looking
behind the curtain to see how? That's an ecosystem where I would _expect_ a
lot of bloat and security vulnerabilities to creep in and get worse over time.

~~~
rconti
Both are correct. I've been a sysadmin fixing vulns in PCI infra identified by
a Major Security Vendor, cussing about how pointless it is to fix most of
them, all in order to change some magic number to get below the acceptable
threshold. I've worked for that Major Security Vendor. And now I'm working
elsewhere, using Docker images from god knows where. It truly is quite
stunning how cavalier people can be about their container deployments, but the
reality is the vast majority of these vulns have never mattered to the vast
majority of people. But it's also important to be on top of what the vulns
are, so you can assess if they matter to you or not.

We started in a place of way too little concern about security
vulnerabilities. Some environments are still there, but many have been driven
by draconian policy to go way overboard.

~~~
haroldp
Oof, I have been there on dumb PCI "fixes"!

But my big concern here is, "How do Docker users stay on top of
vulnerabilities?" And I worry that for many of them, the answer is that they
don't. Or they just update their image when a new version comes out. And the
latter answer could actually be a big win for security... provided Docker
image maintainers are staying on top of vulnerabilities. Is the Docker
infrastructure doing a good job of policing that? Of highlighting images that
have known vulnerabilities?

Lots of people are replying that the article doesn't give any details about
_which_ vulnerabilities. That's valid, but is Docker giving details about
known vulnerabilities?

~~~
rconti
I think this is what people are banking on with CI/CD. In theory if the
maintainers knew what they were doing, we could just roll new containers same
day. Because patching things in-place has ALWAYS been a nightmare.

------
SahAssar
I really don't like these kinds of alarmist reports when they don't actually
say what the problems are and how they can lead to actual, real, serious
attacks. A lot of claimed "high CVSS" vulnerabilities aren't that severe when
you put them in context, and sometimes vice versa.

Yes, we should strive to be a lot better - maintaining dependencies is one
thing that nearly everyone in modern development does badly - but this sort of
alarmist post with no concrete examples generally just leads to people
ignoring the whole field/industry. If those images are exploitable in the ways
they are intended to be used, highlight that. If it's as bad as this post
makes it sound, then that should be easy.

~~~
objectified
CVSS is a common and open approach. Vulnerabilities with particularly high
CVSS scores are often directly exploitable; if they aren't, their score was
probably calculated wrong. You can try it for yourself here:
[https://www.first.org/cvss/calculator/3.0](https://www.first.org/cvss/calculator/3.0)

Having said that, what can lead to a serious attack on your organization is
often a combination of factors. Say there is an SSRF vulnerability in your own
application, because the HTTP library you use doesn't parse URLs correctly,
so an attacker can now make your application perform arbitrary HTTP requests.
But fortunately, the connectivity of your application server is quite limited:
an attacker can't reach internal systems or the internet, the application uses
strong authentication for the web services it does use, all the good stuff. So
the chance of a successful, serious attack is greatly diminished.

Also, it can be quite complicated to know what the exact, real dependencies of
your application are. What is the transitive/recursive list of dependencies
your application uses? Which of your application's dependencies actually use
libraries on your system? And what are _their_ dependencies? I think that,
cost-wise, it is cheaper to make sure your application dependencies,
containers, host system libraries, container orchestration tools, etc. are
always up to date.
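
For a Node app, you can at least enumerate what you're actually shipping. A
rough sketch (exact output varies by npm version and base image):

    # print the full transitive dependency tree of a Node project
    npm ls

    # list the OS-level packages baked into a Debian-based image
    docker run --rm node:10 dpkg -l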

And yeah, I agree that the post doesn't do a good job at all of providing a
sane rationale for _why_ you should update. Anyone who has ever administered
an operating system knows that security vulnerabilities are found in them
every day. But the awareness that a Docker container is subject to the same
pace is definitely not present everywhere, and it probably should be.

~~~
SahAssar
I'm not sure if that was the idea, but nothing you said refutes what I said.
If there is a potential SSRF due to one of those vulnerabilities show that, if
there is a potential but unlikely RCE show that.

Just saying that the default node image has 580 vulnerabilities helps no one
actually trying to fix these vulnerabilities or assess how to prevent this in
the future.

------
BossingAround
Did you know Red Hat has a free container registry [1] that is constantly
watched for CVEs, with fixes typically landing within a week of announcement?

Just try "docker run -it registry.access.redhat.com/rhel7-minimal /bin/bash"
and you're good to go...

[1]
[https://access.redhat.com/containers](https://access.redhat.com/containers)

~~~
ihattendorf
What's the licensing status for these images?

~~~
e1ven
The slide at [https://www.redhat.com/files/summit/session-assets/2017/LT122012-dherrman-rhcc-lightning-talk-final.pdf](https://www.redhat.com/files/summit/session-assets/2017/LT122012-dherrman-rhcc-lightning-talk-final.pdf) says that the License will be displayed on the Get Image page.

This page says (for the NodeJS image)

"Before downloading or using this Certified Container, you must agree to both
the Red Hat subscription agreement located at redhat.com/licenses and the Red
Hat Connect Certified Container Partner’s terms which are referenced by URL in
the Partner’s product description and/or included in the Certified Container.
If you do not agree with these terms, do not download or use the Certified
Container. If you have an existing Red Hat Enterprise Agreement (or other
negotiated agreement with Red Hat) with terms that govern subscription
services associated with Certified Containers, then your existing agreement
will control."

~~~
indigodaddy
So I can't tell from this whether you need a RHEL subscription to use the
docker images. I guess the most salient question is whether the images can
yum install things or not. I'm guessing they will have that functionality
without requiring a subscription, but that's just speculation based on my
reading of that blurb.

------
jrockway
Docker's concept of base images is quite useful. You can build your
application in a convenient container, then copy the resulting binary into a
container with nothing else (except SSL certificates and the time zone
database, for Go code anyway).

[https://gist.github.com/jrockway/cceef8bb5dcef62743f8bcbc044...](https://gist.github.com/jrockway/cceef8bb5dcef62743f8bcbc044cd2ad)

I started doing this around the time we started doing vulnerability scanning
and now the containers are both tiny and free of scannable security issues. I
recommend that others take this approach if possible, as having too much stuff
in your container increases app startup time, storage costs, and your attack
surface.
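
A minimal sketch of the pattern (the image tag, paths and build command are
illustrative, not lifted from the gist):

    # build stage: full toolchain
    FROM golang:1.11 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /app ./cmd/server

    # runtime stage: just the binary, plus CA certs and the tz database
    FROM scratch
    COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
    COPY --from=build /usr/share/zoneinfo /usr/share/zoneinfo
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]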

~~~
curtis
I've been doing something similar, but I've been taking it one step further:
Alpine's apk package manager will let you treat a subdirectory as "root" and
you can install packages there. Then you can write the root out as a tar file
and use that as the file system for a "FROM scratch" Docker build. One
deficiency is that every package seems to depend on the Busybox shell, so the
only way to get rid of that is to delete it after it's installed.

So far this approach seems to work OK, but it feels unnecessarily hacky, and I
wish there were better tool support for this kind of thing.
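
For the curious, a rough sketch of the steps (repository URL and package
names are placeholders):

    # install packages into a throwaway root directory
    apk --root /tmp/rootfs --initdb --no-cache --allow-untrusted \
        -X http://dl-cdn.alpinelinux.org/alpine/v3.9/main add nodejs
    # the deficiency mentioned above: busybox comes along, so delete it
    rm -f /tmp/rootfs/bin/busybox
    # pack the root up for a "FROM scratch" build
    tar -C /tmp/rootfs -cf rootfs.tar .

and then a two-line Dockerfile:

    FROM scratch
    ADD rootfs.tar /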

------
mrweasel
Generally speaking, people should be much more careful about just pulling down
images from Docker Hub. Many images are extremely popular, but also very
wrong. For instance, there are images that contain some web framework, a
webserver and a supervisor process, ignoring the fact that this goes
completely against Docker's own documentation.

You also find images that contain some weird little tool, apparently solely
because the author of the Dockerfile wrote it himself, not because you
actually need it.

I wish more people would build packages for their distribution of choice and
simply install via the package manager, rather than pulling down compile-time
dependencies and rebuilding things like webservers, databases or frameworks in
containers.

We've reached the point where I'm concerned when people around me use images
that aren't just base OS images. The quality of images on Docker Hub is all
over the place.

------
aboutruby
A lot of people say you don't need Heroku when you have Docker images, but
Heroku actually takes care of the vast majority of vulnerabilities for you.

~~~
mikepurvis
This has always been my hesitation with Docker— unless you have a proper
pipeline in place to be constantly rebuilding your overlay from updated
versions of the base image, you're basically just carrying a snapshot of
unknown binaries into the future with you, indefinitely.

Obviously, any reasonable shop will have such a pipeline in place, but
DockerHub and the whole ecosystem of docker getting started tutorials seem to
really encourage a "set it and forget it" mentality toward a container once
it's built and working.

~~~
orf
Do they? The whole point is that you rebuild your app on every redeploy, which
includes building from the base image.

Docker has a lot of problems, but build repeatability is not one of them, in
my experience. It makes rebuilding really, really frictionless - in some cases
way too frictionless.

------
tofflos
How about the Distroless containers from Google?
[https://github.com/GoogleContainerTools/distroless/blob/mast...](https://github.com/GoogleContainerTools/distroless/blob/master/base/README.md).

~~~
alpb
Similarly, Google Cloud has managed "base container images" for Debian, CentOS
and Ubuntu: [https://cloud.google.com/container-registry/docs/managed-base-images](https://cloud.google.com/container-registry/docs/managed-base-images)

------
LunaSea
Oh surprise, surprise, another click bait, low effort Node.js security blog by
Snyk.io.

------
Thaxll
I would be very cautious with those vulnerability reports; 95% of the time
they are in libraries from the Docker image that you never use or don't have
access to.

~~~
vjeux
I am an admin on a few high-profile JS projects. GitHub has enabled sending
alerts for vulnerabilities, and I haven't yet seen one that was actually real.
Often there's a deep dependency that we're not even using which can accept a
regex that may cause a DoS.

~~~
eunoia
The deep dependency regex vulnerability warnings are particularly annoying.

Can anyone speak to what the actual attack vector is?

~~~
cyphar
There is a type of regex called a "pathological" regex which can make certain
regex implementations take exponential time. If you expose one in your
application, someone could DoS you. Some libraries have accidentally
pathological regular expressions, so certain user input can trigger the
pathological case (Atom had a bug a few years ago where certain source files
would cause the editor to lock up, caused by a bad regex used to figure out
auto-indentation[1]).

Russ Cox wrote an article about this in 2007[2], and the situation is still
the same. Go doesn't have this problem since Russ Cox is one of the lead
authors of Go, and wrote Go's regex library.

[1]: [http://davidvgalbraith.com/how-i-fixed-atom/](http://davidvgalbraith.com/how-i-fixed-atom/)

[2]: [https://swtch.com/~rsc/regexp/regexp1.html](https://swtch.com/~rsc/regexp/regexp1.html)
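
To make this concrete, here is a minimal demonstration in Python (whose re
module uses a backtracking engine; the pattern is the textbook pathological
example, not taken from any particular library):

    import re
    import time

    # nested quantifiers force the backtracking engine to try exponentially
    # many ways of splitting the input into groups before giving up
    pattern = re.compile(r'^(a+)+$')

    for n in (18, 22, 26):
        start = time.time()
        pattern.match('a' * n + 'b')  # the trailing 'b' guarantees failure
        print(n, 'chars:', round(time.time() - start, 2), 'seconds')

Each additional character roughly doubles the running time, so fairly short
user-supplied input can pin a CPU core.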

~~~
gpm
Rust's regex library also doesn't have this issue [1], and is easily usable
from other languages [2]. If you're concerned about this issue in your code
and you're not using Go, it's probably easier to use this than the Go one.

[1] [https://docs.rs/regex/1.1.0/regex/](https://docs.rs/regex/1.1.0/regex/)

[2]

\- C (and thus everything with an FFI): [https://github.com/rust-lang/regex/tree/master/regex-capi](https://github.com/rust-lang/regex/tree/master/regex-capi)

\- Go (heh): [https://github.com/BurntSushi/rure-go](https://github.com/BurntSushi/rure-go)

\- Possibly somewhat out of date Python: [https://github.com/davidblewett/rure-python](https://github.com/davidblewett/rure-python)

~~~
cyphar
Sure, but the only reason I mentioned Go is because Russ Cox literally wrote
the paper on this problem and also happened to write the Go regex library (as
well as a large part of Go itself). I wrote an FSA-based regex implementation
in Python some years ago[1] as a learning experience, but that's not really
relevant to someone asking "what is a pathological regex".

[1]: [https://github.com/cyphar/redone](https://github.com/cyphar/redone)

~~~
gpm
I agree entirely.

I wasn't trying to say "you should have linked this instead", just trying to
point anyone who sees this and goes "that's an issue in my code" in the right
direction.

~~~
cyphar
Ah, sorry -- I misunderstood the thrust of your point. Didn't mean to bite
your head off.

------
bayesian_horse
Most of these vulnerabilities won't matter to most users. Privilege
escalation inside a docker container (mostly) doesn't go very far, and
developers should avoid handing code execution to users in the first place.

With docker, you can attempt defense in depth: even if someone breaks into an
app in a container, it can be very hard to break into other containers or the
host.

I suspect that many, maybe most, developers have lower-hanging fruit on the
security tree than upgrading deployed docker containers daily.

Removing a vulnerability that can't be used as a link in the "kill chain" of
an attacker doesn't improve security much.

------
vbernat
Currently, "node:10" is based on Stretch. The image is not totally up-to-date.
Despite what the article says, Debian Jessie is still maintained, part of
Debian LTS effort. After pulling "node:10-jessie", "apt update", "apt list
--upgradable" says:

    
    
        curl/oldstable 7.38.0-4+deb8u14 amd64 [upgradable from: 7.38.0-4+deb8u13]
        libcurl3/oldstable 7.38.0-4+deb8u14 amd64 [upgradable from: 7.38.0-4+deb8u13]
        libcurl3-gnutls/oldstable 7.38.0-4+deb8u14 amd64 [upgradable from: 7.38.0-4+deb8u13]
        libcurl4-openssl-dev/oldstable 7.38.0-4+deb8u14 amd64 [upgradable from: 7.38.0-4+deb8u13]
        libpq-dev/oldstable 9.4.21-0+deb8u1 amd64 [upgradable from: 9.4.20-0+deb8u1]
        libpq5/oldstable 9.4.21-0+deb8u1 amd64 [upgradable from: 9.4.20-0+deb8u1]
        libsystemd0/oldstable 215-17+deb8u10 amd64 [upgradable from: 215-17+deb8u9]
        libtiff5/oldstable 4.0.3-12.3+deb8u8 amd64 [upgradable from: 4.0.3-12.3+deb8u7]
        libtiff5-dev/oldstable 4.0.3-12.3+deb8u8 amd64 [upgradable from: 4.0.3-12.3+deb8u7]
        libtiffxx5/oldstable 4.0.3-12.3+deb8u8 amd64 [upgradable from: 4.0.3-12.3+deb8u7]
        libudev1/oldstable 215-17+deb8u10 amd64 [upgradable from: 215-17+deb8u9]
        systemd/oldstable 215-17+deb8u10 amd64 [upgradable from: 215-17+deb8u9]
        systemd-sysv/oldstable 215-17+deb8u10 amd64 [upgradable from: 215-17+deb8u9]
        udev/oldstable 215-17+deb8u10 amd64 [upgradable from: 215-17+deb8u9]
    

There shouldn't be 500 vulnerabilities here. Of course, node itself may pull
in many outdated libraries outside of Debian (a common practice for software
that uses bundled copies instead of system libraries), but without details,
it's hard to know what is counted as a vulnerability. Moreover, if the Alpine
version has 0 vulnerabilities, that would mean all the vulnerabilities come
from Debian.

The article mentions backports of fixes, so I suppose they don't just blindly
compare package version numbers with the versions named in the CVE report.
For Debian, they could use the security tracker to know if a CVE is fixed and
in which version (something Alpine is lacking, which makes it difficult to
assess Alpine's security). However, many CVEs are never fixed because the
security issue is deemed too minor. A bit more detail about the 500
vulnerabilities would help in understanding.

~~~
rixrax
Docker Hub has been providing 3rd-party component details for some years now.
Based on my limited exposure, they've been pretty spot on regarding what
3rd-party code is included and which CVEs impact the shown components (meaning
they appear to mostly correctly account for backported patches to
otherwise-vulnerable libs). See below (requires a Docker Hub account):

URL: [https://hub.docker.com/_/node/scans/library/node/current-slim](https://hub.docker.com/_/node/scans/library/node/current-slim)

URL: [https://hub.docker.com/_/mongo/scans/library/mongo/4.1](https://hub.docker.com/_/mongo/scans/library/mongo/4.1)

~~~
vbernat
For node:10-jessie, this is:
[https://hub.docker.com/_/node/scans/library/node/10-jessie](https://hub.docker.com/_/node/scans/library/node/10-jessie).
This seems credible.

------
cheald
FWIW, we use Gitlab to help protect against this: we use its CI scheduling to
rebuild our base images on a regular basis. Our Docker image builds are
managed via CI, and we have a schedule set up to rebuild them with --no-cache.
This keeps our base images fresh without slowing down most marginal builds
during the workday.

You could obviously do this with cron, as well, but if you already have a CI
pipeline managing your base images, it makes sense to set up a recurring
build.
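
As a sketch, the scheduled job in .gitlab-ci.yml looks something like this
(job name and registry are made up; --pull additionally refreshes the
upstream base image):

    rebuild-base:
      only:
        - schedules
      script:
        # --no-cache forces every layer to be rebuilt from scratch
        - docker build --pull --no-cache -t registry.example.com/base:latest .
        - docker push registry.example.com/base:latest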

------
nivenhuh
Looks like Alpine-based images were found to be "ok". Not sure why folks would
use non-Alpine-based images unless they really have some odd dependencies.

------
ricardobeat

> The node:10-alpine image is a better option [...] while no vulnerabilities
> were detected in the version of the Alpine image we tested

Note that Docker Hub offers no way of verifying download statistics _per
tag_, so we don't know how many containers are using the base node image - in
my experience everyone uses alpine.

~~~
tracker1
I tend to build on default/full and release in alpine.

------
coleifer
Can't tell if this is FUD or not. For example, with node, typically you'd only
be exposing port 80, right? So as long as your HTTP server isn't vulnerable,
you're OK. Same with Redis, which I hope isn't anywhere near the public
internet?

~~~
acdha
> So as long as your http server isn't vulnerable you're ok

… and everything loaded by every dependency in any situation that doesn't
require admin access. Your server can be fine, but if you process images, for
example, you have to follow libjpeg, libpng, zlib, littlecms, etc.

Yes, it's a lot better than a full multiuser Unix system, where you have to
worry about background processes that aren't useful for a dedicated
microservice, but there's a long history of vulnerabilities in components
being combined into successful exploits, and it's usually far more expensive
to try to analyze those chains than to upgrade.

This brings me to:

> Or same with Redis, which I hope isn't anywhere near the public internet?

That's hopefully true in general, but also consider chained attacks: say
you're running a web app and I find a way to run code in the app process. That
access might be limited, but if I can poke at Redis enough to run code there,
I can test whether you were as diligent about sandboxing it. That'll hurt if,
say, there was a container exploit which someone delayed patching because they
“knew” the app only runs as an unprivileged user.

------
stevebmark
Most of these "vulnerabilities" are in the operating system that aren't run
when you run 99% of Docker containers...

~~~
ben509
Okay, what about the 1%? And how do you find out which 1% apply to you?

~~~
stevebmark
I'm guessing you would know, because you'd be doing a lot of specific work in
the container to boot an OS, like getting a full Ubuntu instance up and
running in a container.

------
finchisko
I have an idea for how to solve the problem of outdated images: what about
building a service that automatically rebuilds your image whenever the base
image is updated, and automatically publishes it to Docker Hub (or a private
hub)? What do you guys think? Is there any interest in such a thing?

------
g105b
Can we have a new tab on Hacker News? "ask", "show", "jobs" and "marketing"?

------
zeristor
At a previous employer they were quite chuffed to have a Windows 2000
locked-down build provided by the NSA.

------
PeterHK
If you have issues keeping packages updated, or don't like unneeded stuff in
your container build, use NixOS's `dockerTools.buildImage` (the builder needs
to be a NixOS machine, but that should not be a problem). I also like
`skopeo` for uploading the image.

------
je42
things like
[https://github.com/GoogleContainerTools/distroless](https://github.com/GoogleContainerTools/distroless)
should help to reduce the attack surface of your containers.
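
As a sketch, consuming one of these is an ordinary base-image swap (the
binary name is hypothetical, and it has to be self-contained since the image
ships no shell or package manager):

    # run a self-contained binary on the distroless base image
    FROM gcr.io/distroless/base
    COPY myapp /
    CMD ["/myapp"]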

~~~
collinmanderson
Interesting. So instead of using a package manager inside the container, it
downloads .debs from debian, parses and extracts the files, and then uses
bazel to build the container directly.

------
musicale
If only there were a technology where you could fix a vulnerability (or other
bug) once and it would automatically propagate to all of your applications
without having to rebuild them.

Maybe we could call it a "base OS image" or maybe "shared libraries."

------
ganoushoreilly
Anyone else think this was Synack at first with a short url?

~~~
ganoushoreilly
Why would someone downvote an observation that this product was confused with
another cyber security firm?

~~~
ganoushoreilly
Let's make a threepeat.

------
alexnewman
How does this compare to vagrant?

~~~
e1ven
One big difference is that (afaik) people aren't putting vagrant images into
production; it's for local development.

~~~
alexnewman
Sure they do. You're totally wrong; they just shouldn't.

