
Over 30% of Official Images in Docker Hub Contain Security Vulnerabilities - mikagrml
http://www.banyanops.com/blog/analyzing-docker-hub/
======
vacri
"Containers have revolutionized software development by providing a very
efficient path to take software written by developers and run it in production
in a matter of minutes or hours, rather than days or months using traditional
approaches."

FUD. The technology of deployment does not change 'minutes or hours' into
'days or months' - it's management red tape that does that. In fact, in my
use case, Docker takes a similar time to build as a normal package (.deb)
using an up-to-date base image, but is actually _slower_ to deploy, since now
my servers have to download a stupidly large container with build-essential
(npm doesn't really survive without it), python (because npm maintainers use
python frequently), and graphicsmagick (for the in-house app), instead of
'just the app' that's in a _normal_ package.

If your environment is simple enough that you don't have to be concerned with
testing in 'staging' against staging databases or similar, then you're
definitely not saving 'days', because your env just isn't that complicated.

~~~
justingood
"The technology of deployment does not change 'minutes or hours' into 'days or
months'"

I wouldn't say that's true. We're transitioning into multiple languages, and
want to have an environment that will allow future languages to be added as
required. Building a generic infrastructure to run containers lets us run
everything on the same base platform. Otherwise, we'd need to tailor the
images and configuration for the individual language type. When a new language
is introduced, it can take 'days or months' to get everything working well.

That's not to say Docker doesn't require the same attention to security as
other options. This seems to me akin to running a downloaded base VM image
without first doing updates.

~~~
KaiserPro
Or, you could do what HPC has been doing for years and separate the config
from the machine.

What do I mean by that? Shared drives.

Seriously, install python$ver plus dependencies into /mnt/bin and add it to
your path. You now have a single (optionally read-only) source for each
binary version.

This means that you can have many versions of the same software, each
compiled a different way. But because they are in the path, they can be
transparently managed. It also means that much of the config management is
now in one place, making joining nodes super simple.
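
A rough sketch of what I mean (paths and versions are just examples):

    
    
        # Build each interpreter version into its own prefix on the shared mount
        ./configure --prefix=/mnt/bin/python-2.7.9
        make && make install
    
        # Nodes then just prepend whichever version they want to their PATH
        export PATH=/mnt/bin/python-2.7.9/bin:$PATH
    

One place to patch, and every node picks it up.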

~~~
DannoHung
We do this at my company and it is a fucking nightmare. Why? Because there are
like 4 different operating environments and there isn't an _official_ way to
do installations and you also have to manage site installations of various
packages for each version of each language dependency. And god forbid some
environment variable is pointing to the wrong version of something because
it's _not_ just as simple as setting PATH and LD_LIBRARY_PATH when everything
and its mother tries to set its own fucking environment variables all pointing
to wherever they think they were compiled at.

No, it is _much much MUCH_ better to actually have an application build with
its dependencies and deploy with its dependencies. And you know how you fix
issues with security patches? You have a real build system that rebuilds your
binaries and you redeploy regularly.

------
eliaspro
Not surprised at all.

And here we have a prime example of why the Docker model of building and
distributing containers is horrible when it comes to security and maintenance.

Bundling dependencies for production environments has always been and always
will be a terrible idea.

~~~
jtheory
This sounds like an oversimplification, though:

> Bundling dependencies for production environments has always been and always
> will be a terrible idea.

We're considering Docker currently -- not for the distribution model at all,
since we'd only ever use our own internally built & maintained images -- but
as a clean way to break apart dependencies, and make it possible to run a
diverse multiple-server-type environment (production) in miniature
(development, demo, UAT).

I quite like the idea of something that may occupy multiple VMs or dedicated
servers in production being able to run as a lightweight app in a dev
environment, with exactly the same dependencies in place -- that's quite
useful.

If this kind of use case is also a terrible idea, I'm interested to hear more
-- we're just now tinkering with the idea, and haven't yet moved from theory
to practice.

My own concerns revolve around how easy it will be to keep updated on RHEL
patches, for example -- apparently we should be able to keep both host and app
dependencies updated without much trouble, but it adds more complexity to the
maintenance cycle (it seems).

~~~
creshal
> My own concerns revolve around how easy it will be to keep updated on RHEL
> patches, for example -- apparently we should be able to keep both host and
> app dependencies updated without much trouble, but it adds more complexity
> to the maintenance cycle (it seems).

That about sums up the "problem" with Docker – it's deceptively easy to roll
out everything as its own containerized app. Updating? Not so much.

It turns Docker from a magical silver bullet into a slightly fancier way to
handle reproducible deployments. Using it this way is fine, but not what
Docker is marketed as by many.

~~~
olavgg
Actually it's pretty easy; I just did it yesterday for my PostgreSQL
container. Debian/Ubuntu example:

    
    
        sudo docker exec -it my_pgsql_container_name /bin/sh -c \
            "apt-get update; apt-get -qqy upgrade; apt-get clean"
    

~~~
tomaac
And what happens when you launch a new container from the same image? You
need to run apt-get/yum again. Or rebuild the image.

~~~
s_baby
That's why you keep everything with state in a separate volume container.
Attach the volume to the built image and that's it.
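
Something like this (the container names are made up):

    
    
        # Data-only container that exists just to own the volume
        docker create -v /var/lib/postgresql/data --name pgdata postgres /bin/true
    
        # Run the app container against it; after rebuilding the image, re-run
        # this same line and the state carries over
        docker run -d --volumes-from pgdata --name pg postgres
    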

~~~
jeromenerf
Mount data, logs, configuration, and any extensions in the data container?

For pg, there might be some migration needed when jumping from one major
version to the next, which requires both versions to be installed, on Debian
at least.

~~~
s_baby
>Mount data, logs, configuration, and any extensions in the data container?

Many programs have their state represented as files that are stable across
versions. If you have a cluster of the same image with different states, it's
more efficient to move volume containers across a network. Easier to back up
and upgrade too.

pg is going to give you those problems whether you are using Docker or not.

------
jkyle
I have a few contentions with the study.

First, if you look at their own analysis, the number drops from 30% to 23%
when limited to only the latest tagged images in the official repository. I'd
expect to see a higher rate of vulnerabilities in previous versions... that's
why you rebuild. Find me a Linux admin who would accept that their OS is
vulnerable when you're citing old, unpatched versions.

Second, they seem to virtually _all_ be package vulnerabilities. These would,
ostensibly, reach parity with whatever the target distro is by simply updating
packages on a rebuild.

Finally, I think one would be hard pressed to lay any vulnerabilities traced
to updated, current packages at the feet of docker. That fault would seem to
lie squarely with distro package maintainers.

So, two simple rules would seem to bring the security of container deployment
in line with standard bare metal deployment (by the metrics applied in this
research):

1. Don't use old shit

2. Rebuild your selected docker container to ensure packages are up to date.
Why? See rule #1.
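
Rule 2 is a one-liner if your Dockerfile runs the distro's package updates
somewhere (the flags are real; the image name is made up):

    
    
        # --pull re-fetches the base image; --no-cache makes the apt-get
        # update/upgrade layers actually re-run instead of being cached
        docker build --pull --no-cache -t myapp:latest .
    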

~~~
Sleaker
I thought the point of using docker containers was that they were pre-packaged
apps. Not so you had to continually rebuild the container with your own
updated packages. Doesn't having to rebuild the container to fix security
vulns defeat one of the major reasons to have versioned docker images released
for use? You could very well end up breaking dependencies.

~~~
DannoHung
You're sort of conflating two things: 1) Docker makes it super simple for
anyone to package software and run it; 2) Dockerhub makes it simple to share
software that you have packaged with other people.

Personally, my biggest gripe with Dockerhub is that a Dockerfile should be
_required_ in order to upload to the hub, and it should show the Dockerfile
that produced each version. The fact that people can create fundamentally
unreproducible binaries is nasty (there's also the issue of not specifying
versions in the apt/yum steps used in the Dockerfiles, but that's just a
general problem with the way package management software is designed).
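
(Pinning is at least possible in an apt step -- the package and version here
are purely illustrative:)

    
    
        # Pin the exact package revision instead of taking whatever is current
        RUN apt-get update && apt-get install -y curl=7.38.0-4+deb8u2
    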

None of that's a problem with Docker itself though.

~~~
Sleaker
Ahh got it. I have only really used lxc, so not super familiar with docker
other than it being a container tech. Thanks for the explanation :D

------
dantiberian
This isn't great, but it's not quite as terrible as it's being made out to be
for the official packages. The Mercurial bug is only relevant if you're using
Mercurial with user-supplied input on your production servers -- unlikely if
you're not BitBucket.
[http://chargen.matasano.com/chargen/2015/3/17/this-new-vulnerability-mercurial-command-injection-cve-2014-9462.html](http://chargen.matasano.com/chargen/2015/3/17/this-new-vulnerability-mercurial-command-injection-cve-2014-9462.html)
is a good read on the subject.

The libtasn1 bug seems to be only relevant if you're using GnuTLS. Again, not
great but not the most widely used library either.

Cutting those two out cuts the number of vulnerable images in half and there's
probably a few more rarely used programs with security issues further down the
tail. Again, this isn't great, but it's not quite as terrible as the authors
are making it to be.

The user-supplied packages, on the other hand, seem to be quite a bit worse.

~~~
acdha
The take-home message is that you need to have a strategy for deploying
updates. It's true that not all bugs are exploitable but there's a long
history of people being catastrophically wrong in that kind of conclusion.

More importantly, however, you want updates to be a routine frequent thing so
you don't train people to ignore them or let the backlog build up to the point
where the size itself becomes a deterrent to updating because too many things
will change. If you install updates regularly, you keep changes smaller and
keep the focus on the tight reaction time which you'll need for serious
vulnerabilities.

~~~
jgummaraju
One of the authors here. I'd like to second this take-home message. The core
of our work was to bring to the forefront that package management using
containers is important, and we need to have sound operations
management/security practices in place.

We think Docker, and containers in general, is a great way to deploy software
-- the speed and agility are so much better than traditional approaches. This
also means that we should have sound security practices in place from the very
beginning, or else we could easily end up with insecure images floating around
in several places (dev laptops to public cloud).

~~~
acdha
> the speed and agility is so much better than traditional approaches

Complete agreement here – Docker's strong points are exactly the things which
make patch deployment easier than in legacy environments. Hopefully we'll
start seeing orchestration tools which really streamline the rebuild/partial
deploy/monitor error rates/deploy more cycle when updates are available.

------
mpdehaan2
It would be interesting to see if Docker could develop an integrated security
scanner that checks the package lists of each image and emails consumers of
those images when security vulnerabilities come out.
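
The raw input for such a scanner is already easy to get at yourself; a sketch
for Debian-based images (the image name is arbitrary):

    
    
        # Dump the image's package manifest, ready to diff against a CVE feed
        docker run --rm some/image dpkg-query -W -f '${Package} ${Version}\n'
    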

If Docker Hub is a monetization strategy, I think a lot of people might be
willing to pay for that -- though it's weird, because that's a problem golden
images themselves created, so maybe it's not fair -- and the world would be
better if security info were always free. Tracking security updates is hard
if you use a lot of deps anyway, so this has the benefit of being a central
place that can check these things. Most developers shipping software
definitely do not track security history for most of their components, and
this is a huge opportunity.

The problem gets harder when people get things from outside package managers
and vendor stuff, though -- which does not help.

I owe Red Hat for a large part of the way I think about things, and I do
think the world would be better if package managers were used more
extensively, for exactly the reason of tracking vendor security. I also
realize not everybody can package everything; people do like to vendor deps
(or use language-specific package managers, often installed in arbitrary
locations) or put them together _however_ (random internet tarballs), and
ironically this is why things like Docker exist too.

The immutable-systems movement is good, but something to clean up security
practices would be a huge plus, to avoid the comparisons to a regression back
to "golden images". Using random base images instead of distro base images
makes it worse, but using stale distro images is a problem in itself.

------
jakozaur
A bit overstated. Their definition of security-vulnerable == contains a
package which is vulnerable.

However, merely having some packages with vulnerabilities may not be enough.
E.g. you may have a vulnerability in the package manager (apt), but you never
use it after building the image. Even Shellshock is a non-issue if you don't
use CGI scripts and don't have ssh access.

This problem also exists in Virtual Machines. I guess it is more about how
often you update your software than about Docker itself.

~~~
innguest
Tell me about it!

Gotta love those security experts that your company hires when they say to you
"your app has a security issue right here" and I say "alright then prove it,
hack it, let's see if there really is a security issue" and they can't do it.

If I don't want to worry about deployment, there's Heroku. If I don't want to
worry about testing, there's Circle CI. If I don't want to worry about
scaling, there's AWS EC2. If I don't want to worry about security, there's...
nothing. Because it's not a real product. At least not real in the way
databases, deployment, testing and scaling are.

So when people say "programmers don't care about security" I honestly don't
understand what they mean since I've never seen a secure app. It's like
there's this mob of believers that want to convince you security is the
salvation. OK, teach me by showing. Show me a bunch of secure apps and we'll
learn from it. But those don't exist, so no one ever learns, but that doesn't
keep "security experts" from blaming programmers building real things in the
real world for not caring about their imaginary friend.

I'll believe security experts care when they create a _service_ and sell it
for money to people like me.

~~~
nickcano
Security Guy: Hey, bank, it seems like your vault is accessible via some old
sewage tunnels.

Bank: So what? Nobody knows about those tunnels.

Security Guy: But someone who finds them, like me, but with less morals, could
rob you.

Bank: Prove it. Rob the vault.

Security Guy: ..... ?

Finding a vulnerability isn't the same thing as exploiting one, and a lack of
exploitation doesn't imply a lack of vulnerability. You also have to consider
that only a small portion of vulnerabilities are actually exploitable, and
it's a very hard problem to find out which ones are and which ones aren't.
Exploiting
a single vulnerability is typically harder, in fact, than patching a dozen of
them (for example, you can easily start using a secure version of strcpy(),
but exploiting it requires an attacker to smash the stack or ROP their way
into full execution).

The bottom line is that you're not only naive if you believe what you just
said, but you're doing a _huge_ disservice to anybody who uses any code that
you may write.

~~~
innguest
Security Guy: Hey, bank, it seems like your vault is accessible via some old
sewage tunnels. But fret not: I, as a security expert who goes around making
sure places such as banks and schools can't have specific areas accessed by
entrances other than the designated ones, have a solution for you. Just put
your safe inside this chroot building. What this does is make sure only
sewage goes through sewage pipes (not people). So all you have to do is
purchase this solution and we will guarantee that no one will come into your
bank through the sewage pipes.

Why does that never happen? Why are security experts always consultants who
never have a product to sell?

Naive is a person that thinks just because they are a security expert,
programmers will care. No amount of shaming will change that. If you're a
security expert your job is to make this so easy that I almost don't think
about it. Like I almost don't think about databases, deployment, testing,
scaling. Getting on your high horse and begging programmers changes nothing.

Just look at RSpec. All of a sudden everyone wants to write tests because it's
fun and easy and looks sort of like English. Now we don't have to care much
about tests, we just write them and RSpec runs them, collects and reports
errors, formats them nicely, tells me the path and the line number where each
error occurred, etc. Now imagine you're a "testing expert" and there's no
RSpec and you keep yelling at programmers to change their ways, to write and
maintain tests, and so on. No one would do it (like few did before the recent
craze). So please, learn from that lesson, round up some peers, and contribute
to your damn field by letting me forget about it.

~~~
nickcano
So a structural engineer shouldn't worry about the structural integrity of his
buildings, only that they stand up under ideal conditions? A car manufacturer
shouldn't worry about crash-testing or other safety concerns, only that their
car moves?

HOW DOES THAT MAKE ANY SENSE?!?!

Like it or not, we're stuck on Von Neumann architecture, and as a result, data
can be treated as code and vice-versa. The consequence of this is that, under
certain circumstances, data can be carefully crafted to act as code, and can
be executed in an unforeseen context. As a software engineer, _it is your job_
to take precautions when developing software. Precautions that prevent this
execution. Security people do the best they can to make it easy to develop
safely, but all of that is _useless_ if the developers ignore it. And, because
security vulnerabilities are a manipulation of context-and-program-specific
control flow, there's not a way to encapsulate all security measures in a way
that is transparent. It's just not possible. Only developers know the
specifics of their software, and only developers can protect certain edge
cases. If you assert otherwise, you have a fundamental misunderstanding of the
systems that you work with, and you need to re-evaluate your education before
continuing to work in the industry (assuming you do). This isn't an opinion.
This is a fact.

Lastly, us "security experts" do contribute to our field. Security is one of
the hard problems in computer science - far harder than whatever you're doing
that lets you _" not think about databases, deployment, testing, scaling"_ \-
and there's a lot of solutions that have been engineered to deal with software
that has been created by people like you. There's _static code analysis tools_
, which can detect bugs in code before it is even compiled. There's _memory
analyzers_ that can detect dozens of different classes of memory-related bugs
by just watching your software run. There's _memory allocators_ and _garbage
collectors_ that can prevent issues with use-after-free and other heap-related
exploitation bugs at run-time. There's _data execution prevention_ and _buffer
execution prevention_ that, at run-time, help prevent code from being executed
from data pages. There's _EMET_ and other real-time exploit detection tools
that exist outside of your software and can still prevent exploitation. That's
not even an exhaustive list. There are literally hundreds of tools out there
that make finding and fixing security bugs easy, but those tools can't patch
your code for you. That's why there are consultants, code auditors, and
penetration-testers that can give advice on how to fix bugs, find bugs where
automated tools fail, and even coach developers into writing more secure code;
because having smart, security aware developers is one of the major ways to
defend against security bugs.

~~~
innguest
> As a software engineer, it is your job to take precautions when developing
> software.

On other people's software as well? Why was it not PostgreSQL's (random
example) job to make sure their software rejects invalid input? All it would
take is for them to use a typed language (given that the type system in
Haskell, for instance, is enough to prevent SQL injection). So tell me, when
does it become my job to patch whatever database code I choose, when no
database has ever concerned itself (it seems) with solving this for everyone
else in one fell swoop (so we wouldn't have had to think about it for all
these decades of dealing with SQL injection in every language that implements
a database driver)?

Before the first million programmers had to write the same damn code to clean
the input they give to these databases, the database coders should have fixed
it themselves. But you weren't there to chastise them, so we didn't get it.

Maybe the "mere mortal" programmers like me would be more excited about
security if the industry standard software was also secure (we would want to
mimic it, and keep it all secure, and not _introduce_ security problems). No
security expert has fixed the SQL injection problem where it should be fixed,
but they do charge by the hour to fix it in every company that uses a
database.

~~~
nickcano
That's a horrible example. SQL injection IS the fault of the programmer, not
SQL itself. SQL injection is achieved by adding extra code to a query, which
is only possible when a programmer allows inputs that can contain code to be
concatenated directly into a query. Here's an example:

    
    
        query = "SELECT * FROM USERS WHERE NAME = '" + userinput + "'";
        exec(query)
    

This input can be given:

    
    
        ' OR 1=1--
    

To make the application show the entire list of users. If this programmer used
parameter binding, which is supported by PostgreSQL, MySQL, SQLite, and any
other SQL platform you can think of, then SQL injection wouldn't be an issue.
They could simply do something like this:

    
    
        query = "SELECT * FROM USERS WHERE NAME = @:USER";
        statement = prepare(query, "USER", userinput)
        exec(statement)
    

Just because you don't know the right way to do something securely, doesn't
mean it's not there. But you're right, no security expert fixed this problem.
It was fixed by the library designers of these SQL platforms. Security experts
just charge you by the hour to teach you that you're unfamiliar with the
existing security mechanisms inside of these platforms.

Also, just to be pedantic, I'll point out that a type system wouldn't change
how SQL injection currently works, lol, no clue how you think that's the
case, but I wouldn't put it past you at this point.

~~~
innguest
I've programmed for a while now. I think I've heard vaguely of parameterized
statements. :)

Just to be pedantic, I'll point out that maybe _your_ C and C++ "type" system
wouldn't change how SQL injection currently works, lol, but the one I use can
avoid not just SQL injection but XSS attacks:
[http://www.yesodweb.com/page/about](http://www.yesodweb.com/page/about)

I'll say it again, you're wasting your time staying in that small rickety
photocopy room called C/C++. But I wouldn't put it past you at this point.
Whatever that means, hahah.

~~~
nickcano
I'm sorry, I thought we were talking about security? Are you leaking the other
thread into here just so you can feel like you won both, instead of neither?

And I never said anything about any C/C++ type system doing anything? But
okay.

Back to the topic: if you've heard of them, why did you insist that SQL is
inherently insecure? Did you forget they existed, or did you just think I
wouldn't notice? Are you that cocky?

I really hope your employer one day recognizes your incompetence and fires
you, because the software world is plagued with enough bugs without people
like you purposely and gladly laying out a red carpet for them to walk in on.
I can't continue to argue with what is either a relentless geyser of
misinformation or a brilliant troll, so I'm done. Maybe one day you'll come to
your senses, but I doubt it.

~~~
innguest
The way a strong type system solves this SQL injection problem (despite your
saying it's impossible and ignoring my having shown you're wrong) is by
automatically escaping arguments before binding them to parameters.

Well guess what, you don't need pre-compiled statements to benefit from this
feature - all you need is the hoisting aspect of it. In other words, if SQL
drivers did not offer the _unsafe_ function exec_query that takes the whole
query as a string and returns a result, and instead they only exposed a
hoisted version of that function that takes a list of arguments and a
placeholder query as a string...

    
    
      exec_query ["john", 12] "SELECT ... WHERE... = $1 AND ... = $2"
    

Then there is no SQL injection problem, as the SQL database driver would
always automatically escape the arguments before binding the parameters.

So if only SQL database drivers did _not_ offer exec_query but instead forced
the user to provide the _whole_ query string in one go with placeholders, then
the driver would be able to enforce security at the proper software layer -
which is not everyone's program that interacts with a database.

------
ColinDabritz
It might be interesting to have a gate on publishing images that explicitly
runs tests for major known vulnerabilities. You could at minimum flag images
as "known vulnerable", or reject publishing attempts.

The flag might make sense on a new vulnerability, and it could be applied
automatically. Imagine [Tag: Heartbleed - Untested] when the vulnerability
happened, then as the automated process rolls through the images [Tag:
Heartbleed - vulnerable] [Tag: Heartbleed - no vulnerability detected]. Future
images would be required to pass the check first.

We have to be careful with widely distributed images.

------
cyphunk
Based on their definition of vulnerable, the Ubuntu 12.04 LTS installation
image is also vulnerable. I think this is only news to anyone who hasn't set
up a fresh install of Windows. I remember some presentation from the Honeynet
Project circa 1999 about how a new Win98 installation, without updated
service packs, took less than N (N<24) hours until compromise. Still, I guess
it is worth reminding people not to trust official containers without first
applying security updates, and maybe never to trust unofficial containers,
depending on your project.

------
phelmig
Great article. We'll need better integration of security tracking and
handling in our containerized infrastructure soon.

You have to be a little bit careful when it comes to version numbers and
matching them to security issues. Most Linux distributions, for example,
backport security patches to older releases.

E.g. Ubuntu 14.04 LTS comes with Apache 2.4.7-1ubuntu4.4, which one might
parse as 2.4.7, which has multiple security issues.

The article references distribution-specific vulnerability ratings, so I
assume they also matched those versions correctly.

~~~
yoshiotu
Study co-author here. We did observe that it's essential to be careful about
comparing package version numbers on a per-distro basis, and there are some
tricky cases such as the one you pointed out, with rpm epoch numbers as
another example. I believe we handled them correctly in the study.
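
For what it's worth, on Debian-based images dpkg itself compares versions
correctly, epochs and distro suffixes included -- e.g.:

    
    
        # Exits 0 iff the first version is older ("lt") than the second
        dpkg --compare-versions 2.4.7-1ubuntu4.4 lt 2.4.10 && echo older
    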

------
andmarios
An issue I have with both official Docker Hub images and Dockerfiles provided
by software developers is that they almost always run their software inside
the container as root.
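
The fix is usually just two lines at the end of a Dockerfile (the user name
is arbitrary):

    
    
        # Create an unprivileged user and drop privileges for the final process
        RUN useradd -r -s /bin/false appuser
        USER appuser
    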

------
efuquen
I wonder what would happen if you attempted the same study with the official
AWS AMIs: get the latest versions, don't update your distro, and see how many
vulnerabilities you get. How often does AWS really rebuild their official
AMIs?

Ultimately, keeping your OS completely up to date is on you; not Docker, not
Amazon, _you_. VMs suffer from the exact same problems as Docker containers.

 _Edit_ : Also, the security issues with using community AMIs are already
well known; it should be no surprise the same applies to Docker community
images.

------
bkeroack
They should do the same study for VM images on Vagrant Cloud (aka Hashicorp
Atlas), or any other repository of binary software/images built by untrusted
third parties.

I thought it was obvious that public images on Docker Hub were to be used for
experimentation only--even in that case I only use the "official" Docker
images in the library namespace. Anyone using Docker for serious purposes
should build their own or at least vet the pre-built images.

------
justincormack
Docker Hub as a build service doesn't make it very easy to update older
images; you can set manual triggers to rebuild the current one if the FROM
container changes, but that's not automatic. Other dependencies are not very
easy either, as you only get one FROM; everything else probably comes from
git repos, packages, language packaging tools, or tarfiles, which obviously
need checking for updates.
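
The manual trigger can at least be driven from outside, since it's just a
POST to a per-repo trigger URL -- something like this in a cron job (the repo
and token are placeholders):

    
    
        # Kick off a rebuild of the repo via its build trigger
        curl -s -X POST https://registry.hub.docker.com/u/example/app/trigger/TOKEN/
    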

------
DyslexicAtheist
docker hub: the petri dish of choice for malware

~~~
DyslexicAtheist
Docker's biggest strength is also its biggest weakness, IMO. They made lots
of changes to the default (Linux) capabilities to improve security. But the
underlying problem of fixing old bugs in images remains, along with the fact
that their contents are often a disorganized mess: coming straight from the
developer as a black box (more or less) into production environments (yeah,
when has that ever been a good idea?).

Docker IMO creates a "never touch a running system" attitude. The "running
system" in this case is the docker image, which nobody dares touch after the
developer has left the company (or the developers themselves have no idea
anymore what it contained 3 weeks later).

Also, the overhead of setting up containers in a secure way is even more work
than not using docker in the first place (ever had to look seriously into
SELinux? Not something you do casually on the side, as it's massively
complex).

So the justification that "by using docker we save time on deployment" is a
farce. I guess it creates new jobs, though, for container specialists.

to paraphrase Theo de Raadt:

"You are absolutely deluded, if not stupid, if you think that a worldwide
collection of software engineers who can’t write operating systems or
applications without security holes, can then turn around and suddenly write
virtualization layers without security holes."

EDIT: is it still possible in Docker/LXD to access /proc/sys/kernel/panic or
/sys/class/thermal/cooling_device0/cur_state? And how about consuming all the
entropy of the host via /dev/random?

------
boroboro
I'm confused.

Looking at the top vulnerability, CVE-2014-9462 in Mercurial: as far as I
understand, it affects Mercurial clients that access crafted repositories.

[https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-9462](https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-9462)

Even if I use Mercurial in my Docker image to get my app rather than
prepackaging it (which is what I do), and I know this is about public images,
how is this a "high" vulnerability? I don't deny it's a vulnerability; I
would just like to learn why it is classified as high if, e.g., I use Docker
for my HAProxy.

------
starikovs
As a workaround, update/rebuild your containers more often and deploy more
often.

------
starikovs
So, what? Everything is vulnerable. You're not restricted to official images;
just create your own custom image that is not vulnerable ;)

