Using Alpine with Python in production is usually a mistake. Python is considerably slower with musl libc than with glibc, and on top of that no one provides prebuilt wheels for musl-based platforms, so you need to drag in the entire build toolchain to package most projects. Who wants to build numpy from scratch as part of their build pipeline? And having a C compiler present is not great, as it allows exploits to escalate their seriousness considerably.
Anyway, most of these security scanners are bunkum: https://pythonspeed.com/articles/docker-security-scanner/
There are not, in fact, a huge number of known and unpatched vulnerabilities in debian stable. If your tool is finding numerous problems, it is time to look more closely at your tool.
> Python is considerably slower with musl libc than with glibc
I noticed this years ago too. Interacting with the pg package (PostgreSQL), which has a C dependency, was noticeably slower on Alpine vs Debian when performing common database operations like selecting data. Since around 2018-ish I've been using Debian (Slim) and haven't looked back. Haven't had a single issue.
Sure, slimmer images fix part of the problem, because they reduce the attack surface, but that's not really the issue.
Somehow everyone just assumes that because something is in a container, it won't need patching. What really happened is that we moved the patch-management responsibility from operations to development. The developers just didn't notice.
One issue could be that it breaks many people's mental model of containers. Container images are frequently treated as a "works on my machine" snapshot that just gets bundled up and shipped.
It's even broken on Docker Hub. What's the point of a python:3.9 image when it's never actually updated? Developers base their own Dockerfiles on these base images, but often forget that they need to add an OS update step. I don't understand why images, like the Python ones, aren't continuously updated centrally.
> What's the point of an python:3.9 image, when it's never actually updated?
The library images are usually kept current. For instance, the python:3.9 tag was last updated 6 days ago according to [1]. What's true is that you don't get those updates unless you actively rebuild your own images that are based on those library images. That's why you want some sort of continuous vulnerability scanning in your toolchain.
> but often forget that they need to add an OS update step.
Is this standard practice? I've never done this but it makes a lot of sense. Is this basically just adding `RUN apt-get update && apt-get -y upgrade` at the top of the Dockerfile? (Assuming a Debian image.)
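In other words, something like this at the top of the Dockerfile? (A rough sketch; the base tag is just an example.)

```dockerfile
# Sketch only: refresh the package indices and apply pending updates at build
# time, then clear the apt cache so the layer stays small.
FROM debian:stable-slim
RUN apt-get update \
 && apt-get -y upgrade \
 && rm -rf /var/lib/apt/lists/*

# ...rest of the Dockerfile (app dependencies, code, CMD) as usual.
```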
In practice, it will eventually bite you. Someday some package that you depend on will have a security issue that causes a necessary config change that has a default you don't like.
Also, note that you have just changed to a non-reproducible container, where running it on Monday is not guaranteed to be the same as running it next Friday.
If you control your update source, which is standard practice for old-school Linux admins, this is not true.
I pull new updates on a set schedule and only push the ones that are tested in nonprod. There are any number of tools that do this; Uyuni is a good free one.
> Also, note that you have just changed to a non-reproducible container
Precisely, and that may be what stops many from just updating their images. I think ideally we'd have centralised updates, perhaps just once a week, and then use a tag for that week.
You could do all this in-house, but it's a lot of work for small teams. Redundant work.
In the end, what gets deployed is the responsibility of the entity doing the deployment.
Maybe the organization wants to deploy the latest trunk on every commit. There are probably some situations where that is reasonable. Those situations probably shouldn't involve people's money, privacy or safety.
Setting up a two-stage local repository isn't very hard. The intake side gets updates from upstream, and the deployment side gets updates from the intake side when the packages have been reviewed and hopefully tested. Do this for everything with an external upstream -- Ruby gems, Python modules, JS libraries, whatever -- and you have insulated yourself against supply-chain attacks. As a bonus, if your leftpad function just goes missing upstream, you still have a full copy in your deployment repository, and will until you decide it's time to implement your own.
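As a rough illustration of pointing builds at the deployment side rather than the public upstream (the mirror hostname and base image here are placeholders, not real infrastructure):

```dockerfile
# Sketch only: make apt and pip resolve against the reviewed internal mirror
# instead of public upstreams.
FROM python:3.9-slim

# apt pulls only from the deployment-side mirror.
RUN echo "deb https://deploy-mirror.internal/debian bullseye main" > /etc/apt/sources.list

# pip installs Python packages only from the internal index.
RUN pip config set global.index-url "https://deploy-mirror.internal/pypi/simple"
```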
> In the end, what gets deployed is the responsibility of the entity doing the deployment.
Yes and no. I don't disagree, but remember that classic hosting and semi-managed infrastructure still exist. We often get container images delivered and are responsible for the operational side of things, but we have no control over what is actually inside the containers.
Sure we can make requests, or inform a customer that we believe what they are doing isn't safe. Ideally we could reject a request to run a container, but in reality that's not really an option. That shifts the responsibility of security more in the direction of the developers and they often do not have regular patch management as a priority.
If your organization isn't in control of the software you are running, you are a service provider and you have to treat them as potentially hostile external entities. Your responsibility is for the infrastructure, and you must explicitly disclaim responsibility for the functionality and security of anything inside the containers.
This is, incidentally, the antithesis of "devops".
Interesting, thank you! I still don't use containers (working on very small teams), and as I've tried to get into it, this is one of the questions I had where I was sure I was missing something and couldn't figure out what everyone else did... this is making me feel a lot more competent! (Although questioning the reasonableness of our industry!).
Containers aren't reproducible anyway. Skipping apt upgrade will not give you the same result on a rebuild. Builds on debian/ubuntu aren't deterministic.
Additionally, if you don't "apt update", some of the packages you try to install from mirrors will 404.
Unless you're using something with deterministic builds, reproducibility is a myth anyway. Update your OS and test the image and save the artifact somewhere with "docker save | zstd".
For one, the packages you install on Monday may not be available on the mirrors on Tuesday. The mirrors prune old packages not in the indices.
You have to do an “apt update” to ensure that the packages you install will be fetchable, because the apt indices inside the image are out of date and “apt-get install -y whatever” may fail with a 404. That means you aren’t guaranteed to get the same version installed from an “apt-get install -y whatever” after Monday’s “apt-get update” as you would after Tuesday’s “apt-get update”, as the current version of “whatever” may have changed in the interim, even if you don’t run an “apt upgrade”.
In any case, a lot of the files on disk are generated dynamically at install time for certain types of things, and include things derived from the state of the system (which frequently depends on remote network resources, as described above), so issuing the exact same dockerfile FROM+RUN+RUN+ADD etc lines will not result in an identical image result when run on different days: it’s nondeterministic. A deterministic build is one where the same build always results in a byte-for-byte identical build artifact.
There is effort being put in to make the building of the backing .debs deterministic, but AFAIK no Debian or Debian-like is trying to make apt itself work in a deterministic manner when installing packages. There are still postinstall scripts, for example, that are system-state dependent.
Really, you need to be saving your build artifacts when you do Docker builds. Saving the Dockerfile and expecting to be able to rebuild the image at any time later isn’t a good bet. You might, sometimes, be able to rebuild a mostly-compatible image, but there is no chance whatsoever you will be able to build a byte-for-byte compatible image, and it’s entirely possible that your build might just fail entirely (e.g. if you are pinning specific package versions that fall out of date and are no longer fetchable from the mirrors).
Then, in the worst case scenario, you can always load your original working/saved image back in, replace/patch/modify specific files on disk to address issues (either with vendor tools or manually) via a new build that pulls FROM the saved artifact image, and make a derived one.
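Concretely, something along these lines (image names and dates are made up):

```sh
# Build once, then keep the exact artifact, not just the Dockerfile.
docker build -t myapp:2021-05-03 .
docker save myapp:2021-05-03 | zstd -o myapp-2021-05-03.tar.zst

# Later: restore the exact image instead of trusting a rebuild to match.
zstd -dc myapp-2021-05-03.tar.zst | docker load

# A patched derivative can then start FROM myapp:2021-05-03 in a new Dockerfile.
```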
So... what is the conventional/best practice way of dealing with this?
I was always confused about this, thinking I didn't understand what people were doing here... but it turns out maybe there is no good solution and most people are ignoring it?
Would you expect this to result in more attention at some point, after it results in more exploits?
> So... what is the conventional/best practice way of dealing with this?
I don't know. I have been able to tolerate "apt-get dist-upgrade" on my legacy non-container system: upgrading from one Ubuntu LTS to the next is hard, but within an LTS we've never encountered a problem over the 15 years (I think? more than a decade at least) this has been done.
A newer project uses NixOS, but it is so new it has no users. NixOS allows me to pin the particular commit of nixpkgs (that is, the versions of the packages in the package manager) in a lockfile. This means I can run daily updates, run my tests against them, and deploy them. And because the lockfile is checked in, if things suddenly start going wrong on the umpteenth of Octember, I can see what changed on that day: was it the file I committed? No? Oh, I see, libtwiddle was upgraded. Yes, it was libtwiddle that broke things.
As I say, this is a brand new project. It may not work as perfectly as all that. And swallowing Nix requires a certain amount of koolaid: it is user friendly but it's very picky about who its friends are.
> Would you expect this to result in more attention at some point, after it results in more exploits?
I think that we will switch to paid platforms that offer managed runtimes. These will probably offer targeted, well-communicated updates of dependencies (everyone will hear "Microsoft Python is releasing ms-py-http-3: here's what you need to know"), and have fewer libraries that offer more code. In a sense this will be a switch back to distributions. We will say "why would you manage your own dependencies?" And the cycle will repeat with a new flavor.
Best practice would be a pipeline producing an up-to-date bare image with your major dependencies tested and a CVE check. Then use that base layer in your deployed system.
The far more maintainable way to do this, rather than every dev team at a company doing ad-hoc patching, is to have your ops team maintain your base images, which they're responsible for patching, and then every team derives from that.
This gives you the ability to move the pieces independently from one another, so when you release it's actually (sw_version, platform_version), and it lets you track down bugs caused by platform updates more easily.
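A hedged sketch of such a pipeline (the registry name, tags, and the choice of Trivy as the scanner are all illustrative):

```sh
# Nightly/weekly job owned by the ops team.
docker build -t registry.internal/base/python:2021-05-03 -f Dockerfile.base .

# Fail the pipeline if the scanner reports vulnerabilities.
trivy image --exit-code 1 registry.internal/base/python:2021-05-03

docker push registry.internal/base/python:2021-05-03
# App teams then build FROM registry.internal/base/python:<date>.
```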
Any solution that works is going to have to work for a small team of seven developers at a company that has twenty employees, most of whom assume they're tech experts because they tweet a lot. The bigger companies run the most code, but it's the smaller companies that are most vulnerable and most in need of solutions.
Not sure what standard practice is, but what seems to work for me is building it up in layers. Start with alpine:latest, install OS-level packages, and tag that as baseimage:N. Use that image as the base of the development-level Dockerfile. Then, on a schedule, build base image N+1 and run the tests.
That way, if something in the base changes and causes a regression, you can pin to the last known good until you can fix the bug.
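A minimal sketch of that layering (file names, packages, and tag numbers are made up):

```dockerfile
# Dockerfile.base -- rebuilt on a schedule and tagged baseimage:N
FROM alpine:latest
RUN apk add --no-cache ca-certificates tzdata   # whatever OS-level packages you need

# Dockerfile.app -- pinned to the last known-good base tag
FROM baseimage:41
COPY ./app /app
CMD ["/app/run"]
```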
First, the author is being fooled by false positives due to bad scanner settings. Basically, there are a huge number of CVEs that are meaningless, and closed by some distros but not others in the CVE databases. The result is a spew of "OMG LOOK AT THIS BADNESS" which security scanner vendors like because it makes them look useful, but which is actually just noise.
Third, Alpine has some issues in some cases, although for Go at least neither of these issues is usually relevant so Alpine is fine.
1. musl can be subtly incompatible with some applications, with annoying bugs. Personal experience: if you used minikube in a WeWork office, Alpine-based (or really, musl-based) containers would fail to resolve DNS inside Kubernetes, due to a concatenation of circumstances that was mostly the fault of WeWork's ops team but which glibc handled better than musl. The problem has since been fixed by WeWork, AFAIK.
2. For Python specifically, binary precompiled packages (wheels) won't work on Alpine, which means you have to recompile the whole universe, which means container builds are slow. There's a PEP which might get this fixed, but for now, not worth it. https://pythonspeed.com/articles/alpine-docker-python/
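For illustration, this is roughly what an Alpine-based Python build ends up needing (requirements.txt is a placeholder; many C-extension packages also want extra -dev headers on top of this):

```dockerfile
# Sketch only: no musl-compatible wheels exist, so a toolchain has to be present
# and every package with C extensions compiles from source during the build.
FROM python:3.9-alpine
COPY requirements.txt .
RUN apk add --no-cache build-base \
 && pip install -r requirements.txt
```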
It's a common misunderstanding: vulnerability != exposure
There was a rash of CVEs last year because Alpine or Debian or some base image had some sort of SSH exploit baked in, and a 'security researcher' got CVE numbers for pretty much every major container image. In reality, nobody exposes SSH from a container because that's stupid and not useful. Sure, there absolutely was a vulnerability in those images, but no it was never ever actually exposed.
I think this blind focus on CVEs without context is doing harm to the security process; it takes attention away from actual work that then doesn't get done. For example, installing TeamViewer on servers doesn't have a CVE number, but it almost caused a town in Florida to be poisoned.
I'm all for smaller images but his final distroless Python image to run Flask is going to fall apart as soon as his Flask app needs to connect to a database.
That's because the official Python PostgreSQL DB package requires C dependencies which get built and referenced when you install it. You also need certain system libraries to exist on your system in order to build them, such as libpq-dev on a Debian-based system.
Yep, but the author of psycopg2 recommends not using the binary package in production. That's mentioned in the package's docs: "The binary package is a practical choice for development and testing but in production it is advised to use the package built from sources."
But more generally, this is only 1 package of many that has a C dependency and requires certain system lib files to run.
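On a Debian-slim base that typically looks something like this (a sketch, not a hardened build):

```dockerfile
# Sketch only: build psycopg2 from source, which needs a compiler and the libpq headers.
FROM python:3.9-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends gcc libpq-dev \
 && pip install psycopg2 \
 && rm -rf /var/lib/apt/lists/*
```

In a real image you would usually do this in a build stage, or clean the compiler up afterwards, so the toolchain doesn't ship to production.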
Luckily most of those vulnerabilities will be dormant, impossible to reach and exploit. Hopefully.
I don't think there's going to be any change in how people package containers. So perhaps what is needed is a dependency-walking Link-Time-Optimization-like tool that can trim down, perhaps by masking rather than removing, the dead code?
> Luckily most of those vulnerabilities will be dormant, impossible to reach and exploit. Hopefully.
Our people who use docker for some of our infrastructure have taken to using locally rolled images (from scratch or by updating official ones) for this reason.
I don't think any of the issues found when we scanned were actually exploitable in our config, but we need to be a bit more careful than just hoping (due diligence & security-in-depth and all that - an unknown flaw elsewhere could expose one of the issues and that in turn allow some form of DoS, data exfiltration, or arbitrary code execution).
We were surprised how much excess stuff, much of it out of date, is in some official images. Sometimes even the required dependencies seemed slower to be updated than we would prefer.
I still have a hope that increasing awareness of the problem may eventually lead to a reduction in the number of Dockerfiles starting `FROM <something_huge_and_not_really_needed>`.
One thing to watch out for, when using container scanning tools, is how they handle "unfixed" vulnerabilities in images based on Debian/Ubuntu.
Both those distros maintain a list of CVEs that they know of but don't have a patch for. Traditional VA tools (e.g. Nessus) default to not flagging those, but a lot of container scanning tools will default to showing them, so you end up seeing wildly different results (some more details: https://raesene.github.io/blog/2020/11/22/When_Is_A_Vulnerab...).
Whether you consider this a problem is, ofc, dependent on your threat model, but it's one to consider.
(full disclosure, I work for a company that makes Trivy, but not on that project :) )
I am someone who does not use containers, and every time I try to get into it (knowing how popular they are now), I just get... stuck. In feelings of wrongness, among other things.
One of which is that I really don't understand the "security story" with regard to patches for vulnerabilities etc. How one is meant to know when a patch is required and how one applies it, what the "conventional" or "best practice" workflow for this is.
This article is making me think maybe there's nothing I'm missing....?
I think I might have the complementary partial understanding to yours, so I'm interested in comparing notes.
I use containers quite often. In my mind, it's just like managing a physical server -- I'm responsible for configuration/patching/system upgrades/etc -- but with the significant upside that I obtain a (mostly) reproducible environment in which to run the software. This is the part I really can't live without anymore. I don't have to deal with builds from a colleague that don't work on my machine.
>How one is meant to know when a patch is required and how one applies it, what the "conventional" or "best practice" workflow for this is.
So I'm mostly in the "dev" rather than "ops" space (without containers I can stay out of it), but I believe our (non-container) machines are running nightly updates of OS and other packages via `apt-get update` etc. That is definitely a pre-immutability world, which certainly has its own problems.
Alternatively, perhaps you deploy to Heroku, where you hope [rightly or wrongly] "that's Heroku's responsibility; I'm not sure how they deal with it, but I assume they do".
Of course, the more "modern" alternative to Heroku is some kind of k8s host (or using Heroku's docker-based deploys instead of the traditional process)... which I guess puts "patch vulnerabilities in the OS" back in your lane again, even if you are paying for hosted k8s of some kind? That's among the things I'm trying to avoid by paying for a PaaS!
Looking at other comments in this HN post discussion (I confess I asked versions of this question a couple times)... it seems there is some controversy over how people actually do this with container-based infrastructure, especially on small teams...
Same here, for the most part. I'm a freelancer, so I end up having to do a fair amount of cloud management (often with k8s), but it's not the part of the job I particularly like, nor is it what I spend most of my time doing.
>I believe our (non-container) machines are running nightly updates of OS and other packages via `apt-get update` etc. That is definitely a pre-immutability world, which certainly has its own problems.
With containers, you would effectively do the same thing. The workflow is:
1. Pick a base image (e.g. debian:latest) and write a Dockerfile based on that. Add any system libraries or other configuration you need here. Run apt-get upgrade.
2. Build your image and give it a version tag.
3. Rebuild the image nightly and bump up the version tag.
4. Perform a rolling upgrade to the new docker image in k8s or whatever.
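For concreteness, steps 1 and 2 might look roughly like this (base image, packages, and names are illustrative):

```dockerfile
# Step 1: base image, system packages, and OS updates.
FROM debian:stable-slim
RUN apt-get update \
 && apt-get -y upgrade \
 && apt-get install -y --no-install-recommends ca-certificates \
 && rm -rf /var/lib/apt/lists/*

# App-specific setup.
COPY ./myapp /usr/local/bin/myapp
CMD ["myapp"]
```

Step 3 is then just a scheduled `docker build -t myapp:<new-tag> .`, and step 4 rolls the deployment over to that tag.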
There are more elaborate workflows, but I've never needed anything else. Note also that k8s is not necessary at all (and arguably not worthwhile).
>it seems there is some controversy over how people actually do this with container-based infrastructure, especially on small teams...
I think this is spot on. Containers are quite general tools, and there are quite a few reasons to use them.
Personally -- and I suspect this is where you might benefit as well -- I use Docker to standardize the runtime environment. That's basically it. The benefit of having the same runtime environment across dev machines and in production has been incalculable.
It's not that different from regular Linux distros. A container built from a distro base image (like Debian or Alpine in the first part of this article) is just a packaged version of that distro plus your app. Distro maintainers apply patches to software in their repositories, and it's your responsibility to update your system or image.
What this article gets wrong is that it builds from a Debian base image without first performing `apt-get update && apt-get upgrade`. It's similar to running a vulnerability scanner on a fresh install (with "apply updates during install" unchecked).
You basically pay for a tool that scans running images for vulnerabilities, then you have to rebuild images and redeploy them all.
Then you get to version pinning and other things you need for stability, and you discover containers are no less of a hassle than patching a traditional OS was.
I'm struggling with this now too. It gets even crazier when you have dozens of microservices, each in their own container.
Imagine being mandated by InfoSec to scan ~24 images or ~10 GB every release.
The images are a mixture of python services and some upstream images like redis and mysql. If anyone has an idea on how to make this less painful, I'm all ears.
1. Install security updates (`apt-get upgrade`, or equivalent).
2. Only scan for security problems that _have updates available_. If there are no updates ... there's nothing you can do. Many security problems will never get updates, because it's an obscure problem that e.g. only happens if you install a NFS server in your container (which I assume you're not), or the upstream maintainer has closed it as WONTFIX. Some scanners (e.g. Trivy) have this as a command-line option, or a switch you can toggle in the UI.
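With Trivy, for example, that filtering is a single flag (the image name here is a placeholder):

```sh
# Report only vulnerabilities that have a fixed package version available.
trivy image --ignore-unfixed registry.internal/myservice:1.2.3
```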
- Consistent packaging/execution environment: containers are delivered and executed in a standard way regardless of the target env.
- A level of sandboxing from the underlying host: things like automatically applied Linux namespaces and dropped capabilities if you use Docker/runc, and then more sandboxing automatically applied if you use things like gVisor or Firecracker.
1) I was more thinking of the delivery mechanism via centralized registries (i.e. Docker pull works for anything)
2) Sure in theory, you can totally replicate docker's sandboxing with Linux security principles, however containerized deployment gives you an easy to use sensible starting point :)
The main appeal of containerisation is not security, the main appeal is having a deployment process that's consistent across the entire team/org/company.
Sure, if all you have are Go binaries, you can just run those directly, but if you have some Go binaries and also some Python applications and also some Java beans, you stuff them all into containers and then they can be deployed with the same tooling.
Genuinely curious about alternatives. I have a simple node application where all I really need to do is run npm install and node index.js. Even with such a simple setup, provisioning hosts was a huge pain. Each one needed node installed on it, and I had to deploy the app to each new host. With k8s, I just change the number of hosts I want in a config file and everything just works.
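For what it's worth, the container side of that setup stays tiny; a rough sketch (the base tag is illustrative):

```dockerfile
# Sketch only: containerised "npm install && node index.js".
FROM node:14-slim
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
CMD ["node", "index.js"]
```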
In isolation there is no difference. In fact, if Rust or Go apps are the only apps you deploy, containers are probably overkill for you. But if you also deploy Java/Node/Python/Ruby, then containers provide not just reproducible builds but a common deployment method that is language/framework agnostic. And the consistency is often worth it even if some of the deployments don't strictly need Docker themselves.
Consistency is sometimes an underrated value but it can provide teams a lot of speedup when it's there.
I don’t need slimmer containers, I need slimmer VMs.
Honest question: I'm using Vagrant and VirtualBox to locally reproduce my cloud infrastructure. I can't do this with just containers (some of my servers do not run containers, so I install packages via Ansible and apt, I use systemd, etc.). How do you reproduce infrastructure locally with just containers?
Containers for sure aren't a replacement for VMs. And indeed, VMs still have and will have legitimate use cases. There is actually a relatively simple way to turn a container into a VM. Maybe you'll find it useful https://iximiuz.com/en/posts/from-docker-container-to-bootab...
You can't reproduce your entire configuration with containers, since kernel version might be different. But everything else is fair game given proper namespacing, and AFAICT modern Linux aims for namespacing of all possible resources.
I think most would be served well by using Alpine images. In my experience their quality is quite high. Unfortunately as mentioned, musl is a non-starter for a number of uses.