If you want to look at actual container security best practices, check out CIS, DISA, and NSA, with some theory at NIST, as well as the documentation from your preferred cloud vendor (AWS, Azure, GCP, or others) and their specific container security guidance.
I wish all "marketing documents" were this detailed. In other words, I disagree with you. I've read the blog post and it doesn't seem too high-level. The resources you point to are nice, but a 60-page Kubernetes hardening guide from the US government is perhaps one level deeper than a blog post on the internet.
Their own services and blog posts are also referenced in almost every section of the post, even when better external resources exist. Zero competitors are listed in any section. Doesn't sound very neutral to me.
I also agree with you that a "smarter" kind of content marketing would go beyond these limitations: it would mention competitors or alternatives, and it wouldn't highlight the company's own services so much.
If someone from Sysdig is reading, these are suggestions for you, guys.
Perhaps "Ultimate guide" is a bit of a misnomer.
"Ultimate Guide, Executive Version" ?
I read the entire article thinking it would be a shill; I saw little evidence that it was. In fact, I got to the end and I still don't know what the hell Sysdig is.
If anything, Sysdig fucking sucked at marketing this one, if it was supposed to be a puff piece for the product.
NIST gets it right by starting there.
If you’re writing software against, say, dotnet3 (whose Docker image is based on Debian), then you’re basically drowned in noise.
Why is that?
Every company I have worked security at, including where I am now, regularly reads government guidance. Especially NIST guidance, which is referenced all over the world.
They aren't perfect (you know, being humans and all), and can sometimes be slow in disseminating information to the public, but you're out to lunch if you think they "don't really know" anything.
Your comment reads as overly defensive to me.
>It's funny that you use the term "actual" to describe the guidance from the US government. They don't really know what they are talking about.
Perhaps you can understand why I thought you were speaking generally, when your comment is written generally. I can't read minds to figure out what you're silently scoping your comment to.
But if saying I laughed and why I laughed is overly defensive, my apologies. I'm not sure how else I would tell someone I find their comment funny.
In a similar vein, a fairly mid-level dev was recently trying to convince me that "Rob Pike is a clueless idiot who knows nothing about language design".
And fwiw, Rob Pike definitely did make mistakes. Golang is a great language, but it's not perfect.
My general point is that there are a lot of people who see the world in binary: genius or idiot, perfect or incompetent.
However, they regularly put out drafts and socialize them at an early stage.
Additionally, there is a huge amount of content that they produce that isn't widely disseminated outside of DoD/IC.
I.e., we covered this across several articles, like this one about tags:
This other one about file integrity monitoring (Disclaimer: A rather commercial one)
And I recall others more explicit on the read-only part, but I’m away from my laptop now.
Edit: Found it (point 1.3 in https://sysdig.com/blog/dockerfile-best-practices/ )
Thanks for pointing it out. It definitely should be more explicit.
Another favourite of mine would be using multi-stage builds and minimal base images in production (FROM scratch, where possible). Having limited or no tooling in the running container makes an attacker's life trickier, for sure.
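A minimal sketch of the pattern, assuming a statically linked Go binary (paths, versions, and names are illustrative, not from the article):

    # Build stage: carries the full toolchain, never ships to production
    FROM golang:1.21 AS build
    WORKDIR /src
    COPY . .
    # Static binary, so it runs without a libc in the final image
    RUN CGO_ENABLED=0 go build -o /app ./cmd/server

    # Final stage: just the binary on an empty filesystem
    FROM scratch
    COPY --from=build /app /app
    USER 65534
    ENTRYPOINT ["/app"]

The resulting image has no shell, no package manager, and no coreutils, so even after a compromise there's very little to pivot with.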
I know it's not exactly a production setup, but I really do feel that it's at least the most secure runtime environment I've ever had accessible at home. Probably more so than my desktops, which you could argue undermines most of my effort, but I like to think I'm pretty careful.
In the beginning I was very skeptical, but being able to just build a docker/OCI image and then manage its relationships with other services with "one pane of glass" that I can commit to git is so much simpler to me than my previous workflows. My previous setup involved messing with a bunch of tools like packer, cloud-init, terraform, ansible, libvirt, whatever firewall frontend was on the OS, and occasionally sshing in for anything not covered. And now I can feel even more comfortable than when I was running a traditional VM+VLAN per exposed service.
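To illustrate what I mean by "one pane of glass", here's a made-up minimal example of the kind of manifest that lives in my git repo (names and registry are hypothetical):

    # deploy/whoami.yaml: the image, its exposure, and its relationships,
    # all described in one reviewable file
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: whoami
    spec:
      replicas: 1
      selector:
        matchLabels: { app: whoami }
      template:
        metadata:
          labels: { app: whoami }
        spec:
          containers:
            - name: whoami
              image: registry.internal/whoami:v1  # hypothetical registry
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: whoami
    spec:
      selector: { app: whoami }
      ports:
        - port: 80
          targetPort: 8080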
Care to share the details of the security-services side of your stack too?
For network observability I'm using Cilium's Hubble, which I will soon figure out how to get into a Graylog setup or something. For container image vulnerability interrogation I'm running Harbor with Trivy enabled. The initial motivation was to have an effective pull-through cache for multiple registries, because I got rate-limited by AWS ECR (due to a misconfigured CI pipeline, oops), but it ended up killing two birds with one stone.
Next on my list is writing an admission controller to modify supported registry targets to match my pull through cache configuration.
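The registration half of that would look roughly like this; the names and path are placeholders, and the actual image-rewriting logic would live in the webhook server behind it:

    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: registry-rewriter            # hypothetical
    webhooks:
      - name: rewrite.registry.example.com
        admissionReviewVersions: ["v1"]
        sideEffects: None
        failurePolicy: Fail              # fail closed: no unreviewed pods
        clientConfig:
          service:
            name: registry-rewriter      # placeholder webhook service
            namespace: kube-system
            path: /mutate
        rules:
          - operations: ["CREATE"]
            apiGroups: [""]
            apiVersions: ["v1"]
            resources: ["pods"]

The server then returns a JSONPatch that rewrites each spec.containers[*].image to point at the pull-through cache.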
Is there something more specific you wanted?
Yeah sure, what is your network infrastructure too? :)
Are all the containers Linux only, or other OSes too?
If I wanted to run Windows in the cluster I'd probably have to look at KubeVirt. KubeVirt is oriented towards getting traditional VM workloads (ones you'd run in QEMU, Hyper-V, etc.) functioning in a Kubernetes environment, while kata-containers is oriented towards giving container-runtime workloads (images that run on Docker, containerd, CRI-O) the protection of virtualization, with minimal friction.
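The "minimal friction" part is concrete: assuming the kata runtime is already installed and wired into containerd/CRI-O on the nodes, opting a workload into VM isolation is roughly a one-line change to the pod (handler name depends on the install):

    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: kata
    handler: kata              # must match the runtime name on the node
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: vm-isolated-pod
    spec:
      runtimeClassName: kata   # same image, now inside a lightweight VM
      containers:
        - name: app
          image: nginx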
Previously external to the cluster I had some Windows VMs hosted on QEMU/KVM + libvirt for experimentation with Linux and Active Directory integration, but they've since been deleted. The only remaining traditional VMs I have are 2 DNS servers and one OpenBSD server for serving up update images to my routers.
For network infra I have a number of VyOS firewalls both at the edge and between VLANs, and Mikrotik devices for switching.
Would I be generally ok if I use gvisor to give a shell environment to customers and just keep the host up to date?
Or is using containers just relatively pointless for multitenant compute in a SaaS compared to giving customers virtual machines?
If you can't imagine the kind of SaaS I'm talking about, think something along the lines of GitHub's new online IDE, Codespaces.
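To be concrete, the setup I have in mind is roughly this, assuming runsc is installed on the nodes (a sketch; names are made up and this alone is not a complete hardening story):

    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: gvisor
    handler: runsc                           # gVisor's runtime
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: customer-shell                   # hypothetical per-tenant pod
    spec:
      runtimeClassName: gvisor
      automountServiceAccountToken: false    # don't hand tenants cluster creds
      containers:
        - name: shell
          image: ubuntu
          command: ["sleep", "infinity"]
          securityContext:
            allowPrivilegeEscalation: false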
I say this as a Kubernetes consultant. If you want "multitenancy" in the sense of distinct product or application teams all employed by the same parent company or organization, it's fine. But if you're talking truly different organizations with no implied trust between them, don't put them on a shared cluster.
I'm kind of curious how GitHub does this, because you can still get very minimalistic with VMs. Make the startup script for your application something that also mounts the filesystems it needs, name it /sbin/init, and you've just made yourself a poor man's unikernel.
The vast majority of container breakouts are due to bugs in the control plane and not so much the kernel.
The same was likely true for VMMs/hypervisors until those really started to mature.
dotCloud and Heroku are both examples of multi-tenant containers.
I think the challenge for process-isolation container stacks (as I'm sure you know :) ) is that there are multiple components/groups involved in security, plus coordination with the underlying Linux kernel, whose developers will have potentially different goals from the container people (e.g. the challenges of how to handle the interaction of new syscalls and seccomp filters).
If you compare that to something like gVisor, where there's essentially a single group responsible for creating/maintaining the sandbox, it's an easier task for them.
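E.g. in Kubernetes the usual middle ground is to at least pin pods to the runtime's default seccomp profile rather than running unconfined; a sketch, not a complete hardening config:

    apiVersion: v1
    kind: Pod
    metadata:
      name: seccomp-default
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault   # the runtime-maintained allowlist
      containers:
        - name: app
          image: nginx

Syscalls the profile doesn't know about get refused, which is exactly where the coordination problem with new kernel syscalls shows up.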
So forgive me for asking. :)
There was a good report that covered a lot of the risks and mitigations here: https://raw.githubusercontent.com/salesforce/kubernetes-cont...
But even then that had limited scope and didn't cover things like networking.
I’m not saying the article is totally bad, but calling it an ‘Ultimate Guide’ makes the author a charlatan.
Do you recommend disabling CPU limits in the general case?
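I.e., something like this, with a CPU request for scheduling but no CPU limit, so the pod can burst without being throttled (values are illustrative; memory keeps a limit because it isn't compressible):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-cpu-limit-example
    spec:
      containers:
        - name: app
          image: nginx
          resources:
            requests:
              cpu: "250m"        # scheduling guarantee
              memory: "256Mi"
            limits:
              memory: "256Mi"    # no cpu limit means no CFS throttling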
That clashes slightly with the fact that Hyper-V containers are not currently supported under Kubernetes (https://kubernetes.io/docs/setup/production-environment/wind...) :)
For more depth on the challenges of securing process-isolation containers with Windows, https://googleprojectzero.blogspot.com/2021/04/who-contains-... is a great read.
Virtualization & Containerization security depends a great deal on the security of the underlying platform.
Hyper-V can be used on endpoints, similar to VMware Workstation.
It can also be installed as a role on top of Windows Server, and used as a bootable OS of its own (likely deprecated in the future, so no Hyper-V Server past Server 2019).
Related to this is the type of Windows Server install, as it also touches on attack surface, though I believe there are constraints on the very small installs.
This matters because attack surface is likely to be, from smallest to largest: Hyper-V Server < Windows Server < Windows endpoint.