Kubernetes Security – Best Practice Guide (github.com)
212 points by aboutsimon 3 months ago | 44 comments



It would help a lot to have a Why for each section. For example, why use a private topology? Why block access to the AWS Metadata API?

I'm not saying it's wrong to do those things, but it would help to prioritize changes if you can understand the severity of the security vulnerabilities you're exposed to.


This guide really has no audience. If you are responsible for a K8s cluster, you will need a depth of knowledge that requires knowing why. If you are using a hosted K8s offering, then you only need to know the potential issues (why again). No one needs to know only what to do.


No need to turn it into a blackhat's roadmap.

Having anybody else able to enumerate portions of your infrastructure is not good.


Yes please! It's one thing to know what you should do because you've been told to do so, but quite another to fully understand why you're doing something.


Indeed, very helpful. I created issues on GitHub.


From my experience reviewing Kubernetes deployments for security, here's where I'd start on securing Kubernetes.

- Make sure that all the management interfaces require authentication, including the Kubelet, etcd and the API Server. Some distributions don't do this consistently for every interface. While the API server is generally configured this way, I've seen setups where etcd and/or the Kubelet are not, and that's generally going to lead to compromise of the cluster.

- Ensure that you've got RBAC turned on and/or stop service account tokens from being mounted into pods (a sketch of this follows the list). Having a cluster-admin level token mounted into pods by default is quite dangerous if an attacker can compromise any application component running on your cluster.

- Block access to the metadata service if you're running in the cloud. For example, if you're running your k8s cluster on EC2 VMs, any attacker who compromises one container can use the metadata service to get the IAM credentials for the EC2 machine, which can be bad for your security :) This is likely to be done with Network Policy, which you can also use to do things like block access from the container network to the node IP addresses (see the NetworkPolicy sketch after this list).

- Turn off unauthenticated information APIs like cAdvisor and the read-only kubelet port, if you don't need them.

- Implement PodSecurityPolicy to reduce the risk of containers compromising the hosts
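
As a concrete illustration of the metadata point above, here's a minimal sketch of a NetworkPolicy that blocks egress to the EC2 metadata endpoint. The policy and namespace names are made up, it only takes effect if your CNI plugin enforces NetworkPolicy, and as written it allows all other egress, so adapt it to your own egress rules:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: deny-metadata-egress      # hypothetical name
      namespace: my-app               # hypothetical namespace
    spec:
      podSelector: {}                 # applies to every pod in the namespace
      policyTypes:
      - Egress
      egress:
      - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
            - 169.254.169.254/32      # the EC2 metadata endpoint

And for the service-token point, a sketch of opting a pod out of automatic token mounting (the same field can also be set on the ServiceAccount itself); the pod name and image are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-pod               # hypothetical
    spec:
      automountServiceAccountToken: false   # no API token gets mounted into this pod
      containers:
      - name: app
        image: example/app:1.0        # hypothetical image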


For EC2 metadata, just use kube2iam or kiam. In an AWS environment, it is pretty much guaranteed that some of your services will need the metadata service to get access to some other AWS service, and it will come in handy then.


Some excellent talks on defense in depth with Kubernetes:

- Hacking and hardening Kubernetes Clusters by Example by Brad Geesaman -> https://www.youtube.com/watch?v=vTgQLzeBfRU

- Shipping in Pirate-Infested Waters: Practical Attack and Defense in Kubernetes by Greg Castle -> https://www.youtube.com/watch?v=ohTq0no0ZVU


You recommend RBAC but then state that k8s-dash starts with full permissions. That's not true at all when using RBAC. You need to define which namespaces, resources, etc. get accessed. Right now, if you deploy RBAC + k8s-dash (which is basically deprecated anyway) and don't set up its RBAC service account, you won't be able to view things in k8s without putting in your personal admin token, because it would use the default service account, which has no or very limited permissions.

Definitely suggest adding more RBAC examples to this, and things like etcd with TLS, etc.
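
For anyone looking for a starting point, here's a rough sketch of scoping a dashboard-style service account to read-only access in a single namespace rather than binding it to cluster-admin. The namespace, role name, and service account name are placeholders; the dashboard's actual service account name depends on how it was installed:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: dashboard-viewer          # hypothetical role
      namespace: my-namespace         # hypothetical namespace to expose
    rules:
    - apiGroups: ["", "apps"]
      resources: ["pods", "services", "deployments"]
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: dashboard-viewer-binding
      namespace: my-namespace
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: dashboard-viewer
    subjects:
    - kind: ServiceAccount
      name: kubernetes-dashboard      # adjust to your dashboard's service account
      namespace: kube-system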


Someone installed the dashboard for us before the RBAC stuff was added. The fun started when we realized that the old ClusterRoleBinding needed to be deleted manually.

Until we did, clicking SKIP gave full access.

It's nice when things are idempotent, but removing stray things that should no longer exist is usually overlooked.


If k8s-dash is deprecated, what's the replacement?


I may be wrong on that, or it's in flux. About 6 months ago, in one of the changelogs or GKE blogs, it was mentioned that k8s-dash was being deprecated as GKE was launching its own proprietary dashboard (basically the GKE UI now). But SIG UI is still a thing, so maybe that changed or I misread. Or it was more specific to GKE users; not entirely sure, and having trouble tracking all that down now. Sorry if I'm misinforming anyone; hopefully a SIG member can chime in.

I don't believe I'm confusing this with kube-ui, which was deprecated for kube-dash.

https://github.com/kubernetes/community/tree/master/sig-ui


kubernetes dashboard[0] is still very much alive afaik.

Some providers / distros may have deprecated it, but the community hasn't.

0 - https://github.com/kubernetes/dashboard


Another tool that can help here: https://github.com/aquasecurity/kube-bench


This looks potentially very useful, thanks for sharing!

There appear to be several of these worth investigating. Ordered by highest to lowest apparent activity level and update frequency:

https://github.com/aquasecurity/kube-bench (Go)

https://github.com/neuvector/kubernetes-cis-benchmark (Bash)

https://github.com/dev-sec/cis-kubernetes-benchmark (Ruby)


I'll add https://github.com/nccgroup/kube-auto-analyzer to that list (disclaimer: I, for my sins, wrote it :) )


This is quite useful, thank you for pointing me in this direction!


Super useful, thanks. Added it to the guide.


If you're interested in Kubernetes security, I'd recommend looking at the CIS benchmark https://www.cisecurity.org/benchmark/kubernetes/ which is relatively well maintained and has a lot of information about possible Kubernetes security configuration.


I've wondered if there is a k8s-native way to run this, perhaps as a DaemonSet?
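
kube-bench's own repo ships a Job manifest you can adapt (a DaemonSet with the same pod spec works too if you want it on every node). Treating the exact image tag, mounts and flags as version- and distribution-dependent, it looks roughly like this:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: kube-bench
    spec:
      template:
        spec:
          hostPID: true                     # the checks inspect host processes
          restartPolicy: Never
          containers:
          - name: kube-bench
            image: aquasec/kube-bench:latest
            command: ["kube-bench"]
            volumeMounts:
            - name: etc-kubernetes          # the checks also read config files on the host
              mountPath: /etc/kubernetes
              readOnly: true
          volumes:
          - name: etc-kubernetes
            hostPath:
              path: /etc/kubernetes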


Best practice: don't roll your own Kubernetes cluster; use a distribution like OpenShift[1].

They take care of providing, among other things, a secure default configuration.

[1]: https://github.com/openshift/origin


It would be nice to know how OpenShift deviates from vanilla Kubernetes in terms of security and best practices. I used kubeadm to install my K8s PoC cluster and from what I've read it utilizes best practices for a "reasonably secure" installation. https://kubernetes.io/docs/setup/independent/create-cluster-...


Kubeadm does set up a few security features, as long as it's a fairly recent version. There are others you still need to configure and/or enable if you need them (e.g. NetworkPolicies, OIDC, encryption of data at rest, etc.). I don't see OpenShift setting many of these either.
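
For example, OIDC is just a set of API server flags, and with kubeadm you can pass them via extraArgs in the cluster configuration. A rough sketch, where the kubeadm config apiVersion depends on your release and the issuer/client values are hypothetical:

    apiVersion: kubeadm.k8s.io/v1beta2      # use the apiVersion matching your kubeadm release
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        oidc-issuer-url: "https://accounts.example.com"   # hypothetical identity provider
        oidc-client-id: "kubernetes"
        oidc-username-claim: "email"
        oidc-groups-claim: "groups"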


Well, first of all, OpenShift enforces that no pods can run as root. That's a pretty big deviation from vanilla Kubernetes right out of the box.


I was wondering how they enforce it.

>OpenShift runs whichever container you want with a random UUID, so unless the Docker image is prepared to work as a non-root user, it probably won't work due to permissions issues.

Source: https://engineering.bitnami.com/articles/running-non-root-co...


It's not quite random. Every namespace gets assigned a unique block of 10k UIDs, and the default container UID is the first in the block for all unprivileged users. Granting access to a higher-powered PSP (actually a security context constraint, which was the basis for PSP) changes the defaulting.


It's enforced with a default PodSecurityPolicy, which describes the attributes/capabilities that containers can have on the cluster.
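
For the vanilla-Kubernetes equivalent, a hedged sketch of a PodSecurityPolicy that rejects root containers might look like the following. Note it only takes effect if the PodSecurityPolicy admission controller is enabled and the pod's service account is authorized via RBAC to use the policy; the policy name is a placeholder:

    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: restricted                # hypothetical name
    spec:
      privileged: false
      allowPrivilegeEscalation: false
      runAsUser:
        rule: MustRunAsNonRoot        # reject containers that would run as UID 0
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      volumes:                        # note: hostPath deliberately not in this list
      - configMap
      - secret
      - emptyDir
      - persistentVolumeClaim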


How precisely does it provide a secure default configuration? As far as I can tell, it enables very few of the security features built into Kubernetes (many of which are not enabled by default or require configuration to enable properly). It's still up to the administrator to secure the cluster.

Edit: Not to mention your comment doesn't address half the article, which dives into security tangential to Kubernetes itself, such as AWS.


It comes with restrictive PSP equivalents by default. We disable every insecure port on the cluster. End users can't schedule onto masters or other core infrastructure. The node authorizer is now on by default, so nodes are limited in what they can do if a node escape happens. We generate unique certs for all nodes to uniquely identify them. End users can't directly schedule onto specific nodes, or set endpoints to point to node IPs and bypass network policy. The default SDN plugin applies automatic network policy firewalling for projects. A user in one namespace can't create an ingress rule that steals domains from another namespace. We enforce SELinux by default on all nodes and maintain the upstream policy that has been tested in our largest and most diverse environments (OpenShift Online). We block, through RBAC, access to daemonsets, which can be abused to DoS nodes. We support default quotas and limits in all user-created namespaces, and also quota how many namespaces users can create.

Almost every Kubernetes security feature started in openshift and was moved upstream in some form, although a few protections haven’t made it because they are too specific or would complicate Kube.


Thanks for the detailed reply. These should be posted big and up front in the project README. Coming from a traditional environment and deciding on a strategy or platform, one of my, and my team's, biggest concerns and unknowns getting into Kubernetes was security.


Basics are missing, e.g. don't run privileged containers, don't give all your pods cluster-admin rights, don't allow random hostPath mounting. (You would be surprised how much software couldn't run if this were really enforced.)


Have you seen the CIS Kubernetes Benchmark? It has 100+ security checks.


Learned about them this morning, it covers almost all of my findings and more. Thanks for sharing!


Security guide is for RHEL6? Nobody is actually running K8s on RHEL6 without systemd, are they? Are they?!?!


Hahaha, I actually do run production Kubernetes clusters at work on CentOS 6 so I noticed that too. I suspect they're just posting a general Linux security guide rather than specifically targeting RHEL6 because running Kubernetes on CentOS 6 is really an uphill battle and I can't imagine that many other people embark on that journey.

It's entertaining that you singled out systemd as the thing that makes it difficult to run Kubernetes on CentOS 6; the init system is by far the easiest part, haha. In a standard Kubernetes deployment, the only daemons you really need to have running are dockerd and kubelet; you could feasibly run it without an init system at all, especially now with cri-o. What makes you consider systemd to be important on Kubernetes nodes? (FYI: I actually really like systemd, so this isn't a jab at it, I'm just curious.)

For a taste of the battle:

- It ships with kernel 2.6, which is pretty unacceptable in the container world:

-- Supports only a subset of modern namespaces and cgroup controllers

-- Has terrible bugs like containers getting OOM-killed due to the kernel not flushing buffers/cache to disk when the cgroup is running out of memory.

-- It doesn't have overlay2 support and aufs dropped support in 2012.

-- We've been running custom kernels since long before we adopted Kubernetes, so this wasn't a hurdle for us. We currently run a mainline kernel 4.9 with many patches. That said, there are yum repos out there for modern kernels.

- Docker stopped supporting CentOS 6 long ago at version 1.7. That said, they didn't kill off the CentOS 6 build support until the beginning of the moby split in 1.13 so if you were running a custom kernel and an updated iptables beyond 1.4, everything worked. We run 17.06 now, which was much more painful to get building.

- Need to build and upgrade util-linux, e2fsprogs, iproute2, libseccomp, and probably a few others.

So once you've done all that, an init script is the least of your problems lol. CentOS 6 also ships both sysvinit and upstart, so you could write an upstart config instead and get similar enough behavior to systemd.


Seems like a lot of hoops to have to jump through! Thankfully I managed to avoid all of these trials by being put off by the fact that it didn't have systemd. Your response makes me glad that my unwarranted dislike for other init systems kept me from going down that path, though.

Thanks for the response, and hats off to you for making lemonade in that situation.


Yeah that jumped out at me too.

There is a RHEL7 guide at the predictable URL. I've no idea what the differences are.

https://access.redhat.com/documentation/en-us/red_hat_enterp...


Shameless plug: try Rancher 2.0 (beta announced today, GA coming in a month). With Rancher we enable all the security stuff by default but try to actually make it usable (which is quite hard). So RBAC, PSPs, and network policy are all on by default. We give you quite a few tools to manage RBAC and PSPs.

If you are managing your own cluster, you really need a guide like this. It's very easy to create a very insecure cluster. People are actively targeting poorly configured k8s clusters too. It doesn't take too long before your cluster starts mining Bitcoin for someone else :)


The main kubernetes docs have a good section on this as well.

https://kubernetes.io/docs/tasks/administer-cluster/securing...


Not included, but I would like to see how to inject secrets when running an image.


Kubernetes has great built-in support for injecting secrets as either environment variables (like API keys) or volume mounts (for things like certs). You can configure them to be encrypted at rest as well.

https://kubernetes.io/docs/concepts/configuration/secret/

https://kubernetes.io/docs/tasks/administer-cluster/encrypt-...
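
A minimal sketch of both injection styles in one pod spec; all the names here (pod, image, Secret names, keys) are placeholders and assume the Secrets already exist:

    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-demo               # hypothetical
    spec:
      containers:
      - name: app
        image: example/app:1.0        # hypothetical image
        env:
        - name: API_KEY               # exposed to the process as an environment variable
          valueFrom:
            secretKeyRef:
              name: app-secrets       # existing Secret (placeholder name)
              key: api-key
        volumeMounts:
        - name: certs
          mountPath: /etc/certs       # each key in the Secret becomes a file here
          readOnly: true
      volumes:
      - name: certs
        secret:
          secretName: app-certs       # existing Secret (placeholder name)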


One thing to watch there is that you have to be using a relatively recent version (1.9+ IIRC) to get encryption at rest for secrets in base Kubernetes.
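
For reference, the encryption config you point the API server at looks roughly like this; the exact kind/apiVersion and flag name (--encryption-provider-config, formerly --experimental-encryption-provider-config) vary with the Kubernetes version, and the key below is just a placeholder:

    apiVersion: apiserver.config.k8s.io/v1
    kind: EncryptionConfiguration
    resources:
    - resources:
      - secrets
      providers:
      - aescbc:
          keys:
          - name: key1
            secret: <base64-encoded 32-byte key>   # placeholder, generate your own
      - identity: {}                  # fallback so existing plaintext data stays readable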


At image startup, you mean? Or while the image is actively running?


If you use the file-based secrets (or ConfigMaps), they'll be updated when the underlying secrets are.

Obviously your code would have to handle picking up these new secrets, and not just read the file once at startup.



