
Kubernetes Security – Best Practice Guide - aboutsimon
https://github.com/freach/kubernetes-security-best-practice
======
iooi
It would help a lot to have a _Why_ for each section. For example, why use a
private topology? Why block access to the AWS Metadata API?

I'm not saying it's wrong to do those things, but it would help to prioritize
changes if you can understand the severity of the security vulnerabilities
you're exposed to.

~~~
eikenberry
This guide really has no audience. If you are responsible for a K8s cluster,
you will need a depth of knowledge that requires knowing why. If you are using
a hosted K8s, then you only need to know the potential issues (why, again). No
one needs to know only what to do.

~~~
220V_USKettle
No need to turn it into a blackhat's roadmap.

Having anybody else able to enumerate portions of your infrastructure is not
good.

------
raesene9
From my experience of reviewing Kubernetes deployments for security here's
where I'd _start_ on securing Kubernetes.

\- Make sure that all the management interfaces require authentication,
including the Kubelet, etcd and the API server. Some distributions don't do
this consistently, or from every network perspective. Whilst the API server
is generally configured like this, I've seen setups where etcd and/or the
Kubelet are not, and that's generally going to lead to compromise of the
cluster.
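As a rough sketch, the Kubelet side of this can be locked down via its config
file (a KubeletConfiguration fragment; flag equivalents exist, and exact
defaults and file locations vary by distribution):

```yaml
# Sketch of a KubeletConfiguration fragment enforcing authn/authz;
# defaults and file location vary by distribution.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false      # reject unauthenticated requests
  webhook:
    enabled: true       # delegate authentication to the API server
authorization:
  mode: Webhook         # delegate authorization too (some distros default to AlwaysAllow)
```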

\- Ensure that you've got RBAC turned on and/or stop service tokens being
mounted into pods. Having a cluster-admin level token mounted into pods by
default is quite dangerous if an attacker can compromise any application
component running on your cluster.
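If pods don't need to talk to the API server, token mounting can be switched
off on the service account (or per pod); a sketch, with the account name and
namespace assumed:

```yaml
# Sketch: stop the token being auto-mounted for pods using this account.
# "app-sa" and the namespace are hypothetical.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: default
automountServiceAccountToken: false
```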

\- Block access to the metadata service if you're running in the cloud. For
example, if you're running your k8s cluster on EC2 VMs, any attacker who
compromises one container can use the metadata service to get the IAM
credentials for the EC2 machine, which can be bad for your security :) This
is likely to be done with Network Policy, which you can also use to block
access from the container network to the node IP addresses.
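A sketch of such a Network Policy (it requires a CNI plugin that implements
egress policy; the CIDR shown is the AWS/GCP metadata endpoint):

```yaml
# Sketch: deny egress to the cloud metadata endpoint for all pods
# in a namespace; needs a network plugin with egress support.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata
spec:
  podSelector: {}            # all pods in this namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32   # metadata service
```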

\- Turn off unauthenticated information APIs like cAdvisor and the read-only
kubelet port, if you don't need them.
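The read-only port can be closed in the Kubelet config (or via the equivalent
--read-only-port=0 flag); a fragment:

```yaml
# KubeletConfiguration fragment: disable the unauthenticated
# read-only port (10255 by default).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
readOnlyPort: 0
```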

\- Implement PodSecurityPolicy to reduce the risk of containers compromising
the hosts
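A minimal "restricted" PodSecurityPolicy might look like this (a sketch; you
also need RBAC rules granting service accounts "use" on the policy before
pods are admitted under it):

```yaml
# Sketch of a restrictive PodSecurityPolicy; RBAC must separately
# grant service accounts "use" on it for pods to be admitted.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                 # note: no hostPath in this list
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
```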

~~~
captn3m0
For EC2 metadata, just use kube2iam or kiam. In an AWS environment, it is
pretty much guaranteed that some of your services will need it to get access
to some other AWS service, and it will come in handy then.
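With kube2iam, a pod opts into a role via an annotation; a sketch (the role
name and image are hypothetical):

```yaml
# Sketch: kube2iam intercepts metadata calls from this pod and only
# hands out credentials for the annotated role. Names are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: app
  annotations:
    iam.amazonaws.com/role: my-app-role
spec:
  containers:
  - name: app
    image: example/app:latest
```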

------
hardwaresofton
Some excellent talks on defense in depth with Kubernetes:

\- Hacking and hardening Kubernetes Clusters by Example by Brad Geesaman ->
[https://www.youtube.com/watch?v=vTgQLzeBfRU](https://www.youtube.com/watch?v=vTgQLzeBfRU)

\- Shipping in Pirate-Infested Waters: Practical Attack and Defense in
Kubernetes by Greg Castle ->
[https://www.youtube.com/watch?v=ohTq0no0ZVU](https://www.youtube.com/watch?v=ohTq0no0ZVU)

------
swozey
You recommend RBAC but then state that the k8s-dash starts with full
permissions. That's not true at all when using RBAC: you need to define which
namespaces, resources, etc. get accessed. Right now with k8s, if you deploy
RBAC + k8s-dash (which is basically deprecated anyway) and don't set up its
RBAC service account, you won't be able to view things in k8s without putting
in your personal admin token, because the dashboard would use the default
service account, which has no or very limited permissions.

Definitely suggest adding more RBAC examples to this, and things like etcd
with TLS, etc.
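For instance, a least-privilege dashboard setup binds its service account to
the built-in "view" role rather than cluster-admin (a sketch; the SA name and
namespace are common defaults, yours may differ):

```yaml
# Sketch: give the dashboard's service account read-only access
# via the built-in "view" ClusterRole. SA name/namespace assumed.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
```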

~~~
Filligree
If k8s-dash is deprecated, what's the replacement?

~~~
swozey
I may be wrong on that, or it's in flux. About 6 months ago in one of the
changelogs or GKE blogs it was mentioned that the k8s-dash was being
deprecated as GKE was launching its own proprietary dashboard (basically the
GKE UI now). But sig-UI is still a thing so maybe that changed or I misread.
Or it was more specific to GKE users; not entirely sure, and I'm having
trouble tracking it all down now. Sorry if I'm misinforming anyone, hopefully
a SIG member can chime in.

I don't believe I'm confusing this with kube-ui, which was deprecated in
favor of kube-dash.

[https://github.com/kubernetes/community/tree/master/sig-
ui](https://github.com/kubernetes/community/tree/master/sig-ui)

------
mcdan
Another tool that can help here: [https://github.com/aquasecurity/kube-
bench](https://github.com/aquasecurity/kube-bench)

~~~
jaytaylor
This looks potentially very useful, thanks for sharing!

There appear to be several of these worth investigating. Ordered by highest to
lowest apparent activity level and update frequency:

[https://github.com/aquasecurity/kube-
bench](https://github.com/aquasecurity/kube-bench) (Go)

[https://github.com/neuvector/kubernetes-cis-
benchmark](https://github.com/neuvector/kubernetes-cis-benchmark) (Bash)

[https://github.com/dev-sec/cis-kubernetes-benchmark](https://github.com/dev-
sec/cis-kubernetes-benchmark) (Ruby)

------
raesene9
If you're interested in Kubernetes security, I'd recommend looking at the CIS
benchmark
[https://www.cisecurity.org/benchmark/kubernetes/](https://www.cisecurity.org/benchmark/kubernetes/)
which is relatively well maintained and has a lot of information about
possible Kubernetes security configuration.

~~~
captn3m0
I've wondered if there is a k8s-native way to run this? Perhaps as a
DaemonSet?
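Something like a one-off Job per node would do it; a sketch loosely based on
how kube-bench's repo suggests running it in-cluster (image tag, args, and
mounts here are assumptions, so check the repo's own manifests):

```yaml
# Sketch: run kube-bench's node checks in-cluster as a Job.
# Image tag, args and mounts are assumptions; see the repo's manifests.
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench-node
spec:
  template:
    spec:
      hostPID: true            # needed to inspect node processes
      restartPolicy: Never
      containers:
      - name: kube-bench
        image: aquasec/kube-bench:latest
        args: ["node"]
        volumeMounts:
        - name: etc-kubernetes
          mountPath: /etc/kubernetes
          readOnly: true
      volumes:
      - name: etc-kubernetes
        hostPath:
          path: /etc/kubernetes
```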

------
lima
Best practice: don't roll your own Kubernetes cluster and use a distribution
like OpenShift[1].

They take care of providing, among other things, a secure default
configuration.

[1]:
[https://github.com/openshift/origin](https://github.com/openshift/origin)

~~~
stuff4ben
It would be nice to know how OpenShift deviates from vanilla Kubernetes in
terms of security and best practices. I used kubeadm to install my K8s PoC
cluster and from what I've read it utilizes best practices for a "reasonably
secure" installation. [https://kubernetes.io/docs/setup/independent/create-
cluster-...](https://kubernetes.io/docs/setup/independent/create-cluster-
kubeadm/)

~~~
humbleMouse
Well first of all OpenShift enforces that no pods can run as root. That's a
pretty big deviation from vanilla Kubernetes right out of the box.

~~~
captn3m0
I was wondering how they enforce it.

>OpenShift runs whichever container you want with a random UUID, so unless the
Docker image is prepared to work as a non-root user, it probably won't work
due to permissions issues.

Source: [https://engineering.bitnami.com/articles/running-non-root-
co...](https://engineering.bitnami.com/articles/running-non-root-containers-
on-openshift.html)

~~~
smarterclayton
It’s not quite random. Every namespace gets assigned a unique block of 10k
UIDs and the default container UID is the first in the block for all
unprivileged users. Granting access to a higher powered PSP (actually a
security context constraint which was the basis for PSP) changes the
defaulting.
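On vanilla Kubernetes you can approximate this per pod with a securityContext
(a sketch; the UID is arbitrary, standing in for OpenShift's per-namespace
allocation, and the pod name/image are hypothetical):

```yaml
# Sketch: vanilla-Kubernetes equivalent of forcing non-root.
# UID 10001 is arbitrary, standing in for an allocated block.
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-app
spec:
  securityContext:
    runAsNonRoot: true       # kubelet refuses to run root containers
    runAsUser: 10001
  containers:
  - name: app
    image: example/app:latest
```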

------
erikb
Basics are missing, e.g. don't run privileged containers, don't give all your
pods cluster-admin rights, don't allow arbitrary hostPath mounting. (You
would be surprised how much software couldn't run if this were really
enforced.)
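For the avoidance of doubt, this is the kind of pod spec those basics rule
out; anything like it can take over its node (names and image are
hypothetical):

```yaml
# The anti-pattern: privileged + hostPath of / gives the pod
# effectively root on the node. Names are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: dangerous
spec:
  containers:
  - name: shell
    image: alpine
    securityContext:
      privileged: true       # full capabilities and device access
    volumeMounts:
    - name: host
      mountPath: /host
  volumes:
  - name: host
    hostPath:
      path: /                # the node's entire root filesystem
```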

------
praving5
Have you seen CIS Kubernetes Benchmark? It has 100+ security check points.

~~~
aboutsimon
Learned about them this morning, it covers almost all of my findings and more.
Thanks for sharing!

------
collinf
Security guide is for RHEL6? Nobody is actually running K8s on RHEL6 without
systemd, are they? Are they?!?!

~~~
paulfurtado
Hahaha, I actually do run production Kubernetes clusters at work on CentOS 6
so I noticed that too. I suspect they're just posting a general Linux security
guide rather than specifically targeting RHEL6 because running Kubernetes on
CentOS 6 is really an uphill battle and I can't imagine that many other people
embark on that journey.

It's entertaining that you singled out systemd as the thing that makes it
difficult to run Kubernetes on CentOS 6, the init system is by far the easiest
part haha. In a standard Kubernetes deployment, the only daemons you really
need to have running are dockerd and kubelet, you could feasibly run it
without an init system at all, especially now with cri-o. What makes you
consider systemd to be important on kubernetes nodes? (FYI: I actually really
like systemd, so this isn't a jab at it, I'm just curious)

For a taste of the battle:

\- It ships with kernel 2.6, which is pretty unacceptable in the container
world:

\-- Supports only a subset of modern namespaces and cgroup controllers

\-- Has terrible bugs like containers getting OOM-killed due to the kernel not
flushing buffers/cache to disk when the cgroup is running out of memory.

\-- It doesn't have overlay2 support and aufs dropped support in 2012.

\-- We've been running custom kernels since long before we adopted Kubernetes,
so this wasn't a hurdle for us. We currently run a mainline kernel 4.9 with
many patches. That said, there are yum repos out there for modern kernels.

\- Docker stopped supporting CentOS 6 long ago at version 1.7. That said, they
didn't kill off the CentOS 6 build support until the beginning of the moby
split in 1.13 so if you were running a custom kernel and an updated iptables
beyond 1.4, everything worked. We run 17.06 now, which was much more painful
to get building.

\- Need to build and upgrade util-linux, e2fsprogs, iproute2, libseccomp, and
probably a few others.

So once you've done all that, an init script is the least of your problems
lol. CentOS 6 also ships both sysvinit and upstart, so you could write an
upstart config instead and get similar enough behavior to systemd.

~~~
collinf
Seems like a lot of hoops to have to jump through! Thankfully I managed to
avoid having to go through all of these trials by being turned off that it
didn't have systemd. Your response makes me glad that my unwarranted dislike
for other init systems kept me from going down that path, though.

Thanks for the response, and hats off to you for making lemonade in that
situation.

------
darren0
Shameless plug: Try Rancher 2.0 (beta announced today and GA coming in a
month). With Rancher we enable all the security stuff by default but try to
actually make it usable (which is quite hard). So RBAC, PSPs, and network
policy are all on by default. We give you quite a few tools to manage RBAC
and PSPs.

If you are managing your own cluster, you really need a guide like this. It's
very easy to create a very insecure cluster. People are actively targeting
poorly configured k8s clusters too; it doesn't take too long before you start
mining Bitcoin :)

------
mugsie
The main kubernetes docs have a good section on this as well.

[https://kubernetes.io/docs/tasks/administer-
cluster/securing...](https://kubernetes.io/docs/tasks/administer-
cluster/securing-a-cluster/)

------
corpMaverick
Not included, but I would like to see how to inject secrets when running an
image.

~~~
jeffstephens
Kubernetes has great built-in support for injecting secrets as either
environment variables (like API keys) or volume mounts (for things like
certs). You can configure them to be encrypted at rest as well.

[https://kubernetes.io/docs/concepts/configuration/secret/](https://kubernetes.io/docs/concepts/configuration/secret/)

[https://kubernetes.io/docs/tasks/administer-
cluster/encrypt-...](https://kubernetes.io/docs/tasks/administer-
cluster/encrypt-data/)
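A sketch of both injection styles in one pod spec (Secret names and keys are
hypothetical):

```yaml
# Sketch: one Secret as an env var and another as mounted files.
# Secret names and keys are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: example/app:latest
    env:
    - name: API_KEY
      valueFrom:
        secretKeyRef:
          name: app-secrets
          key: api-key
    volumeMounts:
    - name: certs
      mountPath: /etc/certs
      readOnly: true
  volumes:
  - name: certs
    secret:
      secretName: app-certs
```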

~~~
raesene9
One thing to watch there is that you have to be using a relatively recent
version (1.9+, IIRC) to get encryption at rest for secrets in base
Kubernetes.

