I'm not saying it's wrong to do those things, but it would help you prioritize changes if you understand the severity of the security vulnerabilities you're exposed to.
Having anybody else able to enumerate portions of your infrastructure is not good.
- Make sure that all the management interfaces require authentication, including the Kubelet, etcd, and the API server. Some distributions don't do this consistently. While the API server is generally configured this way, I've seen setups where etcd and/or the Kubelet are not, and that's generally going to lead to compromise of the cluster.
- Ensure that you've got RBAC turned on, and/or stop service account tokens from being mounted into pods. Having a cluster-admin-level token mounted into pods by default is quite dangerous if an attacker can compromise any application component running on your cluster.
- Block access to the metadata service if you're running in the cloud. For example, if you're running your k8s cluster on EC2 VMs, an attacker who compromises one container can use the metadata service to get the IAM credentials for the EC2 instance, which can be bad for your security :) This is likely to be done with Network Policy, which you can also use to do things like block access from the container network to the node IP addresses.
- Turn off unauthenticated information APIs like cAdvisor and the read-only kubelet port, if you don't need them.
- Implement PodSecurityPolicy to reduce the risk of containers compromising the hosts
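To make the Kubelet points above concrete, these are the upstream kubelet flags involved (a sketch only — the exact invocation and file paths vary by distribution, and the CA path here is an assumption):

```shell
# Illustrative kubelet hardening flags (check your kubelet version's docs):
#   --anonymous-auth=false        reject unauthenticated requests to the kubelet API
#   --authorization-mode=Webhook  delegate authorization decisions to the API server
#   --client-ca-file=...          require client certs signed by the cluster CA
#   --read-only-port=0            disable the unauthenticated read-only port (10255)
kubelet --anonymous-auth=false --authorization-mode=Webhook \
  --client-ca-file=/etc/kubernetes/pki/ca.crt --read-only-port=0
```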
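For the metadata point, a NetworkPolicy along these lines is one way to do it (a sketch: it assumes a CNI plugin that actually enforces egress policy, such as Calico, and the name/namespace are made up for illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata        # illustrative name
  namespace: default         # illustrative namespace
spec:
  podSelector: {}            # applies to all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32   # the EC2 metadata service
```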
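On the PodSecurityPolicy point, a restrictive policy might look roughly like this (illustrative only — the name and the exact set of allowed volumes are assumptions you'd tune for your workloads):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted            # illustrative name
spec:
  privileged: false           # no privileged containers
  hostNetwork: false          # no access to the node's network namespace
  hostPID: false
  hostIPC: false
  runAsUser:
    rule: MustRunAsNonRoot    # containers must not run as UID 0
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                    # only non-host volume types
  - configMap
  - secret
  - emptyDir
```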
- Hacking and hardening Kubernetes Clusters by Example by Brad Geesaman -> https://www.youtube.com/watch?v=vTgQLzeBfRU
- Shipping in Pirate-Infested Waters: Practical Attack and Defense in Kubernetes by Greg Castle -> https://www.youtube.com/watch?v=ohTq0no0ZVU
Definitely suggest adding more RBAC examples to this, and things like etcd w/SSL, etc.
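For example, a namespaced least-privilege RBAC grant looks roughly like this (the names, namespace, and service account are illustrative, not from the article):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # illustrative name
  namespace: default          # illustrative namespace
rules:
- apiGroups: [""]             # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # read-only access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: monitoring            # hypothetical service account
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```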
Clicking SKIP hence gave full access until we did.
It's nice when things are idempotent, but removing stray things that should no longer be present is usually overlooked.
I don't believe I'm confusing this with kube-ui, which was deprecated for kube-dash.
Some providers / distros may have deprecated it, but the community hasn't.
0 - https://github.com/kubernetes/dashboard
There appear to be several of these worth investigating.
Ordered by highest to lowest apparent activity level and update frequency:
They take care of providing, among others, a secure default configuration.
>OpenShift runs whichever container you want with a random UID, so unless the Docker image is prepared to work as a non-root user, it probably won't work due to permissions issues.
Edit: Not to mention, your comment doesn't address half the article, which dives into security tangential to Kubernetes itself, such as AWS.
Almost every Kubernetes security feature started in OpenShift and was moved upstream in some form, although a few protections haven't made it because they are too specific or would complicate Kube.
It's entertaining that you singled out systemd as the thing that makes it difficult to run Kubernetes on CentOS 6; the init system is by far the easiest part, haha. In a standard Kubernetes deployment, the only daemons you really need running are dockerd and the kubelet, so you could feasibly run without an init system at all, especially now with cri-o. What makes you consider systemd important on Kubernetes nodes? (FYI: I actually really like systemd, so this isn't a jab at it; I'm just curious.)
For a taste of the battle:
- It ships with kernel 2.6, which is pretty unacceptable in the container world:
-- Supports only a subset of modern namespaces and cgroup controllers
-- Has terrible bugs, like containers getting OOM-killed because the kernel doesn't flush buffers/cache to disk when the cgroup is running out of memory.
-- It doesn't have overlay2 support and aufs dropped support in 2012.
-- We've been running custom kernels since long before we adopted Kubernetes, so this wasn't a hurdle for us. We currently run a mainline kernel 4.9 with many patches. That said, there are yum repos out there for modern kernels.
- Docker stopped supporting CentOS 6 long ago at version 1.7. That said, they didn't kill off the CentOS 6 build support until the beginning of the moby split in 1.13 so if you were running a custom kernel and an updated iptables beyond 1.4, everything worked. We run 17.06 now, which was much more painful to get building.
- Need to build and upgrade util-linux, e2fsprogs, iproute2, libseccomp, and probably a few others.
So once you've done all that, an init script is the least of your problems lol. CentOS 6 also ships with both sysvinit and upstart, so you could write an upstart config instead and get behavior similar enough to systemd.
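For instance, a minimal upstart job for the kubelet could look something like this (illustrative only — the binary path and flags are assumptions, not what we actually run, and it assumes docker is itself an upstart job):

```
# /etc/init/kubelet.conf -- illustrative upstart job
description "Kubernetes kubelet"
start on started docker          # wait for the docker job
stop on runlevel [!2345]
respawn                          # restart the kubelet if it dies
exec /usr/local/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
```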
Thanks for the response, and hats off to you for making lemonade in that situation.
There is a RHEL7 guide at the predictable URL.
I've no idea what the differences are.
If you are managing your own cluster, you really need a guide like this. It's very easy to create a very insecure cluster, and people are actively targeting poorly configured k8s clusters too. It doesn't take too long before you start mining Bitcoin :)
Obviously, your code would have to handle picking up these new secrets, and not just read the file once at startup.
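To illustrate (a minimal sketch, not Kubernetes-specific — the paths and values here are made up): re-read the secret from the mounted file on each use, rather than caching it at startup, since the kubelet updates mounted Secret volumes in place.

```python
import os
import tempfile

def read_secret(path):
    # Read the secret fresh on each use so that updates to a mounted
    # Secret volume are picked up without restarting the process.
    with open(path) as f:
        return f.read().strip()

# Demo: a temp file stands in for a path mounted from a Kubernetes Secret,
# e.g. something under /var/run/secrets/ (hypothetical layout).
secret_dir = tempfile.mkdtemp()
secret_path = os.path.join(secret_dir, "db-password")

with open(secret_path, "w") as f:
    f.write("hunter2\n")
print(read_secret(secret_path))  # -> hunter2

# Simulate the kubelet rotating the mounted secret in place:
with open(secret_path, "w") as f:
    f.write("s3cr3t-rotated\n")
print(read_secret(secret_path))  # -> s3cr3t-rotated
```

The trade-off is a file read per use; if that's too hot a path, re-read on a timer or on an inotify event instead of on every call.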