But popular services, on default ports, with default APIs enabled, without hard authentication on a WAN interface? That should be a paddling. That doesn't fly. Or, well, it does, except not for the person paying the power bill.
I'm not familiar with enough distributions to know if there is a popular one that totally disables authentication by default, but in my company's distribution, in kubeadm clusters, and I suspect in all managed clusters (GKE/EKS/AKS/etc.), the vector outlined in the article would only work if an admin specifically disabled authentication.
In Gravity (my company's distribution), we even disable anonymous-auth, so someone would have to do real work to allow API access to the internet.
That said, it wasn't that long ago that a lot of distros were shipping unauthenticated kubelets, and I think that's where a lot of this will come from.
From cluster reviews I've done, problems like this tend to arise where people are using older versions (so early adopters) or have hand-rolled their clusters, not realising all the areas that require hardening.
And that's where I'll turn around 180 degrees and say: If you can't give me a hard reason why you'll be a hard target on the internet, you shouldn't have a public address. Default authentication isn't enough.
I dislike trusting my edge firewall, but it gives me time to handle weak internal systems.
Typically, access is limited to client certificates signed by the cluster CA, whose private key the apiserver has access to.
Client cert auth over TLS is pretty damn secure. I expose my Kubernetes cluster's apiserver to the internet and have, to my knowledge, had no issues yet.
At the moment Kubernetes has no certificate revocation process at all, so if one of your users has their cert stolen for an Internet facing cluster, you'll have to rebuild the entire CA and re-issue all certs to get round the problem.
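To make the tradeoff concrete, here's a minimal sketch of how k8s-style client cert auth works, using a throwaway CA in place of a real cluster CA (all names and paths here are illustrative, not kubeadm defaults):

```shell
# Throwaway CA standing in for the cluster CA
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.crt -subj "/CN=demo-cluster-ca"

# Key and CSR for a user; the apiserver maps CN to the username
# and O to a group
openssl req -newkey rsa:2048 -nodes \
  -keyout alice.key -out alice.csr -subj "/CN=alice/O=dev-team"

# Signing the CSR is the entire grant of access...
openssl x509 -req -in alice.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out alice.crt -days 1

# ...and once this verifies, nothing short of replacing ca.key/ca.crt
# un-grants it, which is the revocation problem described above.
openssl verify -CAfile ca.crt alice.crt
```

The last comment is the key point: the apiserver only checks the signature chain, so a stolen alice.key/alice.crt pair stays valid until it expires or the CA is rebuilt.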
They may consider that fine for security (the equivalent of having an insecure MySQL install on a machine with no tables of value in it), but might perhaps forget that even an empty Kubernetes install still lets attackers dictate what your CPU is doing.
Secure defaults are irrelevant if you pay attention to the news.
Interesting. I've got a few openbsd boxes that do not have vulnerabilities that impact them nearly so often.
It turns out that if you practice defence in depth, the majority of security vulnerabilities in the news have no impact on you.
For example, on my openbsd boxes I have only a single user. I do not run any untrusted code. That means spectre/meltdown doesn't actually impact me because no one can run code which will perform such a timing attack.
There was a recent openbsd/Xorg security issue. I didn't have X installed, and even if I did since it's only a single-user server, it again wouldn't have impacted me (privilege escalation means nothing when everyone is effectively root in my threat model).
Not all vulnerabilities are created equal, and with enough good practices it's possible to have boxes that stay secure for years and years with no need for patches.
I'm not so concerned about my OpenBSD box with >800 days of uptime, which runs very limited and carefully selected services.
"sshd" is an ambiguous term as there are many ssh daemons, from the libssh server to dropbear to OpenSSH, and OpenSSH is likely the one that you use and the most secure one.
systemd has had a few security incidents, but very few of them are actually a big deal. People have overblown each and every one since a cult of systemd hate has formed, which has muddied the waters significantly.
> you're playing with fire if you rely on product security to be perfect
That's a vacuous statement; of course you can't rely on everything being perfect, so you must practice defence in depth. Everyone already knows software sometimes has bugs.
The point of the parent post is that Kubernetes does intend to allow secure use while being exposed publicly (unlike e.g. the default redis configuration). The parent post does not claim it is perfect and that you must never patch it, merely that it is reasonable and can be hardened.
In the end, there are tradeoffs. You must decide that the convenience of developers being able to ssh into machines is worth the risk of running OpenSSH. You must decide that using Google Apps is worth the risk that Google will have a data breach exposing all of your confidential information. You must decide that Slack can be trusted to write secure enough php that your messages aren't being read by others.
Just because something isn't perfect doesn't mean that it can't still be a good tradeoff based on the expected risk.
I thought the 'd' was a holdover from "daemon", as in inetd or sshd, as a general name for a background process. systemd is a little more than just a background process, but it's sort of the same idea.
From the wiki page: "In a strictly technical sense, a Unix-like system process is a daemon when its parent process terminates and the daemon is assigned the init process (process number 1) as its parent process and has no controlling terminal. However, more generally a daemon may be any background process, whether a child of the init process or not. "
> Yes, it is written systemd, not system D or System D, or even SystemD .... [You may also, optionally] call it (but never spell it!) System Five Hundred since D is the roman numeral for 500 (this also clarifies the relation to System V, right?).
The 'd' is a pun on both daemons typically being postfixed with 'd' and on the roman numeral for '500'. It does not directly stand for either though officially.
The attack vector is what? Someone manages to convince an administrator to write a service that has "User=0foo" in it?
If an attacker has access to write into `/etc/systemd/system` then they already have root on the system.
If an attacker can cause an administrator to write a systemd unit and the administrator isn't checking that it's reasonable, the attacker could just have the `ExecStart` line run a 'sploit and not have a `User` line at all.
Seriously, what is the attack that you imagine where this has a security impact?
As Poettering said on that issue, no one should be running system services under usernames starting with numbers, and such usernames are of questionable validity in the first place.
People still have blown it out of proportion because it's systemd.
Note that a similar issue exists in the old SysV init scripts: they run as root, and if you convince the person writing an init script to omit the `start-stop-daemon -u username` flag, then the daemon will run as root. Basically identical, but it was never assigned a CVE because no one seriously considers "I talked my sysadmin into running something as root" a privilege escalation by itself.
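For reference, the unit file at the heart of that issue looked something like this sketch (service name and path are made up; behavior is as I understand the bug report, in systemd versions before the fix):

```ini
# /etc/systemd/system/example.service (illustrative)
[Service]
# "0day" is not a valid Unix username (it starts with a digit), so
# affected systemd versions logged a warning, ignored the User= line,
# and started the service as the default user: root.
User=0day
ExecStart=/usr/local/bin/example-daemon
```

Which is exactly why the question above matters: an attacker who can get this file onto the box can already set ExecStart to whatever they like.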
When this engineer redid things they opted to go the public internet route where the master runs a public api and auth is done via a certificate. The logic here was so that external 3rd party stuff (CI) could control our master.
To my knowledge this setup is still running and chances are these machines are vulnerable to this issue.
Contrast that with the prior setup, where your VPN access was automatically terminated immediately upon being offboarded from the company (thank you LDAP and Foxpass!)
With software like google IAP, and many similar products, it just seems silly.
Google has moved its internal stuff to the beyondcorp model, and it honestly seems like a better approach if you really care about security and have a big enough security team to make it work.
Google have a) huge resources and b) a threat model which means they're subject to a lot of high-end attacks all the time.
For many corps, the idea of exposing all their services and endpoints to the general internet without firewalls or VPNs would... end poorly...
Google I(dentity)A(ware)P(roxy) is actually a hosted beyondcorp implementation! But I probably should have explained that in my original comment.
I generally think it's no more risky to expose a Go app with cert-based auth than it is to expose OpenVPN so long as both are set up correctly.
Many Kubernetes distributions enable anonymous authentication to allow for health checking, so there is some risk there.
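If you want to check your own cluster, a quick probe looks something like this (hostname is a placeholder, and the status-code interpretation is my reading of recent Kubernetes behavior, not a guarantee):

```shell
# Hit the apiserver with no credentials at all
curl -k https://<your-apiserver>:6443/version

# 401 Unauthorized -> anonymous-auth is off
# 403 Forbidden    -> anonymous requests are accepted, but RBAC denies them
# 200 with a body  -> this endpoint is readable anonymously
```

A 403 isn't necessarily a problem (that's the health-check compromise mentioned above), but a 200 on anything beyond version/health endpoints is worth investigating.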
As to the general point, the only thing I'd say is that Kubernetes is a massive 1.5-million-line codebase of relatively new code, whereas OpenVPN has been around and under attack for a long time. I wouldn't be surprised if the recent CVE isn't the only issue we see in k8s over the next year.
Complexity: single-purpose apps built with a very specific threat model in mind for a boring, established use case tend to be more secure. K8s is a fast-evolving labyrinth of complexity with contributions from thousands of people, very few of whom have a grasp of the whole codebase.
Publicity: the general internet doesn't stumble onto your VPN server the way it finds your API endpoint.
Why not sidestep the issue by running CI within the VPC? :/
edit: to clarify, vpn/vpc requirement would turn CVE-2018-1002105 from a pre-auth to a post-auth vulnerability, right? Which might be a big or small help depending on how controlled your user pool and signup process is.
Script kiddies are just annoying and their actions resulted in the patch killing that silent mining botnet as well.
Far too many people are adopting Docker/Kubernetes because they have been the hot new products for the last couple of years, often regardless of whether they are actually the best or most appropriate tool for the job.
A lot of the people who get sucked into the hype are inexperienced programmers, devops, or admin types who hold positions of power or influence in companies that they probably shouldn't, IMHO.
As a result, they don't have the Linux or networking experience to know whether they are deploying these complex products securely, and they are putting their employers' businesses at risk.
You could say the exact same thing about Linux, Cisco, Dell, or pretty much any of the popular FOSS projects. Popular things, regardless of their complexity, get chosen by people of all experience levels. Inexperienced people are less likely to properly configure something, regardless of its popularity or hype.
If anything, having a few attractive projects tends to be beneficial (or at least neutral) for security, as there are so many more people scrutinizing them, and many more people learning how to properly use them.
I cannot agree more. Many times, I feel you could easily make do with ansible and terraform to set up VMs/docker; you don't really need k8s. Just because k8s is cool, people feel the need to use it.
- You want to spin up ephemeral environments to test PRs end2end? Sure: create a namespace, deploy your charts, and run your tests. You can do that with ansible too, but it's harder.
- Your org is running apps via a multi-cloud and on-prem strategy? Okay, let's write lots of tooling per cloud and more for on-prem, or we could abstract that away via kubernetes and only worry about tooling for kube itself.
- You want rolling upgrades? Sure, build them with ansible, or you could just use kube.
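The first bullet, sketched with kubectl and helm (PR number, chart path, and release name are all placeholders; this assumes a chart your CI can deploy):

```shell
# Carve out an isolated environment for PR #1234
kubectl create namespace pr-1234
helm install myapp-pr-1234 ./charts/myapp --namespace pr-1234

# ... run the end2end tests against services in that namespace ...

# One call tears down everything the tests created in it
kubectl delete namespace pr-1234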
Further to that, kubernetes encourages reasonable abstractions, separating infrastructure from code. Sure, it comes with complexity, but so do most things once you start throwing in scaling and auto-recovery.
For example, deploy terraform from your laptop? The device you probably browse porn on becomes an attack surface. Move it to Jenkins, and the CI is the attack surface. Put your code on Bitbucket? Bitbucket and the Jenkinsfile become the attack surface. Pretty much everything we do has complexity and attack-surface _problems_, and using a managed k8s service allows you some easy wins so you can actually think about those other problems, and those solutions will work on all platforms you can run k8s on.
Do you containerize these yourselves, whether or not the vendor says they will support that? Or does it get pushed to some other team that manages whole VMs/AWS instances that are not container hosts?
Or is this a scenario that just doesn't happen in your environment?
> using a managed k8s service will allow you some easy wins so you can actually think about those other problems, and those solutions will work on all platforms you can run k8s on
None of which matters one jot, if one cannot properly manage ingress/egress filtering on one's API endpoints, or a reasonable level of password/credential security. One will be used for cryptomining or worse, as per the fine article.
In that instance, one needs to go back and get some basic UNIX/Linux/network and security training before one starts playing with complicated software on publicly connected clouds. Or hire some people who actually know what they are doing with respect to that.
Depends what it is. I've taken a number of apps and wrapped them into docker containers and then written a helm chart. Some orgs get a bit skittish over "vendor support" but this usually only matters when they think it's a key product.
The point is, once you have a fleet, you should manage everything the same. If you're off building other pet services, you're going to have capacity problems.
> None of which matters one jot, if one cannot properly manage ingress/egress filtering on one's API endpoints, or a reasonable level of password/credential security. One will be used for cryptomining or worse, as per the fine article.
I mean, sure, but I did say to use a managed service, which will come with auth. Similarly, I wouldn't recommend you host services on any cloud or public-facing network without a professional involved.
For example, AWS is just as easy to get wrong. One of my current clients is busy hiring developers with no experience to put services on AWS, and they ended up with no encryption, no auth, no monitoring, and misconfigured IAM. What's really the difference between that and kube?
Working with large enterprises, I've seen both. If there's a good business case for the risk vs rewards (i.e. containers providing something technically useful which can be translated directly into revenue) and a good engineering + management team, some companies will actually risk it.
There's also the factor of how good the company's relationship with the third-party vendor is. Some companies have the weight to make the vendor support the unsupportable.
So it should itself be treated as more sensitive than other infrastructure pieces.
And I think the OP meant exactly that, not that k8s is particularly bad at security in general, or that the k8s project is less experienced with security.
The down vote is not warranted.
It's not hype if it solves a lot of organizations pain points.
K8s is no doubt hyped; otherwise it wouldn't enjoy such explosive growth.
That's not subjective, at least IMHO
If magic CPUs that were 10x better showed up tomorrow we'd all be justified in being very excited. But running with hype around the next Zune? ... that's not excitement backed with meaning.
VMs also have zero-days that have been exploited for cryptomining.
From the article itself: although they mention the CVE at the top, the real point they are making is that people are deploying the products with poor defaults:
"as is typical with our findings, lots of companies are exposing their Kubernetes API with no authentication; inside the Kubernetes cluster"
Not to mention a bunch of NoSQL type db's you can easily search on Shodan if you wanted to have some fun.
So yes - the problem here is experience, or lack thereof, and not Kubernetes itself. The CVE can be patched. You can't patch inexperience - except with experience I suppose.
All I am saying is that there a lot of people who are downloading and deploying these products because of hype, who are unable or unwilling to secure them.
It's not just a Kubernetes problem. As many have posted, plenty of databases, other types of clusters, and shares are accessible without auth to those who know how to look for them (not that hard nowadays), mainly malicious actors.
Nice, Monero mining
Cryptocurrency makes the bug bounty market A LOT more efficient than companies, legislation, or HackerOne ever could.
Yes - if it has a CPU and access to the public internet, someone will hack it and make it mine "crypto". Let's stop pretending we aren't aware that the internet of things exists and writing breathless stories every time a toaster, router, or adult toy starts churning out Monero.