Users should not use AES-CBC or GCM for encryption. Secretbox should be the default for storing secrets at rest, and users should be encouraged to use KMS.
I see where this is coming from and agree in spirit, but GCM is actually idiomatic Go and implemented through the crypto/cipher AEAD interface, which does about as good a job as any library at being user-proof.
I too would probably prefer code that used NaCl primitives over Seal/Open, but I would probably not flag code that didn't.
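For anyone who wants to see what the secretbox path looks like, here's a minimal sketch in Go (a random in-memory key stands in for real key management, which in practice would come from a KMS):

```go
// Minimal NaCl secretbox example: Seal authenticates and encrypts,
// Open authenticates and decrypts. The key here is random for the
// sketch; a real system gets it from a KMS or similar.
package main

import (
	"crypto/rand"
	"fmt"
	"io"

	"golang.org/x/crypto/nacl/secretbox"
)

func main() {
	var key [32]byte
	if _, err := io.ReadFull(rand.Reader, key[:]); err != nil {
		panic(err)
	}

	// secretbox uses a 24-byte nonce, large enough to generate randomly.
	var nonce [24]byte
	if _, err := io.ReadFull(rand.Reader, nonce[:]); err != nil {
		panic(err)
	}

	// Prepend the nonce to the ciphertext so Open can find it later.
	encrypted := secretbox.Seal(nonce[:], []byte("hello world"), &nonce, &key)

	// Decrypt: split the nonce back off, then authenticate-and-decrypt.
	var decNonce [24]byte
	copy(decNonce[:], encrypted[:24])
	plaintext, ok := secretbox.Open(nil, encrypted[24:], &decNonce, &key)
	if !ok {
		panic("decryption failed")
	}
	fmt.Println(string(plaintext))
}
```

The 24-byte nonce is big enough to generate randomly per message, which is a large part of what makes this construction hard to misuse.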
Good point, and I appreciate that the (updated) Kubernetes docs do a pretty good job of telling you what the implications of using aesgcm vs secretbox are.
However, I was surprised that XChaCha20-Poly1305 wasn't recommended. XChaCha appears to check all the boxes you mentioned, and its extended 192-bit nonce makes random nonces safe, largely sidestepping nonce-misuse concerns.
But why not use XChaCha20-Poly1305 over AES-GCM in Go? Both are "implemented through the crypto/cipher AEAD interface" and -- to my eyes -- seem equally user-proof. Why not take the bigger nonce size?
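To illustrate the "equally user-proof" point: both constructions hand back a cipher.AEAD, so the calling code is identical and only the constructor and nonce size differ. A minimal sketch, assuming golang.org/x/crypto/chacha20poly1305 is available:

```go
// AES-GCM and XChaCha20-Poly1305 both satisfy Go's cipher.AEAD
// interface, so a single seal helper works for either.
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"

	"golang.org/x/crypto/chacha20poly1305"
)

func seal(aead cipher.AEAD, plaintext []byte) []byte {
	// Random nonce of whatever size this AEAD wants: 12 bytes for
	// GCM, 24 for XChaCha20-Poly1305. The bigger nonce is what makes
	// random generation comfortable at scale.
	nonce := make([]byte, aead.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		panic(err)
	}
	// Prepend the nonce to the ciphertext.
	return aead.Seal(nonce, nonce, plaintext, nil)
}

func main() {
	key := make([]byte, 32) // both take a 256-bit key
	if _, err := io.ReadFull(rand.Reader, key); err != nil {
		panic(err)
	}

	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		panic(err)
	}

	xchacha, err := chacha20poly1305.NewX(key)
	if err != nil {
		panic(err)
	}

	fmt.Printf("gcm nonce: %d bytes\n", gcm.NonceSize())         // 12
	fmt.Printf("xchacha nonce: %d bytes\n", xchacha.NonceSize()) // 24
	_ = seal(gcm, []byte("same calling code"))
	_ = seal(xchacha, []byte("same calling code"))
}
```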
Does it make sense to make this recommendation even if the dev did not choose a vulnerable algorithm and there aren't any issues with implementation?
Using CBC in a Go program would be bad indeed.
Second, while you can make CBC secure, it isn't secure by default. New designs should generally avoid CBC mode in favor of a mainstream AEAD. So while I'd happily recommend Fernet to people --- it also dates back to a time when AEAD ciphers were a little less mainstream than they've become --- I would see CBC as a design smell in a newer library.
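To make the "not secure by default" contrast concrete, here's a rough sketch of what you have to bolt onto CBC yourself (PKCS#7 padding plus an encrypt-then-MAC step), all of which an AEAD like GCM does in a single Seal call. A sketch only, not a recommendation; new designs should just use an AEAD:

```go
// Sketch of making CBC "secure": pad, encrypt, then MAC the result.
package main

import (
	"bytes"
	"crypto/aes"
	"crypto/cipher"
	"crypto/hmac"
	"crypto/rand"
	"crypto/sha256"
	"io"
)

func encryptCBCThenMAC(encKey, macKey, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(encKey)
	if err != nil {
		return nil, err
	}

	// CBC needs the plaintext padded to the block size (PKCS#7).
	pad := aes.BlockSize - len(plaintext)%aes.BlockSize
	padded := append(plaintext, bytes.Repeat([]byte{byte(pad)}, pad)...)

	// Random IV, prepended to the ciphertext.
	out := make([]byte, aes.BlockSize+len(padded))
	iv := out[:aes.BlockSize]
	if _, err := io.ReadFull(rand.Reader, iv); err != nil {
		return nil, err
	}
	cipher.NewCBCEncrypter(block, iv).CryptBlocks(out[aes.BlockSize:], padded)

	// Encrypt-then-MAC: authenticate IV plus ciphertext, append the tag.
	// Skip this step (or MAC-then-encrypt) and you invite padding
	// oracles, which is exactly the "not secure by default" problem.
	mac := hmac.New(sha256.New, macKey)
	mac.Write(out)
	return mac.Sum(out), nil
}

func main() {
	encKey := make([]byte, 32)
	macKey := make([]byte, 32)
	rand.Read(encKey)
	rand.Read(macKey)
	if _, err := encryptCBCThenMAC(encKey, macKey, []byte("needs padding and a MAC")); err != nil {
		panic(err)
	}
}
```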
My point is just, these things all have rough edges.
So anywhere client certs are used for authN, if one is lost there's no way to revoke it, short of rolling the whole certificate authority.
When you combine that with the 200k+ Internet-exposed Kubernetes clusters, that's quite a large attack surface.
The GH issue for this has been open since 2015 https://github.com/kubernetes/kubernetes/issues/18982
> A database clustering system for horizontal scaling of MySQL
> Vitess combines many important MySQL features with the scalability of a NoSQL database. Its built-in sharding features let you grow your database without adding sharding logic to your application.
What a quirky project. Is this for folks who started out with MySQL then find themselves needing to scale out in "NoSQL" style?
> Vitess automatically rewrites queries that hurt database performance.
That sounds scary.
But YouTube (where Vitess originated) is hardly the only place scaling out MySQL. Facebook and Slack are two other prominent examples.
Not sure if that's a good thing.
Why is this a security issue? And beyond a naming mix-up, why does it happen?
> The container manager used in kubelet checks for docker daemon process either via pidfile or process name. While the pidfile points to the docker daemon process PID, the dockerProcessName constant stores a docker cli name (docker) instead of docker daemon name (dockerd).
They're trying to look up the process by a name the process isn't using.
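For illustration, a minimal sketch of that kind of lookup (findPidByName is a hypothetical stand-in, not kubelet's actual code): on Linux, /proc/&lt;pid&gt;/comm holds the process name the kernel sees, which for the Docker daemon is dockerd, so scanning for docker never matches the daemon:

```go
// Scan /proc for a process whose comm matches the given name.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func findPidByName(name string) (string, bool) {
	procDirs, err := filepath.Glob("/proc/[0-9]*")
	if err != nil {
		return "", false
	}
	for _, dir := range procDirs {
		comm, err := os.ReadFile(filepath.Join(dir, "comm"))
		if err != nil {
			continue // process may have exited; skip it
		}
		if strings.TrimSpace(string(comm)) == name {
			return filepath.Base(dir), true
		}
	}
	return "", false
}

func main() {
	// Looking up "docker" finds the CLI at best (if one happens to be
	// running); the daemon's comm is "dockerd", so it never matches.
	if pid, ok := findPidByName("docker"); ok {
		fmt.Println("found:", pid)
	} else {
		fmt.Println("no process named docker")
	}
}
```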
Kubernetes is 5 years old. This is very, very young for mission-critical infrastructure management software.
Having a certain level of doubt in young open source projects is responsible, in my opinion. I'm interested to hear other people's perspective on production-readiness of k8s for mission-critical applications.
However, realistically k8s is in heavy deployment across a wide variety of industries (public sector, financial services, retail, technology...), and it's clear that this kind of concern is not the primary consideration.
There were banks in the UK (Monzo) deploying k8s almost 3 years ago (https://monzo.com/blog/2016/09/19/building-a-modern-bank-bac...)
Monzo, on the other hand, was default-dead, so betting the farm on a relatively unproven technology perhaps wasn't risking as much. Nobody talks about the startups that used unproven tech and sank.
Also, k8s is far from only used in tech companies. The UK Home Office (not exactly a startup) was giving talks about its use of k8s in 2016: https://www.phpconference.co.uk/videos/2016/kubernetes-home-...
Whether or not something is a smart choice to use in mission-critical production applications doesn't depend on the number of big banks or big tech companies that use the technology.
At the end of the day, Kubernetes is a tool that will change very rapidly over the next 5 years. I could see k8s being a decent choice to use in a tech project that you expect to actively maintain and improve for the next 5+ years, AND if you (and your developers) are willing to invest time (potentially a lot of time) every year keeping up to speed with how k8s evolves through every version release. That's the primary risk in using something like k8s.
The last decade has been dominated by rapid adoption of technologies that were under heavy development at the time, from Ruby on Rails to Node.js to Golang to Rust.
The simple reality of modern IT is that companies are unwilling to wait until a technology has stabilized before making use of it.
Personally I'd rather they did, but my opinion has little weight in that regard.
A good example is the recent Equifax settlement: an offer for 147 million people to each claim $125 out of a pool of $31 million.
As an industry, IT is simply shamefully shoddy.
Given today's landscape of hardware and software exploits, adding a complex orchestration layer with identified issues seems like less than prudent behavior.
I currently work on Kubernetes in production and am migrating large clients onto these systems. I see a distinct lack of knowledge around securing systems, even more so when Kubernetes is added.
There is some good information about securing k8s around, although we could always do with more.
There's a free O'Reilly ebook on k8s security from Aqua (https://info.aquasec.com/kubernetes-security).
Also, the CIS benchmark for k8s is reasonably up to date, although it could use expanding, which should be happening for the next version.
On top of that, there have been quite a few conference talks about k8s security now; see https://www.youtube.com/playlist?list=PLKDRii1YwXnLmd8ngltnf... for some examples.
OpenBSD's security model is sound: simplify and secure.
Due respect to smart acquaintances who work on OpenBSD, but to most people who secure application deployment environments, this is not the reassuring statement OpenBSD seems to think it is.
What's funny about it is, if you're going to make up a benchmark (and theirs is contrived; it was "no remote vulnerabilities", as I recall, when I was involved with the project, then "no remote vulnerabilities in the default install", then "only one remote vulnerability in the default install"), make up one where your number is zero, not "just 2 in a heck of a long time".
But more substantively: the reason you run an operating system is to do stuff on it. It isn't 1996 any more and nobody gets public shell accounts on Linux systems or OpenBSD systems; similarly, remotely-exploitable vulnerabilities in other operating systems are also exceedingly rare, and so OpenBSD's benchmark excludes the local privilege escalations (LPEs) that actually make up the meaningful attack surface of a modern OS.
What's a more important question is what features the operating system provides to harden the non-default programs that inevitably have to run on it. OpenBSD has historically lagged here, though they're upping their game recently.
Despite briefly being involved with the project during "The OpenBSD Security Audit" in the late 1990s, I have a longstanding bias against OpenBSD that I should be up front with: we shipped an appliance on OpenBSD at Arbor Networks, and I spent several days debugging a VMM problem that would zombify pages of memory and gradually suffocate our systems. When I presented evidence to Theo, he said (not a literal quote) "don't bother me about this, Chuck Cranor" --- I think it's Chuck Cranor but could be wrong --- "wrote this VMM as his graduate project and I've got nothing to do with it". For whatever that's worth, I've felt OpenBSD is an unserious option for deploying real systems other than near-stateless network middleboxes ever since.
The fact that very few people, and I do mean very few, understand the low-level functions going on (like the multiple layers of NAT via iptables), and that they are simply struggling to keep it running, makes it pretty obvious they aren't qualified to run this in production.
I have been at Google HQ in Kubernetes discussions, and it's frightening how little people know about its internals.
These aren't amateurs off the street, either.
As for struggling to run it, our experience has been different. Granted, we're a small user: our largest cluster has just over 100 nodes, and our highest-volume service hits about 15k req/sec at peak. We're on GKE, which is a well-managed implementation, and that also makes it less risky. In two years of production the platform has been extremely reliable. Moreover, we've been able to do things that would have been a lot harder before, such as autoscaling the service I mentioned above so that we're not paying for capacity we don't need off peak.
Let the experts who designed it run it for you, almost like it was planned that way :)
Does anyone understand the JVM and servlet containers? Does anyone understand OpenSSL's state machine? Does anyone understand hardware load balancers? Does anyone understand speculative execution? Does anyone understand the Postgres query planner? Does anyone understand all the same-origin policies? Does anyone understand their laptop's power supply?
I've seen a lot of people build a lot of successful systems on things they don't know every detail of, even when not knowing those details is quite dangerous. That Kubernetes is yet another one of these building blocks isn't an indictment of Kubernetes, it's an indictment of the compulsion to understand everything.
When the entirety is so complex that seasoned engineers shrug when you ask what is wrong with the stack, you have a problem.
("Don't build the system you want to build, build the system I want you to build" isn't an answer.)
Thing is, everyone I have worked with uses k8s because it's the new cool toy. None of them has a requirement to create a large, expensive platform that costs more than simple hardware just so a company can bring products to market faster.
Everyone thinks they can save money with k8s. You won't. Especially in AWS.
I think it adds overhead, but it does allow you to maximize server density and usage, letting you use all 500 machines more effectively.
And it adds a plethora of attack vectors.
However, if that's the case, that's a distribution-specific issue and not really anything intrinsic to k8s.
Edit: there's a GH issue here: https://github.com/kubernetes/kubernetes/issues/81112