And the rest of this gibberish is just meaningless.
"""A resource is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind."""
OK, almost completely free of meaning ... then later:
"""A declarative API allows you to declare or specify the desired state of your resource and tries to keep the current state of Kubernetes objects in sync with the desired state."""
But wait, I thought the resource _was_ an API endpoint? The API allows me to declare the desired state of my API?
I know K8s is not Borg and I know a lot of xooglers hate Borg but at least the Borg documentation was concise and made sense.
A resource has an endpoint, i.e. a URL that represents its configuration and state. But a resource and its endpoint can be thought of as the same thing, in that the endpoint is the resource's canonical URI. Think REST/HATEOAS.
A resource is any object that Kubernetes can track. They include objects like services, pods, nodes, ingresses and so on. Together resources form an API.
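To make that concrete: here's a minimal sketch using the official Python client ("kubernetes" on PyPI). Every call below is just an HTTP GET against the resource's canonical URI; the pod name is a hypothetical placeholder, not anything from this thread.

    from kubernetes import client, config

    config.load_kube_config()  # uses your local kubeconfig
    v1 = client.CoreV1Api()

    # GET /api/v1/namespaces/default/pods -- the "pods" resource collection
    pods = v1.list_namespaced_pod(namespace="default")

    # GET /api/v1/namespaces/default/pods/my-pod -- one object of kind Pod
    # ("my-pod" is a hypothetical name)
    pod = v1.read_namespaced_pod(name="my-pod", namespace="default")
    print(pod.metadata.name, pod.status.phase)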
A CRD is a Custom Resource Definition. It defines the schema and API of a custom resource (as opposed to a built-in one like "service" or "pod").
A resource is pure declarative data -- configuration and state commingled in one JSON document -- but can have behaviour associated with it.
(Confusingly, Kubernetes also uses the word "resource" to refer to compute resources like CPU and RAM. For that reason, you'll often see "object" used instead.)
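On the declarative-data point above, you can see the commingling directly: every object document carries a spec (the state you declared) next to a status (the state the controllers observe). A sketch, again with the Python client and a hypothetical Deployment name:

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    # "my-deploy" is a hypothetical name; any existing Deployment works.
    d = apps.read_namespaced_deployment(name="my-deploy", namespace="default")

    print("declared (desired):", d.spec.replicas)           # what you asked for
    print("observed (current):", d.status.ready_replicas)   # what's running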
After reading your description, "a resource is an endpoint" still seems wrong. Perhaps just lazy, but laziness in technical docs gets confusing fast.
Kubernetes is fairly gigantic now and still sprawling out in all sorts of directions, feature-wise.
"The full-time job of keeping up with Kubernetes" was published in February 2018 and nothing's slowed down.
https://gravitational.com/blog/kubernetes-release-cycle/ (and the HN discussion at the time: https://news.ycombinator.com/item?id=16285192)
The opsys bros I know, who are used to hacking together docker-composes more insecure than your grandmother's Internet Explorer in 30 seconds, slapping a JSON-parsing logger on top, and calling themselves geniuses, fail miserably when they're left to design, build, and optimise secure Kubernetes clusters and elegantly port a variety of full-stack web applications in a cost-efficient manner.
It requires planning and design. Just because you haven't invested hundreds of hours into reading the docs, testing, and building k8s clusters (for which the docs are incredibly useful) doesn't mean you have to attack the verbiage; just read the docs. If you can't do that, then maybe you're better off investing your criticism in a domain of expertise you're actually knowledgeable and/or experienced in.
Also "kind" represents the type of k8s object, like service or deployment. It's not ambiguous at all, it just requires reading to the third paragraph of "general Kubernetes concepts" in the Kubernetes docs to understand it.
A "resource" === "endpoint" in the sense that the endpoint defines where you can define the configuration of a specific resource. They are being specifically vague I am assuming because of the concept of CRDs.
But without that sort of knowledge ("Hey, Kubernetes has a customizable API using custom resources!"), that sentence is rather hard to unpack.
edit: Thinking about it some more, I suppose "endpoint" is an overloaded term w.r.t. Pod IPs.
1. The CustomResourceDefinition itself, a document that tells Kubernetes "Hey, I have a kind of custom resource I want you to know about".
2. Instances of that custom resource that have been submitted to Kubernetes. "Submitted how?", you ask, which leads to:
3. When you submit the CRD, Kubernetes automatically creates HTTPS endpoints based on the CRD's group, version, and plural name, and accepts instances of that custom resource at those endpoints (see the sketch below).
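A hedged sketch of all three pieces using the Python client. The CronTab group/kind is the example the Kubernetes docs conventionally use; the schema and names are otherwise illustrative:

    from kubernetes import client, config

    config.load_kube_config()

    # (1) The CRD itself: "hey, here's a new kind called CronTab".
    crd = client.V1CustomResourceDefinition(
        api_version="apiextensions.k8s.io/v1",
        kind="CustomResourceDefinition",
        metadata=client.V1ObjectMeta(name="crontabs.stable.example.com"),
        spec=client.V1CustomResourceDefinitionSpec(
            group="stable.example.com",
            scope="Namespaced",
            names=client.V1CustomResourceDefinitionNames(
                plural="crontabs", singular="crontab", kind="CronTab"),
            versions=[client.V1CustomResourceDefinitionVersion(
                name="v1", served=True, storage=True,
                schema=client.V1CustomResourceValidation(
                    open_apiv3_schema=client.V1JSONSchemaProps(
                        type="object",
                        properties={"spec": client.V1JSONSchemaProps(
                            type="object",
                            properties={"cronSpec": client.V1JSONSchemaProps(
                                type="string")})})))]))
    client.ApiextensionsV1Api().create_custom_resource_definition(crd)

    # (3) Kubernetes now serves
    #     /apis/stable.example.com/v1/namespaces/{ns}/crontabs
    # and (2) instances get submitted to that endpoint:
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="stable.example.com", version="v1",
        namespace="default", plural="crontabs",
        body={"apiVersion": "stable.example.com/v1", "kind": "CronTab",
              "metadata": {"name": "my-crontab"},
              "spec": {"cronSpec": "*/5 * * * *"}})

The endpoint in (3) is exactly the resource-is-an-endpoint idea from upthread: defining a CRD literally mints a new URL.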
What makes dealing with CRDs confusing, in my experience, is that "CRD" is used in two senses. One is the actual resource, the actual chunk of YAML submitted to Kubernetes that contains "kind: CustomResourceDefinition".
The second sense is to bundle together all the things that can be done with or flow from CRDs. Usually this is where you hear about controllers, operators and so on, but it will still be called "the Foo CRD", even though that's like referring to a 3-tier application as "the Foo table schema".
We were at a crossroads. Either we hire an SRE with extensive Kubernetes experience for $130K/year, or we move to a managed platform until we actually need those capabilities.
We're happy with the move and the pressure relief is tremendous. Now we can focus on features, not on whether a deploy is picking up ENV vars from our local machines and crashing the system.
Running Kubernetes is vastly different from using it to run your apps. You really shouldn't do the former unless you have to be on-prem or have some other need.
"Managed Kubernetes" really runs the spectrum between "one step above just installing it yourself on a bunch of VMs" and "I spend 1% of my time managing anything below the product." Each cloud provider exists somewhere different on this spectrum, with none of them being in quite the same location, and some of them have multiple different products which exist at different points.
For example: AWS is among the most bare-bones. EKS is just a managed control plane; coming from GKE, you might click "create a cluster" and then be very confused that there are no options for, say, instance size or node count... because you have to do all of that yourself. There are tools like eksctl or Rancher which can help, but ultimately you're managing those instances.

You're doing capacity planning (you think kube would be a great pick to integrate with spot fleets, because it can reschedule workloads onto a new instance when one goes down? Have fun setting it up; hope you like ops work). You're doing auto-scaling (and that ASG? It's not going to know about your pod resource requests, so you either need some very smart manual coordination between the two, or you need to set up cluster-autoscaler). You're setting up cluster metrics (you definitely need metrics-server -- not Heapster, that was last year, metrics-server is this year -- but how to visualize? Do I host Grafana in the cluster? Then I need to worry about authn. CloudWatch really isn't made for these kinds of things... maybe I'll just give Datadog a few thousand bucks). Crap, 1.16 is out already? They only support nine months of releases with security updates?! I feel like I just upgraded my nodes! Oh well, time to lose a day replicating this upgrade across all of my environments.
I'd go on, but you get the point. There is nothing "managed" about EKS.
DigitalOcean is pretty similar to this (it does provision instances, but the tooling beyond that is bare-bones). Google Cloud/GKE is "more managed" in a few senses: the cloud dashboard provides some great management capabilities out of the box, such that you may not need to reach for something like Datadog, and the autoscaler works really well without a lot of tinkering. There are still underlying instances, so you're worrying about ingress protection, OS hardening, OS upgrades, etc... but it's not as bad as AWS. Not by a long shot.
The holy grail (for some companies) is really something like Azure AKS + Azure Container Instances. No instances to manage. Click a button for a Kubernetes cluster. Schedule workloads. Get functional metrics, logging, tracing, and dashboards out of the box. Don't worry about OS upgrades, hardening, autoscaling, or upgrading the cluster; we'll do it all for you, or at least make it one click to configure. That's the ideal situation. I haven't used AKS/ACI so I can't comment on whether Azure gets us there, but the idea is sound, even if it's more expensive.
This sounds like an anti-Kube post, right? Wrong (it's a Tide ad). The beautiful thing about Kubernetes is that it can span this spectrum. The same exact API surface can scale from a fully managed abstract platform where you just say "take this git repo and run it" (see: Gitlab Auto-DevOps), all the way to powering millions of workloads across dozens of federated clusters at Fortune 500 companies.
But, to the OP's point: we're close to solving the right end of that spectrum, and a lot further away from the left end. We're getting there, but we're not there yet. There isn't enough abstracted management of these compute resources... yet. But there's enough money and desire out there that I know we'll get there.
Haha - I had to chuckle at this. It really is this bad on EKS. My god, upgrades are a total joke.
As bad as it is, you have to believe AWS wants to ignore Kubernetes and lock you in with ECS.
They need to get EKS as easy to manage as GKE, and they need to provide a free control plane/masters like all the other managed k8s services. Hopefully we see something at re:Invent this year... at this point there is no reason to use EKS unless you are locked into AWS. Which, unfortunately, is what AWS is counting on :(
GKE is far more managed, though, with advanced features like global networking, aliased IPs, global load balancing, Istio/traffic-management integration, private IPs, metrics-based autoscaling, preemptible nodes, local SSDs, GPUs, TPUs, etc.
Azure Container Instances are nice, with the GCP corollary being Cloud Run, which is built on Knative. Both products are designed to quickly run a container image with a public endpoint, but ACI is not part of AKS. You might be thinking of the Azure Virtual Kubelet, which lets you burst on demand to ACI, but this is a very advanced use case and not the normal cluster setup.
Google Cloud has had that since before it was called Google Cloud -- I mean Google App Engine. It has all those features, and with its "Standard" environment you don't even need to build a Docker image.
...and since it's based on Knative, it's portable-ish.
I use 1 linode for personal stuff, but otherwise, I'm all in on dedicated baremetal. From my perspective, DO is just another cloud vendor.
DO is tier 2 because it provides basic IaaS components along with S3/Spaces, managed databases, load balancing, and K8s, as compared to providers like Linode, which would be tier 3 and below.
Just compare the list on the product pages:
Also, DO doesn't have a lot of the more niche services like Redshift, Mail, or ML.
It sounds like the primary reason you're deciding to move away from it is because you don't face any of the problems that it's there to solve, rather than it being an operational burden.
We're migrating to k8s, and it's fantastic for us, but we have a large complex system that we're moving into the cloud, which really benefits from k8s features - loving the horizontal pod autoscalers, especially when I can expose Kafka topic lag to them (via Prometheus) as a metric to scale certain apps on.
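For the curious, this is roughly what that looks like through the API -- a sketch assuming a reasonably recent Python client, and that something like prometheus-adapter is already exposing a "kafka_consumergroup_lag" external metric. All names and thresholds here are illustrative, not our production config:

    from kubernetes import client, config

    config.load_kube_config()

    # Scale "consumer-app" between 1 and 20 replicas, targeting an average
    # consumer-group lag of 1000 messages per pod. Assumes an adapter
    # (e.g. prometheus-adapter) is serving this as an external metric.
    hpa = client.V2HorizontalPodAutoscaler(
        api_version="autoscaling/v2",
        kind="HorizontalPodAutoscaler",
        metadata=client.V1ObjectMeta(name="consumer-app"),
        spec=client.V2HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V2CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="consumer-app"),
            min_replicas=1,
            max_replicas=20,
            metrics=[client.V2MetricSpec(
                type="External",
                external=client.V2ExternalMetricSource(
                    metric=client.V2MetricIdentifier(
                        name="kafka_consumergroup_lag"),
                    target=client.V2MetricTarget(
                        type="AverageValue", average_value="1000")))]))
    client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa)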
Kubernetes can run a simple container with one line if that's all you need, or you can scale up to several different services with a few more files, all the way up to a massive deployment with thousands of containers. How you use it is up to you, but you do need to read and understand the basics.
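The one line being something like "kubectl run hello --image=nginx"; the API-level equivalent is barely longer. A minimal sketch with the Python client (names and image are placeholders):

    from kubernetes import client, config

    config.load_kube_config()

    # API-level equivalent of `kubectl run hello --image=nginx`:
    # one Pod, one container, nothing else.
    pod = client.V1Pod(
        api_version="v1",
        kind="Pod",
        metadata=client.V1ObjectMeta(name="hello"),
        spec=client.V1PodSpec(containers=[
            client.V1Container(name="hello", image="nginx")]))
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)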
However, there's absolutely no need to run Kubernetes yourself unless you have a serious reason to. If you must, I highly recommend using something like Rancher that can install and manage the cluster for you, even on cloud providers.
So why did you adopt a tool you had no use for to begin with? Sounds like poor judgement all around, from the adoption to the complaints.
Of course, not every workload benefits from the features k8s provides.
I think AWS still hamstrings Kubernetes because it is holding onto ECS. Hopefully they see the light soon, beef it up, and make it equal to AKS/GKE. At least make the masters/control plane free... ugh.
At home, I'm happy to play around with such things—and just might (I just purchased an old Dell server to take care of some home tasks and provide a sandbox environment). But at work I want to be as far away from the bare metal (physical machines or VMs) as possible.
Actually, it's meant for virtually no one, because very few people have the problems it solves. Like autoscaling groups, serverless, and cloud itself, it's a tool that solves a pain point in a specific domain. About 0.001% of the business world has that problem.
> We're happy with the move and the pressure relief is tremendous.
I'll bet it was. I've convinced businesses to go in the completely opposite direction to K8s. I've told them to develop a monolith before they start optimising into microservices. The startups that listened to me are still around and in round C. The others (minus one) closed up shop ages ago because they never got to market in time.
K8s is a tool. Docker is a tool. Cloud is a tool. Businesses have to utilise tools as efficiently as possible to get their solutions out the door if they're to survive. Using K8s from the ground up is a death sentence.
As a case in point: I just set up a small new app on GKE. Because I'm experienced with Kubernetes, within a few minutes I had my app (a React front-end written in TypeScript and a backend written in Go) running and receiving HTTPS traffic. I entered just a handful of shell commands to get from an empty cluster to a functional one.
It's a cluster with a single node, no resilience. But this is genuinely useful, and a perfectly appropriate use case for Kubernetes. The alternative is the old way — VMs, perhaps with a sprinkling of Terraform and/or Salt/Ansible/Chef/Puppet, dealing with Linux, installing OS packages, controlling everything over SSH — or some high-level PaaS like AppEngine or Heroku.
While it's an example of Kubernetes "scaling down", I'm also now free to scale up. Today it's a small app with essentially zero traffic, but when/if the need arises, I can expand the node pool, add a horizontal pod autoscaler, let the cluster span multiple regions/zones, and so on, and I'll have something that can handle whatever is thrown at it.
My company has a ton of apps on multiple Kubernetes clusters, none of them very big. From my perspective, it's a huge win in operational simplicity over the "old" way. Docker itself is a benefit, but the true benefits come when you can treat your entire cluster as a virtualized resource and just throw containers and load balancers and persistent disks at it.
I've also managed a built-from-ground-up Kubernetes cluster. It's not rocket science. That said, I wouldn't do it in a company without a dedicated ops person.
And I agree. However...
I'm referring to the companies and people who don't think like you do and instead think, "We need to build everything as a microservice and have it orchestrated on K8s". I'm thinking of the people who have a monolith that's scaling fine, solving a problem, and generating income, but an executive has been sold on K8s and now wants to refactor it.
Your use case makes perfect sense, but very few people are thinking like that, sadly.
In my opinion if you're starting out with Docker and K8s, you're going down the right path, but only provided you're not starting with microservices.
Like you said, it's a tool, and as with most tools there's a balance to strike between using it effectively and over-engineering its use.
It just takes some experience. I mostly disable SSH, for example. You bake your images once rather than installing everything on each boot.
That seems quite significant. I still come across those regularly.
That would be kind of an operator in reverse.