Kubernetes is removing the "dockershim", which is the special in-process support the kubelet has for docker.
However, the kubelet still has the CRI (Container Runtime Interface) to support arbitrary runtimes. containerd is currently supported via the CRI, as is every runtime except docker. Docker is being moved from having special-case support to being supported the same way as other runtimes.
Does that mean using docker as your runtime is deprecated? I don't think so. You just have to use docker via a CRI layer instead of via the in-process dockershim layer. Since there hasn't been a need until now for an out-of-process cri->docker-api translation layer, there isn't a well-supported one, I don't think, but now that they've announced the intent to remove dockershim, I have no doubt that there will be a supported cri->docker layer before long.
Maybe the docker project will add built-in support for exposing a CRI interface and save us an extra daemon (as containerd did).
In short, the title's misleading, from my understanding. The kubelet is removing the special-cased dockershim, but k8s distributions that ship with docker as the runtime should be able to run a cri->docker layer to retain docker support.
For more info on this, see the discussion on this pr: https://github.com/kubernetes/kubernetes/pull/94624
I've personally switched to bazel for building most of my containers but that's a far departure from what the majority of people are doing I suspect.
Maybe I'm mixing things up; please correct me wherever needed.
I suspect this will nuke a huge number of tutorials out there though & frustrate newbies.
One difference is that if you 'docker build' or 'docker load' an image on a node, with docker as the runtime a pod could be started using that image, but if containerd is the runtime it would have to be 'ctr image import'ed instead.
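Roughly, the difference looks like this (sketch only; image and file names are placeholders, and the CRI plugin keeps its images in containerd's 'k8s.io' namespace):

    # with dockerd as the runtime, a locally built image is visible to pods
    docker build -t myapp:dev .

    # with containerd as the runtime, export from docker and import into containerd instead
    docker save -o myapp.tar myapp:dev
    ctr -n k8s.io image import myapp.tar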
I know that minikube, at some point, suggested people use 'DOCKER_HOST=..' + 'docker build' to make images available to that minikube node, which would no longer work after this change.
It would be nice if k8s had its own container image store so you could 'kubectl image load' in a runtime agnostic way, but unfortunately managing the fetching of container images has ended up as something the runtime does, and k8s has no awareness of above the runtime.
Oh, and for production clusters, a distribution moving from dockerd to containerd could break a few things, like random gunk in the ecosystem that tries to find kubernetes pods by querying the docker api and checking labels. I think there's some monitoring and logging tools that do that.
If distributions move from docker to docker-via-a-cri-shim, that won't break either of those use cases of course.
What does this mean? I thought that Kubernetes manages Docker containers which makes the title kind of confusing.
Kubernetes can use the docker runtime (dockerd) to run OCI containers, but Docker Inc strongly discourages using the docker runtime directly for infrastructure. The docker runtime imposes a lot of opinionated defaults on containers that are often unwanted by infrastructure projects. (For example, docker will automatically edit the /etc/hosts file in containers, in a way that makes little sense for Kubernetes, so Kubernetes has to implement a silly workaround to avoid this.)
Instead, Docker Inc recommends using containerd as the runtime. containerd implements downloading, unpacking, creating CRI manifests, and running the resulting containers, all without layering docker's opinionated defaults on top. Docker itself uses containerd to actually run the containers, and plans to remove its own download code in favor of using the one from containerd too.
The only advantage to using docker proper for infrastructure projects is that you can use the docker cli for introspection and debugging. Kubernetes has created its own very similar cli that works with all supported backend runtimes, and also can include relevant Kubernetes specific information in outputs.
Is there a list of these defaults or other downsides to using docker instead of containerd?
Much of this stems from the flak infrastructure people gave docker when they made swarm part of the engine. But it amounts to more than that. Docker has its own take on networking, on volumes, on service discovery, etc. If you are trying to use docker as a component of your own product, at least some of these are likely things you want to implement differently. And the same may well be true of any new features docker wants to add in the future. At which point one must ask why bother using docker directly?
containerd was quite literally created when docker decided to extract the parts of docker that projects like kubernetes might want to use. It has evolved heavily since then, but that really does capture the level at which it sits. This leaves dockerd in charge of things like swarm, docker's view on how networking should work, docker's take on service discovery, docker's view on how shared storage should work, building containers, etc.
There is another, lower level of runtime, the OCI runtime, of which the main implementation is runc. Alternatives have interesting attributes, like `runv` running containers in VMs with their own kernel to get even greater isolation, `runhcs` which is the OCI runtime for running Windows containers, etc. Most if not all of the higher level runtimes allow switching out the OCI runtime, but in general sticking with the default of `runc` is fine.
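For illustration, docker exposes this choice per container via --runtime; anything other than the default has to be registered in the daemon config first (runv below is just an example, assuming it has been registered in /etc/docker/daemon.json):

    docker run --runtime=runc alpine echo "default OCI runtime"
    docker run --runtime=runv alpine uname -r   # VM-backed runtime, if registered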
Does it do everything that the docker cli does? Build, pull, etc.?
If you need builds, I'd suggest either running dockerd alongside a containerd-backed kubelet, or using buildkit.
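A rough sketch of a BuildKit build without dockerd (assumes a running buildkitd and a registry you can push to; names are placeholders):

    buildctl build \
      --frontend dockerfile.v0 \
      --local context=. \
      --local dockerfile=. \
      --output type=image,name=registry.example.com/myapp:dev,push=true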
Very valuable. Thanks!!!
There are others... some for non-Docker image support. There are people running things other than just Docker these days. They are more niche cases.
You wrote dockerd without caps.
Here's an explanation I found helpful:
You can check out the project here: https://github.com/vmware-tanzu/buildkit-cli-for-kubectl
There are also some pretty cool features. It supports building multi-arch images, so you can do things like create x86_64 and ARM images. It can also do build layer caching to a local registry for all of your builders, so it's possible to scale up your pod and then share each of the layers for really efficient builds.
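The plugin's exact CLI may differ, but the underlying BuildKit multi-arch invocation looks roughly like this with docker buildx (registry name is a placeholder):

    docker buildx build \
      --platform linux/amd64,linux/arm64 \
      -t registry.example.com/myapp:dev \
      --push .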
Container images nowadays can be built by a variety of tools, and run by a variety of tools, with Docker likely being the most popular end-user tool with the most history and name recognition. Others like Podman/Buildah are differently-architected replacements.
As long as a container meets the open container specs, it can be built with whatever tool and run on whatever tool that also follows the specs.
The part of Kubernetes that runs containers has had a shim for docker along with an interface for runtimes to use. It's called the Container Runtime Interface (CRI). The docker shim that worked alongside CRI is being deprecated and now all runtimes (including Docker) will need to use the CRI interface.
These days there are numerous container runtimes one can use. containerd and cri-o are two of them. Container images built with Docker can be run with either of these without anyone noticing.
1. [common, informal] "An OCI container".
2. [pedantic, strictly accurate] "A set of tools for building & interacting with OCI containers".
This article is talking about the latter definition.
Docker is pretty much a textbook example of why you probably shouldn't use the same word for a lot of different things.
Better yet, .Net.
The submitter can. This kind of misses the point anyway. The title is misleading.
Not a big deal. It's some backend stuff that's not interesting to people who use managed k8s. Cool cool.
Is this a testament to, or an indictment of, how abstracted our systems have become?
I took an Operating Systems class decades ago in school in which I wrote a toy OS, but at this point I couldn't tell you much about how operating systems really work, yet I deploy software to them every day. That is fine; it is the nature of computers, they are basically abstraction machines. And OSes are pretty mature and stable, I don't really ever need to debug the OS itself in order to deploy software to one, for the kind of software I write. (Others might need to know more.)
But personally I still haven't figured out how to use K8, heh.
There is nothing “un abstract” about running applications on VMs or machines. We’re just evolving the abstractions that we work with. Before it was VMs, then containers and now containers + orchestrators. In the future it will be some other abstraction.
Every step of the way, we’ve made this transition for compelling reasons. And it will happen again.
Anybody claiming they need to know everything about their dependencies is being unrealistic.
As a user you should know the different types of namespacing that affect containers without necessarily knowing that/how your runtime calls clone() to do it. And as a sysadmin you had better know how all the components fit together and their failure modes because you’re the one supporting them.
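As a rough illustration of what those namespaces are (just the kernel primitives a runtime uses, with no images, cgroups or layering involved):

    # a crude "container": new mount, PID, network, UTS and IPC namespaces
    sudo unshare --fork --pid --mount-proc --net --uts --ipc --mount /bin/sh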
Different people have different views of any technology so someone’s necessary understanding as a user of managed k8s can be different than a sysadmin who is a user of k8s code itself.
The only "official" notice about it so far seems to be in the linked changelog.
It seems that Docker images will still run fine on k8s. The main change is that they're moving away from the "Docker runtime", which is supposed to be installed on each of the nodes in your cluster.
More details about k8s container runtimes here: https://kubernetes.io/docs/setup/production-environment/cont...
You lose out on things that require access to the docker daemon socket, but ideally any such software should be replaced with something that talks with the kubernetes API instead. (exception is building containers in cluster. If you need that, run docker side by side with the kubelet, or use buildkit with containerd integration). You also lose the ability to interact with containers with the docker cli tool. Use crictl instead, which has most of the same commands, but also includes certain k8s relevant information in output tables.
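For example, the rough crictl equivalents (container IDs are placeholders):

    crictl ps                   # running containers
    crictl pods                 # pod sandboxes (no direct docker equivalent)
    crictl images               # images known to the CRI runtime
    crictl logs <container-id>
    crictl exec -it <container-id> sh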
K8sContributors on Twitter: "#Kubernetes 1.20 introduces an important change to the kubelet - the deprecation of #Docker as a container runtime option. What does this mean and why is it happening? You can learn more in our blog post! https://t.co/lzPPzwXUNM" / Twitter - https://twitter.com/K8sContributors/status/13343017328309903...
Don't Panic: Kubernetes and Docker | Kubernetes - https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-...
Dockershim Deprecation FAQ | Kubernetes - https://kubernetes.io/blog/2020/12/02/dockershim-faq/
https://twitter.com/IanColdwater/status/1334149283449352200 also has some details
Actually I’m nearly always lost with kubernetes. It’s either broken or changing.
You have a Pod definition, which is basically a gang of containers. In that Pod definition you have included one or more container image references.
You send the Pod definition to the API Server.
The API Server informs listeners for Pod updates that there is a new Pod definition. One of these listeners is the scheduler, which decides which Node should get the Pod. It creates an update for the Pod's "status", essentially annotating it with the name of the Node that should run the Pod.
Each Node has a "kubelet". These too subscribe to Pod definitions. When a change shows up saying "Pod 'foo' should run on Node 27", the kubelet in Node 27 perks up its ears.
The kubelet converts the Pod definition into descriptions of containers -- image reference, RAM limits, which disks to attach etc. It then turns to its container runtime through the "Container Runtime Interface" (CRI). In the early days this was a Docker daemon.
The container runtime now acts on the descriptions it got. Most notably, it will check to see if it has an image in its local cache; if it doesn't then it will try to pull that image from a registry.
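To make that flow concrete, a minimal Pod definition sent to the API server might look like this (purely illustrative):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: foo
    spec:
      containers:
      - name: web
        image: nginx:1.19
        resources:
          limits:
            memory: "128Mi"
    EOF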
Now: The CRI is distinct from the Docker daemon API. The CRI is abstracted because since the Docker daemon days, other alternatives have emerged (and some have withered), such as rkt, podman and containerd.
This update says "we are not going to maintain the Docker daemon option for CRI". You can use containerd. From a Kubernetes end-user perspective, nothing should change. From an operator perspective, all that happens is that you have a smaller footprint with less attack surface.
The fact that so many other orgs, many of which are startups or just small to medium sized tech companies, use a system this complex is ludicrous to me.
I've seen way too many Ansible nightmares grown out of deceptively "simple" mutable VM deployments.
k8s makes our life so much easier because it eliminates a whole bunch of other complexity. Easily reproducible development environments, workload scheduling, sane config management...
It's kind of like if you had a shell script to launch programs, and it used to move the mouse to press icons, but now you've deprecated that and will only run programs directly.
While most folks emphasise the control loop aspect, I think it's more helpful to point to blackboard / tuple-space systems as prior art.
Ah, nevermind then, that gives me rest concerning the viability and transparency of it all.
It's not more complex. If you're small, the overhead is probably not worth it. If you're big enough, you manage the k8s control plane, and you don't have to manage your tenants infra.
Is a programming language too complex for hello world? Perhaps, but that's not all it does.
As I understand it, dockershim makes the docker daemon CRI compliant. But dockerd already uses containerd, which is CRI compliant. So why can't the kubelet directly interact with containerd's APIs without dockershim?
If someone wants to use kubelet + docker so that they can, for example, ssh into a node and type 'docker ps' to see containers, or have something else using the docker api see the containers the kubelet started, that won't work after re-pointing the kubelet from docker to containerd.
The difference here is namespacing: not the Linux kernel namespaces containers use, but rather the containerd concept of the same name that allows "multi-tenancy" of a single containerd daemon.
In addition, I don't think you could have docker + cri run in the same containerd namespace since they end up using different networking and storage containerd plugins. I think that terminology is right.
So yeah, repointing the kubelet to containerd directly works fine, but it won't be the same thing as running docker containers.
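You can see the split on a node that runs both; dockerd keeps its containers in containerd's 'moby' namespace, while the kubelet's CRI containers live in 'k8s.io' (sketch):

    ctr namespaces list
    ctr -n moby tasks list      # containers started via dockerd
    ctr -n k8s.io tasks list    # containers started via the kubelet / CRI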
Each kubelet does its thing through the Container Runtime Interface (CRI), so in a sense it doesn't know what it's running on. If it used containerd's interfaces directly, it wouldn't be possible to substitute in a different option.
For example, there are emerging VM-based approaches like Firecracker and VMware "Project Pacific" (disclosure: I work at VMware).
Glad I'm not the only one. I'm sure I'm not the smartest engineer/sysadmin in the world, but I'm also not the dumbest and I have never gotten an on-premises Kubernetes installation to work.
The way I manage containers is lxc and shell scripts. I understand it, and it works.
Don't tell the boss or the customers, but most of the time when release notes for a new version come out, I look at them and go "WTF, why do we need that, I better do some research." It's fast changing, complex as hell, and absolutely brutal. That said most things are there for a reason and once I dig in I usually see the need.
That said I do love it despite its warts. There's no doubt some Stockholm Syndrome at play here, but I love the API (which is pretty curlable btw, a mark of a great API IMHO) and the principles (declarative, everything's a YAML/JSON object in etcd, etc). I see it the same way I did C++ (which I also loved). It gives you great power which you can use to build an elegant, robust system, or you can create an unmaintainable, complex, monster of a nightmare. It's up to you.
https://goteleport.com/blog/kubernetes-release-cycle/, HN discussion: https://news.ycombinator.com/item?id=16285192
What is deadly difficult is getting networking to work. Even a comparably "easy" thing with a couple of two-NIC machines (one external, internet-routable, one DMZ) cost me a fucking week.
What's even worse is when one has to obey corporate restrictions - for example, only having external interfaces on "loadbalancer" nodes:
- First of all, MetalLB only has one active Speaker node which means your bandwidth is limited to that node's uplink and you're wasting the resources of the other loadbalancers.
- Second, you can taint your nodes to only schedule the MetalLB speaker on your "loadbalancer" nodes via tolerations... but how the f..k do you convince MetalLB to change the speaker node once you apply that change?!
- Third, what do you do when you want to expose N services but only have one or two external IPs? DC/OS was way more flexible, you had one set of loadbalancers (haproxy) that did all the routing, and could run an entire cluster on four machines - two LBs, one master, one worker. There is no way to replicate this with Kubernetes. None.
It's not Kubernetes but it doesn't try to be at all.
That said, depending on where you were working, things could also change fast. You could find that a kernel system call had changed because someone patched it the evening before.
Coincidentally, today I watched three presentations about burning Kubernetes clusters and all of them had Docker daemon issues in the mix. I’ve been using Docker for over five years myself and I’ve been using Kubernetes for almost two years now. The most pain I encountered was with Docker or its own ecosystem.
In the last two years it always had some weird racy situations where it damaged its IPAM or simply couldn’t start containers after a restart anymore. Also its IPv6 support is just a joke.
Sorry, I had to rant and I hope that this announcement will fuel the development of Docker alternatives even more.
- 99% of Kubernetes deployments use dockerd as a runtime
- 99% of dockerd deployments use containerd as a runtime
- containerd can be called directly by kubernetes via cri-containerd
- Therefore most Kubernetes deployments can, and should, be simplified by calling containerd directly.
- This deprecation notice will make this transition happen sooner.
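As of this release, pointing a kubelet at containerd directly looks roughly like this (flag names are the current ones and may change; the socket path is the usual default):

    kubelet \
      --container-runtime=remote \
      --container-runtime-endpoint=unix:///run/containerd/containerd.sock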
This is the natural consequence of Docker itself splitting out its runtime into containerd.
"Docker support in the kubelet is now deprecated and will be removed in a future release. The kubelet uses a module called "dockershim" which implements CRI support for Docker and it has seen maintenance issues in the Kubernetes community. We encourage you to evaluate moving to a container runtime that is a full-fledged implementation of CRI (v1alpha1 or v1 compliant) as they become available. (#94624, @dims) [SIG Node]"
It seems like containerd is maintained by The Linux Foundation, a group of people who mostly don't even run Linux (most of their releases and media material is made on Macs).
I dunno. I don't like the direction things are going in the open source world right now.
The bit that absolutely fucking sickens me is how these transactions are often dressed up in language with free software intonations like "community", "collaboration" etc. Institutionalized doublethink is so thick in the modern free software world that few people even recognize the difference any more. As an aside, can anyone remember not so long ago when Google wouldn't shut up about "the open web"? Probably stopped saying that not long after Chrome ate the entire ecosystem and began dictating terms.
The one mea culpa for Docker is that the sales folk behind Kubernetes haven't the slightest understanding of the usability story that made Docker such a raging success to begin with. The sheer size of the organizations they represent may not even allow them to recreate that experience if indeed they recognized the genius of it. It remains to be seen whether they'll manage that before another orchestrator comes along and changes the wind once again. The trophy could still be stolen, there's definitely room for it.
The whole idea of containerization came from Google anyways, who uses it internally. Docker came out with their container system without understanding what made it work so well for Google. They then discovered the hard way that the whole point of containers is to not matter, which makes it hard to build a business on them.
Docker responded by building up a whole ecosystem and doing everything that they could to make Docker matter. Which makes them a PITA to use. (One which you might not notice if you internalize their way of doing things and memorize their commands.)
One of my favorite quotes about Docker was from a Linux kernel developer. It went, "On the rare occasions when they manage to ask the right question, they don't understand the answer."
I've seen Docker be a disaster over and over again. The fact that they have a good sales pitch only makes it worse because more people get stuck with a bad technology.
Eliminating Docker from the equation seems to me to be an unmitigated Good Thing.
Not really. Jails and chroots are a form of containerization and have existed for a long time. Sun debuted containers (with Zones branding) as we think of them today long before Google took interest, and still years before Docker came to the forefront.
> I've seen Docker be a disaster over and over again. The fact that they have a good sales pitch only makes it worse because more people get stuck with a bad technology.
> Eliminating Docker from the equation seems to me to be an unmitigated Good Thing.
Now this I agree with, Docker is a wreck. Poor design, bad tooling, and often downright hostile to the needs of their users. Docker is the Myspace of infra tooling and the sooner they croak, the better.
Yes, we had chroot, jails, and VMs long before. I'd point to IBM's 360 model 67, which was released in 1967, as the earliest example that I'm aware of. A typical use before containerization was shared hosting. But people thought of and managed those as servers. Maybe servers with some scripting, but still servers.
I'm not aware of anyone prior to Google treating them as disposable units that were built and deployed at scale according to automated rules. There is a significant mind shift from "let's give you a pretend server" to "let's stand up a service in an automated way by deploying it, with its dependencies, as a pretend server that you network together as needed". And another one still to "and let's create a network operating system to manage all services across all of our data centers". And another one still to standardizing on practices that let any data center go down at any time with no service disruption, and any two go down with no bigger problems than increased latency.
Google standardized all of that years before I heard "containerization" whispered by anyone outside of Google.
And agreed, Docker is a mess. It seems like everything that's good about Docker was developed by other companies, and everything that's bad about Docker was developed by Docker. The sooner the other companies can write Docker out of the picture the better. I want the time I wasted on Swarm back.
Docker was always a company first and foremost, I fail to see how leaving the technology in their commercial control would have been better in any way than making it an open standard.
Just because Docker = small = good and Google = giant corporation = evil? Docker raised huge amounts of VC funding, they had every intention of becoming a giant corporation themselves.
And it's kind of bizarre to completely discount the outcome of this situation, which is that we have amazing container tools that are free and open and standardized, just because you don't like some of the parties involved in getting to this point.
I would hesitate to use the term "open standard" until I'd thoroughly assessed the identities of everyone contributing to that open spec, along with those of their employers, and what history the spec has of accepting genuinely "community" contributions (in the 1990s sense of that word)
You can see the releases and specs that are supported by all major container runtimes here: https://opencontainers.org/release-notices/overview/
For example, OpenShift ships https://cri-o.io in its kubernetes distribution as its container runtime, so this isn't really new.
Disclosure: I helped start OCI and CNCF
But let's say you're right and call it a closed standard. Then this change drops support for one older, clunkier closed standard in favor of the current closed standard. Still doesn't seem like anything to get upset over.
What's "this" in that sentence? Kubernetes in general?
But divorcing their API from that tech base is also a move to support Cloud users---they don't want the story for big companies to be "If you want to use Kubernetes, you must also attach to Docker." That cuts potential customers out of the market who want to use Kubernetes but may have a reason they can't use Docker (even if that reason is simply strategic).
Google Cloud's business model walks a tightrope between usability and flexibility. Really, all the cloud vendors do, to varying degrees of success.
I commented on a child comment as well, but I don't understand this idea. The news is that a piece of commercially built software is being deprecated by a major project in favor of one built on an open standard, and you're interpreting this as a blow to open source?
Using Macs for content creation isn't evidence that the Foundation members don't also use Linux, whether for software development, backend servers, etc.
What a totally random data point of no relevance or significance eh?
Such things do in fact reflect the character and nature of the people involved. It doesn't necessarily define them entirely, but yes, it does reflect them.
It's not that you're not a "true Scotsman" necessarily if you say, care about linux primarily in other roles than desktops. You can be perfectly sincere in that, and it's valuable even if it only goes that far. But it does mean you are in a different class from people who actually do abjure the convenience of proprietary software wherever possible, and today "possible" absolutely includes ordinary office work and even presentation media creation.
It's perfectly ok to be so compromised. Everyone doesn't have to be Stallman.
It's equally perfectly fair to observe that these people are not in that class, when such class does exist and other people do actually live the life.
You can't have it both ways, that's all. If you want to preach a certain gospel to capitalise on the virtue signal, without actually living that gospel and not actually possessing that virtue, it's completely fair to be called out for it.
To others reading this -- simplified, but: docker uses containerd to build/run images. All docker images are valid containerd images. You can run images through containerd straight off the Docker Hub.
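For example (the image name is just whatever's on the hub):

    ctr image pull docker.io/library/alpine:latest
    ctr run --rm docker.io/library/alpine:latest demo echo "hello from containerd"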
Open source is fine; there's a ton of available code out there, to mix and match for whatever goals you need. Open services were never a thing, and what we're observing is that the SAAS model is eating the entire marketplace because tying services together to solve tasks is far easier (and depending on scale, more maintainable) than tying software together to solve tasks on hardware you own and operate exclusively. Owning and operating the hardware in addition to owning and operating the software that does the thing you want to do doesn't scale as flexibly as letting someone else maintain the hardware and provide service-level guarantees, for a wide variety of applications. But the software driving those services is generally closed-source.
If by "open source" you mean "Free (as in freedom) software," the ship has kind of sailed. The GNU-style four essential freedoms break down philosophically in the case of SAAS, because the underlying assumption is "It's my hardware and I should have the right to control it" and that assumption breaks down when it's not my hardware. There may be an analogous assumption for "It's my data and..." but nobody's crystallized what that looks like in the way GNU crystallized the Four Freedoms.
It's kind of a case study for future text books about how if there is a certain incentive, it will be embodied and satisfied no matter what. If the names and labels have to change, they will, but the essentials will somehow turn out to not have changed in any meaningful way in the end.
It's if anything worse now than before. At least before, you were allowed to own your inscrutable black box and use it indefinitely. There was sane persistence, like a chair. You buy it, and it's there for you as long as you still want it after that. Maybe you don't want it any more after a while, but it doesn't go poof on its own.
One way things are actually better now though is, now in many cases the saas outside of your control really is just a convenience you could replace with self-hosted good-enough alternatives, thanks to decades of open source software and tools building up to being quite powerful and capable today.
I think this is a case of the rising water lifting all ships. If the proprietry crowd gained more ability to abuse their consumers, everyone else has likewise gained more ability to live without them. Both things are true and I tell myself it's a net positive rather than a positive and a negative cancelling out, because more and better tools is a net positive no matter that both sides have access to use them for crossed purposes. At least it means you have more options today than you did yesterday.
For example, Al Capone had some friends, and the government violated his right to socialize with them without being surveilled for good cause.
containerd was also created by Docker Inc and donated to the LF.
TLDR: Docker Inc almost certainly is happy to see this change happen.
So Docker is deprecated, but no replacement is yet available?
It's just saying if you use something else, it must follow at least the v1alpha1 or v1 CRI runtime standard.
Bottom line, I think, is that using docker as a container runtime with K8s is going to be harder unless cri-dockerd becomes production grade, but even then, from the Cons section it looks like it will not be a good option:
cri-dockerd will vendor kubernetes/kubernetes, that may be tough.
cri-dockerd as an independent software running on node should be allocated enough resource to guarantee its availability.
By using containerd (or podman) in K8s, you're getting rid of a lot of unnecessary overhead and so should get more containers per host...
    minikube start --container-runtime=containerd
Use this to convince yourself that all your current docker images will still deploy and work as usual.
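Something along these lines, for example (the image name is just a placeholder):

    kubectl create deployment web --image=nginx:1.19
    kubectl rollout status deployment/web
    kubectl get pods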
Containers as a concept are an important thing to learn, but the implementation of today may not be the same as the one 5 years from now.
Hell, even if you wrote an application with 0 dependencies, you're still on the hook for installing the correct version of its compiler, the correct version of your deployment tool, and the correct version/OS of your VM. These are still dependencies, even if they're not dev dependencies.
> It certainly doesn't make things easier, faster, more secure, or cheaper;
If you don't think being able to reuse software makes your workflow easier, faster and at the very least cheaper, I'm not sure what you could possibly think would do those things.
In that kind of situation, it is unwise and irresponsible to treat your infrastructure as a black box. You still need to be able to re-build/migrate your images for security, stability, and feature upgrades, so you're basically just piling additional complexity on top.
The premise of Kubernetes and containers/clouds is an economic (and legitimate) one rather than a technical one: you don't have to invest in hardware upfront, and can pay as you go with PaaS instead. That tactic only works, though, as long as you have a strong negotiating position as a customer. In practice, even if you don't get locked in to cloud providers by tying your k8s infra to IAM or other auth infrastructure, or by mixing Kubernetes with non-Kubernetes SaaS such as DBs (which suck on k8s), you still won't be able to practically move your workload setup elsewhere due to sheer complexity and risk/downtime.
The economic benefit is further offset by the wrong assumption that you need fewer (or no) admin staff for Docker ("DevOps" in an HR sense).
My post, and most of yours, had nothing to do with Kubernetes, but containers in general. I don't care for Kubernetes, and would actively reject using it 99% of the time. Your post, however, was mostly about containerization of applications, whose validity has nothing to do with one particular product or pattern (Kubernetes).
Containers are an almost unanimous win in terms of the simplification of development and deployment. Conflating Kubernetes to be the only approach to containerization is a farce.
> We encourage you to evaluate moving to a container runtime that is a full-fledged implementation of CRI...
Kubernetes documentation has a setup guide (for containerd, as well as CRI-O) here: https://kubernetes.io/docs/setup/production-environment/cont...