Swarm is great to get simple orchestration going, but it doesn't really do the same thing.
With k8s you can configure multi-point routing, storage backends and load balancing among other things.
With Swarm, you get... an overlay network (which falls apart if you actually need to scale) and 'scaling', which just starts n containers of the same service.
Swarm feels more like an MVP that actually works, even for small teams. K8s is more like a behemoth that only dedicated personnel can fully tame, with a silly amount of features most people don't need.
We've used both at my current job for toy projects (self-hosted), never in production however.
And I'm a personal user of GCP, which works wonderfully... albeit more expensive than simple root servers.
Load balancing should be a solved problem already. Swarm and Kubernetes should be using dead simple off-the-shelf software for ingress and load balancing. Any competitor should be able to use the same solutions. To put it another way, this shouldn't be a differentiator.
The problem is that the functionality in tools like nginx is still tied to static network architectures that evolve slowly, and doesn't take advantage of things like diurnal variability in workloads.
Kubernetes does use dead simple off-the-shelf software for ingress and load balancing. That software though, unfortunately, has a lot of knobs, and what "Ingress" and "Service" resources do is make sure those knobs are turned to the right settings.
The nginx ingress controller for example, under the hood, just generates and applies a big ol' nginx config! You can extract it and view it yourself, or extend it with a Lua script if you want to be fancy and do some FFI inline with processing the request, etc.
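You can see that generated config for yourself. A minimal sketch, assuming a stock ingress-nginx install (the `ingress-nginx` namespace and deployment name come from the official manifests and may differ in your cluster):

```shell
# Locate the ingress controller pod (labels per the official ingress-nginx chart).
kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx

# Dump the full rendered nginx configuration the controller generated
# from your Ingress resources. `nginx -T` tests and prints the config.
kubectl -n ingress-nginx exec deploy/ingress-nginx-controller -- nginx -T | less
```

Diffing this output before and after changing an Ingress resource is a handy way to see exactly which knobs the controller is turning.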
> The nginx ingress controller for example, under the hood, just generates and applies a big ol' nginx config!
I learned the hard way that GKE, using GCP's load balancers, doesn't support the same syntax for Ingress path patterns as the nginx Ingress controller does. Definitely read the documentation thoroughly!
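A sketch of where this bites, assuming a GKE cluster (the names, host, and service are hypothetical). The GCE controller historically matched paths literally, needing `/api/*` to cover subpaths, while the nginx controller treated `/api` as a prefix; the `pathType` field in `networking.k8s.io/v1` exists to make this explicit per controller:

```shell
# Hypothetical Ingress showing the controller-dependent path behavior.
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  annotations:
    kubernetes.io/ingress.class: gce   # vs. "nginx" for ingress-nginx
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api        # gce matches this literally; use /api/* for subpaths,
                          # whereas nginx treats /api as a prefix
        pathType: ImplementationSpecific
        backend:
          service:
            name: api-svc
            port:
              number: 80
EOF
```

Using `pathType: Prefix` or `Exact` instead of `ImplementationSpecific` gets you portable, spec-defined matching across controllers that support it.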
The easy way to do this is with NodePorts, wherein you configure your LB to treat every node in your cluster as an app server on a certain port for a certain app. You will lose some performance, however, as there's some iptables magic under the hood.
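A minimal sketch of the NodePort wiring, assuming an existing Deployment (the name `my-app` and ports are hypothetical):

```shell
# Expose the Deployment as a NodePort Service; Kubernetes allocates a port
# in the 30000-32767 range (by default) on every node.
kubectl expose deployment my-app --type=NodePort --port=80 --target-port=8080

# Look up which port was allocated.
kubectl get svc my-app -o jsonpath='{.spec.ports[0].nodePort}'

# Then register each node's IP plus that port as a backend in your external
# load balancer; kube-proxy's iptables rules forward traffic to the pods.
```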
Beyond that there's a sea of options for more native integrations, which will depend on whether your LB vendor has a K8s integration, how friendly your networking team is, and how much code you're willing to write.