Kubernetes, for better or for worse, has clearly won. Not that this article isn't interesting for its own sake, but it's just something to consider.
As a production user of Docker Swarm, though, I can say it's effectively dead, and I've been planning a migration to k8s ASAP.
https://dockerswarm.rocks is better for a more complete, production-ready setup guide.
On the other hand, most of the books on it are spectacularly bad.
Used it for years in prod - just awesome.
I've also used k8s extensively, and Nomad/Consul is more "open" and less opinionated.
If you also manage some form of legacy infrastructure and VMs, consul is hard to ignore.
Two go-binaries - you’re up in minutes for a test drive.
Never mind that Nomad is fast, easy to grasp, very concise, and can run non-Docker workloads. Oh well...
Now that Nomad/Consul have autoscaling and CSI support as well as native Envoy integration, the gap with regard to container workloads has shrunk quite a bit!
- Nomad is a more general workload scheduler. By "general" I mean that you can schedule pretty much anything across a bunch of hosts, not just containers. Have a JRE on the node? Just schedule your jar. Need to execute a raw shell command? Not a problem.
- Batch jobs with parameters? Just send them to the "dispatch" endpoint with parameters and an optional payload. This can often replace "serverless" offerings and sometimes work queues.
- Consul is a general-purpose service discovery layer (DNS & distributed K/V) that can span all your services, not just the k8s ones. (For this very reason HashiCorp has built a k8s -> Consul service sync tool: it is often needed!)
You include the Consul agent, with a service definition, in the roles on your Ansible/Salt/Puppet-managed VMs. They can be included in the mesh, and service discovery works seamlessly between hosts and containers.
- The Consul Connect (mesh) feature with Envoy is much more comprehensible than Istio.
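To make the points above concrete, here is a sketch of a parameterized Nomad batch job (job name, binary path, and meta key are all hypothetical) that runs a plain host binary via the `exec` driver - no container involved:

```hcl
# Hypothetical parameterized batch job; names and paths are illustrative.
job "report" {
  type = "batch"

  # Makes the job dispatchable: each dispatch supplies its own parameters.
  parameterized {
    meta_required = ["customer_id"]
  }

  group "run" {
    task "generate" {
      # "exec" runs an arbitrary binary on the host - no Docker needed.
      driver = "exec"

      config {
        command = "/usr/local/bin/generate-report"
        args    = ["--customer", "${NOMAD_META_customer_id}"]
      }
    }
  }
}
```

Dispatching an instance then looks something like `nomad job dispatch -meta customer_id=42 report`, which is the "serverless-ish" pattern mentioned above.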
I like to say that Nomad + Consul is more in line with the "Unix philosophy". You have these two binaries that do what they do independently, but they can be hooked together if you want distributed K/V and service discovery for your scheduled workloads. Add Vault to that - it is often used without Consul & Nomad, but integrates just as well.
They can all be integrated with easily through well-defined REST APIs and do not have to be consumed as one big black box.
K8s is all or nothing.
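For the VM-plus-mesh scenario above, registering a legacy service with Consul is a small config file on the host (service name, port, and health check path here are hypothetical):

```hcl
# Hypothetical service definition, dropped into /etc/consul.d/ on a plain VM.
service {
  name = "legacy-billing"
  port = 8080

  # Opt the service into the Connect mesh via an Envoy sidecar.
  connect {
    sidecar_service {}
  }

  # Consul marks the instance unhealthy if this check fails.
  check {
    http     = "http://localhost:8080/health"
    interval = "10s"
  }
}
```

Once registered, anything on the mesh - container or VM - can discover it over DNS as `legacy-billing.service.consul`.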
People don't write about them or teach you about them. I have only seen a couple of YouTube videos that so much as mention them. I got a book on Terraform (the only one I could find that covers it) that wanted me to use AWS just to begin the examples.
It feels like these tools are only used in the enterprise realm.
Very sad and discouraging.
You can use nomad to schedule pretty much any workload you’d like, including docker containers.
You could probably even schedule "docker-compose" as a raw command.
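For the container case, a Nomad job using the `docker` driver is short (image and names are just an example):

```hcl
# Illustrative service job running an nginx container under Nomad.
job "web" {
  type = "service"

  group "frontend" {
    count = 2

    # Expose container port 80 as a dynamically allocated host port.
    network {
      port "http" {
        to = 80
      }
    }

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:1.25"
        ports = ["http"]
      }
    }
  }
}
```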
Yes, there is a learning curve. It's not Heroku, where you enter one command and it just works. You'll need to spend a few days becoming familiar with the basic concepts, and it may take weeks to grasp the advanced ones. But it's less complexity than learning something like a new programming language.
You'll probably get back the time you invested. Things like deploying a production-ready application with a single command using e.g. Helm are quite powerful and can save you money over using managed services.
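As an illustration of the "single command" point (chart and release name are just an example, assuming the Bitnami chart repository):

```
# Example only: stand up a full PostgreSQL deployment with one install command.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-db bitnami/postgresql
```

A comparable managed database service would typically cost more per month than the equivalent self-run chart on existing cluster capacity.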
The networking used to be simple: you could just set up a few static routes, manually assign blocks of CIDRs to nodes, and be done. I'm sure there are some newer networking API components that obfuscate the whole thing in the name of "simplicity", because nobody understands networking anymore.
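That "simple" setup amounts to one route per peer node, something like this (all addresses hypothetical):

```
# Node A owns pod CIDR 10.244.0.0/24; tell it how to reach node B's pods.
ip route add 10.244.1.0/24 via 192.168.1.11

# And node B gets the mirror-image route back to node A's pod CIDR.
# ip route add 10.244.0.0/24 via 192.168.1.10
```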
I've created a tool called 'Swarmlet' over the past few weeks that tries to mimic Dokku: it combines git and Docker Swarm mode for easy app deployments, with some services included, like Traefik for routing and automatic SSL using Let's Encrypt, and Consul as a distributed secrets store.
Definitely not production-ready, and there's quite a bit left to do, but it's a nice POC that actually works for me.
Also, now that we have managed services like EKS, GKE, and AKS, it's really straightforward to learn.
I would suggest starting with a GKE cluster and deploying nginx.
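That first deployment is two commands once `kubectl` is pointed at the cluster (deployment name is arbitrary):

```
# Create an nginx deployment and expose it behind a cloud load balancer.
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
```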
I would suggest using a Google persistent disk or an EBS volume (depending on whether you are using GKE or EKS). They can be mounted to multiple containers at a time (your consumers). Portable and durable.
Keep this volume updated with your data. Maybe a sync process from your other server.
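A sketch of that claim (names and size are hypothetical): on GKE or EKS the default StorageClass provisions the cloud disk behind the scenes, and your consumers reference it through a PersistentVolumeClaim.

```yaml
# Hypothetical claim; the provisioner creates a GCE PD / EBS volume for it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadOnlyMany   # many consumer pods may read; a writer needs ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Note that these disk types attach read-write to only one node at a time, so the syncing writer and the read-only consumers use different access modes.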