I'd encourage anyone to try Nomad: download the single binary to your laptop, fire up the orchestrator in single-node dev mode, submit a few lines of a service definition file, and see how it feels. Regardless of whether you already know or use k8s, just give it a try; it will give you a better appreciation for what Nomad does and doesn't do, and what decisions it made to get there. The same advice holds for any competing set of technologies, really.
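The whole dev-mode loop fits in a few commands — a sketch, assuming the `nomad` binary is already downloaded and on your PATH:

```shell
nomad agent -dev              # single-node server+client, in-memory state

# In another terminal:
nomad job init                # writes an example.nomad job file
nomad job run example.nomad   # schedule it
nomad job status example      # watch the allocation come up
```

That's the entire "cluster" for experimentation purposes; dev mode throws its state away on exit.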
Everyone wants to be Google and have Google-sized problems. Using Nomad is an admission that you're not really a big player, and that your problems can be solved by an easily understood off-the-shelf solution (with a support program).
Part of it might also be that many developers don't know their options, and just pick the one solution they sort of know. I currently work on a project where we briefly had another consulting company involved. Their staff heard "container", "devops" and "Traefik" and immediately reached for Kubernetes. The project has four different containers and no HA requirements. Our solution: maybe kill off Traefik and use the HAProxy installation that we need for a different part of the project anyway, and just deploy the four containers using docker-compose or Ansible.
Yet ironically, Nomad can deal with (much) larger clusters, schedule jobs faster, and generally require less screwing about than Kubernetes does.
Tell a frontend dev to use Ember over React and they will get sweaty palms and tell you that "React has won" or something like that. Tell someone they should use Nomad over k8s and they will tell you that they really should be working with whatever everyone else is working with, that the die has been cast and k8s has emerged as the winner.
I believe it is a mixture of resume-driven development, which is a perfectly fair concern if you are planning to apply to the next job and have that item on your CV, and insecurity. Insecurity because someone might be worried that if they learn Nomad instead of k8s they will have trouble applying their knowledge later, as if the concepts aren't mostly 1:1 mappings. Same with most programming languages and frameworks du jour, but still some people think it's better to have 3 years of experience with Framework X instead of 1 year of experience each with Framework X, Y, and Z.
> Why do you think Kubernetes has so much more traction than Nomad?
I'm a dev on a small team, there is just no way I'm going to advocate for nomad when I can use a managed k8s cluster on any one of the cloud providers. Our CI tool supports k8s out of the box (but not nomad). Our logging stack is entirely turnkey - we have a sidecar container that _just works_. There is so much tooling out there that just works with k8s it would be insane to turn our back on it.
> well-engineered stuff gets only a niche audience of people who appreciate such things.
Well engineered is meaningless if it doesn't work or is a pain in the ass to use. A prime example is HashiCorp Vault. The documentation on setting that up to run is utterly terrifying for someone who is not a seasoned sysadmin. Of course I'm going to use the secret manager that comes with my cloud provider for free over the perfectly architected 9-node Vault cluster deployed with Consul for networking.
"We are excited to announce that the beta release of HashiCorp Nomad 1.1 is now available. Nomad is a simple and flexible orchestrator used to deploy and manage containers and non-containerized applications across on-premises and cloud environments. "
I know I've heard of Nomad but couldn't remember what it does. This explained it perfectly! I often see an announcement show up here on HN or on Twitter or my RSS feed, and there's no mention of what the thing is/does other than what's new.
It would be so helpful if companies talked in terms of a few very specific use-cases their product excels at, and then moved on to more general capabilities.
If you need a workload orchestrator for heterogeneous workloads (i.e. not just Docker) and need something that is simple then Nomad is for you.
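As a rough sense of what "simple" means here, a complete Nomad job file for a containerized service can be this short (a sketch; the job name, image, and resource numbers are just examples):

```hcl
job "web" {
  datacenters = ["dc1"]

  group "app" {
    count = 2

    network {
      port "http" {
        to = 80 # container port
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "nginx:1.21"
        ports = ["http"]
      }

      resources {
        cpu    = 100 # MHz
        memory = 128 # MB
      }
    }
  }
}
```

Swap the `driver` (e.g. `exec`, `java`, `raw_exec`) and the same file shape schedules non-containerized workloads, which is the heterogeneous part.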
In my limited experience, and for my use case, I found that dealing with docker-compose and docker stack is much more straightforward, and it seems to work out of the box.
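For a small project, a `docker-compose.yml` along these lines (service names, images, and credentials are hypothetical) really is the entire deployment:

```yaml
version: "3.8"

services:
  web:
    image: nginx:1.21
    ports:
      - "8080:80"
    depends_on:
      - app

  app:
    build: ./app
    environment:
      DATABASE_URL: postgres://app:example@db:5432/app
    depends_on:
      - db

  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
```

`docker compose up -d` and you're running; no cluster, no control plane.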
We're continually refining Terraform modules to ease setup, although Nomad's own terraform directory is in dire need of updating.
Running Consul and/or Vault with Nomad is definitely possible, but we need to do the hard work of testing and publishing best practices.
Once you have Nomad up and running, staring at an empty job file is daunting as well! We're approaching that problem from 2 angles: better tools for guided job creation, and more modular job files. Templating tools already provide some job file modularization, but nothing as unified as the Helm/Charts experience. Nomad lacks any sort of central job repository as well.
You can expect us to chip away at these problems like we've approached everything else: incremental improvements while maintaining backward compatibility.
And for those willing (or eager!) to let someone else handle the operational burden for you: stay tuned for exciting things coming from HashiCorp Cloud Platform!
Please start filing issues encountered in single-server setups to help us improve the situation! What I'm afraid of is that the lack of issues doesn't mean it's a pain-free experience.
I think a lot of that lies in the fact that all of the pieces stand-alone, which is a selling point, but it makes it difficult to see how they all join together to create the bigger picture.
I'd love to see a good tutorial on setting up a complete environment, with Vault, Consul and Nomad (and Waypoint!) with best practices. Something like "here's how to deploy your rails app with Nomad and Vault" would be great to see the big picture.
The beauty of something like Nomad is that you can run it on a single box with moderately more effort than docker compose, but then you can flick a switch and have your workloads run across multiple machines.
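The "switch" is essentially swapping dev mode for a small agent config on each machine. A minimal sketch — the `data_dir`, server address, and server count are illustrative:

```hcl
# server.hcl -- on each of the (typically 3) server nodes
data_dir = "/opt/nomad/data"

server {
  enabled          = true
  bootstrap_expect = 3
}
```

```hcl
# client.hcl -- on each worker node
data_dir = "/opt/nomad/data"

client {
  enabled = true
  servers = ["10.0.0.10:4647"] # address of any server node
}
```

Run `nomad agent -config <file>.hcl` on each box and the same job files you used on one machine now schedule across the cluster.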
How is that any different to kubernetes with k3s?
Even running your own on-prem is simple with things like Rancher these days.
Nomad + Consul has been one of the best decisions we've made: it's simple to reason about, easy to maintain, the upgrade path hasn't been stressful, and it requires a minimum of maintenance.
Most of the moving parts for k8s are easily handled if you go for a managed service.
Using Nomad alone is fine, but with Consul in the mix it requires quite a bit of set-up (especially for some degree of HA), and in my experience is no easier than using something like k3s.
Overall from my perspective, it just seems like Nomad + Consul sits in two places.
Either for an org. small enough where HA is not a concern, so setting it up, and running it "on-premise" is trivial. Or, for an org. large enough where you can have a team dedicated to setting it up, and managing it to ensure it meets various SLAs.
Genuinely curious to know what's your experience been like, and if it matches up with this.
Our whole platform is between 10 and 50 EC2 machines running a Nomad cluster; Nomad manages our Docker containers, with services backed by RDS.
I think managed services were in their infancy when we did our initial research back in 2017/2018. Tectonic+Kubernetes with CoreOS looked promising, but they were bought by Red Hat and probably rebranded/merged/disappeared into OpenShift. EKS was in beta and only available in the US (we're in the EU).
We did try Rancher but we hit issues with it.
I don't know if K3S existed yet, but just looking at the diagram on their website it does look quite interesting.
We launched Consul first and started defining all of our services, and after that we started moving applications into Nomad.
HA has been quite easy with Terraform on EC2. We build "golden-images" with Packer, and then launch them with Terraform, upgrading Consul is adding 3 new servers, making sure things are stable, and then removing the 3 old servers.
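That golden-image flow can be sketched in Terraform roughly like this (the variable name, instance type, and server count are hypothetical; an upgrade is bumping the AMI id from a fresh Packer build and rolling instances over):

```hcl
# AMI id produced by the Packer golden-image build
variable "consul_ami" {
  type = string
}

resource "aws_instance" "consul_server" {
  count         = 3
  ami           = var.consul_ami
  instance_type = "t3.small"

  tags = {
    Name = "consul-server-${count.index}"
  }
}
```

The "add 3 new, then remove 3 old" upgrade is just applying with the new AMI while the old instances are still in the Consul cluster, then retiring them once the new servers have joined and things look stable.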
Which large parts of Europe currently cannot do, or are at least unwilling to risk doing, due to Schrems II. Amazon, Google and Azure are the three providers with the best managed Kubernetes services, or at least those that require the least involvement from the user.
Otherwise I completely agree, Kubernetes is less of an overhead, if you can rely on managed services. I do question the idea that Nomad can't easily do HA. If you can build an HA environment for Kubernetes, then you can just as easily do the same for Nomad.
I know k8s is more popular (meaning there is broader community support and a larger ecosystem around it), and has an extremely diverse set of features available. I don't doubt it is a sensible choice for many deployments, but I really don't think that these properties are a reason to dismiss alternatives. Nomad works well, and even supports various use-cases that k8s doesn't.
* for this particular project managed options weren't really an option.
Most of the market outside a small "cloud-native" echo chamber. I use Kubernetes currently, and almost everything related to it would be simpler, more scalable, and generally less bad using Nomad.
Also, if you run Windows, Nomad is your absolute best choice. I understand k8s works on Windows; however, I don't know a single individual running Windows containers on it.
Not dismissing k8s here but if you're running Docker Swarm (dead) or Mesos (also dead) on-prem I'd think it's wise to also take a look at Nomad.
The HN bubble can be fascinating at times. The answer is: a lot of people, and a lot of companies. Like A LOT.
We're running 1000+ containers on it and it just works, needs almost no maintenance and the Nomad upgrades are with zero downtime.
It's really, really hard to find customers who couldn't just run their containers on one or two VMs. More and more we see developers getting Kubernetes added to the requirements, and then they show up with as little as ONE container.
The argument I often hear is: then we can scale up during peak hours. Well… yes, but this is a cluster limited to just you. You still need to pay for the worker nodes and the control plane. Those don't just turn off. Most people seem not to grasp that running Kubernetes is often more expensive than running VMs or EC2 instances.
You need a large number of smaller containers and a lot of interaction between those containers for Kubernetes to make much sense, in my opinion. It's a solution for a small subset of problems, and if your problem is outside that subset, the management overhead might not be worth it.
If it runs on two VMs, how do you handle networking between them, or updating the containers? Once you actually start to run a container, Kubernetes solves a bunch of problems straight away that you have to solve yourself if you use something else.
> You still need to pay for the worker nodes and the control plane.
Surely if you're running k8s at any scale you're auto scaling your worker nodes, and if you're spending enough to actually scale your workers the cost of the control plane is negligible compared to the rest (i.e. on aws it's $30/month per cluster)
You do know that networking existed before containers right?
In our case though: we already have a large multi-tenant load balancer, so any traffic that needs to reach the containers is routed that way. Inter-container communication just runs over the network as it would for any normal application.
Deployment, that's a little tricky, and here Kubernetes does present a good, but confusing, way of managing deployments. However, we find that for most setups Ansible is actually just fine: remove the VM from the load balancer, upgrade that VM, then swap in the other VM and upgrade that. This is something we solved before Kubernetes was available.
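That remove-upgrade-reinsert loop maps naturally onto Ansible's `serial` rolling updates. A sketch, assuming an HAProxy admin socket and the `community.general`/`community.docker` collections — the host names, backend name, and image are hypothetical:

```yaml
- hosts: app_servers
  serial: 1                         # roll one VM at a time
  tasks:
    - name: Take this host out of the load balancer
      community.general.haproxy:
        state: disabled
        backend: app
        host: "{{ inventory_hostname }}"
        socket: /var/run/haproxy.sock
      delegate_to: lb01             # hypothetical LB host

    - name: Deploy the new container image
      community.docker.docker_container:
        name: app
        image: "registry.example.com/app:{{ app_version }}"
        state: started
        recreate: true

    - name: Put the host back in rotation
      community.general.haproxy:
        state: enabled
        backend: app
        host: "{{ inventory_hostname }}"
        socket: /var/run/haproxy.sock
      delegate_to: lb01
```

`serial: 1` guarantees only one VM is ever out of the pool, which is the whole rolling-update trick.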
It's my belief though that many of those developers who want Kubernetes actually want kubectl, because it's an easy way for them to deploy code to production. They in fact don't really care what runs their code in the end.
I mostly deal in on-prem Kubernetes, so autoscaling workers isn't really a thing. The hardware NEEDS to be present at all times. Your point about the cost of the control plane VMs is correct, but what if your application runs fine on roughly the same resources as a control plane node? (Something like 4GB of RAM vs. 2GB, and maybe 2 CPUs.) My issue is that I see people pick Kubernetes for insanely small projects.
I do, but the point of my question was that kubernetes solves networking, and load balancing, and upgrading, and many other issues.
> It's my belief though that many of those developers who want Kubernetes actually want kubectl,
You're not wrong - kubectl is half the reason I use kubernetes. Solving the problems it does is reason enough to use it.
And that is honestly perfectly reasonable. I'd argue that it doesn't come for free, but if the benefits outweigh the cost then go for it. Again, I just don't often see customers who have so clearly argued as to why they'd need Kubernetes and not for instance Nomad.
It is all nice and shiny on paper, but applied to "real world" conditions … it sucks.
It works, as long as you never touch it again. But if you dare look at it with the wrong eye, it will break, and when it breaks, it breaks beyond all repair. Every time.