HashiCorp Nomad 1.1 Beta (hashicorp.com)
106 points by edouardb on May 3, 2021 | 43 comments



I've said it in basically every thread on Nomad on HN, and I'll say it again: Nomad is an awesome piece of focused technology and an absolute pleasure to work with. There is one very clear way to do things, everything feels very polished, and you don't need a combination of tools or even third-party services to get set up.

I'd encourage anyone to try Nomad, download the single binary to your laptop, fire up the orchestrator in single-node dev-mode and submit a few lines of a service definition file and see how it feels. Regardless of whether you already know k8s or use it, just give it a try, it will give you a better appreciation for what Nomad does and doesn't do, and what decisions it made to get there. The same advice is true for any competing set of technologies really.
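For a sense of how few lines that really is, here's a rough sketch of a job file (the name, image, and port are just placeholders). With nomad agent -dev running in one terminal, nomad job run hello.nomad in another will schedule it:

  # hello.nomad -- a minimal service job (names, image and port are placeholders)
  job "hello" {
    datacenters = ["dc1"]

    group "web" {
      network {
        port "http" {
          to = 80
        }
      }

      task "server" {
        driver = "docker"

        config {
          image = "nginx:alpine"
          ports = ["http"]
        }

        resources {
          cpu    = 100
          memory = 128
        }
      }
    }
  }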


Why do you think Kubernetes has so much more traction than Nomad? It seems like a general pattern in tech: the more complicated, cumbersome, clunky stuff gets traction while the smooth and well-engineered stuff gets only a niche audience of people who appreciate such things.


Honestly: Because Google uses it.

Everyone wants to be Google and have Google size problems. Using Nomad is an admission that you're not really a big player, and your problems can be solved by an easily understood off-the-shelf solution (with a support program).

Part of it might also be the fact that many developers don't know their options and just pick the one solution they sort of know. I currently work on a project where we briefly had another consulting company involved. Their staff heard: containers, devops and Traefik, and immediately reached for Kubernetes. The project has four different containers and no HA requirements. Our solution: maybe kill off Traefik, use the HAProxy installation that we need for a different part of the project anyway, and just deploy the four containers using docker-compose or Ansible.


> Using Nomad is an admission that you're not really a big player

Yet ironically, Nomad can deal with (much) larger clusters, schedule jobs faster, and generally require less screwing about than Kubernetes does.


As another sibling said: Because Google uses it. But also because many devs get very anxious when you tell them to use something that doesn't have the majority "mindshare" (yes, I have had people tell me that concern multiple times).

Tell a frontend dev to use Ember over React and they will get sweaty palms and tell you that "React has won" or something like that. Tell someone they should use Nomad over k8s and they will tell you that they really should be working with whatever everyone else is working with, that the die has been cast and k8s has emerged as the winner.

I believe it is a mixture of resume-driven development, which is a perfectly fair concern if you are planning to apply for your next job and want that item on your CV, and insecurity. Insecurity because someone might be worried that if they learn Nomad instead of k8s they will have trouble applying their knowledge later, as if the concepts aren't mostly 1:1 mappings. Same with most programming languages and frameworks du jour, but still some people think it's better to have 3 years of experience with Framework X instead of 1 year of experience each with Frameworks X, Y, and Z.


Not OP but going to split my reply in two.

> Why do you think Kubernetes has so much more traction than Nomad?

I'm a dev on a small team, there is just no way I'm going to advocate for nomad when I can use a managed k8s cluster on any one of the cloud providers. Our CI tool supports k8s out of the box (but not nomad). Our logging stack is entirely turnkey - we have a sidecar container that _just works_. There is so much tooling out there that just works with k8s it would be insane to turn our back on it.

> well-engineered stuff gets only a niche audience of people who appreciate such things.

Well engineered is meaningless if it doesn't work or is a pain in the ass to use. A prime example is HashiCorp Vault. The documentation on setting it up is utterly terrifying for someone who is not a seasoned sysadmin. Of course I'm going to use the secret manager that comes with my cloud provider for free over the perfectly architected 9-node Vault cluster deployed with Consul for networking.


If you want a bunch of the enterprise features you need to get Nomad Enterprise. Not everything is available in the FOSS version.


Because Kubernetes matured far sooner and faster.


More announcements should start like this one does:

"We are excited to announce that the beta release of HashiCorp Nomad 1.1 is now available. Nomad is a simple and flexible orchestrator used to deploy and manage containers and non-containerized applications across on-premises and cloud environments. "

I know I've heard of Nomad but couldn't remember what it does. This explained it perfectly! I often see an announcement show up here on HN or on Twitter or in my RSS feed and there's no mention of what the thing is/does other than what's new.


I always say Nomad is what Mesos should be if you re-wrote it and fixed the mistakes that caused it to "lose" out to Kubernetes.


I guess I still don't understand the "why", in terms of why someone would use or be interested in the product. What differentiates it?

It would be so helpful if companies talked in terms of some very specific use-cases that their product excels at, and then move to more general capabilities.


The "why" is in the second sentence: "Nomad is a simple and flexible orchestrator used to deploy and manage containers and non-containerized applications across on-premises and cloud environments."

If you need a workload orchestrator for heterogeneous workloads (i.e. not just Docker) and need something that is simple then Nomad is for you.
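As a sketch of the "not just Docker" part: the same job format drives plain binaries via the exec driver (the command path here is purely illustrative):

  job "nightly-report" {
    datacenters = ["dc1"]
    type        = "batch"

    group "report" {
      task "generate" {
        # exec runs an ordinary binary on the host -- no container image needed
        driver = "exec"

        config {
          command = "/usr/local/bin/generate-report"
          args    = ["--output", "/tmp/report.csv"]
        }

        resources {
          cpu    = 200
          memory = 256
        }
      }
    }
  }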


Nomad seems great, and I wanted to jump in a lot of times, but I found it hard to do. I'm not sure why though. Setting up a basic project is easy if you follow their tutorials, but at a certain point it just gets confusing, and you end up piecing things together, because you'll need Consul and Vault. So after a while I just got lost. Maybe because I don't live and breathe all the orchestration related jargon.

In my limited experience, and my use-case, I found that dealing with docker-compose and docker stack is much more straightforward, and seems to work out of box.


Nomad Team Lead here -> You're definitely not alone, and we know it. Once you add Consul, Vault, namespaces, ACLs, mTLS, etc you have quite the infrastructure to maintain. HashiCorp is making a big push to cloud to help alleviate this operational burden, but there's a lot we intend to do to make it easier to self-host as well.

We're continually refining Terraform modules[1] to ease setup, although Nomad's own terraform directory is in dire need of updating.
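Consuming the registry module looks roughly like this (the inputs below are illustrative, not necessarily the module's real variable names; check [1] for the actual interface):

  module "nomad_cluster" {
    source = "hashicorp/nomad/aws"

    # Illustrative inputs -- see the module docs for the actual variables.
    ami_id      = var.nomad_ami_id
    num_servers = 3
    num_clients = 5
  }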

Running Consul and/or Vault with Nomad is definitely possible, but we need to do the hard work of testing and publishing best practices.

Once you have Nomad up and running, staring at an empty job file is daunting as well! We're approaching that problem from 2 angles: better tools for guided job creation, and more modular job files. Templating tools already provide some job file modularization, but nothing as unified as the Helm/Charts experience. Nomad lacks any sort of central job repository as well.
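As a small example of what's already possible, plain HCL2 variables (supported since Nomad 1.0) give you basic parameterization; the names here are just illustrative, and values can be passed at submit time (e.g. with -var or a var-file):

  variable "image_tag" {
    type    = string
    default = "latest"
  }

  job "api" {
    datacenters = ["dc1"]

    group "api" {
      task "api" {
        driver = "docker"

        config {
          image = "registry.example.com/api:${var.image_tag}"
        }
      }
    }
  }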

You can expect us to chip away at these problems like we've approached everything else: incremental improvements while maintaining backward compatibility.

And for those willing (or eager!) to let someone else handle the operational burden for you: stay tuned for exciting things coming from HashiCorp Cloud Platform!

[1] https://registry.terraform.io/modules/hashicorp/nomad/aws/la...


One thing I found a bit daunting is that the documentation says you should never run Nomad on a single node in production. Could you tell me what the downside would be of running Nomad on 1 server in dev mode, versus running an application on 1 server without Nomad? Doing so could make it easier to start off a project with Nomad from the start, and expanding to the full blown "3-or-5 Nomad Servers, 1-to-10000 Nomad Clients" setup when needed. But maybe I'm not seeing the full picture.


It's largely a testing and documentation issue on our end. Nomad clients (the daemons running on each node managing workloads) keep your services running even if they lose contact with Nomad servers (the schedulers), so a Nomad outage != a service outage.

Please start filing issues you encounter in single-server setups to help us improve the situation! What I'm afraid of is that the lack of issues doesn't mean it's a pain-free experience.


I agree. I love the idea of it, but as a relative newcomer to the space, some of the documentation and examples definitely felt a bit "draw the rest of the owl".

I think a lot of that lies in the fact that all of the pieces stand-alone, which is a selling point, but it makes it difficult to see how they all join together to create the bigger picture.

I'd love to see a good tutorial on setting up a complete environment, with Vault, Consul and Nomad (and Waypoint!) with best practices. Something like "here's how to deploy your rails app with Nomad and Vault" would be great to see the big picture.


I agree with you that if you're looking at single-node deployments, Nomad or any distributed orchestrator will feel like overkill. I personally use Nomad quite a lot, but for single-box systems I'll happily revert to using plain Docker containers started via systemd service files.

The beauty of something like Nomad is that you can run it on a single box with moderately more effort than docker compose, but then you can flick a switch and have your workloads run across multiple machines.
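Concretely, the single-box version is one agent acting as both server and client; something like this (paths illustrative) is essentially the whole config:

  # /etc/nomad.d/nomad.hcl -- one agent as both server and client
  datacenter = "dc1"
  data_dir   = "/opt/nomad/data"

  server {
    enabled          = true
    bootstrap_expect = 1   # bump to 3 and add peers when you outgrow one box
  }

  client {
    enabled = true
  }

Growing is then mostly a matter of running more client agents pointed at the servers, and moving to a proper 3-server quorum.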


> The beauty of something like Nomad is that you can run it on a single box with moderately more effort than docker compose, but then you can flick a switch and have your workloads run across multiple machines.

How is that any different to kubernetes with k3s?


k3s is still drastically more complex, with a lot more moving parts.


Honest question; how? And what difference does that make to running an app? Running on k8s involves writing 2 yaml files, and scaling is literally the flick of a switch. The architecture of nomad may be simpler but how does that benefit me as a user? I've been running apps in kubernetes for about 2 years now at a "mild" scale and not once has the complexity of k8s been the show stopping issue it's made out to be here on HN.


I think the first upgrade from docker-compose towards nomad is running nomad in a single-server setup (not recommended, but for the same reasons docker-compose on a single server is a bad idea if you want high-availability).


Congrats on the beta and everything, but honestly who isn’t using k8s at this point in time?

Even running your own on-prem is simple with things like Rancher these days.


We're running Nomad; it does one thing and does it well. When we looked at Kubernetes and all the moving parts it comes with, plus all the opinionated stacks people tend to run with it, it was a daunting learning experience. We managed to get it running, but the upgrade process felt brittle and it seemed like a full-time job just keeping it updated and happy.

Nomad + Consul has been one of the best decisions we've made: it's simple to reason about, easy to maintain, the upgrade path hasn't been stressful, and it requires a minimum of maintenance.


I'm quite curious about your scale / size of your team.

Most of the moving parts for k8s are easily handled if you go for a managed service.

Using Nomad alone is fine, but with Consul in the mix it requires quite a bit of set-up (especially for some degree of HA), and from my experience it is no easier than using something like k3s.

Overall from my perspective, it just seems like Nomad + Consul sits in two places.

Either for an org. small enough where HA is not a concern, so setting it up, and running it "on-premise" is trivial. Or, for an org. large enough where you can have a team dedicated to setting it up, and managing it to ensure it meets various SLAs.

Genuinely curious to know what your experience has been like, and whether it matches up with this.


We're about 1.5 people doing ops work, but no one is on it full time; I'm the main person responsible and I have someone who helps me when he's interested. We are 5 developers.

Our whole platform is between 10-50 EC2 machines running a Nomad cluster; Nomad manages our Docker containers, with services backed by RDS.

I think managed services were in their infancy when we did our initial research back in 2017/2018. Tectonic+Kubernetes with CoreOS looked promising, but they were bought by Red Hat and probably rebranded/merged/disappeared into OpenShift. EKS was in beta and only available in the US (we're in the EU).

We did try Rancher but we hit issues with it.

I don't know if K3S existed yet, but just looking at the diagram on their website it does look quite interesting.

We launched Consul first and started defining all of our services, and after that we started moving applications into Nomad.

HA has been quite easy with Terraform on EC2. We build "golden-images" with Packer, and then launch them with Terraform, upgrading Consul is adding 3 new servers, making sure things are stable, and then removing the 3 old servers.
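The Terraform side of that is not much more than a fixed count of instances off the golden AMI (a sketch; the variable names are just illustrative):

  resource "aws_instance" "consul_server" {
    count         = 3
    ami           = var.consul_golden_ami   # output of the Packer build
    instance_type = "t3.medium"

    tags = {
      Name = "consul-server-${count.index}"
      role = "consul-server"
    }
  }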


> Most of the moving parts for k8s are easily handled if you go for a managed service.

Which large parts of Europe currently cannot do, or are at least unwilling to risk doing, due to Schrems II. Amazon, Google and Azure are the three providers with the best managed Kubernetes services, or at least those that require the least involvement from the user.

Otherwise I completely agree: Kubernetes is less of an overhead if you can rely on managed services. I do question the idea that Nomad can't easily do HA. If you can build an HA environment for Kubernetes, you can just as easily do the same for Nomad.


There are plenty of shops running Nomad; Cloudflare is probably one of the most prominent. Also, due to its simplicity and ease of deployment compared to k8s, my guess is that there is a large number of small shops running Nomad because they can actually manage it, but those are exactly the shops that don't have a public presence or write engineering blog posts.


Definitely. As a solo founder I really want to keep complexity down. I didn't already have significant experience with k8s, but what experience I had steered me away from it for a one-person setup*. I opted for Nomad and haven't regretted it so far.

I know k8s is more popular (meaning there is broader community support and a larger ecosystem around it), and has an extremely diverse set of features available. I don't doubt it is a sensible choice for many deployments, but I really don't think that these properties are a reason to dismiss alternatives. Nomad works well, and even supports various use-cases that k8s doesn't.

* for this particular project managed options weren't really an option.


> but honestly who isn’t using k8s at this point in time

Most of the market outside of a small 'cloud-native' echo chamber. I use Kubernetes currently, and almost everything related to it would be simpler, more scalable, and generally less bad using Nomad.


We cycle north of a million containers a day through all our Nomad clusters. Scales, functions, breaks, and is repairable all in predictable ways. The same is true for Vault and Consul.

Also, if you run Windows, Nomad is your absolute best choice. I understand k8s works on Windows, however I don't know a single individual running Windows containers.
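In a mixed cluster, pinning a job to Windows nodes is a single constraint block (a sketch; the image is just a placeholder):

  job "win-service" {
    datacenters = ["dc1"]

    # Only place this job on Windows nodes.
    constraint {
      attribute = "${attr.kernel.name}"
      value     = "windows"
    }

    group "svc" {
      task "svc" {
        driver = "docker"

        config {
          image = "mycompany.example/legacy-dotnet-service:latest"
        }
      }
    }
  }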


My next project will use Nomad over Kubernetes, and the reason is that I want to keep complexity down. I haven't used rancher, but another layer won't remove complexity, just hide it (until the abstraction leaks).


I introduced Nomad ~4 years ago in an organisation that requires everything to run on-prem. It has been pretty successful, with adoption expanding beyond its original use-case.

Not dismissing k8s here but if you're running Docker Swarm (dead) or Mesos (also dead) on-prem I'd think it's wise to also take a look at Nomad.


> honestly who isn’t using k8s at this point in time?

The HN bubble can be fascinating at times. The answer is: a lot of people, and a lot of companies. Like A LOT.


Another person here who isn't using k8s.

We're running 1000+ containers on Nomad and it just works: it needs almost no maintenance, and Nomad upgrades happen with zero downtime.


We're still struggling to figure out why people want to use Kubernetes. I don't mean that in a "Kubernetes is bad" kind of way; we sell a lot of Kubernetes, both running clusters and just general consulting.

It's really, really hard to find customers who couldn't just run their containers on one or two VMs. More and more we see developers getting Kubernetes added to the requirements, and then they show up with as little as ONE container.

The argument I often hear is: then we can scale up during peak hours. Well... yes, but this is a cluster limited to just you. You still need to pay for the worker nodes and the control plane. Those don't just turn off. Most people don't seem to grasp that running Kubernetes is often more expensive than running VMs or EC2 instances.

You need to have a large number of smaller containers and a lot of interaction between these containers for Kubernetes to make much sense, in my opinion. It's a solution for a small subset of problems, and if your problem is outside that subset, the management overhead might not be worth it.


> It's really really hard to find customers who couldn't just run their containers on one or two VMs

If it runs on two VMs, how do you handle networking between them, or updating the containers? Once you actually start to run a container, kubernetes solves a bunch of problems straight away that you have to solve yourself if you use something else.

> You still need to pay for the worker nodes and the control plane.

Surely if you're running k8s at any scale you're autoscaling your worker nodes, and if you're spending enough to actually scale your workers, the cost of the control plane is negligible compared to the rest (e.g. on AWS it's $30/month per cluster).


> If it runs on two VMs, how do you handle networking between them, or updating the containers?

You do know that networking existed before containers, right?

In our case though: we already have a large multi-tenant load balancer, so any traffic that needs to be able to hit either container is routed that way. Inter-container communication just runs over the network, as with any normal application.

Deployment is a little tricky, and here Kubernetes does present a good, but confusing, way of managing deployments. However, we find that for most setups Ansible is actually just fine: remove the VM from the load balancer, upgrade that VM, swap it out for the other VM and upgrade that. This is something we solved before Kubernetes was available.

It's my belief though that many of those developers who want Kubernetes actually want kubectl, because it's an easy way for them to deploy code to production. They in fact don't really care what runs their code in the end.

I mostly deal in on-prem Kubernetes, so autoscaling workers isn't really a thing. The hardware NEEDS to be present at all times. Your point about the cost of the control plane VMs is correct, but what if your application runs fine on roughly the same resources as a control plane node (something like 4GB of RAM vs. 2GB, and maybe 2 CPUs)? My issue is that I see people pick Kubernetes for insanely small projects.


> You do know that networking existed before containers right?

I do, but the point of my question was that kubernetes solves networking, and load balancing, and upgrading, and many other issues.

> It's my belief though that many of those developers who want Kubernetes actually want kubectl,

You're not wrong - kubectl is half the reason I use kubernetes. Solving the problems it does is reason enough to use it.


> You're not wrong - kubectl is half the reason I use kubernetes. Solving the problems it does is reason enough to use it.

And that is honestly perfectly reasonable. I'd argue that it doesn't come for free, but if the benefits outweigh the cost then go for it. Again, I just don't often see customers who have so clearly argued as to why they'd need Kubernetes and not for instance Nomad.


Competition is good, and from my experience with both, Nomad feels like a more complete product than Kubernetes.


If I've learned anything in recent years, it's this: stay away from HashiCorp stuff as far as you can.

It is all nice and shiny on paper, but applied to "real world" conditions … it sucks.

It works, as long as you never touch it again. But if you dare to look at it with the wrong eye, it will break, and when it breaks, it breaks beyond all repair. Every time.


I'd love to know why you think that. Is there a story there?



