
On this note, HN users and several HC employees (including some enterprise architects) I've spoken to love to mention Nomad and how much lighter and simpler it is to run, seemingly oblivious that many bigger names have tried and failed to go up against K8s.

Kubernetes has become a speeding bullet: it won't be easy to catch, and it would have to run through a lot of walls to slow down enough for a competitor to do any damage to its dominance.




> completely oblivious that many with bigger names have tried and failed to go up against K8s

Ha, I can at least assure you we're not oblivious. Nomad has a product manager from Mesosphere and most of HashiCorp's products integrate deeply with Kubernetes. We're well aware Kubernetes is the juggernaut to which every other scheduler has succumbed.

I believe there's room for both Nomad and Kubernetes. Whether as competitors or complements, having more than one vibrant project in the orchestration space will hopefully make all the projects better for users. Any one project has to make tradeoffs that improve the user experience for some while degrading it for others. For example, Kubernetes has an IP-per-Pod network abstraction which provides an excellent out-of-the-box PaaS-like experience for microservices in the cloud. On the other hand, Nomad's more flexible network model more easily supports networks outside of flat cloud topologies, whether small home Raspberry Pi clusters or globally distributed edge compute.


Sorry, I think I came across as dismissive of Nomad, which I'm not. I work with multiple products in the HC stack daily but have yet to work with Nomad. However, from the bits I have seen and read, it seems like a great tool, and I agree there is a place for Nomad in the current ecosystem, just like there is a place for ECS. Even if I didn't think so, at the very least, like you said, it's good to have competition.

What I was calling out is the abundance of people who claim it is the greatest thing since the tech equivalent of sliced bread, and who love to point out how simple it is despite it requiring Consul to run at enterprise scale, and likely also Vault if you need secrets management in any form, which in turn also needs Consul if you want to run it at scale. I have intimate experience with running Vault and Consul and have advised colleagues for the better part of two years on utilising Vault (either as basic secrets management or some of its more advanced features). IIRC the recommendation from HC is also to run a third Consul cluster for service discovery. If running Nomad at scale is anything like Vault, then it isn't as simple as people make it out to be, never mind the fact that you'll probably be running 5 different clusters of 3 different products to provide functionality that doesn't fulfill half of what Kubernetes gives.


> If running Nomad at scale is anything like Vault, then it isn't as simple as people make it out to be, never mind the fact that you'll probably be running 5 different clusters of 3 different products to provide functionality that doesn't fulfill half of what Kubernetes gives.

The complexity of setting up a Nomad cluster due to Vault and Consul being soft requirements is very real and something we're hoping to (finally!) make progress on in the coming months.


You would need to make it as simple as running k3s on 1, 3, or 5 nodes, and probably also have stuff like the system-upgrade-controller; another good idea is to work on Flatcar Linux, etc. k3s is so good for small to mid-sized metal clusters and also works on a single node. It's so simple to start that you will have a rough time trying to gain momentum in this space, which will also not generate a lot of revenue.

The only thing which is not so easy is load balancing to the outside, i.e. kube-vip/MetalLB; both can be a pain (externalTrafficPolicy: Local is still a shit show). k3s basically creates a k8s cluster with everything except good load balancing across multiple nodes.


A single Nomad process can act as both the server (scheduler) and the client (worker). Nomad can run in single-server mode if highly available scheduling is not a concern (workloads will not be interrupted if your server does go down) by setting bootstrap_expect=1 instead of 3, 5, 7, etc. You can always add more servers later to make a non-HA cluster HA. No need to use different projects to set up different clusters. Clients can be added or removed at any time with no configuration changes (people running Nomad in the cloud generally put servers in one ASG and clients in another).
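A minimal single-node setup along those lines can be sketched as one agent config with both roles enabled (the data_dir path here is illustrative):

```hcl
# Sketch: one Nomad agent acting as both server (scheduler) and client (worker).
# bootstrap_expect = 1 means no HA quorum; bump to 3 or 5 and add servers later.
data_dir = "/opt/nomad/data"

server {
  enabled          = true
  bootstrap_expect = 1
}

client {
  enabled = true
}
```

Started with something like `nomad agent -config=agent.hcl`, this gives a working one-node cluster you can grow by pointing additional servers and clients at it.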

Nomad does not have a first-class load balancing concept, in keeping with its unopinionated network model, although we may add ingress/load balancing someday. Right now most people use Traefik or Nginx with Consul for ingress, and Consul Connect is popular for internal service mesh. That's obviously unfortunate extra complexity compared to having it built in, but Nomad has focused more on core scheduling than on these ancillary services so far.
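The Traefik-plus-Consul pattern mentioned above typically looks like this in a job spec: the task registers a service in Consul, and a separately deployed Traefik (using its Consul catalog provider) routes to it based on tags. The job name, image, hostname, and tags below are illustrative, not a prescribed setup:

```hcl
# Sketch: a Nomad job registering its service in Consul so an external
# Traefik instance (Consul catalog provider) can pick it up for ingress.
job "web" {
  datacenters = ["dc1"]

  group "app" {
    network {
      port "http" {} # dynamically allocated port
    }

    task "server" {
      driver = "docker"

      config {
        image = "nginx:alpine"
        ports = ["http"]
      }

      service {
        name = "web"
        port = "http"
        # Tags consumed by Traefik's Consul catalog provider (illustrative).
        tags = [
          "traefik.enable=true",
          "traefik.http.routers.web.rule=Host(`web.example.com`)",
        ]

        check {
          type     = "http"
          path     = "/"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
```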


Great perspective. Thank you for your comment!


Thing is, Nomad isn't trying to go up against K8S. It's part of the HashiCorp family of products, so if you use a couple of those, it's easy to add Nomad into the mix, or just use K8S. While other competitors have tried to outdo K8S, HashiCorp seems to try to work with the broader ecosystem, which means some will use Nomad just to keep things simple, but nobody needs to for HashiCorp to continue working for them in other areas.


> Thing is, Nomad isn't trying to go up against K8S.

Hard[0] disagree[1] on this[2]. Nomad is a direct competitor of Kubernetes.

[0] https://www.nomadproject.io/docs/nomad-vs-kubernetes

[1] https://www.hashicorp.com/blog/a-kubernetes-user-s-guide-to-...

[2] https://www.hashicorp.com/blog/nomad-kubernetes-a-pragmatic-...


Nomad can compete with a fraction of k8s, a bit like all the other HashiCorp products.


We will see if the next hot scaling/clustering/orchestration solution replaces k8s in a decade from now. Fingers crossed.

There have been lots of speeding, hard-to-catch bullets in the past.



