
Ask HN: What configuration management tool would you choose in 2017? - deeevops
Chef, Puppet, Ansible, SaltStack, CFEngine and others... what's your pick if you were starting today?
======
stuffaandthings
Ansible, because I like having an agentless configuration management tool. In
my experience, Ansible also seems the most readable (obviously subjective).
I've used Puppet and Chef, but only through AWS OpsWorks.
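For illustration, a minimal Ansible play reads close to plain English (a hypothetical example; the host group and package are made up):

```yaml
# Hypothetical playbook: install nginx and keep it running.
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Start nginx and enable it at boot
      service:
        name: nginx
        state: started
        enabled: true
```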

That said, I've been playing around with Kubernetes lately and trying to move
a lot of our infrastructure onto it. The use of Dockerfiles largely nullifies
the need for a full-fledged configuration management tool for me, and I've
been relying on bash scripts and distributed KV stores to manage state and
environment variables (following the 12-factor approach, with environment
variables in Consul and secrets in Vault).
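A minimal sketch of that bootstrapping approach, assuming the `consul` and `vault` CLIs are installed and pointed at live servers (the key paths, field names, and binary path here are all hypothetical):

```sh
#!/bin/sh
# Hypothetical container entrypoint: pull config from Consul and
# secrets from Vault into environment variables, 12-factor style.

# Plain configuration lives in Consul's KV store.
export DB_HOST="$(consul kv get myapp/config/db_host)"

# Secrets live in Vault; -field extracts a single value.
export DB_PASSWORD="$(vault read -field=password secret/myapp/db)"

# Hand off to the actual application process.
exec /usr/local/bin/myapp
```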

~~~
nikolay
Really?! Writing pseudo-Bash scripts in YAML? And why do you need this in the
age of Docker, Kubernetes, and Immutable Infrastructure? On AWS, with no
cloud-neutral goal on the roadmap, I'd use native services: declarative
infrastructure via CloudFormation, and HashiCorp's suite for everything else.
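As a sketch of that declarative approach, a minimal CloudFormation template looks something like this (the AMI ID and key pair name are placeholders):

```yaml
# Hypothetical CloudFormation template: a single EC2 instance.
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example stack
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678   # placeholder AMI ID
      InstanceType: t2.micro
      KeyName: my-keypair     # placeholder key pair name
```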

------
citruspi
Chef. I've previously used Ansible and SaltStack. I haven't tried any others.

When I first started using Chef, it was for work and I wasn't a fan at all.
But, it's grown on me to the extent that I replaced all my Ansible-maintained
infrastructure with Chef and I now maintain my own personal Chef server.

It can definitely be overcomplicated in some scenarios, but I'm comfortable
using it, and of the tools I've tried, I feel it allows me to be the most
expressive. I'd much rather write a Ruby Chef recipe than a YAML Ansible
playbook.
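As a sketch of that expressiveness, a basic "install nginx and keep it running" recipe is ordinary Ruby built on Chef's resource DSL (a hypothetical recipe, not from any real cookbook):

```ruby
# Hypothetical Chef recipe: install nginx and keep it running.
package 'nginx' do
  action :install
end

service 'nginx' do
  action [:enable, :start]
end
```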

The fact that Chef's Management Console UI[0] is available for free (for small
deployments) while Ansible Tower[1]'s pricing starts at $5000/yr also factors
into my choice. It appears that Red Hat has plans to release a free, open-source
version of Tower[2], but there's nothing there yet. Looking online, it seems
SaltStack maintains a GUI management console of some sort, but it's only
available for SaltStack Enterprise.

[0]: [https://docs.chef.io/manage.html](https://docs.chef.io/manage.html)

[1]: [https://www.ansible.com/tower](https://www.ansible.com/tower)

[2]: [https://www.ansible.com/open-tower](https://www.ansible.com/open-tower)

------
richardknop
I'd pick none of them. I'd create a CoreOS cluster and configure it with
cloud-init files. For more complex projects I'd choose Kubernetes or something
similar, such as Cloud Foundry.

The point being, I'd try to get as far away from server configuration as
possible and limit it to setting up the platform as a service, so engineers
can just push containers to that platform.

I'd also consider serverless architecture.

Long story short: limit server configuration to the bare minimum. And for that
minimum I'd use Terraform and Ansible if needed (though Kubernetes can be
deployed with just Terraform and cloud-init files, IMHO).
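For reference, a CoreOS cloud-config along those lines is just a short YAML file (the SSH key and unit here are placeholders):

```yaml
#cloud-config
# Hypothetical CoreOS cloud-config: add an operator's SSH key
# and start etcd on boot.
ssh_authorized_keys:
  - ssh-rsa AAAA...placeholder ops@example.com
coreos:
  units:
    - name: etcd2.service
      command: start
```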

~~~
jetsnoc
This is a good strategy and one my teams are moving closer towards. We're
using Jenkins + Docker Swarm. If Jenkins receives a webhook that there is a
new stable version tag, it builds the container, pushes it to the registry,
and instructs the swarm to drain connections and do a rolling update of the
container. In this regard, configuration management only matters for the
swarm cluster nodes, e.g. managing our users and SSH keys and building up
the nodes. We use Terraform and Chef to create and manage our swarm cluster
nodes, and Jenkins + Docker Swarm for everything else.
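The rolling-update step in a pipeline like that boils down to a couple of Docker commands (the registry, image, and service names are hypothetical, and this assumes a running swarm):

```sh
# Hypothetical deploy step run by Jenkins after a successful build.

# Push the freshly built image to the registry.
docker push registry.example.com/myapp:1.2.3

# Ask the swarm for a rolling update of the running service;
# --update-delay spaces out the replacement of each task.
docker service update \
  --image registry.example.com/myapp:1.2.3 \
  --update-delay 10s \
  myapp
```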

------
milesvp
Be wary of Chef if you plan on any kind of autoscaling. We had to deploy some
pretty big hacks to get around some core assumptions in Chef in order to use
AWS autoscaling.

On the other hand, we did make autoscaling work with Chef. Not sure how any of
the others would fare, since a relatively static server list was a pretty safe
assumption until about six years ago.

~~~
citruspi
What kind of issues did you face using Chef and autoscaling? We use Chef to
configure servers which are managed by an AWS autoscaling group without any
issues.

~~~
milesvp
I may not know enough specifics to be useful, but we ran into issues where the
solution was to sort of have servers spin up and bootstrap themselves until
they could finally locate and cede control to the Chef server. It may have had
something to do with how we use AMIs, but the way our ops engineer talked
about it, it seemed antithetical to how Chef is intended to be used.

------
tristram_shandy
SaltStack for configuration management, absolutely. We use it to manage our
Linux servers, Macs, and the few Windows desktops that linger around the
office. We also use it to bootstrap Kubernetes, audit configuration of some
clusters, and provision dev machines.

Salt can also run agentless, is much, much faster than Ansible, and has
excellent support for both Docker and Kubernetes, which is a huge plus for us.
Everything that can't be put in a Dockerfile gets run through Salt.
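For a sense of what that looks like, a Salt state is a short YAML file (a hypothetical state; the package is just an example):

```yaml
# Hypothetical Salt state: ensure nginx is installed and running.
nginx:
  pkg.installed: []
  service.running:
    - enable: True
    - require:
      - pkg: nginx
```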

------
mbrock
I try to use Nix and NixOS as much as I can.

