
Salt: Like Puppet, Except It Doesn’t Suck - Baustin
http://blog.smartbear.com/devops/a-taste-of-salt-like-puppet-except-it-doesnt-suck/
======
SandB0x
I'm not a web developer, but I have a side project that runs on a
cobbled-together EC2 instance. The server state is, in theory, documented
in a set of shell scripts and virtualenv requirements files.

I know that I should be doing this in a more robust way but whenever I try and
read up on configuration management tools like Puppet and Chef, they're all
described in comparative terms - Puppet does X better than Vagrant which does
Y better than Chef, etc. I quickly lose patience and get back to digging
myself into a deeper technical hole.

Is there a non-recursive explanation of what these tools are able to do and
where someone like me should start?

Edit: Thanks for the helpful responses!

~~~
the1
if you're using EC2, why not just create an AMI and forget about configuration
tools?

~~~
bfrog
Having done this a dozen or more times... can you package your AMI for your
developers to use? Can you remake your image exactly as it was when you
first built it, should it be lost in the cloud? (100% reliability isn't
something AWS provides.)

Provisioning tools let you create your image, and incrementally update it,
in a way that lets you redo it from scratch at any time.

That alone is the reason why I like the idea, not necessarily the resulting
applications that have been created so far for the task.

Shell scripts could do the same, and have for years, if you're only
interested in from-scratch setups.

~~~
gyepi
This is not a difficult problem. I recently had to stand up a cluster of
EC2 instances for a job, and used these three steps:

1. Write a script to configure an instance and run it when the instance
starts.

2. Clone the instance to an image.

3. Run instances based on the image.

It's quite straightforward to do. See
[http://github.com/gyepisam/fcc-textify](http://github.com/gyepisam/fcc-textify)
for more details.
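
A hedged sketch of step 1 as EC2 user-data in cloud-init form (the script
URL is a hypothetical stand-in; the repo above shows the real thing):

    #cloud-config
    # runs once, on first boot of the instance
    runcmd:
      - [ sh, -c, "curl -s https://example.com/configure-instance.sh | sh" ]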

------
tptacek
I've used Fabric, Chef, Puppet, and Ansible, and have settled on Ansible; it's
a sort of middle ground between Fabric and Chef that does more than just run
commands on servers but doesn't require me to buy into a whole elaborate
universe of configuration management servers and whatnots. Ansible is great.

The ZeroMQ stuff makes sense if you're pushing configurations inside a data
center, but for us, with things hosted externally, it's a dealbreaker.

~~~
frankwiles
I don't understand why ZeroMQ outside of a data center would be a deal
breaker for anyone. You do realize the data on the wire is encrypted,
right?

~~~
tptacek
Awesome. How many people have reviewed it for flaws? How many people have
reviewed OpenSSH?

~~~
emidln
How many people have reviewed Paramiko? In particular, how about that ecdsa
patch[1] to Paramiko that you'll need in order to access modern Ubuntu or
Fedora (and, before long, RHEL/CentOS)? What about python-ecdsa[2], which
Paramiko's provisional support for modern Fedora's and Ubuntu's default
configs is based on? This entry from its README seems pretty frightening:

    
    
        This library does not protect against timing attacks. 
        Do not allow attackers to measure how long it takes you 
        to generate a keypair or sign a message. This library
        depends upon a strong source of random numbers. Do not
        use it on a system where os.urandom() is weak.
    

I'm not saying Paramiko (or its patch sets) is insecure, just pointing out
that the same arguments can be made against the libraries and code that
Ansible is based on.

[1] -
[https://github.com/paramiko/paramiko/pull/152](https://github.com/paramiko/paramiko/pull/152)

[2] -
[https://github.com/warner/python-ecdsa](https://github.com/warner/python-ecdsa)

~~~
dsl
> Do not use it on a system where os.urandom() is weak.

So, don't use it in the cloud? [1]

1. [http://harvey.binghamton.edu/~ychen/chen-kerrigan.pdf](http://harvey.binghamton.edu/~ychen/chen-kerrigan.pdf)

------
contingencies
Salt/Puppet/whatever. I ignore them all. Why? I have put a lot of thought
into this area.

IMHO, the _overwhelming_ problem with salt/cfengine/puppet-style solutions
(which I will refer to as 'post-facto configuration tinkerers', or PFCTs)
is that they potentially accrue vast amounts of undocumented/invisible
state, creating what I refer to as _configuration drift_.

IMHO, a cleaner solution is to deploy configuration changes _from scratch_,
by deploying clean-slate instances with those changes made. In addition,
versioning one's environment in this way creates an identifiable point
against which to execute automated tests. (This class of solution I refer
to as 'Clean-slate, Identifiable Environments', or CSIEs.) Examples are
Amazon AMIs and any other kind of versioned/identified VMs.

PFCTs' deployment paradigm tends to be relatively slow and error-prone;
CSIEs' tends to be fast and atomic. PFCTs are headed for the dustbin of
history. They are temporary hacks that clearly grew from old-school
sysadmins' will to script. CSIEs embrace modern-day devops: they are more
holistic entities that embrace virtualization and recognize the integrity
of the environment as critical to preventing ridiculous numbers of
environment-induced, service-level issues - an expensive tangent to service
development, testing and deployment. Thus, I would argue that what we are
looking at with PFCTs is a failed paradigm, and with CSIEs, a now real and
current opportunity for something far more elegant.

(Disclaimer: I haven't tried ansible or vagrant first-hand, but they do
seem to be PFCTs to me.)

~~~
mechanical_fish
_a cleaner solution is to deploy configuration changes from scratch, by
deploying clean-slate instances with those changes made._

All of us who have built big cloud-server clusters have dreamed of this plan
at least once. But there are big practical problems.

Relaunching infrastructure is easy in theory, but from time to time it becomes
very difficult. There is nothing like being blocked on a critical upgrade
because your Amazon region has temporarily run out of your size of instance,
or because the control layer is having a bad day, or because you've
accidentally hit your instance limit in the middle of a deployment, or...

A much bigger issue is that bandwidth is finite, so "big" data is hard to
move. This is a matter of physical law. It's all well and good to declare that
you're never going to apply a MySQL patch in place: You're just going to
launch a new instance with the new version and then switch over. But however
fast you manage to launch the new instance (and you will be hard put to launch
an instance faster than you can apply a patch and restart a daemon...) you
will be limited by the need to copy over the data. Have you ever tried copying
half a terabyte of data over a network in an emergency while the customer is
on the phone? It is _very_ annoying. Because it is often physically impossible
to do it quickly: Cloud infrastructure isn't generally built for that, and
when it is it costs money that your customer will not want to spend for the
luxury of faster, cleaner patch-application.

A solution to this is to use cloud storage like EBS. Now your data sits in EBS
and you just detach its drive and reattach it to a new instance. That actually
works okay, provided you're happy with the bandwidth and reliability of EBS,
which lots of people aren't – and, as those people will cheekily point out,
you have now solved the "relaunches are slow" problem by replacing it with an
"everything is uniformly slow" problem. Moreover, detaching and reattaching
EBS volumes isn't instantaneous either. You have to cleanly shut down and
cleanly detach and cleanly restart, and there's like 12 states to that
process, and all of them occasionally fail, and if you don't want your service
to go down for thirty seconds every time you apply a patch you need a ton of
engineering.

Which brings us to the other problem: Complexity. Most programmers are not
running replicated services with three-nines-reliable failover that never
breaks replication. But even if you are, because you've got the budget for
excellent infrastructure and a great team, it will always - for values of
"always" measured in several more years, anyway - be more complicated and
risky to fail over a critical production service than to apply, say, a
security patch to 'vi' in place on a running server. 'vi' is not in your
critical path. If you accidentally break 'vi' on a live server ( _and you
won't, because vi is older than dirt and solid as a rock_ ), you will have a good
laugh and roll it back. Why risk a needless failover, which _always_ has a
chance of failure, when you could just _apply the damn patch_ and thereby
mitigate risk?

At Google scale that argument probably stops applying. But most people don't
run at that scale and it will take decades to migrate everyone to a system
that does, if that even happens.

So, "dustbin of history", maybe, someday, but in the long run we are all
retired, and I will be retired before our dream becomes reality. ;)

~~~
contingencies
The bulk of your comment - your second, third and fourth paragraphs -
focuses on issues of speed, bandwidth and reliability in a third-party
hosting/cloud-based architecture, which are a design-time tradeoff, so I
don't see them as strictly relevant (though anecdotally informative).

Your fifth paragraph describes problems related to operations process, which
are entirely avoidable.

~~~
mechanical_fish
Well, okay. Give my regards to Saint Peter and all the angels!

------
susi22
IMO, ansible is even better:

[https://github.com/ansible/ansible/](https://github.com/ansible/ansible/)

It doesn't require any daemon and does all its work the good old Unix way:
over SSH. And it's Python, too.
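
For a flavor of what that looks like (a minimal sketch; the host group and
package are hypothetical), a playbook is plain YAML executed over SSH:

    # site.yml -- run with: ansible-playbook -i hosts site.yml
    ---
    - hosts: webservers
      tasks:
        - name: install nginx
          apt: pkg=nginx state=present
        - name: ensure nginx is running
          service: name=nginx state=started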

~~~
lobo_tuerto
From the article:

"Chef works atop ssh, which – while the gold standard for cryptographically
secure systems management – is computationally expensive to the point where
most master servers fall over under the weight of 700-1500 clients. Salt’s
approach was far simpler."

Does that assertion about Chef somehow not apply to Ansible?

On the use case:

"I have this command I want to run across 1,000 servers. I want the command to
run on all of those systems within a five second window. It failed on three of
them, and I need to know which three."

~~~
susi22
Well, ansible by default runs with paramiko, which is a Python
implementation of the SSH protocol. It will also keep connections open for
multiple commands. It also has a pull mode, and a fireball mode which uses
0mq:

[http://jpmens.net/2012/10/01/dramatically-speeding-up-ansible-runs/](http://jpmens.net/2012/10/01/dramatically-speeding-up-ansible-runs/)

However, you're not forced to use this. In the beginning, you can just seed
your CentOS or Debian box with a Kickstart or seed file and then run your
initial setup with ansible simply over ssh (using all the goodies:
ssh-agent, passwordless ssh, etc.).

One huge plus for ansible is also that it uses YAML, which is rather
simple. I've been following both projects for >1 year and it seems that
recently ansible has picked up a lot and will probably win the "race"
(IMO).

~~~
danudey
Salt also uses YAML for its configuration backend (by default). You can
also write your states in Python if you prefer, with all the power that
brings (including pulling data from databases, making remote API calls, or
whatever you like).
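
A hedged taste of that YAML default (names are hypothetical; swapping the
first line of a state file for a '#!py' shebang is what switches it to the
Python renderer):

    # motd.sls -- rendered as Jinja + YAML by default
    /etc/motd:
      file.managed:
        - source: salt://motd/motd.tmpl
        - template: jinja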

------
jtreminio
I _was_ frustrated with Puppet when I first started. All I wanted was a VM to
install a few things so I could do some development and not have to worry
about managing my VM.

It turned out to be a rabbit hole. As soon as I thought I learned just enough
to get it running, something else popped up that stopped me.

That's why I created PuPHPet [1]. So far the reception has been fairly
positive.

At one point in my learning, I got fed up and tried Salt. I couldn't get the
Salt hello-world running. I followed the directions to a T. If your tutorial
is incorrect, or hard to follow just to get the most basic version up and
running, it will turn people away.

Also, this was all on top of Vagrant.

[1] PuPHPet - [https://puphpet.com](https://puphpet.com)

~~~
dave1010uk
PuPHPet looks really good. What's the default PHP setup? It would be great to
be able to switch between and configure mod_php, fastcgi, fcgid, PHP-FPM,
suPHP, suExec, etc.

~~~
jtreminio
Default is:

* Ubuntu Precise 64 Bit (12.04.2 LTS)

* Apache

* PHP 5.4

You can switch between Apache/Nginx and PHP 5.4/5.3 (5.5 coming today!)

------
memset
I think salt is neato, but I also find it very frustrating to use! (Possibly
through no fault of salt itself - I feel like I must be missing something.)

I am generally able to SSH into a box and get things configured the way I
need. However, I have huge amounts of trouble translating that into salt
scripts.

Consider logrotate. Here is the only documentation I can find on the topic
[1]. From this, I have _no idea_ what to put in init.sls to make sure a
given log file is being rotated correctly. It seems this would work on the
command line, but not necessarily in a salt script.

And that's just for logrotate! My uWSGI + nginx configuration - translating
that into salt - I don't know where to begin.

How do I make sure things get installed in a certain order? (Answer seems to
be having 10 directives, for 10 packages, each depending on another, to
enforce order.)

Is there anything that more closely mirrors what I _actually_ do when
configuring the box? SSH in, set certain values, etc? I guess I could write a
shell script (or use fabric) but then I seem to have lost the point of
configuration management.

[1]
[http://docs.saltstack.com/ref/modules/all/salt.modules.logro...](http://docs.saltstack.com/ref/modules/all/salt.modules.logrotate.html)

~~~
Jedd
Hey memset - these sound like pretty straightforward questions (with
straightforward answers). Perhaps ask on the mailing list, or hop onto the
#salt channel on freenode IRC?

~~~
memset
That is a great suggestion. I almost never do this for fear of sounding
like a noob, but I ought to give it a shot more often.

~~~
tech-dragon
As one of the people on that list who may well respond there, I'll reply
here as well so it's on the record.

You will be very well off if you read and 'digest' the Salt docs on States
first, before moving on to modules, pillars, grains, custom returners, etc.

What you probably need to do with logrotate is take the configuration that
you normally set up on your servers and add it to your salt system. So
top.sls calls 'logrotate', running 'logrotate/init.sls', and that has a
definition that says: "I want logrotate installed, I want it running as a
service, and by the way, take the file 'logrotate/config.conf' and shove it
in /etc/logrotate.d/ as <correct filename>; P.S. if I change that file,
restart logrotate."

With States and the requisite declarations that let salt know what order
things need to happen in, you shouldn't have much trouble adding a simple
service like logrotate along with a specific config file for that service.
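
A hedged sketch of what that description translates to (file and target
names here are made up, and on many distros logrotate runs from cron rather
than as a service, so the service.running part may not apply):

    # top.sls
    base:
      '*':
        - logrotate

    # logrotate/init.sls
    logrotate:
      pkg.installed: []
      service.running:
        - watch:
          # restart if the managed config changes
          - file: /etc/logrotate.d/myapp

    /etc/logrotate.d/myapp:
      file.managed:
        - source: salt://logrotate/config.conf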

------
Goladus
I'm still looking for a configuration management system that doesn't assume
that the first step towards managing servers is to add a new "master" server.
From the thread, ansible looks promising. In the meantime I'll keep using
chef-solo until opscode kills it.

~~~
akoumjian
You can run salt masterless. See the quickstart.

[http://docs.saltstack.com/topics/tutorials/quickstart.html](http://docs.saltstack.com/topics/tutorials/quickstart.html)
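
Roughly, the masterless flow looks like this (a sketch against the
defaults; /srv/salt is the standard state root):

    # /etc/salt/minion -- use local files, no master
    file_client: local

    # then apply whatever states live in /srv/salt:
    #   salt-call --local state.highstate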

~~~
Goladus
Thanks - it may even be that a salt master is lightweight enough to be
worth configuring. But I should note there's a difference between "can run"
and "intended to run."

It's as much a semantic thing as a technical thing. Instead of thinking about
an unconfigured node as a "minion" awaiting orders and provisions from central
command, I prefer to think of a node like a stem cell, fully capable of
differentiating itself based on signals that it receives. You need a way to
update the DNA and a way to send the signals, that's it.

This may seem like a meaningless difference, since there is still value in
centralized services (package repositories, security, reporting, monitoring).
But it's still a subtly different focus and over time yields different
results.

For my part I think the distributed, organic "stem cell" way of thinking will
win out over "master/minion" in the long run.

------
kapilvt
The thing that bugs me about salt is the almost complete lack of
testing/coverage. They've had tons of egg-on-face releases for crypto bugs,
upgrade issues - things a basic test suite would have caught. I'd rather
not trust my production environments to something where it's a roll of the
dice whether a given release is working, secure, or upgradable.

------
uggedal
I used Puppet for a few years (and created a few modules for it:
[https://github.com/puppetmodules](https://github.com/puppetmodules)). I
switched to Salt a year ago. My main motivations were its simplicity
(YAML+Jinja), lower memory consumption, source code that's easier both to
read and to contribute to, and its support for both push- and pull-based
architectures.

If you want to get a feel for what salt looks like when managing some
servers and laptops, you can take a look at my states:
[https://github.com/uggedal/states](https://github.com/uggedal/states)

------
gaadd33
Is communication to/from ZeroMQ encrypted? If not, it seems like this
wouldn't be a very secure way to configure servers or distribute files over
anything other than a VPN or LAN.

~~~
DASD
They use AES and RSA but their implementation has had vulnerabilities.

[http://docs.saltstack.com/topics/releases/0.15.1.html](http://docs.saltstack.com/topics/releases/0.15.1.html)

[https://github.com/saltstack/salt/commit/5dd304276ba5745ec21...](https://github.com/saltstack/salt/commit/5dd304276ba5745ec21fc1e6686a0b28da29e6fc)

Here's a good article
([http://missingm.co/2013/06/ansible-and-salt-a-detailed-comparison/](http://missingm.co/2013/06/ansible-and-salt-a-detailed-comparison/))
with a comparison to Ansible that others are also mentioning here. Ansible
uses Keyczar, which seems more sane than rolling your own crypto, as many
readers here on HN know.

------
wunki
I just released an open-source package which enables you to create a
Django-centric stack on Vagrant with the help of Salt. It was indeed very
easy to write. You can check it out here:

[https://github.com/wunki/django-salted](https://github.com/wunki/django-salted)

~~~
StavrosK
Same thing for Ansible:
[http://www.stavros.io/posts/example-provisioning-and-deployment-ansible/](http://www.stavros.io/posts/example-provisioning-and-deployment-ansible/)

------
UtahDave
SaltStack also won at Gigaom Structure last week!

[http://gigaom.com/2013/06/20/devops-player-saltstack-wins-structure-launchpad-competition-and-investor-interest/](http://gigaom.com/2013/06/20/devops-player-saltstack-wins-structure-launchpad-competition-and-investor-interest/)

(I'm a SaltStack employee)

~~~
jemeshsu
A whole article about SaltStack winning the award, and not a link back to
the SaltStack website.

------
hi2usir
"Salt’s approach was far simpler."

Funny, that's how I feel about Ansible compared to everything else including
Salt.

~~~
crdoconnor
Funny, that's how I feel about salt compared to ansible.

------
spudlyo
_Chef works atop ssh, which – while the gold standard for cryptographically
secure systems management – is computationally expensive to the point where
most master servers fall over under the weight of 700-1500 clients._

It doesn't have to be this way. The situation where one host repeatedly needs
to talk to hundreds via SSH is precisely where the SSH ControlMaster socket
shines. This saves you a _ton_ of overhead by not having to start up and tear
down the session every time you want to issue a command via SSH.

I often use this trick on busy Nagios servers that execute many active checks
via SSH -- it works well.

------
cultureulterior
Personally, I don't think puppet sucks

~~~
susi22
The problem is that most devops people and sysadmins don't know Ruby, and
that's a HUGE disadvantage. In the end, configuration management will often
be done by sysadmins.

~~~
pilif
I'm only managing 24 machines with puppet, so nothing fancy, but I managed to
do all of the stuff I needed without writing a single line of Ruby code.

That was handy for me too, as while I'm somewhat familiar with Ruby, I'm no
expert at all. I can read Ruby no problem, and I can write Ruby that's
not-quite-idiomatic, but I'm terribly slow at it.

~~~
susi22
I forgot to mention that just installing Ruby is a HUGE PITA on anything
but the most common OSes. It took me 3h last week to get it installed on a
CentOS box. And I don't even want to try to get it running on our Solaris
hosts...

Point is: Python is the number one scripting language (after bash) for
sysadmins just like Perl used to be.

~~~
yxhuvud
yum install ruby

took you 3h? It may take slightly longer if you want 1.9, but it still
exists in Fedora, so it should not take 3h to solve.

~~~
susi22
Yes, I needed 1.9.3 for this silly software. And I had a slightly special
setup, so rvm failed to compile. I'm also very overwhelmed by rvm, gem,
bundler, etc... Python has pip, easy_install (old) and virtualenv, which
are just easier for me to understand. Ruby is too much magic and tries to
do everything automatically (IMO).

~~~
sciurus
Red Hat recently released Ruby 1.9.3 packages as part of "software
collections"; I assume CentOS is rebuilding these and making them available
the same as they do for other Red Hat Enterprise Linux packages.

[https://access.redhat.com/site/documentation/en-US/Red_Hat_Developer_Toolset/1/html/Software_Collections_Guide/](https://access.redhat.com/site/documentation/en-US/Red_Hat_Developer_Toolset/1/html/Software_Collections_Guide/)

------
AaronBBrown
This article makes a claim (...Puppet...Suck(s)), but does not even attempt
to explain what it is that sucks.

What, specifically, "sucks" about Puppet and Chef, and what is so much
"simpler" about Salt or Ansible? As an Ops guy who has been running Puppet
since 2008 (and Chef most recently) against hundreds of servers, I don't
see the simplicity reflected in the documentation, nor do I find Puppet or
Chef particularly complicated.

(Ok, Chef's attributes system is a bit confusing at first, but it is hugely
powerful.)

~~~
fusiongyro
From the article:

> Chef works atop ssh, which – while the gold standard for cryptographically
> secure systems management – is computationally expensive to the point where
> most master servers fall over under the weight of 700-1500 clients. Salt’s
> approach was far simpler.

I think I'm with you (without the experience): I find "it works over ssh" a
lot simpler than "we wrote a custom protocol on 0mq." Simplicity apparently
has lots of interpretations. I couldn't care less if ssh performs well enough
to support a trillion connections. In practice, you only need a handful,
usually one.

Maybe Salt is fantastic. I'm not really in a position to judge. The article
made it sound interesting to me, but I'm not sure attacking Chef/Puppet was
really necessary, especially since it wasn't really expounded on.

------
justincormack
I much prefer the immutable server model [1] to the puppet model. Build a new
tested server with the new config and roll that out.

[1]
[http://martinfowler.com/bliki/ImmutableServer.html](http://martinfowler.com/bliki/ImmutableServer.html)

~~~
rektide
There are words written about "automatic configuration" in that link, but I
don't see any guidance or information on what those configuration tools
are. The focus is certainly not on automatic configuration: the focus is on
use and re-use of images - on taking images, doing something to them, and
getting new images.

The notion is deeply flawed to me: using an image as a precondition for making
an image, over time, becomes an intractable mess and requires very careful
supporting documentation to prevent the scheme from devolving into a bunch of
"buckets of bits" with no transparency into what work has gone on to make it
that way.

The #1 thing that I enjoy about automated tooling is that I can take a bare OS
and spin up a complete new system in a matter of minutes, and I get to watch
that entire process happen before my eyes. There's no mystery, no external
dependencies, no existing work I'm riding off of: everything that happens is
visible to me in an immediate way.

There's a value & use to immutable images, but please decouple your
image-making from past images made: no one wants to root around to figure
out what twelve horrible things you did to install Java 9 on image
instances back whenever, nor are they going to have any fun reproducing it
on the twenty-nine active variants of that ancestor image when there's a
security fix to be done.

~~~
justincormack
I think most people use puppet to build their immutable images at present.
It is still rather different from running it on production. Sure, you
should not start from a non-reproducible point.

------
v0land
I use SaltStack for managing a render farm consisting of 73 Ubuntu nodes. My
requirements are rather simple, really: most states just install some
packages, put configuration files into place (sometimes using a template) and
enable/start services. However, I can't recall a single problem when setting
everything up. SaltStack is clean, simple, and just works.

------
boothead
This is timely! I've just started writing a set of salt states to capture
the setup of my new Dell XPS 13 (Sputnik) so I never have to go through the
pain of setting up xmonad, emacs and various other development-environment
stuff again.

What I really like about salt is that everything is in one place, and it
all goes towards building the same data structure that everything runs off
of.

------
knowshan
Both Salt and Ansible look interesting. It's much easier to define system
state using Ansible or Salt than with Puppet.

However, I am not sure how one would use Ansible where VMs get launched
dynamically (a private cloud/virtualization fabric where devs can
instantiate systems) and then receive their configuration without any
manual steps.

For example, one can create kickstart/VM-images which get a hostname based
on a certain regex pattern and register with a Puppet master; the Puppet
master auto-signs certs matching this specific hostname pattern, and the
client nodes then receive their catalog. This is a really useful pattern
wherein systems pull their configuration state almost immediately after
boot. It requires manual setup only when writing the kickstart/VM-image
profile and the Puppet master configuration.

Ansible's SSH key setup requires manual intervention; however, I think it
can be automated using pre-defined keys in kickstart/VM-images. Haven't
tried it yet though...

~~~
killing_time
Yes, having predefined keys in your VM images does the trick, and is exactly
what we do for (almost) zero-intervention deployments of our servers in my
particular environment.

We tend to destroy and recreate servers more often than we scale out, so we
haven't bothered to remove the manual step of adding the server's hostname to
the ansible inventory_hosts file. However, that's easily automatable...

Ansible will _execute_ your inventory_hosts file if it's executable, and IIRC
it just needs to return a JSON or YML data structure representing all your
servers and the groups they're in. So, as long as you have a library which can
query your infrastructure (e.g. boto for EC2 etc) it's not hard to automate
this.
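
For reference, an executable inventory just prints JSON mapping groups to
hosts, roughly this shape (the group, hosts, and var below are
hypothetical; check the Ansible docs for your version):

    {
        "webservers": {
            "hosts": ["web1.example.com", "web2.example.com"],
            "vars": {"http_port": 80}
        }
    }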

------
dmohjoryder
What I prefer about ansible above all others, besides its simplicity, is
that its use case scales up and out. By that I mean ansible can be used for
platform/app-stack provisioning while OS/infra sysadmins use another tool.
Too often an agent-based approach causes a conflict between OS sysadmins
and the platform/app team over ownership/sharing. I want to offer
self-service as much as possible. Further, most cfg mgmt tools are
monolithic in that they want to manage all servers as though a single
team/overlord manages them all, rather than various independent sysadmin
teams. With various independent teams it's just too much hassle trying to
share roles appropriately or set up separate masters/agents. Ansible does
not have these issues.

~~~
terminalmage
[http://docs.saltstack.com/ref/clientacl.html](http://docs.saltstack.com/ref/clientacl.html)

------
dysinger
Puppet is mature and has tons of cookbooks & community. You may not like it
but saying it "sucks" is not right. It works and is used tons.

Chef doesn't run over SSH in any environment I've used that wasn't a toy
(vagrant w/ chef-solo). Please fact check.

Fanboy article.

------
mncolinlee
We're actually a Windows-centric shop and have been actively evaluating
configuration management solutions for Windows-based virtual machines.
Initially, we were only looking at Puppet, Chef, and a commercial product
called uProvision along with Vagrant. I was surprised to find that Salt had a
real community behind it.

Our greatest challenge has been coming up with a tool which can manage
images for both VMware and Microsoft Hyper-V. This article introduced an
integration between Salt and libvirt called Salt Virt. Has anyone tried
this interface for managing images? Does it work better than the young
integration between Vagrant and libvirt?

~~~
mncolinlee
For anyone else looking, I just found Foreman. This seems to do exactly what
we're looking for, but it uses Puppet instead of Salt. Even if it requires a
Linux server, making our solution more complicated, it appears to meet our
needs very nicely.

[http://theforeman.org/](http://theforeman.org/)

------
misiti3780
I use fabric for everything, just because I don't have time to learn
another one of these technologies. This salt article seems great - but at
the end of the day (and I may be way off base here) all I want to do is
install a given version of a piece of software on my server. I don't want
to create a recipe (chef), or learn another configuration format (sounds
like I would need to do this with Salt stack), etc. My fabric file really
seems to do only three things: use pip to install shit that is Python (I
use Django), use apt-get to install anything that is Ubuntu-specific, and
make wget calls to various pieces of software to pull them down and build
them from source. Until there is an easy way for me to do this without
needing to learn yet another technology, I will continue to use fabric (or
until the job of doing this gets so big I can hire a devops guy who
actually already knows this stuff, but I am not there yet :) ). Sorry for
the rant; it's just that every time I see these articles I wish I had time
to learn the technology, and then I realize I don't.

So - is it just me, or is there a big/huge learning curve for all of these
devops technologies?

~~~
tech-dragon
Yaml is a hair above "properly indenting my templates" as far as complexity
goes. You write django templates, you can handle Yaml ;-)

As the current 'devops guy' on a django project myself, salt works
wonderfully. Salt has states available that let you set up all that
software and create the virtualenv you need (including telling it to use
the requirements.txt that you pulled down with your django project source
code - Salt gives me my own little Heroku :D ), and for anything left in
those wgets you can throw in a block of salt cmd.run calls with specified
ordering so they run neatly in the sequence you desire.
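
A sketch of those pieces (paths and names are hypothetical; check the
virtualenv state docs for your Salt version):

    django-deps:
      pkg.installed:
        - names:
          - git
          - nginx

    /srv/venv:
      virtualenv.managed:
        - requirements: /srv/myproject/requirements.txt
        - require:
          - pkg: django-deps

    collect-static:
      cmd.run:
        - name: /srv/venv/bin/python manage.py collectstatic --noinput
        - cwd: /srv/myproject
        - require:
          - virtualenv: /srv/venv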

------
crb
_> MCollective (which Puppet Labs acquired several years ago) was (and
remains!) fiendishly complex to set up._

I didn't find MCollective hard at all - you just install some debs and a
message-queue server (Stomp was easiest at the time - it's now deprecated,
but surely it's not much different with RabbitMQ?) and it Just Worked for
me. And there was a great screencast.

Has it gotten far more complicated since I used it last?

~~~
druiid
Well, rabbitmq is kind of a pain to get working with it. Additionally, the
puppet modules for rabbitmq and mcollective don't really work that well
together (read: I had to rewrite the ones I found to get them working).

~~~
apenney
I'm starting work at Puppetlabs in exactly a week's time as part of a brand
new "module team" and I personally promise you here and in the open that I am
going to tackle the puppetlabs-rabbitmq module and fix it so that it actually
works and isn't an abandoned wasteland.

Come to that, I'm hoping we can start building out a full set of
mcollective modules to replace the existing ones - fully supported and kept
up to date - so that getting mcollective running will be as easy as
including a class and waiting.

A huge part of this job is ensuring community patches get merged in and
contributors get treated as I would like to be treated when contributing to a
project. I hope we can reverse your experience with modules within a few
months (I took this job because I've been in exactly your position, grabbing
official modules and having them not work at all!)

~~~
druiid
FYI we've spoken in the #puppet channel about just this issue ;).

------
otterley
When used with Chef Server 11 (or Hosted Chef), Chef scales reasonably well.
You install a client on each node, and the client speaks to the server via
HTTPS + REST.

The unqualified assertion that Chef uses ssh is inaccurate. You can run chef-
solo via ssh if you like, but you'll run into the same scalability ceiling as
with any other ssh-based solution.

------
WickyNilliams
Is there a comparable tool for Windows?

Powershell works great for executing commands on arbitrary servers (which
sounds like the basis of Salt), but it'd be great to declaratively say "I
want the server in this state", like the config-management side of salt. I
assume there is a tool like this built atop Powershell somewhere?

~~~
lmickh
As mentioned, this can be done with Chef, Puppet, and Salt, but be careful
about how you go about it. It is important to recognize when it is best to
leverage AD for your Windows configs.

It is easy to fall down the rabbit hole of trying to implement things in a CM
tool/Powershell combo that could be done in AD far easier.

~~~
drummer32
Genuine question: why would you have Windows servers joined to an AD
domain? Or are you talking about pushing changes to workstations?

~~~
AjithAntony
Because the services you provide depend on a domain for authentication and
configuration like Exchange, Citrix, IIS, Sharepoint, SCCM, and every other
Microsoft server product.

I am dying for the chef/puppet/salt/ansible/cfengine recipe that will let me
fully configure this stuff, including the domain memberships.

------
frio
From skimming the top level of comments, it seems most people don't like these
tools. Fair enough.

That said, on-topic, I just wanted to say that having tried Puppet, Chef and
Salt, I've found Salt the easiest to use. Straightforward installation (no
messing with Ruby versions/rvm/etc.), really simple setup (systemctl start
salt-master; systemctl start salt-minion; salt-key -L; salt-key -a yourbox;
done), and the YAML-based configuration syntax has been a breeze to work with.

Really quite pleased with it; it's made getting a few of my hairer boxes under
control much easier than I expected (and much easier than I found with Chef or
Puppet).

------
abtinf
Yet another un-googlable project name. Pretty much kills it for me.

~~~
frankwiles
Have you tried Googling salt stack? I've had zero problems finding tutorials
and documentation.

------
knowshan
Good to see tools that work as a system configuration framework and also allow
command execution.

[ControlTier](http://www.controltier.org/) had (I don't think it's actively
developed now) options to execute general system commands, configure
systems, and deploy applications. But it was fairly complex and required
[ant](http://ant.apache.org/) skills.

------
dmourati
It seems most people miss the fact that any sufficiently large system is going
to require _both_ a pull-based _and_ a push-based solution.

So, take ansible. Primary use: push. But has ansible-pull.

Look at puppet. Primary use: pull. But has mcollective.

IMO (and I am not there yet, but soon will be), the gold standard is to
_combine_ two strong players that each specialize in push or pull. For me,
it is looking like ansible/puppet.

------
cowmix
For years I have been trying to spread the gospel of bcfg2 because, while
not perfect, I thought it was a more complete system than Puppet or Chef.
Bcfg2, however, has some big warts of its own AND it never really caught
on.

In the past few months I've been slowly converting to SaltStack, and it
really is everything I ever dreamed of in a CM system. Fast, easy,
real-time. Lovin' it.

~~~
Goladus
Even if bcfg2 was a complete system, it never caught on because the
documentation was entirely missing. Every time I looked at it, I got
blocked on actually getting anything done because I couldn't find an
equivalent to these reference manuals:

[http://docs.puppetlabs.com/references/latest/type.html](http://docs.puppetlabs.com/references/latest/type.html)

[http://docs.opscode.com/resource.html](http://docs.opscode.com/resource.html)

[https://cfengine.com/archive/manuals/cf-manuals/cf2-Reference#Concept-Index](https://cfengine.com/archive/manuals/cf-manuals/cf2-Reference#Concept-Index)

------
ishbits
I've recently been thinking I need to learn Chef or Puppet. This thread has
convinced me to pick up and try Ansible first.

------
1gor
[https://github.com/seattlerb/rake-remote_task](https://github.com/seattlerb/rake-remote_task)
is all you need if you use ruby.

    
    
      require 'rake/remote_task'
    
      set :domain, 'abc.example.com'
    
      remote_task :foo do
        run "ls"
      end

~~~
danudey
State management is about a lot more than 'execute this command on a server'
(which is discussed in the article). It's about creating a set of rules and
performing idempotent actions.

If you're just running shell commands, it's easy to screw up and waste your
time or break your server by accidentally having the same commands run twice.
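
Even one-off commands can be guarded so a second run is a no-op; e.g. in
Salt (hypothetical paths):

    install-tool:
      cmd.run:
        - name: make install
        - cwd: /usr/local/src/tool
        # skip entirely if the binary is already in place
        - unless: test -x /usr/local/bin/tool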

------
WestCoastJustin
I've thrown together a little puppet demo if anyone is interested. It
highlights what puppet is and how it works.

[http://sysadmincasts.com/episodes/8-learning-puppet-with-vagrant](http://sysadmincasts.com/episodes/8-learning-puppet-with-vagrant)

------
tegansnyder
I use Salt to run commands across our EC2 environment to do things like
restart Varnish, clear logs, and run updates. Paired with Unison for file
synchronization, it works well when your auto-scaling kicks in and you need
your new AMI to be synced from staging.

------
tetsusoh
Hmm, Private Chef also uses ZeroMQ to implement the pub job feature.

Puppet has MCollective (with ActiveMQ) to implement a similar feature.

------
gunmetal
Salt is missing templates, the ability to use a higher-level programming
language, and the environments/roles that I find the most powerful part of
Chef.

~~~
Game_Ender
You are wrong on all counts here. Salt supports the Jinja2 template engine,
so you can template your states [1]. You can define custom states in
Python [2]. And in the root configuration file (top.sls) you can target
configuration based on hostname or grains (machine-specific information)
[3].

1 -
[http://docs.saltstack.com/topics/tutorials/states_pt3.html](http://docs.saltstack.com/topics/tutorials/states_pt3.html)

2 -
[http://docs.saltstack.com/ref/states/writing.htm](http://docs.saltstack.com/ref/states/writing.htm)

3 -
[http://docs.saltstack.com/ref/states/top.html](http://docs.saltstack.com/ref/states/top.html)
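
A small taste of [1] and [3] together (a hypothetical state; grain values
vary by platform): state files are rendered through Jinja before the YAML
is parsed, so you can branch on grains inline:

    apache:
      pkg.installed:
        - name: {{ 'apache2' if grains['os'] == 'Ubuntu' else 'httpd' }}
      service.running:
        - name: {{ 'apache2' if grains['os'] == 'Ubuntu' else 'httpd' }}
        - require:
          - pkg: apache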

------
moe
Dead-end. Use ansible.

