

Decentralize your DevOps with Master-less Puppet and supply_drop - pgr0ss
http://www.braintreepayments.com/devblog/decentralize-your-devops-with-masterless-puppet-and-supply-drop/

======
viraptor
How can you tell what's in each system at every step? I mean - you know what
should be there, but how can you be sure someone didn't forget to apply, or
that they had a clean tree at the time? They say "With a centralized Puppet
server, the server maintains a single canonical version of the Puppet
configuration." I'd counter with: make it always have the latest version from
git - you end up in the same position you want, as described in the same
point.
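A sketch of that pull-and-apply approach (the repo path, branch, and manifest layout here are assumptions, and it presumes git and puppet are installed on each node):

```shell
# Hypothetical cron entry, e.g. in /etc/cron.d/puppet-apply:
#   */30 * * * * root /usr/local/bin/apply-puppet.sh
#
# /usr/local/bin/apply-puppet.sh
#!/bin/sh
set -e
cd /etc/puppet-repo               # assumed local clone of the config repo
git fetch origin
git reset --hard origin/master    # guarantees a clean tree at apply time
puppet apply --modulepath=modules manifests/site.pp
```

The `git reset --hard` addresses the "clean tree" concern: whatever was applied is exactly what the branch tip said at that moment.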

I usually rely heavily on external resources (puppet) / node searches
(chef)... I'm not sure I really like the chef-solo or forced-puppet-apply way
of doing things.

------
sciurus
At work I recently started to use masterless puppet to manage a working
environment (e.g. directory structure, shell configuration, packages installed
into our own RPM database) on compute clusters we use but don't have root
access to. It was much easier to get working than I expected. Looking at the
source for rump (<https://github.com/railsmachine/rump>) was helpful when
determining the proper directory structure and puppet command line options.
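For reference, a minimal masterless layout and invocation along these lines (the directory names are assumptions for illustration, not rump's actual layout, and this presumes a non-root puppet install on the cluster):

```shell
# Assumed repo layout:
#   puppet/
#     manifests/site.pp
#     modules/<module>/manifests/init.pp
#
# Apply locally without a master; --noop previews changes first.
puppet apply --noop --modulepath=puppet/modules puppet/manifests/site.pp
puppet apply --modulepath=puppet/modules puppet/manifests/site.pp
```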

------
fsaintjacques
I'm currently trying to run Chef in solo-mode with a git repository. Nodes
would update with regular pull & apply. Has anyone tried something like this?
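A rough sketch of that pull-and-apply loop with chef-solo (the repo path, `solo.rb`, and `node.json` names are hypothetical; it assumes chef-solo and git are on the node):

```shell
#!/bin/sh
set -e
cd /var/chef-repo                     # assumed local clone of the config repo
git pull --ff-only origin master      # refuse to apply a diverged tree
chef-solo -c solo.rb -j node.json     # solo.rb points cookbook_path at this repo
```

The `--ff-only` pull makes a node fail loudly rather than silently applying a merge nobody reviewed.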

~~~
thibaut_barrere
Not yet, but I've thought about it.

My current favorite setup (for Ubuntu + Windows nodes) is knife-solo with
rsync.

That said, unlike for code, I like being able to apply modifications just by
rsyncing: if my git host is down, it will still work, for example (but then I
mostly run chef deploys alone).

------
sciurus
In this workflow, the configuration being applied to a server is whatever
people happen to have rsynced to it. That may allow for quick iteration in
their QA environment (as long as people avoid stepping on each other's toes),
but wouldn't it be better if the sandbox and production servers pulled their
configuration from the sandbox and production git branches?

~~~
drfatbooty
Our goal is to make applying changes to a server an intentional step. Instead
of changes in production being pulled, we make it a push. For more complex
changes, this involves unbalancing a server, making the change, testing that
everything is working correctly, then rebalancing it. For smaller changes, we
simply apply them to the environment wholesale. This process gives us the
flexibility to do both.

------
troels
Couldn't this cause a race condition if two admins try to apply changes at the
same time? Without a centralised repository, I presume that you would need to
introduce a locking mechanism on a per-node level, so one would: take lock,
run noop, run apply, release lock. Maybe make it implicit.
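On Linux, util-linux `flock` is one way to get that per-node lock; a sketch with `echo` placeholders standing in for the real noop and apply commands:

```shell
#!/bin/sh
LOCK=/tmp/node-apply.lock
# -n: fail immediately instead of queueing if another admin holds the lock.
if flock -n "$LOCK" sh -c 'echo "noop run"; echo "apply run"'; then
  echo "lock released"
else
  echo "another apply is in progress"
fi
```

The lock is held for the duration of the `sh -c` child and released automatically when it exits, so the take/run/release sequence is implicit, as suggested above.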

------
seany
Wouldn't this break externalized resources?

