

Snowflake Server - baha_man
http://martinfowler.com/bliki/SnowflakeServer.html

======
dredmorbius
"If you disable any direct shell access to the server and force all
configuration changes to be applied by running the recipe from version
control, you have an excellent audit mechanism that ensures every change to
the environment is logged."

Um. Yeah. You're also creating a situation where, should anything possibly go
wrong (now, how could _that_ happen), you cannot hop on the server, diagnose,
and possibly do some quick-try fixes. I can see limiting shell to a small
trusted subset of colleagues (sorry, devs), but not eliminating it altogether.

Automated management, "system-as-code", devops, and all that has its place.
The problem with virtually _all_ existing systems for automated configuration
management (cfengine, puppet, chef, Oopsware), is that they are highly _non-
transitive_ , and effectively bin years or decades of administration
experience. They're fine once you've worked out the kinks, but as anyone who's
worked with these tools can tell you, the one thing they're best at is
screwing up _all_ of your systems _simultaneously_.

"Etch" (<http://sourceforge.net/projects/etch/>) is one tool I've looked at
briefly that appears to take a different tack, and is amenable to taking on-
host changes and incorporating them into the configuration management system
itself. One of the criticisms I've seen of it is that it's rather Linux and/or
specifically Debian-centric, which may well be as Debian offers some very
strong tools for managing, assessing, and maintaining system state (policy,
APT, debconf). While dependencies can be painful when they keep you from doing
what you want to do, as with most good safety systems, it's generally because
you really don't want to go there (and if you do, there are means, within the
framework provided by Debian, to get you there).

One way to avoid snowflake systems is to use tools that manage dependencies,
stick within them to the greatest extent possible, and where that's not an
option, to put your own modifications within that same framework.

~~~
heretohelp
I just use fabric.

I just start writing code to provision/configure the machine for the role in
question and don't stop until I have something that can reliably take a blank-
slate server and have it rolling by the end of the function.

Easiest way to do devops I've seen yet. I didn't care for
chef/puppet/cfengine.

Particularly since I can cherry-pick servers for testing pretty easily with
fabric before I run the code against the rest of the machines.
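The "blank slate to rolling" pattern described above can be sketched in plain Python. In a real fabfile the commands would go over SSH via Fabric's run()/sudo(); here run() is a local shim so the sketch is self-contained, and the role, packages, and paths are hypothetical:

```python
# Sketch of a role-provisioning function that is safe to re-run.
# run() is a stand-in for Fabric's run()/sudo(); it records commands
# locally instead of executing them over SSH.
executed = []

def run(cmd):
    """Record the command (a real fabfile would execute it remotely)."""
    executed.append(cmd)

def provision_web_role():
    """Take a blank-slate server to a running web role.

    Every step tolerates being run again: apt-get install is a no-op
    for packages already present, mkdir -p never fails on an existing
    directory, and restart works whether or not the service was up.
    (Package and service names here are made up.)
    """
    run("apt-get update -qq")
    run("apt-get install -y nginx")    # no-op if already installed
    run("mkdir -p /srv/app/releases")  # -p makes this re-runnable
    run("service nginx restart")

provision_web_role()
provision_web_role()  # second run issues the same commands, same end state
```

The point of the pattern is that one function is the single source of truth for a role, and re-running it converges on the same state rather than erroring out partway.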

~~~
dredmorbius
URL?

Not a particularly searchable term.

~~~
heretohelp
<http://pypi.python.org/pypi/Fabric/0.9.0>

Edit:

The primary weakness of my approach is that, for convenience's sake, I rely
heavily on apt-get. So hypothetically I'm tied to the Debian family. (We use
a couple of different Ubuntu LTS versions.)

Strictly speaking, this disadvantage is unnecessary: you could do all
from-source builds using fabric if you wanted. It didn't seem like a
constructive use of my time, though.

Sample task in my fabfile:

<https://gist.github.com/3086836>

I'll let you guess from the function name what it does.

The magic word of devops is "idempotence".
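"Idempotence" here means a step you can run twice and end up in the same state as running it once. A minimal, self-contained illustration of the check-before-act style (the file name and setting are made up):

```python
import os
import tempfile

def ensure_line(path, line):
    """Append `line` to `path` only if it is not already there.

    Running this twice leaves the file identical to running it once,
    which is exactly the idempotence property config management wants.
    Returns True if the file was changed, False if it was a no-op.
    """
    try:
        with open(path) as f:
            if line in (l.rstrip("\n") for l in f):
                return False  # already present; nothing to do
    except FileNotFoundError:
        pass  # file doesn't exist yet; fall through and create it
    with open(path, "a") as f:
        f.write(line + "\n")
    return True

# Demo against a temp file (path and setting are hypothetical):
path = os.path.join(tempfile.mkdtemp(), "app.conf")
ensure_line(path, "max_workers = 8")  # first run writes the line
ensure_line(path, "max_workers = 8")  # second run is a no-op
print(open(path).read())              # the line appears exactly once
```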

~~~
jdf
There's a new project called Ansible that may be of interest to you:

<http://ansible.github.com/>

While that page has a long list of things they do, the important bits relevant
to your comment are

1. a tighter focus on idempotence than Fabric

2. an easy-ish way to integrate package management, so you could potentially
use the same script to kick off either yum or apt depending on the box
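The second point, one script driving either yum or apt, boils down to a small dispatch on the detected distro. A sketch in plain Python (the distro IDs are the values Linux distributions put in the ID= field of /etc/os-release; the package name is hypothetical):

```python
def install_cmd(distro_id, package):
    """Pick the package-manager invocation for a given distro family.

    distro_id would come from e.g. the ID= field of /etc/os-release.
    This is the kind of branching a tool like Ansible hides behind a
    single "install this package" declaration.
    """
    apt_family = {"debian", "ubuntu"}
    yum_family = {"rhel", "centos", "fedora"}
    if distro_id in apt_family:
        return ["apt-get", "install", "-y", package]
    if distro_id in yum_family:
        return ["yum", "install", "-y", package]
    raise ValueError("unsupported distro: " + distro_id)

print(install_cmd("ubuntu", "nginx"))  # ['apt-get', 'install', '-y', 'nginx']
print(install_cmd("centos", "nginx"))  # ['yum', 'install', '-y', 'nginx']
```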

~~~
heretohelp
The original reason I abandoned chef and puppet is that I didn't like
something getting between me and the shell.

Why would I make the same mistake twice? My only real dissatisfaction with
Fabric is that it cannot handle dispatching its work in parallel. Not a big
deal, though.

I like knowing exactly how things get done.

~~~
StavrosK
Doesn't Fabric support parallel instructions now? I think I heard something
about that, although I'm not entirely sure...

~~~
heretohelp
I was thinking more along the lines of parallel connections.

~~~
StavrosK
Yeah, it turns out that's what it does, rather than parallel
instructions/commands:
<http://fabric.readthedocs.org/en/1.3.0/usage/parallel.html>
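That distinction, the same task run against many hosts at once rather than parallel commands within one host, can be illustrated with a thread pool; the host names are made up and deploy() stands in for one SSH session's worth of sequential commands:

```python
from concurrent.futures import ThreadPoolExecutor

def deploy(host):
    """Stand-in for one SSH session against one host.

    Within a host the steps still run strictly in order; the
    parallelism is across hosts, which is what Fabric's parallel
    mode provides.
    """
    steps = ["pull latest release", "restart app"]
    return host, ["%s: %s" % (host, s) for s in steps]

# Hypothetical host list:
hosts = ["web1.example.com", "web2.example.com", "web3.example.com"]
with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
    results = dict(pool.map(deploy, hosts))

print(len(results))  # 3: every host ran the full sequence, concurrently
```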

------
arohner
The also-linked <http://martinfowler.com/bliki/PhoenixServer.html> is good.

My startup has been in production on EC2 for 6 months, and I don't think
we've ever had a server up for more than 3 days, or booted a production box
from an AMI more than 2 weeks old.

------
bcx
I just wanted to ++puppet.

Once you have more than a few servers, you will go crazy if you don't have a
good configuration management setup. The real advantage of using a tool like
puppet over something homegrown is that you can hire someone to come in and
manage puppet who will be able to understand how your automation works without
your system admin having to sit down and explain the 1000 little arcane perl
scripts that make everything work.

We use client-server puppet, but require all the updates to be manually run,
which makes rollouts a little bit more deterministic and avoids the "Hey we
just changed puppet and everything broke" effect.

------
Lazare
This idea - combined with the Phoenix Server concept - seems very powerful to
me.

In an odd way, it feels like this is the main advantage that PaaS offerings
like Heroku have: they force you into the mindset of not relying on hand
tinkering with the server config.

------
batgaijin
Puppet is so 2011; the future is NixOS. Once we have the year of the Haskell
desktop, that is.

