
Configuring Ansible - tcarriga
https://www.redhat.com/sysadmin/configuring-ansible
======
yjftsjthsd-h
It continues to surprise me that the official documentation seems to prefer
global configuration in /etc/ansible; it always seemed vastly preferable to me
to put it in a repository under my home directory. This does mean I have to
add a config file pointing at my inventory, but that's a trivial one-time
change, and now I don't have to touch root-owned system configuration every
time I want to change my inventory or whatever.

~~~
carwyn
Same here. I re-worked our config to be an entirely self-contained git
repository with its own ansible.cfg in the root, which contains:

    [defaults]
    inventory = ./inventory/hosts

~~~
yjftsjthsd-h
In case it's useful to you: You can have that be a directory.

    
    
        [defaults]
        inventory=./hosts.d
    

(This is what I use)
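
In case anyone wants a concrete sketch of that: Ansible treats an inventory
directory as the union of every file inside it (hostnames below are made up).

```shell
# Point [defaults] inventory at a directory; Ansible merges all files in it.
mkdir -p hosts.d
printf '[web]\nweb1.example.com\n' > hosts.d/web
printf '[db]\ndb1.example.com\n'  > hosts.d/db
# Inspect the merged result with:
#   ansible-inventory -i ./hosts.d --list
ls hosts.d
```

One file per group keeps inventory diffs small in the git repo.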

------
peterwwillis
I never get guides like this. This is a configuration management tool, and
they're telling people to manually bootstrap their systems to use it. Great,
you've helped someone do by hand exactly the thing the tool exists to
eliminate.

First of all, no, don't create an ansible user. Never use generic shared
accounts. I mean, you _can_, but you _can_ also just SSH in as root. (You know
what the difference between using the _-B_ option and SSHing in as root is?
There is none.)

Second, either use sssd to hook into an SSO system, or bootstrap all your
hosts with an SSH CA, so you don't have to constantly run jobs pushing SSH
public keys to every node. Once you start using ephemeral nodes, key-pushing
won't work at all unless you fall back to a shared account; see the previous
paragraph.
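
For the unfamiliar, the SSH CA bootstrap is roughly this; a sketch with plain
ssh-keygen, where all names and paths are made up:

```shell
set -e
cd "$(mktemp -d)"

# 1. The CA keypair (in practice, generated once and kept somewhere safe).
ssh-keygen -q -t ed25519 -f ca_key -N '' -C 'ssh-user-ca'

# 2. An admin's ordinary keypair (normally they'd already have one).
ssh-keygen -q -t ed25519 -f admin_key -N ''

# 3. Sign the admin's public key: principal "admin", valid for 52 weeks.
ssh-keygen -q -s ca_key -I admin@example.com -n admin -V +52w admin_key.pub
ssh-keygen -L -f admin_key-cert.pub

# 4. Each host only ever needs to trust the CA, e.g. in sshd_config:
#      TrustedUserCAKeys /etc/ssh/user_ca.pub
```

New admins need a signature, not a job pushing keys to every node; that's the
whole point.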

Third, if you're going to use a plaintext inventory, generate it on the fly,
and definitely don't write it to a single global root-owned file: you may want
to update it at some point, and you may be running Ansible more than once on
that host at that time.
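
E.g. write a per-run inventory to a private temp file instead of one shared
/etc/ansible/hosts (hostnames below are placeholders):

```shell
# Each run gets its own inventory file, so concurrent runs can't clobber
# each other and nothing root-owned ever needs editing.
INV="$(mktemp)"
cat > "$INV" <<'EOF'
[web]
web1.example.com
web2.example.com

[db]
db1.example.com
EOF
# Then: ansible-playbook -i "$INV" site.yml; rm -f "$INV"
cat "$INV"
```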

Fourth, they didn't really cover the anguish of managing system-installed
Python packages alongside Ansible. Install and run Ansible in a virtualenv
with pinned versions; don't use a system-packaged version.
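
A minimal sketch of that setup (the pinned version below is only an example;
pin whatever you have actually tested):

```shell
# Per-project virtualenv so Ansible never touches system Python packages.
python3 -m venv .venv
printf 'ansible-core==2.16.6\n' > requirements.txt
# Then, inside the venv:
#   . .venv/bin/activate && pip install -r requirements.txt
[ -x .venv/bin/python ] && echo 'venv ready'
```

Upgrades become a deliberate edit to requirements.txt instead of a surprise
from the distro.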

Fifth, don't put complicated logic in your tasks or playbooks, or later
you'll be kicking yourself because of how much of a huge pain in the ass it is
to debug playbooks, tasks, inventories, filters, the non-standard fake-YAML
config format, and a million other things. Keep it really, really simple.

Sixth, run ansible-pull from a cron job with random jitter every 10-20
minutes, because your system and code _will_ deviate over time, and you won't
know it until you try to run Ansible again after 3 months and everything
breaks. You'll also want a system to alert you when Ansible fails, so you can
find out what broke.
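
Something like this wrapper (repo URL and playbook are placeholders; written
to a temp path here, but it would live somewhere like /usr/local/bin):

```shell
WRAPPER="$(mktemp)"
cat > "$WRAPPER" <<'EOF'
#!/bin/sh
# 0-599s of random jitter so the whole fleet doesn't hit git at once.
sleep $(( $(od -An -N2 -tu2 /dev/urandom) % 600 ))
exec ansible-pull -U https://git.example.com/infra/ansible.git local.yml
EOF
chmod +x "$WRAPPER"
# crontab entry: */15 * * * * /usr/local/bin/ansible-pull-cron
cat "$WRAPPER"
```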

Seventh, try to make any Ansible you use obsolete by baking Docker images, VM
images, system packages, and other versioned immutable infrastructure
artifacts. If you start out with that in mind, you may never have to touch
Ansible at all, and may even eventually achieve immutable infrastructure, and
what a wonderful world that is. (I'm serious... everyone should be working to
remove configuration management from their stack wherever it's feasible.)

~~~
xorcist
> You'll also want a system to alert you when Ansible fails

Indeed. This is what makes a configuration management tool work in practice.

------
moreati
I've tweeted a few Ansible tips and gotchas over the years, covering some of
the corner cases and a few neat tricks

[https://twitter.com/search?q=from%3Amoreati%20ansible%20tip&...](https://twitter.com/search?q=from%3Amoreati%20ansible%20tip&f=live)

[https://twitter.com/search?q=from%3Amoreati%20ansible%20gotc...](https://twitter.com/search?q=from%3Amoreati%20ansible%20gotcha&src=typed_query&f=live)

------
thecopy
We use Ansible to orchestrate our infrastructure and application deployment.
Only in one particular stack do we use Ansible the way it's "supposed" to be
used (SSHing into a machine to update/install Prometheus).

The infrastructure is provisioned about 90% with CloudFormation; the other 10%
is Ansible commands, for when CF is not enough or when AWS CLI commands allow
better flexibility (e.g. generating SSH keys for EC2 machines and storing
the private key in a central store). We make heavy use of the `cloudformation`
Ansible module.

~~~
gregmac
Are you using Ansible to orchestrate the setup of a particular EC2 instance,
or to actually provision AWS resources, or both? If you're doing AWS
resources, where is ansible running from?

I have a few deployments using CloudFormation, and in those use
CloudFormation::Init [1] to actually set up the EC2 instances (deploy
packages, apps, configure the local firewall, etc). This kind of grew over
time and now involves some (mildly) complex bash and/or PowerShell scripts
which do things that could probably be done with Ansible, but when I started
it wasn't obvious how complex that part would eventually get. The split of
Ansible vs CloudFormation responsibility also wasn't clear to me, and I
favored the simplicity of a single tool, so even though I considered using
Ansible, I ended up sticking with CloudFormation::Init.

In retrospect, CloudFormation::Init is pretty much a stand-alone tool, and the
only connection it has to AWS and AWS CloudFormation is really just that it
reads the metadata from the EC2 API to get its actual configuration (which is
just a JSON or YAML blob).

[1] [https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGui...](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html)

~~~
jturpin
We just use Ansible for provisioning the EC2 instances themselves, and
Terraform for the actual orchestration of the resources. User data is used to
call ansible-pull. Theoretically we could use Ansible for everything, but
Terraform's planning makes the runs a lot safer, and in general it just seems
a little nicer for orchestration.

------
yepguy
I just finished setting up Ansible to manage a bunch of Windows laptops. I
basically just followed the example of Dag Wieers here[1][2], except that,
additionally, every laptop roams with an unstable IP address. I used
PageKite[3] to deal with that, although it would have been much easier if
ansible-pull worked on Windows.

I know every experienced Windows admin would probably look poorly on my setup,
but it feels really good to finally have some level of automation in managing
these machines. It took me years to figure out a sane strategy under
constraints like: OEM and retail software licenses only, employees retain
admin privileges, no corporate network or VPN, unstable IP addresses, and
(perhaps most importantly) no budget.

With constraints like that, it would also be a pretty good setup if you
provide tech support to family and friends, as long as they trust you to
backdoor their laptop.

[1]: [https://www.ansible.com/managing-windows-desktops-with-ansib...](https://www.ansible.com/managing-windows-desktops-with-ansible)
[2]: [https://github.com/crombeen/ansible](https://github.com/crombeen/ansible)
[3]: [https://pagekite.net/](https://pagekite.net/)

~~~
yjftsjthsd-h
> PageKite

Thanks for the pointer. They certainly have the most fun pricing page I've
seen to date:
[https://pagekite.net/signup/?more=bw](https://pagekite.net/signup/?more=bw)

------
jturpin
We use ansible-pull to decentralize our Ansible runs as much as possible, but
we also use Hashicorp Vault's SSH Key Certificates functionality to make sure
our SSH keys are short-lived when we run it from CI. The last thing we want is
CI with a bunch of root level keys on it!
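
Roughly the flow, for anyone curious; the mount path "ssh-client-signer",
role "ci-deploy", and the TTL below are all assumptions, not our real config:

```shell
set -e
cd "$(mktemp -d)"
# CI generates a throwaway keypair...
ssh-keygen -q -t ed25519 -f ci_key -N ''
# ...then trades the public key for a short-lived certificate:
#   vault write -field=signed_key ssh-client-signer/sign/ci-deploy \
#       public_key=@ci_key.pub ttl=15m > ci_key-cert.pub
# The cert expires on its own, so no long-lived root keys sit on the CI box.
ls ci_key.pub
```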

~~~
samvimes
You seem to be a big proponent of ansible-pull. Are any of your use
cases/implementations publicly available? I'm really interested to see how
people are using ansible-pull in production.

I'd like to do something similar. Currently we use a mix of Ansible Tower
(which I don't love) and Ansible runs from local machines to manage the
infrastructure. I'd rather it all be tied into Terraform, though, so that we
have a single place to manage changes from.

~~~
jturpin
We don't have anything publicly available unfortunately, but we call ansible-
pull from the instance user data to configure the host on startup. IAM
policies and Vault integration are used to grant the host access to certain
secrets needed by the Ansible run.
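
The user data itself is tiny; a sketch (repo URL, branch, and playbook name
are placeholders):

```shell
cat > userdata.sh <<'EOF'
#!/bin/sh
# Runs at first boot; the instance's IAM role supplies any credentials.
set -e
ansible-pull -U https://git.example.com/infra/ansible.git -C main local.yml
EOF
cat userdata.sh
```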

------
movedx
Is it OK if I plug my (free; use a fake email to sign up if you like) Ansible
Crash Course here? Might be useful to some folks.

It'll be getting updated with PDF versions of the material and quizzes soon
enough.

------
AdmiralAsshat
In my corporate environment, I've played a bit with Puppet. Ansible looks to
cover much of the same ground. Are there any compelling reasons to use one
over the other?

~~~
core-questions
I'm deeply experienced with Chef and have been using Ansible for some newer
projects.

Plusses:

\- Supported Ansible modules are usually more reliable, and more clearly
defined in terms of how they'll function, than community cookbooks for Chef
are

\- Upgrades between versions go smoothly

\- People can just go read your YAML and figure out what it's going to do

Minuses:

\- It's not code, so if you want to do something more programmatic, you're
going to end up with something ugly compared to what you can do in
Chef/Puppet's Ruby DSL

\- It's still a bit immature in comparison to Chef's server offering (Ansible
Tower is pretty useless), and the Kitchen/Vagrant/Serverspec stuff is also
not as advanced

Ansible seems to fit well as a replacement for the Chef Zero use case of fire-
and-forget provisioning - works well inside Packer or inside of Terraform via
a null_resource, but I don't tend to find myself wanting to use it for long-
lived machines that in the Chef world benefitted from continually running the
cookbooks to help keep config in sync.

~~~
chr15p
> It's still a bit immature in terms of comparison to Chef's server offering
> (Ansible Tower is pretty useless)

Meh, I really like Tower, but maybe that depends on what you're used to.

> It's not code, so if you want to do something more programmatic, you're
> going to end up with something ugly

This is true: Ansible's DSL has its limits, and the developers are pretty
clear that some things are too complicated for its YAML-based syntax. It's
great for 99% of use cases; if you want that extra 1%, you have to write your
own Ansible modules (which is actually reasonably straightforward). I don't
know Chef, but Puppet has become more and more complicated over the years
chasing the more advanced use cases, and in many ways that's moved it away
from being a tool for sysadmins towards being a tool for specialists, so I
think Ansible's outlook makes more sense. YMMV.
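
Worth noting that a custom module doesn't even have to be Python: any
executable that prints JSON will do. A toy sketch using the old-style module
interface (the module name and message are made up):

```shell
mkdir -p library
cat > library/hello <<'EOF'
#!/bin/sh
# Old-style (non-Python) module interface: $1 is a file of key=value args.
printf '{"changed": false, "msg": "hello from a shell module"}\n'
EOF
chmod +x library/hello
# With ./library on the module path: ansible localhost -m hello
./library/hello /dev/null
```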

~~~
core-questions
> I don't know Chef, but Puppet has become more and more complicated over the
> years chasing the more advanced use cases, and in many ways that's moved it
> away from being a tool for sysadmins towards being a tool for specialists,
> so I think Ansible's outlook makes more sense. YMMV.

Yeah, I concur. There aren't too many cases where I can't work around the lack
of programmability with some creative templates or what-have-you.

(I wonder if I could render an Ansible task file from a template and then
include it...)

~~~
snorkasaurus
Maybe I'm missing something, but I've got a lot of mileage out of custom
Ansible modules and callback plugins. Maybe that's what you're looking for.

~~~
core-questions
Learning Python is never going to be something I'm looking for. Too much
brainpower invested in Ruby now to switch...

------
ggregoire
How do you test your playbooks targeting bare metal during development? So
far I'm using Docker, but the official base image is very different from the
official ISO, which is kind of annoying (no ssh, no Python, not even ps). I'm
thinking about using Vagrant. I also saw a framework called Molecule, but
it's based on Docker, so I guess it'll have the same problem.

~~~
jlgaddis
I just (in the last two weeks or so) started using Molecule. It took a bit to
get everything set up and figured out, but now that I've got things the way I
want them, it really is pretty awesome.

I'm using it with Vagrant and VirtualBox at the moment, just because that was
the easiest way -- if you aren't already using Docker, that is -- to get
started (due to available documentation, examples, etc.) but eventually I'd
like to use both Docker and my ESXi lab for testing.

Testing w/ bare-metal was always a pain in the ass as I'd constantly need to
"reset" the hosts to a specific "known good" starting point. I have everything
set up for automated network installs, though, so it only takes 5 minutes or
so to wipe and reinstall a physical host.

Now, I'm doing all my Ansible testing against VMs on VirtualBox, running from
an image generated by Packer. Those images use the exact same preseed.cfg file
I use on the bare-metal host, making it _sooooo_ much easier. I do all my
testing on the VMs, make sure everything is correct, and then run it against
the physical host once for "verification".

~~~
ggregoire
Thanks, you confirmed what I was thinking. That’s exactly the setup and
workflow I’d like to have. :)

~~~
jlgaddis
Feel free to reach out (my username at Gmail) if (when) you run into issues or
have any questions. I'm still learning new things as I go but I'm glad I
decided to go down this path (and I'm happy to help if I can). It really is
worth the time and effort!

------
notduncansmith
I’m using Ansible inside a Docker container at work, which has worked
beautifully for me.

