+ (minimal) Bootstrap System playbook
+ Complete System playbook (that references group_vars and host_vars)
+ Per-machine playbooks stored alongside the Ansible inventory, group_vars, and host_vars in a separate repo (for machine-specific kernel modules, touchpad config, etc.)
+ User playbook that calls my bootstrap dotfiles shell script
+ Bootstrap dotfiles shell script, which creates symlinks and optionally installs virtualenv+virtualenvwrapper, gitflow and hubflow, and some things with pipsi. https://github.com/westurner/dotfiles/blob/develop/scripts/b...
+ setup_miniconda.sh that creates a CONDA_ROOT and CONDA_ENVS_PATH for each version of CPython (currently py27-py37)
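As a rough sketch of the layout that setup_miniconda.sh ends up producing, one env per CPython version (the paths and directory names below are illustrative assumptions, not the script's actual defaults):

```shell
# One env per CPython version under a shared envs path; using a temp dir
# here for illustration -- the real CONDA_ROOT would live somewhere stable.
CONDA_ROOT="${TMPDIR:-/tmp}/conda"
CONDA_ENVS_PATH="${CONDA_ROOT}/envs"
for pyver in 27 34 35 36 37; do
    mkdir -p "${CONDA_ENVS_PATH}/py${pyver}"
done
ls "${CONDA_ENVS_PATH}"
```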
Over the years, I've worked with Bash, Fabric, Puppet, SaltStack, and now Ansible + Bash.
I log shell commands with a script called usrlog.sh that writes per-$USER and per-virtualenv tab-delimited logfiles with unique per-terminal-session identifiers and ISO8601 timestamps; so it's really easy to grep for the apt/yum/dnf commands that I ran ad hoc when I should've just taken a second to create an Ansible role with `ansible-galaxy init ansible-role-name` and referenced it in a consolidated system playbook with a `when` clause.
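For example (the logfile path and column layout here are assumptions for illustration; usrlog.sh's actual format may differ):

```shell
# Fake usrlog-style logfile: ISO8601 timestamp, session id, command,
# tab-delimited. Path and column order are assumptions, not usrlog.sh's.
USRLOG="${TMPDIR:-/tmp}/-usrlog.log"
printf '2019-01-01T00:00:00-0600\t#abc123\tsudo apt install tmux\n'  > "$USRLOG"
printf '2019-01-01T00:01:00-0600\t#abc123\tls -la\n'                >> "$USRLOG"

# Pull out the ad-hoc package-manager commands that should become
# Ansible tasks:
grep -E '(apt|apt-get|yum|dnf) ' "$USRLOG" | cut -f3
# then: ansible-galaxy init ansible-role-name
```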
A couple weeks ago I added an old i386 netbook to my master Ansible inventory and system playbook. VS Code wouldn't install because VS Code for Linux is x86-64 only and the machine doesn't have enough RAM, so I created `when` clauses (with host_vars) to exclude VS Code and its extensions on that box. Gvim with my dotvim works great there, though. Someday I'll merge my dotvim with SpaceVim and give SpaceMacs a try; `git clone; make install` works great, but vim-enhanced/vim-full needs to be installed with the system package manager first so that the vimscript plugin installer works and so that the vim binary gets updated when I update everything.
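A minimal sketch of that host_vars override (the file name and variable name are made up for illustration, not copied from my actual repo):

```shell
# host_vars file for the netbook; a playbook task could then gate on
#   when: install_vscode | default(ansible_architecture == 'x86_64')
# (variable name is an assumption; writing to a temp dir for illustration)
mkdir -p "${TMPDIR:-/tmp}/host_vars"
cat > "${TMPDIR:-/tmp}/host_vars/netbook.yml" <<'EOF'
# VS Code for Linux is x86-64 only, and this box is low on RAM
install_vscode: false
EOF
cat "${TMPDIR:-/tmp}/host_vars/netbook.yml"
```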
I've tested plenty of Ansible server configs with molecule (in Docker containers), but haven't yet taken the time to do a full workstation build with e.g. KVM or VirtualBox, or to write tests with testinfra. It should be easy enough to run Ansible as a provisioner in a Vagrantfile or a Packer JSON config. VirtualBox supports multi-monitor VMs and makes USB passthrough easy, but lately Docker is enough for everything except Windows VMs (with a PowerShell script that installs NuGet packages with chocolatey) and macOS VMs (with a few setup scripts that download and install .dmg's and brew). Someday I'll write or adapt Ansible roles for Windows and Mac, too.
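Something like this minimal Packer template would do it (builders section omitted; the playbook path is an assumption):

```shell
# Write a Packer JSON template fragment that runs Ansible as a provisioner;
# a real template would also need a "builders" section.
cat > "${TMPDIR:-/tmp}/workstation.json" <<'EOF'
{
  "provisioners": [
    {"type": "ansible", "playbook_file": "playbooks/workstation.yml"}
  ]
}
EOF
# then: packer build workstation.json
cat "${TMPDIR:-/tmp}/workstation.json"
```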
I still configure browser profiles by hand, but it's pretty easy because I saved all the links in my tools doc: https://westurner.github.io/tools/#browser-extensions
Someday, I'll do bookmark sync correctly across e.g. Chromium and Firefox; that'll require extending westurner/pbm to support Firefox's SQLite database, or a rewrite in JS against the WebExtension bookmarks API.
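The Firefox side would mean reading places.sqlite; a rough sketch with the sqlite3 CLI, against a tiny fake database standing in for a real profile (the stand-in schema keeps only the columns the query needs; in Firefox's schema, moz_bookmarks.fk references moz_places.id and type = 1 means "bookmark"):

```shell
# Build a two-table stand-in for Firefox's places.sqlite.
DB="${TMPDIR:-/tmp}/places.sqlite"
rm -f "$DB"
sqlite3 "$DB" <<'SQL'
CREATE TABLE moz_places (id INTEGER PRIMARY KEY, url TEXT);
CREATE TABLE moz_bookmarks (id INTEGER PRIMARY KEY, type INTEGER, fk INTEGER, title TEXT);
INSERT INTO moz_places VALUES (1, 'https://westurner.github.io/tools/');
INSERT INTO moz_bookmarks VALUES (1, 1, 1, 'tools');
SQL

# List bookmark titles and URLs:
sqlite3 "$DB" \
  "SELECT b.title, p.url FROM moz_bookmarks b JOIN moz_places p ON b.fk = p.id WHERE b.type = 1;"
```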
A few times, I've decided to write docs for my dotfiles and configuration-management policies as if someone else were actually going to use them. It seemed like a good exercise at the time, but invariably I end up having to figure out what the ultimate command sequence was and put that in a shell script (or a Makefile, which adds a dependency on GNU make that's often worth it).
Clonezilla is great and free, but a golden-master image gets out of date fast. It's actually possible to PXE-boot Clonezilla with Cobbler, but AFAICT there's no good way to secure e.g. per-machine disk or other config over PXE. apt-cacher-ng can proxy-cache-mirror yum repos, too. Pulp requires a bit of RAM but looks like a solid package-caching system. I haven't yet tested how well Squid works as a package cache when all of the machines are simultaneously downloading the exact same packages before a canary system (e.g. in a VM) has populated the cache.
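The client side of apt-cacher-ng is one line of apt config (3142 is apt-cacher-ng's default port; the hostname is a placeholder, and this writes to a temp path rather than /etc):

```shell
# Real location would be /etc/apt/apt.conf.d/01proxy
CONF="${TMPDIR:-/tmp}/01proxy"
echo 'Acquire::http::Proxy "http://apt-cacher.example:3142";' > "$CONF"
cat "$CONF"
```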
I'm still learning to do as much as possible with Docker containers and Dockerfiles, or with REES (Reproducible Execution Environment Specification)-compatible dependency configs that work with e.g. repo2docker and https://mybinder.org/ (BinderHub).
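A REES-compatible config can be as small as a single requirements.txt at the repo root, which repo2docker then builds into an image (the repo path and package pin below are illustrative):

```shell
# Minimal repo skeleton that repo2docker can build from:
REPO="${TMPDIR:-/tmp}/myrepo"
mkdir -p "$REPO"
cat > "$REPO/requirements.txt" <<'EOF'
numpy==1.16.0
EOF
# then: jupyter-repo2docker "$REPO"   (mybinder.org does the same from a git URL)
cat "$REPO/requirements.txt"
```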