Ask HN: Configuration Management for Personal Computer?
198 points by jacquesm 14 days ago | 139 comments
Hello HN,

Every couple of years I find myself facing the same old tired routine: migrating my stuff off some laptop or desktop to a new one, usually combined with an OS upgrade. Is there anything like the kind of luxuries we now consider normal on the server side (IaaS; Terraform; maybe Ansible) that can be used to manage your PC and that would make re-imaging it as easy as it is on the server side?




I recently started hosting a dotfile repository on Gitlab (I will migrate to self-hosting in the long term), and although it doesn't hurt to use git or tig from the CLI (and learn their arguments), I've found Sublime Merge to be an exceptionally easy-to-learn Git client.

I also made a repository for my shell scripts, and one for my fonts.

I'm using GNU Stow to manage the symlinks. I sync my shell config together with $PATH (including hostname- and OS-specific $PATH entries).
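For anyone unfamiliar with Stow, the layout boils down to something like this (directory and file names are just illustrative):

    mkdir -p ~/dotfiles/{bash,vim,git}
    mv ~/.bashrc ~/dotfiles/bash/
    mv ~/.vimrc ~/dotfiles/vim/
    mv ~/.gitconfig ~/dotfiles/git/
    cd ~/dotfiles
    stow bash vim git   # symlinks ~/.bashrc -> dotfiles/bash/.bashrc, and so on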

It's some work, but the more you refine it, the better it will work across your different machines.

As for imaging, I'd build a script that you run after install and that "does all the things". On macOS you've got brew, on Windows choco/scoop, and on Linux whatever package manager you use. I use Topgrade to keep all the different package managers in existence up to date.
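A rough sketch of what such a "does all the things" script might look like (package lists and repo URL are made up, not a recommendation):

    #!/usr/bin/env bash
    set -euo pipefail
    case "$(uname -s)" in
      Darwin) brew install git tmux stow ;;              # assumes Homebrew is already installed
      Linux)  sudo apt-get install -y git tmux stow ;;   # or pacman/dnf/... depending on the distro
    esac
    git clone https://gitlab.com/you/dotfiles ~/dotfiles   # repo URL is illustrative
    (cd ~/dotfiles && stow bash vim git)
    # afterwards, running `topgrade` updates everything across package managers in one go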


I think overall it's a mistake to try to over-automate this process. It's brittle, and the requirements are simply not the same as a server environment that needs to be simple to replicate across X instances quickly, repeatedly and frequently.

Automation needs fairly constant attention, on the order of little tweaks week over week, to fight the inevitable drift between the automation and the software it interfaces with.

Do you reinstall your workstation once a week? Unlikely, and it would probably be an unproductive use of your time.

The way I approach it instead -

The first thing I do is list everything I need for a workstation in an orgmode text file. For why orgmode really works well for this -

http://howardism.org/Technical/Emacs/literate-devops.html

I aim to make a "computational document" that I can execute a bit at a time as I review/check each piece on installs and upgrades of the system.

To me it's really critical that there are links to the webpages and text notes about these configurations alongside the code I'm using.

I check this and the needed config files into a single git repository.

As the workstations (I use the same document for workstation, laptop, studiopc, homeserver, android) age, I'm inevitably adding and upgrading bits of the git repository. I make sure that I work from the orgmode document whenever I touch any configuration.

I use Linux and Emacs (which means a big part of my software environment is in a git controlled .emacs.d) so the entire environment supports this workflow.


> I think overall it's a mistake to try to over-automate this process. It's brittle, and the requirements are simply not the same as a server environment that needs to be simple to replicate across X instances quickly, repeatedly and frequently.

> Do you reinstall your workstation once a week? Unlikely, and it would probably be an unproductive use of your time.

It's not about how often I reinstall. It's about how many machines I work on.

I have a personal laptop, a work laptop, a work build server, a personal cloud server, and accounts on various systems. I want all of those to have mostly the same packages, with some variations (such as no GUI on the servers, and SSH on the servers).

(In my case, I ended up with a simple set of Debian metapackages that between them depend on 99% of what I need, in addition to the usual git homedir.)
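For anyone curious what that looks like in practice, a hedged sketch using Debian's equivs tool (package name and dependency list are made up):

    sudo apt install equivs
    equivs-control my-base.control      # generates a template control file
    # edit the template so it roughly says:
    #   Package: my-base
    #   Depends: git, vim, curl, tmux, htop
    #   Description: personal base metapackage
    equivs-build my-base.control
    sudo apt install ./my-base_1.0_all.deb   # exact filename depends on the version you set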


If you regularly set up new systems (VMs included), it does indeed become worth it to do some form of automation. For me, instead of having a whole orchestration setup, I just have a simple .bashrc that I copy everywhere, and it has a few package lists in the form of functions (sketched after the list):

- Running `defaultinstall` will install vim, git, iotop, progress, curl, and other essentials, and it runs apt-file update.

- Running `defaultinstalloptional` will pull more packages like sipcalc, cloc, woof, and other packages that I like but are not something I use on every system or do not use regularly.

- Running `defaultinstallwifi` pulls wavemon, iw, and wireless-tools; `defaultinstallprogramming` gets me interpreters and compilers that I regularly use; `defaultinstallgui` gets me GUIs that I use (audacity, gimp, wireshark, xdotool, xclip, etc.); etc.
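A minimal sketch of what those .bashrc functions can look like (package lists abbreviated):

    # in ~/.bashrc
    defaultinstall() {
      sudo apt-get update
      sudo apt-get install -y vim git iotop progress curl apt-file
      sudo apt-file update
    }

    defaultinstallwifi() {
      sudo apt-get install -y wavemon iw wireless-tools
    }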

On a new system I run whichever ones are applicable (e.g. a desktop may be GUI but not WiFi). Together with copying .bashrc, .vim/, and .vimrc, that's 90% of the setup I need for a non-GUI system. On a GUI system, I will want to set some task bar preferences. Only replacing my main system is an exception: I'll simply copy my filesystem from the old system (or, more often, physically move over the SSD), since I'll want to have the files, firefox profile, thunderbird config, etc. anyway.

It takes nearly zero time to maintain and is not dependent on anything that I don't already consider essential (Debian won't ship without a bash-compatible shell or a package manager any time soon), so there is only one manual pre-setup step: copying a text file.


I don't do any orchestration either; I just have metapackages that do roughly the same thing, as well as a few packages that install system-wide configuration.


I entirely disagree, because I do exactly this and it works beautifully. I even wrote an article on how:

https://www.stavros.io/posts/provisioning-your-computer-one-...

Just write whatever change you want to make into your Ansible file, and run the provisioning command, instead of making the change directly. Done.
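For reference, running a playbook against the local machine can be as simple as this (the playbook name is illustrative):

    # no inventory file needed: the trailing comma makes an ad-hoc one-host inventory
    ansible-playbook -i localhost, -c local provision.yml --ask-become-pass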


I had a period where I _did_ reimage my machine every week. Keeping it running productively wasn't that bad - data was kept on a separate drive from the OS, and a script was used (as a post-reimage step) to reinstall all the apps I used and link all the data folders (and config files and profiles) into place. For me, on Windows, that meant a PowerShell script that used a combination of Chocolatey (to install what was available via package manager that way) and saved installers run from the command line in quiet mode (plus a bunch of link commands).

The only real drain is that you need to remember, when you install a new app, to add it to your script. Beyond that, simply keeping most non-application data on a drive that didn't get reimaged with the OS (and could store things that could be linked into the system drive) handled a lot.


NixOS can be helpful here, by making all of the non-user-specific stuff completely reproducible; combine that with Git versioning for dotfiles and a NAS for backups and large storage and your setup should be pretty easy to manage.


NixOS is exactly what the OP is describing. It isn't super user-friendly, but once you learn it and get things working, the fact that it's code means you can have things working forever.


Nixos + storing dotfiles using the simple technique described here (essentially a git repo stored in something other than .git): https://www.atlassian.com/git/tutorials/dotfiles

With the above, the most difficult steps in getting a new system installed are typing in the WiFi password, sorting out drivers, and setting up the filesystem partitions...
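The technique from that link boils down to roughly this (the alias name "config" is just the conventional example; the remote is made up):

    git init --bare "$HOME/.cfg"
    alias config='git --git-dir=$HOME/.cfg/ --work-tree=$HOME'
    config config --local status.showUntrackedFiles no   # keep `config status` from listing all of $HOME
    config add ~/.vimrc
    config commit -m "add vimrc"
    config remote add origin git@example.com:you/dotfiles.git
    config push -u origin master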


NixOS + Home Manager + storing your config in dotfiles is exactly what you want :)


NixOS is great. I use it as my daily driver. But. In my experience, your distro will never have all the software you would like in its repos. And I've found this is true of NixOS also. So it should be mentioned that some software does not play nicely with it [0][1].

Because some things will be missing from distro repos, packaging is, to me, quite a valuable skill. Compare, for example, the packages for autorandr for Nix and Arch [2][3]. When I started using Arch I found the Arch package fit much better within my understanding of the world. I expect most here would find the same.

Here is what is essentially the manual for Nix packaging [4]. Personally I'd say that if you're unwilling to consume that, eventually you are going to encounter a frustrating problem with Nix that you cannot solve, and potentially regret using it. Though using a VM or a container might get you around some/many of those issues. I should also mention that Nix Pills may be worth a read before beginning to use NixOS; it's what convinced me to switch, in fact.

[0] https://unix.stackexchange.com/questions/522822/different-me...

[1] https://github.com/NixOS/nixpkgs/issues/36759

[2] https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=autor...

[3] https://github.com/NixOS/nixpkgs/blob/e380ef398761206954e514...

[4] https://nixos.org/nixos/nix-pills/


I’ve had no issue contributing to fix small holes in NixOS, and Docker covers the rest. So far the only thing that I’ve found genuinely challenging is getting complex binary software running. I have a Nix expression for IDA free but it can only use bundled Qt. VMWare Workstation I outright can’t use, which is a shame since I have a Pro license, but it woke me up to how great KVM is, so there’s that.

I would not recommend NixOS to everyone, but I definitely would recommend it to the type of person who wants to automate their system.


Same, this is what I use, and I haven't looked back since. NixOS definitely has a learning curve and isn't always intuitive at first, but after some time you get the hang of it.


+1 for NixOS, it changed the way I approach my personal computer.


NixOS looks quite fascinating.

I'm on Windows for the time being, but will give NixOS a go some time.

For any other WSL users here, it's worth keeping an eye on this issue: https://github.com/NixOS/nixpkgs/issues/30391 (NixOS being packaged up as a WSL distro).


> NixOS can be helpful here, by making all of the non-user-specific stuff completely reproducible

How is this different from say Debian's "apt" package manager?

You can specifically reproduce the system by downloading all versions of the packages you have, right?


Consider reading the NixOS documentation if you would like to learn more, because it’s worth it imo, but no, Debian is a different case.

NixOS doesn't just manage packages, it manages the entire system holistically. You effectively have a single configuration file (or several) that configures every package, service, mount, ramdisk, driver, etc. If I delete everything but my home folder, my configuration file can be used to completely recreate the root and all of its configuration. Have a KVM GPU passthrough? That can be expressed. Want to override bashrc or zshrc globally? Yep. Want to set up users, LDAP, Samba, FTP? Same configuration. It goes as far as making most files read-only so that you can't accidentally change things.

But it goes further with build reproducibility. All builds are hashed with their inputs. Every package specifies the sha256 sum of their sources. If your package builds on one machine, it shall build on another.

But it keeps going further. NixOS eschews normal shared library paths. Binaries always refer to Nix store paths in their runpath, making sure it will only ever load the version of the library it was built to load.

And there is still more. NixOS tracks generations. Any time you rebuild the system to change a config or add a new package to the root, you get a new generation that can be rolled back. The default bootloader lets you choose what generation to boot. So if you mess up your configuration, you very rarely need recovery media.
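In day-to-day terms, the workflow described above is roughly this (a sketch, not a complete reference):

    sudo nixos-rebuild switch              # apply /etc/nixos/configuration.nix, creating a new generation
    sudo nixos-rebuild switch --rollback   # go back to the previous generation
    sudo nix-collect-garbage -d            # optionally delete old generations to reclaim space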

It may not seem very important, but it pretty much changes how you interact with your OS. If you want to restart from scratch, just make sure you have copies of your home folder and configs and start over. This is great for switching to LVM or deciding to use LUKS, where it may not be easy to do without reformatting.

It does take investment, but if you have multiple machines it becomes amortized. I’m running 3 machines with largely identical setups.


I share a NixOS config setup with my brother. It’s been evolving since 2015. He recently got a new laptop after using Mac for a year, cloned our repo, and was up and running. It really makes the computer feel like a commodity and the configuration like a sharable durable trackable artifact.

Wow, thanks for the detailed response. Yes it sounds like they've taken great care for reproducibility, which I greatly appreciate.

Sounds very similar to Docker's philosophy, where changes to the system produce incremental changes to the system image and all dependencies are strictly controlled.

What else do I need to be convinced to leave my Ubuntu system for NixOS?


You don't have to ditch Ubuntu right away. You can install the nix package manager on any Linux distro (and I think also *BSD and MacOS). That way you can play around with it first. You won't get the full power of NixOS but it's a good way to get started.
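If you want to try it, the installer is a one-liner (check nixos.org for the currently recommended command before piping anything into a shell):

    sh <(curl -L https://nixos.org/nix/install)
    # then install something as your own user, no root needed
    nix-env -iA nixpkgs.htop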


Docker tries to solve these problems only on the surface. Nothing in a Dockerfile guarantees reproducibility. Try a docker build of your favorite Dockerfile today and again next week. The images probably won't be the same. Whereas NixOS allows you to build the exact same system now and whenever you want. Although, you'd have to pin the nixpkgs you are using.


Sure, they won't be the same, but they will be damn close. We aren't looking for the same md5 checksums. That goal isn't worth it.

Don't leave Ubuntu. Boot your Ubuntu Docker images bare metal.

https://godarch.com/

https://github.com/pauldotknopf/darch-recipes


I've been using puppet for everything for 8 years now. I try to avoid using the pre-built modules so that I know what everything is doing but I do use a few. There's a base config I use everywhere and then some modules that are specific to my workstation and some that are specific to servers. Makes it really easy to bring up a new personal laptop/desktop/server and have a great base setup (passwordless ssh, a tinc overlay VPN, automatic upgrades, root emails that show up in my inbox, etc) as well as making it very easy to bring up new services (e.g., a new website is just a few tweaks and a deploy) or migrate between machines.

Couldn't recommend it more. I started with puppet which seemed most robust at the time but I'm sure one of the others would do the same. Having all your configs in a single git repository and easy to deploy from a central point is great. And once you have that doing things like centrally managing keys (great for ssh and tinc) becomes really easy.


I consider this problem to be similar to moving homes. When you move, you discover a bunch of stuff that you are no longer using, and this presents an opportunity to get rid of those things. You also discover things that you have not been using for a while but, now that you are reminded, would like to start using again. In any case, a move can be a mindful activity that results in a better setup in your new home/machine than in the current one, and also an opportunity to curate your belongings and restructure/reorganize where necessary. Get rid of cruft, tidy things up, reorganize, and restart!


The Minimalists movement talks about this a lot. The idea is that you pack everything you own into labeled boxes and for 3 months, every time you need X you first think about if you really need X or not, and if you do you unpack. After 3 months you throw out everything that is still packed.

For this reason every few months I start with a clean OS install and for a few days I add what I need, while also taking some time to evaluate other options if needed.


While I'm a big fan of minimalism, it must have some exceptions. If you live in a place with very distinct seasons, then you will have some items that stay packed away for 6+ months before you need them. You can't just throw out your winter clothes. Sure you can layer, but you still need the coat, gloves, and maybe even snow boots.

The same applies to your computer. Maybe you need Illustrator or Premiere once in a while, but not often. Some installations are time-consuming or data-intensive, so postponing them until the need arises can be detrimental (maybe when you need it, you are under time pressure... or you're somewhere remote without high-bandwidth internet).

That said, reinstalling completely every few months is probably not a terrible idea, especially if you can automate most of it.


I'll bet there are very few minimalist wood- and metalworkers then. Tools tend to get used infrequently, a couple of times per year, if that. But when you need them, you need them badly.

For Windows, I use a PowerShell script that launches an install of Choco followed by a bunch of Choco installs. For Mac, I have a bash script that installs Homebrew, then runs a bunch of brew and cask commands, installs the Xcode command line utilities, and then scripts some App Store installs that aren't available from cask.
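A hedged sketch of the Mac half (the App Store step uses the third-party `mas` CLI, which may or may not be what the parent uses; app names and IDs are illustrative):

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    xcode-select --install
    brew install git node mas
    brew install --cask firefox iterm2
    mas install 409183694   # example App Store ID; look IDs up with `mas search <name>`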


I prefer https://scoop.sh for windows - although I've yet to try it for service/server installs.

But for personal tools, it feels a lot like using apt - but geared towards installation in the home dir (like xstow).

As far as I can tell scoop does support installs for multiple users/services too, though.


To be clear regarding multiple computers and servers: so does Chocolatey, which is the actual name of the app the parent post is using; choco is just the command that invokes the package manager. I'm a little worried about bypassing UAC on Windows with Scoop, but it's always good to have options.

I think I'd rather say "sidestep" UAC - if you drop an exe in your "documents" folder, there's no UAC prompt. Scoop doesn't write to protected areas by default.

I suppose it's entirely possible to change this via draconian policies (to have the equivalent of mounting home noexec) - and conversely, for global installs - you need to abide by UAC in the form of "sudo":

https://github.com/lukesampson/scoop/wiki/Global-Installs


This is helpful for windows/posh driven installs: https://boxstarter.org/


I do this too. It’s worked well for me. I have three Macs and six Windows machines for work where I like to keep config in sync. I generally assume the scripts work (and they usually do still work, especially on Windows), until they break, at which point I hunt down the problem and fix it.

I’ve gone through a few iterations but for me it’s come down to getting things simpler so there is less to maintain.


I wrote a tool that I've been using for the last few years now.

It uses similar concepts used in Docker (even sharing tech).

https://godarch.com/

https://pknopf.com/post/2018-11-09-give-ubuntu-darch-a-quick...

A deterministic OS, with the package manager of my choosing.

I run the same exact bits on 3 different machines.

Each reboot is a fresh install.

Here are my personal recipes: https://github.com/pauldotknopf/darch-recipes


I used a bash script with Homebrew to reimage my MacBook fresh. With cask I can get almost every app I need and reinstall Python and the necessary packages. As you're working on the command line, you can also modify most settings with `defaults write`. I'm sure there is something similar you can do with Windows and Linux.
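Some examples of the kind of tweaks that can be scripted with `defaults write` (these particular keys are common ones, not necessarily what the parent uses):

    defaults write com.apple.dock autohide -bool true
    defaults write NSGlobalDomain AppleShowAllExtensions -bool true
    defaults write com.apple.finder AppleShowAllFiles -bool true
    killall Dock Finder   # restart the affected apps so the new settings take effect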


I do the same. For me it's as simple as installing ~5 homebrew packages (I can get the same packages for my Linux machines), the Opera browser, and a copy of .bash_profile and .vimrc, which I keep in a Github repo. Having to migrate machines has made me trim down my development toolset significantly. I now know exactly what I need, and can go from fresh machine to the environment I'm accustomed to working with in < 1 hour.


Yep, and you know a shell will be on whatever device you want. I've even got it tightly ordered so it creates an SSH key, waits for me to go add it on GitHub, and then resumes and pulls all my repos down along with my dotfiles.

It's a really bad idea to toot one's own horn before the horn is ready, but I have (and always have had) the same problems.

I finally started using Docker with ssh -X so that my host laptop just acts as an X display and the Docker instance is what I "really" work in.

The advantage is that I use Dockerfiles (dead simple templates) as my configuration - it's there in git right now. I change it when the thing I am doing gets over the hump of me bothering to add it to git. This means I don't end up with the usual cruft cluttering up my config: things only stay on my machine after a reboot if I add them to the templates, which is very simple but at least forces me to think "do I really want that again?"
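A rough sketch of that setup, assuming an image (the name here is made up) that runs sshd:

    docker build -t my-workstation .                   # Dockerfile kept in git
    docker run -d --name ws -p 2222:22 my-workstation
    ssh -X -p 2222 me@localhost                        # GUI apps inside the container display on the host's X server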

Secrets are on a usb stick which means I sometimes get to do this on my wife's Mac - assuming I can chase children off it.

I put together http://www.github.com/mikadosoftware/workstation - it does all this and sorta kinda walks you through setup. Someone has sent a pull request with a good fix for ssh passwords, but I have put it off thinking I will just get this "last thing here" working, and frankly I am ashamed enough of ignoring free contribs that I will probably add it tomorrow on the commute.

I am seriously thinking of migrating it over to nix as that does look cool but really I have lots of projects - this one I just use every day.

Edit: actually the two biggest advantages are

- that I have a latest and a next instance - I can rebuild one while working on the other, so instead of rebuilding and reconfiguring every two years it's every two days - and I always have a spare instance to work on if the build goes south. I am guessing this will mean moving to a new laptop (yeah, one day) will hurt less, for the same reason daily releases hurt less than quarterly releases

- I am starting to ratchet up - as it's all scripts in theory my security and other bits and pieces just go up and to the right.


Thank you for this! I eventually gave up trying to get X11 apps stable in Docker, I will see if I get it working with yours.

I run Nix and it's a decent experience, but a clear benefit with containerizing user space apps is sandboxing. Sure there are ways to escape docker jail, but it's definitely more sanitary to have isolated filesystems (XSS:ed Electron apps sweeping home directories for secrets etc would have to be more sophisticated).


I toyed with building a docker workstation, using X forwarding via XQuartz on a macos host.

Getting something to play nicely with a high-DPI display and a multi-monitor setup seemed next to impossible.


It works ok on the one big mac screen :-) just buy larger and larger single displays ! It's what the manufacturers would want you to do :-)


My opinion is that people tend to overdo solutions for problems like these. If you're not faced with some case where you're flashing a laptop on a regular basis, don't aim for 100% coverage on your settings.

1. Figure out what files really aren't easy to replace (e.g. vscode config, bashrc, etc.).

2. Make a dead simple script that copies all these files/directories (and the script itself) to a temp folder and makes an archive of that folder (see the sketch after this list).

3. Email it to yourself.

4. Make a `/remind me to ....` on Slack or an email service to remind you every few months to do this.
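A sketch of step 2, with a handful of illustrative paths:

    #!/usr/bin/env bash
    set -euo pipefail
    tmp=$(mktemp -d)
    cp -r ~/.bashrc ~/.vimrc ~/.config/Code/User/settings.json "$0" "$tmp"/
    tar czf ~/config-backup-$(date +%F).tar.gz -C "$tmp" .
    rm -rf "$tmp"
    # then attach ~/config-backup-<date>.tar.gz to an email to yourself (step 3)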

When I do recover, I don't just copy and paste all the files in. I use them as references as I rebuild, because after a few years my configs are filled with obsolete cruft, or there are better ways to do things, and I'll notice it when I go to paste that segment into the bashrc or whatnot.

This certainly isn't an approach for everyone. Lots of people will want a more nuanced, automagical approach. I just often find that over-automation becomes this expensive headache that never ends up paying for itself.



Shameless plug: I rewrote the "dotfile-relevant" featureset of GNU stow in plain python as a PoC of a simpler batch symlinker: https://github.com/joshuarli/sym-prototype

It's worked really well for me. Used to use stow.


I found the problem was I only ever migrate to a new machine every five years +/-. Many assumptions are outdated by the next run. Therefore keeping main dot files in vcs and a script to install my favorite packages is enough.


I use Ansible for that, since it has great support for Windows and Windows is usually more annoying to configure than Linux (you have to disable telemetry, remove dangerous bloat software, etc).

I even have a playbook for setting up a Wintendo machine with the Steam client (big picture mode) replacing Explorer.exe as the shell. If there is interest, I can publish it as open source.


Wintendo?


Presumably a windows-based gaming box, probably in a small-form-factor case with middle-of-the-road hardware.


Yes, but in my case, the hardware is just an old laptop.

It seems like the declarative management approach of Nix or Guix ought to be perfect for this use case. I would like to use it for my personal laptop when I can find some time to play with them and get started. I wonder how smooth and beginner friendly they are.


While I have tried to do things with Ansible, it was always more hassle than I could be bothered with for my personal computer. My current answer is use Arch, keep a list of installed packages, and restore a backup (of a subset of directories). I have come to accept that /etc is as good a declarative description of a system as you’re going to get.


Same. I've tried to automate this for Ubuntu-based desktops as well, but it didn't work out and I also ended up with Arch. Arch solves the problem by having a rolling release (no OS upgrades) and by keeping things minimal and barebones.


I also feel like I pay much more attention on Arch to avoiding cruft, and having the exact right config in /etc. At first I have to admit that I thought Arch was just a weird form of role-play where Linux people deliberately made their lives difficult to feel clever, but I can't deny I've learned a huge amount about configuration and simplicity since switching. With Ubuntu I'd mostly started treating it like Windows, which is great in a way, but not really what I want out of a system.

I should point out that I use a single laptop for everything until it's replaced, and all I need on servers is a subset of my dotfiles, most importantly my .emacs.d, which I additionally store in git. Obviously if you have a workstation in the office and a Mac laptop and a Windows PC for gaming, then good luck to you.


Syncthing.

All I am doing lately is install Syncthing on any new machine, add my NAS as a syncing device, and set up the one "Linux workstation" folder that I have.

Among other things, this folder has a hundred-something-lines shell script that pretty much does the following:

- Install all of the packages that I need

- creates symlinks of my usual dotfiles (hosted in the "Linux workstation" folder) to the proper locations

- Prompts me for my gocryptfs passphrase and saves it in the keyring

- starts gocryptfs to unlock another folder I have on syncthing with ssh keys

- `git clone` all of my code repos

The whole thing is done in less than 10 minutes. And the best thing about Syncthing is the ability to define which folders go to which devices. When getting a new machine, it allows me to, say, sync only my "development-related + music collection" folders if it is a work machine, or add my camera pictures as well, etc.


I'm working on a new tool called mgmt: https://github.com/purpleidea/mgmt/

While I think it's a good fit for the general automation problems we face today, one of the reasons I'd say it's more difficult to find solutions geared to an individual user's /home/ is that most people don't develop the necessary "resources" to manage those things, or that the tools aren't geared well to support it.

In mgmt's case, we can run rootless, and standalone very easily, so it might be something you can hack on. It is missing the same resources that many other tools are missing, so if you want something special for your /home/, then please send us a patch!

HTH


> ...is that most people don't develop the necessary "resources"...

I think you hit the nail on the head, and this is the 'central' problem. It's fairly easy to find those 'resources' for cattle (for any of the many provisioning/orchestration tools), but nobody creates (or shares) their 'pet resources', because it's initially harder to a) make them generic and re-usable, and b) make them discoverable (mostly a matter of language, and finding 'searchable/findable' terms that describe the resource).


I recently decided to tackle this exact problem, and ended up using Salt in masterless mode to do this. I have the most experience with Salt, and follow a few best practices to ensure my states are idempotent.

Here's my local laptop configuration with a one-step run (I only use Ubuntu laptops as of 2019): https://github.com/AkshatM/configuration
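For anyone unfamiliar with masterless Salt, the core of such a one-step run is something like:

    # apply all states from the local file_roots, no Salt master involved
    sudo salt-call --local state.apply
    # or apply a single state, e.g. one that sets up vim
    sudo salt-call --local state.apply vim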

Some challenges and partial solutions I'm still figuring out:

1. Distribution. I cannot simply clone and run on a new laptop, as that assumes I have Git as well as my Git SSH keys set up for my user - this is usually not the case for fresh laptops. Instead, I carry a clone of this repo on a USB stick, and run `launch.sh` as root.

2. Configuring workspaces. Golang is particularly annoying because $GOPATH != my existing workspace folder (which has many cross-language products in it), so I need to declare two separate workspaces. :/ I would like to one day have a single unified workspace.

3. Version management. I've discovered doing this yourself is detrimental to your sanity (and that apt's package pinning is some of the worst I've ever seen), so I now only require pyenv, nvm, tfswitch, etc. for doing this for languages and favor the latest secure versions of all apt-packages.

4. Deciding which package manager to use. I recently discovered snap, which offers an improved experience for Linux applications compared to apt, but suffers from a small ecosystem. I'm using snap where possible for all my favourite apps, but by and large apt is still the ulcer of my life.


I recently started doing this. I'm using GNU Stow plus Git to maintain my dotfiles and scripts (one thing I do in scripts is check uname to decide if it's macOS or Linux). You could use Gitlab, GitHub, or host your own Git (e.g. reachable via WireGuard). Furthermore, I use Topgrade [1] to maintain most machines. It abstracts Homebrew, APT, Vim Plug, Pip, Cargo, and a plethora of other software management tools. I keep a spreadsheet to write down changes I make on a system, and then cross off which machines also have that change.

This all works well for me, and my Macs have Time Machine, which syncs over WLAN on my home network. However, my Linux machines don't have such a luxury (referring to the UI here). So you might want to look into a Linux alternative for that, or use a system designed with that in mind, such as Btrfs, ZFS, or indeed (as per other comments) NixOS.

[EDIT] I'd like to add that some of my (virtual) machines, such as the ones running Kali, only have read access to certain Git repositories, and no access at all to others. [/EDIT]

[1] https://github.com/r-darwish/topgrade



No need to over-complicate things. I just have my ~/.dotfiles and, if I remembered to save it, a list of the packages installed on my old install that I can go through to pick whatever I need. I could add some scripts for the things I know I need to do, but that's already adding too much. My feeling is you'll end up working hours and hours to create the perfect setup (which doesn't exist, so you just keep tweaking it forever).


If you remember to export the list of packages? That sounds very manual, and prone to failures.

I'm surprised people in this thread aren't talking about backups. I have my dotfiles stored under revision control, like many others here. I also keep a couple of notes about the setup of each host. But basically I install Debian packages and that's 99% of my install.

The only stuff that I install, outside my own code, is things like the Arduino IDE, Firefox, and Calibre, and they get installed beneath /opt.

Despite the lack of "real" state I still take a backup every day, and that has saved me more than once.


Woops! Late answer.

~/ is backed up nightly and ~/.dotfiles is on github, have some other configs on there too that I don't keep in .dotfiles, not that it matters.

The remember part is just that it doesn't really make any difference in the end; I will install what I need when I need it anyway. The important part is to keep the config that I have spent hours and days making mine. :)


I've got my home directory in a git repo with an on-commit trigger to push to a remote repo.

There are some conveniences in my bashrc (add ~/bin to PATH, add bash completions from ~/.bash_completions, similar for man-path, etc.).

vim-plug handles my (neo)vim plugins, so I just have a short .vimrc file.

I've recently consolidated on asdf as a version manager for my workstation, so I no longer manually muck with ruby/python/go venv/paths:

https://asdf-vm.com/#/
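For reference, the asdf workflow is roughly this (tool and version numbers are illustrative):

    asdf plugin add nodejs
    asdf install nodejs 16.20.0
    asdf global nodejs 16.20.0   # records the version in ~/.tool-versions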


Probably overkill, but I'm pretty happy with how my dotfiles are managed now. I commit everything to my dotfiles repository and use chef to install things and manage dependencies. It's easy to support another OS by adding a cookbook and recipes.

https://github.com/callahanrts/dotfiles


Unsure if mentioned already, but I keep a repository of all my Linux configuration files. I have scripts that link them all out to the correct places. Then for almost any tweak I make, I script it into a massive install.sh script I maintain. Now I'm at a point where I can set up a 98% perfectly customized machine with a sequence of `./install.sh some_scripted_setting` calls. All this takes effort and care, but well worth it to me. And since it's all my scripting, I know exactly what's going on.

It's quite tough to do that for Windows (which I partially do). Ultimately I keep a list of the stuff I need to do and manually do a lot of it.

An alternative which I have dabbled with is basing your config in VMs. Keep stable backups of them. That way the host OS is not important, and just re-use the VMs if you switch hosts. QubesOS is what really opened my eyes to this. Of course this is really only useful for particular workflows probably more development oriented, and at some point you'll want to update them/start fresh.


I do the same thing, and I make sure that the install.sh file can be re-run without breaking things. Then if I want to make a permanent change, or add a new package that I will want to keep, I just update install.sh and re-run it so I know it works and is up to date. Pretty fragile, but I would rather spend a few minutes fixing any errors whenever I run it on an updated OS than take the time to make it more robust.


Hi! Take a look at Sparrow6 - https://github.com/melezhik/Sparrow6/blob/master/documentati... - it works really well for me when I need to automate server upgrades/configuration.

There is an ssh client for Sparrow6 if you want ssh-based automation. It's called Sparrowdo - https://github.com/melezhik/sparrowdo/blob/master/README.md


I load my major config files (fish, tmux, emacs) in a CI alpine container [0] and check that they're not broken when I commit them.

If you're on OS X then `brew bundle dump` and `brew bundle install` will also help[1]

[0] circleci config: https://gist.githubusercontent.com/bauerd/27b24d1a3f881fe508...

[1] https://github.com/Homebrew/homebrew-bundle
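The brew bundle round trip, for reference:

    brew bundle dump --file=~/Brewfile      # record installed formulae, casks and taps
    brew bundle install --file=~/Brewfile   # reinstall everything from that Brewfile on a new machine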


This applies only to macOS.

I use CCC (Carbon Copy Cloner) to do a full bootable volume backup, which is triggered every night around 9pm. I go to sleep after 10, so I can switch off my desktop.

When I move to a new machine, I use Setup Assistant or Migration Assistant to migrate data from a CCC backup to a new Mac. [1]

It has worked seamlessly for me across devices as well: laptops, a Mac mini and iMacs.

[1]: https://bombich.com/kb/ccc5/i-want-clone-my-entire-hard-driv...


I use a bash install script in my dotfiles that installs all the software I need using homebrew, restores data from a backup and clones various repos.

I always try to keep any application settings that are important in the dotfiles, but the backup (some basic rsync commands) has a bunch more stuff in case I didn’t get around to adding something yet, and have to add it manually afterwards.

I’ve actually had to use it in the past few weeks so it’s quite up to date:

https://github.com/mjgs/dotfiles


You could try experimenting with Guix. If you take the time to set it up and if it works for your needs, you can just use the config file to get the same installation on every device.


I've recently employed a docker-based strategy for this. I work for multiple clients, some of whom go months between giving me work.

Each client has their own docker-compose setup on my machine - and I have a secondary machine that I rsync the dockerfiles and some supporting scripts[1] over to for out-and-about. I have a couple of scripts for generating the docker templates for new projects.

This means I only need to manually set up three things on a new machine: Docker, a password manager, and a text editor (I'm using Atom, but Vimmers / Emaxions could probably roll this into the template generator)

Depending on how locked-down your setup is, you might find that as time goes on and distros go out of support, you need to update your dockerfiles - but I think you'd need to make those kinds of tweaks with any system.

Note that I no longer advocate a dockerfile per-project, but per-client (or some other sub-structure) - that way, production dockerfiles don't interfere with developer dockerfiles. Some day, I'd like to experiment with "sidecar" containers instead... some serious advantages there.

[1] Supporting scripts include the templates, a few bash aliases, an easy way to find the ip of a docker machine, an easy way to open a bash shell within a docker container, etc.


Put your dot files in a git repo and then stop.

Automating this sort of thing doesn't make any sense unless you're switching laptops on a much more regular basis than every other year. You're going to spend more time vetting the ideas in these comments than you'd spend configuring your new laptops for the next 4 years.

See also: https://xkcd.com/1205/


I was really impressed when I got a new MacBook this year. With my previous Windows machines it was always one or two days of reinstalling everything and then a few more weeks until I had all my settings back. With the Mac I restored from Time Machine and after a few hours I had my machine back pretty much the way it was before with just a few settings missing. That alone is worth a certain premium to me.


I have had Time Machine fail on me when I tried to restore from a backup on the network that was not unmounted properly. It would not let me get to it in any way, and trying to debug the issue from public forums was impossible. In the end I gave up on it as most important files were backed up separately.

Now I'm using Carbon Copy Cloner [0]. It has the same ease of use as Time Machine, it gives you bootable backups and it has the best documentation I have ever come across. Pretty much every possible use case and error case is described with workarounds.

Time Machine is great when it works, but when it doesn't it turns into an absolute nightmare. Also it gives you very little insight into what it is doing or what state it is in.

PS: I'm not affiliated with CCC. Just a very happy user who got bitten by Time Machine too many times.

[0] https://bombich.com/


I don't mind the premium, and what you (me too) observe with Windows is a bad architectural decision by Microsoft: allowing every app to pollute its shit all over the place - registry, system dirs, user/*, and tons of other obscure places where an app can spread leftovers.

This makes it a nightmare to manage anything.

The Mac ecosystem is cleaner and more tightly controlled.

With Windows, it's a business opportunity for someone to plug Microsoft's holes and come up with an easy migration/restore service.


Yeah... Mac is so clean, and that's why everything regarding transferring devices is such a PITA. It takes hours to figure out the arcane undocumented ways to fix whatever's broken on a Mac, because they intentionally hide or break their own devices. Seriously, you can't even replace hard drives anymore on a Mac. Here's a fun one: try transferring Family Sharing to a new owner (because, say, the previous owner died) without losing everything. Try moving a child without access to the previous owner's device, or even dropping the child from Family Sharing. Everything iOS user-management related just plain sucks. I'm glad they're finally fixing the hot mess that is iTunes. Working on iOS or Mac is closer to being a techpriest than a techie: understand nothing, just chant the magic words. Give me Windows or Arch Linux any day of the week. I'm actually pretty sad the Ubuntu phone stopped being developed.

I can see how the Mac is not all golden but I still think it’s much better than windows.

I remember even in the 90s realizing that the registry was a bad idea. Instead each app should have had its own executable/settings/data folder. I don't understand why they didn't add this.


Exactly. Each app should have its own directory. It shouldn't pollute outside of it. Problem solved.

Ballmer thought differently. Or most likely didn't think at all.


I wouldn't blame it on Ballmer. All these decisions were made when Gates was in control. The designers of NT and Windows 95 had a chance to clean up and didn't do it.


You can get 99% of what configuration management does by just building a configuration package.

1. If you want an application installed, list it as a dependency of your configuration package.

2. If you want a configuration file installed, put it in your configuration package.

3. If you want a user account or group, put a config file for systemd-sysusers in your configuration package.

I built a tool that simplifies package building for this particular purpose, where you're just listing dependencies and file contents: https://github.com/holocm/holo-build - As an example, this is the configuration package for my notebook: https://github.com/majewsky/system-configuration/blob/418282...

The nice thing about this approach, compared to switching to something like NixOS, is that you can continue to use your favorite distribution. However, when you go this route, you will run into a brick wall very fast, because you cannot install a configuration file into a path where an application package has already installed an example config. I built http://holocm.org for this purpose. It works well enough, even though it has a few quirks at times. It's fundamentally limited by the system package manager having the final say in everything.

Since January, I've been slowly migrating my servers to NixOS. The desktop and the notebook might follow at some point. NixOS certainly has enough quirks of its own, but at least configuration management is a first-class citizen on my servers.


Just yesterday afternoon I picked up my old laptop from 2015, and thanks to NixOS [0] + versioning my home folder [1] in a couple of hours (of mostly waiting for the thing to finish downloading stuff) I had my usual setup running with everything configured - like everything: settings, packages, emacs, shell, wallpapers, etc

NixOS stable is "extremely stable" (i.e. it never broke for me in three years that I use it) and "upgrading it" is still two commands away

Another interesting part is that now that I run it on all machines it is really easy to sync the configuration between machines, as they _literally_ share the same config barring something like 20 custom lines for every machine

This stuff used to take _days_, I'm so happy with this thing

[0]: https://nixos.org/ [1]: https://github.com/f-f/home


I'm in a similar situation (3 different workstations that should have the same config) and tried to solve it with all the major config management solutions (Ansible, Puppet, Saltstack), and also tried using a Bash script. While config management works perfectly at my workplace (for real servers), I found all of them way too complicated and slow for maintaining my workstations.

In the end I switched to use a plain Git-managed makefile which works perfectly for me, because:

- Plain and simple syntax, single file

- Shell autocompletion for Makefile targets, if I only want to install a small set

- No need to write cfg management "modules" as I can achieve the same with minimal Bash scripting

This is how my config looks currently: https://gist.github.com/ifischer/5fdd672aeed1099c1f6c6ea925c...


My system for multiple Debian and OpenBSD machines:

Configuration files, secrets files, and personal scripts are stored in a git repository. There is a shell script that sets up symbolic links for the configuration files, /etc/hosts file, and crontabs (the machines all back up to each other and to several servers using rsync and hardlink snapshots automatically, in addition to offline media backups). Machine-specific configurations (mostly keyboard and screen) are conditional on hostname and pulled in by symbolic links or templating.

The scripts have some OpenBSD-specific conditionals. Also .profile is conditional on the operating system.
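A sketch of what those conditionals can look like in a shared .profile (the specific hosts and settings are made up):

    case "$(uname -s)" in
      Linux)   alias ls='ls --color=auto' ;;
      OpenBSD) alias ls='ls -F' ;;
    esac

    case "$(hostname -s)" in
      thinkpad*) export PATH="$HOME/bin.laptop:$PATH" ;;   # machine-specific bits keyed on hostname
    esac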

GNU Emacs configuration lives in a separate git repository. EXWM is my window manager on Linux and OpenBSD, which makes things easier. I run different versions of Emacs on different machines (some from the package manager, some built from source). The configuration is conditional on terminal or graphics mode, as well as on X11 specifically. There is also conditional configuration for Mac OS X for when I have to work on someone else's machines.

I do not automate operating system or package installation (I also get new hardware "every couple of years," and all the machines have different software installed), but the installed package list (`apt-mark showmanual` on Debian and `pkg_info -m` on OpenBSD) is part of the backups, so I can restore or bring up a new machine from the list quickly.
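That package-list trick, concretely (Debian side shown; the filename is arbitrary):

    apt-mark showmanual > manual-packages.txt
    # later, on a new or restored machine:
    xargs sudo apt-get install -y < manual-packages.txt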

Reference and media files are synced with Unison, my personal notes and projects are in git repositories, and mail is synced with offlineimap.

I find the above makes it easy to bring up new machines and, more important, effortless to keep the relevant configurations on multiple machines consistent, while not getting in the way of differences (different software on different machines, software compiled from source vs installed via the package manager).

I plan to start using GuixSD in the near future, and look forward to using it to automate server and virtual machine configuration and provisioning.


I read about Stow a while ago here in HN, and have been using that + restoring bulk data from backups, as a method to migrate between computers:

https://news.ycombinator.com/item?id=8487840


For the software I use, my approach is portable applications, i.e. applications that require no installation and save their settings in the folder they run from. To move to another computer you simply copy the application folder. Makes it really simple to backup too!


Can't stress this enough.

After upgrades, moving computers and installing and configuring everything multiple times, I went the portable route.

Now, for every application I install, the portable version is my first preference. Every few months, I back up application paths and configuration files which are outside of the portable installation paths. Some applications keep local MRU files or other settings in the dot folders under the user directory.

Of course, the portable applications directory also gets backed up on a regular basis.

Nowadays, recovery from an operating system failure results in a downtime of less than an hour.


I use Ansible, and it pulls in everything I need, including my dotfiles, which are managed in a separate repo:

https://github.com/geerlingguy/mac-dev-playbook


Same here. I have an Ansible role "workstation" that installs/configures 90% of what I want, including cloning and installing the repo with my dotfiles.


Most of those tools are built around the "cattle not pets" idea, but a PC is a pet. The best way IMO is some dotfiles and a bash script; the OS installation part is pretty quick, unless you're running Arch - but even that can be automated.


> Most of those tools are built around the "cattle not pets" idea

That's true, but not in a way which makes them inapplicable elsewhere. They are essential to support the “cattle not pets” approach, but most of them aren't intimately tied to it.


Brewfiles [0] get me 90% of the way there.

[0] - https://coderwall.com/p/afmnbq/homebrew-s-new-feature-brewfi...


Like everybody else here in the top-level comments, I've also written a tool for that :-)

It's a wrapper for things like Ansible, Terraform, shell-scripts etc (Ansible is the only one with a working implementation so far), with the goal for you to be able to describe your setup in an easy, minimal & composable way, within a single file (well, data-structure, really). That file and one binary is all that is needed to re-create your setup on a new target (which can be a physical or virtual machine, container image, remote server, etc.):

https://freckles.io


I tried to automate my most recent Ubuntu PC using this Ansible playbook as a starting point:

- https://github.com/lvancrayelynghe/ansible-ubuntu

It has some bells and whistles I stripped out (or wished I did).

One of the problems I haven't really solved: ongoing updates. Say I need to install a new apt package. Do I update the Ansible playbook and rerun it? I probably should. But more often, I just install it from the command line. Now my image is out of sync.

Also, certain applications like Jenkins seem surprisingly hostile towards reproducible builds.


I do this regularly and decided to, instead of automating things, simplify and reduce my setup. I have a private cloud that stores documents, and all software is in Git repositories. Setting up a new machine is really just copying my private key, setting up cloud access and email, cloning repositories and copying my nvim config.

Comparing with my colleagues, I have to add that I hardly keep any personal stuff on my machines - I have no photos, music and movies are streamed. I use 128GB of my 512GB storage (including cloud and software). I also try to go with defaults as much as possible.


Another alternative (that I'm surprised isn't already mentioned) is https://github.com/lra/mackup.

What it does:

1. moves your dot files and whatnot into DropBox (or Box or Google drive or whatever)

2. symlinks the moved file to the original location


Darch[0]?

>Think Dockerfiles, but for bootable, immutable, stateless, graphical (or not) environments for your everyday usage.

[0] - https://godarch.com/


I use a dotfiles repo in git and GNU Stow. I could automate more of the setup work, but I do it so rarely that just having my config files backed up and replicated is enough.


I tried Nix and Ansible but went back to plain bash scripts. I think these tools are great if you actively maintain them, but they fall short if you only use them twice a year.


There is no absolute approach.

On Windows you can use @rlv-dan's strategy though not all apps will be portable.

On Linux it's mostly enough to save your dotfiles incl. .bashrc and your home folder. You can also save your repo lists and the currently installed software and just reinstall these automatically (e.g. with ansible).

But afaik there's no cool and nice solution that you just use. You'll have to write this tool or find your own backup-and-restore strategy.


Assuming data is on a separate partition or disk, I do nothing after an upgrade on older gear.

A new Mac, Brewfile and dotfiles in git

New Linux: I wrote a bash script years ago that just iterates over an array of pkg names as strings, then clones dotfiles. It checks for apt or pacman; there was a time when I couldn't pick and kept going back and forth between Debian and Arch.

For some reason I can’t bring myself to run Homebrew on Linux, even though I know it supports that OS now.


I use archiso and a number of custom PKGBUILD files to configure all my gear, and boot into it as a live ISO. If I need additional packages, I install them as needed in the live session (normally I keep an executable README.md in the package root that installs everything I need). If there's anything I want to keep config-wise, I merge it into the package that preps /etc/skel/ for the next build.

I use etckeeper. https://wiki.archlinux.org/index.php/Etckeeper

I find it nice to be able to simply make changes as I need directly on the system, without going through any config management abstractions, but still have a record of previous state I can revert to. It's simple and works well.
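Getting started with it is roughly:

    sudo apt install etckeeper          # or pacman -S etckeeper on Arch
    sudo etckeeper init                 # turns /etc into a git repository
    sudo etckeeper commit "initial commit"
    cd /etc && sudo git log --oneline   # inspect or revert changes like any git repo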


This is a tool that I've been using for the past five years. It works really well for me: https://gitlab.com/happycoder97/dotcastle

Example: https://gitlab.com/happycoder97/my-dotcastle


I use a simple checklist on github, which contains mostly scripts I can run to install what I need

https://github.com/acutesoftware/dotfiles/blob/master/instal...

Works quite well and is future proof (plain text)


I have used VMware as a base on Mac, Linux, and Windows hardware over the last 10 years. I do everything except video conferencing in VMs (running Fedora), use multiple per-purpose VMs at a time (e.g. segregating banking from client work and from email/social media), and just migrate the VM images when changing hardware. Recommended.


Have you tried qubes-os[0]?

- https://www.qubes-os.org/


I currently use a mix of SaltStack in masterless mode (for system-wide configuration) and Makefile (for user-specific configuration), both in the same git repository, and it handles pretty much everything fine, including idempotency.

The thing is, it doesn't age well: it works for me because I take care of it on a regular basis but it probably wouldn't help much a few years from now if left unmaintained.

-*-

I started using SaltStack in masterless mode 5 years ago, and I won't be looking back. It handles both package installation and consistent system-wide configuration. It was perfect for a single desktop computer running Kubuntu at the time, and it's perfect now for 9 different computers (servers, desktops and laptops) with widely different hardware specs, 3 different distros (Debian, Kubuntu and Arch) and very different use cases (servers, gaming stations and family computers).

The Makefile is an idempotent replacement for a shell script I wrote for NetBSD 15 years ago. It handles user configuration and works on pretty much all OSes I use (including Windows, Solaris and AIX) without requiring a root / admin account.

With this mix, I can setup a new computer with a known distro in a few minutes: setting the hostname, selecting the profile (server, gaming…) and the list of users which will get access to it. Installing on a new distro (including an upgraded version of a known distro) usually requires a few tweaks, so it can take up to a few hours to get everything working — that's still way better than spending weeks or months for some inconsistent and buggy result, but keep this in mind if new distros is the only scenario you're interested in.

I don't migrate data: I prefer having as little as possible on each individual computer.


For a desktop Linux environment, this setup gets me up and running from scratch with one command, fully containerised, managed using Git.

https://github.com/sabrehagen/desktop-environment


I use a GitHub repository to keep track of configuration files and Ansible to install apps.

https://github.com/fernandoacorreia/macfiles


When I was running Linux (I switched to Windows lately) I relied on a bash script I made and saved on GitHub.

It contained everything I need to start working again, such as:

- Installing Ruby/Rails/Python/Django/Node

- Installing and setting up a SQL/Postgres db, creating user etc.

- Installing plugins and configuring Vim

- Installing UI packages for the linux itself (icon packs etc.)

- Installing various libraries (imagemagick etc.) that I know I'm gonna need to install soon

- Creating folder structure I've found comfortable to work with

- Installing various desktop apps I use

The sky is the limit - it was awesome that after a fresh install the only thing I had to do was configure git on the machine, clone my repo and run one command :)


I've just got a syncthing set up with all my configuration and data.

"Spinning up" a new computer takes me an afternoon, which I doubt I could improve on enough by using more professional tooling to make it worth the time investment.


A long time ago I used FAI (Fully Automatic Installation) to manage my home machines, but later switched to Ansible. Just because you use things at work doesn't mean you can't use them at home.

I use Ansible scripts to automate the setup on my new Macs, so far so good


Definitely - I've been doing this for about five years with all my machines. A rough overview of the setup:

- A public repository with a Nix configuration file: https://gitlab.com/victor-engmark/root. This does the global setup, installing applications, configuring global services, enabling hardware, etc, and gets me 90% of the way to identical machines with a single `nixos-rebuild switch`.

- A public repository with dotfiles and main application configuration: https://gitlab.com/victor-engmark/tilde. Allows me to configure user applications once for all my machines.

- A private repository with secrets such as SSH keys and host-specific configuration like video drivers, screen layout etc.

This makes it possible to get from a fresh OS install to a developer workstation with everything from keyboard layout to my favourite window manager configured as I like it within minutes.

I consider learning configuration management a nice bonus of this setup, but of course that is not for everyone.

The biggest problem with this is how many programs seem to go out of their way to make their configuration hard to version control. Firefox moved to SQLite for everything years ago, but at least there's the Sync service. Some applications reorder configuration items every time they save (I've built scripts to order them properly). And others include things which IMO don't belong in "user" configuration files such as window size, recently opened files and which configuration tab was last open.

PS: I recently moved to Nix on NixOS from Puppet (similar features to Ansible and Chef) on Arch Linux for the system configuration. Nix has several massive advantages over at least Puppet and Ansible, both of which I've used a fair bit:

- Much shorter configuration. For example, `services.fail2ban.enable = true;` is enough to install, enable and start a fail2ban service when building the configuration, and `time.timeZone = "Pacific/Auckland";` means I don't have to even think about where that piece of configuration is stored, or in which of the infinite formats used for Linux configuration.

- Trivial rollback to earlier configurations, during runtime or at the boot menu. This saved my backside when I screwed up a GRUB-related setting - just reboot and select the previous configuration.

- You can install applications as a non-root user to try them out.

- I've been using Nix for less than a month, but it's far easier to just get stuff done with it than with Puppet or Ansible, and the end results so far are just generally nicer.

So if you want a simple system configuration you can copy around to configure everything the same way, I would thoroughly recommend NixOS.
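
For anyone who hasn't tried it, the day-to-day loop is roughly the following; this is a sketch of common commands, not the exact contents of my repositories:

  # Edit /etc/nixos/configuration.nix, then build and activate the new system generation:
  sudo nixos-rebuild switch

  # Screwed something up? Roll back to the previous generation
  # (also selectable from the boot menu):
  sudo nixos-rebuild switch --rollback

  # Try a package as an unprivileged user without touching the system config
  # ('nixos' is the default channel name on NixOS):
  nix-env -iA nixos.htop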


Pretty simple - I have a Keybase git repo called "dotfiles". When I get a new machine, I copy those files to my home directory.

Only go with Ansible if you already scripted the original provisioning with Ansible (Ansible is better at pushing configuration than at reading back existing state).


I apologize if this sounds naive or ill-informed, but on a Mac can't 'restore from Time Machine' do this? What am I missing?


That I'm using Linux. Sorry, I should have mentioned that. It's on Mac hardware, but I've never used OS X.


Microsoft Windows Active Directory, Group Policy and/or an image server?

Well, if you were doing it for a lot of machines it would be worth setting up.


Nix comes to mind.


I used to keep a VM of my old machine to spin up when necessary. Mount the disk image and copy over what's required, etc.


Ansible can also be used locally. It's not at all reserved for the server side of things.
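
For example, a minimal local run needs no SSH at all (the playbook name here is just a placeholder):

  # Run a playbook straight against this machine; no SSH, no remote inventory
  ansible-playbook --connection=local --inventory localhost, workstation.yml
  # (or put `hosts: localhost` and `connection: local` in the play itself)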


I got one of these. Feel free to fork and use!

https://github.com/echohack/macbot


Ansible is worth the extra few minutes, IMHO.

+ (minimal) Bootstrap System playbook

+ Complete System playbook (that references group_vars and host_vars)

+ Per-machine playbooks stored alongside the ansible inventory, group_vars, and host_vars in a separate repo (for machine-specific kernel modules and e.g. touchpad config)

+ User playbook that calls my bootstrap dotfiles shell script

+ Bootstrap dotfiles shell script, which creates symlinks and optionally installs virtualenv+virtualenvwrapper, gitflow and hubflow, and some things with pipsi. https://github.com/westurner/dotfiles/blob/develop/scripts/b...

+ setup_miniconda.sh that creates a CONDA_ROOT and CONDA_ENVS_PATH for each version of CPython (currently py27-py37)

Over the years, I've worked with Bash, Fabric, Puppet, SaltStack, and now Ansible + Bash.

I log shell commands with a script called usrlog.sh that creates per-$USER and per-virtualenv tab-delimited logfiles with unique per-terminal-session identifiers and ISO 8601 timestamps; so it's really easy to grep for the apt/yum/dnf commands that I ran ad hoc when I should've just taken a second to create an Ansible role with `ansible-galaxy init ansible-role-name` and referenced that in a consolidated system playbook with a `when` clause. https://westurner.github.io/dotfiles/usrlog.html#usrlog
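
The pattern described above looks roughly like this; the role and group names are made up for the example:

  # Scaffold a role instead of keeping ad-hoc commands around:
  ansible-galaxy init vscode

  # ...then reference it from the consolidated system playbook with a `when` clause,
  # e.g. in site.yml:
  #
  #   - hosts: workstations
  #     roles:
  #       - role: vscode
  #         when: ansible_architecture == "x86_64"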

A couple weeks ago I added an old i386 netbook to my master Ansible inventory and system playbook and VScode wouldn't install because VScode Linux is x86-64 only and the machine doesn't have enough RAM; so I created when clauses to exclude VScode and extensions on that box (with host_vars). Gvim with my dotvim works great there too though. Someday I'll merge my dotvim with SpaceVim and give SpaceMacs a try; `git clone; make install` works great, but vim-enhanced/vim-full needs to be installed with the system package manager first so that the vimscript plugin installer works and so that the vim binary gets updated when I update all.

I've tested plenty of Ansible server configs with molecule (in docker containers), but haven't yet taken the time to do a full workstation build with e.g. KVM or VirtualBox or write tests with testinfra. It should be easy enough to just run Ansible as a provisioner in a Vagrantfile or a Packer JSON config. VirtualBox supports multi-monitor VMs and makes USB passthrough easy, but lately Docker is enough for everything but Windows (with a PowerShell script that installs NuGet packages with chocolatey) and MacOS (with a few setup scripts that download and install .dmg's and brew) VMs. Someday I'll write or adapt Ansible roles for Windows and Mac, too.

I still configure browser profiles by hand; but it's pretty easy because I just saved all the links in my tools doc: https://westurner.github.io/tools/#browser-extensions

Someday, I'll do bookmarks sync correctly with e.g. Chromium and Firefox; which'll require extending westurner/pbm to support Firefox SQLite or a rewrite in JS with the WebExtension bookmarks API.

A few times, I've decided to write docs for my dotfiles and configuration management policies like someone else is actually going to use them; it seemed like a good exercise at the time, but invariably I have to figure out what the ultimate command sequence was and put that in a shell script (or a Makefile, which adds a dependency on GNU make that's often worth it)

Clonezilla is great and free, but things get out of date fast in a golden master image. It's actually possible to PXE boot clonezilla with Cobbler, but, AFAICT, there's no good way to secure e.g. per-machine disk or other config with PXE. Apt-cacher-ng can proxy-cache-mirror yum repos, too. Pulp requires a bit of RAM but looks like a solid package caching system. I haven't yet tested how well Squid works as a package cache when all of the machines are simultaneously downloading the exact same packages before a canary system (e.g. in a VM) has populated the package cache.
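
For what it's worth, the client side of apt-cacher-ng is a one-liner of apt configuration (the hostname below is hypothetical; 3142 is the default port):

  # Point this machine's apt at the cache box
  echo 'Acquire::http::Proxy "http://pkgcache.lan:3142";' | sudo tee /etc/apt/apt.conf.d/01proxy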

I'm still learning to do as much as possible with Docker containers and Dockerfiles or REES (Reproducible Execution Environment Specifications) -compatible dependency configs that work with e.g. repo2docker and https://mybinder.org/ (BinderHub)
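
For anyone unfamiliar with it, repo2docker just takes a repository URL and builds/launches a container from whatever REES configuration it finds; a rough example (the repo URL is only an illustration):

  python3 -m pip install jupyter-repo2docker
  # Builds an image from whatever it finds (requirements.txt, environment.yml, Dockerfile, ...)
  # and launches it
  jupyter-repo2docker https://github.com/binder-examples/requirements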


I use the command dd to clone my disk to the new computer.
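
Something along these lines (device names are examples; check them with lsblk first, since this overwrites the target disk):

  # Clone the old disk (sda) onto the new one (sdb); this wipes sdb entirely
  sudo dd if=/dev/sda of=/dev/sdb bs=4M status=progress conv=fsync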


Ansible is probably what you're looking for.


Just use Ansible for your personal computer.


ChromeOS


During my master's we had assigned desks and mandatory presence 4.9 days a week. Desks would switch about four times a year using a verifiably random scheme, and in addition, there would be projects that we would work on with random people, so we switched desks a lot. Each desk comes with a proper 2-screen desktop setup, so pulling out your laptop was inferior. Via PXE boot, people would regularly reinstall their system for various reasons, so it would regularly be a fresh Ubuntu install. An additional requirement was that others must still be able to work on my system without (m)any quirks.

Only a few of us used any sort of automation. Most who did chose Ansible, and all I ever heard from them was cursing on new systems and, between homework, constantly tweaking the deployment script. I'm sure the comments will say Ansible works reliably and painlessly for them, but from what I've seen, it seems to take some time to get into if you want to set up GUI systems in detail.

What worked extremely well for me was a shell script that I grabbed from my server (wget example.com/setup.sh), ran, switched user account, ran step 2 (since it mounts the user's homedir, you don't want to do that while logged in), and then logged back into my real account. After less than 5 minutes of manual work, I had my desired software, the right desktop environment, task bar / alt tab / system tray / clock / etc. settings, and I mounted my homedir on a local server, which was fast enough to painlessly run virtual machines with GUI OSes off of it. I would be as productive and comfortable as on my private laptop after a few minutes of work in the worst case. The script took a few hours to create at first, and with a new Ubuntu release maybe another hour to make it work on the mix of old and new systems. The advantage over other people's setups was that anyone could wipe my system without a second thought (usually you'd have to ask the desk's owner, they'd want to copy files...) and others couldn't snoop through my files (at least, not opportunistically: they'd have to purposefully install a keylogger rather than just "sudo; ls /home/lucb1e") so I can jot down thoughts or save passwords in Thunderbird without worrying.

In a more common scenario (not shared systems that are regularly wiped), you'd leave out the mounting of the homedir and copy essential files instead, such as your bashrc/vimrc. Using a simple shell script is something I still recommend to manage setting up personal systems. It's what you would do anyway, except stored in a file instead of typing the commands manually. So that's what I still use today, though I don't switch systems often enough to warrant maintaining commands for GUI configuration preferences (which can be a pain to figure out how to set from the command line). My current setup script is included in my bashrc (which I copy together with a .vimrc) and mainly pulls packages. I specify which categories I want, e.g. if the system has WiFi it'll install wavemon, or if the system has a GUI it'll install wireshark and xdotool.
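
A trimmed-down sketch of what such a category-driven script can look like (the WiFi/GUI packages match the examples above; everything else is illustrative):

  #!/usr/bin/env bash
  set -euo pipefail

  # Usage (illustrative): ./setup.sh "base wifi gui"
  CATEGORIES=${1:-base}

  sudo apt-get update
  sudo apt-get install -y git vim tmux    # base packages, just an example

  if [[ " $CATEGORIES " == *" wifi "* ]]; then
      sudo apt-get install -y wavemon
  fi
  if [[ " $CATEGORIES " == *" gui "* ]]; then
      sudo apt-get install -y wireshark xdotool
  fi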


For me it was "easy" on Linux. But you need to commit to it, no pun intended.

I moved from Ubuntu to OpenSUSE Tumbleweed with almost no downtime. The only two things I had to change, IIRC, were to create the same group as Ubuntu with GID 1000 (100 on TW) and to find out the different package names, because Ubuntu is a mess.

The process that led me to my current setup:

- Install the OS fresh (always using the same username, because there are many cases where you can't eval or access environment variables)

- Ignore most of the $XDG folders except Desktop and Downloads.

- Have a centralized root folder and a scripts folder inside of it. All my scripts start by checking an environment variable to verify the root is correct.

- Start a repo with bare = false and worktree = /home/user, and use git add + an excludesFile (see the sketch after this list). I've tried everything there is and this is the best option by far: it's way faster, and you have to be explicit about what's in your VCS. In my VCS GUI's .desktop file I have added MimeType=inode/directory; so it's aware of the root.

- All my dotfiles are there, and since I use KDE I also added all the KDE config files I care about so I can see what changes on updates; it has helped me debug problems after an update more than once.

- Don't customize the OS manually in any way (unless the GUI writes to a config file); create a script and only make changes through it. That includes what you install, what you remove, config changes, daemons, EVERYTHING.

- I prefer programs that are configurable by config/plain text files.
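
To make the repo layout above concrete, here's a rough sketch of that home-as-worktree setup; the paths, the exclude file and the alias are illustrative, not exactly what I use:

  # Dotfiles repo with the worktree pointed at $HOME; everything is ignored by
  # default and files are added explicitly with -f.
  git init ~/myroot/home.git
  git -C ~/myroot/home.git config core.bare false
  git -C ~/myroot/home.git config core.worktree "$HOME"
  git -C ~/myroot/home.git config core.excludesFile ~/myroot/home.exclude
  echo '*' > ~/myroot/home.exclude

  alias home='git -C ~/myroot/home.git'
  home add -f ~/.bashrc ~/.vimrc ~/.config/kdeglobals
  home commit -m 'Track the config files I care about'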

It's "reproducible", you have a script that describes your OS.

Personally, I use Flatpak for everything that isn't in TW's repos, and docker/podman for local development.

Now some examples of things in my script that also show some statistics after I run it. There's also a "first run" set of functions that I only run when it's a clean install.

  sudo zypper install "${LIST_OF_PROGRAMS_TO_INSTALL[@]}"
  sudo zypper remove --clean-deps "${LIST_OF_PROGRAMS_TO_REMOVE[@]}"
  sudo zypper addlock "${LIST_OF_PROGRAMS_TO_REMOVE[@]}"

  sudo systemctl enable "SVC"
  sudo usermod -a -G "SVC" $(whoami)
  sudo firewall-cmd --permanent --zone=ZONE --add-source=SRC

  flatpak remote-add --user --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
  flatpak install --user -y flathub org.freedesktop.Platform//18.08 org.freedesktop.Sdk//18.08
  flatpak install --user --noninteractive "APP"
  flatpak update -y

And things like enforcing correct permissions for .ssh and .gnupg, and setting up mounts and shares.

My root partition is btrfs, home is xfs, and I keep tabs on it with sudo btrfs filesystem usage /.

I have tested a lot of tools, and in the end there's no replacement for discipline (it becomes second nature) and using a script instead of relying on 3rd-party tools for config/provisioning.


I can't edit the comment to fix the formatting, for some reason. Feel free to improve it if you care :)


Saltstack


I went through this, again, a few months ago: https://news.ycombinator.com/item?id=18300976

tl;dr: bash scripts last longer than everything else, are easy to maintain, and anybody with basic IT skills can make sense of them. Keep in mind that in an enterprise context, the build system is used and therefore maintained constantly, and that there's typically a team doing it. At a personal/family level, it's typically you using it, and modifying it if needed, once every other year or so, with literally nobody looking at it in between. Chances are you've done and are doing enough bash that you rarely look up its syntax. When was the last time you used Terraform/Ansible?

The Lindy Effect (https://en.wikipedia.org/wiki/Lindy_effect) says bash will outlive all the more recent solutions.

Full story: I went through these phases:

- full crazy PXE install + Debian/Ubuntu preseed, etc... but it still needed some bash scripts. It required a lot of work from one OS version to the next. There were enough changes between versions of the PXE server that I had to revisit its config every time!

- I eventually bought a laptop for one of my kids that didn't support PXE. I installed the base OS with a USB key, and realised how easy it was, and that it was significantly less painful to install half a dozen laptops this way than to fight my PXE server. My install became a USB OS install + well, my good old bash scripts!

- I got involved with ansible very early on and decided to solve both world hunger and global warming with it, but more importantly, my laptop installs. I spent many hours on this. Got it fully automated, and felt great!

- one of my kids' laptops got destroyed; buy a new one, install from a USB stick, fumble to install Ansible, and realise that Ansible has by now changed significantly and my scripts need a lot of work. This is in the middle of the school year, while I'm super busy at work, and I just don't have time to deal with this. But there's great news: I still have my old bash scripts, and guess what, they still work.

- last upgrade: I went from my old dozen of bash scripts to this: https://github.com/dorfsmay/laptop-setup-ubuntu-18.04

It is slow-ish, especially some of the manual steps, but not painful enough to make me procrastinate and delay an upgrade by six months. More significantly: I sat down with my kids and got them to upgrade their laptops (which helped a lot to fix my documentation)!

PS: I have zero local files; everything is either on a cloud drive (pCloud), or on GitHub if I want to keep history / share it (e.g. my dotfiles).



