No extra tooling, no symlinks, files are tracked in a version control system, you can use different branches for different computers, you can replicate your configuration easily on a new installation.
</quote>
I used to do this (using CVS, at the time), for keeping the configs and projects on my desktop and laptop in sync, as well as for having backups. Quick small notes:
* I split everything into topical modules, which were immediate subdirectories of the home directory, partly so the little laptop didn't have to install everything.
* There was a "config" directory that included most of the typical home directory dotfiles, as well as things like fonts. It had a small shell script in it, for keeping symlinks from the home dir up to date (e.g., "~/.bashrc" was a symlink to "config/bashrc".
* Not everything was in CVS. Photos, for example, were backed up separately.
* It was good to have my server Internet-accessible, since a few times I'd walk for an hour to work from my laptop, and realize I didn't have the latest version of my code.
* Nowadays I would probably use Git, only for the sake of standardizing, but the simple model of CVS was nice for this purpose. (I also knew many other fancier SCM systems, but intentionally went with a simple one.) Also, the repository was just in RCS format (files with reverse deltas of diffs), so, on rare occasions that I made an oops (like accidentally committed a huge file I didn't intend to), I could just SSH in and fix it manually.
* I had various servers for this over time, starting with my desktop, and then a PC in the closet, then a colo server, and then (my favorite) a small home RAID server I made from a fanless Atom board and Debian (which was effectively silent, running years 24/7 before I finally took it out of service, still working; one drive had to be replaced during that time, but was RAID mirrored).
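The symlink-refresh script was nothing fancy; a rough sketch of the idea (the "config" path matches the layout above, and skipping *.sh is just an assumption so the script doesn't link itself):

  #!/bin/sh
  # Point ~/.<name> at ~/config/<name> for each entry in the config dir.
  for f in "$HOME"/config/*; do
      name=$(basename "$f")
      case "$name" in *.sh) continue ;; esac   # skip the script itself
      ln -sfn "$f" "$HOME/.$name"
  done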
That's probably the worst possible way of versioning your home besides, possibly, making a zip of your whole home directory every time you would otherwise do a commit.
Just use a snapshot-enabled filesystem. ZFS works great, BTRFS does too (though it's not considered fully stable and production-ready AFAIK), and LVM can do snapshots as well (take a look at the snapper project from SUSE).
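For anyone curious, a minimal sketch of the ZFS workflow, assuming your home is a dataset called tank/home (name made up) mounted at ~:

  zfs snapshot tank/home@before-upgrade       # create a snapshot
  zfs list -r -t snapshot tank/home           # list snapshots
  cp ~/.zfs/snapshot/before-upgrade/.vimrc ~/.vimrc   # pull back a single file
  zfs rollback tank/home@before-upgrade       # or roll the whole dataset back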
It's unclear what the author means by "home directory". If the author has actual binaries rather than just shell scripts in ~/bin, for example, I agree that adding such things to git is not ideal.
I use the same setup (with a similar set of ignores) of my home being the root of a git repo for my dotfiles, but it's the dotfiles alone that get added to git.
The ZFS tools don't really cover the case of ignoring a bunch of transient files in your home directory that you don't want to sync around. Things like browser temp files, etc.
You know, this aspect of versioning a home directory is basically why home directories aren't versioned very often.
That being said, if you want to "version your home directory", I think that zfs is the best approach.
Also, transient files and browser temp files should reside in /tmp AFAIK... so that's less of an issue. If your application is writing temporary files outside of /tmp, then it's probably a bug.
I have absolutely no dog in this fight at all but, in case anyone is interested, if you don't have zfs on a system you can create and maintain "rsync snapshots" by using hard links that take up zero space:
The idea is, you rsync to a snapshot directory (just like .zfs/snapshot/name_of_snapshot would be) but since you are hardlinking everything, almost all of the files take up zero space. Only files that change get their hardlinks broken and take up any space.
You can put the dir names in rotation and make them work exactly like zfs snapshots.
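A minimal sketch of that, assuming a made-up /backup/home-snapshots directory and a "latest" symlink for rotation:

  #!/bin/sh
  SNAPDIR=/backup/home-snapshots
  PREV="$SNAPDIR/latest"                      # symlink to the previous snapshot
  NEW="$SNAPDIR/$(date +%Y-%m-%dT%H%M)"
  # Unchanged files are hard-linked against the previous snapshot,
  # so only changed files consume new space.
  rsync -a --delete --link-dest="$PREV" "$HOME/" "$NEW/"
  ln -sfn "$NEW" "$PREV"                      # rotate the "latest" pointer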
Except you don't get nice per-file history, which may not be important to you, but if it is, then using git is certainly not "the worst possible way of versioning your home besides, possibly, making a zip of your whole home directory every time you would otherwise do a commit". Filesystem snapshotting also makes it more complicated to sync your dotfiles with machines that have other filesystems.
One feature of this pair (unique to them?) that I find useful is the ability to apply different subsets of my repos to any given account's home directory.
I use a combination of GNU stow and keybase’s encrypted git repos to store my dotfiles. It works pretty well and ensures I don’t have to maintain a gitignore.
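Roughly like this, if anyone wants the shape of it (the repo and user names are made up; keybase's git remote helper handles the keybase:// URL):

  git clone keybase://private/yourname/dotfiles ~/dotfiles
  cd ~/dotfiles
  stow zsh vim git      # symlink the packages this machine needs into $HOME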
I use Dropbox to sync my home directories across machines. I specifically want them to sync but not version. The reason is that I'll be working on one machine (say at the office), want to continue typing and testing on another machine (say at home), and specifically do NOT want to commit in the half-finished erratic state things may be in.
Dropbox deals with syncing across machines on an instantaneous basis so that all machines have the same thing at every instant, allowing me to change location or machine and keep working. Git deals with versioning.
Be careful with git repositories shared in Dropbox in this manner. If you are running an IDE on multiple machines, the idle ones might do some crazy things to your .git tree. You can look for signs of trouble by running "find .git/ -name '*conflicted*'".
Yep, I've had this happen before. I'm usually careful to close IDEs, but 99% of the time I just avoid IDEs. In the worst case I just copy the in-progress source files and re-clone the repo. I don't really have a good alternative to a synced folder across locations.
I’ve set up scripts that periodically grab my working state (unpushed commits, stage, and local diffs) for certain git working trees and push the changes to a directory in Dropbox.
The scripts also do a periodic “git fetch origin” in a broader set of local git repositories, so I usually have up-to-date code locally even if I haven’t been keeping things synced manually.
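The core of it is nothing fancy; a stripped-down sketch (the repo and Dropbox paths are placeholders):

  #!/bin/sh
  REPO="$HOME/src/myproject"                     # placeholder path
  DEST="$HOME/Dropbox/workstate/myproject"       # placeholder path
  mkdir -p "$DEST"
  cd "$REPO" || exit 1
  git log --branches --not --remotes --oneline > "$DEST/unpushed.txt"
  git diff --cached > "$DEST/staged.diff"        # staged changes
  git diff > "$DEST/unstaged.diff"               # local, unstaged changes
  git fetch origin                               # keep the local clone fresh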
It's like discovering that there's a special type of hammer just for the task you're doing. You know hammers. You hit lots of things with hammers. The new hammer is a little different, and to be honest you could make do with the old hammer in a pinch, but once you've spent a little time pounding away with the right hammer, you may suspect the world needs more types of hammer.
I back up to B2 using Restic. It dedupes while also storing each version with a tag. It's not as fancy as git with the diffs... but my home dir is full of git repos and they are backed up of course.
You can mount your whole backup using FUSE and browse around, or restore to some other directory. Multiple computers? Multiple folders to backup? All handled. It's got ignore/include for your binaries and bundled dependencies and whatnot.
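In case it helps, the setup looks roughly like this (the bucket name, mount point, and excludes file are made up):

  export B2_ACCOUNT_ID=...                       # B2 credentials
  export B2_ACCOUNT_KEY=...
  export RESTIC_REPOSITORY=b2:my-bucket:home-backup
  export RESTIC_PASSWORD=...

  restic init                                    # once, to create the repo
  restic backup "$HOME" --tag home --exclude-file "$HOME/.restic-excludes"
  restic snapshots                               # list stored versions
  restic mount /mnt/restic                       # browse backups via FUSE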
I used to version my whole home directory, but recently switched to stow + dotfiles, for a few reasons (a rough layout sketch follows the list):
1. maintaining .gitignore is kind of annoying.
2. I don't need all my configuration files on some machines, say a remote server; you can just stow what you need instead of adding many files to your home.
3. it's not convenient to set up an environment when you already have many files in your home directory, since git won't clone into a non-empty directory.
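The layout is just one stow "package" per topic (the directory names here are examples):

  # ~/dotfiles/zsh/.zshrc, ~/dotfiles/vim/.vimrc, ~/dotfiles/git/.gitconfig
  cd ~/dotfiles
  stow zsh vim          # on a remote server, stow only what you need
  stow git              # add the rest on a full workstation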
My .gitignore is set to ignore everything with a wildcard and un-ignore a few hidden directories where I'm likely to place new files (e.g., .vim, .zsh).
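Something along these lines (the directory names are just examples): the /* pattern ignores everything at the top level and the ! patterns re-include what should be tracked.

  cd ~
  printf '%s\n' '/*' '!/.gitignore' '!/.vim/' '!/.zsh/' > .gitignore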
I have gone one step further for my computers at home (a desktop and a laptop) and have put most of the stuff in my home directory in a git repository, not just configuration files. That way I keep those two synchronized. The thing I do not have in the git repository is media files because they are too large. Instead I just backup the media files about twice a year.
For media files, git-annex [1] is really great. It makes it easy to create redundant backups by cloning the repo on additional devices and using annex to sync the content across all of them from a single worktree.
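A rough sketch of the flow, with made-up host and repo names:

  # on the desktop
  cd ~/media
  git init && git annex init "desktop"
  git annex add photos/                 # large files go into the annex
  git commit -m "add photos"

  # on the laptop (assuming the desktop is reachable as "desktop")
  git clone desktop:media ~/media
  cd ~/media && git annex init "laptop"
  git annex sync --content              # replicate contents across the clones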
I don't like the idea of putting your whole home directory under git.
First of all, for most files manual versioning is overkill; you probably only need to share them, and for that something like Dropbox or Syncthing is enough; this, together with a file system that supports snapshots, gives you most if not all of what you need with much less work (no commits).
There are also some things that I don't want to share between computers for one reason or another (for example, some of my ssh keys). I know I can use .gitignore, but I prefer the opposite approach (state what I want to include, not what to exclude).
I do version my configuration scripts, but I keep them in a subfolder and symlink them to their proper place. This also allows me to selectively decide what should be shared on a given machine and what not.
I previously tried configuration management using various techniques. First the plain home git repo, then rcm [1], then briefly stow.
What I found is that while these systems all worked reasonably well, I ended up writing out several manual steps in README files (e.g. install packages xyz, create user/group, create directory, enable systemd unit, replace {some_template_var} in fileX before copying, etc).
Ansible seemed like a reasonable solution so I switched to that and it's worked out very well for me.
Pros:
- all steps can be encoded as config (config-only updates can still be run using tags)
- a fresh install can be ready to use in minutes
Cons:
- some overhead in encoding copy operations
- slower than alternatives if just updating config (e.g. stow or rcm)
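For the record, the day-to-day usage is just this (the playbook name and the "config" tag are my own conventions):

  ansible-playbook -i localhost, -c local home.yml                 # full setup
  ansible-playbook -i localhost, -c local home.yml --tags config   # config only
  ansible-playbook -i localhost, -c local home.yml --check         # dry run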
Because ~/.config already existed, stow made the zsh symlink inside it. If ~/.config hadn't existed, stow would have symlinked it from ~/dotfiles/zsh.
To remove the symlinks stow set up:
stow -D zsh
I did eventually set up a wrapper script to pass a few default arguments to stow, to ignore certain files I use for documentation. But stow does all of the work of managing the symlinks.
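The wrapper is tiny; roughly this (the ignore patterns here are just my documentation files, and --ignore takes a Perl regex):

  #!/bin/sh
  # Pass default arguments to stow, skipping documentation files.
  exec stow --dir="$HOME/dotfiles" --target="$HOME" \
       --ignore='README.*' --ignore='\.md$' "$@"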
My home directory is currently dotfile hell. I can't think of any reason to implement version control that outweighs the time and effort it would take to properly clean and maintain the directory.
<quote> I use:
where my ~/.myconf directory is a git bare repository. Then any file within the home folder can be versioned with normal commands like: And so on… No extra tooling, no symlinks, files are tracked in a version control system, you can use different branches for different computers, you can replicate your configuration easily on a new installation. </quote>
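For anyone who hasn't seen this approach, a minimal sketch of the setup being described (the alias name "config" is arbitrary):

  git init --bare "$HOME/.myconf"
  alias config='git --git-dir=$HOME/.myconf/ --work-tree=$HOME'
  config config status.showUntrackedFiles no   # hide everything else in $HOME
  config status
  config add .vimrc
  config commit -m "add vimrc"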