
Versioning Your Home Directory - TopHatCroat
https://martinovic.blog/post/home_git/
======
drexlspivey
For dotfiles I use the workflow from this comment
[https://news.ycombinator.com/item?id=11071754](https://news.ycombinator.com/item?id=11071754)
with a keybase encrypted repo

<quote> I use:

    git init --bare $HOME/.myconf
    alias config='/usr/bin/git --git-dir=$HOME/.myconf/ --work-tree=$HOME'
    config config status.showUntrackedFiles no

where my ~/.myconf directory is a git bare repository. Then any file within
the home folder can be versioned with normal commands like:

    config status
    config add .vimrc
    config commit -m "Add vimrc"
    config add .config/redshift.conf
    config commit -m "Add redshift config"
    config push

And so on…

No extra tooling, no symlinks, files are tracked in a version control system,
you can use different branches for different computers, and you can replicate
your configuration easily on a new installation. </quote>

~~~
TopHatCroat
This is a pretty great approach, I agree. My version does not offer an easy
way to use multiple repositories.

I will probably have to use this approach at some point as my config continues
growing.
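
The quoted comment mentions replicating your configuration on a new installation; the clone side of that looks roughly like this (a sketch; the remote URL is a placeholder, and backing up conflicting files first is an assumption):

```shell
# Sketch: recreate the bare-repo setup on a fresh machine.
# <remote> is illustrative -- substitute your actual repo URL.
git clone --bare <remote> "$HOME/.myconf"
alias config='/usr/bin/git --git-dir=$HOME/.myconf/ --work-tree=$HOME'
config config status.showUntrackedFiles no
# checkout refuses to overwrite existing files; move those aside first
config checkout
```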

------
neilv
I used to do this (using CVS, at the time), for keeping the configs and
projects on my desktop and laptop in sync, as well as for having backups.
Quick small notes:

* I split everything into topical modules, which were immediate subdirectories of the home directory, partly so the little laptop didn't have to install everything.

* There was a "config" directory that included most of the typical home directory dotfiles, as well as things like fonts. It had a small shell script in it for keeping symlinks from the home dir up to date (e.g., "~/.bashrc" was a symlink to "config/bashrc").

* Not everything was in CVS. Photos, for example, were backed up separately.

* It was good to have my server Internet-accessible, since a few times I'd walk for an hour to work from my laptop, and realize I didn't have the latest version of my code.

* Nowadays I would probably use Git, only for the sake of standardizing, but the simple model of CVS was nice for this purpose. (I also knew many other fancier SCM systems, but intentionally went with a simple one.) Also, the repository was just in RCS format (files with reverse deltas of diffs), so, on rare occasions that I made an oops (like accidentally committed a huge file I didn't intend to), I could just SSH in and fix it manually.

* I had various servers for this over time, starting with my desktop, and then a PC in the closet, then a colo server, and then (my favorite) a small home RAID server I made from a fanless Atom board and Debian (which was effectively silent, running years 24/7 before I finally took it out of service, still working; one drive had to be replaced during that time, but was RAID mirrored).

------
znpy
That's probably the worst possible way of versioning your home directory
besides, possibly, making a zip of your whole home directory every time you
would otherwise do a commit.

Just use a snapshot-enabled filesystem. ZFS works great, Btrfs does too
(though AFAIK it's not yet considered stable and production-ready), and LVM
can take snapshots as well (take a look at the snapper project by SUSE).

Just use ZFS and snapshot the whole filesystem.

~~~
pdonis
How would you transfer a snapshot to another machine? That's the use case.

~~~
thwarted
There's always zfs send and zfs receive, but these don't work automatically.

~~~
lostapathy
The ZFS tools don't really cover the case of ignoring a bunch of transient
files in your home directory that you don't want to sync around. Things like
browser temp files, etc.

~~~
znpy
You know, this aspect of versioning a home directory is basically why home
directories aren't versioned very often.

That being said, if you want to "version your home directory", I think that
zfs is the best approach.

Also, transient files and browser temp files should reside in /tmp AFAIK, so
that's less of an issue. If your application is writing temporary files
outside of /tmp, that's probably a bug.

~~~
lostapathy
Take your Chrome browser history. That doesn't really belong in /tmp, but it's
also not something that's going to sync around well with zfs either.

------
NikkiA
Just use yadm, it's far less effort than trying to do it manually with git.

[https://yadm.io/](https://yadm.io/)

------
pabs3
The vcs-home website has more tips for version control of home directories:

[https://vcs-home.branchable.com/](https://vcs-home.branchable.com/)

~~~
frumiousirc
vcsh + mr works well for me.

One feature of this pair (unique to them?) that I find useful is the ability
to apply different subsets of my repos to any given account's home directory.

------
SirensOfTitan
I use a combination of GNU stow and keybase’s encrypted git repos to store my
dotfiles. It works pretty well and ensures I don’t have to maintain a
gitignore.

------
dheera
I use Dropbox to sync my home directories across machines. I specifically want
them to sync but _not_ version. The reason is that I'll be working on one
machine (say at the office), want to continue typing and testing on another
machine (say at home), and specifically do NOT want to commit in the half-
finished erratic state things may be in.

Dropbox deals with syncing across machines on an instantaneous basis, so that
all machines have the same thing at every instant, allowing me to change
location or machine and keep working. Git deals with versioning.

~~~
pcl
Be careful with git repositories shared via Dropbox in this manner. If you are
running an IDE on multiple machines, the idle ones might do some crazy things
to your .git tree. You can look for signs of trouble by running “find .git/
-name '*conflict*'”.

~~~
dheera
Yep, I've had this happen before. I'm usually careful to close IDEs, but 99%
of the time I just avoid IDEs altogether. In the worst case I just copy
in-progress source files and re-clone the repo. I don't really have a good
alternative to a folder that syncs across locations.

~~~
pcl
I’ve set up scripts that periodically grab my working state (unpushed commits,
stage, and local diffs) for certain git working trees and push the changes to
a directory in Dropbox.

The scripts also do a periodic “git fetch origin” in a broader set of local
git repositories, so I usually have up-to-date code locally even if I haven’t
been keeping things synced manually.

------
sexyflanders
Meh. I treat my home dir as a scratch space and symlink important things to
version controlled and synced subdirs.

Too many things come and go to justify a git workflow to me. And git ignoring
wide swaths of private data seems haphazard at best.

------
abathur
It's not perfect, but IMO yadm
([https://yadm.io/docs/overview](https://yadm.io/docs/overview)) significantly
improves some of the git tedium here.

It's like discovering that there's a special type of hammer just for the task
you're doing. You know hammers. You hit lots of things with hammers. The new
hammer is a little different, and to be honest you could make do with the old
hammer in a pinch, but once you've spent a little time pounding away with the
_right_ hammer, you may suspect the world needs more types of hammer.

------
teddyh
A simpler solution would be to use SRC for the individual files for which you
wanted versioning.

[http://www.catb.org/~esr/src/](http://www.catb.org/~esr/src/)

------
hashkb
I back up to B2 using Restic. It dedupes while also storing each version with
a tag. It's not as fancy as git with the diffs... but my home dir is full of
git repos, and they are backed up, of course.

You can mount your whole backup using FUSE and browse around, or restore to
some other directory. Multiple computers? Multiple folders to backup? All
handled. It's got ignore/include for your binaries and bundled dependencies
and whatnot.

Edit: encrypted by default.

------
wogong
I used to version my whole home directory, but turned to stow + dotfiles
recently, for a few reasons:

1. Maintaining .gitignore is kind of annoying.

2. I don't need all my configuration files on some machines, say a remote
server; you can just stow what you need instead of adding many files to your
home.

3. It's not convenient to set up the environment when you already have many
files in home, since git does not allow cloning into a non-empty directory.

~~~
graywh
my .gitignore is set to ignore everything with a wildcard and un-ignore a few
hidden directories where i'm likely to place new files (e.g., .vim, .zsh)
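
A sketch of that whitelisting .gitignore (the re-included names are just examples):

```shell
# Write a whitelisting ~/.gitignore: ignore everything at the top
# level, then re-include chosen entries (.vim/.zsh are example names).
cat > "$HOME/.gitignore" <<'EOF'
/*
!/.gitignore
!/.vim/
!/.zsh/
EOF
```

Re-including a directory (note the trailing slash) lets git descend into it, so new files placed there show up in git status without further .gitignore edits.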

------
cjfd
I have gone one step further for my computers at home (a desktop and a laptop)
and have put most of the stuff in my home directory in a git repository, not
just configuration files. That way I keep those two synchronized. The thing I
do not have in the git repository is media files because they are too large.
Instead I just backup the media files about twice a year.

~~~
davvid
For media files, git-annex [1] is really great. It makes it easy to create
redundant backups by cloning the repo on additional devices and using annex to
sync the content across all of them from a single worktree.

[1] [https://git-annex.branchable.com/](https://git-annex.branchable.com/)

------
mrighele
I don't like the idea of putting your whole home directory under git.

First of all, for most files manual versioning is overkill; you probably only
need to share them, and for that something like Dropbox or Syncthing is
enough; together with a file system that supports snapshots, this gives you
most if not all of what you need with much less work (no commits).

There are also some things that I don't want to share between computers for a
reason or another (for example, some of my ssh keys). I know I can use
.gitignore, but I prefer the opposite approach (state what I want to include,
not exclude).

I do version my configuration scripts, but I keep them in a subfolder and
symlink them to their proper place. This also allows me to selectively decide
what should be shared on a given machine and what not.

------
matt-snider
I previously tried configuration management using various techniques. First
the plain home git repo, then rcm [1], then briefly stow.

What I found is that while these systems all worked reasonably well, I ended
up writing out several manual steps in README files (e.g. install packages
xyz, create user/group, create directory, enable systemd unit, replace
{some_template_var} in fileX before copying, etc.).

Ansible seemed like a reasonable solution so I switched to that and it's
worked out very well for me.

Pros:

* all steps can be encoded as config (config-only updates can still be run using tags)

* a fresh install can be ready to use in minutes

Cons:

* overhead of encoding copy operations

* slower than alternatives if just updating config (e.g. stow or rcm)
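
For illustration, a tiny playbook covering steps like those README ones might look like this (a sketch; the package, file, and unit names are made up, not the actual setup):

```yaml
# dotfiles.yml -- illustrative only; names are examples
- hosts: localhost
  tasks:
    - name: Install packages
      package:
        name: [zsh, tmux, git]
        state: present
      become: yes

    - name: Render a templated config into place
      template:
        src: gitconfig.j2
        dest: "{{ ansible_env.HOME }}/.gitconfig"

    - name: Enable a user systemd unit
      systemd:
        name: redshift
        state: started
        enabled: yes
        scope: user
```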

------
lionyo
I use this:
[https://github.com/holman/dotfiles.git](https://github.com/holman/dotfiles.git)

Dotfiles are stored in git, and a script creates symlinks to those dotfiles.

~~~
tomjakubowski
I do something like this, but I lean on GNU Stow[1] to manage the symlinks.

My ~ directory tree, early on in setting up a new system, might look like:

    ~/
      .config/
      dotfiles/
        i3/
          .config/
            i3/
              config
        zsh/
          .config/
            zshenv.d/
              README.txt
          .zshenv
          .zshrc

When I want to use a package, I cd to ~/dotfiles and run:

    stow zsh

Running that sets up symlinks rooted one directory above, ~. Now ~ looks like:

    ~/
      .config/
        zshenv.d/ -> ../dotfiles/zsh/.config/zshenv.d/
      dotfiles/
        (as earlier)
      .zshenv -> ../dotfiles/zsh/.zshenv
      .zshrc -> ../dotfiles/zsh/.zshrc

Because ~/.config already existed, stow made the zsh symlink inside it. If
~/.config hadn't existed, stow would have symlinked it from ~/dotfiles/zsh.

To remove the symlinks stow set up:

    stow -D zsh

I did eventually set up a wrapper script to pass a few default arguments to
stow, to ignore certain files I use for documentation. But stow does all of
the work of managing the symlinks.

[1]: [https://www.gnu.org/software/stow/manual/stow.html#Introduction](https://www.gnu.org/software/stow/manual/stow.html#Introduction)
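
The default-arguments part can also be handled without a wrapper: stow reads options from a `.stowrc` file in the current or home directory. A sketch of `~/dotfiles/.stowrc` (the ignore pattern is an example, not the wrapper's actual arguments):

```
--target=~
--ignore=README.*
```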

------
jorgesborges
My home directory is currently dotfile hell. I can't think of any reason to
implement version control that outweighs the time and effort it would take to
properly clean and maintain the directory.

------
salmo
I use chezmoi. Better than symlink hell, cleaner than this.

[https://github.com/twpayne/chezmoi](https://github.com/twpayne/chezmoi)

------
lostmsu
On Windows this happens by default with File History.

It also supports automatic backups to external storage.

------
testcross
For dotfiles, a good workflow seems to be a git/hg repo combined with GNU
stow.

[http://brandon.invergo.net/news/2012-05-26-using-gnu-stow-to-manage-your-dotfiles.html](http://brandon.invergo.net/news/2012-05-26-using-gnu-stow-to-manage-your-dotfiles.html)

~~~
TopHatCroat
Just learned about GNU stow. It's a very useful tool.

