A Dotfile History (myme.no)
78 points by ingve on April 13, 2022 | 34 comments

The extra configuration around setting up git to track $HOME is interesting.

My setup is pretty lame compared to that: I just have a ~/.local/dotfiles/ git repo and symlink my various files from that repo.

The "git config status.showUntrackedFiles no" is a nice touch, I'll have to try that sometime.

NixOS home manager seems pretty cool, but it feels like a huge dependency. It also isn't available on my macOS client work machine, so sharing between multiple computers might be difficult (for me, at least).

I'll take this opportunity to shill my quick/rough setup guide for various tools like fzf, fd, and ripgrep on zsh on macOS. It's a gist, so I don't get any ad revenue: https://gist.github.com/aclarknexient/0ffcb98aa262c585c49d4b...

Interesting setup - thank you for sharing.

I've SSH'd into hundreds of machines between home and work, and never thought to check `$SSH_CONNECTION` to see whether I was remotely connected and act on that fact. So clever to have a different prompt when remoting in.
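A minimal sketch of the idea, assuming a bash prompt (the exact prompt strings are placeholders, not the parent's actual setup):

```shell
# Switch the prompt depending on whether this shell arrived over SSH.
# sshd sets $SSH_CONNECTION (and $SSH_TTY) for remote sessions, so a
# simple non-empty test is enough.
if [ -n "$SSH_CONNECTION" ]; then
    PS1='[ssh] \h:\w\$ '   # remote: make the hostname stand out
else
    PS1='\w\$ '            # local: keep it short
fi
```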

Your setup is more complicated than mine, and maybe even more than what I want, but I'm excited to take some inspiration from it and maybe incorporate some of it into my setup.

I really like hearing feedback like that! I'm glad you found something useful in the guide. If you have any criticism or changes, please let me know.

Got a similar repo: https://github.com/lloeki/dotfiles

A couple of differences though.

- there's a setup script to do the basic symlinks, automatically from the files in the "home" subdir by prepending the names with .

- then for shell stuff everything is sourced from either shell, bash, or zsh subdirs, all in modular files

- shell dir content is autoloaded based on +x

- there are polyfills for bash that make it more zsh-like (stuff like precmd)

- each shell module tests for tool presence and is a noop or sets up a fallback when the tool is not available, so I can clone this on any system and have it still work, gracefully degrading down to zero deps except the shell itself

- it also attempts to provide a uniform experience across bash versions and OSes (darwin, linux)

- prompt is minimal (workdir, dirname only, not the full path), increases with detail progressively and in a hierarchical order (root if root, host if ssh, workdir, vcs branch if in repo, vcs status as symbols if nonempty, venv name if virtualenv, "nix" if in nix shell)

I guess you cannot install Nix on that Mac because you are not admin?

In general you can install the Nix package manager and home-manager on macOS. There were some rough edges when macOS Catalina came out, but those should be resolved now. The way it works will probably require admin rights though (https://nixos.org/manual/nix/stable/installation/installing-...).

Right now the IT security dept requires that any software is approved by committee, repackaged and hosted internally. So it's a huge hurdle to overcome. It's just easier to use symlinks.

> My setup is pretty lame compared to that: I just have a ~/.local/dotfiles/ git repo and symlink my various files from that repo.

Same here; I did add a shell script to actually make the symlinks (the bulk of which is a single for loop because so many files go to ~/.$FILE), but a "real" tool to manage things always seemed like overkill. In fairness, that probably only works because my configs are mostly the same between machines.
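That single-loop script might look something like this (the function name and paths are assumptions, not the actual script):

```shell
# Symlink every file in a dotfiles repo to ~/.$FILE.
link_dotfiles() {
    repo=$1
    target=$2
    for f in "$repo"/*; do
        [ -e "$f" ] || continue   # skip when the glob matches nothing
        ln -sf "$f" "$target/.$(basename "$f")"
    done
}

# e.g. link_dotfiles "$HOME/.local/dotfiles" "$HOME"
```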

I basically do the same but use `stow` and a `~/.dotfiles` repo.


  #!/usr/bin/env bash

  # Stow the dirs given as arguments, or every dir in the repo by default.
  DEFAULT_DIRS=$(echo */)
  DIRS_TO_STOW=${@:-$DEFAULT_DIRS}

  for dir_name in $DIRS_TO_STOW; do
      echo "Linking $dir_name..."
      stow --restow --no-folding "$dir_name"
  done
I then run `./link.sh` to re-create symlinks for everything or `./link.sh foo bar` to re-link some in particular.

It seems like you could get value out of stow. It would manage the symlinking for you.

I do keep meaning to look into that. Thankfully I'm not jumping onto new computers every day, so I haven't had to automate the setup of a new instance of my dotfiles repo.

I spent some time evaluating all the available dotfile managers a few months ago and settled on chezmoi: https://www.chezmoi.io/

I was initially turned off when I first encountered it, because it seemed overkill for (what appeared to me to be) a simple task.

But the problem of managing a relatively small number of dotfiles across a relatively small number of machines, with small differences between them, and keeping them up to date proved to be MUCH more complex than I imagined. Copying things around by hand, and later distributing them via source control, got hairy very quickly.

I finally realized all those features were absolutely necessary to manage things sanely, and once I took some time to learn how to do things with chezmoi, I have never looked back.

In addition to its docs, I found the author's dotfiles repo on GitHub to be a great help in figuring out how to set things up: https://github.com/twpayne/dotfiles

(FWIW: I have never used NixOS/Home Manager, so I can't really speak to how it compares to chezmoi.)

Interesting that OP went from stow to Nix without passing through XDG; that seems a common path.

I love the idea of NixOS, but I've so far stuck with (being annoyed by things that disrespect) XDG_CONFIG_HOME, because it seems harder to version control Nix config, and harder/more annoying to learn to configure, since you're now writing Nix config for everything rather than config for the tool itself. I think I'd always have two docs pages open instead of one.

I shared your concern about letting Nix manage all configs, because it felt wrong to me to have an abstraction layer over all kinds of configs. I've since realized it has a big benefit: being able to use values across various configs, like high-DPI settings for specific machines with 4k monitors, for example.

Version controlling Nix expressions is trivial since it's all just text files, so I basically use just git for that. With flake lock files you get a pretty high guarantee that rebuilding your configs (or machine setup) will be reproducible.

I should add that I still have some imperative/manual steps though, mostly setting up Doom Emacs.

You don't need to write Nix configs for everything. With the following NixOS config, you can write raw configuration files and deploy them to /etc.

    # deploy ./config.txt to /etc/foo/config.txt
    environment.etc."foo/config.txt".source = ./config.txt;
You can do the same thing with Home Manager too.

However, many Nix users tend to write Nix configs instead because it's easier. For one thing, NixOS modules provide high-level options that are easier to work with than bare config files. The NixOS website has a good search UI for those options too [1]. Second, the Nix language is a nice language for writing configurations: it's JSON, but with functions, variables, and other niceties like indented multi-line strings and string interpolation. Writing everything in the Nix language lets you share configuration values across different software with ease.
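As a sketch of that last point (the font value is made up; `programs.alacritty.settings` and `programs.kitty.font` are Home Manager option trees, but verify against the current docs):

```nix
# One value reused across several program configs.
{ ... }:
let
  font = "JetBrains Mono";
in {
  programs.alacritty.settings.font.normal.family = font;
  programs.kitty.font.name = font;
}
```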

As for version control, I'm not sure what your issue is with Nix. Nix configuration files are just plain text files, so they can be version controlled like everything else. In fact, Nix flakes, the last topic of the article, are even designed to work best with version control.

[1]: https://search.nixos.org/options

Installing an entire operating system to manage your dotfiles... seems reasonable

Big disclaimer: I have only used home-manager on NixOS.

The main development target for the project seems to be NixOS, but you can install it on other Linux distributions or macOS as well. It just depends on the Nix package manager, which you need to install, not on NixOS.

The home-manager manual describes a standalone (not integrated with NixOS) setup with flakes here: https://nix-community.github.io/home-manager/index.html#ch-n...

Love this!

I should add that my Nix endeavors began way back when I started doing Haskell, because of the binary caching for Haskell libs. Nix has since gradually taken over pretty much everything I do related to software, so the OS switch was inevitable.

I'm also on NixOS, but I hate the Nix language (while loving the model; sadly Guix System still misses a few things I want, like LVM, LUKS, and ZFS support). I keep my dotfiles as org-mode, org-roam managed notes, from the NixOS config down to individual apps, tangling them into their right places.

Compared to using Nix it might be a bit less robust, but it's far simpler, and an issue doesn't stop everything else (e.g. a failed rebuild because of a single derivation error).

In the past I kept my dotfiles under a common root and {sym,hard}linked or copied them via a single wrapper script; such a setup was fragile but still effective.

Before that, many years ago, I just had the dotfiles I was interested in regularly synced to a dedicated tree via unison.

I briefly tried GNU RCS, then Mercurial, to track changes, but failed to see a real use case for personal use. I can see one in an Ops team, where multiple people make changes and anyone needs to see what was changed, who made it, when, and (hopefully) why, but for me, myself, and I at home... A classic backup suffices to protect against accidental mistakes, and adding a reason to backup AND RESTORE is a good thing :-)

I've also tried Salt and Ansible for local usage, but even though they're far simpler than other similar tools, their YAML+Python is still too much for my taste...

I'm actually going through some Ansible tutorials to help manage my dotfiles and make my Linux desktop setups/installs easier to reproduce. Already I see areas of overkill (mostly because my configs are pretty minimal and basic)... BUT... I continue with my approach of establishing a flow based on Ansible because I wish to learn Ansible in general. If not for that goal, then yeah, I probably would seek out another tool.

If you are interested in Ops/classic operations, Ansible is a very popular tool, and learning it without much investment essentially means learning a bunch of other equivalent Python/YAML tools, from Salt to MCollective, etc., so it's a good thing.

If not, IMVHO don't go much further than the initial curiosity. Ansible and similar tools have a common issue: they try to express complex things in a simple, no-code manner, and the real practical result is that the simplification is more complicated than the original complexity of the task.

We have seen a similar trend with config files: in the past any program needing a config had its own config DSL, parser, etc. Then some DSLs grew similar and generic enough to be adopted/copied from one program to another: the INI format was very common, XML was moderately popular from the mid '90s to the mid '00s, then JSON/YAML, etc. They all fail eventually, so some started using the program's own language for configs: Lisp software uses Lisp configs (see Emacs), Python software Python configs, C for C apps (see the suckless tools). After some time dedicated config languages emerge (see Dhall, for instance) but ultimately fail too.

Looking at the past: for simple, really simple, things a dead-simple key/value config file suffices; for the rest, the program's own language is the better choice, if it's usable for that purpose without issues (unlike the absurdity of suckless's C usage, which wastes time recompiling all the time).

That's why classic systems have only one language for everything (Smalltalk or Lisp, to cite the most famous), and that's why NixOS is more and more popular even if Nix is really indigestible, like Haskell...

I expect that in 5-8 years the now-popular JSON/YAML will be called crap, like happened to XML before them for the same reasons, and the fate of the tools using them will be the same: annoying legacies that drag on for decades, with nearly everyone complaining about them but unable to jump off their monsters due to the level of complication they've reached... If you have time, my suggestion is to invest it in Emacs. For me it was, and still is, a life changer, not just for config but for everything on my desktop: in just a few months it became my WM, MUA, feed reader, ...; a bit later I had ingested almost all my files into it, and I have yet to hit a show-stopping limit...

Thanks, and agreed. I am learning Ansible but won't be going too deep. If my job were in Ops/sysadmin/SRE then I would learn it more... but at that point it would be needed for the resume and not just to learn as a hobby anyway. Since my job is not in any of those realms, Ansible will only be a brief, minor experiment.

Also agreed that in a few years JSON and YAML will be considered the new "XML" and receive dislike from many.

> ...If you have time my suggestion is investing time in Emacs...

Thanks for the recommendation. I have tried Emacs in the past but it just felt like too much work to get productive in. I've been liking more and more the philosophy of each tool doing only one thing, but doing it well, and, to me, Emacs seems overwhelming. This is not a bad thing; it's just that, for the times I tried Emacs, it didn't click with my workflow. I'm not a super fan of vi/vim or nano either, but it is mildly handy that those editors are always available on any Linux machine that I jump into. As for my need for other functions like feed reading, reading email, etc., so far sticking to the practice of using each tool for only its main function seems to work out OK for me... though I have to consciously ensure I do not go overboard. So far, so good. Thanks again for your comments!

Personally I failed with Emacs years ago: I just tried it and decided it was a heavy and horrid mess. I re-tried it a few years ago when I saw it in action, and it was at that point that I realized its power.

Emacs is one-tool one-job, BUT the tool is just a function, and functions can be combined like unix commands in pipes, only in a far more powerful manner and with far less overhead. Try watching this: https://youtu.be/B6jfrrwR10k or this: https://youtu.be/dljNabciEGg . The real issue when starting without a friend or video tutorials is that the Emacs defaults are barebones and really old; the juicy things come only when you discover them.

The power of Emacs is not so much Emacs itself but the classic desktop paradigm: everything is integrated, so everything can be combined. You type some math in a mail? Why not have it rendered by LaTeX, since your system has LaTeX support installed, and why not solve the ODE you are typing, since you have a CAS? Why be tied to GUI cut & paste as the sole IPC? That's it. It's extremely powerful, but so different from today's environments that most users fail to comprehend it without seeing it in action...

Thank you, that's quite interesting! I'll take a look at those videos as well. Cheers!

The original git approach is still the best, which I've been using for years:

  # source this to clone the .files repo and setup the command

  #git clone --bare https://<user>@bitbucket.org/<user>/dotfiles $HOME/.files
  alias .f='git --git-dir=$HOME/.files --work-tree=$HOME'
  .f config --local status.showUntrackedFiles no
  .f config --local remote.origin.fetch '+refs/heads/*:refs/remotes/origin/*'
  #.f config --local user.name '<User Name>'
  #.f config --local user.email '<user@emaildomain>'
  .f branch -a
I maintain a branch for each os/distribution (macOS, Ubuntu, CentOS etc.)
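On a new machine, the companion step to the snippet above is to clone the bare repo and check the tree out into $HOME. A hedged sketch (the URL is a placeholder, and git will refuse the checkout if existing files would be overwritten, so move those aside first):

```shell
# Clone the bare repo and materialize the tracked files in the work tree.
restore_dotfiles() {
    repo_url=$1
    work_tree=$2
    git clone --bare "$repo_url" "$work_tree/.files" &&
    git --git-dir="$work_tree/.files" --work-tree="$work_tree" checkout
}

# e.g. restore_dotfiles 'https://bitbucket.org/<user>/dotfiles' "$HOME"
```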

I fell out of love with Nix (including flakes) just as fast as falling in love with it. The current approach for updating packages seems to have a massive human scalability problem (3.3k open PRs [1]), and several packages are very out of date. I still use it on my work mac because nix-darwin does bring some semblance of sanity to the platform, but I've switched to Silverblue on my personal machines.

Nix feels more like an incredible research project to me. Guix is a good example of what happens when those great ideas are put through some refinement (but I can't use it on any of my machines).

Silverblue has convinced me that some unbundling is needed. The concept of an immutable base is extremely powerful, and there's good wisdom in their strong discouragement of using ostree to install regular apps. Being able to rebase onto Kinoite, and back, was an eye-opening experience for me.

In my opinion, an ideal tool would:

* Ignore system packages ("immutable-land") altogether. This makes system updates trivial.

* Use additional layers ("layer-land") only where absolutely required: drivers or /etc edits for example.

* Rely on flatpak as much as possible for "mutable-land", it could reach into flatpak sandboxes and blindly sync those dotfiles.

* For "mutable-land" outside of packages (DE stuff usually, like the wallpaper), it would have deep knowledge of those.

There is definitely benefit to being able to edit your dotfiles in one place (which we see with Nix), but the happy path should be 100% transparent to the user.

[1]: https://github.com/NixOS/nixpkgs/pulls

What are people's views on having your dotfile repo public? I generally have mine public in case anything comes in handy for others but have been wondering lately whether there is a heightened risk of inadvertently committing something private/secret.

My journey of managing my dotfiles is still in its early days, but I have mine hosted in a private GitHub repo. Of course, even with a private repo, I assume that the moment something leaves my machine it is not as private nor as protected as I would like... So I try hard to ensure that nothing sensitive ever lives in those files. (I've behaved this paranoid way for decades now, even before using GitHub, etc.)

Public, mostly because it makes cloning my dotfiles to new computers/VMs much easier. But yeah, requires a bit of care to avoid committing secrets.

I have mine on my personal, private Gitlab.

Good read. There are so many ways to manage dotfiles...

Here are some features of the system[1] I made if anyone is interested:

* No symlinks

* Doesn't rely on git

* Simple CLI for diff, commit, push, pull etc

* Install files without the CLI: `curl https://dotfilehub.com/knoebber/vim >> ~/.vimrc`

* Can manage files outside of $HOME

* Web front end is minimal, JS free, and easily self hostable

[1]: https://dotfilehub.com

Good old RCS is pretty useful for versioning random files:

  ci -l .yourfile

And that's it. No setup necessary!

If you ever outgrow it, you can easily convert to more modern version control.

I can only recommend to take a look at https://yadm.io/

I've been using GNU stow for a while, and I love how portable it is. NixOS might indeed be the answer, but it might be a while before that ecosystem is mature enough for my laptop, server, etc.

GNU stow worked fine for me...
