Sd: My Script Directory (ianthehenry.com)
128 points by shadytrees 4 months ago | 21 comments

One issue I've had with personal shell scripts is that their dependencies are often implicit, or command-line arguments change over time, leading to (sometimes silent and critical) behavior changes. With Nix you can write shell scripts with a nix-shell shebang[0]: you specify the dependencies, and the rest of the script runs with those dependencies satisfied. For instance, this will execute GNU Hello regardless of whether it is already on the PATH or not:

  #! /usr/bin/env nix-shell
  #! nix-shell --pure -i bash -p "hello"
A more realistic example is [1], which generates [2]. If necessary, the Nixpkgs revision can be pinned to fix the dependencies in time. This approach also extends across languages; some of my scripts are written in Haskell, which is interpreted!

The upshot of all this is that you can write reproducible shell scripts as if you had every package in Nixpkgs available to you, and share it easily with others.
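To illustrate the pinning mentioned above, here is a hedged sketch (the release tarball URL is an example pin I've chosen, not one from the original comment); nix-shell concatenates multiple `#! nix-shell` shebang lines into one invocation:

    #! /usr/bin/env nix-shell
    #! nix-shell --pure -i bash -p hello
    #! nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/nixos-21.05.tar.gz

    # Runs the pinned GNU Hello regardless of what is installed locally.
    hello

Because the Nixpkgs revision is fixed by the `-I` line, everyone who runs this script gets the same `hello`, today or years from now.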

[0] https://nix.dev/tutorials/ad-hoc-developer-environments.html...

[1] https://edef.eu/~qyliss/nix/lib/gen.sh

[2] https://edef.eu/~qyliss/nix/lib/

I learned about this very recently[1] and it seems like an amazing use case that I wish got more play in the marketing of Nix. I have a lot of dumb "productivity" scripts that I wrote at work (Linux) that I can't run at home (macOS) because of differences in the behavior of `find` or `date` or whatever. I plan on porting them to use `nixpkgs.findutils` etc. and never having to worry about this again.

[1]: https://ianthehenry.com/posts/how-to-learn-nix/command-refer...

Neat! In your post you mention that it takes a few seconds until you get to the prompt (and the waiting time is similar for the nix-shell shebang scripts), but recently someone made nix-script[0] to cache these scripts, resulting in a runtime on the order of dozens of milliseconds.

Also, thanks for being dedicated enough to read the entire manual and give your impressions! If you spot typos, things that could be clearer, etc. please let us know on Freenode #nixos-dev or the Nixpkgs issue tracker!

Do you also plan on covering flakes? Flakes are currently gaining adoption in the Nix ecosystem and are pretty exciting: they resolve issues like ad-hoc project structuring, and since evaluation will now be pure, results can be cached aggressively and nix-shell will be much, much faster.

[0] https://bytes.zone/posts/nix-script/

nix-script looks awesome -- I saw that post on HN right after I started learning Nix, and it's sitting in my notes as something to go back to later in the series (I'm trying not to jump around too much until I'm done reading the official docs, to try not to taint myself with too much knowledge). I used to work somewhere that had built something like this specifically for OCaml scripts, so it's very cool to see a generic solution.

I have encountered flakes in passing -- in the as yet unpublished part 20 I built Nix from source to try to fix a bug I found, and encountered the concept briefly. This series is on my reading list once I've read the Nixpkgs manual and Nix Pills:


(Not sure if there's a better place to start.)

I plan on opening some doc PRs once I'm done reading through everything -- I found a few little things that I feel like I can fix myself. For higher level issues I'll try to open a discussion (e.g. I had a lot of trouble going through section 14.3 and 15.4 of the Nix manual). Anyway. Kind of off-topic for this thread, but here we are.

Oh my goodness, your blog series looks incredible. I'm in sort of the same boat as you--I've been barely treading water with NixOS for the last several years and have recently resolved to learn how it works well enough to get all the benefits. Thanks for putting this together; looking forward to diving into it!

Most instances of the organising principle I find here are amazing. I think "I would never have done it that way," followed almost immediately by "but damn: it's good."

For comparison: I have ~/bin, and I don't distinguish between compiled and scripted personal commands.

I just have ~/bin/ and ~/bin/scripts/ and both are in my $PATH so I get autocompletion automatically.
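A minimal sketch of that setup, assuming a POSIX shell startup file (the exact directory names are from the comment above; the rest is boilerplate):

```shell
# Put both directories on PATH; commands in either are then runnable
# by name, and shell completion picks them up automatically.
mkdir -p "$HOME/bin" "$HOME/bin/scripts"
export PATH="$HOME/bin:$HOME/bin/scripts:$PATH"
```

Dropping these lines into ~/.profile (or your shell's equivalent) is enough; no per-script registration is needed.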

I usually don't create "--help" options for personal scripts. Most of mine are designed to be extremely specific and so damn obvious to use that they don't need one; the --help output of other commands is too complicated anyway.

Some examples of my scripts in ~/bin/scripts/:

    help-script(){ "${EDITOR:-vi}" "$(which "$1")"; }
I don't actually use that but the point stands -- If the script is simple enough, you don't need a --help; comments in the source work just as well.

zsh has `= expansion`:

> If a word begins with an unquoted `=' and the EQUALS option is set, the remainder of the word is taken as the name of a command. If a command exists by that name, the word is replaced by the full pathname of the command.

To edit the source of `myscript` I can then use

  vim =myscript

That's really neat! I'd never heard of that. I think you just saved my future self many many keystrokes.

Asking for a friend: What does your "jabra-stop-changing-volume-goddamnit" script look like?

+1. Really love the organization of this

- having it in PATH is important.

- most of us are programmers, spend a few minutes writing scripts!

  c@host:~$ ls ~/bin | wc -l
  97

I used to have very full ~/bin and ~/$(hostname) directories. In the end I pared them back and started bundling things together in one binary.

The end result is very similar to this approach, I run "sysbox blah", or "sysbox help", and use integrated subcommands.
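This isn't sysbox's actual code (that's a separate single binary), but the "one entry point, many subcommands" pattern it describes can be sketched in shell; `mybox` and its subcommands are names I've invented for illustration:

```shell
# Hypothetical dispatcher: one command, subcommands selected by the
# first argument, with a built-in help/usage fallback.
mybox() {
  cmd="$1"
  [ "$#" -gt 0 ] && shift
  case "$cmd" in
    greet) echo "hello ${1:-world}" ;;
    help|"") echo "usage: mybox {greet|help}" ;;
    *) echo "mybox: unknown subcommand '$cmd'" >&2; return 1 ;;
  esac
}

mybox greet          # -> hello world
mybox greet henry    # -> hello henry
```

Adding a command is just another case branch, and deployment stays a single file.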

Very helpful and makes deployment easy by having only a single binary:


Not bash/shell, but similar and useful idea to experiment with.

My setup lets me create a shell environment specific to the project associated with the working directory. For example, I can invoke

    edit-env -ds dump-data.sh
and it will open a script for editing which will only be available to run in the project's directory, callable as the bash function `dump-data`. I've got lots of little things like that now which I wouldn't necessarily want stored in the project's version control but that I find useful.
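I don't know the actual edit-env implementation, so here's a guessed sketch of the underlying idea: project-scoped functions stored outside the repo, keyed by the absolute project path (the ~/.project-envs layout, the `load_project_env` name, and the underscore in `dump_data` are all my invention, the last for POSIX-sh portability):

```shell
# Invented sketch: per-project scripts live under $ENVROOT, mirrored
# by absolute project path, and are sourced into the current shell.
ENVROOT="${ENVROOT:-$HOME/.project-envs}"

load_project_env() {
  dir="$ENVROOT$PWD"            # e.g. ~/.project-envs/home/me/src/app
  [ -d "$dir" ] || return 0
  for f in "$dir"/*.sh; do
    [ -e "$f" ] && . "$f"       # each file defines functions like dump_data
  done
}
```

Hooking `load_project_env` into cd (or direnv) makes the functions appear only inside that project's directory.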

While the organization seems nice, the organization itself may carry a cognitive overhead. I often find myself abandoning an organization scheme after spending time perfecting it. Lately I mostly use aliases to manage my personal scripts and commands. What the aliases do may change from time to time, but the aliases themselves can stabilize. Those that fail to stabilize, or that can't fit in the top of my head, probably aren't worth keeping anyway; I delete them every once in a while.

'personal monorepo' is what I like to consider this kind of organization. I have templated starter projects, direnv configs, etc. all in a hierarchy of folders in my home.

This is really similar to something I've recently created: https://github.com/simonmeulenbeek/Eezy , although in my project's case it's scoped to the specific PWD you're in.

I really like using the folder structure to have 'subcommands' (i.e. 'sd blog publish' ). Very neat!

I made a ~/bin folder to do something similar, so I don't have to mess with any of the PATH stuff, and can just call it like anything else found in a /bin dir.

EDIT: Oh, and autocomplete works with it by default. Just 'mkdir ~/bin' and you're good to go.

They cover that in the article, actually. They had hundreds of files in ~/bin and had forgotten the name of one of them. sd was created to handle that case.

If you want an actual project for this I use this one: https://github.com/knqyf263/pet
