No script is too simple (nicolasbouliane.com)
268 points by nicbou 63 days ago | hide | past | favorite | 155 comments

A follow-up recommendation I give (which I suspect might be unpopular with many around here) is to use Python for all but the most trivial one-liner scripts, instead of shell.

Since 3.5 added `subprocess.run` (https://docs.python.org/3/library/subprocess.html#subprocess...) it's really easy to write CLI-style scripts in Python.

In my experience most engineers don't have deep fluency with Unix tools, so as soon as you start doing things like `if` branches in shell, it gets hard for many to follow.

The equivalent Python for a script is seldom harder to understand, and as soon as you start doing any nontrivial logic it is (in my experience) always easier to understand.

For example:

    subprocess.run("exit 1", shell=True, check=True)
    Traceback (most recent call last):
      ...
    subprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1.
Combine this with `docopt` and you can very quickly and easily write helptext/arg parsing wrappers for your scripts; e.g.

    """Create a backup from a Google Cloud SQL instance, storing the backup in a
    Storage bucket.

    Usage: db_backup.py INSTANCE
    """
    import docopt

    if __name__ == '__main__':
        args = docopt.docopt(__doc__)
Which to my eyes is much easier to grok than the equivalent bash for providing help text and requiring args.

There's an argument to be made that "shell is more universal", but my claim here is that this is actually false, and simple Python is going to be more widely-understood and less error-prone these days.

As someone who hasn't used Python in ages, and finds it quite unreadable, I would be saddened and dismayed if instead of shell (which is the lingua franca) a project used Python everywhere.

The exception being if it's a Python app in the first place, then it's fine since it can be assumed you need to be familiar with Python to hack on it anyway. For example I write a lot of Ruby scripts to include with Sinatra and Rails apps.

For a general purpose approach however, everybody should learn basic shell. It's not that hard and it is universal.

Ever tried to parse XML or JSON in a Linux shell?

It's not a matter of familiarity with linux shell. It's a matter of wasting time debugging/implementing stuff in shell that is trivial to implement in any modern scripting language.

Haven't parsed XML in quite a number of years but I parse JSON to extract values from a curl command all the time with jq and it's really not that bad. If there's anything more than simple extraction required then I agree it's not good for the shell. For that I turn to the primary language of the project preferably (which these days is Elixir and sometimes Ruby for me).

But even with JSON and (X|HT)ML, you'd be amazed how often a simple grep can snag what you want, and regular expressions are mostly universal, so nearly everybody can read them.

We (a 2-person team doing an embedded Qt app on Linux) did it the exact way you're arguing for: starting with shell, struggling with it for several months every time something went wrong or we had to change it, and finally giving up and rewriting it in Python.

Python wasn't our main language (the app was in C++ and QML, which is basically JavaScript). But we both knew Python, and it's the easiest language for this sort of thing that we both knew and that comes with the system.

My main conclusion is: I'm starting with Python the next time.

Life's too short to check for the number of arguments in each function.

I think there is a big difference between this and the parent. If you are working on projects where the "primary language" is Ruby or Python or Elixir, then sure, use that, but if your project's primary language is C++, like most embedded applications, you do NOT want to use that.

Any complex or cross-platform C++ projects will need a scripting language in addition to shell and a build system.

Life's too short to check for the number of arguments in each function.

Ha! The Oil shell does that for you :)


(It's our upgrade path from bash)

I do JSON pretty frequently with jq and idiomatic python[0] can't even approach its ease of use.

[0]: I'm sure there's a jq style library for python somewhere but it's definitely not the norm.
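For simple extractions the stdlib isn't too far behind, though; a quick sketch with made-up sample data standing in for real API output:

```python
import json

# Made-up sample payload, standing in for real command/API output
doc = '{"users": [{"name": "alice", "shell": "/bin/bash"}, {"name": "bob", "shell": "/bin/zsh"}]}'

# jq's `.users[].name` becomes a list comprehension over the parsed dict
names = [u["name"] for u in json.loads(doc)["users"]]
print(names)  # ['alice', 'bob']
```

For deeply nested queries, jq's path syntax is still terser.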

JMESPath works pretty well: https://github.com/jmespath/jmespath.py

There are a couple of bindings from Python to jq; I can't remember which I've used, but it makes it super simple to run jq queries from Python.

Shameless plug: my project yq (https://github.com/kislyuk/yq) (pip install yq) includes a utility, xq, which transcodes XML to JSON using the Goessner JsonPath transformation (https://www.xml.com/pub/a/2006/05/31/converting-between-xml-...) so you can use jq to query XML.

If you're trying to implement stuff in shell you're doing it wrong. It's a glue language.

And yes, parsing XML or JSON in the "linux shell" is very easy. For JSON use jq; for XML use xml2 or xmlstarlet.

> json in linux shell?

Yes!

Fortunately for me, 10 whole lines of sed, awk, some trimming, and a for loop turned my JSON into a CSV to pass back to a legacy system. No curl or JSON modules necessary. That being said, if I needed anything more complex, Python would be my default.

I can understand finding the Python unreadable but where I disagree is that "everybody should learn basic shell" -- no it's not that hard, yes it's universal, but there's more to finding a sustainable solution than that. Try to pivot that shell into something slightly more sophisticated and you run into muddy waters.

I really like Ruby as a replacement shell language because its syntax for shelling out is very succinct and elegant. Python is a bit more verbose but a lot more explicit. Either one is a step function better than shell. You get a real programming language.

Let's extend your analogy to a logical conclusion. Should you write something in a shell because it's "not that hard and universal" or should you insist on using a programming language that lends itself to writing maintainably? If not, we should have no problems with PHP and COBOL, no? But we do.

Use the right tool for the right job. If you're not sure what that is, don't hesitate to pull out a glue language. Python, Ruby, JS -- whatever you need to get the job done. Your shell should be just that -- a shell, not the core.

Python is far more of a lingua franca than shell is. Learning basic Python is a much better use of your time than learning basic shell - it's easier and more widely useful.

Python as new shell lang when?

Maybe shell makes sense to you, but I recently wrote my first real shell script and it was nothing but pain. Brackets don't mean what you think they mean. `if` sometimes doesn't work, and I am not sure why. Sometimes I am supposed to quote variables, but not always?

And after all that, it broke once it moved to an AIX machine, since it was not POSIX-compliant.

Maybe it's obvious to you, but when I cannot even figure out if an if statement will work, something is horribly wrong.

I don't even use python and I can just about guarantee it's easier for someone who knows neither.

I'm +1 about the use of Python, but I'd say ONLY when complexity gets higher than simply calling commands (e.g. cat/sed/etc). For basic commands, I'd say Makefile is the way to go, or scripts in a package.json. I find Python becomes a must when you want to reuse commands and pass data around, or support the same kind of commands locally and on remote hosts.
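To illustrate the "reuse commands and pass data around" case, here's a minimal sketch (the `sh` helper name is made up):

```python
import subprocess

def sh(cmd):
    # Hypothetical helper: run a command, fail loudly on errors,
    # and return its stdout as text (Python 3.7+)
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Command output becomes ordinary Python data you can reuse
words = sh(["echo", "one two"]).split()
print(words)  # ['one', 'two']
```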

Fabric is the best for this: you put a `fabfile.py` (name inspired by Makefile) in the project root and add @task commands in there. Specifically the `fab-classic` fork of the Fabric project https://github.com/ploxiln/fab-classic#usage-introduction (the mainline Fabric changed its API significantly; see the `invoke` package).

Fabric can run local or remote commands (via ssh). Each task is self-documenting, and the best part is that it's super easy to read... almost like literate programming (as opposed to more magical tools like ansible).

Why choose between Makefiles and Python when you can have both? [0]

[0] https://snakemake.readthedocs.io/en/stable/

for the record...

if __name__ == '__main__': args = docopt.docopt(__doc__)

means nowt to me.

In fact I don't really understand your post and I've written a lot of python.

No language has a simpler api to executing a cli command than bash itself. By definition.

The conditional expression

  __name__ == '__main__'
simply tests whether this Python file is being executed directly, rather than imported as a module.

It’s one of the ubiquitous Python idioms that doesn’t actually follow Python’s zen of “readability”.
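A minimal sketch of the idiom in context (the file name is hypothetical, say demo.py):

```python
def main():
    return "ran as a script"

# True only when this file is executed directly (python demo.py),
# not when it is imported as a module
if __name__ == '__main__':
    print(main())
```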

Agreed on this point, it's the `public static void main` of Python.

Perhaps I shouldn't have tried to make two points in one post; I wouldn't advocate using docopt or a main function for a simple script. I was more making the case for how easy it is to add proper parameter parsing when you need it; that is easier to remember than what you'd end up writing in bash, which is something like:

    while (( "$#" )); do
      case "$1" in
        -f|--flag) # flag with a required argument
          if [ -n "$2" ] && [ ${2:0:1} != "-" ]; then
            FLAG=$2
            shift 2
          else
            echo "Error: Argument for $1 is missing" >&2
            exit 1
          fi
          ;;
        -*|--*=) # unsupported flags
          echo "Error: Unsupported flag $1" >&2
          exit 1
          ;;
        *) # preserve positional arguments
          PARAMS="$PARAMS $1"
          shift
          ;;
      esac
    done
    # set positional arguments in their proper place
    eval set -- "$PARAMS"
I think you've got to write a lot of bash before you can remember how to write `while (( "$#" )); do` off the top of your head; the double-brackets and [ vs ( are particularly error-prone pieces of syntax.
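For comparison, the stdlib-only route: a hedged sketch of the earlier db_backup.py example using argparse instead of docopt (the instance name is made up):

```python
import argparse

parser = argparse.ArgumentParser(
    description="Create a backup from a Google Cloud SQL instance.")
parser.add_argument("instance", help="name of the instance to back up")

# parse_args() normally reads sys.argv; an explicit list is passed here for illustration
args = parser.parse_args(["my-instance"])
print(args.instance)  # my-instance
```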

Idiomatic shell code would be:

  while getopts "ab:" OPTC; do
    case "$OPTC" in
      a) FLAG_A=1 ;;
      b) OPT_B="$OPTARG" ;;
      # shell already printed diagnostic to stderr
      \?) exit 1 ;;
    esac
  done
  shift $((OPTIND - 1))
Yes, I realize it's more difficult to support long options (though not that difficult), but the best tool for the job will rarely check all the boxes. Anyhow, the arguments for long options are weakest in the case of simple shell scripts. (Above code doesn't handle mixed arguments either but GNU-style permuted arguments are evil. But unlike the Bash example the above code does support bundling, which is something I and many Unix users would expect to always be supported.)

Also, I realize there's a ton of Bash code that looks exactly like you wrote. But that's on Google--in promoting Bash to the exclusion of learning good shell programming style they've created an army of Bash zombies who try to write Bash code like they would Python or JavaScript code, with predictable results.

But the point of the article is not to make complicated scripts readable again, but to put even simpler commands (maybe even one long mysqldump command) into very short scripts.

I agree that a CLI framework is often easier to use than bash, but it is also a dependency. I think everyone should use what they're familiar with.

> No language has a simpler api to executing a cli command than bash itself. By definition.

Things that are true "by definition" are not usually useful.

"No language has a simpler api [...] [b]y definition" only if by "executing a cli command" we mean literally interpreting bash (or passing the string through to a bash process). But that's never the terminal[0] goal.

The useful question is whether, for the task you might want to achieve, for which you might usually reach for the shell, is there in fact a simpler way.

It may very well be that the answer is "no", but support for that is not "by definition".

[0]: Edited to add: ugh. Believe it or not, this was not intended.

look above it to the line that says:

  Usage: db_backup.py INSTANCE
then just go with it.

Every project I work on uses the `invoke`[1] Python package to create a project CLI. It's super powerful, comes with batteries included, and allows for some better code sharing and compartmentalization than a bunch of bash scripts.

[1] http://www.pyinvoke.org/

Thanks for the recommendation, I used Fabric for local scripting ages ago and haven’t revisited in some time.

Is it mostly the Make-style task decorators for managing the project task list that you consider a win vs. native subprocess.run? Are there other features that you find valuable too?

(Back when the original fabric was written subprocess was much less friendly to work with, of course).

I remember the memfault blog post that introduced me to this package. Great find, and I’ve started doing the same on all my embedded dev projects. Thanks!

At my last company, I used Python for most of our scripts. Bash scripts were too tedious to write, harden and maintain.

How does this handle pipes and such? What's the nicest python equivalent to `grep ^$(whoami) /etc/passwd | awk -F : '{print $NF}'`?

Here are three options:

    # Option 1 (no pipes)
    import os
    for line in open("/etc/passwd").read().splitlines():
        fields = line.split(":")
        if fields[0] == os.getlogin():
            print(fields[-1])

    # Option 2 (Cheating, but not really. For most problems worth solving, there exists a library to do the work with little code.)
    import os, pwd
    print(pwd.getpwnam(os.getlogin()).pw_shell)

    # Option 3 (pipes; not sure if the utf-8 thing can be done nicer somehow)
    import subprocess
    username = subprocess.check_output("whoami", encoding="utf-8").rstrip()
    p1 = subprocess.Popen(["grep", "^" + username, "/etc/passwd"], stdout=subprocess.PIPE)
    p2 = subprocess.Popen(["awk", "-F", ":", "{print $NF}"], stdin=p1.stdout, stdout=subprocess.PIPE, encoding="utf-8")
    p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits early
    print(p2.communicate()[0].rstrip())

Thanks, that does answer my question quite thoroughly :) I intended the question not as "how would I get my shell" but as "how would I pipe commands together", which now that I think about it is perhaps the real issue: I think about solving many problems from the perspective of "how would I do this in /bin/sh?", but perhaps the real answer is that if you're doing it in Python (or whatever) then you should be writing a solution that's idiomatic in that language. Or if you like, perhaps one doesn't need the "standard library" of coreutils if one has the Python standard library, which means that many of the things I'd miss in Python are hard because they aren't the right solution there.

Quickest and dirtiest way I've found, but yes, not idiomatic Python:

    subprocess.check_output("grep ^$(whoami) /etc/passwd | awk -F : '{print $NF}'", shell=True, encoding="utf-8").strip()
I didn't test that particular line, but in general this is how I execute shell pipelines in Python.

:) I agree that that works, of course, but if we have to use shell=True and just shove the whole thing like that then it loses a rather lot of the "Python" appeal. Still valuable if you need to feed the output into Python, of course.

The answer to "how do I pipe shell commands in Python" is that you can do it (e.g., using plumbum, fabric, pexpect, subprocess), but you shouldn't in most cases.

In bash, you have to use commands for most things. In Python, you don't.

For example, `curl ... | jq ...` shell pipeline would be converted into requests/json API in Python.
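A sketch of that conversion using only the stdlib (the endpoint and fields are made up, and the network call is commented out in favor of a canned response):

```python
import json
from urllib.request import urlopen  # stdlib stand-in for curl

# doc = json.load(urlopen("https://api.example.com/user"))  # hypothetical endpoint
doc = json.loads('{"login": "alice", "id": 1}')  # canned response for illustration

print(doc["login"])  # alice
```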

Nothing is quite as succinct as shell. The python equivalent to this would probably be at least 10 lines of code.

Everyone tends to rag on Perl, but it's excellent in this space.

Edit: The equivalent of above would be:

  my $shell=(getpwuid($>))[8];
  print "$shell\n";
Though it does have simple syntax for pipes as well.

https://news.ycombinator.com/item?id=24558719 does it in 5, which is, I think, proof that you're both right. (Of course, perhaps shell is cheating, since I chained 3 commands into that one-liner, and 2 of those are really their own languages with further subcommands.)


I now agree.

My last gig, I stumbled onto shebangs, which let you invoke scripts written in any scripting language directly from the command line.


Major face-palm-slap. I'm certain I've read about shebangs many times, but I never realized they're general-purpose.

I've made a single file python library that implements an idiomatic shell API in python: https://github.com/tarruda/python-ush

One nice thing about it is the overloading of pipe operator to allow easy creation of pipelines and/or redirection (plus it is cross platform, so you don't depend on the system having a certain shell).
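The overloading trick itself is plain Python; a toy sketch of the idea (this is not the actual python-ush API):

```python
class Cmd:
    """Toy sketch: overload | so that Cmd(...) | Cmd(...) builds a pipeline."""
    def __init__(self, *argv):
        self.stages = [argv]

    def __or__(self, other):
        pipeline = Cmd(*other.stages[0])
        pipeline.stages = self.stages + other.stages
        return pipeline

pipeline = Cmd("grep", "foo") | Cmd("wc", "-l")
print(pipeline.stages)  # [('grep', 'foo'), ('wc', '-l')]
```

A real implementation would then feed the stages to subprocess.Popen with chained pipes.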

>In my experience most engineers don't have deep fluency with Unix tools, so as soon as you start doing things like `if` branches in shell, it gets hard for many to follow.

Okay, but what about software engineers?

Shell is definitely not universal on Windows machines. If you work with mechanical engineers, at least half of them will be using Windows. That being said, Python is gross on Windows, too.

subprocess is far too complicated for casual levels of scripting. We have a handful of Python devs here, and everyone avoids it for scripts, because it's just so poorly designed for this job.

Shell is simple and straightforward, and can take you quite far for most stuff. Only if you start using complex data structures does it make more sense to use a mature language like Python.

Ruby is a much better candidate for this. Nicer syntax and a vastly better standard library.

This kind of conversation is never productive in a team setting. The strongest criterion is "what would the majority of people on the team prefer to use?" That will yield the highest productivity, regardless of whatever syntactic sugar a language has.

Nicer syntax? Vastly better standard library? Those seem like subjective statements to me. Can you provide some examples to support those claims?

I dunno about a better standard library but I do think Ruby has nicer syntax. You can use back-ticks to run system commands, like:

    listing = `ls -oh`
    puts listing
    puts $?.to_i # Prints status of last system command.

That might just come down to the usual differences between Python and Ruby. A Python programmer would likely appreciate an explicit call to `subprocess.run`.

Some equivalent Python 3.7+ would be

  import subprocess
  listing = subprocess.run(("ls", "-oh"), capture_output=True, encoding="utf-8").stdout
As an experienced Ruby programmer I imagine the code you provided looks like a no-brainer. To a Ruby neophyte, the backticks, $? sigil, and .to_i method don't strike me as intuitive (but maybe they would be to someone else). We may just have to disagree about the nicety of syntax.

As cle mentioned[1], the strongest criterion is

> what would the majority of people on the team prefer to use?

and I'd agree that ultimately that's what makes the most sense.

[1]: https://news.ycombinator.com/item?id=24558989

The back-ticks and $? are shell standards so I would expect those to be familiar to shell scripters. I agree that if one knows nothing about any of the languages then neither is much better.

At this point, one can also argue for using Node. It is probably faster, and it doesn't suffer from the 2.7 vs 3.x problem.

just put `#!/usr/local/bin/node` on the first line instead of a Python or shell shebang.


Isn't Node.js breaking things every few months with the newest major update? That's even worse than Python 3, which at least is now mostly established.

I've been moving most of my projects over to Makefiles, instead of npm scripts [1]. Unless I am missing something obvious, it's nicer to run:

    make          vs. npm start
    make thing    vs. npm run thing

But also, more importantly, the Makefile itself feels much cleaner, rather than stuffing scripts into package.json files. The thing I'm confused about: why do npm scripts seem to dominate? I don't see many people using Makefiles.

[1] https://github.com/umaar/video-everyday/blob/master/Makefile

I had your point of view until I started working in corporate environments where Windows reigns supreme, and then I understood that the reason why no one wrote Makefiles or shell scripts is simply that no one had access to those runtime environments. JavaScript, however, truly is everywhere these days.

Sure, you can run Make and even shells on Windows now, but that doesn't mean you have the power or even capability to do so in a corporate environment where IT dictates what you can install and only reluctantly provides you with a machine in the first place.

I actually find using Make on Windows to be really simple. I usually drop a prebuilt make.exe into the project directory since it's just 244kb (http://www.equation.com/servlet/equation.cmd?fa=make, https://github.com/MarkTiedemann/make-for-windows).

If the Makefile needs to be cross-platform, that's easy to do, too:


    ifeq ($(OS),Windows_NT)
        # Windows commands here
    else
        # *nix commands here
    endif

Life is too short to work in these kinds of environments, if you can avoid it.

Makefiles are much less cross-platform and much less clear about their dependencies. A hand-written Makefile probably won't work on FreeBSD unless you've deliberately tested it there, much less Windows, and is probably relying on a bunch of GNU-only flags in the utilities it calls. Some of them are probably from a newer version of that utility than what ships on OSX. Etc.

Similar notion: Never, ever have more than a simple shell script invocation in your CI configuration.

So many projects I've worked on have had nontrivial build-logic or implicit assumptions in their CI configuration which leads to distrust of local reproducibility. Eliminate as many variables and steps to reproducibility as possible. At the very least, create a `ci.sh` script in your project and make all your CI invocations go through that.

If you avoid putting "raw" npm or whatever invocations in your CI config in the first place you won't be tempted to add "just one more option" to your CI config and forget to add that option when running locally.

Such a script also becomes a de-facto API for how to use your project. This can help immensely if, all of a sudden, you switch programming languages (or major versions), environment assumptions, etc.

Yes, definitely; many lines of shell in YAML (or Dockerfiles) is one of my pet peeves. It's always better to invoke a shell script. Then you can use ShellCheck on it, your editor will syntax-highlight it, you can parse it with Oil (osh -n) [1], etc.

And yes there shouldn't be too much of a difference between what developers do locally and what the CI does.

It's sort of an anti-pattern if you have a bunch of automation that only the CI can run. The developer should be able to run it on their machine too, without CI, and without YAML ...

[1] http://www.oilshell.org/why.html#what-you-can-use-right-now

I try to have CI only run make targets. Helps to reproduce things really easily.

Often when you use the CI config instead of shell scripts it gives you deeper integration with the CI provider, like better error messages and such. I wish there was a way to bridge that gap.

I used to run gitlab-runner locally to debug the CI pipeline but my team now uses a lot of gitlab features to make the pipelines more DRY and this means CI code is scattered over projects.

The easiest way I found to debug it is to put `set -x` into the code to print the commands with substituted variables.


OTOH, the monster I'm currently implementing as a set of ci.sh-like scripts is already way too big. It all started as a couple `kubectl apply`, but now I wish it was an ansible playbook.

Some other people on the same project abused groovy to the fullest to create some fancypants Jenkins pipeline. Local reproducibility? Zero.

This is the ideal. In practice the CI environment may need optimizations that don't make sense locally without a lot more work.

> have had nontrivial build-logic or implicit assumptions in their CI configuration

Wait, what? If your CI isn't testing the builds your bugs are (possibly) in then what's the point of CI?

It's a great habit.

I use that same concept extensively for all my projects, whatever the language/stack, though I prefer to use Makefiles since I feel like it creates a more cohesive result. Very easy to chain, depend and read in the CI too.

+1 for make. It's an amazing and underrated tool.

Agreed. A great many things can be expressed as a graph of sub-tasks with preconditions and dependencies that create a partial ordering, and make is pretty darn good at grinding through such graphs. You can go far with just the basics plus a couple of extra tricks like "phony" targets.

It took me forever to come around on make, but I absolutely love it for things like this now.

Any recommendations on learning to use make more effectively outside of just reading the docs? I feel like this is a glaring gap in my knowledge as I really only know the basics.

I'd say GNU make (don't use any other make) can be learned from the documentation; it's readable. Also, it's super simple to try things yourself (unlike, say, k8s): just use it in your project and consult the docs as needed. I am very fond of make, to the point that I built a video transcoding solution based on it and ffmpeg.

Related trick: GNU tools tend to come with Info pages - it's to man pages what a book is to a listicle. These Info pages can get you up to speed quickly, while giving you a deeper understanding of the tool at the same time.

Read a lot of other people's Makefiles. I tend to shy away from huge/overly complex ones, but there's a lot of good examples out there where I got most of my ideas/habits/patterns from.

Hmm, unfortunately I don't. I didn't really bookmark anything when I was learning. IIRC, I spent most of my time in the docs though.

Yes, anything I do public facing is likely to use make/docker. It really cuts down on the mental energy I need to expend thinking about differing environments/bootstrapping.

Can you give an example of how you use make for simple things like this? I want to get into it for non-C++ projects, and it's a little daunting to make the leap.

Here is a starting point for an AWS SAM Golang project. It even comes with a `make help` command.



Here's one of mine for a project I'm working on today:


I love this! Used to do it, too, but currently we're stuck on Windows. What do you do then? (IT blocks WSL, so not that.)

GNU make is available on Windows. If you run it via git-bash then it should mostly "just work" (though there will be some idiosyncrasies to work out). I tested this project and it all runs on Linux/OSX/Windows: https://github.com/J-Swift/cod-stats

Not an easy sell in my experience; Windows people think it's strange to install an archaic C build system as an add-on. And some people even use git in cmd. It's just not easy to get people on board with it.

When used as I would recommend, GNU Make is not a build system, it's a task runner: a language-agnostic gulp, or npm package.json scripts runner. You can run anything you want (even PowerShell!) in the targets; it's just a standard way to package tasks together and declare dependencies between them.

I know and support the approach, but I have to keep "selling" it. Make documents itself as a C compilation system; I think it's actually a better task runner! Python's tox and invoke are nice, but much more noisy and language-specific.

One related thing that I began to do a while ago is to always have a `<project>.org` file going along any of my projects, which is a free-form Org-Mode journal of all the things and ideas related to that project, including of course esoteric commands for certain things (which sometimes can be hard to find in my bash history). With Org-Mode it's easy to fluidly change the structure of this journal, which makes it a very powerful tool.

When I was working on a NASA project in university, one of my colleagues would always preach the virtues of org mode; as a person whose brain now thinks in vim, is it worth learning emacs just for this tool?

I was where you are about a month ago. I ended up using Doom[0] as a bootstrapped config and haven't looked back.

I love org-mode. It's the killer feature of Emacs in my mind. I also feel like (note, this is entirely anecdotal and not based on hard facts) Emacs has better LSP integration than Vim. I mainly use Go, so it could also be that gopls has become more stable than it was a year ago when I was first trying to get Vim working with it.

[0]: https://github.com/hlissner/doom-emacs

The vim vs. emacs thing is extremely outdated IMHO. There is evil-mode (Extensible vi layer) in Emacs, which is a vim emulator. You can have vim, or emacs, or both, or none, all being equally viable. In fact, there is Spacemacs, which is an emacs distro built around evil-mode and comes with a whole bunch of packages out of the box.


This is not to preach of emacs or vim, really. I'm just saying vim and emacs are by no means mutually exclusive. I personally never got used to vim stuff, so I use Spacemacs with emacs keybindings, and my custom elisp scripts. Emacs really is more of a programming environment/mini operating system than an editor. Enjoy!

I couldn't agree more. Emacs is a text-mode Lisp VM, whereas Vi is a modal editing UI. They are in different categories.

Emacs has a great Vi implementation, evil-mode, which Spacemacs builds on. Neovim is also a good Vi implementation. Vim, I think, is a bit outdated. For example, VimL scripting is full of quirks.

Not op, but yes org-mode is worth it.

As a vim user, my recommendation is to skip spacemacs, and go for straight emacs (with evil mode if you like modal editing, i do)

I liked Uncle Dave's Tutorial series on youtube: https://www.youtube.com/channel/UCDEtZ7AKmwS0_GNJog01D2g/vid...

Start with his emacs tutorial, it is mostly learn emacs and org mode as you learn to setup emacs.

Why skip spacemacs? I've been in this mentality since the early 2010s, and I maintained my own few-thousand-line elisp config for ~15 years. Last year I installed Spacemacs, and TBH, although not everything is exactly how I want it, it's refreshing not to maintain my own OS to be able to code. Spacemacs still makes some things harder, but overall I prefer it to building everything yourself from the ground up. Anyway, just my opinion. You can always customize Spacemacs too; of course it's going to be more complex than vanilla emacs.

Some of the first advice I was given when starting to learn emacs, was learn vanilla first then try doom or spacemacs.

Hearing the love so many have for spacemacs, I started there first instead.

Quite early on, I ran into problems. Every time I reached out on various forums I was told either: you're doing it wrong, that's a non-issue, RTFM (which isn't helpful when you don't know what you're looking for), or my favorite you have an XY problem (I didn't). So I'd go back to vim and put emacs on the back burner for a while longer, waiting for spacemacs to mature.

After the third attempt at spacemacs, I gave up and started looking for a good emacs tutorial.

Again I ran into some issues, but I found the regular emacs people very welcoming and helpful. Pretty soon I was able to diagnose my own issues, and figure out what settings I needed to change to meet my needs.

In the end, that early advice was true. You need to have some understanding for emacs to help diagnose spacemacs issues.

Will I give spacemacs another shot? Maybe one day, probably around the time they update their main release. It's been what, 2.5 years since they updated the main branch?

Not OP, but I recommend skipping Spacemacs so that people gain experience with configuration and elisp in Emacs.

Having some base level understanding makes understanding other Emacs configurations (like Spacemacs, Doom, Prelude, etc) much easier in my opinion.

I'm of two minds about it. I get the value it provides to people new to Emacs. But once you reach the point in which you'll want to dig in and adapt Emacs to yourself, you'll be facing not just learning elisp and Emacs, but also the complex framework Spacemacs built on top of that.

I've been using Emacs since early 2010s as well, so I'm biased - I had my own convoluted elisp modules before Spacemacs came around :).

Seems like it would make sense to make the file runnable; the "esoteric commands" would work as subcommands / makefile targets of sorts, and the rest of the org file would be what they already are.

In org-mode there's a concept of `tangle`: you can 'compile' (not sure if that's what org calls it) .org files into a number of individually specified scripts or documents. So you can have your top-level NOTES.org and also your scripts/* tangled from it.
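For reference, a minimal sketch of what a tangled block looks like (the heading, file names, and command are all made up for illustration):

```org
* Backup the database
#+begin_src bash :tangle scripts/backup.sh :shebang "#!/bin/bash"
pg_dump mydb > backup.sql
#+end_src
```

Running `M-x org-babel-tangle` in that buffer writes the block out to scripts/backup.sh, shebang included, so the org file stays the single source of truth.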


I use fishshell, and I've gotten in the habit of creating a function called `t` to run tests. This function captures whatever test command I'm currently using. Just test one file, run one test case, use a debugger, capture coverage, etc. I don't save the function, so it doesn't persist across Terminal sessions. If I need to change how I'm running tests, I update the function.

It's a small but noticeable improvement over the way I was working before: either up-arrowing until I found the last time I ran tests, or typing `pytest...` and letting autocomplete figure out what I was doing previously.

edit: So yes, I am also a big fan of helping to enforce consistency by scripting even the small things.

I wrote a small python script that I alias in my shell to "b" (for build). When I run it in a given directory, it prompts me for a command if it doesn't have one already saved. Subsequent runs just run the saved command, but it can save a different command for each folder. I use it to clear and remake my build directory using cmake on my C++ projects, with the various compile options saved in as well. It's basically a persistent version of what you describe.

Here's the script, if you're interested. It's not super complex. https://github.com/wheybags/stuff/blob/master/build-dir-comm...
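For anyone who doesn't want to click through, the core idea can be sketched in a few lines of shell. This is a simplified illustration, not the linked script (which is Python), and the `.build-command` file name is made up:

```shell
#!/bin/sh
# Remember one command per directory; replay it on later runs.
CMDFILE=".build-command"   # hypothetical per-directory save file
if [ ! -f "$CMDFILE" ]; then
    # First run in this directory: ask for the command and save it.
    printf 'Command to run in %s: ' "$PWD"
    read -r cmd
    printf '%s\n' "$cmd" > "$CMDFILE"
fi
# Every run (including the first) executes the saved command.
sh -c "$(cat "$CMDFILE")"
```

Delete the save file to be re-prompted with a new command.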

Have a function similiar to this, reruns the last test command

  function retest
      set --local cmd (history | grep -v history | grep -Em1 "((bundle exec)?rspec|go test|jest)")
      echo "Rerunning `$cmd`"
      eval $cmd
  end

> either up-arrowing until I found the last time I ran tests

just as a substitute for up-arrowing, have you tried ctrl-r/reverse-i-search? (if that's a thing in fishshell)

edit: I see wodenokoto already mentioned this workflow in another comment

If you type before up-arrowing, this is what the up arrow does in fish.

Check out FZF for fuzzy finding through your command history in this manner- it is absolutely the superior way to search command history

in fish you can type some of the command and arrow-up. It will show only commands containing that substring.

So does libreadline (and therefore bash). The following in ~/.inputrc does the trick for vi mode users:

  k: history-search-backward
  j: history-search-forward

There is an equivalent for emacs mode.
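For example, a sketch of the emacs-mode equivalent in ~/.inputrc, binding the up/down arrow escape sequences to prefix search:

```
"\e[A": history-search-backward
"\e[B": history-search-forward
```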

Would you mind sharing a gist of that function?

I’m assuming they mean something like:

  t() {
    bazel test //some/specific/thing/...
    # or pytest, maven, sbt, etc etc
  }
Add in whatever option you want to the test command there. Then you just press “t” and it runs your tests.

Yeah, that's exactly it. There's barely anything to it, and it's trivial to recreate if I close the terminal window

Ahh cool. I thought you meant something a little more context aware. A great idea though!

Consider the task of doing anything with your software a program in a very specific DSL. Then the scripts are your verbs. What you also need are your nouns.

For instance, deploying to a specific target could be "./scripts/deploy.sh stage", backporting a patch could be "./scripts/patch_release.sh 1.1.0 dbcde45", and creating a database migration script could be "./scripts/db_changed.sh 'add new field for model'"

IMO thinking about the verbs is the first important step, but one should also always specify the nouns explicitly.

He mentions this in the original article: scripts can be functional documentation. They are an easy way to learn the common commands in a project, can call out the expected workflow, and document all the commands to accomplish it.

Yes, I use shell as executable documentation, and I think it's the best language for it. I outlined these ideas here:



As shown there, a lot of people are doing this under different names... I hope Oil can provide something consistent.

I've been doing this with Makefiles for a while. It has always bugged me that I'm not "making" an artifact. Think I'm going to try this `run.sh` approach.

Yeah, as mentioned in the posts, make targets that are verbs rather than nouns should really be .PHONY, but most people forget that.
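A minimal sketch of what that looks like (target and script names are made up):

```make
# These targets are verbs, not files, so declare them .PHONY;
# otherwise a file named "test" or "deploy" in the repo would
# silently make the target a no-op.
.PHONY: test deploy

test:
	./scripts/test.sh

deploy:
	./scripts/deploy.sh stage
```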

While the idea of having Make's dependency engine is nice in theory, it falls down for one-off automation in my experience. For a couple reasons:

(1) Make's dependency model has some well known deficiencies. It doesn't play well with tools that produce two files. It doesn't play well with tools that produce a directory tree of "dynamic" filenames (not known when you write the Makefile)

(2) Most makefiles have bugs, especially when you do make -j (parallel builds). It's basically like writing a C program with a bunch of threads racing on global variables -- your Make targets will often be racing on the same file, leading to non-deterministic bugs.


So IMO it's better to mostly stick with the sequential model of shell for this kind of project automation.

But shell can invoke make! When you know your dependencies, invoke them from run.sh! And when you're REALLY sure it's correct, invoke make -j :)

In other words shell is my default, and make is only for when I want to spend the effort to write dependencies -- which is quite difficult, because Make provides you virtually no help with that. Bugs in dependencies are common and hard to find. If you're trained to run "make clean", then that's a symptom of a bug in the build specification.

Shell "gets shit done" without these types of bugs. Debugging a shell script is very easy compared with debugging a makefile. The remaining problems with shell will hopefully be fixed by https://oilshell.org/ :)

I do want to add some dependency support, but I didn't get to it:

Shell, Awk, and Make Should Be Combined http://www.oilshell.org/blog/2016/11/13.html

Make is just a small elaboration on the shell model (concurrent processes and files), but unfortunately it's implemented as a completely separate tool that shells out to shell! facepalm

> (1) Make's dependency model has some well known deficiencies. It doesn't play well with tools that produce two files. It doesn't play well with tools that produce a directory tree of "dynamic" filenames (not known when you write the Makefile)

These are covered by pattern matching and prerequisites, which were backported from mk(1) into GNU Make (albeit they changed the syntax to make it incomprehensible).

> (2) Most makefiles have bugs, especially when you do make -j (parallel builds). It's basically like writing a C program with a bunch of threads racing on global variables -- your Make targets will often be racing on the same file, leading to non-deterministic bugs.

mk(1) makes it easier to avoid bugs and easier to comprehend what the makefile is doing, it also allows you to invoke programming languages for specific targets as a feature of the Makefile, and other goodies :)

I guess you're talking about the Plan 9 tool:


The GNU make pattern rules are OK, but they don't handle some pretty simple scenarios I've encountered in practice.

One was building the Oil shell blog, where the names of the posts are dynamic, and not hard-coded in the make file.

The other one is build variants. Say I have a custom tool with a flag. I want to build:

- two binaries: oil and opy

- two variants, opt and dbg

I want them to look like:


The % pattern rules apparently can't handle this. You need some kind of Turing complete "metaprogramming". (CMake and autoconf offer that, but mostly for different reasons.)

This is a real problem that came up in: https://github.com/oilshell/oil/blob/master/Makefile

I plan to switch to a generated Ninja script to solve some of these problems.

I think that's the equivalent of abstraction/encapsulation in programming languages. Sometimes even one liners can be encapsulated in something else as the implementation might change but the purpose/role doesn't.

Shell functions can also accomplish the same purpose as tiny little scripts. For example, a 2 line shell script can be wrapped in a function instead:

    mymove() {
      cp --verbose $1 $2
      rm --verbose $1
    }
Shell has good abstraction capabilities! It even has some that other languages don't have:

Shell Has a Forth-like Quality http://www.oilshell.org/blog/2017/01/13.html

Pipelines Support Vectorized, Point-Free, and Imperative Style http://www.oilshell.org/blog/2017/01/15.html


Yes, shell functions are cool. I use them to augment standard commands too, e.g. make head(1) or tail(1) output more lines than usual, depending on the terminal's number of lines:

  head () {
   if [[ $# -eq 0 ]]; then
    /usr/bin/head -$[(LINES-1)/2]
   elif [[ -f "$1" ]]; then
    case $# in
     (1) /usr/bin/head -$[(LINES-1)/2] $* ;;
     (2) /usr/bin/head -$[LINES*5/12] $* ;;
     (3) /usr/bin/head -$[(LINES-1)/3] $* ;;
     (*) /usr/bin/head $* ;;
    esac
   else
    /usr/bin/head $*
   fi
  }

With that, I think if the cp fails, the rm will still run and the file will be deleted anyway. Maybe 'set -euo pipefail' will fix it. Do the $1 and $2 need to be quoted too, in case there are special characters?

Yes all that is true -- it was just a quick demonstration.

By the way:

- Oil has set -euo pipefail on by default. (And also nullglob)

- Oil doesn't split words by default, so you don't need the quotes: https://www.oilshell.org/release/0.8.0/doc/simple-word-eval....

Those are all "opt in" -- OSH still runs POSIX shell scripts, if you want all the extra ceremony :)

I have so many common one-liners I use in my current project (that I access using fuzzy search via ctrl-R) that I'm thinking about having a file à la "my-commands" and have it appended to my history, somehow.

That would truly be the opposite of this advice.

Maybe I need to think about this a bit more :)

Shell aliases are your friend.

Just give them some memorable names and add them to your .bashrc. Or, if they are very context-sensitive (that's not great), there is a way to source a file every time you enter a directory; I just don't remember it.

Shell functions do the same thing as aliases, and have fewer parsing problems, and they have more flexible arguments. Instead of:

    alias ls='ls --color'

    ls() {
      command ls --color "$@"
    }
(The "command" prefix avoids recursion)

Greenclips [1] works well for this if you're a rofi [2] user. You can set a staticHistoryPath that points to a file. When activating Greenclips, you can search for the desired command. I've been using this on my Linux box for the last year or so and haven't looked back.

[1] https://github.com/erebe/greenclip

[2] https://github.com/davatorium/rofi

I'm using navi (https://github.com/denisidoro/navi) for commands that are long enough and used less frequently. And it works great.

There is a history command, with a way to reload, so this would be possible by writing to bash_history from bashrc, then reloading the history I think. Not tested.
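A rough sketch of that idea, equally untested in spirit; `my-commands` is a hypothetical file name:

```shell
# In ~/.bashrc: read a per-project command file into this session's
# history list, so the entries show up in ctrl-R / up-arrow search.
# ("my-commands" is a made-up name for the per-project file.)
if [[ -f ./my-commands ]]; then
    history -r ./my-commands
fi
```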

This is a short guide that is an eye opener as to just how much can be done with bash history: https://www.digitalocean.com/community/tutorials/how-to-use-...

You can set up a per-directory bash history: https://unix.stackexchange.com/questions/305524/create-histo...

And a Makefile sounds like it'd be pretty helpful too!


You can also use direnv to add aliases when you enter a certain directory.

> have it appended to my history, somehow

isn't that what .aliases is for?

I honestly don’t know.

If you're not opposed to installing another tool for scripting, then check out my project mask [1]. It's a CLI task runner (made with rust, single binary) that supports multiple scripting languages and it's defined with a simple, human-readable markdown file :)

I used to have a scripts directory for each project, but I really wanted basic argument/options parsing and subcommands. That's mostly why I made mask. Now I use it daily as a project-based task runner, as well as a global utility command.

[1]: https://github.com/jakedeichert/mask

This looks really useful, thank you for making this.

I found that the cost/benefit ratio of adopting https://github.com/casey/just in non-trivial projects made it a worthwhile alternative to bash scripts in script folders.

Seems like a great little tool.

Though it looks a bit too young for my taste: it's not available in most distros' base repositories yet, so it's going to be a tiny bit painful to deploy on every developer laptop, in CI, etc. I tend to prefer readily available tools like make, with 90% of the same features, but 100% distro coverage and prior developer knowledge.

Thankfully we have solved this issue by adopting nix for setting up developer machines/project setup on top of your OS's package manager of choice (OSX/Linux).

Additionally, you automatically get shell completion too! https://github.com/NixOS/nixpkgs/blob/3bb54189b0c8132752fff3...

As described in the README, dropping the 'build' part of Makefiles cuts unnecessary complexity like .PHONY targets, which improves clarity. In smaller teams/companies it makes sense IMO.

I have tons of such scripts for personal use, not just projects. Here's one I wrote a couple days ago called "jabra-stop-changing-volume-goddamnit":

    while sleep 0.1; do pacmd set-source-volume bluez_source.70_BF_92_CD_77_32.headset_head_unit 60000; done
Also, lots of scripts related to ffmpeg and other tools where the command line arguments are too hard to remember. For example, "ffmpeg-extract-sound-from-all-files-in-dir":

    find *.mov | sed -e s/.mov// | xargs --replace=qq --verbose ffmpeg -i qq.mov -acodec pcm_s16le qq.wav

My experience with simplicity is that it requires greater effort. Simple is not easy.

As such, people dedicated to the task of effort avoidance will find a way to avoid increased simplicity. The justifications and qualifiers are creative and elaborate, often themselves taking great effort, often greater than the effort they originally sought to avoid. I see this behavior repeated so frequently in software, and with such profound conviction.

A brief example is a single method call from a language or platform supplied API used to solve a problem instead of the entirety of a large framework. The heresy of such a disgusting travesty.

This is exactly what we do for Dark: https://github.com/darklang/dark/tree/main/scripts

Not only that, but each script automatically runs itself in our docker container so it's fully repeatable. They all start with

    . ./scripts/support/assert-in-container "$0" "$@"
which is just this:

    if [[ ! -v IN_DEV_CONTAINER ]]; then
      scripts/run-in-docker "${@}"
      exit $?
    fi
And this is run-in-docker: https://github.com/darklang/dark/blob/main/scripts/run-in-do...

> backup-db.sh

Just a small nitpick. I’d like it if we collectively moved away from including file extensions for scripts. You never know when you want to rewrite it in python or do something else. Nothing more confusing than opening up “backup.sh” only to find it’s actually a ruby script and must be executed as such.

I handle that by linking src/backup.sh as bin/backup. Then if I rewrite it as src/backup.py, I just change the bin/backup link to point there instead.
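That is, something like this (paths illustrative):

```shell
# Callers only ever see the extensionless name bin/backup.
# The link target is relative to bin/, hence the ../ prefix.
ln -s ../src/backup.sh bin/backup

# Later, after a rewrite in Python, repoint the link;
# everything that calls bin/backup is unchanged.
ln -sf ../src/backup.py bin/backup
```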

We usually end up implementing the same scripts in shell, cmd, and Powershell, since some Windows folks prefer not to install cygwin or use WSL. It's a PITA to maintain, but doable if the scripts are simple and only check for requirements, and the actual work is done by python, groovy, go, whatever.

Did anyone else hold down shift + command and find the achievements part of the site? Fun idea...

What I do is make the same scripts from project to project then have helpful aliases to run them quickly. For example, since I do TDD and often work on a single test file for a while I have a script in every project called:

Then when I want to run it I just type the letter r, since I've aliased r to run ./bin/run. It's super fast. To keep source control clean I add it to:

Which allows my .gitignore to be the same as everyone else's. I used to use the global gitignore file, but I had issues at times.

Why break tasks up into 10 line files when you could break them into 10 line Python/Similar functions?

Same advantages as above, a more powerful cross platform language, cross-task references can be linted

  import subprocess

  def run(params):
    "Run a built binary."
    subprocess.run("./" + params.path)

  if __name__ == '__main__':
    # Take CLI options and run the selected function.
    # Parse CLI arguments
    # Call functions with options

because it's not the unix way and the unix way has a LOT of value.

because not all the tools you use in your system can import your python/similar file. Your deploy pipeline could involve 10 languages running under multiple OSes and versions

or to put it another way... because the whole world doesn't run in your favorite programming language.

I'm a big fan of this but I often find it in tension with necessary flexibility. The more flexibility I add to my scripts, the less useful they are for this purpose.

I usually come down on: things where the output is shared should probably be scripts, things where it's just for me probably shouldn't be (so that I can be more flexible, and there will be less bit-rot of those scripts).

Any other thoughts on this?

Yeah this is a good question. I lean toward making shell scripts work for a specific project. All the paths should be the same for all developers, so I usually use paths relative to the git repo root of the project.
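One way to sketch that convention at the top of every script (assuming the project is a git checkout):

```shell
#!/bin/bash
# Anchor at the repo root so relative paths mean the same thing
# no matter which directory the script is invoked from.
set -euo pipefail
cd "$(git rev-parse --show-toplevel)"
# from here, paths like ./scripts/... are stable for every developer
```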

A key point is that when you want to "abstract", then the right way is usually to write a command line tool invoked from shell scripts in multiple projects. That is the natural way to get flexibility and reuse.

But otherwise it's a bunch of commands dumped in one place.



Here's a random example -- some scripts to generate source code as part of the build process:


The paths are all hard-coded, which is a good thing.

In my mind, the goal is to save time and reduce mistakes. And having a consistent dev environment between everybody on a project is almost a prerequisite for that, and shell can actually enforce that consistency! (i.e. the shell scripts don't work if people have quirks on their machine. They can check the environment too.)
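For example, a script can refuse to run in a divergent environment before doing any work (the tool names checked here are purely illustrative):

```shell
#!/bin/bash
# Sketch: fail fast when the dev environment is missing something,
# instead of failing halfway through with a confusing error.
set -euo pipefail
for tool in git curl; do
  command -v "$tool" >/dev/null 2>&1 || {
    echo "error: missing required tool: $tool" >&2
    exit 1
  }
done
echo "environment ok"
```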

These scripts become documentation. Not about what the commands are doing, but WHY they are being run. This in itself brings huge value.

I spent a fairly sizeable amount of time on this in one of my previous jobs. Most things were now bin/do_this.sh or bin/setup.sh or whatever.

Setting up a PC went from 1-3 days to a 3-hour "run this setup.sh and call me if it crashes".

It will become obvious when you hire someone that this needs to be done. As a project lead, schedule time for it!

I have a similar practice too. For most projects I have *.cmd files such as


Same, although I leave off the extension. https://www.talisman.org/~erlkonig/documents/commandname-ext...

Sounds like the main function here is to create concepts at the correct level of abstraction, a bit like an API.
