Shell script best practices, from a decade of scripting things (sharats.me)
958 points by sharat87 on Oct 27, 2022 | 490 comments



Hands down, shell scripting is one of my all time favorite languages. It gets tons of hate, e.g. "If you have to write more than 10 lines, then use a real language," but I feel like those assertions are more socially-founded opinions than technically-backed arguments.

My basic thesis is that Shell as a programming language---with its dynamic scope, focus on line-oriented text, and pipelines---is simply a different programming paradigm than languages like Perl, Python, whatever.

Obviously, if your mental model is BASIC and you try to write Python, then you encounter lots of friction and it's easy for the latter to feel hacky, bad and ugly. To enjoy and program Python well, it's probably best to shift your mental model. The same goes for Shell.

What is the Shell paradigm? I would argue that it's line-oriented pipelines. There is a ton to unpack in that, but a huge example where I see friction is overuse of variables in scripts. Trying to stuff data inside variables, with shell's paucity of data types is a recipe for irritation. However, if you instead organize all your data in a format that's sympathetic to line-oriented processing on stdin-stdout, then shell will work with you instead of against.
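
For instance, instead of stuffing results into variables, the data can just flow through a pipeline one line at a time. A sketch, with a made-up requests.log whose lines are "status method path":

    # top ten 404'd paths -- no variables or arrays needed
    grep '^404 ' requests.log | awk '{ print $3 }' | sort | uniq -c | sort -rn | head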

/2cents


Shell and SQL make you 10x productive over any alternative. Nothing even comes close. I've seen people scrambling for an hour to write some data munging, then spend another hour to run it through a thread pool to utilize those cores, while somebody comfortable in shell writes a parallelized one liner, rips through GBs of data, and delivers the answer in 15 minutes.
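
Something in the spirit of that one-liner (the filenames and ERROR pattern are made up; each compressed log gets its own worker):

    printf '%s\n' *.log.gz \
      | xargs -P "$(nproc)" -I{} sh -c 'zcat "$1" | grep -c ERROR' _ {} \
      | awk '{ total += $1 } END { print total }'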

What Python is to Java, Shell is to Python. It speeds you up several times. I started using inline 'python -c' more often than the python repl now as it stores the command in shell history and it is then one fzf search away.

While neither Shell nor SQL is perfect, there have been many ideas to improve them, and for sure people can't wait for something new like oil shell to get production ready, getting the shell quoting hell right, or somebody fixing up SQL, bringing old ideas from Datalog and QUEL into it, fixing the goddamn NULL joins, etc.

But honestly, nothing else even comes close to this 10x productivity increase over the next best alternative. No, thank you, I will not rewrite my 10 lines of sh into Python to explode it into 50 lines of shuffling clunky objects around. I'll instead go and reread that man page on how to write an if expression in bash again.


> getting the shell quoting hell right

Shameless plug coming, but this has been a pain point for me too. I found the issue with quotes (in most languages, but particularly in Bash et al) is that the same character is used to close the quote as is used to open it. So in my own shell I added support to use parentheses as quotes in addition to the single and double quotation ASCII symbols. This then allows you to nest quotation marks.

https://murex.rocks/docs/parser/brace-quote.html

You also don’t need to worry about quoting variables as variables are expanded to an argv[] item rather than expanded out to a command line and then any spaces converted into new argv[]s (or in layman’s terms, variables behave like you’d expect variables to behave).

https://github.com/lmorg/murex


One of my favorite Perl features that has been disappointingly under-appropriated by other languages is quoting with q(...).


This is one of my favorite features of Ruby!

Though Ruby makes it confusing AF because there are two quoting types for both strings and symbols, and they're different. (%Q %q %W %w %i) I can never remember which does which.... the letter choice feels really arbitrary.


Elixir has something like this too, but even more powerful (you can define your own):

https://elixir-lang.org/getting-started/sigils.html#strings-...


Ruby and Elixir both have features like this. Very sweet.

Elixir has sigils, which are useful for defining all kinds of literals easier, not just strings:

https://elixir-lang.org/getting-started/sigils.html#strings-...

You can also define your own. It's pretty great.


This means that you can even quote the delimiter in the string as long as it's balanced.

    $X=q( foo() )
Should work if it's balanced. If you choose a different pair like []{} then you can avoid hitting collisions. It also means that you can trivially nest quotations.

I agree that this qualified quotation is really underutilized.


Off topic. What's your opinion on Python?

I also write shell scripts, but I'm just curious what you would think about a comparison.


I’m not a fan of Python; however, that’s down to personal preference rather than objective fact. If Python solves a problem for other people then who am I to judge :)


I noticed that I became so much more quick after taking 1 hour to properly learn awk. Yes, it literally takes about 1 hour.


Awk is awesome, but saying it literally takes 1 hour to properly learn it is overselling it a bit.


I really don't think so! If you have experience with any scripting, you can fully grok the fundamentals of awk in 1 hour. You might not memorize all the nuances, but you can establish the fundamentals to a degree that most things you would try to achieve would take just a few minutes of brushing up.

For those that haven't taken the time yet, I think this is a good place to start:

https://learnxinyminutes.com/docs/awk/

Of course, some people do very advanced things in awk and I absolutely agree that 1 hour of study isn't going to make you a ninja, but it's absolutely enough to learn the awk programming paradigm so that when the need arises you can quickly mobilize the solution you need.

For example: If you're quick to the draw, it can take less time to write an awk one liner to calculate the average of a column in a csv than it does to copy the csv into excel and highlight the column. It's a massive productivity booster.
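
For example, averaging the third column of a comma-separated file (column and filename made up) is a one-liner:

    awk -F, '{ sum += $3; n++ } END { if (n) print sum / n }' data.csv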


Brian Kernighan covers the entire [new] awk language in 40 pages - chapter 2.

There are people who have asked me scripting questions for over a decade, who will not read this for some reason.

It could be read in an hour, but not fully retained.

https://archive.org/download/pdfy-MgN0H1joIoDVoIC7/


I feel like I do this every three years then proceed to never use it. Then I read a post on hn and think about how great it could be; rinse and repeat


yeah that's exactly right. it may only take an hour to learn, but every time i need to use awk it seems like i have to spend an hour to re-learn its goofy syntax.


Alas, this is true; I never correctly recall the order of particular function args as they are fairly random. Still beats the alternative of having to continually internalize entire fragile ecosystems to achieve the same goal.


yeah you're definitely right. i'm sure if it was something i had to use more consistently i'd be able to commit it to memory. maybe...


What? The awk manual is only 827 highly technical pages[1]. If you can't read and internalize that in an hour, I suspect you're a much worse programmer than the OP.

[1] https://www.gnu.org/software/gawk/manual/gawk.html

For the sarcasm impaired among us: everything above this, possibly including this sentence, is sarcasm.


Think the more relevant script equivalent of 'Everything in this statement is false.' is 'All output must return false to have true side effects.'

The quick one ~ true ~ fix was ! or #! without the 1024k copyright.

s-expression notation avoids the issue with (."contents")

MS Windows interpretation is much more terse & colorful.


Awk is an amazingly effective tool for getting things done quickly.

Submitted yesterday:

Learn to use Awk with hundreds of examples

https://github.com/learnbyexample/Command-line-text-processi...

https://news.ycombinator.com/item?id=33349930


All you need to do is learn that cmd | awk '{ print $5 }' will print out the 5th word as delimited by one or more whitespace characters. Regexes support this easily but are cumbersome to write on the command line.


Doing that, maybe with some inline concatenation to make a new structure, and this is about all I use:

Printing based on another field, example gets UIDs >= 1000:

    awk -F: '$3 >= 1000 {print $0}' /etc/passwd
It can do plenty of magic, but knowing how to pull fields, concat them together, and select based on them covers like 99% of the things I hope to do with it.


And don't forget the invisible $0 field in awk...


And $NF
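
For instance:

    echo 'a b c' | awk '{ print $0 " | " $NF }'   # $0 is the whole line, $NF the last field -> "a b c | c"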


It takes a lot less time to learn to be fairly productive with awk than with, say, vi / vim. Over time I've realized that gluing these text manipulation tools together is only an intermediate step toward learning how to write and use them in a manner that is maintainable across several generations of engineers as well as portable across many different environments, and that's still a mostly unsolved problem IMO for not just shell scripts but programming languages in general. For example, the same shell script that does something as seemingly simple as performing a sha256 checksum on macOS won't work on most Linux distributions. So in the end one winds up writing a lot of utilities all over again in yet another language for the sake of portability which ironically hurts maintainability and readability for sure because it's simply more code that can rot.
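
A common workaround sketch for that particular example is to branch on whichever checksum tool the platform actually ships (coreutils has sha256sum, macOS ships shasum):

    if command -v sha256sum >/dev/null 2>&1; then
        sha256sum "$1"
    else
        shasum -a 256 "$1"    # the macOS/BSD spelling
    fi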


The only thing I use AWK for is getting at columns from output (possibly processing or conditionally doing something on each). What would be the next big use-case?


I use it frequently to calculate some basic statistics on log file data.

Here's a nice example of something similar: https://drewdevault.com/dynlib


awk automata theory and oop the results. add unicode for extra tics!

Scripted Chomsky grammar ( https://en.wikipedia.org/wiki/Universal_grammar ) to unleash the power of regular expressions.


I have used it to extract a table to restore from a MySQL database dump.


For simple scripting tasks, yes. I have had the opposite experience for more critical software engineering tasks (as in, coding integrated over time and people).

Language aside, the ecosystem and culture do not afford enough in way of testing, dependency management, feature flags, static analysis, legibility, and so on. The reason people say to keep shell programs short is because of these problems, it needs to be possible to rewrite shell programs on a whim. At least then, you can A/B test and deploy at that scope.


awk is great for things that will be used over several decades (where the hardware / OS you started with no longer exists at the end of a multi-decade project, but data from start to end still has to be used).


I feel like the reasons for this are:

* Shell scripts force you to think in a more scalable way (data streams)

* Shell scripts compose rich programs rather than simplistic functions

* Shells encourage you to program with a rich, extensible feature set (ad-hoc I/O redirection, files)

The only times I don’t like shell scripts are when dealing with regex and dealing with parallelism


The POSIX shell does not implement regex.

What is used both in case/esac and globbing are "shell patterns." They are also found in variable pattern removal with ${X% and ${X#.

In "The Unix Programming Environment," Kernighan and Pike apologized for these close concepts that are easily mistaken for one another.

"Regular expressions are specified by giving special meaning to certain characters, just like the asterix, etc., used by the shell. There are a few more metacharacters, and, regrettably, differences in meanings." (page 102)

Bash does implement both patterns and regex, which means discerning their difference becomes even more critical. The POSIX shell is easier in memory for this reason, and others.
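
A small illustration of the pattern side (globs, not regexes):

    f=archive.tar.gz
    case "$f" in
        *.tar.gz) echo tarball ;;    # * and ? are glob metacharacters here
    esac
    echo "${f%.tar.gz}"    # remove pattern from the end   -> archive
    echo "${f#archive.}"   # remove pattern from the front -> tar.gz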

http://files.catwell.info/misc/mirror/


> The only times I don’t like shell scripts are when dealing with regex and dealing with parallelism

Wow, for me parallelism is one of the best features of a unix shell and I find it vastly superior to most other programming languages.


Can you expand on the parallelism features you use and what shell? In bash I've basically given up managing background jobs because identifying and waiting for them properly is super clunky; throttling them is impossible (pool of workers) and so for that kind of thing I've had to use GNU parallel (which is its own abstruse mini-language thing and obviously nothing to do with shell). Ad-hoc but correct parallelism and first class job management was one of the things that got me to switch away from bash.


GNU parallel
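
For instance (made-up filenames), capped at eight concurrent jobs:

    parallel -j8 'gzip -9 {}' ::: *.log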


It's great for embarrassingly parallel data processing, but not good for concurrent/async tasks.


I'd add a working knowledge of regex to that. With a decent text editor + some fairly basic regex skills you can go a long way.


> I started using inline 'python -c' more often than the python repl now as it stores the command in shell history and it is then one fzf search away.

Do you not have a ~/.python_history? The exact same search functions are available on the REPL. Ctrl-R, type your bit, bam.


Exact same - can I use fzf history search using Ctrl+R like I can in shell?


I've just started installing ipython on pretty much every python environment I set up on personal laptops, but there is repl history even without ipython: https://stackoverflow.com/a/7008316/1170550


I expect nushell to massively change how I work:

https://www.nushell.sh/

It's a shell that is actually built for structured data, taking lessons learned from PowerShell and others.


> getting the shell quoting hell right

Running `parallel --shellquote --shellquote --shellquote` and pasting in the line you want to quote thrice may alleviate some of the pain.

By no means ideal, though.


Python is a terrible comparison language here. Of course shell is better than Python for shell stuff; no one should suggest otherwise. Python is extremely verbose, it requires you to be precise with whitespace, and using regex has friction because it's not actually built into the language syntax (unless something has changed very recently).

The comparison should be to perl or Ruby, both of which will fare better than Python for typical shell-type tasks.


If I'm interactively composing something I do very much like pipes and shell commands, but if it's a thing I'm going to be running repeatedly then the improved maintainability of a python script, even if it does a lot of subprocess.run, is preferable to me. "Shuffling clunky objects around" seems more documented and organized than "everything is a bytestring".

But different strokes and all that.


> while somebody comfortable in shell writes a parallelized one liner, rips through GBs of data, and delivers the answer in 15 minutes.

This also works up to a point where those GBs turn into hundreds of GBs, or even PBs, and a proper distributed setup can return results in seconds.


I often find that downloading lots of data from s3 using `xargs aws sync`, and then xargs on some crunching pipeline, is much faster than a 100 core spark cluster


That's a hardware management question. The optimized binary used in my shell script still runs orders of magnitude faster and cheaper if you orchestrate 100 machines for it than any Hadoop, Spark, Beam, Snowflake, Redshift, Bigquery or what have you.

That's not to say I'd do everything in shell. Most stuff fits well into SQL, but when it comes to optimizing processing over TB or PB scale, you won't beat shell+massive hw orchestration.


usually you use specific frameworks for that, not pure Python.


I suppose the Python side is a strawman then - who would do that for a small dataset that fits on a machine? Or have I been using shell for too long :-)


I thought the above comment was about datasets that do not fit on ones machine?


As far as control-R command history searching, really enjoying McFly https://github.com/cantino/mcfly


> while somebody comfortable in shell writes a parallelized one liner

Do you have an example of this? I didn’t even know you could make sql calls in scripts.


I don’t have an example, but this article comes to mind and you may be able to find an example in it

https://adamdrake.com/command-line-tools-can-be-235x-faster-...


  PSQL="psql postgresql://$POSTGRES_USER:$POSTGRES_PASSWORD@$DATABASE_HOST:$DATABASE_PORT/$POSTGRES_DB -t -P pager=off -c "
  
  OLD="CURRENT_DATE - INTERVAL '5 years'"

  $PSQL "SELECT id from apt WHERE apt.created_on > $OLD order by apt.created_on asc;" | while 
  read -r id; do
    if [[ $id != "" ]]; then
      printf "\n\*\* Do something in the loop with id where newer than \"$OLD\" \*\*\*\n"
      # ...
    fi
  done


mysql, psql etc. let you issue sql from the command line

I don't do much sql in bash scripts but I do keep some wrapper scripts that let me run queries from stdin to databases in my environment


A WASM Gawk wrapper script in a web browser, with relevant information about the schema grammar / template file, would allow for alternate display formats beyond CLI text (aka HTML, LaTeX, "database report output", CSV, etc.)


> Hands down, shell scripting is one of my all time favorite languages. It gets tons of hate, e.g. "If you have to write more than 10 lines, then use a real language," but I feel like those assertions are more socially-founded opinions than technically-backed arguments.

It is "opinion" based on debugging scripts made by people (which might be "you but few years ago") that don't know the full extent of death-traps that are put in the language. Or really writing anything more complex.

About only strong side of shell as a language is a pipe character. Everything else is less convenient at best, actively dangerous at worst.

Sure, "how to write something in a limited language" might be fun mental excercise but as someone sitting in ops space for good part of 15 years, it's just a burden.

Hell, I'd rather debug Perl script than Bash one...

Yeah, if it is few pipes and some minor post processing I'd use it too (pipe is the easiest way to do it out of all languages I've seen) but that's about it.

It is nice to write one-liners in cmdline but characteristic that make it nice there make it worse programming language. A bit like Perl in that matter


You say this as if it wasn't extremely common to find giant python monstrosities that can be replaced by a handful of lines of shell. TBF the shell code often is not just cleaner and easier to follow, but also faster.

It's possible to use the wrong tool for the job in any language - including language choice itself.

Dismissing a programming language because it's not shell and dismissing shell because it's not a programming language are the same thing - a bad idea if that's your only decision criteria.


Bash is a good tool if the script is short enough, but if you have to write more than 10 lines, then use a real language.


Nonsense. That's a terrible metric.

If I need to run 11 commands in a row, suddenly I need to make sure new tooling is installed in my instance and/or ship a binary?

What if those 11 lines are setting up some networking? Now I need to go write a 400-line Go program to use netlink to accomplish the same task? Or should I condense that to 80 lines of Go that shell out to the commands that replicate the 11 lines of simple bash?

There are plenty of reasons to do this, I have done it more than once. None of those reasons are "crossed an arbitrary magic number of 'lines of shell'".


If my bash script is more than 10 lines, I switch to python and if that's more than 10 lines I switch to C! And if that's more than 10 lines I use assembly!

/s


>However, if you instead organize all your data in a format that's sympathetic to line-oriented processing on stdin-stdout, then shell will work with you instead of against.

Not even that is necessary. Just use structured data formats like json. If you are consuming some API that is not json but still structured, use `rq` to convert it to json. Then use `jq` to slice and dice through the data.

dmenu + fzf + jq + curl is my bread and butter in shell scripts.

However, I still haven't managed to find a way to do a bunch of tasks concurrently. No, xargs and parallel don't cut it. Just give me an opinionated way to do this that is easily inspectable, loggable and debuggable. Currently I hack together functions in a `((job_i++ < max_jobs)) || wait -n` spaghetti.
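
For the record, a sketch of that kind of ad-hoc throttling (`process` is a stand-in for the real per-item work; `wait -n` needs bash >= 4.3):

    max_jobs=4
    for f in *.json; do
        process "$f" &
        while (( $(jobs -rp | wc -l) >= max_jobs )); do
            wait -n    # block until any one background job finishes
        done
    done
    wait               # drain whatever is still running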


I think this comment points to an even deeper insight: shell is a crappy programming language but with amazing extensibility.

I would argue that once you pull in jq, you're no longer writing in "shell", you're writing in jq, which is a separate and different language. But that's precisely the point! Look at how effortless it is to (literally) shell out to a slew of other languages from shell.

The power of shell isn't in the scripting language itself, it's in how fluidly it lets you embed snippets of tr, sed, awk, jq, and whatever else you need.

And, critically, these languages callable from shell were not all there when shell was designed. The extension interface of spawning processes and communicating with arguments and pipes is just that powerful. That's where shell shines.


The shell is an ambiguous language that cannot be directly implemented with an LR parser.

Perhaps some of the power emerges from that ambiguity, but it is quite difficult to implement.

This presentation sums up the woes of an implementor:

https://archive.fosdem.org/2018/schedule/event/code_parsing_...


Do you have examples of concurrent use-cases that xargs and parallel don't satisfy? I discovered parallel recently and was blown away by how much it improves things. I've only really used it in basic scenarios so far, just wondering where its limitations are.


Running a bash function with its own private variables in parallel, without having to export it.


How do you use dmenu for your shell script? to launch it? to prompt the user for input while it's running?

Do you have an example of a script you wrote?


Yes, for creating ad-hoc mini-UIs so the user can select an option. Same with fzf, but it's terminal-bound (rather than X-bound).

The scripts are similar to this one:

https://github.com/debxp/dmenu-scripts/blob/master/dmenu-kil...


Thanks, I will definitely use that kill one.


WASM gawk with HTML as user input/output would be more flexible.


Can you give an example of how you'd use rq in this pipeline? I'm not finding any good examples


curl -s "give.me.some/yaml" | rq --input-yaml --output-json | jq '.my.selected[1].field'


New to 'rq'; it's not in active development, any other alternatives? It seems to do a lot more than convert structured data to JSON.


Not sure what it is doing more...I'm referring to this rq: https://github.com/dflemstr/rq#format-support-status

It converts to/from the listed formats.

There is also `jc` (written in Python) with the added benefit that it converts output of many common unix utilities to json. So you would not need to parse `ip` for example.

https://github.com/kellyjonbrazil/jc#parsers


Also look at `yq` - https://github.com/mikefarah/yq

This is a wrapper to jq that also supports yaml and other file formats.


> "If you have to write more than 10 lines, then use a real language"

I swear, there should be an HN rule against those. They pollute every single Shell discussion, bringing nothing to them and making it hard for others to discuss the real topic.


There are three numbers in this industry: 0, 1 and infinity. Any other number - especially when stated as a rule, limitation, or law - is highly suspect.


Are you one of those people who take everything literally, so any and all jokes fly far over their heads?

This rule of ten lines or less is clearly meant as an illustrative guideline. Obviously if you have a shell script that has 11 lines, but does what it has to do reliably, nobody will be bothered.

The idea that the rule is trying to convey is "don't write long, complex programs in shell". Arguing about exact numbers or wording here is detracting from the topic at hand.


0, 1, 3 and infinity


Which works not just to preserve the previous statement from internal inconsistency, but also in regards to the incredibly useful Rule of Three (https://en.m.wikipedia.org/wiki/Rule_of_three_(computer_prog...).


> Which works not just to preserve the previous statement from internal inconsistency

It doesn't. You now have 4 numbers.


0, 1, 3, 4 and infinity - there's four numbers in this industry.

Five There's five numbers in this industry 0, 1, 3, 4, 5 and infinity

Wait, I'll come in again


0, 1, 7, and indeterminate, IME.

The 7 being for design. If there are more than 7 boxes on the whiteboard, try again.


ah, log base 2 of 7 is 127 bits (aka 8, y 1).

Unicode character can have more than 7 font boxes associated with one character box and still be a valid determinate character form.


thought the industry was broken down in 8 bit increments (0, 8, 16, 32, 64, 128, etc)

log base 2 of 4 is only 16bits


Good point. I'm not sure why I thought what I'd written above worked... shrug


Think using a real line discipline like n 8 1 would make more semantic sense than 'use a real language'.

Unless, the language is APL, in which case, 10 lines is an operating system.


The majority of those comments have significantly more thought put into them (and adhere more closely to the HN guidelines) than this comment does.


Is there a link to HN line discipline criteria? (beyond ASCII ranges 0 through 31)


> What is the Shell paradigm? I would argue that it's line-oriented pipelines.

Which Python can do relatively well, by using the `subprocess` module.

Here is an example including a https://porkmail.org/era/unix/award (useless use of cat) finding all title lines in README.md and uppercasing them with `tr`

    import subprocess as sp
    cat = sp.Popen(
        ["cat", "README.md"],
        stdout=sp.PIPE,
    )
    grep = sp.Popen(
        ["grep", "#"],
        stdin=cat.stdout,
        stdout=sp.PIPE,
    )
    tr = sp.Popen(
        ["tr", "[:lower:]", "[:upper:]"],
        stdin=grep.stdout,
        stderr=sp.PIPE,
        stdout=sp.PIPE,
    )
    out, err = tr.communicate()
    print(out.decode("utf-8"), err.decode("utf-8"))
Is this more complicated than doing it in bash? Certainly. But on the other side of that coin it's a lot easier in Python to do a complex regular expression (maybe depending on a command line argument) on one of those, using the result in an HTTP request via the `requests` module, packing the results into a diagram rendered in PNG and sending it via email.

Yes, that is a convoluted example, but it illustrates the point I am trying to make. Everything outlined could probably be done in a bash script, but I am pretty certain it would be much harder, and much more difficult to maintain, than doing this in Python.

Bash is absolutely fine up to a point. And with enough effort, bash can do extremely complex things. But as soon as things get more complex than standard unix tools, I'd rather give up on the comfort of having specialized syntax for pipes and filehandles, and write a few more lines handling those, if that means that I can do the more complex stuff easily using the rich module ecosystem of Python.


> But on the other side of that coin it's a lot easier in Python to do a complex regular expression

I am not sure I would agree. Sed fills this role quite nicely.

cat README.md | grep '#' | tr '[:lower:]' '[:upper:]' | sed 's/something/something_else/'


Now do that again, but this time the regular expression is controlled by 2 command line params, one which gives it the substitution, the other one is a boolean switch that tells it whether to ignore case. And the script has to give a good error if the substitution isn't a valid regular expression. It should also give me a helptext for its command line options if I ask it with `-h, --h`.

In Python I can use `optparse`/`argparse`, and use the error output from `re.compile` to do this.

Of course this is also possible in bash, but how easy is it to code in comparison, and how maintainable is the result?


Man, you chose the wrong username, didn't you? ;-)


Not really, I love bash. I also love perl and vimscript btw. :D


In the example I gave I wouldn't write that in a script file, so I would just alter the command itself.

If I wanted to parse cli args I would use case on the input to mux out the args. I personally prefer writing cli interfaces this way (when using a scripting language).

    while test $# -gt 0; do  
      case "$1" in  
        -f|--flag) shift; FLAG="$1";;  
      esac  
      shift  
    done


grep+tr can be done within sed too (or go with perl for more features and easier portability)
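
For the grep+tr example above, that would be something like this (\U is a GNU sed extension):

    sed -n '/#/ s/.*/\U&/p' README.md
    # or the perl spelling:
    perl -ne 'print uc if /#/' README.md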


one tool/command per 'concept' was a resource saving thing at one time.

sed is the thing that handles shell regular expressions for shellscripts.


> But on the other side of that coin it's a lot easier in Python to do a complex regular expression (maybe depending on a command line argument) on one of those, using the result in an HTTP request via the `requests` module, packing the results into a diagram rendered in PNG and sending it via email.

Doesn't sound so bad. A quick argument parser, a call out to grep or sed, pipe to curl, then to graphviz I guess (I don't really know much about image generation tools though), then compose the mail with a heredoc and run sendmail. Sounds like 10 to 15 lines for a quick and dirty solution.


It's certainly possible, but here comes the fun: how read/maintain/extend-able is the solution? How well does it handle errors, assist the user? Add checking if all the programs are installed and useful error messages into the mix. Then the API does a tiny change and now we need a `jq` between curl and graphviz, and maybe we'd need an option for that case as well, and so on, and so on, ...

Bash scripts have a nasty tendency to grow, sometimes in ways that are disproportionate to the bit of extra functionality that is suddenly required. Very quickly, a small quick'n'dirty solution can blow up into a compost heap ... no less dirty, but now instead of a clean wipe, I'd need a shovel to get through it.

I think my handle speaks for itself as to how much I like bash. But I have had the pleasure of getting handed over bash scripts, hundreds of lines long, with the error description being "it no longer works, could you have a look at it?", and the original author both unreachable and apparently having strong feelings against comments.

And in many of these cases, it took me less time to code a clean solution in Python or Go, than it took me to grok what the hell that script was actually doing.


shell was originally tied to job/program processing.


I would agree, with the caveat that Bourne Shell isn't really a programming language, and has to be seen as such to be loved.

Bourne Shell Scripting is literally a bunch of weird backwards compatible hacks around the first command line prompt from 1970. The intent was to preserve the experience of a human at a command prompt, and add extra functionality for automation.

It's basically a high-powered user interface. It emphasizes what the operator wants for productivity, instead of the designer in her CS ivory tower of perfection. You can be insanely productive on a single line, or paste that line into a file for repeatability. So many programmers fail to grasp that programming adds considerations that the power user doesn't care about. The Shell abstracts away all that unnecessary stuff and just lets you get simple things done quickly.


Hard Disagree. Bash programming:

- no standard unit testing

- how do you debug except with printlns? Fail.

- each line usually takes a minimum of 10 minutes to debug unless you've done bash scripting for... ten years

- basic constructs like the arg array are broken once you have special chars and spaces and want to pass those args to other commands. and UNICODE? Ha.

- standard library is nil, you're dependent on a hodgepodge of possibly installed programs

- there is no dependency resolution or auto-install of those programs or libraries or shell scripts. since it is so dependent on binary programs, that's a good thing, but also sucks for bash programmers

- horrid rules on type conversions, horrid syntax, space-significant rules

- as TFA shows, basic error checking and other conventions is horrid, yeah I want a crap 20 line header for everything

- effective bash is a bag of tricks. Bag of tricks programming is shit. You need to do ANYTHING in it for parsing, etc? Copy paste in functions is basically the solution.

- I'm not going to say interpreter errors are worse than C++ errors, but it's certainly not anything good.

Honestly since even effing JAVA added a hashbang ability, I no longer need bash.

Go ahead, write some bash autocompletion scripts in bash. Lord is that awful. Try writing something with a complex options / argument interface and detect/parse errors in the command line. Awful.

Bash is basically software engineering from the 1970s, oh yeah, except take away the word "engineering". Because the language is actively opposed to anything that "engineering" would entail.


> - basic constructs like the arg array are broken once you have special chars and spaces and want to pass those args to other commands. and UNICODE? Ha.

Any example with this? The following works reasonably well for me.

  args=(-a --b 'arg with space' "一 二 三")
  someprog "${args[@]}"


> - how do you debug except with printlns? Fail.

With trace (`set -x`), which is talked about in TFA.

By the way, nobody uses exclusively bash. When I worked for a cloud provider, it was basically 30% Python (Ansible), 30% Perl, 5 to 10% bash, and a bit of other languages depending on the client needs (mostly Java, but also Julia and R).


There are workloads where shell scripts are the so-called right tool for a job. All too often I see people writing scripts in "proper" languages and calling os.system() on every other line. Shell scripts are good for gluing programs together. It's fine to use them for that.


For me, it's once you make the switch to a "proper" language that you realize how much lifting pipelines do when it comes to chaining external binaries together.


Heaping things together is better than letting things stack up/down.


1000% THIS. The trick, of course, is knowing when it's time to abandon shell for something more powerful, but that usually comes with experience.


I wrote such a program that runs other programs for heavy lifting but also parses text, which you can't possibly do in bash.


bootloader, systemd, or init ?

Parsing text isn't anything fancy.

It's just knowing what the marker is for a word/item boundary.

For bash, that marker is defined in IFS


A build system for single file programs.


Eh, this is true but I dont think its because of the programming model of bash. I feel like this is conflating the *nix ecosystem with bash. If every programming language was configured by default and had access to standard unix tools with idiomatic bindings, Shell's advantages would be greatly reduced. You still get a scripting language with some neat tricks but I don't think I would reach for it nearly as often if other things were an option.

And sure sure you can call any process from a language but the assumptions are different. No one wants to call a Java jar that has a dependency on the jq CLI app being available.


This has been tried repeatedly - language idiomatic bindings tend to be clunky compared to (e.g.) a simple | pipeline or a couple of <() io redirections.

Shell is a tool that turns out to be pretty good for some things, particularly composing functionality out of other programs and also doing system configuration/tuning stuff to tailor an environment for other programs. It's also really handy for automating tasks you find yourself repeating.

Programming languages are a tool that are pretty good for other things - making new programs, tricky logic, making the most (or at least more than a shell script launching 1000s of new processes) efficient use of a computer.

Trying to replace one with the other is not really useful - they have different jobs. Learning to use them in conjunction on the other hand... there's a lot of power in that.

By comparison - javascript and html. They don't replace each other - yet they are both computer languages used in the same domain, and both have strengths and weaknesses. They have different jobs. And when you use them in conjunction you get something pretty darn powerful.


I also like Bash - it's a powerful language, especially when combined with a rich ecosystem of external commands that can make your life easier, e.g. GNU Parallel.

Handling binary data can also work in Bash, provided that you just use it as a glue for pipelines between other programs (e.g. feeding video data into ffmpeg).

One time, while working on some computer vision project, I had a need to hack up a video-capture-and-upload program for gathering training data during a certain time of day. It took me about 20 minutes and 50 lines of Bash to setup the whole thing, test it, and be sure it works.


To add to this, it's designed to work in conjunction with small programs. You don't write everything using bash (or whatever shell) built-ins. It will feel like a crappier Perl. If there is some part of your script where you're struggling to use an existing tool (e.g. built-ins, system utils), write your own small program to handle that part of the stream and add it in to your pipe. Since shell is a REPL, you get instant feedback and you'll know if it's working properly.

It's also important to learn your system's environment too. This is your "standard library", and it's why POSIX compatibility is important. You will feel shell is limited if you don't learn how to use the system utilities with shell (or if your target system has common utilities missing).

As an example of flexibility, you can use shell and system utilities in combination with CGI and a basic web server to send and receive text messages on an Android phone with termux. Similar to a KDE Connect or Apple's iMessage.


> I feel like those assertions are more socially-founded opinions than technically-backed arguments

You think the complaints about rickety, unintuitive syntax are "socially founded"? I can't think of another language that has so many pointless syntax issues every time I revisit it. I haven't seen a line of Scheme in over a decade, and I'm still fairly sure I could write a simple if condition with less likelihood of getting it wrong than Bash.

I came at it from the other end, writing complex shell scripts for years because of the intuition that python would be overkill. But there was a moment when I realized how irrational this was: shell languages are enough of a garbage fire that Python was trivially the better choice for my scripts the minute flow control enters the picture.


> with its dynamic scope

Bash has dynamic scope with its local variables.

The standard POSIX language has only global variables: one pervasive scope.
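
A quick illustration of what dynamic scope means there: a `local` declared in the caller is visible in everything it calls.

    outer() { local x=visible; inner; }
    inner() { echo "$x"; }
    outer    # prints "visible" -- inner sees outer's local, not a lexical scope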


Line-oriented pipelines are great and have their place but I'm still sticking to a high-level general purpose programming language (lets abbreviate this as HGPPL) for scripts longer than 10 lines, because the following reasons:

* I like the HGPPL data structures and convenient library for manipulating them (in my case this is Clojure, which has a great core library). Bash has indexed and associative arrays.

* Libraries for common data formats are also used in a consistent way in the HGPPL. I don't have to remember a DSL for every data format - i.e. how to use jq when dealing with JSON. Similarly for YAML, XML, CSVs, I can also do templating for configuration files for nginx and so on. I've seen way too many naive attempts to piece together valid YAML from strings in bash to know its just not worth doing.

* I don't want to switch programming language from the main application and I find helps "break down silos" when everyone can read and contribute to some code. If a team is just sysadmins - sure, make bash the official language and stick to it.

* I can write scripts without repeating myself using namespaces and higher-order functions, which is my choice of paradigm for abstractions; others write cleanly with classes. You can follow best practices and avoid the use of ENV vars, but that requires extra discipline and it is hard to enforce on others in the type of places where bash is used.


Also, the fact that $() invokes a subparser which lets you use double quotes in an already double-quoted expression is something I miss when using Python f-strings.
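
i.e. this kind of thing just works, because the command substitution starts a fresh quoting context (PATTERN and FILE are placeholders):

    echo "last match: $(grep "$PATTERN" "$FILE" | tail -n 1)"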


> My basic thesis is that Shell as a programming language---with its dynamic scope, focus on line-oriented text, and pipelines---is simply a different programming paradigm than languages like Perl, Python, whatever.

This argument is essentially the same as "dynamic typing is just a different programming paradigm than static typing, and not intrinsically better or worse" - but to an even greater extent, because bash isn't really typed at all.

To those who think that static (and optional/gradual) typing brings strong benefits with little downsides over dynamic typing and becomes increasingly important as the size of a program increases, bash is simply unacceptable for any non-trivial program.

Other people (like yourself) that think that static typing isn't that important and "it's just a matter of preference" will be fine with an untyped language like bash.

Unfortunately, it's really hard to find concrete, clear evidence that one typing paradigm is better than the other, so we can't really make a good argument for one or the other using science.

However, I can say that you're conflating different traits of shell languages here. You say "dynamic scope, focus on line-oriented text, and pipelines" - but each of those are very different, and you're missing the most contested one (typing). Shell's untypedness is probably the biggest complaint about it, and the line-oriented text paradigm is really contentious, but most people don't care very much about the scoping, and lots of people like the pipelines feature.

A shell language that was statically-typed, with clear scoping rules, non-cryptic syntax, structured data, and pipelines would likely be popular and relatively non-controversial.


Eh, as soon as you have to deal with arrays and hash tables/dicts or something like JSON, bash becomes very painful and hard to read.


I mean they're not that bad.

    declare -A mydict=( [lookma]=initialization )
    mydict[foo]=bar
    echo "${mydict[foo]}"

    list=()
    list+=(foo bar baz)
    echo "${list[0]}"


Now do an associative array containing another associative array.


Easy.

  declare -A outer=(
    [inner]="_inner"
  )
  declare -A _inner=(
    [key]="value"
  )
Access inner elements via a nameref.

  declare -n inner="${outer[inner]}"
  echo "${inner[key]}"
  # value
Currently writing a compiler in Bash built largely on this premise.


That seems really inconvenient to be honest.


Flatten the damn thing and process it relationally. Linear data scans and copying are so fast on modern hardware that it doesn't matter. It's counterintuitive for people to learn that flattened nested structure with massive duplication still processes faster than that deeply nested beast because you have to chase pointers all over the place. Unfortunately that's what people learn at java schools and they get stuck with that pointer chasing paradigm for the rest of their careers.


Then what I need is a tuple in bash


Sometimes you just have to accept a language's limitations.

Try in Python to make a nested defaultdict you can access like the following.

    d = <something>
    d["a"]["b"]["c"]  # --> 42
Can't be done because it's impossible for user code to detect what the last __getitem__ call is and return the default.

Edit: Dang it, I mean arbitrary depth.


    from collections import defaultdict

    c = defaultdict(lambda: 42)
    b = defaultdict(lambda: c)
    a = defaultdict(lambda: b)
    a["a"]["b"]["c"]  # --> 42


Okay fair, I deserve that. I assumed it was obvious I meant arbitrary depth.

Also d["a"] and d["a"]["b"] aren't 42.


If d["a"]["b"] is 42, then how could d["a"]["b"]["c"] also be 42? What you want doesn't make sense semantically. Normally, we'd expect these two statements to be equivalent

d["a"]["b"]["c"] == (d["a"]["b"])["c"]


I mean you got it but it's something a lot of people want. The semantic reason for it is so you can look up an arbitrary path on a dict and if it's not present get a default, usually None. It can be done by catching KeyError but it has to happen on the caller side which is annoying. I can't make a real nested mapping that returns none if the keys aren't there.

    d = magicdict()
    is42 = d["foo"]["bar"]["baz"]
      # -> You can read any path and get a default if it doesn't exist.

    d["hello"]["world"] = 420 
      # -> You can set any path and d will then contain { "hello": { "world": 420 }
People use things like jmespath to do this but the fundamental issue is that __getitem__ isn't None safe when you want nested dicts. It's a godsend when dealing with JSON.

I feel like we're maybe too in the weeds, I should have just said "now have two expressions in your lambda."


What languages allow such a construct? It seems like it would be super confusing if these two code samples produced different values:

    # One
    a = d["a"]["b"]["c"]
    
    # Two
    a = d["a"]["b"]
    b = a["c"]


The MagicMock class from the unittest.mock module does what you want.

I have a hard time understanding any use case outside of such mocking.


In this case you're chaining discrete lookup operations where it sounds like you really want a composite key. You could easily implement this if you accepted the syntax of it as d["a.b.c"] or d["a", "b", "c"] or d.query("a", "b", "c")

Otherwise I'm not sure of a mainstream language that would let you do a.get(x).get(y) == 42 but a.get(x).get(y).get(z) == 42, unless you resorted to monkey patching the number type, as it implies 42.get(z) == 42, which seems.. silly


Kindred spirit. I particularly love variable variables and exploit them often. Some would call it abuse I guess.


The biggest issue is that error handling is completely broken in POSIX shell scripting (including Bash). Even errexit doesn't work as any normal language would implement it (one could say it is broken by design).

So if you don't care about error cases everything is fine, but if you do, it gets ugly really fast. And that is the reason why other languages are probably better suited if you want to write something bigger than 10 lines.

However, I have to admit, I don't follow that advice myself...


> The biggest issue is that error handling is completely broken in POSIX shell scripting (including Bash). Even errexit doesn't work as any normal language would implement it (one could say it is broken by design).

I guess you're referring to http://mywiki.wooledge.org/BashFAQ/105. Got recently hit by these as well.


Yes, and my personal favorite: functions can behave differently depending on whether they are called from a conditional expression vs. from a normal context. Errexit has no effect if the function is called from a conditional expression.
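
A minimal sketch of that trap, assuming `set -e` at the top of the script:

    set -e
    f() {
        false                 # would normally abort the script under errexit
        echo "still here"
    }
    if f; then                # f is the condition, so errexit is suppressed inside it:
        echo "f succeeded"    # both echos run
    fi
    f                         # called normally: the script exits at `false`
    echo "never reached"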


I sometimes regret I never learned to "really" write shell scripts. I stumbled across Perl early on, and for anything more complex than canned command invocation(s) or a simple loop, I usually go for Perl.

There is something to be said in favor of the shell being always available, but Perl is almost always available. FreeBSD does not have it as part of the base system, but OpenBSD does, and most Linux distros do, too.

But it is fun to connect a couple of simple commands via pipes and create something surprisingly complex. I don't do it all the time, but it happens.


As someone who has used a lot of shell over my career, I do love it as a utility and a programming paradigm.

However the biggest issues I've had is that the code is really hard to test, error handling in shell isn't robust, and reusability with library type methods is not easy to organize or debug.

Those are deal breakers for me when it comes to building any kind of non trivial system.


Shell scripting also inspired some choices (especially syntax) of the Toit language (toitlang.org).

Clearly, it's for a different purpose, and there are some things that wouldn't work in a general-purpose language that isn't as focused on line-based string processing, but we are really happy with the things we took from bash.


Aye.. I've been saying for years that shell scripting is how I meditate, and I'm only mostly joking

Shell quoting though, Aieeee...

I find I have to shift gears quite substantially moving from shell or powershell to anything else...

"I'll just pipe the output of this function into.. oh, right"


I've written a lot of shell scripts. I have my own best practices that work for me. I don't like it one bit. I mean, it's enjoyable to write shell scripts, it's just not enjoyable to deal with them long-term.


> Use bash. Using zsh or fish or any other, will make it hard for others to understand / collaborate. Among all shells, bash strikes a good balance between portability and DX.

I think fish is quite a bit different in terms of syntax and semantics (I'm not very familiar with it), but zsh is essentially the same as bash except without most of the needless footguns and awkwardness. zsh also has many more advanced features, which you don't need to use (and many people are unaware of them anyway), but will very quickly become useful; in bash all sorts of things require obscure incantations and/or shell pipelines that almost make APL seem obvious in comparison.

In my experience few people understand bash (or POSIX sh) in the first place, partly because everything is so difficult and full of caveats. Half my professional shell scripting experience on the job is fixing other people's scripts. So might as well use something that doesn't accidentally introduce bugs every other line.

Most – though obviously far from all – scripts tend to be run in environments you control; portability is often overrated and not all that important (except when it is of course). Once upon a time I insisted on POSIX sh, and then I realised that actually, >90% of the scripts I wrote were run just by me or run only in an environment otherwise under my control, and that it made no sense. I still use POSIX sh for some public things I write, when it makes sense, but that's fairly rare.

I think bash is really standing in the way of progress, whether that progress is in the form of fish, zsh, oil shell, or something else, because so many people conflate "shell" with "bash", similar to how people conflate "Google" with "search" or "git" with "GitHub" (to some degree).


I can't really stand Bash's arcane syntax; it drains my brain power (and time consulting the manual) every time I have to work with it. Switching to Fish has been a breath of fresh air for me. I think some people who want to use only Bash need to open their conservative mind. All of my personal shell scripts are now converted to Fish. If I want to run some POSIX-compatible script then I just use `bash scripts.sh`

Of course Bash is ubiquitous, so I use it whenever I can in the company. A golden rule for me is: if it has more than 50 lines then I should probably write it in a decent programming language (e.g. Ruby). It makes maintenance so much easier.


Bash as a language is downright bad, especially the mess around backticks, double quotes and single quotes. Fish is better in this regard; however, the syntax of Fish's string-related functions is unbearable. (I have a growing suspicion that, with string-related functions, syntactically valid expressions can be constructed which don't compile!)

However, neither Bash nor Fish were created with composability in mind, which is a show-stopper for me.

IMO don't use Bash if the script is longer than 20 lines and don't use Fish if it's longer than 50. Use Python. If you want to use a proper(!) language, use any LISP dialect like Babashka, Guile Scheme, Racket, etc. If you need types, have a look at Haskell scripting.

EDIT: To clarify, use Fish for its bling-bling capabilities, don't use it for scripting and configuring your machine(s).


This battle was lost a long time ago. Bash is the standard on most UNIX systems. If you change this reality, one might even start to try to think about writing in fish or some other new shell. But I will not even consider another shell for scripts that need to be run by other people.


POSIX shell is the standard, not bash.


That ship has sailed, because busybox ash and dash continually implement some of bash's features and semantics, which come from ksh.

And OSH implements almost all of bash

That is, the POSIX shell spec is missing a lot of reality. It's not very actively maintained, unfortunately.

The canonical example is not having local vars, which basically every shell supports


> The POSIX shell spec is missing a lot of reality. It's not very actively maintained, unfortunately

POSIX was primarily intended as a descriptive specification, rather than a prescriptive one.

That is, it attempted to document and standardize the common behaviour found on many platforms during the Great Unix Wars, rather than say "hey we thought of this great new thing and released a spec, go implement it!", which is more how, say, web standards work. It does/did have some of that, but it was never the main goal.

These days "whatever Linux does" is the de-facto standard, for better or worse, and the need for POSIX is much less.


That's just feature creep; eventually they will implement all features of bash, zsh, csh, fish, elvish, oil, HolyC and all other shells that emerge in the meantime.


"Everybody using some extensions" is not a contradiction to "The standard is the baseline".

> It’s not very actively maintained, unfortunately

Because that's how standards work. If there is enough interest a new standard will be made, but right now, bash isn't it.


I agree that POSIX is in need of a few small additions:

* local keyword

* 2 or 3 variable expansion tricks (string replacement etc)

* pipefail
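
For reference, the sort of (ksh/bash, non-POSIX) tricks meant here, shown with made-up values:

    f=report.draft.txt
    echo "${f/.draft/}"    # replace first match -> report.txt
    echo "${f//t/T}"       # replace all matches -> reporT.drafT.TxT
    set -o pipefail        # the pipefail from the list above: a pipeline now fails if any stage fails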

But with those, I think the spec isn't too bad. I've been writing (imo) high-quality (and shellcheck-compliant) shell scripts for a decade, and _always_ try to be as pedantic about being POSIX compliant as humanly possible. Sometimes things are a _little_ harder (no sarcasm here), but it's really quite doable once you get the hang of it.

Your scripts then end up being far more portable and, as stated below, MUCH easier to read. The trick is to write readable code (which is hard enough for most people, regardless of the language anyway :p)

e.g. (a very simple example), and an addition to Shrikant's post: avoid the terse double ampersand 'and' (&&) and double pipe 'or' (||) operators in normal calls. Do e.g.

    if ! external_command; then
        do_something_with_failure
    fi

rather than `external_command || do_something_with_failure`.

These can end up becoming really hard to read once they get longer, whereas reading a stupid "if" statement is easy to comprehend. Readability over saving a few lines.

One important hint left forgotten: always make sure the last line of your file is `exit 0` or similar. This is more of a 'weak' security feature: it prevents people from appending to the file and executing stuff instead, and gives a known exit point.

Another addition would be to actually favor single quotes for pure strings, and use double quotes where you expect the shell to do something with your string (true for most cases, but there's plenty of strings that benefit from single quotes, and it hints to the reader what to expect). Also, integers should never be quoted (as it would turn them into strings), which shows a 'bad' example in the trace statement: you are now comparing the strings '0' and '1' rather than the numbers. Best use something easier to read in that case.

One thing I will pick up from this post for sure is the trace hint; I'm adding that to my scripts, but probably a little more tunable.

    set -eu

    if [ -n "${DEBUG_TRACE_SH:-}" ] && \
       [ "${DEBUG_TRACE_SH:-}" != "${DEBUG_TRACE_SH#"$(basename "${0}")"}" ] || \
       [ "${DEBUG_TRACE_SH:-}" = 'all' ]; then
        set -x
    fi

    echo 'Hello World'

    exit 0

Though I'll probably just settle for one of those keywords, I like 'all' the best atm. This would run the trace only if this special keyword is given, or if the variable contains the name of the script. I initially had `basename "${0%%.sh}"` but that makes things like `test,test1,test2` impossible, though that only helps if the extension is used ;)

While it need not be part of this list, I personally also always use functions, unless the script really is less than a handful of lines; like Google's bash guide, always have a main, etc.

In the end though, I do disagree with writing bash and bashisms. There are plenty of posts about that; it's kind of like the whole C++ 'which version to use' discussion, where bash tends to be inconsistent with itself.

Shameless plug: See some of my older stuff here https://gitlab.com/esbs/bootstrap/-/blob/master/docker_boots...


Not going to install fish on all of my servers just so i can run your scripts, sorry. They already have bash pre-installed, though.

> If I want to run some POSIX-compatible script then I just use `bash scripts.sh`

Shouldn't you be using a shebang?


>Shouldn't you be using a shebang?

Why would they use a shebang? `bash script.sh` works perfectly fine. Ever used a terminal?


I use fish as an interactive shell but I don't write fish scripts. Once you accept a script has a dependency, there seems little reason not to go all the way to python (or usually I just go all the way to rust now, but I suspect others may disagree with me more on that than on python)


If you are going to write in a language that requires installing additional dependencies on every machine, why not something like Lua? The great thing about bash for me is that it just works on most machines without dependencies.


A little personal color: I’m kind of a terminal tweak-fanatic but I’ve stuck with bash.

Ten years or so ago the cool kids were using zsh: which is in general a pretty reasonable move, it’s got way more command-line amenities than bash (at least built in).

Today fish is the fucking business: fish is so much more fun as a CLI freak.

But I guess I've got enough PTSD from when k8s or its proprietary equivalents get stuck, and I always wanted to be not only functional but fast in outage-type scenarios, so I kept bash as a daily driver.

Writing shell scripts of any kind is godawful, and the equivalent python is the code you want to own, but its universality is a real selling point, like why I keep half an eye on Perl5 even though I loathe it: it may suck, but it's always there when the klaxon is going off.

The best possible software is useless if it’s not installed.


I personally really dislike fish as an interactive shell as it's just so busy. Things keep popping up, everything is in so many different colours, etc. It's great if you like that sort of stuff, but I really appreciate a "quiet" environment. This is also why I use Vim: all the IDEs I tried are just so "busy".

I was only talking about scripting; I know fish scripting is different, but I have no idea if it's any good. For interactive shells I don't care what people use: it's 100% a personal choice.


If you want, `fish_config` opens up an easy editor for changing all the colors to whatever you find quiet and soothing.

You have a level of control over things popping up too


Quite a few things can't be disabled; for example AFAIK it doesn't offer a way to disable the autocomplete altogether, or the "fuzzy" matching. I really dislike these things. Fish is a great shell, but very opinionated which is great if your preferences align with that, and not-so-great if they don't. Which is fine because it makes the project better for those who do want these things, and not every project needs to cater to everyone.


This comment is so reasonable I’m getting a contact high of pragmatism.


I don't know fish, but I don't consider zsh a step in the right direction, as it tries to be just a cleaned up Bash, which is not enough.

There is a general problem in the fact that a radical evolution of glue languages wouldn't be popular, because devs would rather use Python, and small evolutions (i.e. zsh) wouldn't be popular either, because they end up being confusing (since they're still close to Bash) without bringing significant advantages.

I'm curious why there haven't been attempts to write a modern glue language (mind that languages like Python don't fit this class). I guess that Powershell (which I don't know, though) has been the only attempt.


zsh is not a "cleaned-up bash"; it's more of a clone of ksh (closed source at the time), with some csh features added in, as well as their own inventions. bash and zsh appeared at roughly the same time, many features were added in zsh first and added to bash later (sometimes much later, and often never).

This is kind of a good example of what I meant when people conflate "bash" with "shell".

As for your larger point: I kind of agree, but I think what zsh offers is the advantages of shell scripts and compatibility with existing scripts, while still improving on them. That said, I believe oil also offers compatibility, but I haven't had the chance to look deeply into it; just haven't had the time, and wanted to wait until it's stable (maybe it is now?)

Perl was initially invented as the "modern glue language" to replace shell. It's fallen a bit out of fashion these days though, and to be honest I never cared all that much for Perl myself either. Raku looks nice though. TCL also works well as a kind of "glue language", although it has some really odd behaviour at times due to everything being a string and I know some people hate it with a passion, but it always worked fairly well for me. But that has also fallen out of fashion.

I've also been told PowerShell is actually quite nice and has interesting concepts (and now also open source, and you can run it on e.g. Linux), but I could never get over the verbosity of it all. I'm an old unix greybeard and I want my obscure abbreviations dammit!


The verbosity of PowerShell is overstated, I think. You can easily make POSH look as gnarly and esoteric as Bash if you so desire. That said, the majority of the heavy lifting in POSH is done via methods these days (vs cmdlets). Your initial API query to snag the JSON might be via a cmdlet, but after that, you're slicing and dicing with real data structures. You can interact with them without having to worry about whitespace or structure (meaning complex loops can easily be written on the command line without worrying about indentation).

It's a little more wordy if you're used to C or Bash. But hands down it's one of my favorite languages for slicing and dicing data. No need for 3rd-party libraries or binaries. No need to learn a bunch of weird awk/jq syntax which is only useful for those two tools (yay, let's learn 3 languages instead of one?). Plus, most of the structure translates over to C#, and you can integrate C# code directly into your POSH code if desired, as well as access pretty much any low-level C# methods directly.

Working with strings? Pretty much any/every tool you could want to slice and dice strings.

The POSH REPL is amazing. You have far more flexibility around interacting with the command line than you do with Python. It's both a shell and a true language. As with any language, there are ISMs, but far fewer footguns than any other language I've spun up.

Cross platform as well with 6.0+

Intellisense ON the command line (did I mention the awesome REPL?). Hands down some of the best built-in parameter/args/help parsing I've encountered across any language. Debugging? Amazing in VS Code. And it can be done strictly from the command line as well (dynamic breakpoints? You've got it: it drops you right into your catch block with an interactive shell so you can see the current state of any/all variables, manipulate them live, and resume if desired).

Okay, I'm done shilling for POSH. It's hands down one of my favorite shells/languages for doing POC work, or writing utility functions. Treat it more like Python than bash. But realize that you can easily use that Pythonic-esque code right inside your shell.


+1 for the Powershell ISE/Repl - its by far the most user friendly entry to administrative scripting I've ever run into.


I agree with all of this. Well said.


Just so you know, you can abbreviate almost anything in PowerShell or make your own aliases. I love PowerShell; hands down the best investment in my personal career was to really learn and understand PowerShell.


I'm a Unix user and spent almost all of my professional career in bash, and switched to Powershell for my interactive shell a few years ago.

The nice thing with Powershell is that it's not verbose, but the arcane abbreviations are actually quite a bit easier to remember and discover than bash. What mixes people up is that in documented examples and reusable scripts, it makes sense to use the full, canonical name, which looks aesthetically different coming from a Unix background.

Here's what it might actually look like to check a JSON file that has an array of file metadata objects, and delete the ones that have been processed (this includes one user-defined alias, cfj, for "ConvertFrom-Json"):

  gc queue.json | cfj | where status -eq processed | ri
That seems pretty NON-verbose to me, equivalent to how you'd approach this in bash. Do you have jq installed? If you do, perhaps:

  jq -r '.[] | select(.status == "processed") | .file' queue.json | xargs rm
If you don't have jq I think this gets much longer and is either really brittle (you're doing some kind of adhoc parsing that's terrible) or really heavyweight (like pulling in a pure bash JSON library--they exist) or you're using a one-liner from another programming language (Ruby, Python, something). Or maybe you'd complain to whatever was writing 'queue.json' and ask for a friendlier format, like something terrible and one-off you invented because it's easy to "parse" with awk?).

It's even better if you're dealing with network resources:

  $api = 'https://api/'
  irm $api/queue.json | ?{ $_.status -eq 'processed' } | `
    %{ irm -M Delete "$api/files/$($_.file)" }
That shows off another alias of Where-Object and how you do a foreach loop in a pipeline. And bash:

  api=https://api/
  curl $api/queue.json | \
    jq '.[] | select(.status == "processed") | .file' | \
    while read file; do curl -X DELETE "$api/files/$file"; done
What probably makes you think Powershell is verbose is that, although that's how I type Powershell, it's not how I document it. If I were documenting it for someone else's use, or incorporating that pipeline into a script or module, I'd write it like this:

  Get-Content -Path queue.json | ConvertFrom-Json | `
    Where-Object -Property status -Eq 'processed' | Remove-Item
So it's not like it's verbose while you're using it; verbosity is something you can reach for when clarity is desirable. Likewise, you can see the consistent relationship between these commands and their aliases: gc -> Get-Content, cfj -> ConvertFrom-Json, ri -> Remove-Item. So you have options for how verbose you need to be. I find it's very useful to have a spelled-out version for commands I don't use all the time, like 'Get-EC2Instance', and consistent verbs so I can make reasonable guesses at the command name for things I'm even less familiar with.

I didn't want to clutter this with too many examples, but I'll reiterate that Powershell is a shell. It invokes your Unix commands just fine. For example, if you forget what the option to Get-Date is to get a Unix-style timestamp (it's 'get-date -uf %s', so not exactly hard to use): you can just type 'date +%s'. In the example above, I could have used 'cat' and 'xargs rm' instead of 'gc' and 'ri'. So it's not like you have to buy the whole kit and kaboodle right away, either.


Very well said; most people don't know about the shorter way.


zsh is, historically, a step from csh-like interactive shells in the direction of Bourne/ksh compatibility. It's easy to get the impression that zsh is a newer development than bash, but they're actually contemporary—bash rode on the popularity of GNU in the 90s, despite being a "small evolution" (frankly, a step back) compared to the ksh lineage.


It certainly didn’t hurt the popularity of Bash by having it be the default shell on tens of millions of Macs for so many years.

I’m aware that ZSH has been the default shell since Catalina.

Started using fish a month ago and really liking it.


tcsh was the default shell before that, and it didn't help much with its popularity, and for interactive usage tcsh can do most of the things bash can and is mostly okay (not scripting though).

I think being the de-facto default on Linux as part of "GNU plus Linux" has more to do with it.


Make sure you check out abbreviations

Like aliases but they expand in-place, so auto completion friendly, easily modifiable, etc. Love them

$ abbr s sudo

$ s<space> -> sudo


Oilshell is attempting new stuff, though.


I think the article means using Bash for scripting, while the reader could use anything they want interactively. That's what I do—I use zsh, but I don't script in zsh.


Yes, I was talking about scripting. I don't care what people use for their interactive shell: that's their own personal choice.


> Most – though obviously far from all – scripts tend to be run in environments you control; portability is often overrated and not all that important (except when it is of course)

If you're at that spot, don't use shell in the first place but whatever other scripting language your team uses. Well, unless it's "pipe this to that to that"; sh has no parallel there.


If I had the choice of using zsh, then most likely I would also have had the choice to use python.


Is there any way to get MacOS to stop nagging you about zsh?


Export $BASH_SILENCE_DEPRECATION_WARNING as described in the Apple web page pointed to by the nag message, or change your shell to your own version of Bash.

See also <https://apple.stackexchange.com/questions/371997/suppressing...>. I went with the "use an updated brewed Bash" approach, which has been working well. Using `sudo chfn` means you don't need to futz around with editing /etc/shells.
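
If you go the environment-variable route, it's a one-liner in your startup file (from memory, so double-check Apple's page):

    # in ~/.bash_profile
    export BASH_SILENCE_DEPRECATION_WARNING=1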


ah, that is lovely. Thank you.


I just use whatever the default is, and since that's zsh on macOS now, I just ported my ~/.profile to be functionally interchangeable between zsh and bash

Same PS1, aliases, functions, etc. but with a couple of slight variations due to syntax differences


There is no default scripting language, though.


First time I’ve ever seen “good DX” as one of bash’s selling points.


sh and bash feel pretty primitive after learning PowerShell.


I tried PowerShell, hated it. The idea of manipulating objects instead of text streams is interesting, and avoids most of the footguns sh/bash have, but you also lose in flexibility.

One reason is that there are thousands of command line tools in the UNIX ecosystem that process text streams and are designed to work with shells like bash. You have much less options when you are processing PowerShell objects.

Note: I think the first thing I tried to do in PowerShell was a script that scanned a directory recursively for files containing a CRC in their name, and then checked it, or something along those lines. After several hours of trying, I simply couldn't do it, while it was relatively straightforward in bash, even with spaces in the file names.
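
Roughly, the bash version looked something like this (reconstructed from memory, and assuming the `crc32` tool from libarchive-zip-perl plus an 8-hex-digit CRC embedded in the name):

    find . -type f -print0 | while IFS= read -r -d '' f; do
        # take the last 8-hex-digit run in the filename as the expected CRC
        want=$(basename "$f" | grep -oiE '[0-9a-f]{8}' | tail -n 1 | tr 'A-F' 'a-f')
        [ -n "$want" ] || continue
        have=$(crc32 "$f")   # prints the file's CRC-32 as hex
        [ "$want" = "$have" ] || printf 'CRC mismatch: %s (name %s, file %s)\n' "$f" "$want" "$have"
    done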

And that's not that I like UNIX shell scripting, in fact I hate it, so many footguns, that's why I wanted to try PowerShell, but it didn't fit my needs.


What things are you not able to do with PowerShell that you can with Bash + utilities? PowerShell literally gives you access to every way you can manipulate a string in C#. I get it, you're familiar with bash. But just because you know how to do something with something you're familiar with, doesn't mean PowerShell can't do what you wanted it to do.

Not trying to be combative, but learning any new language is going to require some dogfooding and digging in order to be as efficient as with something you're already familiar with. I use bash and PowerShell daily. I can't think of any one thing I couldn't do in one or the other. Bash usually requires tying in another tool, though; it's not strictly bash at that point. People bring up stuff like jq; I rarely find that pre-installed. And if it's your machine, you can just as easily install something that doesn't require you to learn multiple languages (jq is a language).

I hear you. I've been there. I'd lazily prod you to give POSH another shake. I've no horse in this race, but I think you're missing out if you weren't able to accomplish in POSH what you were able to do in bash.


"You have much less options when you are processing PowerShell objects."

There is simply no need for the kind of extensive text processing common in Linux because every command returns an object whose fields can be directly referenced. Combined with the ConvertTo-Json command this is incredibly powerful. Honestly it seems like you are attempting to do things in PowerShell the bash way instead of the PowerShell way.

" I think the first thing I tried to do in PowerShell was a script that scanned a directory recursively for files containing a CRC in their name, and then check it"

I wrote a PowerShell script that recursively scanned every file and folder on our file shares and wrote the permissions to a file for later indexing.


I tried to write a PowerShell script to recursively scan and find files/folders older than a certain date, but kept hitting problems with the length of the paths/filenames. As a complete PowerShell noob, I'm sure I was trying to do it the wrong way, but after a few attempts I gave up and installed cygwin instead.


Your problem here is with Win32, not PowerShell, I suspect. Your script would probably have worked with PowerShell on Linux, but on Windows you need to use extended-length (`\\?\`) paths to get the ~32,767-character path limit rather than the 260-character MAX_PATH limit of regular paths.

Or there's some registry hacks to remove the limit from regular paths.


You're probably right. It was a few years ago and after trying a couple of alternatives, just went with what I know.


You can enable long path support in Windows to have paths up to 32,767 characters long

https://learn.microsoft.com/en-us/windows/win32/fileio/maxim...


It was a few years back, so probably before Windows included that ability (Windows 10 onwards?)


    ls -File | % { $fh = Get-FileHash $_ -Algorithm SHA1; if ($_.Name -match $fh.Hash) { $_ } }


Doesn't work: it is not recursive, and it was a CRC, not SHA1. But I probably could have found a solution starting from there.

Anyways, it was a long time ago, maybe I will give PowerShell a retry at some point.


Ofc. it works. If on Linux, replace ls with `gci`. Recursion is done with `ls -Recurse`. Install-Package CRC. It's still a one-liner.


I feel like PowerShell hides too much to be used regularly. I have a dozen small shell scripts and aliases that basically do what PS helps me do when I work on Windows (and some), but at least I know how they work under the hood.

I had to work with Sencha/ExtJS early 2010. It was the same feeling. Yes, it is powerful, but too much magic happens for something without a clear orientation (at the time; now it is used for data-loaded frontends, I think). PS, I don't understand what it wants me to do.

The language is fine. The interface is now fine, but in 2015 it was the shittiest tty available on modern computers. It's been okay since at least early 2021 (when I restarted using Windows). I know it should be reasonably better, but I wouldn't trust any PS script written before 2021 to run on my workstation. I run bash scripts I wrote when I started coding.

Still, if you're new to the gig and don't care about free software and the commons, you should learn PS (unless you want to work on bare metal or on MC; in that case, bash will be enough).


I recently replaced a bit of code that looked up locked files on a file share with the SMB cmdlets that do the same.

The performance difference was night and day.

The biggest issue with PowerShell is that PowerShell Core is not yet default on Windows 10/11 and Windows Server. That should be Microsofts highest priority for PowerShell.


Then they should not hobble who can run PowerShell scripts out of the box, which makes it seem like a dangerous tool, so no one wants to use it. Some form of PowerShell has been there since Win7, yet in one of the versions they decided 'oh, only admins can use this unless you run this special command'. So it makes me revert to using CMD scripts for some simple things, because I do not want to have to walk whoever it is through enabling PowerShell.


> I had to work with Sencha/ExtJS early 2010. It was the same feeling. Yes, it is powerful, but too much magic happens for something without a clear orientation (at the time; now it is used for data-loaded frontends, I think). PS, I don't understand what it wants me to do.

Bit off-topic, but I worked with ExtJS around the same time, and I found it one of the most confusing development experiences I ever had. "It's so easy, just add this one property to this deeply nested object you're passing to this function!" Thinking back on it, it's a really good example of "simple vs. easy". It didn't help that it gave not a peep if you got one of those data structures wrong (capitalisation typo, wrong location, etc.)

Plus in hindsight the whole idea of "OS semantics in your browser!" was never a good one to start with, although that wasn't as obvious to me at the time.


Yeah, exactly, powershell is easy, but not simple!

ExtJS (on NetBeans with Windows Server 2004) was my worst development experience ever; it was my first internship too. That's probably the reason why it took so long (and a bag of money) for me to try JS, an IDE, and developing on Windows again (I only used Windows for CTFs and Android Studio).


"Primitive" seems like the wrong word to apply. A modern Python programmer (a common example) might think that anything not OOP is archaic. But OOP is just one design pattern. And... it can easily lead to over-engineering and delight in the pattern per se.

Bash, awk, grep et alii don't do OOP. But they are close to the data and are powerful.

Complaints about the notation of these tools (e.g. "line noise") are becoming silly now that we've seen the appearance of some PowerShell and Python statements. Any non-trivial notation will have to make choices. ^_^


grep is a domain specific language.

Bash and gawk don't have direct in-language support for OOP, but can be used/abused to do things in an OOP manner. I.e., without in-language support (abstractions/APIs), it takes much more effort to do/understand an OOP approach.


""Primitive" seems like the wrong word to apply"

Painful, ugly, unpleasant?


I guess you would admit that beauty is subjective. Beauty in tooling is an even more rarefied discussion.

Code that we might deem "beautiful" may not even compile. ^_^

Does the tool do what it needs to do without getting fussy, gobbling RAM, and requiring a small army of maintainers to feed the monster? There is most definitely a place in technology for that kind of sanity.

There is something to be said for a design that doesn't bring with it a brigade of Opinions-As-A-Service proponents.

Any non-trivial notation is going to require learning. There is absolute value in learning. I think of many flexible and effective notations in tooling that has literally been the foundation of modern computing.

Maybe we are in the age of the Mono-Notation Luxury Coder. ^_^


They don't feel primitive, they are primitive.


Yep, just as a screwdriver is. For certain jobs, that's all you need.


Indeed, except then you have to maintain screwdriver.


I prefer "primal" for old great things.


sh and bash feel a lot more sane and reasonable after learning some PowerShell. PowerShell is mostly not-a-shell.


Can you expand on this? What do you mean by it's "mostly not-a-shell."? I don't understand it.


It's mostly a scripting language whose commands communicate with each other, which also lets you issue commands interactively. In an actual shell, programs and commands communicate with the _user_ via textual output that the user can read and understand; and other programs can take the place of the user, reading and parsing the textual output instead. That's not what happens in PowerShell, mostly.


As much as I love ZSH in my daily life, in scripting I HATE it for not having the "==" operator! >:((


It works inside [[ ]], just not in [ ].

=name will expand to the entry in your PATH. e.g. =ls expands to /usr/bin/ls. So == expands to an executable named =, or rather, it tries to as you probably don't have = in your PATH.

[[ ]] disables expansions (e.g. [[ * = * ]] will work too) so it's not an issue there.
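
A quick demo of that behaviour (exact error text may differ between zsh versions):

    zsh -c 'echo =ls'                  # prints the resolved path, e.g. /usr/bin/ls
    zsh -c '[ a == a ] && echo ok'     # fails with something like: zsh: = not found
    zsh -c '[[ a == a ]] && echo ok'   # prints: ok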


> Use the .sh (or .bash) extension for your file. It may be fancy to not have an extension for your script, but unless your case explicitly depends on it, you’re probably just trying to do clever stuff. Clever stuff are hard to understand.

I don't agree with this one. When I name my script without an extension (btw, .sh is fine, .bash is ugly) I want my script to look just like any other command: as a user I do not care what language the program is written in, I care about its output and what it does.

When I develop a script, I get the correct syntax highlighting because of the shebang, so the extension doesn't matter.

The rest of the post is great.


My rule of thumb is no extension if the script goes to the local bin folder, and `.sh` otherwise. Beyond syntax highlighting, the extension also helps with wildcard matching for file operations (`ls`, `cp`, `for` loops, etc).


And this rule has been followed by the majority (if not all) of interpreted and scripting languages. The likes of Ruby, Python and JS have multiple examples. Whatever executable is in your $PATH won't have an extension.

Not sure if this convention is actually documented anywhere.

Random examples:

- https://github.com/PyCQA/isort/blob/main/pyproject.toml#L100

- https://github.com/pypa/pip/blob/main/setup.py#L78

- https://github.com/11ty/eleventy/blob/master/package.json#L1...


Though in any non-trash editor you get syntax highlight based on shebang line alone.

One advantage of no-extension is that you can swap the implementation language later without "breaking" shell history for people in your team.


What I do is have a scripts folder where the names have extensions and which is version controlled and symlink them from `.local/bin`


shellcheck *.sh FTW


> .bash is ugly

"Ugly" is subjective. If I encountered a file with that extension, I'd assume it uses Bash-specific features and that I shouldn't run this script with another shell.


The hashbang already specifies the shell, so also having it in the extension seems unnecessary. I don't like using '.sh' as an extension as it differs from other OS commands, and I can't think of where it's actually helpful.


If you download a script then running "sh script.sh" is a lot quicker and easier than a chmod followed by ./script.sh. You can of course also type "bash script.sh", but I don't always have it installed on every system, and the .bash extension just clarifies it.

For things in my PATH I drop any suffixes like that.


I see your point, but you can just as easily run "sh script" although that does imply that you already know that it's a shell script (obviously you wouldn't just run something from the internet without checking it first).


> obviously you wouldn't just run something from the internet without checking it first

This died a long time ago with the pervasive use of NPM and PIP and the likes.

Most developers probably run a lot of random unchecked shit all the time with local user privileges today without a blink.

Somehow people are ready for all this, but are still afraid to run a random shell script from the internet. I guess this fear is one of our chances to explain how NPM and PIP can be dangerous.


> This died a long time ago with the pervasive use of NPM and PIP and the likes.

When a malicious package is found on NPM or PIP, it will get removed. However, it is quite unlikely that a website will be taken down for a malicious script (or only after a long time).

I really doubt that most readers of HN would run a random script unless it comes from a source they trust (trusted enough, at least, to remove a malicious script in a timely fashion).


It's also fairly common to use a docker container that someone else built without having a look at it


I don't know. I do that a lot on test servers when tinkering with new stuff, but at work I'm very careful about what I insert into my employer's infrastructure. If someone breaks in using a hole I should have taken care of, that's already bad. But if I invite bad guys in by installing a C&C for them, that's super bad.


If you're committed to being an idiot no amount of rules of thumb will save you. Not understanding what you use is one facet of being committed to that.


But people don't live in a vacuum. They live in an ecosystem, are subjected to it and contribute to it.

If I contribute to Nextcloud or write an app for it, I need to run npm. If I want to run PeerTube, I need to run npm. They both pull a shitload of dependencies I can't possibly review.

I personally avoid building anything using NPM and advocate for fewer / no dependencies, or for using dependencies packaged by reputable entities like Debian, but what can I do? I can't build everything myself.

Am I committed to being an idiot?


The .sh extension told me it's a shell script of some kind.

I don't check everything I download from the internet; I don't think anyone does. It depends on what it is, where I'm getting it from, where I'm running it, etc. There are certainly some things I will review carefully, but other things I give just a quick check to see it's not in complete shambles, and others I barely check at all. I typically run the latest Vim from master, do I check every patch to see if after 30 years Bram finally sneaked in a crypto miner or password stealer? Do the people who package Vim for the Linux distros?


There's a difference between trusting well known software such as vim (especially when packaged by a distro) and trusting shell scripts from essentially anyone. If it's a well known resource, then I'd likely trust the script without checking (e.g. adding a docker repo to Ubuntu), but otherwise I'm going to give it a quick eyeball.

I would tend to agree that scripts for downloading from the internet should have a '.sh' extension to make it clear that it's a script as opposed to a binary.


Indeed; that was my point exactly. People complain about things like "curl https://sh.rustup.rs | sh" from the Rust homepage, but it's essentially the same as trusting "cd vim && ./configure && make && make install" (plus, if I would hide anything I'd do it in the probably quite large binary that script downloads, which is much harder to audit).


Subjective, indeed. But unless I am missing something, the interpreter to be used should be determined by the shebang within the script, though?


Extension or not, .sh or .bash: definitely subjective.

That was the intention of my comment. Because the rest of the post (or most of it for sure) is not subjective.


It's the effect of the extension on USER behavior that's the problem, the OS doesn't care.


Yes, I get it. But in my case I simply give u+x permissions to the script and then run "./script.sh" and then the script will be executed with the interpreter defined in the shebang.


Personally, I prefer to keep the extension and add an alias in my aliases file inside "~/.bashrc.d". Redundant, perhaps, but I like to run ls inside "~/.local/bin" (the place where I throw user-wide personal executables) and be able to see at first glance what is a binary and what is a script.


There are use cases where you don't have execute privileges. In those cases the .sh-extension makes it clear that you can do `bash script.sh`. If you don't use an extension you wouldn't easily see that that was an option.


No, it doesn't. The extensions are usually too inaccurate to rely on. That could be either a Bourne or a Bash script, meaning it could either fail at some arbitrary point during the run if the wrong shell is used, or just subtly, critically change some output. Much more true for Python scripts.


I honestly do it mostly coz IDEA is/was mighty stupid when it comes to detecting file types compared to Emacs... altho newer editions seemed to fix that problem for the most part


Also an extension will prevent execution from cron.d on Debian-based systems.


Really? I've never heard of that and I mainly use Ubuntu which is Debian-based


In my setup, I use aliases or functions to have short/mnemonic names for commands. But the files on disk must always have proper extensions like .sh to quickly see what they are.


The extensions are improper from the outset. Commands should not have extensions.


"Use set -o errexit"

Only if it doesn't matter that the script fails non-gracefully. Some scripts are better off either having explicit error-handling code, or simply never failing. In particular, scripts you source into your shell should not use set options to change the shell's default behavior.

"Prefer to use set -o nounset."

ALWAYS use this option. You can test for a variable that might not be set with "${FOO:-}". There is no real downside.
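
A trivial sketch of what that looks like:

    set -u
    # with plain "$FOO" this would abort when FOO is unset; the :- default avoids that
    if [ -n "${FOO:-}" ]; then
        echo "FOO is set to: $FOO"
    else
        echo "FOO is not set, carrying on"
    fi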

"Use set -o pipefail."

Waste of time. You will spend so much time debugging your app due to random pipe failures that actually didn't matter. Don't use this option; just check the output of the pipe for sane values.

"Use [[ ]] for conditions"

No!!! Only use that for bashisms where there's no POSIX alternative and try to avoid them wherever possible. YAGNI!

"Use cd "$(dirname "$0")""

Use either "$(dirname "${BASH_SOURCE[0]}")" or grab a POSIX `readlink -f` implementation.

"Use shellcheck."

This should have been Best Practice #1. You will learn more about scripting from shellcheck than 10 years worth of blog posts. Always use shellcheck. Always.

Also, don't use set -o nounset when set -u will do. Always avoid doing something "fancy" with a Bashism if there's a simpler POSIX way. The whole point of scripts is for them to be dead simple.


> YAGNI

For most people, YAGNI means using convenient Bash-isms, because their scripts won't ever be run on environments that don't have Bash.

Edit: Admittedly, someone in this thread pointed out the flaw in my argument, there are plenty of cases where you can't assume you have Bash. I still hold that proofing something for all possible environments is itself a YAGNI.


> Only use that for bashisms where there's no POSIX alternative

This seems like really bad advice because the number of people writing bash massively outnumbers the people writing sh. Regex matching, glob matching, proper parsing, &&/||, no need to quote.

I would say the opposite: enjoy all the bashisms like (( )), [[ ]], numeric for loops, extended globs, brace expansion, ranges, OH GOD YES VARIABLE REFERENCES, and only rewrite when you absolutely have to make it work on sh.


Almost nobody knows all those bashisms. Nobody on the team will be able to understand it or edit it. Avoid being fancy if there's an uglier, simpler thing.


> Some scripts are better to either have explicit error handling code, or simply never fail.

Silently ignoring sub-commands that exit with a non-zero code is not the same thing as "never failing". Your script might return 0, but that doesn't mean it did what you expect it to.

> Also, don't use set -o nounset when set -u will do.

`set -o nounset` is a lot easier to understand for the next person to read the script. Yes, you can always open the manpage if you don't remember, but that is certainly less convenient than having the option explained for you.

What shell are you using that doesn't support `set -o nounset`? Even my router (using OpenWRT+ash) understands the long-form version.

> Only use that for bashisms where there's no POSIX alternative

I totally disagree. You expect people to know the difference between `[[ ... ]]` and `[ ... ]` well enough to know when the bash version is required? You expect the next person to edit the script to know that if they change the condition, then they might need to switch from `[` to `[[`?

How do you even expect people to test which of the two they need? On many systems, `/bin/sh` is a link to `/bin/bash`, and the sh-compatibility mode of bash is hardly perfect. It's not necessarily going to catch a test that will fail in `ash` or `dash`.

I think the "YAGNI" applies to trying to support some hypothetical non-bash shell that more than 99% of scripts will never be run with. Just set your shebang to `#!/bin/bash` and be done with it.

I totally agree about `pipefail`, though. I got burned by this with a condition like the one below:

```
if (foo | grep -Eq '...'); then
```

Since `-q` causes grep to exit after the first match, the first command exited with an error code since the `stdout` pipe was broken.
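
For what it's worth, one way to dodge that particular interaction (just a sketch) is to let grep consume the whole stream instead of quitting early, so the producer never sees a broken pipe:

    # without -q, grep reads all of foo's output, so foo exits normally
    if foo | grep -E '...' >/dev/null; then
        echo "matched"
    fi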


> Silently ignoring sub-commands that exit with a non-zero code is not the same thing as "never failing".

Well I meant the former. Very useful for things like init scripts where you would prefer the script do as much as it can to get something online.

> What shell are you using that doesn't support `set -o nounset`?

You're right, this does appear to be in POSIX, so I guess it's fine. But it is unusual to see in my experience.

> You expect people to know the difference between `[[ ... ]]` and `[ ... ]` well enough to know what the bash version is required?

No, I want them to use POSIX semantics until they have to do something Bash-y. Simplicity when it doesn't cost anything extra is best practice.

> Just set your shebang to `#!/bin/bash` and be done with it.

Homebrew, Jenkins, Asdf, etc may provide their own version of Bash that is required rather than the system default, and some systems have no /bin/bash at all. So you should use #!/usr/bin/env bash for Bash scripts and #!/usr/bin/env sh for POSIX Shell scripts. This lets the user override the PATH with their required version of Bash for this script. (and the script itself can check for versions of Bash, and even re-exec itself)
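
A rough sketch of that last trick (the paths and version threshold here are made up for illustration):

    #!/usr/bin/env bash
    # if the bash found first in PATH is too old, try a newer one (e.g. from Homebrew)
    if [ -z "${_REEXECED:-}" ] && [ "${BASH_VERSINFO[0]}" -lt 4 ]; then
        for candidate in /opt/homebrew/bin/bash /usr/local/bin/bash; do
            if [ -x "$candidate" ]; then
                export _REEXECED=1
                exec "$candidate" "$0" "$@"
            fi
        done
    fi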


> The whole point of scripts is for them to be dead simple.

To that end, would it not make more sense to always use `[[ ... ]]` for conditions, when I know my .bash scripts will always be invoked by bash?

Consistency is simple.


>> "Use set -o errexit"

> Only if it doesn't matter that the script fails non-gracefully. Some scripts are better to either have explicit error handling code, or simply never fail.

Then handle those errors explicitly. The above will catch those error that you did not think about.


"Use [[ ]] for conditions"

Oh, how I hate the double square bracket. It is the source of many head-scratching bugs and much wasted time. "The script works on my machine!" It doesn't work in production, where we only have sh. It won't exit due to an error; the if statement will gobble the error. You only find the bug after enough bug reports hit that particular condition.

After a couple shots to the foot I avoid double square brackets at all cost.


This should be fixed with a shebang and shellcheck. If your shebang is #!/bin/sh, shellcheck will complain loudly about bash-isms. If production is sh and doesn't have bash, there are quite a few other bash-isms you want to check for. You can run shellcheck in CI to check your scripts and return non-zero if they aren't clean, and you can force off warnings for lines that are ok.

EDIT: I should have said, "could be fixed once and for all", "should" is just my opinion.


If I may ask, why do you only have sh in production?


A common containerization philosophy is to use a bare-minimum base image and add only what you need. Something like an Alpine container doesn't come with Bash.


Alpine docker container only comes with ash shell by default. If you don't use any Bash-isms, you can just write a POSIX shell script. Otherwise if you have many containers from many different sources, you might have to bake all new containers just to add Bash to them.


In this case production is a fleet of embedded devices.


Use a linter.

Pass all scripts through https://www.shellcheck.net/ or use `shellcheck` on the commandline.

Learn the things it tells you and implement them in future scripts.


That should be the zeroth rule of all shell/bash scripting.

I'm almost tempted to put in a self-linting line in scripts so that they won't run unless shellcheck passes completely. (It would be unnecessary to lint the same script every time it's called though, so it's not a serious suggestion).

There should be an option in bash to auto-lint scripts the first time that they're called, but I don't know how the OS should keep track of when the script was last changed and last linted.


It would be simpler to add flags to shellcheck that limit the kinds of warnings it produces, and then just run it on every invocation of the script. That keeps everything local and deterministic.


I usually fix the scripts so that shellcheck is completely happy. The occasional times that there's a warning that you know is not relevant, you can add a comment just above the offending line e.g.

# shellcheck disable=SC2154

(That's for not warning about using a variable that hasn't been defined)


Shellcheck is a godsend. I'm not a linuxy guy but have had to write some bash at work for gitlab pipelines... I was getting very frustrated with it until I found shellcheck and it instantly resolved a lot of annoyances I had. I added it into the pipeline for the repo that holds the CI scripts (using the koalaman/shellcheck-alpine docker image) and installed the VSCode extension locally. Super simple.


Thanks for the pointer! For some reason, I never looked for such a tool for shell scripts - and indeed, it pointed out a myriad of things in my code, most of which seem useful.


There's a bug in his template.

He suggests to `set -eu`, which is a good idea, but then immediately does this:

    if [[ "$1" =~ ^-*h(elp)?$ ]]; ...
If the script is given no arguments, this will exit with an unbound variable error. Instead, you want something like this:

    if [[ "${1-}" =~ ^-*h(elp)?$ ]]; then


Good catch. Fixing it.


> set -o errexit

Unfortunately, `errexit` is fairly subtle. For example

    [ "${some_var-}" ] && do_something
is a standard way to `do_something` only when `some_var` is non-empty. With `errexit`, naively, this should fail whenever `some_var` is empty, since `false && anything` is always false. However, `errexit` in later versions of Bash (and dash?) ignores this case, since the idiom is nice.

However! If that's the last line of a function, then the function's return code will inherit the exit code of that line, meaning that

    f(){ [ "${some_var-}" ] && do_something;}; f
will actually trigger `errexit` when `some_var` is empty, despite the code being functionally equivalent to the above, non-wrapped call.

Anyway, there are a few subtleties like this that are worth being aware of. This is a good, but dated, reference: https://mywiki.wooledge.org/BashFAQ/105


I'm not convinced about having shell scripts end with ".sh" as you may be writing a simple command style script and shouldn't have to know or worry about what language it's using.

I'm a fan of using BASH3 boilerplate: https://bash3boilerplate.sh/

It's standalone, so you just start a script using it as a template and delete the bits you don't want. To my mind, the best feature is having consistent logging functions, so you're encouraged to put in lots of debug commands to output variable contents; when you change LOG_LEVEL, all the extraneous info doesn't get shown, so there's no need to remove debug statements at all.

The other advantage is the option parsing, although I don't like the way that options have to have a short option (e.g. -a) - I'd prefer to just use long options.


TIL about bash3boilerplate, thanks! Going to check it out.


This is fantastic! I'm going to start using this as the base for all my scripts, and will also start using shellcheck (on CI as well).


It makes things so much easier. I end up putting in loads of debug statements as I'm writing the script and it just saves time in the long run.


> If appropriate, change to the script’s directory close to the start of the script.

> And it’s usually always appropriate.

I wouldn't think so. You don't know where your script will be called from, and many times the parameters to the script are file paths, which are relative to the caller's path. So you usually don't want to do it.

I collected many tips&tricks from my experience with shell scripts that you may also find useful: https://raimonster.com/scripting-field-guide/


I agree. I make an effort not to change directory wherever possible, and if a change is needed, I do it in a subshell and just for the command that needs it (hardly any commands actually need it, anyway).
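
Something along these lines (a trivial sketch):

    # the cd only affects the subshell; the caller's working directory is untouched
    ( cd /path/to/workdir && some_command --that-needs-relative-paths )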

Edit: just had a quick look at your recommended link and spotted a "mistake" in 4.7 - using "read" without "-r" would get caught out by shellcheck.


Fixed, thanks!


In e.g. "Read the great Oil Shell blogpost." it's not clear there's a link there: the "blogpost" is a link but you only see that if you hover your mouse.


Oh, I hadn't noticed that links are not highlighted as such (unless already visited).

Fixed, thanks!


Use the shell only if your script is mostly about calling other programs and filtering and redirecting their output. That's what the syntax of these languages is optimised for. As soon as you need any data manipulation (i.e. arrays, computation, etc.) it becomes a pain and Python is the much better fit.



I've had Perl change its syntax on me and break my pet monitoring system too often to make me feel good about Perl. It hasn't been too hard to keep it running (20 years so far), but starting fresh I'd probably use Python.

Python is much cleaner code than Perl for the most part as well.

However, for anything that should run forever, make sure you have a copy of all of its source code AND its libraries AND the source code for its compiler. Repo rot is a serious problem over time.


AWK is just fine for data manipulation. And unlike python, you don't need to worry about whether it's installed and in what version.


I see AWK as a general-purpose output filter and aggregator, something like a programmable grep. You read another program's output line by line, and output it filtered and/or summarised. Works nice when the format is known, but in case it isn't, error handling and recovery is not something I would enjoy doing in AWK.
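
For example, the kind of one-liner I mean (input format made up for illustration):

    # count ERROR lines and sum the value in their 3rd field
    some_command | awk '/ERROR/ { n++; total += $3 } END { print n+0, total+0 }'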


Isn't awk that thing that has different versions with entirely different features and syntaxes? I think one of them ships with mac and the other with, uh, everything I've ever used anyway (maybe not *bsd or something). Or was that sed?

I only ever use it on my own systems so awk works fine for me and I use it regularly, but iirc it's not true that you don't have to worry about versions.

Python is where I don't worry about versions. Everyone's got python3 by now (word got round) and most basics, like print(), works just fine in 2.7 (the main py2 backwards compatibility thing I run into is bytes vs unicode strings; if the script needs to work with raw bytes, you'll just need any python3 version). The issue I run into is with Windows people not having python installed, and (worse) not being able to install it in 30 seconds with one command, but that would be the same with awk.


Arrays are useful for arguments, e.g.

  FOO_ARGS=(
    # Some explanatory comment
    --my-arg 'some value'
    # More comments
    some other args
    #...
  )
  myCondition && FOO_ARGS+=(some conditional args)
  foo "${FOO_ARGS[@]}"


I favor POSIX and dash over bash, because POSIX is more portable.

If a shell script needs any kind of functionality beyond POSIX, then that's a good time to upgrade to a higher-structure programming language.

Here's my related list of shell script tactics:

http://github.com/sixarm/unix-shell-script-tactics


I've rewritten a lot of shell scripts with awk. Obviously it's not a good fit for everything, but when it is a good fit I found it a very pleasant experience. In spite of using Unix systems for 20 years I only learned awk a few years ago and I really beat myself up for not learning it earlier.


Convince me to up my game in awk!

I only use it to select the n'th word in a csv-like line. Anything more than that, I need to search stackoverflow for the invocation.

Don't you find its syntax cumbersome?


> Don't you find its syntax cumbersome?

Not really; it just seems the same as most other dynamic languages. Awk does a lot of stuff for you (the "implied loop" your program runs in, field splitting) that's certainly possible (even easy) to replicate in Python or Ruby, but Awk is just so much more convenient.

I use it for things like processing the Unicode data files, making some program output a bit nicer (e.g. go test -bench), ad-hoc spreadsheets, few other things. I got started with it as I needed to process some C header files and the existing script for that was in Awk; it worked pretty well for that too.

The Awk Programming Language book is pretty good. GNU Awk has a bunch of very useful extensions, but pretty much everything in the book still works and is useful today. You can get it at e.g. https://archive.org/details/awkprogrammingla00ahoa or https://github.com/teamwipro/learn_programing/blob/master/sh...

The GNU Awk docs are also pretty decent.


> I only use it to select the n'th word in a csv-like line.

If it's not a CSV with quotes to allow for commas inside values (which I think AWK will also fail in), you can use `cut`.

    cut -d, -f n
To delimit on a comma and select the n'th field. Reads a bit easier IMO for that common AWK use case (which I used to use AWK for all the time).


I see this argument a great deal here on Hacker News. I think it's more important that the scripter understands their use case and develops accordingly. POSIX portability is almost never a factor for the things that I develop, because we have a near-monoculture in terms of operating system and version.

For me, the additional features that bash provides are much more important than a portability that I'll never need to use.


POSIX sucks when shells don't implement it correctly. POSIX says that PS1 expansion needs to support at least the bang ('!') expansion and regular parameter expansion ('$' and '${'), but I found that several kinds of Almquist shells don't support that even when they are explicitly POSIX-compliant, dash included.

Bash in --posix mode does that perfectly.


Have you ever run your code in a place where bash wasn't available? I held the same view... 10 years ago, but that really doesn't happen, and if it does, the rest of it probably won't work either...


Yes. Alpine ash shell (the default), macOS zsh shell (the default), and Oracle Solaris sh shell (the default). The systems are enterprise-regulated, so a typical user cannot easily install a different shell. POSIX works great.


Bash extensions are cool to have in the interpreter. I really don't think we need more than the basic POSIX shell for most scripts. I once wrote a tar replacement in it, with a restricted YAML generator and parser and a state machine — don’t judge me, I think I was manic — and the result was weirdly beautiful.


The lack of arrays, dicts and local variables when trying to be POSIX-compliant quickly becomes annoying when writing big programs, though. There are of course workarounds for those, but I wish we didn't have to use them.
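
For example, the classic workaround for a single list is to reuse the positional parameters (a quick sketch):

    # POSIX sh: "$@" is the only real "array" you get
    set -- "first item" "second item" "third item"
    for item in "$@"; do
        printf '%s\n' "$item"
    done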


Needing arrays, dicts, and local variables is a strong hint that you need something more capable than a shell. So is calling the artefact a program.


> [[ ]] is a bash builtin, and is more powerful than [ ] or test.

Agreed on the powerful bit, however [[ ]] is not a "builtin" (whereas [ and test are builtins in bash), it's a reserved word which is more similar to if and while.

That's why [[ ]] can break some rules that builtins cannot, such as `[[ 1 = 1 && 2 = 2 ]]` (vs `[ 1 = 1 ] && [ 2 = 2 ]` or `[ 1 = 1 -a 2 = 2 ]`, -a being deprecated).

Builtins should be considered as common commands (like ls or xargs) since they cannot bypass some fundamental shell parsing rules (assignment builtins being an exception); the main advantages of being a builtin are speed (no fork needed) and access to the current shell process env (e.g. read being able to assign a variable in the current process).


Thanks. Didn't know the word builtin had a specific meaning in bash, which, seems obvious now in hindsight. Should be fixed soon.


This is not a best-practices guide; please refer instead to: https://mywiki.wooledge.org/BashGuide

For example, using cd "$(dirname "$0")" to get the script's location is not reliable; you could use a more sophisticated option such as: $(dirname $BASH_SOURCE)


And THIS is the primary source of my furious hate in my toxic love-hate relationship with bash. The guy has been writing bash FOR 10 FUCKING YEARS and is still apparently doing it wrong in a 10-letter one-liner.

When it comes to bash, a search for even the simplest command/syntax always ALWAYS leads to a stackoverflow thread with 50 answers where bash wizards pull one-liners from their sleeves and nitpick and argue about various intricacies


It's a case of knowing the wooledge website (and working with shellcheck), or not. Picking snippets on stackoverflow will probably do more harm than good, tbh.


10 years of doing something wrong doesn't make it any better. 10 years of doing something and RTFM instead of random blog posts and SOF may help.


This is the way.

I'm more likely to use the BashFAQ though for actual snippets: https://mywiki.wooledge.org/BashFAQ

I start scripts from the very useful template at https://bash3boilerplate.sh/



Thank you for sharing. Can you provide an example where "$0" != "$BASH_SOURCE"?

Also, did you mean to write...?

    "$(dirname "$BASH_SOURCE")"


Please read: https://mywiki.wooledge.org/BashFAQ/028 this will explain why it's not recommended to use $0


Thanks for sharing. The only actual useful thing I got from this post.


Maybe the discussion should start at: Can you even do anything safely in Bash? - https://mywiki.wooledge.org/BashPitfalls


That's an excellent resource.

Luckily, most commonly encountered scripting issues are with whitespace in filenames/variables and running a script through shellcheck will catch most (all?) of those problems.

It's amazing how edge cases can make a simple command such as 'echo' break. (Top tip - use printf instead of echo)


echo has long been unreliable. Even the built-in echo in the shells was unreliable in SunOS, because the shell would look at the binaries in your PATH and try to figure out whether to emulate the BSD or SysV (IIRC) version of echo, and then change what echo would do. So much for writing a single script (with echo) that would work for all your users on the same host.

This is why you'll see code like this: echo 'prompt: ' | tr -d '\012'

No other simple mechanism was portable at the time. Seriously portability-minded coders still use that line, because although the issue is finally dead in linux+bash (i.e. /bin/echo is enough like bash's builtin) - it's likely still broken in other Unixen out there.


echo is unreliable; I agree. Instead, I use "paranoid" printf with leading double dash:

    printf -- "fmt str here..." "$carefully" "$quoted" "$args"


printf -- "fmt str here..." "${carefully}" "${quoted}" "${args}"


What is the difference?


In this example: None


That list seems to be loosely sorted by obscurity. I knew the first 29! What's your highscore?


> For copy-paste: if [[ -n "${TRACE-}" ]]; then set -o xtrace; fi

> People can now enable debug mode, by running your script as TRACE=1 ./script.sh instead of ./script.sh.

The above "if" condition will set xtrace even when user explicitly disables trace by setting TRACE=0.

A correct way of doing this would be:

    if [[ "${TRACE-0}" == "1" ]]; then set -o xtrace; fi


Excellent point. Thanks for this. Fixing it.


What would be the justification for 'cd "$(dirname "$0")"'? Going to the script's directory does not seem very helpful. If I don't care about the current directory, I might just go to '/' or a temporary directory; when I do care about it, I'd better stay in it, or interpreting relative command-line arguments is going to get difficult. When symbolic links are involved, dirname will also give the wrong directory.


To be fair, it's common to want to be in the script directory for certain classes of scripts. For example, scripts which automate some tasks in a project, and are written for a project.

But, more importantly, people will google how to set the cwd to the script's directory more often than they will google how to go to an absolute path. Having 'cd "$(dirname "$0")"' as a reference in an article discussing best practices, on the topic of changing the directory early, is a good idea.


And for certain classes of users, certainly. If I were in $prj/some/dir and called “../../script.sh foo”, I’d expect it to operate on $prj/some/dir/foo, not on $prj/foo. The latter would be a confusing practice, not even remotely the best one.

> people will google for how to set cwd to the script directory more often

The answer should suggest setting $script_dir instead of chdir and refer to it when needed, explaining why chdir is a wrong shell mindset except for a very few special cases. It’s okay for personal use, but inheriting such scripts would be an awful experience, imo.


I can only agree with this. In my experience, everybody coming from a lifetime with Windows has to learn that a "working directory" has a very real and everyday meaning in Linux. Windows software just isn't designed that way because it usually has GUIs and "Open file" dialogs and mostly doesn't use the cwd. Most shortcuts even execute software in their installation directory (because it's most compatible with developers using relative paths for their assets and ignoring the existence of the cwd mechanic altogether, I guess?).

So for somebody just coming from Windows, this isn't just the wrong mindset for shell scripts, more dangerously it's a mindset they find appealing, because it matches their prior experience on Windows better.


My theory is that this same Windows experience is also how so many joined the command-with-filename-extension cargo cult. But unix and windows work differently, and the approach does NOT port.


> To be fair, it's common to want to be in the script directory for certain classes of scripts. For example, scripts which automate some tasks in a project, and are written for a project

I think it would be more correct to just use vcs to get the root of a project (or fail if you can't), instead of essentially hardcoding path to the script.

For example, if you put your script in helpers/ and someone else then does a refactor and moves all of the stuff into cmd/helpers/, any relative reference you put into the script is now invalid and your script is doing the wrong thing


Those scripts are exactly the ones where I don't want to be in the script's directory. Something like this is more like what I use:

    project=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
ie, find the top of the git project, if there isn't one, use the current dir. My scripts live in ~/bin or similar and aren't where I want them to run.


It's sometimes a bit convenient if you want to read a file from the directory the script is stored in. Overall I found it more confusing and awkward than anything else, and prefer setting it explicitly. It's still okay for a quick script though, but as general "best practices" advice: meh.


I also think so. Often a script needs to access a file in the actual current dir (for example a config file) or process files with relative paths supplied by the user, and changing the working dir makes this hard.

I think an easier way is to find script's location and construct paths for accessing script dependencies, for example (works on Linux & macOS):

  script_root="$(cd "$(dirname "$(readlink "$([[ "${OSTYPE}" == linux* ]] && echo "-f")" "$0")")"; pwd)"
  source "${script_root}/common.sh"
  source "${script_root}/packages.sh"
  source "${script_root}/colors.sh"


I agree, getting to know where a script "comes from" can be complex though. You can `readlink -f` (or equivalent) in many cases, but when implementing a library this might not be entirely practical. I have had to rely on this ugly if-statement [1] for that purpose.

  [1]: https://github.com/Mitigram/mg.sh/blob/cbeb206d67fe08be2107deee50acf877f990dbdf/bootstrap.sh#L6


No, this is pretty meaningless unless sourcing in function.sh or something. If your script is short lived and doesn't have a work area (reading/writing files), don't bother with cd. If it's long lived, the best default would be "cd /" or "cd /tmp" - sadly, since bash seems to mmap the script, this still doesn't free up the filesystem for unmounting. Python is different, and "cd /" is a good default for a long-lived program.


I think BASH scripting is the opposite of riding a bike - you end up re-learning it almost every time you need to do it


Then you haven't learned it, or you need it no more than once a year for 15 minutes maybe?

My girlfriend complained about Firefox aalllways needing updates every time she starts it. Yeah, because she used Chrome most of the time, if you start Firefox once every other month, of course that's going to happen every time. This sounds like a similar issue: the software may not be the friendliest, but you can't really expect another outcome if you never use it because you don't like it because you never use it.


What really put a stick in my spokes early on was not realising how whitespace acts differently from what I was used to. "syntax error near unexpected token"? I was missing a space inside a [[ ]] - I started paying closer attention; this isn't javascript.


Use shellcheck as a linter for your scripts as that'll catch stuff like that.


Instead of implementing a -h or --help, consider using some code like "if nothing else matches, display the help". The asterisk is for this purpose.

  while getopts :hvr:e: opt
  do
      case $opt in
          v)
              verbose=true
              ;;
          e)
              option_e="$OPTARG"
              ;;
          r)
              option_r="$option_r $OPTARG"
              ;;
          h)
              usage
              exit 1
              ;;
          *)
              echo "Invalid option: -$OPTARG" >&2
              usage # call some echos to display docs or something...
              exit 2
              ;;
      esac
  done


My request: usage on error must be output to STDERR, but -h must be to STDOUT


Why not both (or all three)? That's what I do.

When I get to a new command I find it a bit anti-social when it takes effort to find the help.


I find it really annoying when I typo an argument and now my shell scrollback is pooped full of help text and you first have to scroll up to find the actual error message (like "invalid choice for --mode" or whatever). Don't remember the most recent offender, but it's typically ancient software that is not in widespread use that does this. Often C or Perl (maybe because those languages are also the oldest).

Running without any arguments? Yes, that should output info in most cases, identical to -(-)h(elp) or even /? and /h(elp) if you're feeling Windowsey that day. Outputting your full usage info, especially when spanning more than half a terminal in full screen on a modern resolution, when "nothing matches"? Please no.


Relying on errexit to save one from disaster is also often fatal, except for surpassingly simple scripts. While inside of many different kinds of control structures, the errexit is disabled, and usually just provides a false sense of security.

For someone who knows errexit can't be trusted, and codes defensively anyway, it's fine.
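
A minimal sketch of the kind of surprise meant here (per the bash manual, -e is ignored for commands tested by if/while/until/&&/||):

  set -o errexit
  check() {
    false               # would normally abort the script...
    echo "still here"   # ...but runs anyway, because the caller is an 'if' test
  }
  if check; then
    echo "check passed"
  fi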



I can highly recommend Greg's wiki/BASH faq: https://mywiki.wooledge.org/BashFAQ

Now when I'm processing files with BASH, I nearly always end up copying stuff from there as it just bypasses common errors such as not handling whitespace or filenames that contain line breaks.


I agree with basically all of this. A few more:

The order of commandline args shouldn't matter.

Env vars are better at passing key/value inputs than commandline arguments are.

Process-substitution can often be used to avoid intermediate files, e.g. `diff some-file <(some command)` rather than `some command > temp; diff some-file temp`

If you're making intermediate files, make a temp dir and `cd` into that

- Delete temp dirs using an exit trap (more reliable than e.g. putting it at the end of the script)

- It may be useful to copy `$PWD` into a variable before changing directory

Be aware of subshells and scope. For example, if we pipe into a loop, the loop is running in a sub-shell, and hence its state will be discarded afterwards:

  LINE_COUNT=0
  some command | while read -r X
  do
    # This runs in a sub-shell; it inherits the initial LINE_COUNT from the parent,
    # but any mutations are limited to the sub-shell, will be discarded
    (( LINE_COUNT++ ))
  done
  echo "$LINE_COUNT"  # This will echo '0', since the incremented version was discarded
Process-substitution can help with this, e.g.

  LINE_COUNT=0
  while read -r X
  do
    # This runs in the main shell; its increments will remain afterwards
    (( LINE_COUNT++ ))
  done < <(some command)
  echo "$LINE_COUNT"  # This will echo the number of lines outputted by 'some command'


> - It may be useful to copy `$PWD` into a variable before changing directory

$OLDPWD is set when you 'cd'. Also 'cd -' will take you back to the last directory.

https://pubs.opengroup.org/onlinepubs/9699919799/


> The order of commandline args shouldn't matter.

Ugh I've seen so many (bash and non bash) cmdline tools that made it utterly annoying like that

The special place in hell goes to people who force users to write

    cmd help subcmd
instead of

    cmd subcmd --help
or ones that do not allow doing say

    cmd subcmd --verbose
because "verbose is global and doesn't belong to subcmd`

or ones where you need to write

    cmd --option1 subcmd --option2 subsubcomd --option3
and need to jump all over the cmdline if you want to add some option after previous invocation

and if you go "well but the option for command and subcommand might have same name" DONT NAME THEM THE SAME, that's just confusing people and search results.


What's interesting with this example of command1 | command2 is that some shells such as zsh will optimize the last member of the pipeline to be executed in the current process (nothing mandated by POSIX here), so effectively this works on zsh.
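
A quick way to see the difference (a sketch; behaviour depends on the shell):

  unset x
  echo hello | read -r x
  echo "${x:-empty}"   # bash: 'empty' (read ran in a subshell); zsh: 'hello'
Bash can opt into similar behaviour with shopt -s lastpipe (when job control is off, i.e. typical non-interactive scripts), but by default it discards the subshell's state.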


> - It may be useful to copy `$PWD` into a variable before changing directory

Why not use pushd/popd instead?


pushd and popd are commands which change the current working directory. In contrast, variables like $PWD are expressions, which are far more flexible. For example, we can run commands like `diff "$OLD_PWD/foo" "$PWD/bar"` which reference multiple directories. Doing that with pushd/popd would be weird, e.g. `diff "$(popd; echo "$PWD/foo")" "$PWD/bar"`


I fully agree. The parent wrote "before changing directory", so I assumed we are talking about the case where changing directory was necessary.


a) if pushd fails you are doing things not in the target directory, and when you call popd you are now in a totally wrong place. set -o errexit should handle this, but there could be situations (at least theoretically) when you disable it or didn't enable it in the first place

b) you need to mentally keep the stack in your head when you write the script. And anyone else who would be reading your script. (Edit: including yourself a couple of months/years later)

c) pushd $script_invocation_path is easier to understand and remember.

Eg:

    $global:MainScriptRoot = $PSScriptRoot
    $global:configPath     = Join-Path $PSScriptRoot config
    $global:dataPath       = Join-Path $PSScriptRoot data

    $dirsToProcess = gci -Path $PSScriptRoot -Directory | ? Name -Match '\d+-\w+' | Sort-Object Name
    foreach ($thisDir in $dirsToProcess) {
        # assuming the per-directory scripts to dot-source are .ps1 files
        $moduleFiles = gci -Path $thisDir.FullName -Filter *.ps1 | Sort-Object Name
        foreach ($thisFile in $moduleFiles) {
            . $thisFile.FullName
        }
    }
It's PowerShell, but the same idea. I use it in a couple of scripts, which call other scripts.


I'd strongly recommend using /tmp/* temp dirs to save your SSD and speed up things.


Master Foo once said to a visiting programmer: “There is more Unix-nature in one line of shell script than there is in ten thousand lines of C.”

The programmer, who was very proud of his mastery of C, said: “How can this be? C is the language in which the very kernel of Unix is implemented!”

Master Foo replied: “That is so. Nevertheless, there is more Unix-nature in one line of shell script than there is in ten thousand lines of C.”

The programmer grew distressed. “But through the C language we experience the enlightenment of the Patriarch Ritchie! We become as one with the operating system and the machine, reaping matchless performance!”

Master Foo replied: “All that you say is true. But there is still more Unix-nature in one line of shell script than there is in ten thousand lines of C.”

The programmer scoffed at Master Foo and rose to depart. But Master Foo nodded to his student Nubi, who wrote a line of shell script on a nearby whiteboard, and said: “Master programmer, consider this pipeline. Implemented in pure C, would it not span ten thousand lines?”

The programmer muttered through his beard, contemplating what Nubi had written. Finally he agreed that it was so.

“And how many hours would you require to implement and debug that C program?” asked Nubi.

“Many,” admitted the visiting programmer. “But only a fool would spend the time to do that when so many more worthy tasks await him.”

“And who better understands the Unix-nature?” Master Foo asked. “Is it he who writes the ten thousand lines, or he who, perceiving the emptiness of the task, gains merit by not coding?”

Upon hearing this, the programmer was enlightened.

(https://catb.org/~esr/writings/unix-koans/)


Personally I try to stick with POSIX sh (testing with dash), if I need anything fancier, I reach for Perl or Python.


POSIX sh also yields better performance, provided you're using dash.


I ran some tests some time ago, and the differences are pretty minimal unless you start doing comp-sci-y stuff in shell scripts. But for the type of thing that people typically use shell scripts for: it makes basically no meaningful difference.


> Check if the first arg is `-h` or `--help` or `help` or just `h` or even `-help`, and in all these cases, print help text and exit.

`-h` and `--help` are fine. But `help` and `h` should only display the help if the script has subcommands (like `git`, which has `git commit` as a subcommand). Scripts that don't have subcommands should treat `h` and `help` as regular arguments -- imagine if `cp h h.bak` displayed a help message instead of copying the file named "h"!

I wouldn't encourage `-help` for displaying the help because it conflicts with the syntax for a group of single-letter options (though if `-h` displays the help, there is no legitimate reason for grouping `-h` with other options).

And ideally scripts that support both option and non-option arguments should allow `--` to separate them (e.g. `rm -- --help` removes the file called "--help"). But parsing options is complicated and probably out of scope for this article.

> If appropriate, change to the script’s directory close to the start of the script. And it’s usually always appropriate.

This is very problematic if the script accepts paths as arguments, because the user would (rightly) expect paths to be interpreted relative to the original working directory rather than the script's location. A more robust approach is to compute the script's location and store it in a variable, then explicitly prepend this variable when you want paths to be relative to the script's location.
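
A minimal sketch of that approach (the file names here are just placeholders):

  script_dir="$(cd "$(dirname "$0")" && pwd)"
  # Files the script owns are addressed relative to the script...
  . "$script_dir/lib/common.sh"
  # ...while user-supplied paths keep resolving against the caller's cwd:
  cp -- "$1" "$script_dir/templates/"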


There is an error with the template script on a fully patched m1 macbook. $1 is unbound, unless you provide an argument. This seems to be an utterly basic oversight for a template script from someone attempting to educate on bash's best practices. Especially true for seeking a "good balance between portability and DX".


You say fully patched. Does that include upgrading bash from 3.2 released in 2007?


Yes. Version 5.1.16(1).


Shell scripts are great for executing a series of commands with branching and looping logic around them.

As soon as output needs to be parsed — especially when it’s being fed back into other parts of the script — it gets harder. Handling errors and exceptions is even more difficult.

Things really fall down on modularity. There are tricks and conventions: for example you can put all functions to do with x in a file called lib/x.sh, prefix them all with x_, and require that all positional parameters must be declared at the top of each function with local names.

At that point though, I would rather move to a language with named parameters, namespaced modules, and exception handling. In Python, it’s really easy to do the shell bits with:

  import subprocess

  def sh(script, *args):
    subprocess.run(
      ['sh', '-c', script, '--', *args],
      check=True,
    )
which will let you pass in arguments with spaces and be able to access them as properly lexed arguments in $1, $2 etc in your script. You can even preprocess the script to be prefixed with all the usual set -exuo pipefail stuff etc.


> "15. Use shellcheck. Heed its warnings."

(Disclaimer: I'm one of the authors)

After falling in love with ShellCheck several years ago, with the help of another person, I made the ShellCheck REPL tool for Bash:

  https://github.com/HenrikBengtsson/shellcheck-repl  

It runs ShellCheck on the commands you type at the Bash prompt as soon as you hit ENTER.

I found it to be an excellent way of learning about pitfalls and best practices in Bash as you type, because it gives you instant feedback on possible mistakes. It won't execute the command until the ShellCheck issues are fixed, e.g. missing quotes, use of undefined variables, or incorrect array syntax.

It's designed to be flexible, e.g. you can configure ShellCheck rules to be ignored, and you can force execution by adding two spaces at the end.

License: ISC (similar to MIT). Please help improve it by giving feedback, bug reports, feature requests, PRs, etc.


My biggest complaint about "idiomatic" shell scripting is the use of the [ and [[ operators. It gives the illusion that [ or [[ are part of the shell syntax when actually they're just programs / builtins / functions which communicate with the rest of the script the same way (most) other things interact -- setting exit status. Specifically this means if .. then .. fi works with any program not just [ [[ operators.

Traditional shell might be:

  grep -q thing < file
  if [ $? -eq 0 ] ; then echo "thing is there" ; fi
  
VS just using if to look at the ES of the prior program

  if
    grep -q thing < file
  then
    echo "thing is there"
  fi

"test" and [[ are a fine programs / tools for evaluating strings, looking at file system permissions, doing light math, but it isn't the only way to interact with conditionals.


Nice tip!


I would add a "zero" best practice: don't. If you're thinking about writing enough shell script that it is worth putting it in a file, consider other languages or tools.

I'm not saying *never* write shell scripts, but always consider doing something else, or at least add a TODO, or issue, to write in a more robust language.


Other tips for working with files:

always quote filenames, because you never know if there's a space in them.

filenames with dashes or periods will kill you

prepend "./" to current-directory filenames when manipulating files, because the file might start with a period or dash

Dashes in filenames still might kill you, especially if you pass those to another command


If you are on Windows or Linux, Powershell is a decent scripting language that comfortably replaces Shell for scripted task running. The commands are vastly more readable and you get an okay experience with branches.

I'd also say that in most cases Python is also a better choice, especially when you use the ! syntax.


The issue I have with powershell is the extreme verbosity.

But that's just a personal thing and not something that I can really blame the language for.


The verbosity is optional; most examples you see will be verbose in an effort to be more clear (i.e. Get-ChildItem vs 'gci') but when you have a little experience typing with Powershell, you'll find the verbosity basically goes away, because you'll be familiar with using aliases and because you won't need abstruse tools and sublanguages (which are more verbose) to do filtering and processing:

  gci *.txt | %{ $tot += $_.length }; echo $tot
That's not more verbose than bash (one way of doing this):

  ls -l *.txt | awk '{ tot += $5 } END { print tot }'
So you'll see the pipeline written (in an example, for clarity), more like:

  Get-ChildItem *.txt | ForEach-Object { $tot += $_.length }; Write-Output $tot
But that's not how you'd usually use it; until/unless you're putting it in a script.

Note that 'ls -l' and guessing that you want to total up field 5 is brittle in way that the Powershell snippet isn't, but I'm leaving that issue aside.


The verbosity might be annoying at first, but it does make long pipelines easier to read. This enforced verbosity makes it easier to read than a linux pipeline using some arcane `qw -eRTy` command with no rhyme or reason.


Not to me, powershell is a pain to write and read with my heavy dyslexia. But again, this is personal.

Pretty sure there are other dyslexics who will find that it helps them. So I guess this all varies from person to person


So, as a fellow dyslexic. Powershell supports complete tab-completion for arguments (even with custom commands), and that includes doing things like writing "get-*" and hitting tab to see possible commands.


Powershell works great on MacOS; it's my primary shell.


One of my favorite shell script snippets is prepending a timestamp to every output line with the help of the ts command from the moreutils package, while writing to a log file at the same time: https://unix.stackexchange.com/questions/26728/prepending-a-...

exec &> >( ts '[%Y-%m-%d.%H:%M:%S] ' | tee "${LOGFILENAME}" )


For systems that I control myself I much prefer to avoid Bash/sh. They’re just too clunky. And if I need to use them, I try to do as little as possible in order to make it more robust.

Case in point: Declaring an array. IMHO, it’s just not ergonomic at all. Especially not in sh/dash.


Some things to add:

* use bats for testing

* use shfmt for code formatting

* use shellcheck for linting


Author mentions using xtrace aka `set -x`. If using xtrace I highly recommend doing:

    export PS4='+ ${BASH_SOURCE:-}:${FUNCNAME[0]:-}:L${LINENO:-}:   '
This will then prefix the filename, function name, and line number to each command in the xtrace output. Can make it much easier to find where exactly something is happening when working with larger bash scripts.


One missing for me: when doing anything with numbers, use shell arithmetic $(()) and (()) instead of [[]] to be explicit: https://tldp.org/LDP/abs/html/arithexp.html
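
For example (a small sketch):

  count=7
  if (( count > 5 )); then echo "big"; fi   # arithmetic test, C-like operators
  total=$(( count * 3 + 1 ))                # arithmetic expansion
  # vs. [[ "$count" -gt 5 ]], which works but reads as a string test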


There is a VS Code extension[0] for Shellcheck that works just like ESLint. Very helpful when writing Bash scripts.

[0]: https://marketplace.visualstudio.com/items?itemName=timonwon...


I get my "best practices" from here: https://tldp.org/LDP/abs/html/index.html

I think this site is amazing, and it must be older than at least two decades.


I used to refer to that all the time, but it doesn't have newer bashisms (shell != bash).

A better resource is https://mywiki.wooledge.org/BashGuide Also, a preliminary read of https://mywiki.wooledge.org/BashPitfalls is advised.

Using shellcheck as a bash/shell linter is the ultimate. When you get a new warning, you can look up the code and learn why it's complaining.


I've always had good results following "Unofficial Bash Strict Mode": http://redsymbol.net/articles/unofficial-bash-strict-mode/


Missing: When using "set -o pipefail" you should also catch any non-zero return codes that you want to accept, e.g., "{ grep -o pattern file || true ; } | sed pattern" to let the command continue (if desired) to execute even if pattern isn't found.


Google Shell Guidelines are really good if someone is looking for good practice and clean code.


I'm surprised that no one mentioned a pair of tiny functions to log each command before it is run. Nicer versions also print a timestamp. Of course, this setup assumes: set -e

    echo_command()
    {
        echo
        echo '$' "$@"
    }

    echo_and_run_command()
    {
        echo_command "$@"
        "$@"
    }
Then something like:

    main()
    {
        # For simple commands that do not use | < > etc.
        echo_and_run_command cp --verbose ...

        # More complex commands
        echo_command grep ... '|' find ...
        grep ... | find ...
    }

    main "$@"


I feel like I'm spamming these comments with this, but check out https://bash3boilerplate.sh/ for a much better logging system along with a neat way of parsing options. You define the usage and help section like so to define your options:

  ### Usage and help - change this for each script
  ##############################################################################

  # shellcheck disable=SC2015
  [[ "${__usage+x}" ]] || read -r -d '' __usage <<-'EOF' || true # exits non-zero when EOF encountered
    -t --timestamps       Enable timestamps in output
    -v --verbose          Enable verbose mode, print script as it is executed
    -d --debug            Enables debug mode
    -h --help             This page
    -n --no-color         Disable color output

  EOF
Then you get to refer to ${arg_t} for the --timestamps option etc.


It does not address timestamps, but

  set -x
does this seamlessly without cluttering up your script. You can even run your script with

  sh -x script
If you didn't always want the logging output.


In addition to set -x, in scripts where record keeping is helpful I have taken to wrapping my main entry point in a subshell and piping all output to a function that tees its output to a log file (after cleaning up escape sequences), so I can generate a log file without having to annotate every line with some kind of wrapper:

  #!/usr/bin/env bash
  (
    foo
    bar
  ) 2>&1 | print_and_log "$logfile"


My personal best practice for using shell scripts:

Don’t.

Use a proper programming language instead. bash (and similar scripting languages) are non-portable (Linux/Windows) and the opposite of what I want in a good programming language.


In my experience, the best practice is to implement all the non-trivial logic in the actual program or a Python script, and use shell script only for very straight-forward stuff like handling command-line arguments, environment variables and paths.


Best practice for shell scripting: don't.


What I came here to say.


Shell scripting feels like scripting in cursive. It’s obfuscatory (sed? curl? grep? ssssuper descriptive) for the sake of thinking about solving problems like the elder generation. To make matters worse, you’re not even doing hard computer science things most of the time. You’re tweaking bits in files, uploading/downloading, searching, etc. It’s like having a butler who only understands your grocery list if it’s written in cursive.

I agree we need a shell scripting language, I disagree that bash zsh or anything that frequently uses double square brackets and awful program names is the epitome of shell scripting language design.


does a mandalorian worry if another can wear their armor? no, it's just for them.

giving up on the notion "others will use or collaborate with my scripts" was the single most productive thing i've done for my scripting.


love this


Define a cleanup function to nicely handle SIGTERM/SIGKILL/... maybe?


You can't trap SIGKILL.


Thank you, you are right. SIGTERM and SIGINT then :) https://tldp.org/LDP/Bash-Beginners-Guide/html/sect_12_02.ht...


> Use set -o errexit at the start of your script. [...]

A couple of days ago this link was posted to hn http://mywiki.wooledge.org/BashFAQ/105

It showed me once again how little bash I know even after all those years. I checked the examples to see if only set -e is dangerous or also set -o errexit like the author suggested, and sure enough it's just as bad as set -e. You've just got to thoroughly check your bash scripts and do proper error handling.


I wrote a fair number of bash scripts, and the area where they're definitely weaker than using a mainstream programming language is debugging. If something goes wrong in a large script, it is not only harder to figure out why, but sometimes the error may be in one of a dozen native UNIX commands that have nothing to do with bash. The interaction between the shell and these UNIX commands is the weak point in the process and you can spend a long time trying to figure out what is really going on.


No command should have an extension. And - quite notably - almost none do.

Adding an extension to make it easier to tell what's inside without opening it is being lazy rather than following best practices. Best practice is half century of leaving them off.

Unlike Windows, which ignores extensions and lets you run a command omitting them, Unix has a better (I'm not saying perfect) approach which allows the metadata to be pulled from the first line of the file, tuned exactly to what the script needs. No sane extension is going to capture this info well.

Extensions expose (usually incompletely) the implementation details of what's inside, to the detriment of the humans using them (the OS doesn't care), who will then guess at what the extension means.

However, many extensions are WRONG, or too vague to actually tell what interpreter to call on them - which this subgroup of devs does all the time, most commonly using the wrong version of python (wrong major, wrong minor, not from a specific python env) and breaking things. .sh is manifestly wrong as an extension for Bash scripts, which have different syntax.

The exception is scripts that should be "."-ed in (sourced), where having a meaningful .sh or .bash (which are NOT interchangeable) is ACTUALLY good, because it highlights that they are NOT COMMANDS. (and execute isn't enabled)

If you want a script to make it easier to list commands that are shell scripts or whatever, there's a simple one at the end of:

https://www.talisman.org/~erlkonig/documents/commandname-ext...

I've seen several cases of .sh scripts which contained perl code, python, or were actually binary, because the final lynchpin in this (abridged) case against extensions is that in complex systems the extensions often have to be kept even after the implementation is upgraded to avoid breaking callers. It's very normal for a program to start as shell, get upgraded to python, and sometimes again to something compiled. Setting up a situation which would force the extension to be changed in all clients in a large network to keep it accurate is beyond stupid.

Don't use extensions on commands, and stop trying to rationalize it because you (for those to whom this applies) just like to do "ls *.sh" (on your bash scripts). These are a violation of Unix best practices, and cause harm when humans try to interpret them.


As someone who used to have to write a lot of shellscripts because I worked at a company that believed in files and not databases, if you want to get funky use:

shellcheck

It's like pylint for your shellscripts.


A thousand times this. Shellcheck is a godsend that will save you tons of headaches if you have to deal with longer shell scripts - whether you are writing new scripts or maintaining old ones.


Job interviews should require unix admins to write a ten line bash script that passes shellcheck on the first attempt.

Also, write a "find" command without checking the manpage or internet


A couple of other, important, settings:

- `set -o errtrace`: trap errors inside functions

- `shopt -s inherit_errexit`: subprocesses inherit exit error

Unfortunately the list of Bash pitfalls is neverending, but that's a good start.
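
Roughly what those two buy you (a sketch):

  set -o errexit -o errtrace
  shopt -s inherit_errexit
  trap 'echo "error near line $LINENO" >&2' ERR   # errtrace: the trap also fires inside functions
  out="$(false; echo "never reached")"            # inherit_errexit: the substitution inherits -e and stops at 'false'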


> 9. Always quote variable accesses with double-quotes.

> - One place where it’s okay not to is on the left-hand-side of an [[ ]] condition.

And the right-hand-side of a variable assignment.

And the WORD in a case statement. (Not in the patterns, though).

Plus a bunch of other single-token(?) contexts.

I don't recommend relying on the context though, it's clever and makes it hard to verify that the script does not have expansion bugs.
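
For the record, the contexts being referred to look like this (quoting them anyway is still harmless and easier to audit):

  var="notes.txt"
  copy=$var              # RHS of an assignment: no word splitting or globbing
  case $var in           # the case WORD isn't split or globbed either
    *.txt) echo "text" ;;
  esac
  [[ $var == *.txt ]]    # LHS of [[ ]] needs no quotes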


Mostly agree, but I add more.

1. end all your lines C-style; this may save your life many times;

2. declare -is variables and -r CONSTANTS at the beginning, again, C-style;

3. print TIMESTAMP="$(date +%Y-%m-%d\ %H:%M:%S)"; where appropriate if your script logs its job;

4. Contrary to OP's recommendation I strongly try to stick to pure SH compatibility in smaller scripts so they can run on routers, TVs, androids and other busybox-like devices; BASH isn't everywhere.


I like to use:

    date -u +%Y-%m-%dT%TZ
because the time zone is unambiguous, the command works with POSIX date, and it's valid under both ISO 8601 and RFC 3339.


how do you make sure your scripts are SH compatible?


Make sure? Well, aside from keeping the sh feature subset in my head, I usually run them like "sh myscript.sh" on a target or limited environment (they have a #!/bin/sh shebang) for testing. Other people here probably have better suggestions though. )
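
shellcheck can also help mechanically here, since it can be told which dialect to check against (or it infers one from the #!/bin/sh shebang):

  shellcheck --shell=sh myscript.sh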



Can someone enlighten me on the

  cd "$(dirname "$0")"
part of this? Is this changing to the directory of where the script is, in all cases?

EDIT: I should've just tested this to see :) I did and it does exactly that. Very helpful. I didn't realize $0 is always the script's path (the zeroth argument). Kind of like how `self` is the first implicit argument in OOP methods?


Speaking of shell, which language do you think has the best interoperability with shell commands? I mean, running a command, parsing the output, looping, adding user interaction etc. with the least amount of friction. Ruby used to come close for me, just put the command in backticks `` and write the main logic in Ruby, but I want to hear if there is something better.


Powershell, because it actually is a shell so it's great at easily invoking commands and using their outputs as actual return values, and because it has "programming language" constructs like dependency management in modules, etc.

It has some great tools for user interaction, too, including secure string handling for credentials, a TUI framework, easy parallelism, unit tests and lots more.


I did that for 10+ years with perl, but I guess that these days Ruby and Python would be equally valid choices.

To be honest these days I use shell scripts, and if they get too large I'll replace them with either golang or python. I don't love python, especially when dependencies are required, but it is portable and has a lot of things built-in that mean executing "standard binaries" isn't required so often.


I like scsh (Scheme shell). A more recent/maintained alternative is Racket's shell-pipeline package https://docs.racket-lang.org/shell-pipeline/pipeline.html


Perl is a great fit. awk, sed, grep, xargs. Expect, occasionally. Python is too fanatically anti-fp and with indentation. JavaScript is too janky.


If someone could compile a similar list for PowerShell, that would be extremely helpful.

Kudos for nicely put tips that are easy to follow and understand.


Here is a template https://sharats.me/posts/shell-script-best-practices/ Should I put my code inside main()?

I'm a newcomer to bash


Nice, but for point 14 I would recommend using pushd/popd instead of cd-ing directly into $0... any reasons to prefer cd directly?


pushd/popd are intended for interactive use, not for use in scripts. It prints the full stack of directories and there is no option to be quiet. Of course, there is always redirecting to /dev/null, but the lack of a quiet option is intentional.

Usually there is no need to return to the original directory. Change of directory is process-local (script-local), so the calling process is not affected by this 'cd' in the script.


'pushd' would imply that you want to 'popd' back out of it, but that's unnecessary, as the 'cd' will only affect a subshell that gets terminated at the end of the script. So for the user it makes no difference, the current directory stays the same. For the script it saves you an unnecessary 'popd'.


The template in the article is awful. It's better to use this one, which is a real CLI tool: https://github.com/vlisivka/bash-modules/blob/master/bash-mo...


My favourite one has to be this: https://bash3boilerplate.sh/


One thing I try to do is retrieve information from the system instead of hardcoding it.

For example, instead of

    user=mail
    uid=8
use

    user=mail
    uid=$(id -u "$user")
It improves portability and removes potential sources of errors.

Also note that this is something that should be done in any programming language, not just shell scripts.


The best practice for me is to not use bash or zsh, use a better defined and robust language like JavaScript or python


This is really good information for shell scripts. I feel that not enough emphasis has been put on shell script development in general. Shell script is the glue language for lots of things. The power of a shell script is all the tools that it can call, and orchestrating the data passing between those tools.


I agree with most points except [[ ]] instead of test.

The explanation for that wasn't really an explanation either ...


Thank you for this. I have dropped a backlink for learners to find the article on exams.wiki/bash-linkedin/, and for myself to learn. I have also just started "Command Line Fundamentals" (Packt Publishing) to work through the theory and examples.


The shellcheck plugin in JetBrains IDEs leveled up my bash scripting immediately. 100% recommend.


More than one decade of shell scripting: bash is not shell, and talking about bash without a version is suspicious.

I won't check which version those tips apply to, and will continue writing POSIX shell as much as I can. I might check which of those suggestions are POSIX, though.


Sure, but everyone has bash installed already and it's far more featureful than POSIX shell.

Any reason to avoid writing bash scripts, other than purism?


    1. use shellcheck
    2. use shfmt (to format your shell script)
    3. set -euo pipefail  (much shorter)
my slight complaint about bash is that it disallows spaces around =: X=100 is OK, X = 100 is not; sometimes I just make mistakes like that.


Also agree with basically all of it.

My order preference would be:

    1. use shellcheck.
    … rest …


I switched to https://github.com/google/zx. I'm tired of working with strings and prefer actual data structures.


About that "#!/usr/bin/env bash" business - are there any systems out there that have "/usr/bin/env", but do not have "/bin/bash"?


That you've asked this question means you don't understand the actual reason to do this. I might have my own bash in my home I use to run all my shell scripts, why are you ignoring my environment and going for the system shell? Unless you control the system or are writing a system script this is absolutely unexpected and bad behaviour. On macOS now that zsh is the main supported shell plenty of people run a modern Bash out of their home.


> plenty of people run a modern Bash out of their home

Ah, got it. Another failure mode. There is a /bin/bash, but it's an ancient, crummy thing, that is difficult to upgrade. MacOSX does this, so users paper this over by installing a private copy as ~/bin/bash. Thank you.


A standard Guix System install is one example.


For local variables I'd also use the -r flag to explicitly mark read-only variables when possible. It makes it easier to glance at the code and know that variable isn't expected to change.
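
For example (a tiny sketch):

  process() {
    local -r input_file="$1"   # read-only for the rest of the function
    local count                # this one is expected to change
    count="$(wc -l < "$input_file")"
    echo "$input_file: $count lines"
  }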


I'll throw another one: If it is longer than a ~screen, throw it away and write it in <scripting language present>. Bash is just not a good language on the best of days.


There is an actual, honest-to-goodness, standardized, current, Shell Command Language now. It's part of POSIX.1-2017 or, if you like, IEEE Std 1003.1-2017.

Perhaps not surprisingly, it's bourne shell, not bash. But still, it's an actual published standard all can refer to when the language in question is "shell scripts", i.e. .sh files, or "shell commands" in some context where a shell command is called for (e.g. portable makefiles).

  https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html


Please, please, please:

* Write help text to stdout, not stderr, so I can grep it

* Set exit status to 0 on success and 1 or some other small positive integer on failure so I can use || and &&


These could be linting rules for bash script files.


If you need to follow these rules your script probably shouldn’t be written as a shell script.


Ha yeah someone should make a single lint "Your script is over 100 lines. You should rewrite it in a sane language!"


Start every script with the boilerplate

  #!/bin/bash
  if [[ $(wc -l < "$0") -gt 100 ]]
  then
    echo "No, this is too long!"
    exit
  fi


What is the difference between `set -o errexit` (as recommended in the article) and `set -e` (which is the method I knew previously)?


Regarding

> 9. Always quote variable accesses with double-quotes.

Does the author refer to "$MYVAR"? Why would you want to use that over ${MYVAR}?


Wrap the entire script in {}, otherwise a change to it will impact running instances (best case, causing an abrupt error).
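
The usual shape of that idiom is something like the following sketch; the explicit exit inside the braces is what guarantees bash never reads further into the (possibly modified) file:

  #!/usr/bin/env bash
  {
    set -o errexit
    main() { echo "doing the work"; }
    main "$@"
    exit   # bash parsed the whole brace group before running it, and stops here
  }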


HEREDOC for help is nicer than echo IMHO


Yeah, you can actually indent it with <<- so it doesn't look so ugly.

That said, I like doing the usage like so for short scripts:

  #!/bin/sh
  #
  # Sleep until a specific time. This takes a time in 24-hour clock format and
  # sleeps until the next instance of this time is reached.
  #
  #   % sleep-until 15:30:45
  #   % sleep-until 15:30      # Until 15:30:00
  #   % sleep-until 15         # Until 15:00:00
  #
  # Or to sleep until a specific date:
  #
  #   % sleep-until 2023-01-01T15:00:00
  #
  # Or space instead of T; can abbreviate time like above.
  echo " $@" | grep -q -- ' -h' && { sed '1,2d; /^[^#]/q; s/^# \?//;' "$0" | sed '$d'; exit 0; }  # Show docs
That will re-use the comment as the help:

    % sleep-until -h
    Sleep until a specific time. This takes a time in 24-hour clock format and
    sleeps until the next instance of this time is reached.
    …
It's a bit of a byzantine incarnation, but I just copy it from one script to the next, it saves a bit of plumbing, and generally looks pretty nice IMO.

I'm not 100% sure if I thought of this myself or if it's something I once saw somewhere.


I try to use the long-form of command-line switches in scripts, e.g. `cp --force` instead of `cp -f`.


Don't forget to put in a '--' to end option parsing too: 'cp --force --'

This works around malicious filenames that may start with a '-'. Especially important if you're running an 'rm' command

Edit: another workaround is to ensure that files are always absolute pathnames or even starting with './' for relative ones.


It's #13 in the article.


(I missed that one, thanks)


I do all of this all the time.

But I use set -euo pipefail. I think -u is -o nounset, etc? Just easier to type.


1. If you have to start with a template, then shell script is not the right thing for whatever you're trying to accomplish.

2. Shell scripts are wonderful, but once they exceed a few lines (give or take 50), they've entered the fast track on becoming a maintenance headache and a liability.


Naming your executable shell scripts with .sh has similar problems to Hungarian notation. If your ~/.local/bin shell script ends up useful in a lot of places, you may want to re-write it in something less crap (and I say that as an experienced bash abuser who knows it quite well and uses it a lot more than he should) than bash. When you do that, your python/nim/lua/whatever script now has .sh at the end. What was the point?

.sh is appropriate for a shell library module which you source from another shell script. It is not really appropriate for something which is more abstract (such as a "program" inside your PATH).

set -e / set -o errexit will only be helpful if you fundamentally understand exactly what it does; if you don't, you are bound to end up with broken code. Once you fundamentally understand set -e you will be better placed to decide whether it is appropriate to use it or more appropriate to simply do proper error handling. The oft-repeated mantra of using set -e is really misleading a lot of people into thinking that bash has some sane mode of operation which will reduce their chance of making mistakes; people should never be misled into thinking that bash will ever do anything sane.

set -u / set -o nounset breaks a lot of perfectly sensible bash idioms and is generally bad at what proponents of it claim it will help solve (using unset variables by accident or by misspelling). There are better linters which solve this problem much better without having to sacrifice some of what makes bash scripts easier to write/read.
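
Two commonly cited examples of the idioms that nounset breaks (the array case applies to bash before 4.4):

  set -o nounset
  # Checking an optional argument the old way now aborts instead of branching:
  if [ -n "$1" ]; then echo "got an argument"; fi   # 'unbound variable' when run with no args
  # Expanding a legitimately empty array also dies in bash < 4.4:
  files=()
  printf '%s\n' "${files[@]}"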

set -o pipefail is not an improvement/detriment, it is simply changing the way that one of bash's features functions. pipefail should only be set around specific uses of pipelines when it is known that it will produce the intended result. For example, take this common idiom:

    if foo | grep -q bar; then ...
The above will NOT behave correctly under pipefail (it will evaluate to a non-zero exit code even though the match was found) if grep -q closes its input as soon as it finds a match and foo handles the resulting SIGPIPE by exiting with a non-zero status code.

Guarding set -x / set -o xtrace seems unnecessary, -x is already automatically inherited. Just set it before running the program.

Good advice on using [[ but it is important to fundamentally understand the nuances of this, quoting rules change within the context of [[.

Accepting h and help seems incredibly unnecessary. If someone who has never used a unix-like operating system happens upon your script then they may find it useful. But I don't think catering to such a low common denominator makes sense. Your script should just handle invalid arguments by printing a usage statement with maybe a hint of how to get a full help message.

I'd say changing to your script's directory is almost never appropriate.

Shellcheck, while useful, is useful only if you understand bash well.

The lesson here is that if you think that you have spent enough time writing bash to suggest best practices, you've not spent enough time writing bash. Only when you realise that the best practice is to not use bash have you used bash long enough (or short enough).

If you want to write a script which you're going to rely on or distribute, learn bash inside out and then carefully consider if it's still the right option.

If you are unwilling or unable to learn bash inside out then please use something else.

Do not be fooled into thinking that some "best practices" you read online will save you from bash.


Superb comment. It's rare that I agree with so many words as-is.

I might add only a minor note that his construct for h/help breaks globbing in a directory that happens to contain a file named "h" or "help" (even without a leading dash '-'), but only if that happens to be first alphabetically. No footgun there. Lol. He also has no "--" support, about the only convention to make globbing work reliably.


Do you guys think that Shell scripting will still be around in 20 years?


Why wouldn't it? While we've seen a trend towards consolidating systems infrastructure using more robust programming languages - as long as the shell is used for human-computer interaction (and I don't see this going anywhere), shell scripting will be around as a natural extension of the interaction. There is a beautiful ergonomics in conserving commands that you typed and interactively improved in a text file for future repeated execution.


I have bash and perl scripts that keep major business critical services running that are about that age. Why would I think scripts I write today won't still be running in 20 years time?


Most definitely.

It occupies a sweet spot of being ubiquitous, quick to write/deploy and naturally interfaces with OS commands. It's the glue that holds unixes/linuxes together.


Shell has been around for 40 years, so another 20 will be easy.


Some good ones in here. Especially the ones to "set" stuff.


I’ve scripted way longer than a decade. Still, this is a great list!


My shell script best practice is not to use shell script.


awesome. this article and comments will make scripts tons better. is there a reasonable subset of zsh and bash? is there a linter to enforce these?


I didn't know many of these! Thanks


bash is rather ubiquitous but wouldn't it make more sense to target /bin/sh?


after 30+ years writing scripts, I picked up several cool ideas. #thx


Timing couldn't be any better. Been getting serious about bash/zsh scripting lately.


my experience: no bashisms, idempotence, explicit error handling.


Also: use functions.


This guy shells!


Hey Peter! It's so humbling to see you check out my blog!

Your articles on awk and sed were a huge inspiration to me around 2008-09, and I super-looked up to you. Never have I imagined you would check out my blog one day!

Thank you for all your work dude! Stay awesome.


^5


"Use bash"

Are you listening Apple?


eye roll emoji


More opinions

1. Bash shouldn't be used, not because of portability, but because its features aren't worth their weight and can be picked up by another command, I recommend (dash) any POSIX complaint shell (bash --posix included) so you aren't tempted to use features of bash and zsh that are pointless, tricky or are there for interactivity. Current POSIX does quite well for what you would use shell for.

2. Never use #!/usr/bin/env bash. Even if you are using full bash, bash should be installed in /usr/bin/bash. If you don't even know something this basic about the environment, then you shouldn't be programming it, the script is likely to create a mess somewhere in the already strange environment of the system.

3. Don't use extensions unless you're writing for Windows machines. Do you add extensions to any other executable? head, sed can help you retrieve the first line of a file and neither of them have extensions.

4, 5, 6. You may do this in obscure scenarios where you absolutely cannot have a script run if there is any unforeseen error, but it's definitely not something that should be turned on without careful consideration; http://mywiki.wooledge.org/BashPitfalls#set_-euo_pipefail explains this better. And it goes without saying that this is not a substitute for proper error handling.

7. I agree that people should trace their shell scripts, but this has nothing to do with shell.

8. [[]] is powerful, so I very often see it used when the [] builtin would suffice. Also, [[ is a command like done, not a bash builtin.

9. Quote only what needs quoting. If you don't know what needs quoting, then you don't understand your script. I know it seems like a waste of time, but it will make you a much better shell programmer than these "always do/don't do X unless Y, then do Z" rules that we are spouting.

10. Use either local or global variables in functions, depending on which you want. I see no reason to jump through this weird hoop because it might become an easily fixable problem later.

11. This is a feature, make usage appear when you blink, I don't care; if anything, variations of -h are too limited.

12. Finally, one "opinion" we agree on, not sure how else to redirect to stderr, but I'm sure that other way isn't as good as this one.

13. No, read the usage. If you want inferior long options, then you can add them to your scripts, but they are not self-documenting; they only serve to make commands less readable and clutter completion.

14. No, it's not usually appropriate; do you want all installed scripts writing to /bin? The directory the script runs in should be clearly communicated to the user. With cd "$(dirname "$0")", "it runs in the directory the script is in" needs to be communicated somewhere, or you have failed.

15. Yes, use ShellCheck.

16. Please call your list Bash Script Practices if it's unrelated to shell.


2. Why should bash be in /usr/bin? Mine's in /usr/local/bin and I've seen vendored bash binaries in very weird places. Respect the user's PATH.

8. '[' and '[[' are both bash builtins.

9/15. If you're using shellcheck, you'll need to quote (almost) all vars anyway.


> 2. Why should bash be in /usr/bin? Mine's in /usr/local/bin and I've seen vendored bash binaries in very weird places. Respect the user's PATH.

Yup, nix and guix consistently put their binaries in "very weird places" - unless you want to make users of those tools unhappy (among others!) please use env. The user knows where their shell is more than you do.


> 8. '[' and '[[' are both bash builtins.

'[' is a builtin, '[[' is a keyword. Can use bash's builtin 'type' to check.


> 1. Bash shouldn't be used, not because of portability, but because its features aren't worth their weight and can be picked up by another command,

I agree but

> I recommend (dash) any POSIX complaint shell (bash --posix included) so you aren't tempted to use features of bash and zsh that are pointless, tricky or are there for interactivity. Current POSIX does quite well for what you would use shell for.

That's just a terrible recommendation. It's saying "well, bash is terrible, so use a terrible one that also has fewer features"


Just looked at two servers I'm sshed to - one redhat, one ubuntu. Neither has bash in /usr/bin/bash.


POSIX complaint shell? I'm intrigued by that.


> 1. Use bash.

Credibility gone.


> Use bash. Yeaah, closes tab.


Closes tab, opens comments tab instead



