
The Bash Hackers Wiki - dvfjsdhgfv
https://wiki.bash-hackers.org/
======
dijit
Bash was the first scripting language I ever got "good" with.

(I put "good" in quotes because while I could get basically anything done, I
promise that it was not good looking).

However, I feel like it left me a bit brain damaged: I still think in terms of
quotation quirks and "sub-shells" even in proper languages such as Python or
Rust.

It also left me a bit weird when thinking about concurrency, because I never
consider IPC; I just think in terms of jobs+join, so my code ends up performing
as a series of bottlenecks as everything rejoins the main thread.

That said, getting good with bash has helped my career a lot as a sysadmin,
but maybe this is part of why sysadmins struggle with real programming.

~~~
ofrzeta
I've come to use more and more Python for tasks I would have used shell for in
the past, because of shell's weird syntax for anything but simple variables and
the endless fiddling with quoting.

You can get a lot done quite easily and concisely in Python 3, for instance
with the pathlib module, which has methods for recursive globbing and things
like that.

[https://docs.python.org/3.7/library/pathlib.html](https://docs.python.org/3.7/library/pathlib.html)

~~~
badsectoracula
The nice thing about Bash, which obviously cannot be said about Python, is that
Bash scripts will keep working (assuming whatever they call is also still
there, of course), since Bash goes out of its way to preserve backwards
compatibility. Considering its legacy, Bash can run scripts written more than a
decade before Python even existed.

The only other scripting language I know of that has similar backwards
compatibility is Tcl. Perhaps Perl too (ignoring Perl 6, which was renamed
again, to Raku, due to it being practically a different language).

~~~
Doingmything123
I guess it's the trade-off between backwards compatibility and
readability/correctness/maintainability. Both languages will accumulate
technical debt over time. In my opinion, I would much rather have to rarely
update Python scripts in exchange for the peace of mind that I can fix/update
the script as need be. In my experience, *sh scripts can be fragile and hard to
ensure correctness of.

~~~
Aloha
Having migrated something recently from Python 2 to Python 3, I'll take bash. I
actually ported two of the three scripts to bash, and the third one I
reluctantly moved to Python 3 (I needed python-pil). String handling in
Python 3 is, in my opinion, a tire fire compared to bash.

------
ravoori
It would be remiss to talk of bash and not bring up Greg's Bash Wiki
([https://mywiki.wooledge.org/BashFAQ](https://mywiki.wooledge.org/BashFAQ))

~~~
rascul
This and the bash man page are all I need in general.

------
zimbatm
Another great reference is [https://github.com/dylanaraps/pure-bash-bible](https://github.com/dylanaraps/pure-bash-bible)

------
bachmeier
I recently came across this static site generator written in Bash:
[https://github.com/hmngwy/jenny](https://github.com/hmngwy/jenny)

It's kind of cool, but what makes it really nice is that it's going to work on
any computer I use: there are no concerns about dependencies and no concerns
about future compatibility. (Obviously not 100%, but I'm not worried in the
least.)

------
anonsivalley652
It seems like there is potential for making bash faster.

1\. JIT compilation, with caching and invalidation on updated source files.

2\. Rearchitecting, a la busybox, all of coreutils, sharutils, grep, awk and
sed as internal components, to avoid forking as much as possible; the tool
names symlink back to the same binary for compatibility.

3\. Rearchitecting to simulate sub-shells with threads and local contexts,
rather than globals/singular locals. With formally-proven safety
methodologies, there only needs to be one "shell" process running, as a server.
All other invocations become child threads if called directly, or interact as
a client if exec()ed. Subshells become just lightweight threads with copy-on-
write contexts. No more shell fork-bombs, scripts get crazy fast, and far
fewer PIDs are consumed.
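
The fork overhead motivating point 2 is easy to observe: a builtin costs
nothing to invoke, while an external binary pays fork+exec on every call. A
rough sketch (timings are machine-dependent):

```shell
#!/usr/bin/env bash

# `echo` is a bash builtin: no new process is created per iteration.
time for i in {1..1000}; do echo hi; done > /dev/null

# /bin/echo is an external binary: fork+exec on every iteration,
# typically orders of magnitude slower than the loop above.
time for i in {1..1000}; do /bin/echo hi; done > /dev/null
```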

~~~
saagarjha
If you really care about speed, you probably shouldn't be using Bash.

~~~
yesenadam
Also true of Python, but I almost never hear people say that. Strange.

~~~
saagarjha
Python's a bit faster than Bash.

~~~
vaingloriole
For what? Is it faster than (g)awk? How about sed? How about dd? How about any
of the thousands of tools in userspace that you wrap in bash and that are
faster than Python at the same task? "Use the right tool" is being lost on the
"you're too stupid to make a good decision, do it this way" generation.

~~~
saagarjha
It depends, of course! But usually the issue with shell scripts is that they
create many new processes, which has high overhead. In the instances where
you're spending more time in dd than in bash, the Python equivalent is probably
an in-process memcpy.

------
m463
I like bash, but I sort of have a rule...

If I'm thinking of using advanced features of bash, or I'm over-using
grep/sed/awk within the script, I switch to Python (formerly Perl).

Mostly it's a win with respect to quoting, path manipulation and proper data
structures.

~~~
_jal
Python's major glaring problem as a systems language is dealing with
processes. It is just verbose, finicky and annoying.

I have my other issues with it as a systems language that are more
idiosyncratic, but for replacing bash, process management is just way too much
hassle.

~~~
Too
99.9% of all process management can be done with a one-liner:
subprocess.check_output. Not verbose at all, and much safer.

The biggest mistake people make is bringing bash idioms into Python, like
trying to pipe output through head/cut/tail/grep/sed instead of just using
string operators like endswith/in/startswith/re. If you do that, then process
management will be more verbose, but it is in bash too, unless you ignore
decent error handling like `set -o pipefail` and `${PIPESTATUS[0]}`, which
many, many scripts actually do unknowingly. For the other 0.1% of places where
you actually need a pipe, just accept that you have to write 4 lines instead
of 2; you will save thousands of lines in other places. For parallel running
processes, &, jobs, nohup and wait in bash aren't exactly convenient either if
you are interested in the results, so there it's a tie vs. Popen if I'm being
generous.
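
For reference, the bash-side error handling alluded to above looks like this
(a minimal sketch; `false` stands in for any failing pipeline stage):

```shell
#!/usr/bin/env bash

# Without pipefail, a pipeline's exit status is that of the LAST command,
# so a failure early in the pipe is silently swallowed:
false | sort
echo "without pipefail: $?"                   # prints 0

# ${PIPESTATUS[@]} records the exit status of every stage of the most
# recently executed pipeline:
false | sort
echo "first stage exited: ${PIPESTATUS[0]}"   # prints 1

# With pipefail, the whole pipeline fails if ANY stage fails:
set -o pipefail
false | sort
echo "with pipefail: $?"                      # prints 1
```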

~~~
vaingloriole
Do you really think the Python popen() workalike, with optional shell
inclusion, is better than running in a shell? More secure, perhaps? Easier to
read? I had to force a developer on my team, who would not stop using
subprocess calls (including $SHELL) in random new Python scripts with
hard-coded arguments, to instead write a single shell script that could be
reused. subprocess is a tarpit and is misused more than any other Python
function.

------
asveikau
Clicking around, I am glad to see portability to non-bash shells covered in
some topics, despite the name of the site.

It seems to me like sometime in the last 10-15 years, what we used to call
"shell scripting" became "bash" in popular discourse. I learned to write shell
scripts in times and places where bash was the most popular default, but I
never thought of it as writing intentional bashisms; rather, it was a set of
skills applicable to multiple possible implementations. So I find that shift
in jargon a little disappointing, or incorrect.

~~~
tannhaeuser
An even more important point, since the two mainstream desktop Unixen (macOS
and Ubuntu) ship with zsh and dash, respectively, by default.

~~~
laumars
dash is shipped as a replacement for the Bourne shell, not bash, so Ubuntu
still ships bash as well.

Alpine doesn't have bash installed. It's also pretty common not to have bash
as part of the base image on non-Linux systems. You've already mentioned
macOS, but the same is true for many of the BSDs.

------
INTPenis
This, Greg's wiki and #bash@freenode are how I became a bash nazi to my peers.

Just hanging out in #bash@freenode you learn so many best practices, like
sub-shell quotation, variable capitalization, parameter expansion, read usage
and much more.
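
A few of those practices, sketched in one short example (the path is made up
for illustration):

```shell
#!/usr/bin/env bash

# Always quote expansions: unquoted, "$path" would be word-split and globbed.
path="/tmp/my file.txt"
dir="$(dirname "$path")"

# Parameter expansion instead of forking out to basename/sed:
file="${path##*/}"      # strip longest prefix ending in "/"  -> my file.txt
stem="${file%.txt}"     # strip matching suffix               -> my file

# read with -r so backslashes in the input are not mangled;
# IFS= preserves leading/trailing whitespace.
while IFS= read -r line; do
    printf '%s\n' "$line"
done <<< "$path"

echo "$dir / $file / $stem"
```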

------
wodenokoto
I clicked the config article and it reminded me of 2 major pet peeves:

1) No good way to store config values, nor ones that can be shared with other
languages. I might have a set of Python applications running in the cloud and
a shell script to deploy them, and guess what, I'd like them to share the same
config so I don't have to keep updating resource names in two places.

2) No simple way of templating files. A script that would replace {var} with
the value of $var in a file seems like an obvious thing for a shell to do for
you, but I haven't found a good way of doing it.

~~~
laumars
1) There's a few ways you could skin this proverbial cat:

option 1:

A bit Python-specific, but you can use python-dotenv[1] for Python, and
`source` that file in bash for shell support.

option 2:

Alternatively you could just pass environmental variables between applications
(that's kind of the point of them).

option 3:

You could still use a dotenv file and import it in other languages as TOML.
This is a really nasty, hacky way of doing things though, because you'd need
to read that file in, then prefix a key at the start of the file (eg "[vars]")
to adhere to the TOML standard, before you could run it through a TOML parser.
I wouldn't personally recommend this approach, but it would work as a lazy
solution.

2) Sure there is: `envsubst`[2]

hello.tmpl:

    
    
        This is an example
        Hello, ${HELLO_NAME}!
    

hello.env:

    
    
        HELLO_NAME="wodenokoto"
    

hello.sh:

    
    
        #!/bin/sh
    
        set -a  # auto-export sourced variables; envsubst reads the environment
        . ./hello.env
        envsubst < hello.tmpl > hello.txt
    

Run hello.sh and it will create a file called hello.txt:

    
    
        This is an example
        Hello, wodenokoto!
    

One caveat is that this is part of the gettext package which doesn't always
ship with every distro. But it's a pretty small package (small enough that I
don't have any issue installing it into a docker container for one CI
pipeline).

[1] [https://pypi.org/project/python-dotenv/](https://pypi.org/project/python-dotenv/)

[2]
[https://linux.die.net/man/1/envsubst](https://linux.die.net/man/1/envsubst)
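
One refinement to the example above: by default envsubst substitutes every
variable it finds (and unset ones become empty strings). Passing a
SHELL-FORMAT argument restricts substitution to only the variables you list,
leaving any other `$` in the input untouched:

```shell
#!/bin/sh

export HELLO_NAME="wodenokoto"

# Only ${HELLO_NAME} is substituted; ${PRICE} passes through literally.
printf 'Hello, ${HELLO_NAME}! That will be ${PRICE}.\n' \
    | envsubst '${HELLO_NAME}'
# -> Hello, wodenokoto! That will be ${PRICE}.
```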

~~~
wodenokoto
> just pass environmental variables between applications

That would require me to 1) somehow ensure that the shell is sourcing the
environment variables from a file, and 2) then we are back to all the hackery
mentioned in the article [1]

[1] [https://wiki.bash-hackers.org/howto/conffile](https://wiki.bash-hackers.org/howto/conffile)

~~~
laumars
You don't need a shell script to initialise an application with specific
environment variables. systemd and docker will both do this (and both support
the same env file format too). AWS Lambda supports environment variables, as
do most CI/CD solutions. It's a pretty standard way to do things in
UNIX/Linux. In fact, if you're having to resort to "hacks" to get environment
variables loaded, that might be a symptom of a bigger problem with your
orchestration rather than an issue with Bash or Python.

Also, that was just one of three options I listed, and there will be a
plethora of other alternatives I've not mentioned.

~~~
wodenokoto
Thank you for your patience in answering my woes.

You’ve given me a lot of food for thought and some good ideas on how to attack
some of my problems.

------
AceJohnny2
This website has been my go-to reference when hacking together Bash scripts. I
can never remember the exact syntax of parameter substitution, and how do I do
arithmetic expansion again? What are all the 'set -o' options?
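
For the record, those three things in one quick sketch (values are
illustrative):

```shell
#!/usr/bin/env bash

# Parameter substitution:
name="archive.tar.gz"
echo "${name%.gz}"        # strip suffix              -> archive.tar
echo "${name#archive.}"   # strip prefix              -> tar.gz
echo "${name/tar/zip}"    # replace first occurrence  -> archive.zip.gz

# Arithmetic expansion:
n=6
echo $(( n * 7 ))         # -> 42

# List every set -o option and its current state:
set -o
```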

I've since switched to Zsh, and while its documentation is extensive, it's
basically one fat book and is far from being as accessible or navigable as
this excellent wiki.

------
oh_sigh
[https://xkcd.com/927](https://xkcd.com/927)

Is there room for a lightweight scriptable language that takes the place of,
say, bash/zsh/korn/fish shell scripts?

Please don't say perl/python or sed/awk.

~~~
chubot
That's what Oil is, except it also runs bash scripts and lets you upgrade
seamlessly.

That is, bin/oil is the same as bin/osh with a bunch of shell options turned
on, including 'shopt -s simple_word_eval', which eliminates a lot of the
quoting hassles.

You can try it now, but some things will be cut out after optimizing the
prototype interpreter (in the name of time):

 _You Can Now Try the Oil Language_
[http://www.oilshell.org/blog/2019/10/04.html](http://www.oilshell.org/blog/2019/10/04.html)

I'm looking for help too: [http://www.oilshell.org/blog/2019/12/09.html#help-wanted](http://www.oilshell.org/blog/2019/12/09.html#help-wanted)

\----

Also, I think there is more room than ever, because both Python 3 and Perl
6/Raku are worse for shell-like tasks than their predecessors! Mainly because
of startup time and the string abstraction.

We discussed that a couple weeks ago here, and I think it's on the blog
somewhere too:

[https://news.ycombinator.com/item?id=22156151](https://news.ycombinator.com/item?id=22156151)

~~~
lizmat
> Python 3 and Perl 6/Raku are worse for shell-like tasks than their
> predecessors! Mainly because of startup time and the string abstraction

Startup time can always be better, agreed. But full support for Unicode _is_
needed in this day and age, and that brings overhead whichever way you do it,
especially if you want to do it 100% correctly. If you're still living in an
ASCII world, then by all means, go for it.

Additionally, in this world of source version control and virtually unlimited
disk space for source code, I think scripts will fare better for
maintainability than one-liners.

~~~
chubot
Oil has better Unicode support than Python 2 or 3 for shell-like tasks,
because of the way file systems, libc, and the kernel work.

Explained here:

[http://www.oilshell.org/blog/2018/03/04.html#faq](http://www.oilshell.org/blog/2018/03/04.html#faq)

which links:

[http://lucumr.pocoo.org/2014/1/9/ucs-vs-utf8/](http://lucumr.pocoo.org/2014/1/9/ucs-vs-utf8/) (by Armin Ronacher)

The summary is that there are two kinds of Unicode support:

\- UTF-8 based: Go, Rust, and Oil (and Perl 5 it seems, not sure about Raku)

\- array-of-codepoint based: bash/zsh/libc, Python, Java, JavaScript, Windows
(JavaScript notably requires surrogate pairs, Python used to have build time
configuration, now has complex storage heuristics, etc.)

This isn't a theoretical problem -- the Unicode problems in the other comment
thread I linked are real and show up in practice.

~~~
Grinnz
Perl (5) is array-of-codepoint based, at the logical level. Those codepoints
might be internally stored as their encoded-to-UTF-8-bytes, or they might not,
but this does not affect the usage of the string.

Many don't really understand the string model (because many don't really
understand character encoding), but it comes down to this: all input and
output is bytes, which by default are stored as the codepoints sharing the
ordinals of those bytes, and there are several mechanisms by which you can
manually or automatically decode/encode those byte ordinals to and from the
represented characters; for most text processing, you do this on STDIN and
STDOUT.

