Unix Shell Programming: The Next 50 Years [pdf] (sigops.org)
138 points by signa11 on June 4, 2021 | 135 comments



The problem with the shell, to me, is that it started as a good idea and then just stopped improving. Things that should be the bread and butter of scripting are oddly arcane. Some examples.

How do you find where the script itself is? Because if the script comes with files, we want to find those when the script gets called from somewhere outside the current dir. Stack overflow suggests this as a first approach:

    SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
Then follows with 7 lines of arcane looking code containing a loop for a better solution. That's a ridiculous hoop to jump through for a common need.
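For reference, the longer answer is roughly this symlink-resolving loop (quoting it from memory, so treat it as a sketch of the shape rather than the exact accepted answer):

    SOURCE="${BASH_SOURCE[0]}"
    # keep resolving $SOURCE until it is no longer a symlink
    while [ -h "$SOURCE" ]; do
      DIR="$( cd -P "$( dirname "$SOURCE" )" >/dev/null 2>&1 && pwd )"
      SOURCE="$(readlink "$SOURCE")"
      # if the link was relative, resolve it against the directory it was found in
      [[ $SOURCE != /* ]] && SOURCE="$DIR/$SOURCE"
    done
    SCRIPT_DIR="$( cd -P "$( dirname "$SOURCE" )" >/dev/null 2>&1 && pwd )"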

Dealing with filenames? Hope you enjoy pain. Spaces, newlines and quotes are a horrible pain to deal with properly, and regularly cause trouble.
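To give a sense of it, just looping over arbitrary filenames safely in bash already takes this kind of ceremony (a sketch):

  # NUL-delimited iteration is the only robust choice, since spaces
  # and even newlines are legal in filenames
  find . -type f -print0 | while IFS= read -r -d '' file; do
    printf 'processing: %s\n' "$file"
  done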

Dealing with correctly passing parameters? More of the same.

Dealing with long file lists? You run into the command line length limit.
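e.g. a plain `rm *.log` in a big directory can fail with "Argument list too long", so you end up writing something like:

  # xargs splits the file list into batches that stay under ARG_MAX
  find . -maxdepth 1 -name '*.log' -print0 | xargs -0 rm --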

Obtaining some incredibly common system parameter? Some insane combination of grep, awk and cut, probably.

Personally I find it amazing that people invented this thing, then spent decades tripping on the same quoting and filename handling issues and not finding that to be a good reason to fix it.

These days if it's more than 5 lines of code, or if arbitrary filenames are involved I go straight to Perl or Python, because the amount of completely preventable pain is drastically reduced.


Not to mention how dangerous it is to use. Accidentally type the > symbol? Enjoy overwriting whichever file came after in the command. Output to an existing file? Old file is gone. Accidentally put a space in the wrong place? Gone.

And somehow people think mastering this careless design is something to be proud of.

Can anyone name any other popular piece of software that is that relaxed about irreversibly destroying your data?


Pretty much every RDBMS on the planet (and some no-SQL databases too). :)

I jest, but I do agree that Bash et al have their warts. Seriously obscene warts that only make sense when you look at the language in the context of 40-year-old systems. But these days there are other options if you want the power of a command line but without (most of) the warts of traditional shells.


A circular saw can easily ruin a piece of work if you're not careful, and it could even chop off your hand. We don't tell carpenters that they shouldn't use saws because they are dangerous. There are lots of jobs that involve working with dangerous tools.


If a manufacturer could easily design the circular saw to be safe (à la SawStop) with little added cost, but decided not to, I think most people would consider them grossly negligent.


Huh.

I didn't know about SawStop before, so your comment sent me on a wiki-hunt.

"Gross negligence" seems like a huge understatement for all the rationalizing and lobbying manufacturers did to avoid having to use safety technology.

https://en.wikipedia.org/wiki/SawStop#Opposition_from_trade_...


A saw is *hard*ware, a physical thing, hard to change and improve, while a shell is *soft*ware, supposed to be easy to change and improve.


"Not to mention how dangerous it is to use. Accidentally type the > symbol? Enjoy overwriting whichever file came after in the command."

At least in zsh, you can prevent such mistakes by

  setopt NO_CLOBBER
It's saved my bacon many times.

Also, from bitter experience I've forced myself to get into the habit of never typing:

  rm *
but instead always doing:

  cd ..
  rm foo/*
It forces me to be more conscious of what I'm deleting.


I have `alias rm="echo DISABLED"` in my shell rc, and I have `trash` defined as a big shell function that safely moves stuff to the x desktop trash bin without clobbering anything. It's mildly irritating, but it gives me incredible peace of mind.
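Roughly, the trash function is a wrapper of this shape (a simplified sketch, assuming the freedesktop.org ~/.local/share/Trash layout; the real thing also writes the .trashinfo metadata):

  trash() {
    local dest="$HOME/.local/share/Trash/files"
    mkdir -p "$dest"
    local f
    for f in "$@"; do
      if [ -e "$dest/${f##*/}" ]; then
        # something with the same name is already in the trash:
        # append a timestamp so nothing ever gets clobbered
        mv -- "$f" "$dest/${f##*/}.$(date +%s)"
      else
        mv -- "$f" "$dest/"
      fi
    done
  }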


> in zsh […] setopt NO_CLOBBER

In Bash it’s

  set -o noclobber
or

  set -C


Those work in any POSIX shell, and also zsh.



If you are that concerned about this specific problem, the shell allows you to "set -o noclobber" (or "set -C") to not overwrite existing files.
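With that set, an accidental overwrite fails loudly instead of silently truncating the file, and >| remains as the explicit escape hatch:

  touch existing.txt        # an existing file we care about
  set -C                    # same as: set -o noclobber
  echo hi > existing.txt    # error: cannot overwrite existing file
  echo hi >| existing.txt   # deliberate overwrite still possible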


Yup, I only learned about that while writing https://www.oilshell.org ! I'm sure that has been around for decades; shell also has discovery and documentation problems :)


> shell also has discovery and documentation problems

Only because people refuse to read the specification. The POSIX Shell & Utilities volume is exceedingly clear and straight-forward. (I vehemently disagree w/ its characterization in that paper.) It helps if you use the frames-based HTML version to make it easier to navigate and explore the various utilities, and especially to see how the syntax specification is broken down--while that section is short, it's difficult to appreciate without explicitly seeing how the subsections are related.

I only code in POSIX shell because I really can't be bothered to remember all the minutiae of other shells, almost all of which provide strong POSIX compatibility, Bash especially. I've never lamented the lack of arrays; but I regularly wish that POSIX would adopt ksh-style coprocesses so dash would have them. The shell is really all about stitching together processes (including subshell contexts), and coprocesses fill a conspicuous gap in functionality.


I've never used ksh-style coprocesses so I'd be interested to hear about some use cases for them, and what you like about them. Examples would be even better -- I've never seen them, and I've looked at a lot of shell.

Bash has a different implementation of coprocesses that doesn't seem to be used much, and is very limited. I believe there can only be one coprocess active at a time, which is very un-shell-like! I got an e-mail once from someone who was using them, and I felt it could be achieved by other means.

I have an idea for coprocesses in Oil which is basically a startup time optimization. The coprocess will take argv over a Unix domain socket, and have other commands to change the process state. It solves "the problem of VMs and containers that start slowly".

https://www.oilshell.org/blog/2021/04/release-0.8.9.html#wha...

Unix domain sockets are already used for the headless shell in Oil (more coming on the blog about that)

https://www.oilshell.org/release/latest/doc/headless.html#im...


The simplest description I've found is at https://www.dartmouth.edu/~rc/classes/ksh/coprocesses.html, which also provides a couple of examples.

Basically, you start a coprocess using the "|&" operator:

  somecommand |&
This starts somecommand asynchronously, with stdin and stdout attached to anonymous pipes with descriptor numbers that can be referenced by a literal descriptor name, "p". So running

  cat <&p
will shuttle the output of the somecommand coprocess to stdout.

"p" always references the most recently created coprocess, so to juggle multiple coprocesses you have to dup "p" to explicit descriptors:

  exec 4>&p
  somecommand2 5<&p |&
That's not as great as having named references, but it's rare to need to independently juggle more than 1 or 2 coprocesses from the same context. You're almost always plugging them together in series or in a complex graph using subshells. In either case you're only needing to manage the same few descriptor constants. It's sort of like writing in a stack-based language, where you really only use explicit references to the top of the stack.

coprocesses provide a simple construct for cases where you would otherwise have to manually create a named pipe using mkfifo. The need for this pattern comes up surprisingly often, IME, yet because it's so messy--now you need to worry about /tmp races, cleaning things up, etc--people rarely bother, myself included. From the perspective of a shell it's a relatively simple construct to implement. The semantic benefit/cost ratio is exceptional.

This basic functionality is complemented with extension options to the read and print (non-standard) builtins: -p reads from or writes to the latest coprocess, and -u the specified descriptor number. I haven't used coprocesses much because I can't rely on them, but I don't think these are strictly needed. Maybe they were required for proper buffering in early implementations (or even current ones), where redirection to and from builtins didn't utilize the same buffered I/O context? That seems more like an implementation detail.
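A toy end-to-end example of the print -p / read -p style (ksh93, written from memory, so treat it as a sketch):

  bc -l |&                   # start bc as a coprocess
  print -p 'scale=4; 22/7'   # write a line to its stdin
  read -p answer             # read one line of its output
  echo "$answer"             # 3.1428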

ksh93, bash, and some other shells provide the process substitution construct, "<()", which can provide similar capabilities to coprocesses in many cases. But I'm pretty sure anything you could do with process substitution you could do with coprocesses, making the latter the more powerful construct. Coprocesses seem like the critical missing primitive from the traditional Unix shell model, and would round it out. (Of course, the sky is the limit for evolving the Unix shell model in new directions, as Oil has shown.)

ksh88 had coprocesses, so I'm curious why they weren't included in POSIX. Maybe because bash lacked an implementation? Or incompatible quirks between ksh88 and ksh93 related to stdout inheritance across coprocesses?


Lack of warnings about accidental file overwrites is less of a problem today than say 20-30 years ago: modern filesystems allow regular snapshots, which help recover an accidentally lost file. It doesn't happen often in my practice, but by doing what it's told without questions the shell saves my time every day.


Well "mastering [that] careless design" has made me farore capable than many of my peers at tasks that involve debugging network problems. I can slice and dice data files with ease and much more speed than any of my associates because the power the shell brings, I have gained a much deeper understanding of how am OS and computing works, what is going on under the hood, and consequently have reaped many advantages due to many hours spent mastering the shell.

You are right it has some warts, but if a file is really that absolutely critical I make a .bak copy of it before I start my work. The shell has its limitations but so do all tools and the power it gives me is unmatched.

As for other popular software there is plenty out there but other popular software out there destroys something far more valuable to me. My time.


>You are right it has some warts, but if a file is really that absolutely critical I make a .bak copy of it before I start my work

This seems to miss the point completely, unless you back up your entire hard disk before running every command. The whole problem is it's easy to nuke things you weren't intending to touch, accidentally.

The user-hostility of the shell is not a required feature to achieve its power. All of the advantages you cite could be had without being a terrible footgun with human factors mired in 1970s computing culture.


How is any of the advantages you listed not achievable by learning a better programming language, say, Python?


Some of them are but for quick stuff it’s hard to match the speed and flexibility of the shell


The shell is a tool - one of many in your kit. I 100% agree that one should be quick to use Perl or Python for many tasks. But for some tasks, the shell is just the thing to pull out of your back pocket.


The problem I have with this is it’s just software.

I am not a carpenter and context switching is expensive when it comes to thought work (probably because it’s still just typing, lacking the mechanics of swapping a hammer for a screwdriver).

Anyway, the point is that by now, 20+ years into an IT career, I should have at least ONE option that can handle these needs without flip-flopping through an entire operating system's "tools".

The desktop metaphor is not the only real world model computing needs to get rid of. I want an interface that can do the job, not a million “tools”.


I'm surprised you didn't mention math. It's astounding to me how fragile it is to add 2 and 2 in the shell. And forget about floating point numbers. Anytime I need basic math in the shell I reach for 'python -c'
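e.g. (a quick illustration):

  echo $(( 2 + 2 ))          # 4 -- integer arithmetic is fine
  echo $(( 10 / 3 ))         # 3 -- silently truncated, no floats in bash/POSIX arithmetic
  python3 -c 'print(10/3)'   # 3.3333333333333335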


It is often easier to reach for my calculator, calculator app, or the address bar, to do arithmetic, than:

      echo "2 + 2" | bc -l


ksh arithmetic includes floating point. Sadly bash made it to the extinguish step before embracing and extending that far. (Compound variables and getopts are the other big omissions.)
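e.g. (ksh93; bash rejects the float literal outright):

  ksh -c 'echo $(( 10.0 / 3 ))'    # prints a floating-point result
  bash -c 'echo $(( 10.0 / 3 ))'   # error: invalid arithmetic operator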


Dealing with files and directories? Best option. Dealing with calling multiple other tools written in different languages and/or making network calls, then combining, storing and doing some basic operations on the data? Best option.

It's a job control language after all.

The pipes and redirects, and the instant access to the file system is also very powerful.

Sure, it could/should be improved massively, but I'd say it's still the best tool for a significant number of pretty common use cases; it doesn't just suck at everything.


Try writing this in another language:

  wget -qO- http://server{1..5}.com/file.txt >> /tmp/data.txt & wait && wc -l /tmp/data.txt
There's some pretty serious stuff going on there


The problem is we've historically limited ourselves to the lowest common denominator shell so our scripts worked everywhere without modifications.

Hence everything has been stuck with the archaic limitations and oversights of POSIX shell or GNU Bash, as Linux distros took over and largely standardized on it.

There have been numerous alternative shells and some significantly improve on the situation, unfortunately they've been largely ignored. Default shell choice has proven very sticky.

I do agree that it's practically impossible to write robust and safe scripts in bash/posix shell. You have to jump through many hoops if arbitrary user input is being handled. Frankly nobody should be writing generalized tools using such shells today, unless they have a very good reason to self-impose the constraint.


Yeah, but in the modern era, where almost all the ancient Unixes are for all practical purposes dead (you basically have to be an IBM consultant to use AIX, an HP consultant to use HP-UX, an Oracle consultant to use Solaris, *BSDs probably have 0.01% of Linux' usage rate), we live in a Linux monoculture.

99.99% of everything is internet connected and auto-updates. New distro versions come out every 2 years.

There's no real reason why bash couldn't have been evolved into a sane language with sane parsing and with modern constructs. By sane, I mean... read this:

https://www.oilshell.org/blog/

At least these articles:

https://www.oilshell.org/blog/2021/01/why-a-new-shell.html (especially this one)

https://www.oilshell.org/blog/2016/10/20.html

https://www.oilshell.org/blog/2016/10/28.html

https://www.oilshell.org/blog/2016/11/06.html

If freaking JavaScript could "use strict;" and turn a page, bash could definitely have done the same.

It's just that there's no real commercial interest and FSF doesn't have the manpower (or desire) to do this.


Yes 'use strict;' is a great analogy! I mentioned that here with regard to Perl 5:

https://www.oilshell.org/blog/2020/07/blog-roadmap.html#the-...

It turns out that bash's "shopt" can be co-opted for this, and that's exactly what Oil does. For example, after 'shopt --set parse_paren', you can do this:

    if (x > 0) {  # parens are now Python-like expressions, not subshell!
      echo "$x is positive"
    }
Yes I agree bash could have done this. But the bash codebase is very fragile and things often break. The maintainers are very concerned about breakage.

Oil is written in a way to make breakage less likely: with principled parsing and a detailed code representation using algebraic data types. I'm surprised at how much language evolution this style has enabled. It makes me optimistic about the project's future!

Although we still have tons of work to do with the C++ translation -- writing a shell is hard either way!


> Yes I agree bash could have done this. But the bash codebase is very fragile and things often break. The maintainers are very concerned about breakage.

True, but that's not a good reason for such a widely used application. It has been around for 40 years; I think it literally has trillions of runtime hours. It's used on billions of systems, most likely.

It's exactly the place where you'd create a super comprehensive test suite that covers *everything* (see SQLite) and then re-architect it to make it robust.

I wonder if it's due to lack of money or due to something else.


> There's no real reason why bash couldn't have been evolved into a sane language with sane parsing and with modern constructs. By sane, I mean... read this:

> https://www.oilshell.org/blog/

I think the point is that it did evolve, as your link shows—and there are lots of other sane alternative shells, too. The problem isn't that they don't exist; it's getting people to adopt them in sufficient numbers.


> *BSDs probably have 0.01% of Linux' usage rate

Just wanted to point out that Hacker News runs on FreeBSD[0] :) Some people like the BSDs, and would appreciate it if everybody stopped designing for the Linux monoculture (well, monoculture if you disregard Windows and macOS).

[0] https://docs.freebsd.org/en/books/handbook/introduction/


Hacker News runs on FreeBSD and Arc, Arc being a super obscure Lisp dialect that I can't imagine being used in production anywhere else (for sure not in a place with a similar visibility or impact).

"What HN uses" doesn't say much about tech trends :-)

Edit: You probably want to use Netflix as a better example. I think it's used by their CDN layer.


Don't forget filenames that start with a dash... Usually you need to remember to put "--" in every single command invocation.
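e.g.:

  touch -- '-rf'   # a file literally named "-rf"
  rm -rf           # parsed as options, not a filename; the file is untouched
  rm -- -rf        # "--" ends option parsing, so this actually removes it
  rm ./-rf         # alternative: a ./ prefix stops it looking like an option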


That has absolutely nothing to do with the shell; that's a Unix convention for how commands parse their arguments.


I disagree; the shell is in the best position to do something about it. Or at least it would be, if there were a consistent way for programs to report to the shell how their argument parsing works.


that would be horrific


More horrific than having to guess what switches are supported by any given command on the target machine, even though shell scripts are supposed to be cross platform?


Yes, those are all problems with bash and the state of the art.

The funny thing is that there are about a dozen ways to spell it, like $(dirname $(readlink -f $0)). Since the shell itself doesn't provide this functionality, people come up with a lot of workarounds.

In Oil it will simply be $_this_dir, e.g. "source $_this_dir/mylib.sh" for relative imports. (Whenever a variable is silently mutated by the interpreter, akin to $?, Oil prefixes it with _.)

https://github.com/oilshell/oil/issues/587

I solicit feedback on every Oil release: http://www.oilshell.org/blog/tags.html?tag=oil-release#oil-r...

so let me know what you think of the fixes: https://github.com/oilshell/oil/wiki/Where-To-Send-Feedback

Dealing with filenames is fixed in Oil: http://www.oilshell.org/blog/2021/04/simple-word-eval.html and with QSN:

http://www.oilshell.org/blog/2020/10/osh-features.html#safe-...

Again, if you don't think this actually solves the problem, leave some feedback. Oil is the most bash-compatible shell by a mile, so you might need to use it someday :)

(The theory is that in MOST situations you don't get to choose your language, just like you didn't choose C, C++, or shell. It's all the inertia of compatibility.)

Oil has named parameters: http://www.oilshell.org/release/latest/doc/idioms.html#use-n...

I know about the long file list problem, but I don't think I've ever run into it, mainly because I use xargs, which batches up commands correctly. If there is some other situation where it comes up, I'm interested.

Insane combination of grep / awk / cut: Oil has eggex which compiles to ERE to help you write egrep and awk patterns: https://www.oilshell.org/release/0.8.11/doc/eggex.html

You get syntax errors when you write the pattern (at parse time), not when you run it.

Crucially, we don't "rewrite" grep and awk. We just make them easier to use (optionally). There is a smooth upgrade path and you can retain your knowledge while forgetting about some sharp edges.

> Personally I find it amazing that people invented this thing, then spent decades tripping on the same quoting and filename handling issues and not finding that to be a good reason to fix it.

Yes I quoted David Korn complaining about the quoting problem in the early 90's, which was closer to Unix's invention than we are to the early 90's.

https://www.oilshell.org/blog/2019/01/18.html#slogans-to-exp...

The reason we still have it is the inertia of compatibility, and the fact that nobody really owns shell. Even though people rightly complain about Google's web stewardship, HTML5 was a great improvement and cleanup. They paid people to fix HTML, and it worked to a large extent.

Some food for thought: A Generation Lost in the Bazaar https://queue.acm.org/detail.cfm?id=2349257


Maybe you should use a line of Rust in Oil so that the rewrite-it-in-Rust people make Osh into the new ubiquitous shell.


There is probably some use for Rust indirectly, as there are several use cases for WASM in shell, and Rust has some unique advantages for writing WASM:

https://github.com/oilshell/oil/issues/941

(This probably won't happen for a long time though, unless someone really wants to contribute and own it!)


You might want to have a look at the fish shell or oil shell, some progress has been made


> These days if it's more than 5 lines of code, or if arbitrary filenames are involved I go straight to Perl or Python

Totally agreed. A shell script exceeding about 10 lines of code becomes so complex to read, interpret and improve that, even though it may perform faster, I would rather get the job done in a language I can understand better when I come back to it 3 months later.

Although ... on that note I don't think I'd ever choose Perl ha ha.


One does not... simply choose perl.


> How do you find where the script itself is?

This question does not really make sense. There need not be any script, or there may be many at the same time. What exactly do you want? Imagine, for example, that you are piping the output of some commands into the shell.


>There need not be any script

Sure, but that is irrelevant.

When there IS a script, you should be able to get its full path much easier, that is the complaint.


I agree that it may be possible to implement this feature in a nice form where it makes sense (for the people who need it). But it seems really strange to me and I cannot imagine a use for it. Conceptually, the shell only sees the contents of the script, why would it act differently according to where the script is? When you open a jpeg image, the jpeg library does not provide a facility to recover the directory where that file was found. For all you know, this may be an in-memory jpeg. Isn't the same with shell scripts?


>Conceptually, the shell only sees the contents of the script, why would it act differently according to where the script is?

Doesn't need to act differently in what it does.

But it does need to report it, because e.g. you might want to bundle data/assets/config relative to the script, and want to be able to run it from wherever and have it still be able to find them.


It's an interesting question. In most shells, it is possible to "source" a given file, which basically is a form of file inclusion. How should the script location feature behave in that situation? Should it report the sourced file's location, or the sourcing one's?

In the latter case, does it mean that it is not possible to write reusable code based on that feature? In the former case, what should be done when sourcing is done recursively? Can this be used to defeat file location, for instance, by generating a script in /tmp and sourcing another file from it?


First of all: https://en.wikipedia.org/wiki/Argument_from_incredulity

Secondly, it's super frequent that scripts I write or that others write need the folder they're executed in as an argument.

99% of scripts I've seen aren't meant to be piped to. They're just launched as simple, dumb, standalone "programs".


In that case, of course, the method of finding where the script lives would return something that indicates there is no script. But also in that case, the commands you are piping into the shell probably are not looking for the files they were bundled with, they are probably working on files you've explicitly specified, and so there is no problem with not finding the script's location in that case.


This makes me really hope to see more uptake of PowerShell in Linux (and performance improvements in PowerShell too). The above is just $PSScriptRoot.


as I comment in basically every one of these shell articles, while Unix shell does kinda suck, the main problem is that the bad examples far, far outnumber the good ones, and so everybody programs in an overly complicated manner.

  SCRIPT_DIR="${0%/*}"
works in practically all shells, as long as the script is actually invoked by path (if you do curl script | bash, you're SOL right off the bat).

the same applies with "combination of grep, awk and cut". awk contains almost all the functionality of grep, sed, and cut, so any pipeline of awk plus one of the others is almost always unnecessarily complicated. it would be like complaining "in python, string handling is so hard. look, you have to do import re; for x in re.sub(...).match(5).split(' ')[3:-5]: lst.append(x)". of course if you make it overly complicated then it will be overly complicated.
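e.g. the classic process-lookup pipeline versus letting awk do the matching and extracting itself (firefox and the RSS column are just stand-ins here):

  # the usual overcomplicated version:
  ps aux | grep firefox | grep -v grep | awk '{print $6}'
  # awk can match and extract on its own:
  ps aux | awk '/[f]irefox/ {print $6}'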

there are also too many footguns. spaces and newlines are trivial to handle in 95% of scripts, as long as you quote every expansion. as long as you never ever write cmd $file and always write cmd "$file", that solves virtually all whitespace problems. quotation marks in filenames are never interpreted in a special manner in Bourne-like shell unless you use eval, which has the same pitfalls as any other interpreted language (see python above).
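for example:

  printf 'line one\nline two\n' > 'monthly report.txt'
  file='monthly report.txt'
  wc -l $file      # expands to: wc -l monthly report.txt   (two bogus arguments)
  wc -l "$file"    # expands to: wc -l 'monthly report.txt' (one argument: prints 2)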


Now invoke a script containing that code from the current directory, and you will see why his example was more complex.


    SCRIPT_DIR="$(dirname $0)"
It also works when you're not running bash.


This doesn't work if $0 contains whitespace. Fix it by using "$0" instead.


LOL. What kind of savage runs a script with whitespace in its name?


What kind of savage OS doesn't even have a standard text encoding for its filenames and even allows its filenames to be binary?

Just because we wish something weren't so, doesn't make it as we wish it.


That's essentially what he did, but he also accounted for the case of running a script from the current directory by running pwd.


> How do you find where the script itself is?

No program can do this portably.


I think you might be mixing two things up: it's impossible for a native program to find where it is, because argv[0] can be anything.

But it's possible for an interpreter to tell the script it's interpreting where it lives, i.e. argv[1]. For example, Python does this in a pretty straightforward way.


You're right, my bad.


This line was highly distracting for me, "To make matters worse, the shell has been mostly left for dead by both academia and industry, considering it an unsalvageable piece of junk that needs to be replaced at the first opportunity."

Um, false?

Many people in industry use and love using the shell.

Please read, "In the beginning was the command line."

The shell is by far one of the most powerful and expressive tools we have. Yes, there are many ways in which it could be improved, but JSON? That's not it.


Actually, the only time I started to find the shell limiting was working with JSON. But then I discovered jq and my issues were solved.


Yeah, except that jq, beyond 'jq .' is horribly unfriendly.


I had a similar complaint so I wrote this in ruby https://github.com/nburns/utilities/blob/master/rson


I started using jq as well -- was shocked to find the author was a former comp sci classmate of mine. It's very useful.


Do they love the shell or just piping small CLI tools? Because I’m fairly sure it is the latter, which can be easily implemented in anything.


A REPL is more expressive and allows for graphical and structured output, and regardless of what the UNIX culture books say, it precedes the shell by at least 10 years.


I think you can invoke Perl in command line mode. Do people use Perl like that if they want to use Bash command line?


From Section 2.2 "The Bad"

>> B1: Too arbitrary. The shell’s virtue of limitless composition (G1) is also its vice: the shell can compose arbitrary commands written in arbitrary languages.

This is what I fundamentally love about the shell, and you can try to pry it from my cold, dead fingers

>> B3: Too obscure. The semantics of the shell and common commands are documented in 300pp of standardese [7].

With respect to Bash, I couldn't agree more about the 'standardese' gripe. Man pages would be so, so much more useful if they prioritized showing examples of common/useful ways to use the command. I would love a 'man --examples $COMMAND' feature. Does something like that exist?


I don't recommend this ever, but I of course make this moonshine on my own computer:

I usually edit the man page to add cookbook/bugfix sections. Most of them are cookbook-style recipes. Whenever I find out about some obscure bug or gotcha (spaces in file names passed through a pipe is one of my frequent issues), I go to the man page of cut/tr etc. and add it to the page.

Next time I man for that, I remember what I did right a couple of years ago. Another thing I copiously modify is the "SEE ALSO" section for common commands, whenever I install new stuff from github etc.

I confess that it is not a proper solution, but it works for me. Man pages have continued in the gloriously terse initial style of Ken Thompson. Mere mortals like me need a nicer explanation. My machine's man pages for some commands (find and grep being the most modified) tend to look like the old "HOWTO"s on Linux.



Or, as an alternative with no installation or dependencies: `curl cht.sh tar`.

Obviously, replace 'tar' by whatever command you see fit. Works on any machine online with curl available.


That should probably be `curl cht.sh/tar`


The article recognizes that interop and backwards compatibility with existing systems is important for adoption and it does not try to impose a new clean room replacement. This is IMHO a good strategy that has proven itself in other contexts.

For example Kotlin/Clojure/Scala all started out focusing on interop with the base Java/JVM technology. They provide a new layer of modern language features that is still compatible with all the existing code without forcing a rewrite. Over time these new languages get adopted, some older libraries get a modern equivalent and that helps these languages to branch out into other areas: e.g. Kotlin Native, multiplatform and ClojureScript.

A similar thing can be seen with Typescript. It started as a type layer on top of JavaScript, making it easy to interop with old JS code and over time lots of pure Typescript library equivalents have popped up. Now that the Typescript ecosystem is flourishing we see technology like Deno that slowly moves Typescript away from its JS base.

For a post-POSIX shell to be adopted widely I think it will need to use a similar strategy.

Edit: re-reading my own comment, I might have just advocated for an Embrace, Extend, Extinguish strategy.


That only seems to work for a short period of time; eventually the platform adopts the most interesting features of the candidate replacements, and they fade away while the large majority of platform users keeps coding away in the language used to write the platform.


That's bad news for some new language authors, but not necessarily for programmers, right? These languages still succeed in pushing the norms of language design forward, and that's pretty great for the users of programming languages.


True, the only issue is that dreams of taking over the world need to be reined in.

On the only platforms where new languages kind of took over, the move was forced by the company that owns the platform.

However you are right, in every case even the system language for the platform got better.


This is the kind of strategy that some next-gen shells take up, most notably Oil Shell.

For my part, I think the old syntax sucks and has got to go, and if your new shell's language is good enough, people will want to deploy it where they might use it, just like any other scripting language.


Yes I'm glad that message is getting through. Although I also want to add the other perspective: Oil from a clean slate. I'm working on "A Tour of Oil" doc that will show that Oil is not encumbered by much legacy at all.

Even though it evolved from bash/OSH, and is in fact the same interpreter parameterized by some global options, it is a nice clean new language :)

It's surprising how much language evolution was enabled by "shopt" options, principled parsing, and algebraic data types!


The new doc sounds exciting! I'm looking forward to giving it a read.

I think the approach Oil takes is pretty cool. It just seems like a lot of work, and I'm much more interested in Oil than in OSH.

I can see how it might make Oil attractive to Linux distros for choosing as a system shell, though, since it means they could just ship the one binary, and then offer experienced POSIX shell users OSH as an escape hatch while still presenting Oil as the preferred way to write scripts.

It does also partially spare users from the problems Fish users face where CLI tools ship with scripts to initialize a bunch of env vars, and now suddenly you either need them to ship a Fish script, translate it yourself, or do some hacking where you run a BASH shell and mirror its env var changes.

So I can see some benefits for sure, but to me those are less exciting than the new stuff. :D


It's a bit clickbaity.

In the introduction, the paper mostly talks about ergonomics / maintainability (which resonates with an audience like me), but the proposed solution, "jash", addresses the performance problem (who cares???).

That said, the brief survey in the first half of the paper is moderately interesting. It's not a real survey but an advertisement of the authors' work. But they've done some things worth a quick glance, like formalizing the shell's semantics [1], which looks interesting, if not useful.

[1] http://shell.cs.pomona.edu/


The authors use some vague and unconvincing arguments about deficiencies of a well-established tool (the shell) in academic research to boost the relevance of their obscure research (formal methods and performance of the shell with regard to multiprocessing).

Without giving details of their work in the paper! Just tell me the damn problem your tool solves regarding unix shell programming and why it's cool. Don't waste our time with "rehabilitating shell" in academia.


> Unfortunately, this means that the behavior of a shell program cannot be known statically: a simple grep $PWD -in ~/.*shrc

This is a weird claim. Every program's behavior depends on its inputs in the general case. If you want to process static input you can do that using the shell. If your program should work on external inputs, you cannot statically determine its output regardless of the language.


I have a bunch of reservations, already when reading just the paper's abstract:

> The Unix shell is a powerful, ubiquitous, and reviled tool for managing computer systems.

Reviled? By those who use it often, it is more loved than reviled.

> The shell has been largely ignored by academia and industry.

Seems like a bit of an exaggeration, but it's true that corporations have a thing for creating custom proprietary software rather than providing shell-script-based solutions.

> While many replacement shells have been proposed, the Unix shell persists.

There is no single "the" Unix shell. bash is probably the most popular, but it isn't "the" shell, singular.

> Two recent threads of formal and practical research on the shell enable new approaches. We can help manage the shell’s essential shortcomings (dynamism, power, and abstruseness)

Again with "the" shell. Anyway, a shell should be dynamic and powerful; and I wouldn't say bash is _terribly_ abstruse. If you apply yourself, your code can actually be rather readable.

> and address its inessential ones. Improving the shell holds much promise for development, ops, and data processing.


the fact that you can make your own functions and aliases makes me really doubt that people really understand the point of a shell. sure treating everything like a string may seem really dumb. but its easy peasy. because, well everything is a string. once you accept that its not that hard. anytime i can predict i need to repeat a task i can just create a function like work_helpers__reset_dans_password_again, and then you have it there in your tab complete. or why not just run: work_helpers__reset_password dan

i think its because people dont want to rtfm. if they would rtfm, they would learn most problems have a solution that is not insane like the one they are about to start developing. you know how often i see this:

  cat somefile | grep thething

instead of just:

  grep thething somefile


The useless cat is actually very useful in practice - it is more readable and easier to modify.


> the shell’s semantics is black magic, specified in a 119-page impenetrable document that is the POSIX shell specification (with an extra 160pp on utilities!).

That's academia going pedantic and mystifying things right there. Strict POSIX compliance was never that important to most people. I never read the POSIX shell specification and never needed it. Writers of shells like bash or ksh probably did and apparently managed this "black magic" well enough to make their shells work well.

You can learn shell programming by yourself, from bash manpage and examples used in Linux systems. It's not rocket science.

Computer languages get adopted not because they have nice and simple spec, but despite not having a neat and simple spec. It's like with natural languages. There is no authority and no simple spec. And it's not an important problem.


Thank you. You've saved me bothering with TFA.


The researchers' main concerns lie elsewhere (making shell programs perform better, especially by better leveraging the parallel computing resources of supercomputing environments), but they do give a nod to stuff I care more about (usability and breaking out of the limits imposed by extant terminal emulator standards).

I hope that this jump starts a bit more research into shell languages. Maybe the veneer of authority given by papers suggesting various improvements will help draw people to next-generation shells that take up those improvements and advertise that.

If you were not aware, PowerShell has inspired a whole generation of next-generation shells built around structured pipelines, and they're very exciting. My personal favorite is Elvish, which also draws much inspiration from Fish, a lovely 'old-fashioned' (text-only pipeline) shell with a strong emphasis on interactivity and ease of use. One thing that makes Elvish special besides its ambition to be a pleasant and useful general-purpose programming language is its portability: it's typically distributed as a single static binary, and it even has native support for Windows.

When I think of 'the future of the shell', I absolutely think of Elvish, namely with respect to:

  * better portability
  * high aims as a 'real' programming language
  * built-in static checks where possible (for now just syntax, I think, but a more complete type system is on the roadmap)
  * sophisticated and responsive interactive experience out of the box
  * structured pipelines, structured pipelines, structured pipelines
To me, the performance issues the authors address are secondary to what I've enumerated above. But they're right that the fundamentals of the shell are great as well as overdue for some innovation.


I'm skeptical about academia being capable of helping the shell evolve. The paper is a good example why - they focus on justifying themselves and then on obscure academic ideas they like to pursue (POSIX formalization, highly parallel data processing) rather than usability for most users. Sometimes these align, but most of the time they don't.

Your description of Elvish is much better in that respect. I'd love a more capable shell with modern advancements like types and static checking. However, I don't get why structured data over pipelines is useful. You can already exchange any kind of data over a pipeline as a string of bytes. Sender and receiver have to be aware of the format in any case.


> I don't get why structured data over pipelines is useful. You can already exchange any kind of data over pipeline as string of bytes.

The Elvish docs have a pretty good answer to this here[0], but I'll give my own answer, too.

There are a few things. Structured pipelines spare you from having to do escaping and transformations in order to preserve structure.[1] It's also a natural complement to a type system. Some other next-gen shells call their structured pipelines ‘typed pipelines’ instead.[2] Structured pipelines also provide a nice mechanism for separating the visual/textual representation of data from its structure, so you can ‘convert’ values between different display formats without doing any parsing, in a convenient, extensible way. PowerShell has this with Format-List[3] and Format-Table[4], for example. This kind of pretty-printing isn't implemented in Elvish yet, but Kurtis Rader helped me outline one possibility in an open GitHub issue.[5]

On some level, structured pipelines just mean that commands ‘native’ to the shell language (builtins and functions written in the shell language) all understand the same types or data structures. This means you can ingest data from some external format (JSON, CSV, TOML, a SQL result, whatever) and then use common tools. Nushell's first example for working with pipelines[6] hints at one of these benefits: you can use a single command, like `inc` to increment the value associated with a certain key in a map, with data that comes from any format, and `inc` doesn't have to know anything about data formats or parsing.

Another way to think about it is that it lets pipelines have the same kind of type checking as functions called with arguments. In a typed language, you get type checking for your arguments when you call a function like

  (some_command arg0 (another_command arg1 arg2))
but in a way, pipelines are an alternative to subshells.[7] If you want passing values through the pipes to be as robust as passing them as arguments, you need structured pipelines.

You're right that if you are using a lot of external commands in a ‘raw’ way, you don't get to take much advantage of the structured pipeline. Imo PowerShell proves that in practice, if you have a big enough library ecosystem, you can leverage wrappers and ‘pure’ tools written in your shell language to get a pretty nice programming experience.

0: https://elv.sh/learn/unique-semantics.html#motivation

1: https://elv.sh/learn/effective-elvish.html#returning-values-...

2: https://murex.rocks/docs/user-guide/pipeline.html

3: https://docs.microsoft.com/en-us/powershell/module/microsoft...

4: https://docs.microsoft.com/en-us/powershell/module/microsoft...

5: https://github.com/elves/elvish/issues/1149#issuecomment-705...

6: https://www.nushell.sh/book/pipeline.html#basics

7: https://elv.sh/learn/effective-elvish.html#prefer-pipes-over...


Everybody comments on shell programming, but nobody comments on the main message of their paper: PaSH, POSH and JIT.

I must admit I did not understand their message at the first quick reading.

Formal methods? Well, I'm not sure whether anybody tries to prove their installation scripts (or whatever people would script) correct.

Parallelization? Well, if you're trying to do high-performance computing, is the shell really the place to start?


Well, they start with "arguments" for why shell programming is deficient, and these arguments seem vague, not well expressed, or incorrect. Most people just lose interest in reading further. And when you do read further, it's a lot of research field building with buzzwords that nobody who uses the shell cares about.


Something that I particularly hate with shell scripting is that if you modify a shell script while it is executing, it will break the running instance: bash does not load the whole script on startup; it reads it line after line while executing it. I wonder why this stupid and dangerous behavior has never been changed.


This is not true. I'm not sure what you're seeing, but neither bash nor zsh behave like this in Linux or MacOS.


GP is correct. Bash does not read and parse the whole script on start; that would be slow. It is done as needed.


Wrap the script with: if :; then ...; fi; exit
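A sketch of that pattern, with the exit moved inside the block so bash never has to read the (possibly edited) file again after parsing it:

  #!/bin/bash
  if :; then
    # ... the entire script body goes here ...
    echo "doing work"
    # bash must parse the whole if/fi compound before running any of it,
    # and this exit guarantees it never reads past this point in the file
    exit
  fi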


> Many projects focus on better interactive shells, at some cost to programmability [23]. Innovation in terminal emulators (like Fig and iTerm2) improve user experience, too

It's still early / unreleased but I'd like to add my own terminal [0] which lets the emulator have an understanding of user input -- rather than simply delegating to the shell.

I'm finding that the interactive pathways that open up are mostly-uncharted territory.

[0] https://media.handmade-seattle.com/terminal-click


One good point (maybe the only one?) in the article:

> Systemd uses its own variable expansion regime, slightly different from the shell's... encouraging you to run sh -c if you have an actual pipeline to run. Dockerfiles and CI config files are almost shell scripts, but there's no convenient way to just execute the RUN commands or script lists they contain. And few are the Vagrantfiles that don't call out to some provision.sh to set up dependencies. Giving up on the shell means that each 'modern' system tool will have its own janky quasi-shell language: a decidedly worse situation.


Recently, I have been looking at the Factor language http://factorcode.org. I think I would like to have a shell based on that.


I think what would be more useful would be a new "shell" which is based on a modern context, incorporating lessons learned from years of command-line UX design (OK, let's be honest, many programs didn't learn anything wrt. UX).

But adding yet another POSIX shell which maybe does some parts better, but in the end still doesn't have a great UX because it's a POSIX shell, seems kinda pointless IMHO.


This is the approach murex tries to take. It breaks POSIX compliance without remorse but retains compatibility wherever practical. This means it can have some of the flexibility of PowerShell (typed pipelines) and some of the conveniences of IDEs (events, better auto-completions, etc), but it still works with all of your everyday command line tools.

My ultra long term aim is to integrate the shell into a media rich terminal emulator so using the command line will have the speed and precision of a TUI, the power of an IDE but the rich content of a GUI.

I'm a loooong way off achieving that but the shell is already usable and has been my primary shell for ~4 years now. And I welcome any and all feedback on the current and any future builds.

At some point I'll publish a paper with my ideas.

https://github.com/lmorg/murex


Just an FYI: murex is (also) a very widely used front-of-house trading platform, which costs an absolute fortune and has an army of lawyers. You may want to reconsider the name...


Yeah, I get this a lot. I wasn't aware of the trading platform when I created the shell and was quite late into the development before I was made aware. The shell and the trading platform exist in different domains so as far as I'm concerned there isn't any trademark violation but if the trading platform (and their lawyers) disagree then I'm open to dialogue with them.

I think HN is uniquely placed where it has a lot of trading topics as well as IT so it wasn't until my shell started trending on here that anyone had even noticed the naming conflict. But my shell had been around for a good few years before it started trending on here.


I'm surprised I hadn't heard of Murex yet! It looks pretty similar to Elvish but the type system is more foregrounded (and more complete?).

Could you help me differentiate it from Elvish in terms of features and design?


I quite like elvish (https://elv.sh) for a non-POSIX compatible shell. It's by no means perfect, and it's not 1.0 so breaking changes are pretty frequent, but I find the language structure and syntax make it worth it. The docs on the site are a little haphazard too, but once you get a feel for it, you probably won't need to refer to them too often.


This. I want a shell that will contextually spit out plaintext in interactive mode, then a JSON object when scripted or piped.

Pretty sure this is what Powershell does, but the UI just feels so damn unnatural.
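(The detection half is at least straightforward for individual tools; a sketch with made-up sample values:)

  # made-up values, just for illustration
  name='example.txt'; size=42
  # pick an output format based on whether stdout is a terminal or a pipe
  if [ -t 1 ]; then
    printf '%s\t%s\n' "$name" "$size"                      # human-readable on a TTY
  else
    printf '{"name":"%s","size":%s}\n' "$name" "$size"     # JSON when piped
  fi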


I'd like most programs to implement a JSON interface, given the right flags. Or maybe the existence of some env var like $DEFAULT_OUTPUT=JSON.

I like this approach:

https://github.com/kellyjonbrazil/jc

> CLI tool and python library that converts the output of popular command-line tools and file-types to JSON or Dictionaries. This allows piping of output to tools like jq and simplifying automation scripts


Powershell does not use JSON. It's based on .NET objects.


Perhaps IDEs and shells could converge?


Murex does this. eg

Take a plain text table, convert it into an SQL table and run an SQL query:

  » ps aux | select USER, count(*) GROUP BY USER
  USER                   count(*)
  _installcoordinationd  1
  _locationd             4
  _mdnsresponder         1
  _netbios               1
  _networkd              1
  _nsurlsessiond         2
  _reportmemoryexception 1
  _softwareupdate        3
  _spotlight             5
  _timed                 1
  _usbmuxd               1
  _windowserver          2
  lmorg                  349
  root                   134
The builtins usually print human readable output when STDOUT is a TTY, or JSON (or JSONlines) when STDOUT is a pipe.

  » fid-list:
    FID   Parent    Scope  State         Run Mode  BG   Out Pipe    Err Pipe    Command     Parameters
    590        0        0  Executing     Normal    no   out         err         fid-list    (subject to change)

  » fid-list: | cat
  ["FID","Parent","Scope","State","RunMode","BG","OutPipe","ErrPipe","Command","Parameters"]
  [615,0,0,"Executing","Normal",false,"out","err",{},"(subject to change) "]
  [616,0,0,"Executing","Normal",false,"out","err",{},"cat"]
and you can reformat to other data types, eg

  » fid-list: | format csv
  FID,Parent,Scope,State,RunMode,BG,OutPipe,ErrPipe,Command,Parameters
  703,0,0,Executing,Normal,false,out,err,map[],(subject to change)
  704,0,0,Executing,Normal,false,out,err,map[],csv
and query data within those data structures using tools that are aware of that structural format. Eg Github's API returns a JSON object and we can filter through it to return just the issue ID's and titles:

  » open https://api.github.com/repos/lmorg/murex/issues | foreach issue { printf "%2s: %s\n" $issue[number] $issue[title] }
  348: Potential regression bug in `fg`
  347: Version 2.2 Release
  342: Install on Fedora 34 fails (issue with `go get` + `bzr`)
  340: `append` and `prepend` should `ReadArrayWithType`
  318: Establish a testing framework that can work against the compiled executable, sending keystrokes to it
  316: struct elements should alter data type to a primitive
  311: No autocompletions for `openagent` currently exist
  310: Supprt for code blocks in not: `! { code }`
  308: `tabulate` leaks a zero length string entry when used against `rsync --help`

Source: https://github.com/lmorg/murex

Website: https://murex.rocks


As much as this is an improvement in many ways, using JSON for this feels like it doubles down on part of the problem with the current standard. Every tool is rendering JSON only for the next tool in the pipeline to parse it back out.


Yeah, I do understand where you're coming from and I've spent a lot of time considering how I'd re-architect murex to pass raw data across (like Powershell) rather than marshalling and unmarshalling data each side of each pipe.

In the end I settled on the design I have because it retains compatibility with the old world while enabling the features of the new world, but it also behaves in a predictable way so it's (hopefully) easy for people to reason about. PowerShell (and other languages with a REPL, like Python, LISP, etc) still exist for those who want something that's ostensibly a programming environment first and a command line second, and I think trying to compete with the excellent work there wouldn't be sensible given how mature those solutions already are. But for a lot of people, the majority of their command line usage is just chaining existing commands together and parsing text files. Often they want something terser than $LANG, as a lot of command lines are read-once write-many, and thus are happy to sacrifice a little in language features for the sake of command line productivity. This is the approach murex takes. Albeit murex does also try to retain readability despite being succinct (which is probably the biggest failing of POSIX shells in the modern era).

What I've built is definitely not going to be everyone's preferred solution, that's for sure. But it works for me and its open source so hopefully others find it as useful as I do :)


A good binary format would be best, but JSON is already a step up - at least it's obvious where each value starts and ends. That said, maybe it wouldn't be too hard to offer a binary-serialized JSON format as well (I think BSON is the currently widespread standard)?

On a related note, I wonder if and how a pipe could handle "format negotiation" between processes? I.e. is there a way for a CLI app to indicate it can consume and produce structured binary data? Then the piping layer could let compatible apps talk through an efficient protocol, and for anyone else, it would automatically drop to equivalent JSON (and then maybe binarize it back up, if the next thing in the pipeline can handle it).


> A good binary format would be best, but JSON is already a step up - at least it's obvious where each value starts and ends. That said, maybe it wouldn't be too hard to offer a binary-serialized JSON format as well (I think BSON is the currently widespread standard)?

You can already use BSON. The data is piped in whatever serialisation format it's typed as but the type information is also sent. Builtins then use generic APIs that wrap around STDIN et al which are aware of the underlying serialisation.

So the following works the same regardless of whether example.data is a JSON file, BSON, YAML, TOML or whatever else:

  open example.data | foreach i { out "$i[name] lives at $i[address]" }
The issue is when you want to convert tabulated data like a CSV into JSON (or similar) since you're not just mapping the same structure to a different document syntax (like with JSON, YAML, BSON, etc), you're restructuring data to an entirely new schema. I haven't yet found a reliable way to solve that problem.

> On a related note, I wonder if and how a pipe could handle "format negotiation" between processes? I.e. is there a way for a CLI app to indicate it can consume and produce structured binary data? Then the piping layer could let compatible apps talk through an efficient protocol, and for anyone else, it would automatically drop to equivalent JSON (and then maybe binarize it back up, if the next thing in the pipeline can handle it).

That isn't that far removed from how murex already works. Supported tools can use common APIs to convert the STDIN into memory structures, and similarly convert them back to their serialisation file formats. So if you have a tool like `cat` in your pipeline, they can use the pipe as a standard POSIX byte stream, but murex-aware software can treat the pipeline as structured data. The drawback of this is if you're reading from a POSIX pipe into a murex-command, you might need to add casting information (see below). But the benefit is you're not throwing away 40 years of CLI development:

  # Using a POSIX tool to read the data file:
  # casting is needed so `foreach` knows to iterate through a JSON object

  cat example.json | cast json | foreach { ... }



  # Using a murex tool to read the data file:
  # no casting is needed because `open` passes that type information down the pipe

  open example.json | foreach { ... }
(`open` here isn't doing anything clever, it just "detects" the JSON file based on the file extension -- or Content-Type header if the file is a HTTP URI)


Outputting raw structs would also have its own issues. What would be reasonable? Protobufs?


Protobufs require each end of the comms to agree on the same schema. You'd need something that transmits key names, like JSON, YAML, TOML etc. If you wanted a binary format then you could send BSON (binary JSON), and murex does already support this. But pragmatically, a standard command line (or even your average shell script) isn't going to be consuming the kind of data that is so latency-heavy that the difference between JSON and BSON would impact the bandwidth of a pipe.

Worst case scenario, you're dealing with gigabytes or more of data; then you'd want a streamable format like jsonlines, where each command in a pipeline can run concurrently without waiting for the file to EOF before processing it. That's a situation most binary serialisations aren't well optimised for.


this looks like fun! thanks


What if there was a standard way to describe inputs, outputs and side effects of programs?

Think of it like writing/downloading type definitions for existing untyped code. With program descriptions like this shells could become smarter, generate warnings, abort before execution if types don’t match up, etc.


I think I like the theory of this but in practice I'm pretty sure I'd just get irritated by it. In most usage, there is only one input and output format, being plain text. If I specify a given type further and send it to grep/awk/sed/cut, I'd need to explicitly or implicitly coerce the given output to plain text, at which point I'd either lose the original type information or require some unholy transformations to retrieve the original type. I think this misses how I use the shell: sending plain text information to and from files and executables. The whole point is that there is no type system and everything is just plain text.

For something smarter that behaves more like a typed programming language, there's Perl, Python, etc.


Future is powershell


I have said it several times on HN, but it bears repeating: As much as I find shell programming approachable and enjoyable, it really should be avoided if you can help it.

Using `set -euo pipefail` and tools like Shellcheck help, but there's a limit.
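(For anyone who hasn't seen it, the usual defensive preamble looks something like this:)

  #!/usr/bin/env bash
  set -euo pipefail    # stop on errors, unset variables, and failures inside pipelines
  IFS=$'\n\t'          # optional: a stricter word-splitting default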

https://sipb.mit.edu/doc/safe-shell/

^ Says the guy who forked a project an hour ago just to fix a couple errors, found a subshell and said, 'You're done, go no further' :)


15 seconds on the shell beats 5 minutes trying to debug a Ruby or Python script.

That said, I would agree: as soon as something becomes complex or you need to do computation, looking outside the shell is probably a good idea.

The heuristic that I use is:

1. Do I need to do math? Move out of the shell.

2. Is the logic starting to get complicated enough that I need to consider creating modules? Move out of the shell.

3. Is there enough code to fill more than one screen? Out of the shell I go.

4. Is this going to be something that needs robust error handling and that will be used frequently by people other than me? Out of the shell.

I've found that as long as I don't hit one of those conditions, hacking around in the shell is quite powerful and useful.


> 3. Is there enough code to fill more than one screen? Out of the shell I go.

Completely disagree. Just parsing a command-line takes a screen's worth of lines. Hell, just the usage info message for a script can take that much.

> 4. Is this going to be something that needs robust error handling and that will be used frequently by people other than me? Out of the shell.

Disagree even more strongly. You're saying never to write shell scripts for use by many people. Actually, shell scripts make up a lot of a typical system's /usr/bin and /usr/sbin - and that is fine and proper.


For me (a Comms major/IS minor as an undergrad), most of what I'm manipulating is text and shell is just a tool to speed up the process and as such doesn't have those concerns.

Still, I've found Rake and Make - as simple containers for shell - rather useful and more flexible: newlines are sane, far less of this `\n` nonsense.

^ An odd complaint, but it just seems like the terminal or the environment should handle this without input from me.

Now if only I could do something about Make syntax requiring tabs...


Where the UNIX shell is concerned, Python always wins for me, other than for very basic scripts that automate launching applications, or a pipeline.


my colleague advised -uxe literally yesterday.. (I have a 30,000 line bash environment inherited from a chain-smoking author ten years ago.. no one wants to touch it.. it is built with autotools too!)


As much as I love shell ... there's no debugger worthy of the name. That makes it really hard sometimes.


With interpreted languages, "printf debugging" is really easy. It's one of their main advantages. I'd rather have "bash -x" and be able to edit the script than lose those features in order to gain gdb.


Having done Ansible/Puppet, I can only say: "Go away with your shitty DSL!"



