Right now the goal is to clone bash but have better parse time and runtime errors. I hope to make an initial release this summer. But the project is much larger (see the blog if interested).
This project is one of the coolest things in the Unix world today. I do not care too much about the Python implementation (writing it in C alone would be even cooler), but the work is just awesome.
Your blog post about "vectorized, point-free and imperative style" was so beautiful that it brought tears to my eyes. Thank you!
Wow thanks! I appreciate the encouragement. The architecture has settled, and I'm slogging through all the features now, so that helps.
Yeah Python wasn't the original plan, but it helped me focus on correctness. I was able to iteratively reverse engineer bash and other shells, while keeping the code relatively clean.
I had hoped to rewrite it in native code, but it's a huge amount of effort, so the first release will be Python.
Existence and correctness are both higher priorities than performance :)
I disguised Python by bundling a slice of the Python interpreter, which will allow me to rewrite it over time without user-visible packaging/deployment changes. I'm also able to fork the Python language because I have the "OPy" compiler.
One thing that should make it faster is using integer offsets everywhere instead of dictionary lookups, and that should be possible without much code change. It is written in a fairly "static" style, although with some important metaprogramming.
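To make the "integer offsets instead of dictionary lookups" idea concrete, here is a toy sketch (not the actual Oil code) of the difference in Python:

    # Toy illustration, not the Oil codebase: an AST node as a dict keyed by
    # strings vs. a list indexed by integer constants.  The second form trades
    # per-access name lookups for fixed offsets, which is cheaper and maps more
    # directly onto an eventual native-code rewrite.
    node_dict = {'op': '+', 'left': 1, 'right': 2}   # every access hashes a string

    OP, LEFT, RIGHT = 0, 1, 2                        # fields at fixed positions
    node_list = ['+', 1, 2]

    assert node_dict['op'] == node_list[OP]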
EDIT: I also hope to resume blogging more regularly once the code is released. I have a big pile of ideas/notes. But getting it out there in any primitive form is the most important thing right now.
I have to ask, from a student perspective: how does one acquire sufficient knowledge to write a shell? Does one need to take an OS course? Or is knowing data structures enough?
A shell should be significantly easier to write than, say, a compiler, although I think it's less well documented.
Much of the work in writing a shell is parsing -- I would estimate that parsing is 60% of the work, whereas it might only be 10-20% of the work of a compiler. This is both because interpreters are smaller than compilers (fewer transformations), and because shell is harder to parse than most programming languages.
But as you can see from my blog, it requires a few different parsing techniques, and there are some non-obvious choices I made that I think made it easier. (e.g. Lexer modes aka lexical state are a huge win for reducing complexity.)
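For anyone curious what "lexer modes" means in practice, here is a minimal sketch of the idea in Python - illustrative only, not Oil's actual lexer:

    import re

    # Sketch of lexer modes (lexical state): the same characters tokenize
    # differently depending on which mode the parser has put the lexer in,
    # e.g. outside vs. inside double quotes.
    RULES = {
        'OUTER': [
            (r'"',       'LEFT_DQUOTE'),
            (r'\s+',     'WHITESPACE'),
            (r'[^"\s]+', 'WORD'),
        ],
        'DQ': [
            (r'"',       'RIGHT_DQUOTE'),
            (r'\$\w+',   'VAR_SUB'),
            (r'[^"$]+',  'LITERAL'),
        ],
    }

    def read_token(line, pos, mode):
        """Return (token_type, token_text, new_pos) using the rules for `mode`."""
        for pattern, tok_type in RULES[mode]:
            m = re.compile(pattern).match(line, pos)
            if m:
                return tok_type, m.group(0), m.end()
        raise ValueError('no rule matched at position %d' % pos)

    # The parser drives the mode: switch to 'DQ' after LEFT_DQUOTE and back
    # to 'OUTER' after RIGHT_DQUOTE.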
The shell uses surprisingly few system calls -- fork/exec/wait, open/close/read/write, pipe/dup/fcntl, and that's almost it. These are well worth studying. File descriptors are non-obvious and essential.
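As a rough sketch (Python has fairly direct bindings to these calls), the core of the execution loop looks something like this - error handling mostly omitted:

    import os, sys

    def run_command(argv):
        """Bare-bones fork/exec/wait cycle; argv is a parsed command like ['ls', '-l']."""
        pid = os.fork()
        if pid == 0:                      # child
            try:
                os.execvp(argv[0], argv)  # replaces the child's image on success
            except OSError as e:
                sys.stderr.write('%s: %s\n' % (argv[0], e))
                os._exit(127)             # never fall back into the shell's own code
        _, status = os.waitpid(pid, 0)    # parent: wait for that child to finish
        return status

    run_command(['echo', 'hello'])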
I never took the OS class that most people take, which shows you how C and Unix work. This thread lists a few, and also check out xv6 in the parent thread:
I learned a lot about system calls by using strace on many programs over the years. I think Python helps a lot because it is interpreted and has relatively direct bindings to Unix system calls.
FWIW, although I learned interesting things about parsing, I don't think there is that much "hard" in the computer science sense about shell. It's mainly an exercise in software engineering -- how do you keep your program from degrading into an enormous mass of poorly-debugged if statements? (I would classify bash in that category, unfortunately)
As far as data structures, you should be able to write a shell without anything fancy at all. In fact some shells are basically just tons of linked lists. Linked lists make memory management easier in C, although I'm using a more modern, high-level style.
I also spent a significant amount of time reading bash, dash, and mksh code for this project. (and to a lesser degree zsh).
To be honest, the biggest gotchas I stumbled across when writing my shell were around handling PTYs. In the end I used someone else's module instead of writing my own, because it supported several different flavours of UNIX as well, so it seemed a pointless duplication of effort to roll my own. However, it was really humbling to discover just how little I actually knew about PTYs compared to what I thought I knew.
I don't know anything about PTYs either, since my shell is mostly a batch program like Perl right now. I'm viewing it first from a programming language perspective.
As mentioned, I'm deferring to GNU readline for interactive features. I'd be interested in what problems you had with PTYs and what module you used?
Are PTYs standardized by POSIX? It feels like you should be able to write a shell against only POSIX APIs (and ANSI C).
I'd also be interested to see your shell. There are a bunch of alternative shells and POSIX shell implementations here:
EDIT: I saw the sibling comment linking to https://github.com/lmorg/murex. Taking a look! A few months ago I started a thread with the authors of elvish, oh, mash, and NGS shells to exchange ideas. (elvish and oh are also written in Go.) I didn't know about your shell or I would have included you!
You'd think PTYs would be standardised - I certainly assumed they would be - but the process of assigning a PTY seemed to differ from UNIX to UNIX and I couldn't find any decent documentation that discussed PTYs in a concise, cross-platform way. So I started to look at whether anyone in the Go community had also played around with PTYs. The package I used was https://github.com/kr/pty. The code turned out to be surprisingly readable too.
The problem I had initially was not even realising that many command line tools check whether they're outputting to a PTY, and that this affects their behaviour. I'm sure this is all stuff you're already familiar with, but a few examples I noticed: grep wouldn't highlight its match, ls wouldn't output in multi-column view, and apt-get wouldn't give you many (any?) of its interactive options. I also wanted tools like vi and top to function the same in my shell as they would in Bash, but they couldn't without me assigning a PTY (I also needed to put the terminal into "raw mode" to pass through signal keystrokes like Ctrl-C and to disable echoing on stdin - but thankfully that was very easy in Go).
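(For anyone following along in Python rather than Go, the two ingredients look roughly like this - the isatty check that tools perform, and the standard pty module for giving a child its own pseudo-terminal. Just a sketch, not the kr/pty package mentioned above.)

    import sys, pty

    # Tools decide whether to behave "interactively" by checking if their
    # output is a terminal:
    if sys.stdout.isatty():
        print('stdout is a TTY: colours, columns, pagers, prompts...')
    else:
        print('stdout is a pipe or file: most tools switch to plain output')

    # pty.spawn() forks the command, connects it to a fresh pseudo-terminal,
    # and copies bytes between that PTY and our own stdin/stdout until the
    # child exits -- so the child believes it is talking to a real terminal.
    pty.spawn(['ls'])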
I've played around a little with GNU readline. It's a really nice tool, but I had a few cross-compilation issues in Go when porting it to other platforms (e.g. FreeBSD). So I used a pure Go package instead - which isn't without its own bugs, but at least it keeps my shell fully portable.
> A few months ago I started a thread with the authors of elvish, oh, mash, and NGS shells to exchange ideas. (elvish and oh are also written in Go.) I didn't know about your shell or I would have included you!
Thank you. My shell is only about 3 months old though (possibly less as I only named it 2 months ago) so likely wouldn't even have existed when you started your thread. :)
Writing a simple shell for unix is actually a common student project for CS students in their second or third year.
If you know C at the level of K&R (especially the last few chapters on system calls), you can learn what you need to write a shell from a book like Stevens' Advanced Programming in the Unix Environment and/or Bryant and O'Hallaron's Computer Systems: A Programmer's Perspective.
FWIW dash is 20K lines of C code, although it has many fewer features than bash.
I use metaprogramming in several places to make the code shorter, so 11K sounds about right for Python vs. C. I think it will be 15K with all the bash features filled out, excluding interactive parts.
I use GNU readline and not my own code, so that substantial part isn't counted.
The osh/osh.asdl file gives a concise overview of what features are represented and implemented.
There is more to writing a shell than simply reading commands and executing them after fork()ing. TTY handling must be done right¹ and interrupt and signal handling is rather tricky to get right as well².
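One concrete example of the signal trickiness: an interactive shell usually ignores SIGINT itself, but has to restore the default disposition in the child before exec, or Ctrl-C stops working for foreground commands. A minimal sketch in Python (not a full job-control implementation):

    import os, signal

    signal.signal(signal.SIGINT, signal.SIG_IGN)      # the shell ignores Ctrl-C

    def run_foreground(argv):
        pid = os.fork()
        if pid == 0:
            signal.signal(signal.SIGINT, signal.SIG_DFL)  # child dies on Ctrl-C again
            try:
                os.execvp(argv[0], argv)
            except OSError:
                os._exit(127)
        os.waitpid(pid, 0)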
Yes, good recommendation. I have mentioned it here on HN before. I got to read it when working in an HP joint venture, years ago. HP used to ship a complimentary copy of the book with every new HP-UX business server or workstation (CAD etc.) sold. IIRC the book even shows how to write a simple DBMS (not an RDBMS), with client and server parts, and how to write all that code somewhat portably (in C) across some Unix versions that were common then. Quite an achievement. The book is still available (in an updated version - after 19 years!):
It also shows how to set up pipes - using the pipe(), dup() etc. system calls. There are some subtleties involved in that. And that stuff is used for the shell he creates.
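For the curious, a sketch of those subtleties in Python - wiring up the equivalent of `ls | wc -l` with pipe(), dup2() and fork(). The easy-to-miss part is closing the unused pipe ends in every process, or the reader never sees EOF:

    import os

    read_fd, write_fd = os.pipe()

    if os.fork() == 0:                # first child: ls
        os.dup2(write_fd, 1)          # stdout -> pipe write end
        os.close(read_fd)
        os.close(write_fd)
        os.execvp('ls', ['ls'])

    if os.fork() == 0:                # second child: wc -l
        os.dup2(read_fd, 0)           # stdin <- pipe read end
        os.close(read_fd)
        os.close(write_fd)
        os.execvp('wc', ['wc', '-l'])

    os.close(read_fd)                 # the parent must close both ends too
    os.close(write_fd)
    os.wait()
    os.wait()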
Please excuse the self-promotion, but I'm also writing a Unix shell in Go. It's not designed to be a drop-in replacement for Bash (i.e. it follows a few Bash-isms but is essentially my own bespoke scripting language), however it does spawn processes in their own PTY and supports interactive commands such as vi and top, so it does work as a proper $SHELL. It's still very much alpha, but at a stage where I'm ready for feedback.
Writing a shell, like writing a compiler, is a very good exercise. Writing a compiler gives a feeling of power, but I think writing a shell teaches humility.
Then the idea is that instead of piping the whole result from command to command, your shell would compile a set of commands into a single program and run that.
Now you might say that that sounds like it goes against the Unix philosophy, but actually it doesn't need to. If all of the core utilities were implemented in such a way that their logic could be extracted without duplication of code then the shell can still be doing "one thing and one thing well".
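As a rough sketch of what I mean, suppose the utilities' logic were available as library functions (the grep/cut functions below are hypothetical): the shell could fuse the stages into a single pass over the input instead of spawning a process per stage.

    import re

    # Hypothetical library versions of grep and cut, composed lazily so the
    # whole "pipeline" runs as one loop in one process.
    def grep(pattern, lines):
        rx = re.compile(pattern)
        return (line for line in lines if rx.search(line))

    def cut_fields(indexes, lines, sep='\t'):
        for line in lines:
            parts = line.rstrip('\n').split(sep)
            yield sep.join(parts[i] for i in indexes)

    def run_fused_pipeline(path):
        with open(path) as f:
            stage1 = grep(r'ERR', f)                # like: grep ERR
            stage2 = cut_fields([0, 2], stage1)     # like: cut -f1,3
            for out in stage2:
                print(out)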
---
Another idea I have is to make these core utilities pipe objects instead of text, like on Windows. I am very fond of bash but I think one thing that Microsoft seems to have done right is to have the idea of being able to work on objects in PowerShell. But I don't want PowerShell. I want a mostly Unix shell, except, as I said, with objects.
UNIX shells already optimise pipelines by running the commands along them concurrently. In your example, the first grep will be outputting its results while it's still reading /var/log/somefile, and the second grep will be parsing the output of the first grep as and when the first grep produces it.
tail is a little different because it needs to find the end of the file but the greps certainly don't wait for each other to finish.
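You can see the concurrency directly from Python's subprocess module, which wires up the same pipe()/fork() plumbing a shell would (illustrative; /var/log/somefile is just the example file from above):

    import subprocess

    # All three stages start immediately and run concurrently; each reads its
    # predecessor's output as it is produced.
    p1 = subprocess.Popen(['cat', '/var/log/somefile'], stdout=subprocess.PIPE)
    p2 = subprocess.Popen(['grep', 'ERR'], stdin=p1.stdout, stdout=subprocess.PIPE)
    p3 = subprocess.Popen(['wc', '-l'], stdin=p2.stdout, stdout=subprocess.PIPE)
    p1.stdout.close()   # close the parent's copies of the intermediate pipes
    p2.stdout.close()   # so the children see EOF/SIGPIPE correctly
    print(p3.communicate()[0].decode(), end='')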
As for your "objects" point. The shell I'm working on (discussed elsewhere in this thread) uses JSON to pass objects around functions written for that shell while still falling back to standard flat text files when piping out to traditional UNIX/GNU/whatever tools. Though I still need to make the whole thing more intuitive.
I get what you're saying, but technically those tools are running inside the same single loop (that file is only iterated over once for most of that pipeline), just distributed across multiple OS processes.
Shell scripts are surprisingly efficient at pipelined work where you're just dealing with streaming data (as your example mostly does). It's the more complex logic they fall significantly short on.
The weakest area - and why I kept saying "most" of that pipeline - is the use of tail partway through the chain (as others have mentioned). But that could be optimised within Bash to something like:
tail -r somefile | grep 'expression' | egrep 'expression' | head -n number | cut ...
(I think the -r flag on tail is BSD rather than POSIX, though - GNU ships tac instead.)
Your "objects" point is definitely relevant though. For example, how many times have we seen shell scripts break because they don't consider spaces in filenames?
Talking about annoyances in Bash I'd also throw in exception handling as a major problem for traditional shells. Having Bash fail in expected ways and handle them cleanly is a huge pain in the arse.
Those two points above (and wanting cleaner syntax for iteration) are what drove me to write my own $SHELL, so it does sound like I'm addressing some of your annoyances - albeit not all of them. However, it's a long way from being something usable in production environments, but I'm happy to accept pull requests if you ever feel you want to contribute to something that's trying to work towards at least some of your goals :)
"... compile a set of commands into a single program and run that."
Linux: see busybox ("multicall binary")
BSD: see binaries on install media ("crunched binary")
sed and tail are both compiled into busybox
re: bsd compiling in tail with crunchgen is easy
As for "objects", the k language can mmap the output of UNIX commands and run parallel computations on those results as "data" not "text". It can be faster than I/O.
I intended for the delimiter to be tab in this example.
Using sed or awk is an option, yes, but I am so used to the standard utilities that I would rather keep using them.
Also I'm not sure if your sed script does what I intended for it to do.
1. Take every line that starts with 2017-05-29.
2. Out of the lines we have, remove any that contain INFO or WARN.
3. Take the five last of all of those lines.
4. Take the first and third fields of all of those lines.
Let's create a sample file
cat > /tmp/somefile <<EOF
2017-05-28T08:30+0200 nobody ERR: Foobar failed to xyzzy.
2017-05-29T13:01+0200 nobody INFO: Garply initiated by grault scheduler.
2017-05-29T13:37+0200 nobody DEBUG: Garply exited with 0.
2017-05-29T14:12+0200 nobody WARN: Plugh quux corge.
2017-05-29T14:55+0200 nobody ERR: PLUGH QUUX CORGE!
2017-05-30T00:17+0200 nobody ERR: Failed to retrieve baz needed for thud.
EOF
(Note that due to how HN formatting works you won't be able to copy-paste this because HN requires leading space to format as code but then it includes the spaces in the output also and we don't want leading space in the file.)
But that's just because you assumed that space was the delimiter when it was not.
Your use of the word "INFO" as a placeholder for the first two occurrences of the space character threw me off quite a bit when reading your script, since the word "INFO" occurs in the file we are working with itself - but it makes sense to use a word that we know can no longer be present, since we've already removed all lines that contain it. However, while a neat trick, those kinds of strange hacks are the kind of thing that has brought me to believe that having objects (like Microsoft PowerShell has) instead of pure text would be beneficial in Unix as well.
---
As for your comments on compiling into a single program and running that: I don't think you understood what I meant, or rather, I don't think busybox and those crunched binaries you mentioned perform the optimization I am talking about, do they?
---
Using the k language you mention in that fashion seems more like a hack and will require a lot of work each time. I would rather rewrite all core commands of my system so that they produced true objects, or actually, rather than objects just structured binary data. I don't need the output to have methods you can call.
One of the main things I want from structured binary data is to be able to select the columns of data by name instead of by index and without having a mess of some commands using tab for delimiter and others space and so on.
So instead of
zfs list -H | cut -f1,3,4
I would like to
zfs list | cut name used avail
for example, or something like that.
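To make the idea concrete, here's a toy version of "cut by column name" in Python. It assumes the producer emits a tab-separated header row - which is part of the hypothetical design, not how zfs actually behaves:

    import sys

    def cut_by_name(wanted, stream=sys.stdin):
        """Print only the named columns of tab-separated input with a header row."""
        header = stream.readline().rstrip('\n').split('\t')
        indexes = [header.index(name) for name in wanted]
        for line in stream:
            fields = line.rstrip('\n').split('\t')
            print('\t'.join(fields[i] for i in indexes))

    # Roughly what `zfs list | cut name used avail` would do in this design:
    # cut_by_name(['NAME', 'USED', 'AVAIL'])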
Also all commands that output tabular data must have a "header" command that will show the column headers. So to see the headers that the 'list' subcommand of the zfs command will output, I would say
zfs header list
And it would tell me
NAME USED AVAIL REFER MOUNTPOINT
and furthermore since I am authoring the shell, when you tab to complete a command it will call the binary with the header subcommand and output the headers so that you can have them easily accessible while you're working on a pipeline. And furthermore this shall work with pipes already present.
So if I type
zfs list | cut
and then tab to complete, it will call
zfs header list
and pipe that to
cut header
which will return the input it saw
Naturally, "header" will be a reserved word.
All commands will understand how to work with tabular data.
When you use grep you will either specify which column to use, or you can tell it to look across all columns with *
The shell will only expand * to file names when the * is positionally last in the argument list of a command since all commands will take the list of files as the last ones in their list of arguments.
---
All of this being said, I appreciate all replies, including yours.
Ctrl-V then tab. Or if using GNU sed just type \t.
We can reject this simplicity as a "trick" and demand something more complex.
But that suggests that the goal is not to solve the problem, it is to satisfy someone's desire for having some underlying complexity that moves the solution out of the realm of "trivial".
I don't have the citation but I believe they mentioned that the Chapel language also does this. Rather than materializing intermediate arrays, it just does everything element-wise in one pass.
I think this optimization may make sense in some cases that arise in practice, but probably not very many. There are a lot of other things that need to be fixed with the shell first.
Hmmn. A 'smart enough' compiler would combine the two grep patterns into a single pattern and run them in reverse, stopping at the first 5 lines. :-)
Even if we don't assume we have such astounding transformations, I think you're vastly overestimating the power of multithreading in this example. There's very little for each core to do, especially the final step (which is getting a whopping 5 lines to process).
All that being said, I'm not sure that, if I wanted highly parallel text processing OR deeply clever compiler-style optimizations, I'd start from "existing Unix utilities and their associated cruft".
Not everyone understood what I meant to say but you did, thanks :)
Starting from the Unix utilites is simply because I'm so used to them.
I can assure you, however, that anyone who was under the impression that my system was actually Unix would be in for a nasty surprise :->
My system would be, shall we say, not POSIX compliant.
My system would take the things I like about Unix - being command-line centric, having utilities focused on doing small individual tasks well, and using pipes to glue them together seamlessly (though, as mentioned, each pipeline would actually be compiled to an optimized program when run) - and the filesystem would remain essential and hierarchical.
I would separate Operating System binary data, Operating System config files, Application binary data, Application config files, User config files, User downloaded data and User created data STRICTLY!
Also like I mentioned, to have structured binary data instead of text. This would lead to commands which are named like in POSIX but which operate a bit differently with regards to the arguments they take.
Probably a lot of other things as well that I've been thinking about but which escapes me at the moment.
But yeah, all of that remains a dream because it would involve more work than I have time for :(
> Hmmn. A 'smart enough' compiler would combine the two grep patterns into a single pattern and run them in reverse, stopping at the first 5 lines. :-)
Only if `/var/log/somefile` is a regular file and isn't opened for writing by any other process - or unless your "smart enough compiler" doesn't care about preserving the original behaviour.
Mark Hibberd gave a great talk at Yow! LambdaJam this year on writing mundane code in Haskell, with a tutorial on writing a shell. The code can be found at https://github.com/markhibberd/xsh and the talk should be on YouTube in the near future (I hope).
Seems like a good time to point to Urchin, the Bash-ish like shell I wrote that also allows writing Ruby directly on the command line. I've been using it happily for years but haven't packaged up a release for a recent version:
fork() also clones all the file descriptors the parent has open into the child. It is the responsibility of the forking code to close the file descriptors it doesn't need - an easy way to cause "too many open files" errors otherwise.
(I read this article quickly, so apologies if I missed it.)
And then there's the fun of dealing with stdout and stderr at the same time, in a single thread.
One more fun thing to do with fork() is learning how ownership of a child process passes to pid 1: when a parent exits, its children aren't forced to exit with it - they're re-parented to init (nohup exists to protect such children from the terminal's hangup signal).
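A small experiment (Python sketch) makes the re-parenting visible; on a typical Linux box without a subreaper, the orphaned child reports pid 1 as its new parent:

    import os, time

    pid = os.fork()
    if pid == 0:                   # child
        time.sleep(1)              # give the parent time to exit
        print('child still alive, new parent pid:', os.getppid())
        os._exit(0)
    else:                          # parent exits immediately
        os._exit(0)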
There is an assertion that the child pid will be greater than the parent pid. While this is usually/often true, it isn't always true.
This isn't a big deal per se but is indicative that the author has a limited understanding of the subject.
Which itself isn't a big deal either, but the article would be improved by the author stating up front that they don't know the material very well and they too are learning as they go along.
This is not a blog post about how to write a shell, it doesn't explain how fork actually works, it doesn't actually make the program deterministic, and it takes far too long to explain the concept of independent concurrent processing. That's what's wrong.
If you want a detailed explanation with examples you'd have to paste several chapters from a book on Unix programming in the comment thread, although in this case there's probably already a StackOverflow thread that summarizes the problems with this post's subject.
Not mentioned there, but aside from kill() doing something special with -1, waitpid() also does something special with -1. The referenced article code could/would pass -1 to waitpid().
In addition to the obvious -1 check, one should never exit() nor return from main() if fork() returned zero. Sorry I didn't note that, but one can't explain all of fork()'s subtleties in a single sentence. If interested, refer to APUE or a similar book.
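A sketch of that discipline in Python (where a failed fork() raises OSError rather than returning -1, so the -1 check looks different than in C):

    import os

    def spawn(argv):
        pid = os.fork()
        if pid == 0:
            # Child: either exec succeeds (and this code is gone) or we _exit().
            # Returning from here would fall back into the parent's logic and,
            # with stdio buffering, can even flush the same output twice.
            try:
                os.execvp(argv[0], argv)
            except OSError:
                os._exit(127)
        # Parent: wait for this specific child, not -1 (which means "any child").
        _, status = os.waitpid(pid, 0)
        return status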
Forks are too sharp for children, if you like that pun.
Perhaps this "third chapter" is not as universal as you expect; nor, I believe, is a formal CS education as universal among the HN population as you seem to be assuming.
Diving in and trying it out is a good way to learn. Sharing your results can be helpful for others, even if you haven't learned very much yet; it can be a good starting point, and others can correct your misunderstandings and comment on areas you might investigate further.
I'm always happy to see people who have enough intellectual humility to show their incomplete work as they go, so that others can benefit from the learning process.
I've been a professional dev for a decade now, but failed out of college due to mental health issues (depression). I know a ton about stuff higher in the stack, but I've never spent much time writing C and mucking around with the OS itself.
This project is interesting enough to me to be worth the time to read, and it's given me the idea of writing my own shell as a means of learning Nim.
https://github.com/oilshell/oil (osh/ and core/ directories):