The shok command shell (shok.io)
65 points by nfoz on Dec 29, 2013 | 49 comments

Actually, TempleOS has a really neat command-line / shell.

The command line feeds directly into the C compiler. The C compiler can JIT, and the "HolyC"[1] language has modifications which make it much nicer for command-line use.

The author added default args to C, and parentheses are optional on functions with no arguments (or all default) so:

    Dir; is shorthand for Dir("*.*", FALSE);
Even cooler, it's possible to go in and edit the Dir command on the fly; since it's JITed, your changes will be there the next time you run it.

[1] OK, the name is quirky but most of the changes from C have sound reasons.

The VxWorks shell is similar, but not as advanced. It's basically a stripped down C interpreter and you call functions directly instead of commands.

Honestly, I don't find the pain point of shell scripting languages, in terms of programming in them, to be that they don't look enough like "real" programming languages. I think that at a basic level the way POSIX shells work is quite natural for the tasks they're expected to perform. There's even a certain elegance to things like [ being (potentially) a program that behaves as any other would.

It is quite primitive as languages go, though, and has some ugly quirks, but I think it's a mistake to look at it and say it needs to be changed to be more like things not designed to treat running programs as primitives. I'd rather see something from someone who said "bash is great but could be so much better!" than someone who said "bash is TEERRRRRUUUBLE and we must replace it!"

That said, what I want much more than a new shell is one that isn't stuck letting me reason only about the machine I'm logged in on. There have been a few things that call themselves distributed shells, but they're really more orchestration tools.

In 35 years I've never been able to write more than ten lines of bash-level scripting without saying "Screw this, it's unreadable and unmaintainable" and writing the thing at hand in something else. (I think the longest non-trivial batch file I ever wrote was about 20 lines. I hated it). The "something else" changes with platform and decade, but usually it's native, or at least something like Python or C#.

Makefiles make me feel the same way, but here I have little choice. You can structure them "well" but they remain essentially undebuggable bits of hell that always seem to fall apart at the worst time.

I've worked with systems that had multi-thousand-line batch files and shell scripts, and my reaction has been "this crap is unsalvageable". A lot of those systems are still in production, and the people on them hate their lives. The code is a mass of spaghetti with a bunch of behavior modified by global and environment variables.

About 30 years ago I made the statement that "BASIC programs should self-destruct after 50 edits", as a way to encourage users of that language to move on to something better, and to limit the damage that a "mission critical" 2 KSLOC hunk of bad code can cause. I maintain this would be desirable for bash and batch-like stuff, too...

I agree. (And booting a Debian system invokes many tens of KSLOC of the things, e.g. in /etc/init.d/.)

I must be wired differently. I can get a for loop right the first time I write it in C, JavaScript, PHP, and especially Python, even if I don't use them daily. I don't think I've ever gotten a for loop right in bash. And ifs are always weird too. And letting a bracket be anything is horrific: like if we used "p" as the full stop in English, and "(" as the name of a country. No thanks; remember, brainfuck is a joke.
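For reference, a minimal for/if in POSIX shell looks like this (a sketch; nothing bash-specific is assumed):

```shell
# Print each number, flagging 2; note the do/done and then/fi pairs,
# and the mandatory spaces around the brackets.
for i in 1 2 3; do
    if [ "$i" -eq 2 ]; then
        echo "two"
    else
        echo "$i"
    fi
done
```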

It's not that [ can be anything (practically speaking it can't, actually, since it's often/usually a built-in that supersedes PATH). It's that [ obeys the rules of the 'language', where it is a program run on its arguments.
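The point is easy to demonstrate in any POSIX shell: whether [ resolves to a builtin or to an external binary, it is invoked like any other command, with ] as its final argument.

```shell
# '[' obeys the language's own rules: it is a command whose arguments
# happen to look like an expression, with ']' required as the last one.
[ 3 -lt 5 ] && echo "yes"
# 'test' is the same program under another name, minus the ']':
test 3 -lt 5 && echo "also yes"
```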

I agree that the surface syntax is bad. Unlike other languages with clear Algol influences, it never got past thinking that writing "if" backwards ("fi") to close the block is clever. I'm all for fixing these sorts of things, if you're going to break compatibility with POSIX anyway.

Shells are what people use every day, so it's tempting to make more out of it. grep, sed and awk are clumsy tools, but they are more "within the reach" than python or ruby.

Also, you miss one particularly elegant aspect of shell programming: the pipeline. It's a powerful facility for concatenative programming, but most people haven't realized it. Try to write "find . -name '*.py' | xargs cat | wc -l" in other languages. You end up with parentheses, intermediate variables, or a loop, and in any case much longer code.

Yeah, the UNIX philosophy rocks! It's weird that they want a system shell that isn't POSIX. I don't need support for indexed arrays in a system shell interpreter.

Even Erlang can be a shell, wrapping the system toolset and making your basic shell scripts work concurrently.

For everyday usage the core ash shell is perfect for scripting, and zsh, csh, or Korn for interactive usage.

> For everyday usage the core ash shell is perfect for scripting

In my experience, most people have great difficulty correctly writing a simple loop to rename a set of files in a directory. I simultaneously appreciate the UNIX philosophy and think the shell can be much improved by moving away from POSIX.
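As an illustration of what people typically get wrong, here is a rename loop that handles the usual traps (spaces in names, an empty glob); the .txt/.md extensions are just placeholders:

```shell
# Rename every *.txt in the current directory to *.md,
# safely handling names with spaces and an empty glob.
for f in *.txt; do
    [ -e "$f" ] || continue        # glob matched nothing: skip the literal '*.txt'
    mv -- "$f" "${f%.txt}.md"      # '--' guards against names starting with '-'
done
```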

The omission wasn't because I'm unaware, believe me. Maybe I just consider it so thoroughly fundamental as to be not worth noting.

This is where some of the warts at a higher level than syntax come in, though, to be fair. Your example above fails in confusing, and maybe even dangerous (depending on the command at the end), ways if there are spaces in the filenames. One thing I would like is a more rigid sense of arguments vs. strings and how they come out of such situations. Having to add -print0 to find and --null to xargs to make it safe for that situation is tedious and unfriendly.
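Concretely, the safe variant of the earlier pipeline pairs find's -print0 with xargs -0 (GNU xargs also spells it --null), so filenames are delimited by NUL bytes instead of whitespace:

```shell
# Whitespace-safe version: NUL-delimited filenames survive spaces
# (and even newlines) in names.
find . -name '*.py' -print0 | xargs -0 cat | wc -l
```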

> Your example above fails in confusing [...] ways if there's spaces in the filenames. [...] Having to add -print0 to find and --null to xargs to make it safe for that situation is tedious and unfriendly.

Yes, that's one of the problems I'm trying to solve :)

PowerShell solves this by having pipelines pass .NET objects, but I found them too heavyweight.

Agree that trying to turn it into an object pipeline is way too heavy.

map, fold and filter can approximate the pipeline pretty well. Personally I only use shell scripting when I actually want to record a set of commands I ran (or, well, when working in a shell); in all other cases I whip out proper tools.

Here's a comparison of code to do the same thing written in Ruby:

    Dir['**/*.py'].map{|f| File.readlines(f).length }.inject(:+)
    find . -name '*.py' | xargs cat | wc -l
That's 60 characters versus 39 (most of the difference is just because the "commands" are longer, and I don't think anyone would even try to argue that 'cat' or 'wc' is more readable than 'File.readlines' or 'length').

I ran them on Python 2.7's library directory and my version counted four lines more. I didn't investigate, but I think yours missed lines when catting files without a trailing newline. I'd argue that's a bug in your code :)
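The suspicion is easy to confirm: wc -l counts newline characters, so a final line with no trailing newline isn't counted.

```shell
# wc -l counts '\n' characters, not "lines" in the intuitive sense:
printf 'a\nb\n' | wc -l    # 2
printf 'a\nb'   | wc -l    # 1: the last line has no newline, so it isn't counted
```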

Though it's mostly a matter of taste, I find pipelines simpler and more natural than method calls. Compare:


    filter p xs | map f
(Suppose filter and map are implemented such that they accept input either on the command line or from pipeline.)

Well, let's try Haskell syntax:

    map f $ filter p xs
What, don't like the reverse order? That's fixable. If we define # as reverse function application, it's

    xs # filter p # map f

    filter p xs # map f
which is quite similar to what you are looking for.

> Try to write "find . -name '*.py' | xargs cat | wc -l" in other languages.

    import pipe
    find(name='*.py') | xargs(cat) | wc().lines
The implementation of the functions is left as an exercise for the reader, but the syntax is valid.


> The implementation of the functions is left as an exercise for the reader

This is kind of the point.

Yes, it's possible in Python, but doing that means working against all the idioms and the Zen of Python.

Languages are not only about what they make possible, but also about what they encourage.

The problem with shell scripting is that it's not meant to be a programming language at all. Its philosophy is that anything that can be a program should be, with a few capabilities and builtins sprinkled on top.

I've always been a bit surprised by the lack of innovation in *nix shells. Despite huge advancements in technology, bash is pretty much the same as it was ~25 years ago. Personally, I use the fish shell everywhere I can, including on my Mac and Linux servers. Bash scripting works, but this idea looks to make scripting a much simpler and more straightforward task.

If you are interested, I'm working on (another) experimental Unix shell: https://github.com/xiaq/das

It's still in the design stage (even the name isn't actually decided!), but I've implemented a few components here and there, most notably syntax highlighting, tab completion for file names, and running external programs and pipelines of external programs. I will announce it when I use it personally day to day. :)

I agree with you. It's really interesting to see that in the scripting/programming-language domain there is an ongoing movement, whereas on the shell side nothing big has happened: POSIX shells change only incrementally, and the alternatives offer mostly cosmetic/ergonomic changes (e.g. fish).

I guess it has to do with the fact that the set of *nix commands is the library: every shell has to support the process-invocation mechanism and conventions like environment variables, and every shell must provide concise syntax for files, streams, IPC, environment variables, and jobs, which in "normal" programming languages is less useful.

This bugged me so much that I tried to find out whether there have been any alternatives for *nix shells that are completely different from the POSIX shells, and the only ones that I found interesting are ES and SCSH: https://news.ycombinator.com/item?id=6979170

My preference is for ES. However, there doesn't seem to be much of a community around it. I would really love to see a bit of love for these projects.

Have you tried zsh?

zsh is not innovative. It's overgrown.

Define innovative. One major goal of any shell is to get nasty shit done as fast as possible. Conventions accreted over time and breaking those too much diminishes the viability of an alternative project.

rush Ruby Shell [1] might qualify, but it has the problem of any "innovative" shell: different special syntax and doing something so different that the transition curve is too steep for most people.

Zsh is kind of weird because it feels academic, with the option of having every feature imaginable AND no hashes or signatures on its releases. The good thing about zsh is that most of it isn't loaded by default. It has modules. It has lots of neat options lacking in other shells, such as case correction and interactive completion. That's all kinds of innovative.
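For instance, the case correction mentioned above is a couple of lines of configuration (a sketch for ~/.zshrc; the matcher-list pattern lets lowercase input complete against uppercase names):

```shell
# ~/.zshrc: module-based completion with case-insensitive matching
autoload -Uz compinit && compinit
zstyle ':completion:*' matcher-list 'm:{a-z}={A-Z}'
# Offer spelling corrections for mistyped commands, too:
setopt correct
```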

Bash is the mysql of shells. It's pervasive so there's loads of pressure not to innovate.

I would like to see the shell community become more like Node's, with minimal, focused packages of added functionality that each do one thing well, because having to re-declare the same primitive functions to do something DRY and useful is a pain.

[1] https://adam.heroku.com/past/2008/2/19/rush_the_ruby_shell/

I am happy to see some effort going into the shell domain. I hope that the authors of shok will look into some other non-POSIX shells that are almost forgotten now, but in my eyes would really deserve to live on: ES [1] and [2] and SCSH [3] and [4].

[1] http://stuff.mit.edu/afs/sipb/user/yandros/doc/es-usenix-win...

[2] https://github.com/wryun/es-shell

[3] http://scsh.net/

[4] https://github.com/scheme/scsh/network

Thanks for the links, I will look closer at these shells. Are there any specific ideas that you think they do well?

There is also fish (the friendly interactive shell). It has been around for a while, so it is in most Linux distributions.


Though a Unix stalwart, I am finding the developments in PowerShell exciting. In particular, the concept of PSDrives: filesystems which reside "inside" a program and which the shell can connect to for standard hierarchical navigation and object/method execution. This makes little sense for a system which strictly adheres to the Unix philosophy, i.e., all programs perform one task and are so simple that a PSDrive would expose nothing of benefit. However, personal *nix systems, like Windows, now frequently run large GUI applications which would benefit from having their complexity exposed through a PSDrive.

Perhaps the open source community is currently working on something like PowerShell's PSDrives, but adoption by the teams producing large apps would be critical to its success.

Anyway, the ARexx ports from the AmigaOS era are back again. The world has come full circle.

> Perhaps the open source community is currently working on something like PowerShell's PSDrives, but adoption by the teams producing large apps would be critical to its success.

Plan 9 (http://plan9.bell-labs.com/plan9/). Virtually all of Plan 9's system services are exposed as filesystem servers (http://plan9.bell-labs.com/sys/man/4/INDEX.html). Some of these may astonish you: a windowing system, an FTP client, an SSH server, an authentication agent... Overall, the system is much simpler than PSDrives: PSDrive providers are full-fledged .NET objects, while Plan 9 filesystems are just a directory structure plus text files. Yet they achieve roughly the same thing. (Actually Plan 9 does more, but I believe in principle PSDrives could be used to do these as well.)

Similar things (exposing system services as filesystems) are actually possible on Linux or BSD with FUSE. You can wrap utilities and expose them as file servers, and there has been some limited success. But as long as upstream utilities don't adopt the filesystem as the primary API (which is unlikely), I don't think it's really going to take off.

What are the advantages of using this vs using the REPL of a mature language like Ruby that already lets you execute shell commands?

I find myself using %x() while in irb all the time.

I think a shell is where running programs is the default, and writing code is secondary. In shok, the code is a DSL for filesystem and job management. I think that's quite a different type of thing than general-purpose scripting languages like Ruby.

My typing barrier for %x(ls) is too high.

But Ruby is perfectly suited to writing a DSL. There's a large and growing number of system-administration tools that use Ruby to do exactly that.

You could totally make Ruby turn ": ls" into "%x(ls)", or whatever other symbol.

This just seems a lot more work for the same thing.

Sure, but go the extra step: Drop the mandatory :, make just "ls" run ls. Then you need a syntax to get at the programming language, not the other way around. That's all that's happening here.

shok could have put ruby or python or some other language behind its {} syntax. That would be less NIH-syndrome, but maybe there's merit to trying something more ambitious. Or maybe not.

How about supporting shebang-style blocks?


    do some python...


I've never seen %x() but good to know. I prefer backticks though.

%x() works for me because it gives me an obvious context clue about what I'm doing. After a long day in front of the computer I tend to slip up spotting the difference between ' and `.

Also, I'll just leave this here: http://devopsanywhere.blogspot.com/2011/09/how-ruby-is-beati...

Finally, a Unix shell whose name doesn't end in "sh".

(Actually plan9 rc is earlier than that.)


It's usually known as ksh, isn't it?

Looks like tclsh to me.

My thought as well. Though mocked at work for using Tcl/Tk, I have been employing the language for personal projects successfully over decades.

Would be nicer if it borrowed syntax from a subset of an existing high-level language.

My side project ljsyscall [1] is a LuaJIT wrapper for the system-call and other kernel APIs (e.g. netlink to configure networking). It's not really designed as a shell per se (although you can use it like that), but it is designed to expose Lua data structures for the output of things: ls will return an iterator, network interfaces are hash tables, and you can modify the components to reconfigure, etc.

[1] https://github.com/justincormack/ljsyscall

host> python

That's all you need. ls is a little trickier, but nothing that a very simple init.py couldn't fix; then you'd have the full and awesome power of Python to do shell things. We don't need a new *sh that half-heartedly implements a language, otherwise we'll just end up with another syntactically primitive language like bash, zsh, etc.

Maybe it turns out brilliant and we realise we do need it.

ipython maybe?
