
Elvish – An experimental Unix shell in Go - networked
https://github.com/xiaq/elvish
======
xiaq
Author here. Wow, I completely didn't expect this to hit HN this early. I did
a talk at FUDCON Beijing 2014 (slides: [http://go-
talks.appspot.com/github.com/xiaq/elvish-fudcon201...](http://go-
talks.appspot.com/github.com/xiaq/elvish-fudcon2014/elvish.slide)) and warned
that "this might eat your ~, so don't post this on HN yet". My original plan
was to only post this to HN when it's usable, and by "usable" I mean I use it as
my login shell on my laptop. (My login shell is still /bin/zsh for now.) But
apparently someone else just came across it on the Internet and I should have
put up this warning in the README. :)

I'm on the go now, and will come back to add more details and try to respond
to questions here.

~~~
tinco
In one example you pipe a list through some functions. Does every function
parse the list from a string, or is there some protocol-like side channel
carrying the typing?

You said you had read about PowerShell; it would be interesting to know if you
plan on doing something with data structures as well, as that is its power
feature :)

------
comex
Huh, wow. I was recently thinking about and prototyping a shell seemingly very
similar to this. Explicit goals I have in common include:

\- Focus on the lispy parts of sh - prefix functions everywhere and a simple,
regular syntax. Not that anything with FD redirections as a primitive can get
anywhere near S-expressions' simplicity, but shell is naturally more regular
than most programming languages (even if bash goes out of its way to be
complex).

\- Be a real programming language that doesn't make you reach for awk or perl
(separate, incompatible environments) to do moderately complex things sanely.

\- Emphasize using pipelines rather than 'backwards' function application to
naturally string together operations.

\- Emphasize lambdas.

\- Typed (i.e. non-string) pipes.

\- Syntax highlighting.

\- Nonzero-exit modeled as exceptions. (I think elvish is doing this, but I
haven't reviewed the code in detail.)

Things I want but don't see in the readme include:

\- A story for passing typed data around between processes (potentially in
different languages) rather than just builtins. I'm not sure exactly what the
story should be, but it should exist.

\- A somewhat more succinct syntax, with metaprogramming kept in mind.

\- A JIT, eventually.

There seems to be a lot more in the first category than the second... give up?
But elvish isn't anywhere near done, and I want to do things all my own way
for once. I'll keep going, and post my project on HN if it gets anywhere. :)

But to the OP, congratulations on elvish. Hope it gets finished and seriously
takes off.

~~~
Dewie
I think the points about types are really interesting. It sounds like a worthy
project for that alone.

Does there exist anything like 'type protocols' across languages? Would it be
feasible?

~~~
comex
Well, there's JSON and variants. Probably a lot simpler than what you had in
mind, but if Unix pipes were based on JSON (and had a suitable shell language
to work with it), that would already make many tasks easier, like

\- Grep the output of ps, without having to count column numbers or do special
work to preserve the header line after filtering.

\- Search for files matching some conditions, without remembering the weird
and unique syntax of find(1) - instead you would write something vaguely along
the lines of

    
    
        ls | filt {< $1.size 500} {eq $1.type file}
    

where the stuff after the pipe is the shell language, so theoretically more
memorable than

    
    
        find -size -500c -type f -maxdepth 1
    

As a bonus, if you want to cache the output for multiple searches rather than
going through the kernel every time (OS X find is rather slow on huge
source trees), you can save the output of ls to a file and keep the rest the
same. Good luck doing that with find.

\- Handle filenames with spaces and newlines without issues. Simple, but
newlines at least are near impossible to solve in the standard shell
environment - only a few select tools support null-separated lists. Spaces are
easier... unless you want such extravagances as multiple filename columns in a
table.

Of course it's not that simple: structured data can sometimes be hard to work
with and formatting for display is an issue; Unix's "everything should be
plain text" philosophy didn't come out of nowhere.

But I honestly believe that it's time to reexamine that tenet.

~~~
Dewie
Let's take a concrete example: Say you have a programming language L that has
a parser that is implemented in lang. A, and an interpreter implemented in
lang. B. Now if you want the interpreter (in B) to interpret the program, you
need some structured data to come from the parser (in A). Abstract syntax
trees (ASTs) are the usual conceptual tool for this. Now assume that the
parser outputs an AST that is described by something, like for example an
algebraic data type (ADT). Assume that the structure of the ADT conforms to
some type protocol. Now the interpreter can implement the ADT in its language
(B) and thereby take the ADT directly as input. Now you can use a "typed
pipeline" to easily combine these two programs:

> parser directory/file.lang | interpreter

Does this make sense? Would it have some utility?

------
heavenlyhash
Tangentially related: readers interested in alternatives to bash might be
interested in a golang library called Gosh that provides a shell-ish DSL, which
you can see an example of here: [1].

Author here; I use it regularly to replace bash scripts in system glue code --
using golang channels to pipe data between shell commands is awesome. It's
heavily (heavily!) inspired by amoffat's "sh" library [2] which lets one call
any shell program as if it were a function.

But, Gosh exists to scratch some itches as a shell scripting alternative. It's
nowhere near the full-fledged interactive shell environment Elvish is gunning
for. Elvish looks to have a very exciting future :)

[1] [https://github.com/polydawn/pogo/blob/master/gosh-
demo.go](https://github.com/polydawn/pogo/blob/master/gosh-demo.go)

[2] [https://github.com/amoffat/sh/](https://github.com/amoffat/sh/)

------
thinkpad20
This looks really cool and promising! I was thinking of a shell along these
lines, props to you for writing it.

I do take issue with one of the things you wrote though:

> a more complex program is formed by concatenating simpler programs, hence
> the term "concatenative programming". Compare this to the functional
> approach, where constructs are nested instead of connected one after
> another.

There's nothing about the functional approach that necessitates writing
"nested" functions, as you describe. With higher-order functions, you can
structure your code in almost any arbitrary way. In particular, haskell's >>=
operator has this behavior, and you can easily write an operator like

    
    
        x |> f = f x
    

to facilitate something like

    
    
        2 |> addOne |> timesTwo |> show |> reverse |> putStrLn
    

Or whatever one desires :)

~~~
xiaq
I was actually shown this by another Haskeller once, and I agree that Haskell
is a very expressive language allowing for a myriad ways of doing things,
including the concatenative paradigm.

But I have always suspected that if the language doesn't explicitly endorse a
particular paradigm, you will find it pretty awkward to work with other
people. That is why less expressive languages that emphasize particular ways
of doing things still make sense.

------
mathetic
Heavy use of backquote might be a problem because it's one of the first keys
to drop when keyboards are localised.

~~~
currysausage
+1. On German keyboards for example, the backtick is on a dead key, meaning
that it is only printed if you press the spacebar afterwards. Thanks to dead
keys, you can e.g. get "é" by typing <´> <e>. But it is a severe annoyance if
you actually want to get "`" (<Shift>+<´>, <Space>).

~~~
bartbes
I switched to using a compose key instead of dead keys; I end up writing much
more code (and English) anyway, so it's more efficient than having dead keys
in the end.

~~~
levosmetalo
I'm glad I switched to Colemak way before having to type german umlauts, so I
had to learn a few easy combos and keep the rest of the layout unchanged.

------
LesZedCB
The shell looks great, good work!

On a side note, I have to say I love README's that look like this. The author
did a couple things I really like. 1) They attributed feature ideas to the
people they got them from. 2) They listed the good things right along with the
bad/lacking. 3) Screenshots.

------
qmaxquique
Hey guys! For anyone who wants to try Elvish without having to deal with
golang compilation and such, I just created a terminal.com snapshot. Just
register and spin up my elvish container.
[https://terminal.com/tiny/UtZ8VSgWJL](https://terminal.com/tiny/UtZ8VSgWJL)

As it's in development, I will upgrade it again in a couple days.

------
ridiculous_fish
Hey, that name looks familiar!

Best of luck to xiaq, a fellow fish shell contributor. It's definitely good to
see more innovation in the ossified command-line shell space.

Assuming this is planning on using Go's concurrency support, it will be very
interesting to see how it deals with the nasty interactions between fork and
multithreading.

~~~
xiaq
Thanks ridiculous_fish. I'm still more or less following fish development, and
elvish owes a lot to fish.

For now syscall.ForkExec
([http://godoc.org/syscall#ForkExec](http://godoc.org/syscall#ForkExec)) is
sufficient for me. It is written to avoid async-signal-unsafe calls between
fork and exec, which is also what fish does IIRC.

~~~
agentS
Out of curiosity, why do shells need to use fork?

Note: I am not familiar with the implementation techniques behind shells.

~~~
ridiculous_fish
The essential function of a shell is to start processes. In Unix and Linux,
the usual way to start a new process is to clone yourself (fork), and then
have the clone replace itself with a new executable image (exec).

It's kind of roundabout, but the brilliance of this approach lies in what
happens between those two calls. There exists process metadata that survives
the call to exec, such as where stdout goes, or whether the process is in the
foreground. So shells call fork, the clone sets up the metadata for the target
process, and then calls exec to start it.

But when a multithreaded program forks, the clone is very limited in what it
can do (before exec). In particular, the clone must not acquire a lock that
may have been held at the time of fork (which usually rules out heap
allocations!). Now say something goes wrong: the clone needs to print an error
message, _without_ locking anything. But lots of functions acquire locks
internally. How do you know what's safe to call?

fish solves this by providing its own known-safe implementations of printf()
and friends, and being careful to only call those after fork. Go solves this
by disallowing any user-code between fork and exec. Instead it provides a
single posix_spawn-like entry point called ForkExec, and does some black magic
(like raw syscalls - see
[https://code.google.com/p/go/source/browse/src/pkg/syscall/e...](https://code.google.com/p/go/source/browse/src/pkg/syscall/exec_linux.go#39)
) in between the underlying fork and exec calls.

My hunch is that a shell written in Go will eventually bump up against the
limitations of ForkExec. Happily Go has a strong FFI, so you can hopefully
implement this stuff in C, if it comes to that!

------
nawitus
If I type 'mplayer ' and press tab in a folder with only one video file, but
multiple text files, does this shell autocomplete the media file? It's pretty
frustrating that even the most basic stuff like this is not enabled by
default on your average Linux distribution. I wonder if everyone simply
stopped developing unix shells.

I'm aware that one can install support for proper autocompletion by installing
additional stuff. That shouldn't be required, smart autocompletion should work
out of the box. I don't want to configure and/or install stuff for every
single application that I use.

~~~
oinksoft
Bash Completion[1] scripts are what provide this feature; perhaps fewer Linux
distributions are including them. I recommend keeping them in your dotfiles[2]
and loading them in your .bashrc[3]. This is nice because Bash Completion
operates by filename, meaning you can take any project's bash completion
script and install it at `$bash_completion/completions/script-name`.

[1] [http://bash-completion.alioth.debian.org/](http://bash-
completion.alioth.debian.org/)

[2] [https://github.com/oinksoft/dotfiles/tree/master/lib/bash-
co...](https://github.com/oinksoft/dotfiles/tree/master/lib/bash-completion)

[3]
[https://github.com/oinksoft/dotfiles/blob/master/bashrc#L22-...](https://github.com/oinksoft/dotfiles/blob/master/bashrc#L22-L24)

------
jalfresi
I've been tinkering with writing my own shell at home, as an exercise in
learning more about Unix etc., but bumped up against the limitation that
proper job control is currently impossible to do from within Go (off the top
of my head it was something to do with the inability to set the process group
correctly for forkexeced processes). How did the author get around this, or
does Elvish not have the ability to put processes into and out of the
background?

------
zokier
It would be nice if

    
    
        put 1 2 3 4 5 | filter {|x| > $x 2} | map {|x| * 2 $x}
    

could be written just as

    
    
        put 1 2 3 4 5 | filter > $_ 2 | map * 2 $_

~~~
Dewie
Some languages have a sort-of default variable name for cases like this,
namely "it":

> put 1 2 3 4 5 | filter > $it 2 | map * 2 $it

~~~
JadeNB
As zokier was presumably pointing out, other languages have a _different_
sort-of-default variable name, namely `$_`
([http://c2.com/cgi/wiki?DollarUnderscore](http://c2.com/cgi/wiki?DollarUnderscore)).
:-)

------
chrissnell
Hmmm...it's not building for me:

    
    
      $ go get github.com/xiaq/elvish
      # github.com/xiaq/elvish/edit/tty
      dev/go/src/github.com/xiaq/elvish/edit/tty/termios.go:20:   undefined: syscall.TCGETS
      dev/go/src/github.com/xiaq/elvish/edit/tty/termios.go:24: undefined: syscall.TCSETS
      dev/go/src/github.com/xiaq/elvish/edit/tty/termios.go:49:   cannot use &term.Lflag (type *uint64) as type *uint32 in   argument to setFlag
      dev/go/src/github.com/xiaq/elvish/edit/tty/termios.go:53:   cannot use &term.Lflag (type *uint64) as type *uint32 in   argument to setFlag
      # github.com/xiaq/elvish/sys
      dev/go/src/github.com/xiaq/elvish/sys/select.go:66: not   enough arguments to return
    

I opened up an issue for you:
[https://github.com/xiaq/elvish/issues/16](https://github.com/xiaq/elvish/issues/16)

~~~
bobbyi_settv
I think this is because you are on an older version of Go. Try it with Go 1.3

~~~
chrissnell
I am using Go 1.3.

    
    
      $ go version
      go version go1.3 darwin/amd64

~~~
bobbyi_settv
It builds for me with 1.3 (linux/amd64) on Ubuntu, so I'm guessing it somehow
works on Linux but not Mac.

EDIT: Specifically, the first two lines are complaining about an undefined
syscall and the next two are complaining about something terminal-related
being 64-bit where 32-bit is expected, so those certainly seem like plausible
things that would differ across OSes.

------
nathell
Clojure has -> specifically to facilitate writing pipeline code that doesn't
read backwards.

~~~
tgkokk
Or, in this case, ->>:

    
    
      (def lols (->> strs
                     (filter #(re-find #"LOL" %))
                     (map upper-case)
                     sort))
    

How I read it: take strs, filter the strings that contain "LOL", turn them
into upper case and then sort them. Basically it reads the same as the
pipeline example in the README.

------
jkbyc
Many alternative shells popping up recently. Someone already mentioned the
Fish shell [1] in this thread. There is also Xiki [2] and its recent
Kickstarter campaign [3].

[1] [http://fishshell.com/](http://fishshell.com/) [2]
[http://xiki.org/](http://xiki.org/) [3]
[https://www.kickstarter.com/projects/xiki/xiki-the-
command-r...](https://www.kickstarter.com/projects/xiki/xiki-the-command-
revolution)

------
pjmlp
Xerox PARC environments and their derivatives keep being partially reinvented.

I really wonder what computing would look like if those systems had succeeded
in the market, instead of UNIX.

------
anon4
Unrelated, but I would like to note that you shouldn't use subpixel anti-
aliasing for text in images, especially ones that will be shown by a web
browser. You never know whether the user's screen is rotated or not; if it has
three subpixels per pixel, or if it's pentile or similar; or if it is high-DPI
and therefore the image is zoomed; and even if none of those are the case, a
lot of people dislike the coloured fringes.

~~~
xiaq
Good to know, thanks. I have captured and pushed the screenshots with
subpixel anti-aliasing turned off.

------
jonathanyc
This is awesome! Necessity is the mother of invention, and all that. The idea
of using lispy syntax while dropping the requirement for the outermost pair of
parentheses is definitely a neat idea - I'll try the shell out in practice to
see how much easier it makes usage, but it seems like it would allow for 99%
of the consistency of Lisp/Scheme less 20% of the annoyance for general shell
usage.

Looking forward to what's in store.

~~~
lispm
LispWorks:

    
    
        CL-USER 7 > + (sin 1) (cos 2)
        0.4253241

------
sgt
I get this error:

clang: error: no such file or directory: 'libgcc.a'

------
bnegreve
I really like in-shell floating point operations. But this:

    
    
        > / 1 0
       +Inf
    

Is not the correct answer - even as a limit - since the limit of 1 / x (when x
-> 0) can be +Inf or -Inf.

~~~
xiaq
The 0 is a positive one.

    
    
        > / 1 -0
        -Inf
    

BTW this is just IEEE floating point.

~~~
bnegreve
Ah yes indeed [1], I didn't know.

[1]
[http://en.wikipedia.org/wiki/IEEE_floating_point#Exception_h...](http://en.wikipedia.org/wiki/IEEE_floating_point#Exception_handling)

------
sobkas
So, will it work with gccgo?

------
e12e
"It attempts to prove that a shell language can be a handy interface to the
operating system and a decent programming language at the same time; Many
existing shells recognize the former but blatantly ignore the latter."

Wasn't that the reasoning behind csh[1]? (Shell scripting modeled on C
source.) Did you have a look at Plan 9's rc[2]?

I welcome attempts at new/better shells -- doesn't look like elvish will be my
saviour (partly due to the heavy use of backticks) -- but there is always need
for fresh blood in the battle for the terminal.

I suppose that "vi keybindings that makes sense" and "a programmable line
editor" is meant to indicate that rlwrap isn't good enough? I must say, after
a few years (has it really been years) of "set editing-mode vi" in .inputrc, I
actually think it works kind of nice with (plain) bash. I suppose there's room
for improvement in terms of history editing etc... But either due to lack of
imagination or force of habit, I've never really felt a pull towards zsh (or
fish, or other "improved" shells). But playing around with ipython and/or
Conque for vim has made me consider looking for greener shells than bash.

The secret, I think, is to avoid doing too much in the shell, and rather try to
subtly improve on the "simple programs build complex pipes"-idea. I actually
think some rethinking of core command line tools (cat/tac/tee, grep/sort/uniq,
seq etc) might be a better investment than "better syntax".

Not that better syntax is a bad idea -- a "strict" subset of modern bash would
be good, with saner handling of words/expansion/substitution -- essentially
defaulting to proper checking for empty variables ("${might_be_empty}a" ==
"a") (but isn't [[ -z "${var} ]] always better anyway..?), always defaulting
to "${var}" rather than $var, ${var}, and preferring
var="$(some_command_that_outputs)" to var=`dirty backtick command`...

Essentially getting rid of all the crazy old cruft that's needed for backwards
compatibility, and defaulting to sane, modern versions (it's what's hard about
scripting (especially posix [k|b])sh -- there are 5 wrong ways to do
everything, 2 mostly right and 1 perfect -- but which is perfect often depends
on context...).

[1] I'm _not_ a csh-fan, for several reasons, see:
[http://www.faqs.org/faqs/unix-faq/shell/csh-
whynot/](http://www.faqs.org/faqs/unix-faq/shell/csh-whynot/) But mostly I'm
just grumpy and conservative, and having mostly figured out how to properly
get things right in ksh/bash, I stubbornly refuse to use something else ;-)

[2]
[http://swtch.com/plan9port/man/man1/rc.html](http://swtch.com/plan9port/man/man1/rc.html)

~~~
SixSigma
Ironically, Rob Pike eschews big shell scripts.

If you want a programming language, you know where to find one.

------
bubersson
Yep, I just read that as Elvis with Sean Connery's accent... Nice work though.

------
Dewie
> It attempts to exploit a facility Shell programmers are very familiar with,
> but virtually unknown to other programmers - the pipeline.

Really?

It seems that Unix pipes are the go-to example of what you might call pipeline
programming, as if it originated there or because every programmer is a shell-
programmer first and foremost. But I'm not sure that this is such a secret
technique exclusive to shell programmers - object-oriented languages can and
does seem to like to use "fluent interfaces", I think its called, which has
the same pipelining style. Functional programmers are able to and probably
find it convenient to use a "pipeline style" on longer expressions that are
essentially long chains of function application or function composition - they
just have to flip the order of the operators for function application and
function composition, respectively. This is possible in languages like
Haskell, and I think it is even pretty idiomatic in F#.

