
NGS: Next Generation Unix Shell - yoodit
https://github.com/ilyash/ngs/
======
heydonovan
I'm sure we can all agree that the current state of shells needs some work,
but I don't think inventing a new one is the right solution. I'm a huge fan of
the fish shell, but in the real world, it never seems to be installed across
the farm, and convincing the older SysAdmins to install it is more trouble
than it's worth.

We should be focusing on saner bash defaults, since it's the most common shell
in use. We shouldn't have to remember these large list of gotchas, and
pitfalls that shouldn't be there in the first place. A few people out there
are trying to recreate common utilities such as ls or cp, and while I welcome
the change, I feel it should be part of the actual gnu coreutils package, and
not a new project: [https://bsago.me/exa/](https://bsago.me/exa/)

~~~
oblio

    We should be focusing on saner bash defaults, since it's the most common shell in use.

That kind of work is unglamorous and highly controversial. Ideally there
should be some sort of cross distro/OS working group (Debian/Ubuntu, Red Hat,
SUSE, FreeBSD, NetBSD, OpenBSD, Mac OS X, etc.), similar to the working groups
that standardize the web, where such proposals can be made, voted on and
adopted.

For example there should be no reason in 2016 why a sane solution can't be
found for having almost every bash installation recognize all keys on the
keyboard (i.e. arrow keys wouldn't produce ^A sequences or similar).

Another thing that would be sorely needed but would involve a much higher
volume of work would be a template, at least for all GNU utils, which they use
to define their options, parameters, arguments, whatever. And by "template" I
mean library, actual working code they could include and configure.

Instead of each shell (bash, zsh) having to come with a million small scripts
that configure auto completion, these shells could just query the standard-
compliant tool for its usage and would receive a standardized reply with
everything. Powershell has something like this and it is a great idea.

~~~
iheartmemcache
I've been using -nix for almost 20 years. I've used everything from ksh on
SunOS 2.6 (Solaris?? what's that) to oh-my-zsh (for 3 years, before happily
graduating to oh-my-fish). I grew up on Slackware 6 waiting hours and hours
for a 2.2 kernel build to finish. In high school, FreeBSD 4.3 kernel mods took
up more time than booze and women. My 3 year puppy love for oh-my-zsh dimmed
as I transitioned to a more sane, less emotional woman - oh-my-fish.

I hate to admit it, but Microsoft just _got it right_ with Powershell.
Standardizing the convention with an intuitive Verb-Noun, the out of the box
documentation with -examples, -full, etc goes into so much detail that if
everyone used it, there wouldn't be any dumb "How does I move directory??"
questions on Stackoverflow (well, there'd be less at least).

Don't know a command? Show-Commands and type in 'network' to see what's
available. Don't see it there? Go get a Powershell script (often offered by
vendors like VMware and Citrix, making devop lives easier). I remember
spending _weeks_ trying to get my dual-boot machine (FBSD / Win2k) Cygwin/msys
setup to work well on my P3 600 at 12 or 13. Now I can just Feature install
Bash and get a native binary. Let me say that again -- now, I can Feature
install Bash on Windows, jeeebus. Qt is LGPL. CLR is open source. Visual Studio
Community (basically Professional) is free (unless you need historical
debugging, then Ultimate's going to cost a lot). Satya + Meijer + Hanselman et
al have made me favor Microsoft so hard, despite my being an active donor of
the FSF.

That GNUutils-with-autocomplete is real trivial to write, but since it's
already in fish, I see no need to write it. :smug: [In all seriousness, I
agree with you re: standards. Plan 9's interface standards semi-addressed
that. But le sigh such is life.]

~~~
MichaelGG
Powershell's almost great. Things are way too verbose. No built-in stuff like
curl or wget (yes, there's a simple webrequest thing and aliases, but they're
clunky). In fact, it feels like everything in PS is clunky.

Just simple stuff like "time ./foo" becomes "Measure-Command {... }" and then
it prints out 10 lines of the same measurement, in different units. And
doesn't distinguish CPU time versus wall time.

All that adds up and makes PS crap for interactive work. As far as writing
programs, PS is a much better programming language than bash.

And yes, I understand that using a bunch of 3 or 4-char names kills your
global namespace. But judicious use really aids ergonomics.

~~~
iheartmemcache
Yeah, for things like that, I have a wrapper function and/or aliases which get
loaded with PowerShell. The over-verbosity I'd imagine really helps the "click
next-get cheques" guys who are a little intimidated by the command-line. (I, on
the other hand, bleed Hayes 9600 baud and VT220 green cold cathode.) 'ps aux'
wouldn't really cut it. Just like one customizes their bash, fish, zsh or
whatever shell, you customize your PS shell with modules (the Powershell
Community Extensions (PSCX)) and other such things (SQL Server integration
with Powershell? Yes please!).

So you effectively can do t foo, which you alias to a wrapper script around
Measure-Command and get the same effect. But, wait, you also want to log the
outputs, just to keep track of the progress as you refine the memory
allocation on my_malloc.cc. Pop into ISE[1], stock with Windows 8+ (or your
favorite Powershell editor), add a few lines and now you have a log that
associates each revision with its performance. Keep it open like you would
your editor and your .bashrc or .zshrc or whatnot.

The great thing about that is I only have to write the verbose command _once_
and it's entirely clear what that command does. I can share it with _anyone_
and they can run it, assuming they have .NET 4 or higher. They know
immediately what it does.

RE: Other software - All the unix utils (grep, awk, etc) are in a downloadable
"Gow". Windows has a package manager "chocolatey" which is basically like the
apt repos. All the software is vetted. Heavily. They run every binary through
VirusTotal, which checks against 57 engines, from the enterprise Sophos-type
stuff to the less good ones.

I've run into one bug in 18 months (multiple Python installs, 2.7.1 / 3.4.x,
plus conflicting references in the PYTHONHOME env var. I'm sure there's a
solution but I don't care enough about Python to research it).

[1] [https://i.imgur.com/dXmT2MX.png](https://i.imgur.com/dXmT2MX.png)

~~~
mpweiher
Hayes 9600 baud? VT220? Geez, kids these days...setting your (directly
attached) ADM-3s to 1200 bps in order to smooth over the task-switching
jerkiness of your PDP-11 ;-)

Back to PowerShell. When I first looked at it, it seemed like just the right
answer, but on closer inspection...not so much. Being able to customise: yes.
Having to customise right out of the gate to get an even barely usable
experience: not so much.

It is obvious that the task is difficult, but PowerShell shows many
interesting directions and some ways not to solve it.

------
maho
My number-one wish for a NGS:

Undo!

Take, for example, rm. The hoops we have to jump through when accidentally
rm'ing a file are ridiculous [1]. But in most cases (smallish, non-secret
files), rm should be trivially undoable. Windows gets this right: By default,
files are not deleted, but moved to trash. If there is not enough space in
trash, Windows warns you. Or, if you really want to delete a file instead of
moving it to trash, you can press SHIFT+DELETE, in which case Windows will
also warn you that it can't be undone. (What is missing in Windows is a "nuke"
option that overwrites the old bits with random data, for those rare cases
where a file must be purged from the system completely.) But in most cases,
after deleting a file, you can simply press CTRL+Z to get it back.

It is possible to make rm behave like that in Linux [2], but in a NGS, this
should be the default behaviour (in my view), with "delete" for really
deleting (unlinking) a file, and "nuke" for completely destroying a file, as
separate commands.

Undo is hard. Most programs are on their third, fourth, or even higher release
before getting it right. (Mathematica 10 is still trying to get there...) But I
think we should try harder to solve the undo-problem with respect to file-
system interactions, or with respect to system-settings, in Linux.

[1]: [http://unix.stackexchange.com/questions/10883/where-do-files...](http://unix.stackexchange.com/questions/10883/where-do-files-go-when-the-rm-command-is-issued)

[2]: [http://unix.stackexchange.com/questions/42757/make-rm-move-t...](http://unix.stackexchange.com/questions/42757/make-rm-move-to-trash)

~~~
Arnavion
>Windows gets this right: By default, files are not deleted, but moved to
trash.

The del or Remove-Item commands also permanently delete the file, just like
rm. The "Windows" behavior you're talking about is the behavior of the
graphical shell Explorer, which is also present in Gnome and KDE. There's
nothing specific to Windows here.

~~~
maho
You are right, I forgot about Gnome and KDE. I tend to use classic shells on
Linux, and the graphical shell on Windows, so I made a wrong generalization.
It's probably more accurate to say that "undo" of simple operations (rm, mv)
is something graphical shells get right.

But why are classic shells shipped without even the simplest "undo" features?
We can probably all tell stories-from-the-trenches of how we accidentally did
something awful by typing the wrong command. Sometimes, it's hosing the
network configuration of a remote machine, which is not something where "undo"
would help you. However, in my experience, most cases involve doing something
foolish with rm or mv, where "undo" would be tremendously helpful. I guess the
main reason why I don't use the text shells of Windows is that Windows
explorer gives me Ctrl+Z, which is saving my behind approximately once per
year.

As a final thought, all shells (graphical or not) should expand the undo-
features beyond rm and mv. For example, if I change system settings (or even
Application settings?) and am not happy, I would love to be able to simply
"undo" them, without trying to remember what the previous setting was. I
realize that this is an incredibly hard problem, and that it is unlikely to be
solved in an evolutionary step of the existing shells. That's why hearing
about NGS got me so excited.

~~~
JdeBP
> _But why are classic shells shipped without even the simplest "undo"
> features?_

This question is assuming an untrue premise. There are Windows command
interpreters that integrate DEL and RD with the Recycle Bin.

* [https://jpsoft.com./help/del.htm#r](https://jpsoft.com./help/del.htm#r)

* [https://jpsoft.com./help/del.htm#k](https://jpsoft.com./help/del.htm#k)

* [https://jpsoft.com./help/rd.htm#r](https://jpsoft.com./help/rd.htm#r)

* [https://jpsoft.com./help/rd.htm#k](https://jpsoft.com./help/rd.htm#k)

------
voidz
A shell is an application that provides system interaction to users.

> _What I see is a void. There is no good language for system tasks (and no
> good shell). What's near this void is outdated shells on one hand and
> generic (non-DSL) programming languages on the other. Both are being
> (ab)used for system tasks._

Aside from the, in my opinion, abhorrent word "outdated" (why are old things
considered _bad_ just because of their age?), the comment about there being
"no good shell" is something I disagree with very much. Personally I'm really
fond of zsh and its manual pages are a treasure trove. But, lest this digress
into a flamewar about shells or into a non-productive discussion about
preferences, the point I would rather make is that the author provides a
different thing from what, as it seems to me, qualifies as a shell.

For example, the open issue mentioned in the README is:

 _Open issue: how to deal with a command that requires interaction._

As a Linux system administrator I am happy with the tools I have, and
initiatives such as this seem to digress into the realm of the programmer.
Sysadmins and programmers (a.k.a. developers - though I'd consider myself a
developer too but not a programmer) tend to have very different perspectives
about how to get an application onto the environment on which it runs, within
the greater system of servers and networks.

In that sense, Docker and such mostly seem to get a preferential treatment
from programmers, while sysadmins (again, as it seems to me) tend to dislike
these kinds of abstractions. And I often get the impression that developers
haven't got as much appreciation for sysadmins and what they do, as vice
versa, but this could be false, or even more likely is that this is true for
some and false for other cases. OMMV.

NGS might be a very useful project, I'm not at all negative about the project
itself. It just seems more useful to programmers than to people like me who
enjoy nothing more than to interact with their command line interfaces a.k.a.
shells.

~~~
CJefferson
I'm surprised you can't think of problems you currently have with shells. Here
are a few of mine:

* Once I've started a program, an easy way of sending it to the background if it is taking a while, which sends its output to some buffer I can refer to later, rather than continuing to spew it all over the screen.

* While we are at it, stop spewing the output of multiple programs over the screen, under any circumstances.

* Have some simple, easy to follow rules which let me work on files that have spaces in their names, without having to remember the various commands with various special cases (like -print0).
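To illustrate the third point, the special cases being referred to look something like this in today's shells (the file names are invented for illustration):

```shell
# Handling file names with spaces today means remembering the
# NUL-delimited variant of each tool in the pipeline.
cd "$(mktemp -d)"
mkdir demo
touch "demo/a file.txt" "demo/another file.txt"

# Broken: unquoted expansion splits "a file.txt" into two words.
# for f in $(find demo -name '*.txt'); do ...; done

# Works: NUL-delimited output, NUL-delimited read.
find demo -name '*.txt' -print0 |
while IFS= read -r -d '' f; do
    printf 'got: %s\n' "$f"
done
```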

~~~
txutxu
issue 1) sounds like you miss nohup, screen or tmux in front of that long
running program.

issue 2) usually I don't have one terminal session with multiple programs
doing output at the same time, which avoids this issue.

Just use a new xterm, a new terminal tab, a new screen tab for programs that
are going to produce output and stay running.

issue 3) I think you refer to files with other special characters, because
spaces in filenames do not need -print0 at all; proper quoting in the code
will do.

Regarding issue 3, there are a lot of caveats when coding shell; even file
names starting with a dash (-), which are interpreted as options by tools
external to the shell (which fortunately support a double dash (--) to stop
processing parameters).
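Both points can be shown in a few lines (throwaway files in a temp directory):

```shell
# Two of the classic gotchas: word splitting and option-like names.
cd "$(mktemp -d)"
touch -- "-rf" "a file"   # two awkward but perfectly legal file names

rm "a file"    # quoting makes it one argument, not two
rm -- -rf      # "--" ends option parsing, so "-rf" is taken as a file name
```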

~~~
hibbelig
The point of issue 1 is that sometimes you don't know that the program will
run for a long time. So a way to send a program into the background while it
is already running would be nice.

~~~
txutxu
'Ctrl + z', 'bg' and 'fg' are for that.

~~~
CJefferson
Except, now you are back to my original problem -- the output of the program
will continue to spew all over my terminal.

~~~
txutxu
Well, if you always work in tmux/screen (a recommended practice for
operations, which I imported into my daily shell activity) you don't need job
control.

When there is a "surprisingly slow" command, then just press the shortcut for
a new screen session.

You will get alerted in the status bar when the slow command finishes.

The output of each command will be in its own screen session (not mixed).

That Ctrl + z response was not for issue 1, but for the wishes of the second
sentence of @hibbelig.
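For reference, the status-bar alert described here maps to tmux's activity-monitoring options; a possible ~/.tmux.conf fragment (the option names are tmux's, the 30-second threshold is an arbitrary example):

```shell
# ~/.tmux.conf: have tmux flag windows when their output changes,
# i.e. get a status-bar alert when a slow command produces output.
setw -g monitor-activity on    # highlight the window name on new output
set -g visual-activity on      # also show a status-line message
# Or alert when a window has been silent for 30s (command finished):
# setw -g monitor-silence 30
```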

~~~
hibbelig
I meant to include the part of capturing the output -- the part that Ctrl+Z
does not do.

Opening a new screen/tmux session is a cool idea, but the new session doesn't
inherit the state from the already running one, I guess. (At least in screen,
not sure about tmux.) By "state" I mean the shell history, the current working
directory, the (environment) variables.

Also, say that long running command runs five minutes, and after one minute
you create a new session. Then after four more minutes you now have two
sessions, both ready to accept input. Which one do you use to continue
working? How do you know which one to close?

~~~
txutxu
My screen respects the current working directory for new sessions.

My history is configured to write at every prompt redraw and not to miss
commands.

With vanilla history and prompt command settings, history will be separated.

But this is a common problem, and not only with screen: also with multiple
xterms and IDEs' shells. It is also a problem that history may miss lines by
default.

And as it's a common problem, there are settings to fix it: configure the
history merge strategy to your taste and stop missing history lines because
you closed an xterm or opened a new screen session.

Curiously, those settings are even documented upstream.
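For bash, the settings being referred to are roughly these (one common ~/.bashrc recipe, not the only one):

```shell
# ~/.bashrc: append-and-merge history across terminals.
shopt -s histappend            # append to the history file, don't overwrite
HISTSIZE=100000
HISTFILESIZE=200000
# At each prompt: write out new lines (history -a), then read lines
# written by other sessions (history -n), so no terminal loses commands.
PROMPT_COMMAND='history -a; history -n'
```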

Session variables? I expect the ones that would be loaded by login executing
bash -l...

If I exported something after that, then maybe I simply need to export it
again in the new session, but really, in 20 years I was never in this
situation, I think... (not waiting for a command, continuing with other
commands, and needing an environment variable not initialized by default in
both sessions...). If I was ever in that situation, it was so easy to fix that
I don't even remember it.

Which one to close? Whichever I prefer, probably by focus, or probably the
highest screen id. As history is merged, I can close either or even both.

~~~
hibbelig
Thank you for all these great ideas. Yes, I totally forgot about settings to
merge history.

------
pjmlp
For me a next generation UNIX shell needs to catch up with the features and
capabilities of the REPL environments in Lisp Machines, Interlisp-D,
Mesa/Cedar, Smalltalk, Oberon(-2), and AOS.

For the young HNers, think having something like Swift Playgrounds or IPython
as your shell, while having full access to the OS API without relying on
external programming languages.

Otherwise the _next generation_ prefix isn't worth mentioning.

~~~
cm3
And arguably you'd have to call it last-generation as well, if you recall how
long ago those innovations happened but never made it into mainstream. That
would label current shells as the dark age of the tty :). I still cannot
accept the fact that we use tty emulators.

~~~
pjmlp
For me PowerShell feels a bit closer to that model, but it is still text
based, no inline graphics or data structures output, and the syntax could
surely be improved.

~~~
zbjornson
PowerShell actually does have data structures, and that's my favorite part of
it. Pipe a directory listing as an object instead of text so you can access
properties directly instead of parsing text!

~~~
pjmlp
No, I mean displaying them inline in the REPL and allowing you to interact with
them.

~~~
snuxoll
Well, that's an issue with the REPL, not the design of PowerShell. No reason
you couldn't make one that does this.

~~~
pjmlp
Sure, I do like it much more than any UNIX alternative and it is the only
widespread shell that is closer to the experiences I was referring to.

But that REPL could be improved, that is what I mean.

------
s_kilk
Having read through the readme, I think some of the issues outlined could be
solved by having a cell-based model like jupyter/ipython.

Basically a command (input) and its corresponding output are tied together as
a "cell". The UI could be very like the ipython notebook, where you have a
text box for command input and then above it you have a chronological list
view of cells. You would also have a good keyboard-shortcut language for
navigating back and forward in the cell history, so you could, say, go back
and focus a cell which has been running for a while and is now prompting for
input.

(Edit: imagine a key sequence like `Alt-b, Alt-b, Ctrl-Enter` meaning "go back
two cells and give text focus to that cell's input field". In reality you'd
probably want a command language that's more vim-like.)

The cell model would also allow you to truncate output, minimize or maximise
the cell display, and a bunch of other UI tricks I can barely think of right
now.

Basically each cell would be a little parallel shell of its own.

If this sounds even remotely compelling, I wouldn't mind writing up a more
formal proposal and submitting it as an issue on the repo.

------
skywhopper
There are lots of mixed-up goals here that I think might be better served
broken out into multiple projects:

1) Terminal UI improvements

2) Interactive shell UI improvements

3) Shell scripting language improvements

4) Userspace utility improvements

Tackling all of these together may well be possible, but it will likely limit
the potential impact of any improvements you are able to make. IMO, it's
easier to move the status quo with incremental improvements, not wholesale re-
imaginings of how the entire text-based Unix user interface works.

Ultimately, the most dangerous thing to mess with is the language itself. The
strongly-typed Python-lite described in the linked man pages is not going to
replace sh/bash for simple scripts, sorry. For most shell-scripting needs that
sort of thing is wayy overkill. Sure Bash is ugly, but ultimately there's no
way to harmonize quick-and-dirty command-line compatibility with an elegant
scripting language syntax.

An alternative approach here would just be to expand your thought process a
bit and don't get too worked up about Bash itself. Pipelines and new tools can
solve all your problems if you're willing to think about them in a different
way. `jq` is an amazing pipe-friendly JSON munger that is now just as
essential to my CLI toolbox as `awk` and `sed` ever were. It solves your
structured-data and typing complaints without interfering with Bash one way or
another.
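For instance, extracting fields from structured output without any ad-hoc grep/sed parsing (the JSON below is invented for illustration):

```shell
# Pipe-friendly structured-data munging with jq: pull the names of
# the healthy services out of a (made-up) JSON API response.
echo '{"services":[{"name":"web","up":true},{"name":"db","up":false}]}' |
    jq -r '.services[] | select(.up) | .name'
# prints: web
```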

~~~
ilyash
> The strongly-typed Python-lite described in the linked man pages

It's not strongly typed, and type annotations can be left out in many places:

    mylist.map(F(elt) "my $elt")

The type of elt is not specified.

The command syntax $(...) will cover some of the simple scripting needs.
Please provide specific code examples (probably in bash) that you think will
look bad.

> `jq` is an amazing

It is but in my opinion much less convenient than built-in support for data
structures.

------
gcr
Looks quite interesting! Some feedback:

Yes, a new paradigm for interacting with text commands would be wonderful. But
what would that look like?

One strong advantage of the current system is that the input matches the
interface. Everything is text, so it all can be typed with the keyboard. But
if this shell "displays structured results as real f__ing structures (YAML,
JSON, ...)", that may not be true. If the shell spits out a table, is it a
CSV table? A YAML / JSON nested array? How is the user supposed to edit it?
The interface isn't clear, and any layers at all are going to take some
thought to be able to compose naturally with other tools. (The author
addresses part of this with "All operations made via a UI, including mouse
operations in GUI must have and display textual representation, allowing to
copy / paste / save to a file / send to friend.")

Another thought: from the proposal in the README, it isn't super clear which
tasks the author feels the shell ought to handle versus which tasks should be
handled by some other toolkit. For example, the "Manage multiple servers at
once" subsystem is handled by other devops tools like Ansible or Chef at the
"deploy an application" layer and tools like Tmux or 'clusterssh' at the "send
input to multiple processes in many screens at once" layer. Another example:
the author proposes "Actions on objects that are on screen. Think right click
/ context menu.", but since these actions _must_ require cooperation by the
program being invoked, it's not clear whether this serves the purpose better
than simply wrapping that program's operations in some GUI toolkit.

This could be pretty interesting, but I wish the author would formalize some
of this thinking into a concrete standard. Having too big of a vision without
some set boundaries seems to be hurting the focus of this promising project.

~~~
ZenoArrow
> "One strong advantage of the current system is that the input matches the
> interface. Everything is text, so it all can be typed with the keyboard. But
> if this shell "displays structured results as real fing structures (YAML,
> JSON, ...)", that may not be true. If the shell spits out a table, is it a
> CSV table? A YAML / JSON nested array? How is the user supposed to edit it?
> The interface isn't clear, and any layers at all are going to take some
> thought to be able to compose naturally with other tools. (The author
> addresses part of this with "All operations made via a UI, including mouse
> operations in GUI must have and display textual representation, allowing to
> copy / paste / save to a file / send to friend.")"

In PowerShell, all data is stored as .NET objects. Using this object-oriented
approach enables the sort of flexibility I believe you're looking for. Perhaps
something similar could be developed for Linux. There has been some activity
in this area:

[https://github.com/Pash-Project/Pash](https://github.com/Pash-Project/Pash)

[https://blogs.msdn.microsoft.com/powershell/2015/05/05/power...](https://blogs.msdn.microsoft.com/powershell/2015/05/05/powershell-dsc-for-linux-is-now-available/)

[http://www.forbes.com/sites/justinwarren/2016/03/08/is-micro...](http://www.forbes.com/sites/justinwarren/2016/03/08/is-microsoft-about-to-bring-powershell-to-linux/#663910e8528b)

------
akatechis
Lots of assertions about how "bad" our current shells are, but no
substantiation. I'm a developer, and I know just enough shell to write a
simple deploy script or two, but reading this, I'm not convinced that what we
really need is new shells and scripting languages.

~~~
ilyash
Simple example: using current shells it is not convenient to work with API
call results, which are structured data. Yes, there is jq, but if the shell
had data structures it would be much better, wouldn't it?

~~~
wicket
On UNIX, the "API" is basically that everything is a file and programs are
written to handle text streams. After all these years, the shell is still
perfect for this.

It sounds like you want to create a new language and shell in order to have
something similar to Windows PowerShell, which interacts with .NET objects
using its API. This does not fit into the Unix "API" described above. This isn't a "Next
Generation UNIX Shell" but an alternative shell that suits a certain group of
users who don't really understand UNIX.

~~~
blakeyrat
That's great, but how much data do you work with, day-to-day, that is best
represented as text streams? For me, the answer is "virtually none". Heck,
most of the stuff I work with, day-to-day, _can't even be represented
meaningfully in a text stream_. (Editing video, for example.)

~~~
wicket
For me, the answer is "plenty", but that's neither here nor there. Everyone has
their own use case and I daren't say yours isn't valid.

Video editing tends to be done in applications; of course, that's not to say
video data can't already be piped. I once set up a screencast using gstreamer
to convert /dev/video to an HTML5-compatible video format, which I then
redirected to a named pipe which could then be opened from an HTML page in a
web browser. The UNIX shell is already very powerful. You haven't stated
what it is you want to do with video editing that is either difficult or not
currently possible but would be possible with this new shell, so I'm not sure
that I really understand the point that you are trying to make.

There is nothing wrong with creating an alternative shell. I can see the need
for something like this. PowerShell for example works very well. My main gripe
with this project is that the author has named it "Next generation UNIX shell"
which suggests that it is intended to be a replacement rather than an
alternative. In my humble opinion, it tosses out the window the simplicity
that makes UNIX, UNIX.

To quote Dennis Ritchie, "UNIX is simple. It just takes a genius to understand
its simplicity." I'm no genius, but I get it.

------
philjackson
I don't think there's enough that needs fixing in the current shells/terminals
to require a new paradigm at the moment. These things seem to come and go
without much adoption.

Sorry, bit negative of me - it's awesome that people are trying.

------
lottin
Just because something has been around for a long time doesn't mean that it's
outdated. On the contrary, it means that it's stood the test of time, which is
usually a sign of a good design.

~~~
ilyash
Sorry, by outdated I meant not a good fit for today's problems.

~~~
CaptSpify
Personally, I don't see how today's problems are that much different. I agree
we should _always_ be looking for better solutions, but IMO, it seems better
to adjust our current tools than to rewrite them.

------
yoodenvranx
Something I would LOVE to see (not only in a shell but in all tools) is some
kind of "project mode".

\- In Chrome/Firefox I'd like to have a mode for recreational browsing, one
for webdesign research and one for fitness/health stuff. This basically means
that I want to click at a browser window and say "This is now my fitness
window. Please remember all the open tabs". And then there is a list of open
sessions/projects where I can switch between those views/instances. I'd like
to be able to close my fitness-window (without losing my fitness tabs) and
then re-open the same fitness-window on the next day.

\- For Dolphin/other_filemanager I'd like to see a "normal" mode for everyday
browsing, a mode with large icons for sorting pictures and some mode for
working with Python scripts in which the nautilus terminal is always open.

\- For Sublime I want a scratch window where I open and edit random small
files, one for my html coding and one for my Python coding

Currently for almost every program we only either have one "session" or some
really specific project settings (like Visual Studio) but I'd like to see
something in between.

For example I have a lot of HTML files open in Sublime and I need to edit some
unrelated config file. If I just open this config file it will open in the
same window as the html files. What I'd like to have is a way to say via
command line or context menu "open this file in scratch pad mode".

The same for Dolphin. If I have one instance of Dolphin open for picture
management with large icons and I have to do some Python file editing I'd like
to have a simple way of saying "Open Dolphin in python mode".

Or sometimes I have a random link. I want to open that link in my "random
browsing" instance of Chrome and not in the window which has currently all my
fitness tabs open.

And the same is true for video players, music player, ebook programs, ...

~~~
azdle
> \- In Chrome/Firefox I'd like to have a mode for recreational browsing, one
> for research of webdesign and one for fitness/health stuff. This basically
> means that I want to click at a browser window and say "This is now my
> fitness window. Please remember all the open tabs". And then there is a list
> of open sessions/projects where I can switch between those views/instances.
> I'd like to be able to close my fitness-window (without losing my fitness
> tabs) and then re-open the same fitness-window on the next day.

Sounds like you're asking for 'Tab Groups'. It used to be a feature built into
Firefox, but recently got removed due to lack of use; it is, however, still
available as an add-on (mostly using the existing code that was removed from
Firefox itself, afaik):

[https://addons.mozilla.org/en-US/firefox/addon/tab-groups-pa...](https://addons.mozilla.org/en-US/firefox/addon/tab-groups-panorama/)

~~~
yoodenvranx
I used to love and use Tab Groups but it was always amazingly slow on my PC
(old-ish Celeron 2.6 GHz).

------
voaie
> The problem with outdated shells looks pretty clear: they were made with one
> kind of tasks in mind but are used for other, bigger and more complex tasks.
> Such scripts usually look as a fight against the language and working around
> it much more than using it to solve the problem.

Mind you,

> Create a shell (in that language) that is up to date with today's tasks -
> cloud, remote execution on a group of hosts.

I'd rather create another DSL/tool to solve it.

> Two languages actually.

> Current-shells-like but simplified (called "commands"), $(...) syntax

> Scripting language, "normal" programming language (called "code"), {...}
> syntax

Good luck.

> The scripts that I've seen (and written in Python and Ruby) look too verbose
> and show unnecessary effort. Such scripts do not look an optimal solution
> (at the very least).

Did you check scsh? (I don't use it.) [http://scsh.net/docu/html/man-
Z-H-3.html](http://scsh.net/docu/html/man-Z-H-3.html)

~~~
spc476
The QNX shell could easily do remote execution. I worked at a company in the
mid-90s (porting Unix software to QNX). My machine had a modem on it, and my
boss (the owner of the company) would use it from his computer to dial out---
all he had to do was reference the modem device on my computer at his command
line.

In fact, you could run a program on a local machine referencing a file on a
second machine, pipe the output to a program on a third machine and have that
redirect the output to a file on a fourth machine, all from the command line.
I don't recall the exact syntax, but it was something like:

    
    
        cat @2/path/to/large/file | @3/bin/grep 'foo' >@4/tmp/output
    

(Individual hosts were numbered on a QNX Ethernet-based network, but the
default permissions were Unix-like). All of this was a consequence of the
message-based QNX operating system (whether messages were delivered locally or
remotely was invisible to programs), but I don't see why something like this
can't be done today.

~~~
dmytrish
This is quite a neat example, but it looks like the QNX shell mixed several
concerns: how you connect to these machines, how you secure the
communication, etc.

    
    
        $ ssh @2 "cat /path/to/large/file" | ssh @3 "/bin/grep 'foo'" | ssh @4 "cat >/tmp/output"
    

is rather clunky, but separates those concerns well.

~~~
spc476
Sorry about not getting back sooner (returning from vacation) but the QNX
network message passing ran over Ethernet, not IP, so it only worked (to my
knowledge) over a local segment. Given that it was transparent to user
processes, it could be argued that security could be added at that layer (two
of the seven layer OSI model).

------
ilyash
Thanks for your input and GitHub stars! You are all very welcome to open
GitHub issues with your suggestions.

I'm looking for developers to join the project. There is a lot of work to do
:)

Ilya Sher, the author.

------
kps
Actually rc was the next generation Unix shell.

(Personally, I use ksh, which, while as bloated as bash, is bloated in more
useful ways, and not so gratuitously divergent from plain sh.)

------
dustingetz
If processes were referentially transparent functions, the shell could be
wicked fast: it could run commands speculatively as you type them, discard
the side effects if you backspace, and memoize the results. Along with a
zillion other game-changer benefits like determinism. Built-in perfect
auditing. Rewind and restore processes to past states. That's what we should
be striving for. But it's a lot harder.
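
Part of that idea can be sketched even in today's shell, for commands that
really are pure functions of their arguments. A minimal, hand-rolled memoizer
(the `memo` name and cache location are my own invention; nothing here
verifies purity, so that is an assumption, not a guarantee):

```shell
# memo CMD ARGS...: cache CMD's stdout, keyed by a hash of the command line.
# Assumes CMD is a pure function of its arguments (no side effects) --
# the shell cannot check that for us.
memo() {
  local key cache
  key=$(printf '%s\037' "$@" | sha1sum | cut -d' ' -f1)
  cache="${TMPDIR:-/tmp}/memo-$key"
  if [ ! -f "$cache" ]; then
    "$@" > "$cache"   # first call: actually run the command
  fi
  cat "$cache"        # subsequent calls: replay the recorded output
}

memo echo hello   # runs echo
memo echo hello   # served from the cache
```

A real referentially transparent shell would also have to track inputs like
the filesystem and the environment, which is exactly why the comment above
calls it a lot harder.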

------
Gracana
The first few bullet points (regarding job control / status / blocking) sound
interesting and useful for anyone. I think it would be cool to have a shell
with some sort of visual job control system and an easy way to background
anything that was taking significant time. Maybe that wouldn't be as deeply
integrated as the system the author is describing, but it would probably be
easier to implement and less of a change from a standard shell. These days I
hardly ever suspend a process and continue it in the background because it's
easier to spawn a new shell, but if that workflow were streamlined... maybe I
wouldn't need a dozen shells open all the time.

As for the rest, I dunno. Neat, but complex solutions like this make me pause.
Maybe instead of trying to solve this problem, we should instead try to not
have this problem. That is, if your tools require this much of your shell,
maybe the tools are the problem and not the shell?

------
therealmarv
IMHO the world does not need another shell language. Please, no... it's
sometimes complicated even with ZSH & FISH today. A shell which behaves
totally differently would give me totally different problems than the ones I
have nowadays... no thanks. I know my "enemy and friend" Bash, and it's a good
thing that I can get it nearly everywhere.

------
isaaaaah
My 2 tinfoil hats: If they can't have your kernel, they try to get your
shell...

To me, the points listed really do not seem to be missing. For instance, my
shell script for remote execution on multiple hosts needs somewhere around ten
lines of code and a peer list. I would never outsource this to an external
entity. Especially today.

------
pmlnr
Actually... bash has been a pretty solid language since version 4; it's just
that no one takes the time to learn it.

~~~
chungy
Might you elaborate on what version 4 does new/differently?

~~~
Adaptive
associative arrays are one of the biggest v4 advantages
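
For anyone who hasn't used them, a minimal illustration of bash 4+
associative arrays (string keys instead of integer indices):

```shell
#!/usr/bin/env bash
# declare -A creates an associative array (bash 4 and later only).
declare -A ports
ports[http]=80
ports[https]=443

echo "${ports[https]}"   # look up by string key -> 443
echo "${#ports[@]}"      # number of entries -> 2

for svc in "${!ports[@]}"; do   # iterate over keys (order is unspecified)
  echo "$svc=${ports[$svc]}"
done
```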

------
htor
The irony...

> Next generation UNIX shell. See the man page.

------
fao_
I think an OOPish DSL as a shell is pretty wrong[0], to be honest. Most of the
time in UNIX what you're doing is chaining commands or automating. So perhaps
what's needed is a DSL with a more functional, stream-based approach.
Something along the lines of:

> map `rm -rf ` (grep "*.tex" (ls))

[0]: I haven't had much experience with FOOP, however.
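
For comparison, existing shells already express that shape, just with pipes
instead of nested function calls. A safe sketch run in a throwaway directory
(the file names are made up for the example, and `grep` takes a regex rather
than the glob used above):

```shell
# ls -> grep -> "map rm", spelled as a pipeline.
# Done in a temporary directory so the deletion is harmless.
dir=$(mktemp -d)
cd "$dir" || exit 1
touch a.tex b.tex notes.txt

ls | grep '\.tex$' | xargs rm -f   # "map rm" over the grep results

ls   # only notes.txt remains
```

Whether nested calls or pipes read better is exactly the kind of question a
new shell language would have to answer.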

------
nickpsecurity
Use a general-purpose language with macros or DSLs. Racket and Rebol are
older ones, with the RED language being designed partly to do what the
article describes.

[https://en.wikipedia.org/wiki/Red_%28programming_language%29](https://en.wikipedia.org/wiki/Red_%28programming_language%29)

------
chickenbane
If you're into Scala, Haoyi Li has a great video about his (similar) Ammonite
project:

[https://www.youtube.com/watch?v=dP5tkmWAhjg](https://www.youtube.com/watch?v=dP5tkmWAhjg)

[http://www.lihaoyi.com/Ammonite/](http://www.lihaoyi.com/Ammonite/)

------
amelius
I suppose all of the behavior should be highly customizable. It would be nice
if plugins could be written in any language. For example, I expect special CLI
editor extensions for editing JSON input to programs (with syntax highlighting
if desired). Also, I expect specific key-bindings (for e.g. vi/emacs modes) to
be pluggable.

~~~
ilyash
I encourage you and others to open issues or make a pull request with an
updated README.

------
wiremine
The github page links to a screencast which I found more helpful than the
github writeup itself:

[https://www.youtube.com/watch?v=T5Bpu4thVNo](https://www.youtube.com/watch?v=T5Bpu4thVNo)

------
anfroid555
But why nodejs?? There are lots of better languages to base this off of.

~~~
nailer
Probably because JavaScript is the world's most popular programming language,
it's portable, has the largest package manager, supports the shell's
non-blocking requirement, and the current version (ES6) has a better stdlib
than previous versions.

~~~
tacos
You're wrong on about three of those statements, but note that the same hand
wave could have been made for Perl ten years ago. That would've been a poor
choice then for the same reason Javascript is a poor choice now.

~~~
nailer
> You're wrong on about three of those statements

OK. Do you have any supporting arguments? "You're wrong" doesn't contribute
anything.

> note that the same hand wave

It's not a "hand wave". Those things are easily verifiable facts. You're new
here, you might want to read:

[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

~~~
tacos
Not new. Javascript is not the most popular language, does not have the
largest "package manager" (nor standard library, nor package library which is
what I think you mean) and more importantly it does not have the most
programmers specializing in the problem domain at hand -- shell functions.
Moreover the referenced toolset _doesn't even use Javascript_! You merely
jumped in blind to promote your favorite tool. Also if you want to play HN-by-
the-book then you should refute my argument why Javascript shell scripting in
2016 from a cold start is somehow better than Perl shell scripting in 2000
given its lead back then.

~~~
nailer
> Javascript is not the most popular language

> does not have the largest package library, which is what I think you mean.

Yes, you've said that before. Do you have any supporting arguments?

I'm basing that opinion on:

1\. JS is #1 on modulecount and #2 on libraries.io, the two places that track
the size of package repositories.

2\. JS continually coming up as the #1 or #2 most popular language on
StackOverflow eg [http://www.r-bloggers.com/the-most-popular-programming-
langu...](http://www.r-bloggers.com/the-most-popular-programming-languages-on-
stackoverflow/)

Happy to talk about Perl once this becomes a dialog rather than you telling me
I'm wrong without the courtesy of explaining why.

~~~
tacos
And the most popular girl in my high school can't ride a motorcycle. I took
her to prom, but I'm not inviting her on my next cross-country ride. What's
your point?

[https://en.wikipedia.org/wiki/Measuring_programming_language...](https://en.wikipedia.org/wiki/Measuring_programming_language_popularity)

Regardless you don't pick a toolset based on popularity. You choose it based
on capability. 100,000 front-end HTML developers asking the same "how do I
regex?" question on StackOverflow 10,000 times doesn't pre-qualify an
ecosystem as the perfect tool for shell automation.

And at this point you're just trying to save face after you jumped in to
explain why it's such a great idea that this project uses Javascript. It
doesn't. I'm moving on and suggest you do the same.

~~~
nailer
My point is that while you keep saying something is wrong, the evidence shows
otherwise.

I'm pretty sure you're already aware of that. Your account is new; perhaps
you've been here before, but you don't seem very capable of participating in
technical discussions. I'm going to end this conversation for obvious reasons.

------
nikolay
There've been many attempts, and no web-based shell is the next generation -
only the next toy. A replacement for Bash, Zsh, and Fish is needed; it just
won't be a web-based toy!

------
leephillips
Curious if anyone here has adopted the shell mode of iPython as their routine
shell (or something similar). Is it a good idea? Experiences?

------
andrey_utkin
So, this project starts out as a shell, but then I see it also aims to
implement a full-blown terminal emulator and a lot of very ambitious new
features, and then it invites terminal-based apps to use its new capabilities,
so it would only be truly usable once a whole ecosystem of compatible apps
appears.

Unfortunately, I'm pessimistic.

Also, I would like to comment on the project's README. Not meaning to be harsh.

> The shells never caught up.

Wrong. To prove this, it is enough to see that almost none of these "features"
are new in comparison with traditional terminals.

> Not to block, allow typing next commands even if previous command(s) still
> run(s).

Got plenty of ways to handle that with the shell: backgrounding, GNU Screen,
Tmux.
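
For example, plain job control already gives the non-blocking behavior the
README asks for:

```shell
# Start a slow command in the background; the prompt stays free,
# and you block only when you explicitly choose to wait.
sleep 1 &                  # '&' backgrounds the job immediately
bgpid=$!                   # $! holds the background job's PID
echo "still interactive while $bgpid runs"
wait "$bgpid"              # block now, by choice
echo "background job exited with status $?"
```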

> Open issue: how to deal with a command that requires interaction.

Oh yeah, you've invented a thing and don't know how to use it with real-world
apps. Happens.

> Provide good feedback. In GUI for example, this can be green / red icon

An enhanced setting of the shell's "PS1" variable gives you a good indication
of the exit code of the last command (color + text + whatever; you can even
use audio if you want). I've been using it for years.
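
A minimal sketch of that technique (the `exit_flag` helper name is mine, not
from the comment; add ANSI color escapes or a bell to taste):

```shell
# Report the previous command's exit status, for embedding in PS1.
exit_flag() {
  local rc=$?                # status of the last foreground command
  if [ "$rc" -eq 0 ]; then
    printf 'ok'
  else
    printf 'err:%d' "$rc"
  fi
}

# In ~/.bashrc: the $(...) is re-evaluated each time the prompt is drawn.
PS1='$(exit_flag) \w \$ '
```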

> All operations made via a UI, including mouse operations in GUI must have
> and display textual representation, allowing to copy / paste / save to a
> file / send to friend.

Sure, why not, but all of this is possible with shells. Both native consoles
(with gpm) and X11 terminal apps support it, and Tmux also has its own
facilities for handling the clipboard.

> Different UI modules must exist

Lots of people were playing around with shells via IRC or XMPP ages ago.

> Web

Good luck handling keystrokes with tricky keycodes while preserving all the
other standard terminal features. BTW, I thought this project was aiming at
another shell, but this sounds like it requires another terminal emulator. I'd
be glad to hear we have a powerful web-based terminal emulator, but this feels
very laborious.

> allow multiple users to collaborate

Screen/Tmux.

> some rw, some ro

Seems to be lacking in tmux at the moment, but it's possible to implement on
its own, without reinventing everything else.

> Most of the data dealt with is tables. List of files, list of instances in a
> cloud, list of load balancers. Amazing that none of current shell tools (I
> heard of) don't treat the data as such

It is discouraged to parse the textual output of shell programs unless the app
clearly guarantees an unchanging, parsable/consumable format (with the
specific options you use). Quite often there's another, more reliable way to
mine information than parsing the output of a general-purpose program.

> Allow editing it and saving to file

    
    
        some_program | vim -
    

Then, in the editor, edit and save. Done. If you add too many specific use
cases into the core, you end up with a bloated interface.

~~~
jerf
"So, this project starts out as a shell, but then I see it also aims to
implement a full-blown terminal emulator and a lot of very ambitious new
features, and then it invites terminal-based apps to use its new capabilities,
so it would only be truly usable once a whole ecosystem of compatible apps
appears."

This is actually my only objection and the only feedback I'd give to the
author. Contra some of the other posters here, I fully agree shell is a mess
and that the basic root cause of the mess is that you cannot straddle
interactive use and programmatic use with one language. But something a bit
smaller needs to be bitten off.

I'd also suggest that most shells have already got the interactive use case
sufficiently covered and that the low-hanging fruit is in the command case.
Perl is perhaps the best language that functions as a programming-shell
replacement, but using it for a while reveals it still has many significant
issues in that use case. Even with modules, it pipes surprisingly poorly. It
tends to make stream processing harder than it needs to be, unless "one line
at a time" works for you. It's got a lot of nasty syntax that is arguably
optimized for big program use cases that doesn't make sense when replacing
shell scripts. When used for shell programming, it basically shares C's
abject failures in error handling, making it range from bizarrely difficult
(I have to right-shift the result of system to get the actual exit code?) to
almost-impossible-to-remember (backticks) to get errors properly handled. It
has no concurrency story or any particular "run on
multiple targets" ability (though I confess I don't know what that looks like
necessarily; in my head it rather becomes puppet or ansible fairly quickly).

There's room here, but it's going to be an uphill battle.

~~~
JdeBP
> _[...] shell is a mess and [...] the basic root cause of the mess is that
> you can not straddle interactive use and programmatic use with one
> language._

Note that some of the world has already taken this notion on board long since.
About a decade ago, Debian Linux and Ubuntu Linux swapped out /bin/sh so that
it was no longer the Bourne Again shell. Nowadays, as a result of this, one
regularly finds Linux systems where programmatic shell scripts are interpreted
by something like the Debian Almquist or Debian Policy-Ordinary (posh) shells
and interactive login session shell work is the domain of the Bourne Again,
Korn, Z, or other shells.

And of course, having a different "better" shell for interactive use was the
reported rationale for the C shell.

------
rhabarba
Because lol, standards.

------
vox_mollis
[WARNING, RANTY]

This is a blue-sky project. Completely new, with no legacy dependencies.

AND YET THE DEVELOPER CHOSE TO WRITE IT IN C. WHY FOR THE LOVE OF GOD, WHY?

Could someone explain this to me? Is it developer hubris, believing in one's
own infallibility that a single exploitable stack frame or buffer overflow
couldn't possibly happen "on my watch"?

If you need a native binary, your options are endless. Rust, Go, Haskell,
hell, even C++ with smart pointers and runtime bounds-checking would be a step
up.

Please, someone educate me on why people still choose to write inevitably-
vulnerable software in 2016, when there is no legacy reason to do so.

~~~
leephillips
Do you think it's a bad idea that git and the Linux kernel are written in C?
Is the possibility of a buffer overflow important in a program that's not
processing input from random people on the internet?

~~~
vox_mollis
No, because they are both legacy software. I understand momentum in codebases
- that's why I'm reserving my vitriol for fresh blue-sky projects like this
one.

As for exploitability, privilege escalation and shellcode injection are still
very much a thing, internet-facing or not.

~~~
leephillips
Git started in 2005. I'm sure Torvalds knew all about buffer overflows then,
but decided to use C. You seem to think this is a crazy decision.

~~~
codygman
> Git started in 2005. I'm sure Torvalds knew all about buffer overflows then,
> but decided to use C. You seem to think this is a crazy decision.

Well is it? Will it be if Linus isn't maintaining it?

~~~
leephillips
Linus Torvalds has not been the git maintainer for years.

------
gravypod
What is this intended to fix?

I don't think the solution to anything is to make the shell more
user-friendly. The only solution is to make the graphical user interface more
user-friendly and feature-full.

The only time it should be acceptable to be forced to use a shell, in my
opinion, is if you are swapping out your desktop environment.

Until that happens, Windows and OS X will rule the desktop and PC market.

I'm fairly certain that Linux-based distributions will win out in the end if
we can overcome this clingy gravitation to the command line. It's already
happening. Android is winning against all of the major phone platforms, and
it's not because of its award-winning terminal emulator.

~~~
regularfry
> The only solution is to make the graphical user-interface more user friendly
> and feature full.

There's only one feature that matters here, really: composability. Provide an
environment in which I can compose simple tools into complicated workflows I
can fling data through, and we might have a winner.

> The only time it should be acceptable to be forced to use a shell, in my
> opinion, is if you are swapping out your desktop environment.

Why should that be true? It's not at all obvious to me. I have yet to see a
GUI that lets you express something like `grep '^127' /etc/hosts | awk '{print
$2}'` in any sane fashion. The closest you get is something like LabView, but
that's counted out here because it's a general purpose(ish) language.

If your response is that this type of task is artificially skewed towards a
text-based, shell-centred environment, then you don't have the type of problem
that this shell is intended to fix.

~~~
gravypod
> I have yet to see a GUI that lets you express something like `grep '^127'
> /etc/hosts | awk '{print $2}'` in any sane fashion.

I think we are at a very interesting time in computer science history, where
we have a huge slew of technology that solves a large set of problems, yet no
one wants to exploit it.

I could easily see an "assistant"-type window being created in a desktop UI.
Since we don't need a "Windows" key in the Linux world, I'd assume that key
would be used.

My ideal system would allow me to go into a text document, press the Windows
key, and say "grab me every line from /etc/hosts that starts with 127"

When the job completed, I'd see a check mark or something, and the result
would either be pasted at my cursor or, if my cursor wasn't in a text box,
copied into my clipboard or something.

While this might sound like magical science fiction, it isn't. The neural
network, NLP, and HCI fields are doing some amazing things. The only problem
is that these are harder to implement than a simple terminal and pipe.

That is a sane way of interacting with a system that is user friendly.

In the same way, it could also handle other complex things, e.g. "open my
browser", "open the Internet", "list all the things connecting to the
Internet", "open the last document I was editing in my text editor".

This is doable, albeit extremely difficult to implement well, but doable.

~~~
regularfry
I think you might have missed the subtlety here. The interesting bit isn't
`grep '^127' /etc/hosts`. It's `|`. There are umpteen ways to launch
individual applications, but nobody's yet come up with quite such a flexible,
simple and expressive way of getting them to communicate as streams of text.
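
To make that concrete, here is the quoted pipeline run on inline sample data
instead of a real /etc/hosts (the host names are invented):

```shell
# grep selects the matching lines; awk maps each one to its second field.
printf '127.0.0.1 localhost\n10.0.0.5 nas\n127.0.1.1 myhost\n' |
  grep '^127' |
  awk '{print $2}'
# prints:
# localhost
# myhost
```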

