NGS: Next Generation Unix Shell (github.com)
151 points by yoodit on Apr 21, 2016 | 199 comments



I'm sure we can all agree that the current state of shells needs some work, but I don't think inventing a new one is the right solution. I'm a huge fan of the fish shell, but in the real world, it never seems to be installed across the farm, and convincing the older SysAdmins to install it is more trouble than it's worth.

We should be focusing on saner bash defaults, since it's the most common shell in use. We shouldn't have to remember this large list of gotchas and pitfalls that shouldn't be there in the first place. A few people out there are trying to recreate common utilities such as ls or cp, and while I welcome the change, I feel it should be part of the actual GNU coreutils package, and not a new project: https://bsago.me/exa/


    We should be focusing on saner bash defaults, since it's the most common shell in use.


That kind of work is not glamorous and is highly controversial. Ideally there should be some sort of cross-distro/OS working group (Debian/Ubuntu, Red Hat, SUSE, FreeBSD, NetBSD, OpenBSD, Mac OS X, etc.), similar to the working groups that standardize the web, where such proposals can be made, voted on and adopted.

For example, there should be no reason in 2016 why a sane solution can't be found for having almost every bash installation recognize all keys on the keyboard (i.e. arrow keys wouldn't produce ^[[A sequences or similar).

Another thing that would be sorely needed, but would involve a much higher volume of work, would be a template, at least for all GNU utils, which they would use to define their options, parameters, arguments, whatever. And by "template" I mean library: actual working code they could include and configure.

Instead of each shell (bash, zsh) having to come with a million small scripts that configure auto completion, these shells could just query the standard-compliant tool for its usage and would receive a standardized reply with everything. Powershell has something like this and it is a great idea.
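
As a sketch of the idea (the flag and the output format here are entirely made up; no such thing exists today):

    # hypothetical: a compliant tool emits a machine-readable description of
    # its own interface, and the shell generates completions from that
    $ cp --describe-interface
    {"options": [{"long": "--recursive", "short": "-R", "takes_arg": false},
                 {"long": "--target-directory", "short": "-t", "takes_arg": true}]}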


I've been using *nix for almost 20 years. I've used everything from ksh on SunOS 2.6 (Solaris?? what's that) to oh-my-zsh (for 3 years, before happily graduating to oh-my-fish). I grew up on Slackware 6 waiting hours and hours for a 2.2 kernel build to finish. In high school, FreeBSD 4.3 kernel mods took up more time than booze and women. My 3-year puppy love for oh-my-zsh dimmed as I transitioned to a more sane, less emotional woman - oh-my-fish.

I hate to admit it, but Microsoft just got it right with Powershell. Standardizing the convention with an intuitive Verb-Noun, the out-of-the-box documentation with -Examples, -Full, etc. goes into so much detail that if everyone used it, there wouldn't be any dumb "How does I move directory??" questions on Stackoverflow (well, there'd be fewer at least).

Don't know a command? Show-Command and type in 'network' to see what's available. Don't see it there? Go get a Powershell script (often offered by vendors like VMware and Citrix, making devop lives easier). I remember spending _weeks_ trying to get my dual-boot machine (FBSD / Win2k) Cygwin/msys setup to work well on my P3 600 at 12 or 13. Now I can just Feature-install Bash and get a native binary. Let me say that again -- now, I can Feature-install Bash on Windows jeeebus. Qt is LGPL. The CLR is open source. Visual Studio Community (basically Professional) is free (unless you need historical debugging, then Ultimate's going to cost a lot). Satya + Meijer + Hanselman et al have made me favor Microsoft so hard, despite my being an active donor to the FSF.

That GNU-utils-with-autocomplete is real trivial to write, but since it's already in fish, I see no need to write it. :smug: [In all seriousness, I agree with you re: standards. Plan 9's interface standards semi-addressed that. But le sigh, such is life.]


Powershell's almost great. Things are way too verbose. No built-in stuff like curl or wget (yes, there's a simple webrequest thing and aliases, but they're clunky). In fact, it feels like everything in PS is clunky.

Just simple stuff like "time ./foo" becomes "Measure-Command {... }" and then it prints out 10 lines of the same measurement, in different units. And doesn't distinguish CPU time versus wall time.

All that adds up and makes PS crap for interactive work. As far as writing programs, PS is a much better programming language than bash.

And yes, I understand that using a bunch of 3 or 4-char names kills your global namespace. But judicious use really aids ergonomics.


Yeah, for things like that, I have a wrapper function and/or aliases which get loaded with PowerShell. The over-verbosity, I'd imagine, really helps the "click next, get cheques" guys who are a little intimidated by the command line. (I, on the other hand, bleed Hayes 9600 baud and VT220 green cold cathode.) 'ps aux' wouldn't really cut it. Just like one customizes their bash, fish, zsh or whatever shell, you customize your PS shell with modules (the Powershell Community Extensions (PSCX)) and other such things (SQL Server integration with Powershell? Yes please!).

So you effectively can do t foo, which you alias to a wrapper script around Measure-Command, and get the same effect. But, wait, you also want to log the outputs, just to keep track of the progress as you refine the memory allocation in my_malloc.cc. Pop into ISE[1], stock with Windows 8+ (or your favorite Powershell editor), add a few lines, and now you can have a log that associates each revision with its performance. Keep it open like you would your editor and your .bashrc or .zshrc or whatnot.

The great thing about that is I only have to write the verbose command once and it's entirely clear what that command does. I can share it with anyone and they can run it, assuming they have .NET 4 or higher. They know immediately what it does.

RE: Other software - All the unix utils (grep, awk, etc.) are in a downloadable package, "Gow". Windows has a package manager, "chocolatey", which is basically like the apt repos. All the software is vetted. Heavily. They run every binary through VirusTotal, which is 57 engines, from the enterprise Sophos-type stuff to the less good ones.

I've run into one bug in 18 months (multiple Python installs, 2.7.1 / 3.4.x, plus the PYTHONHOME env var's conflicting references. I'm sure there's a solution but I don't care enough about Python to research it).

[1] https://i.imgur.com/dXmT2MX.png


Hayes 9600 baud? VT220? Geez, kids these days...setting your (directly attached) ADM-3s to 1200 bps in order to smooth over the task-switching jerkiness of your PDP-11 ;-)

Back to PowerShell. When I first looked at it, it seemed like just the right answer, but on closer inspection...not so much. Being able to customise: yes. Having to customise right out of the gate to get an even barely usable experience: not so much.

It is obvious that the task is difficult, but PowerShell shows many interesting directions and some ways not to solve it.


> No built-in stuff like curl or wget (yes, there's a simple webrequest thing and aliases, but they're clunky).

wget is a built-in alias in Powershell. 'wget' alone gets you an object of course, with the status code, headers, etc. included. (wget http://google.com).Content gets you the content.

Seems about the same as basic uses of wget. Can you go into more detail?

The verbosity doesn't bother me too much personally: it saves me a lot of time remembering commands and there's tab completion, and of course there's aliases.


Powershell seemed like a great advance upon cmd, until I realized it was a great step backwards from Python. It's an objectful interpreted scripting language, but suffers from poor namespacing, verbosity, and some fairly insane defaults.

Meanwhile, IPython as a shell[1] has been around for years. I do mostly python scripting, and spend much of my day at an ipython shell. I don't personally use it as a general-purpose shell, but it is perfectly acceptable for many problems that I use it for.

[1]http://ipython.readthedocs.org/en/stable/interactive/shell.h...


Your complaint about clunkiness isn't wrong. But consider the contrast of (1) stating a full phrase that preserves the symmetry between syntactical constructs with their semantic counterparts or (2) the acronymic representation thereof. Whilst (1) is always clunky, it tends to be easier to use for non-experts, whereas (2) is minimal but requires an expert-level familiarity. In essence, it is little different from the contrast between written English and written Chinese.


It sounds a lot like the VMS shell (outside the directory manipulation and file redirection which was horrid, it wasn't that bad of a shell and the help facility was first rate).


Powershell in itself does have really nice design decisions, but it does not interact well with the whole system, leading to interoperability and portability issues and clunkiness in use, in my view. It looks like PS is designed more towards DevOpsing (letting developers do some administrative tasks in a familiar environment) than systems administration. Some of the rough edges I have encountered in the past:

The whole point about piping objects (.NET class instances, sort of) around is really awesome, except that even on 8.1 (did not check on 10) standard utilities like ipconfig output a wall of text, which has to be string-parsed.

To be fair, there are cmdlets for most if not all subsystems, except that wrappers are sometimes too shallow, e.g. System.ServiceProcess.ServiceController returned by Get-Service does not expose delete(), so you have to resort to WMI. I have not used PS extensively enough, but would not be surprised if some tasks would require one to resort to wmic, netsh or similar tools.

Powershell is not automatically updated, so you can find anything from 1 to 5, and anything portable has to be written for 1. E.g. Get-Service, which is in the global namespace, is only available since 3. IIRC W7 ships with 2.

Powershell allows you to use .NET directly, e.g. spawning a process with [System.Diagnostics.Process]::Start, except, as with the above, you have no idea which version of .NET is available on the target system.

I found no other way to gain administrative privileges except for relaunching itself in an administrative context.

PS ISE could be used to incrementally write and debug scripts by directly fine-tuning each command, except that it is non-interactive, so utilities asking for a password are kind of painful.


Slackware jumped from Version 4 to 7 back in 1999.

Anyways, Powershell is pretty amazing and they did get it right. However, some of the syntax leaves much to be desired. I mean this:

    Get-ChildItem $Path | Where {$_.extension -eq '.DLL'}

It's the worst example of a recursive search for a specific file extension, the easiest being "gci -Recurse -Include *.DLL", but the syntax using {$_. ,'*'} is used in many places, for most of which I've been able to figure out less complicated replacements that achieve the same results. This includes being able to pipe out each "object" in the resulting text appropriately.

This isn't a bash against powershell. They did get it right, but I'm left wondering why they bothered including this kind of syntax.


> That kind of work is not glamorous and highly controversial. Ideally there should be some sort of cross distro/OS working group (Debian/Ubuntu, Red Hat, SUSE, FreeBSD, NetBSD, OpenBSD, Mac OS X, etc.), similar to the working groups that standardize the web, where such proposals can be made, voted on and adopted.

Like the Austin group, the technical committee that constantly releases updates to POSIX and the Single Unix Specification? http://www.opengroup.org/austin/ http://pubs.opengroup.org/onlinepubs/9699919799/


Aren't those folks more "enterprise" oriented than "quality of life" oriented?


I have to disagree with the idea that everything should be standardized. What's the point of different distros if everything is just going to be the same? Might as well just tell everyone, "Ok, we're all going to use CentOS."


If you want to ship, you want standards. They should be creative in the UI and system administration tools (maybe not even for those tools), otherwise we can't build anything on top of them.

There's no real use for 3000 distributions anyway. Most of them can't even support the software they include properly. And most of them duplicate so much effort it's not even funny anymore (rpm vs deb, yum vs apt, etc.).

Standards are generally good (HTTP, IP, SSL, TCP, POSIX, USB, etc.).


There should be a standard within the organization (we use CentOS for everything internally), and for systems which need to interoperate, for example, the internet.

There does not need to be a standard for shells, for network script syntax, etc.

You say they duplicate effort, which is true, but you assume the effort being expended would simply be redirected where you think it's needed. If you kill off half of distros, you probably just cause the people working on them to go away, not switch to one of the remaining distros.


I've been using Linux for a long time and I've been involved in discussions about this for as long. A while ago I wouldn't have had a good argument, but now I do:

> If you kill off half of distros, you probably just cause the people working on them to go away, not switch to one of the remaining distros.

That would actually be good. Their personal benefit aside (they're having fun, together with their friends or random people on the internet, they're learning, etc.), for the rest of us they don't help that much; they just serve to increase the confusion. They also help drive away commercial software developers, since Linux users in general are very active and vocal. And then we get stuff like "why isn't software X ported to distribution Y".

Not much real value would be lost if half of today's distributions disappeared. Specialized ones, like system recovery, penetration testing, firewall ones, file system layout experiments (Gobo), would still be useful. But the myriad distributions in existence are an exercise in anarchy. They're not even an exercise in democracy, since democracy also involves caring about your fellow men and exercising self-control and discipline, which many of those distributions don't. As I said, the vast majority of distributions can't even support their software; see the recent security fiasco with Linux Mint. Which is a very popular distro!


> We should be focusing on saner bash defaults, since it's the most common shell in use.

Bash already provides two "modes": when called as `sh` it is more POSIX-compliant, when called as `bash` it allows more bash-specific features and syntax.

Maybe `bash` invocation can remain for backwards compatibility, and a new mode be added (e.g. `sash` for "sane shell") which can be used for breaking changes in the name of safety and sanity.

This opt-in usage would avoid breaking existing scripts, whilst being bundled as part of bash would mitigate some of the chicken-and-egg problem and the paradox of choice. Over time, `/usr/bin/env sash` would appear in more and more guides/tutorials/etc., whilst those using `/usr/bin/env bash` would continue to work, but look a little clunky.

Of course, making such a "sane mode" part of Bash would make it more high-profile and cause much more NIMBYism, politics and bike-shedding than a standalone project; but if it has a clear, consistent "mission statement" (e.g. "safety over convenience", "consistency over edge-cases", etc.), then maybe it could avoid too much scope creep and benefit from input by lots of very knowledgeable experts.


The new name has to be dcbash or maybe dcsh but dcbash is more fun.


That's because fish is a better version of existing shells. It's not different enough to justify throwing away what we have. Sane syntax and proper colors and better autocomplete make it better than bash, but not enough to throw away the network effects of bash. Think of hg trying to compete with git - it might be better, but it's not better enough.

As a shell person (I'm the maintainer of https://certsimple.com/rosetta-stone), I see NGS offering:

1. non-blocking

2. structured results

Doing both of those - if NGS can do it - is a MASSIVE shift over existing Unix shells. Think git replacing svn.

The POC for #1 looks really good: https://www.youtube.com/watch?v=T5Bpu4thVNo


Personally, I use fish as my interactive shell. Bash still exists in the background for purposes of _the whole world runs on it_.


Same here.


Same here too. Fish on all local machines, bash on AWS instances, and all scripts written in bash, simply for compatibility.


    older SysAdmins

I hope you aren't referring to their age, in which case consider using a different adjective: experienced, seasoned, wiser, etc. Your colleagues probably don't want to install fish shell for reasons besides how old they are.


Or, if you disagree with their opinion: hidebound, close-minded, timorous, etc.

I think the parent was going for something indicating disagreement, so it seems unlikely that your "positive" adjectives are what they were looking for.


I've been on both sides of the equation, having disagreed with sysadmins about their enforcing of a particular login shell, and having been a sysadmin myself. Not installing fish shell on production systems ("across the farm") is something I'd chalk up to wisdom, not a lack of open-mindedness. I was subtly suggesting that in my choice of positive adjectives.


Really, you shouldn't be logging in interactively to production machines, so there's no need to install extra tools by default.


This to me makes too many assumptions. I'm sorry, but experienced doesn't mean wise. And in many cases experience can mean complacency or fear of change.


Neither experienced nor wise was used. Older was used. You are comparing two possible substitutions with each other, not with the original term. There's not much point to that.


I just found HN a couple months ago and have been lurking since then. I just want to comment on how nice it is to see such sane conversations, and how reading all these comments makes me happy, and that I feel really at home here even just lurking.


I find ZSH a much more reasonable alternative than Fish. Comes preinstalled on many Linux distributions as well as OS X, and can be configured with plugins to do most everything Fish does.


If it's hard to try to convince them about fish... try to convince them to enter commands in a browser over a web stack, as a "shell".

If they are conscious of the parts involved and the security repercussions, maybe they will say the same: "no".

On the other side, technology is not advanced only by older SysAdmins, and it's nice to see new projects, experiments and developments.


> try to convince them to enter commands in a browser over a web stack, as a "shell".

What? I had never heard of fish before and this is making me nope out pretty quickly.


Fish is not web based. I think he's referring to the browser ui in the OP.


> I'm sure we can all agree that the current state of shells needs some work, but I don't think inventing a new one is the right solution.

Disagree. A shell is a domain-specific programming language, and all the existing such languages are terrible. We'd be better off with a properly designed language, like a shell language based on OCaml: Caml-shcaml [1].

[1] http://users.eecs.northwestern.edu/~jesse/pubs/caml-shcaml/


> We should be focusing on saner bash defaults

Yes! A huge part of the fragility of shell scripting is the fact that there are a billion options which change crucial semantics of how the language works.

Just pick some goddamned sane option values and disallow changing them!

There's a reason you don't see this kind of nonsense in "real" programming languages: it plays absolute havoc with reasoning about what will happen when you run a script. (You could easily miss the fact that the shebang line does/doesn't contain "-e" or that the option to error-on-unset-variables is/isn't set, etc.)
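
For reference, the closest thing today is opting in, script by script, to a pile of flags that arguably should have been the defaults all along:

    #!/usr/bin/env bash
    set -o errexit   # -e: abort on the first failing command
    set -o nounset   # -u: referencing an unset variable is an error
    set -o pipefail  # a pipeline fails if any stage fails, not just the last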


Side note: https://github.com/supercrabtree/k is a ZSH plugin in the same vein as `exa`.


Relevant xkcd: https://xkcd.com/927/


My number-one wish for a NGS:

Undo!

Take, for example, rm. The hoops we have to jump through when accidentally rm'ing a file are ridiculous [1]. But in most cases (smallish, non-secret files), rm should be trivially undoable. Windows gets this right: By default, files are not deleted, but moved to trash. If there is not enough space in trash, Windows warns you. Or, if you really want to delete a file instead of moving it to trash, you can press SHIFT+DELETE, in which case Windows will also warn you that it can't be undone. (What is missing in Windows is a "nuke" option that overwrites the old bits with random data, for those rare cases where a file must be purged from the system completely.) But in most cases, after deleting a file, you can simply press CTRL+Z to get it back.

It is possible to make rm behave like that on Linux [2], but in a NGS, this should be the default behaviour (in my view), with "delete" for really deleting (unlinking) a file, and "nuke" for completely destroying a file, as separate commands.

Undo is hard. Most programs are on their third, fourth, or even higher release before getting it right. (Mathematica 10 is still trying to get there...) But I think we should try harder to solve the undo problem with respect to file-system interactions, or with respect to system settings, in Linux.

[1]: http://unix.stackexchange.com/questions/10883/where-do-files...

[2]: http://unix.stackexchange.com/questions/42757/make-rm-move-t...


>Windows gets this right: By default, files are not deleted, but moved to trash.

The del or Remove-Item commands also permanently delete the file, just like rm. The "Windows" behavior you're talking about is the behavior of the graphical shell Explorer, which is also present in Gnome and KDE. There's nothing specific to Windows here.


  apt install trash-cli
  alias rm=trash
Done? This implements the Freedesktop specification, so is compatible with what KDE and Gnome do.

https://github.com/andreafrancia/trash-cli


You are right, I forgot about Gnome and KDE. I tend to use classic shells on Linux, and the graphical shell on Windows, so I made a wrong generalization. It's probably more accurate to say that "undo" of simple operations (rm, mv) is something graphical shells get right.

But why are classic shells shipped without even the simplest "undo" features? We can probably all tell stories from the trenches of how we accidentally did something awful by typing the wrong command. Sometimes, it's hosing the network configuration of a remote machine, which is not something where "undo" would help you. However, in my experience, most cases involve doing something foolish with rm or mv, where "undo" would be tremendously helpful. I guess the main reason why I don't use the text shells of Windows is that Windows Explorer gives me Ctrl+Z, which is saving my behind approximately once per year.

As a final thought, all shells (graphical or not) should expand the undo-features beyond rm and mv. For example, if I change system settings (or even Application settings?) and am not happy, I would love to be able to simply "undo" them, without trying to remember what the previous setting was. I realize that this is an incredibly hard problem, and that it is unlikely to be solved in an evolutionary step of the existing shells. That's why hearing about NGS got me so excited.


> But why are classic shells shipped without even the simplest "undo" features?

This question is assuming an untrue premise. There are Windows command interpreters that integrate DEL and RD with the Recycle Bin.

* https://jpsoft.com./help/del.htm#r

* https://jpsoft.com./help/del.htm#k

* https://jpsoft.com./help/rd.htm#r

* https://jpsoft.com./help/rd.htm#k


That said, I'd like a "del to recycle bin" in Windows too.


Then get yourself off to https://jpsoft.com/ , set the Delete To Recycle Bin option on in the configuration tool (https://jpsoft.com./help/inistartupdlg.htm), and enjoy exactly that. (-:


Back in the 90s (IIRC) someone wrote a kernel module for linux that provided the equivalent of a GUI "trash can/recycling bin" but at the filesystem level, where a specific directory (folder for you young 'uns) was set aside for the deleted files. Because it was a kernel module (filesystem driver) it was transparent to applications, including the shell.


lost+found? It's still there in each file system, but I never see a real usage for it.


I believe that's less an everyday-use "trashcan" and more of an ext* feature for recovering from filesystem corruption; "Something broke and I think I've recovered all the files, but I've forgotten where some of them were supposed to go".


Although I kind of agree with you, I'll play the devil's advocate: if you actually care about the file, it should have a backup that you can restore from anyway. And, depending on the file, that backup should be source control.


If the solution to the problem relies on human beings being fully-informed, entirely competent, and never making mistakes-- it's not a very good solution, IMO.

Yes, people should maintain backups. But people who don't, for whatever reason (maybe they've never heard the word "backup" before; maybe they typed the wrong command and thought they had backups all along, etc.), shouldn't be screwed if something goes wrong.


Backups are good. More frequent backups are even better.

I have btrfs snapshots run on the fileservers at 10 minute intervals. Which, of course, captures the entire filesystem state, not just files that you have deleted by accident.


I know you're playing devil's advocate, but that attitude is making the perfect the enemy of the good. Backups are non-trivial enough that many people don't set them up correctly, if at all; and even if they do, a restore can take some work. I think it's perfectly reasonable to have a relatively painless alternative (e.g. "undo_rm") as an extra layer of safety.


I bet you could solve this with ZFS and automatic snapshot creation for each new command. Want to undo the past N commands? Roll back to a previous snapshot!
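
A rough sketch of how that could look with zsh's preexec hook (this assumes a dataset named tank/home and the privileges to snapshot it; untested):

    # take a snapshot before every command; roll back to undo
    preexec() {
      zfs snapshot "tank/home@cmd-$(date +%s)"
    }
    # undo: zfs rollback -r tank/home@cmd-1461238496
    # (-r discards any snapshots taken after the one you roll back to)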


And a policy mechanism that lets you define how important a directory (or file) is and how hard the system will work (e.g. removing less-important data to free up space) to save it.

This idea falls over if the amount of data you create (which the shell can't know in advance) is too big to contain in a snapshot. All that would be possible is an automatic notification that the operations in XYZ directory are no longer undoable due to its size.


On the flip side, you can have multiple filesystem entries (via hard links) to the same set of files and set up your own playground directory while keeping a mirror view of the same directory. Remove a file from dir A, and it's still in dir B. Once you're satisfied with the result, you can delete the entire dir B. Accidentally removed a file from A? Restore it from B by making another hard link. (Caveat: hard links don't work with directories unless you're Apple TimeMachine)
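
With GNU cp the mirror is a one-liner (sketch; -a preserves attributes, -l hard-links regular files instead of copying them):

    cp -al A B            # B mirrors A via hard links, costing almost no space
    rm "A/some file"      # the data survives through B's link
    ln "B/some file" "A/some file"    # "undo" by re-linking it into A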


It would require the programs to explicitly describe their actions to the shell and, by default, to behave in a reversible manner. That would allow you to single out effects you dislike and tell it to undo them.

Kind of like making the shell behave in relation to your OS as an editor/IDE + version control system, rather than just as an editor/IDE. Old versions are preserved until you purge them.

Or perhaps it would just simulate state changing actions by default, waiting to be told to execute them.


If I haven't misread, you're proposing a replacement to rm that has support for undo? Why not just make a shell script that uses mv to place the item in, say, ~/.deleted and drop it in your PATH?
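
Something like this minimal sketch would do (GNU mv assumed, so --backup avoids clobbering same-named files already in the trash):

    trash() {
      mkdir -p ~/.deleted
      mv --backup=numbered -- "$@" ~/.deleted/
    }
    alias rm=trash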


A shell is an application that provides system interaction to users.

> What I see is a void. There is no good language for system tasks (and no good shell). What's near this void is outdated shells on one hand and generic (non-DSL) programming languages on the other. Both are being (ab)used for system tasks.

Aside from the, in my opinion, abhorrent word "outdated" (why are old things considered bad just because of their age?), the comment about there being "no good shell" is something I disagree with very much. Personally I'm really fond of zsh and its manual pages are a treasure trove. But, lest this digress into a flamewar about shells or into a non-productive discussion about preferences, the point I would rather make is that the author provides a different thing than, as it seems to me, qualifies as a shell.

For example, the open issue mentioned in the README is:

    Open issue: how to deal with a command that requires interaction.

As a Linux system administrator I am happy with the tools I have, and initiatives such as this seem to digress into the realm of the programmer. Sysadmins and programmers (a.k.a. developers - though I'd consider myself a developer too but not a programmer) tend to have very different perspectives about how to get an application onto the environment on which it runs, within the greater system of servers and networks.

In that sense, Docker and such mostly seem to get a preferential treatment from programmers, while sysadmins (again, as it seems to me) tend to dislike these kinds of abstractions. And I often get the impression that developers haven't got as much appreciation for sysadmins and what they do, as vice versa, but this could be false, or even more likely is that this is true for some and false for other cases. OMMV.

NGS might be a very useful project, I'm not at all negative about the project itself. It just seems more useful to programmers than to people like me who enjoy nothing more than to interact with their command line interfaces a.k.a. shells.


I'm surprised you can't think of problems you currently have with shells. Here are a few of mine:

* Once I've started a program, an easy way of sending it to the background if it is taking a while, which sends its output to some buffer I can refer to later, rather than continuing to spew it all over the screen.

* While we are at it, stop spewing the output of multiple programs over the screen, under any circumstances.

* Have some simple, easy to follow rules which let me work on files that have spaces in their names, without having to remember the various commands with various special cases (like -print0).


Thanks to tmux, I don't have a problem with multiple interactive commands and their outputs. If I need to do something new, I open another shell on a new pane. Not that other solutions aren't welcome, but this works for me.

If I don't want output at all, the &>/dev/null output redirection works, or file descriptor redirection, for which zsh has useful methods.

Spaces in filenames are easily solved by quoting filenames or variables. Again, zsh provides a myriad of ways to make this a trivial issue.

My main point was to argue what qualifies as to being called 'shell'.


One of the problems tmux and friends don't solve that a "smarter" underlying system might add is the ability to make the choice to detach a process or redirect its output after it's already begun running.

For example, I specifically have had on many occasions started something and then later wished I'd previously started screen. While, "gain the knowledge to better configure your environment to start" is a valid response, equally valid is a desire for a more flexible default environment in general.


"Detach during running" is a problem solved from the beginning of shells: just suspend a foreground command with C-z and type `bg` to wake it up in the background.

Output redirection seems to be a valid point, though.


I'd personally like a way to redirect the output of a running process (either `bg`d or `disown`ed) to a log file. The only ways I've come across are horrible hacks using gdb[1] or strace.

This should be easier. Anyone know of any better ways? I'd like to be able to do something like `proclog <pid> <stdout output file> <stderr output file>`.
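
For reference, the gdb hack in [1] boils down to something like this Linux-only sketch (1089 == O_WRONLY|O_CREAT|O_APPEND; calling into a live process like this is as fragile as it sounds):

    gdb -p "$pid" --batch \
      -ex 'call (int)dup2((int)open("/tmp/stdout.log", 1089, 0600), 1)' \
      -ex 'call (int)dup2((int)open("/tmp/stderr.log", 1089, 0600), 2)' \
      -ex 'detach'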

[1]: http://stackoverflow.com/questions/593724/redirect-stderr-st...


There's this: http://www.isi.edu/~yuri/dupx/dupx_man.html

Still uses gdb under the covers, but presents a front end close to what you are asking for.


For the first two issues, I use tmux. If something is long-running, I open a new shell to continue working. If I want to run multiple related commands, I can open many shells and have tmux send my keystrokes to all of them.


issue 1) sounds like you're missing nohup, screen or tmux in front of that long-running program.

issue 2) usually I don't have one terminal session with multiple programs doing output at the same time; that solves this issue.

Just use a new xterm, a new terminal tab, a new screen tab for programs that are going to produce output and stay running.

issue 3) I think you're referring to files with other special characters, because spaces in filenames do not need -print0 at all; simple proper quoting in the code will do.

Regarding issue 3, there are a lot of caveats when coding shell; even file names starting with a dash (-), which are interpreted as options by tools external to the shell (which fortunately support a double dash (--) to stop processing parameters).


This is the canonical writeup about argument injection vulnerabilities:

http://www.defensecode.com/public/DefenseCode_Unix_WildCards...


The point of issue 1 is that sometimes you don't know that the program will run for a long time. So a way to send a program into the background while it is already running, would be nice.


'Ctrl + z', 'bg' and 'fg' are for that.


Except, now you are back to my original problem -- the output of the program will continue to spew all over my terminal.


Well, if you always work in tmux/screen (a recommended practice for operations, which I've imported into my daily shell activity) you don't need job control.

When there is a "surprisingly slow" command, just press the shortcut for a new screen session.

You'll get alerted in the status bar when the slow command finishes.

The output of each command will be in its own screen session (not mixed).

That Ctrl+Z response was not for issue 1, but for the wishes in the second sentence of @hibbelig's comment.


I meant to include the part of capturing the output -- the part that Ctrl+Z does not do.

Opening a new screen/tmux session is a cool idea, but the new session doesn't inherit the state from the already running one, I guess. (At least in screen, not sure about tmux.) By "state" I mean the shell history, the current working directory, the (environment) variables.

Also, say that long running command runs five minutes, and after one minute you create a new session. Then after four more minutes you now have two sessions, both ready to accept input. Which one do you use to continue working? How do you know which one to close?


My screen respects the current working directory for new sessions.

My history is configured to write at every prompt redraw and not to miss commands.

With vanilla history and prompt command settings, history will be separated.

But this is a common problem, and not only with screen; also with multiple xterms and IDEs' shells. And... it's also a problem that history may miss lines by default...

And as it's a common problem, there are settings to fix it: configure the history merge strategy to your taste and don't miss history lines just because you closed an xterm or opened a new screen session.

Curiously, those settings are even documented upstream.
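
For bash, it boils down to something like this (zsh gets the same effect with setopt INC_APPEND_HISTORY SHARE_HISTORY):

    shopt -s histappend                       # append to the history file instead of overwriting it
    PROMPT_COMMAND='history -a; history -n'   # write new lines, pick up other sessions' lines, at each prompt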

Session variables? I expect the ones that could be loaded by login executing bash -l...

If I exported something after that, then maybe I simply need to export it again in the new session, but really, in 20 years I was never in this situation, I think... (not waiting for a command, continuing with other commands, and needing an environment variable not initialized by default in both sessions...). If I ever was in that situation... it was so easy to fix that I don't even remember.

Which one to close? The one I prefer, randomly; probably by focus, or probably the highest screen id. As history is merged, I can close either, or even both.


Thank you for all these great ideas. Yes, I totally forgot about settings to merge history.


What you are asking for, "post-redirection" (a term I just coined) is not an easy task. Redirection is, but that's set up prior to the program run (warning: pseudocode ahead):

    pid_t child = fork();
    if (child == 0) {
      /* in the child: point stdout and stderr at files, then exec */
      int out = open("output", O_WRONLY | O_CREAT | O_TRUNC, 0644);
      int err = open("error",  O_WRONLY | O_CREAT | O_TRUNC, 0644);
      dup2(out, STDOUT_FILENO);
      dup2(err, STDERR_FILENO);
      close(out);
      close(err);
      execve("someprogram", arglist, envlist);
    }
Standard Unix stuff. But doing that after the program is running isn't easy. The only way I know of is to send the program SIGSTOP (stops execution of the program---said signal can't be caught), and using ptrace(), inject code into the process to open the output file and call dup2() to get the redirection going, calling said code, then resuming the program.

I suppose one could script gdb (or whatever Unix debugger exists on the system) to do such a thing, but the fact that no one has really done this might mean something.


Those can all be seen as tweaks to existing shells. With regards to...

"* Have some simple, easy to follow rules which let me work on files that have spaces in their names, without having to remember the various commands with various special cases (like -print0)."

I'm not sure I understand what's hard about files with spaces in their names. I'd say it's easy enough to work with such filepaths by using tab autocomplete when using the shell interactively, and quote marks when writing shell scripts. Can you give an example of where these approaches wouldn't work?


When you pipe the output of one command into another, e.g. 'ls | wc' (obviously a dumb example), the second command will split the filenames on spaces and so will not run properly.

The workarounds for this all involve nasty extra parameters for different commands (e.g. the -print0 example)


ls is not bash. wc is not bash.

And that is a discouraged way to count files in shell script.

Still, simply adding -l to wc, so it counts lines instead of words, handles spaces correctly (for the file count).

I insist, spaces are not your enemy; there are much weirder file names for a shell. The shell can handle spaces if used properly.


You miss my point. Of course, for every example I give, it's possible to build a workaround to handle the spaces. My point is, it's the very fact that you need a workaround that makes it so irritating.

'command1 | command2' just works in most circumstances, so it's frustrating that it falls apart when a filename with a space appears.

And it technically is a shell issue, insomuch as the shell is dividing up the ARGV for each program. The shell is perhaps not to blame, because it can't tell the difference between a filename that has a space in it, and ordinary output that just so happens to correspond with a filename. In other words, it's hard to see what a shell could do to make things better. But the problem still exists.


wc is not a builtin.

In that example we're not facing a bash word-splitting issue.

I agree with you that shell scripting has caveats one need to learn. As does Perl, C, PHP, Ruby, Node, Go, Java and what not.

I don't feel a big change is needed to handle spaces in shell scripts; my scripts handle them and I enjoy writing them. Maybe you know of minor tweaks for bash, zsh or any common shell which could be useful in general for files with spaces in the name? Don't hesitate to open a bug for them; maybe we'll even get a fix.

But don't send them this example, and insist on it, because the conversation is over:

    $ touch a_file
    $ ls | wc
          1       1       7
    $ rm a_file
    $ touch "a file"
    $ ls | wc
          1       2       7
Equivalent input, with/without spaces and expected output.

The 2 is a word count, and we did pass two words, I don't expect a 1 there, _that_ could be a bug.


Once again, you're getting too hung up on my dumb little example, which I spent exactly 0 seconds thinking about. It's the general problem that's interesting (and annoying), the 'command1 | command2' general case.

If you want a difficult example, then take a more real-world one: e.g. the workflow of a 'find [some stuff] | grep [some other stuff]' is one to consider. That's where horrid workarounds like -print0 and -z have to come in, but the simple 'find|grep' works fine up until a file has a space in it.

As I said, there's no simple fix, even for the re-organised form of 'grep [some other stuff] `find [some stuff]`', because the shell can't tell the difference between a filename and just a stream of text in the output of one program.


The (ex) AT&T Research command tw(1) has pretty much replaced find(1) for me (particularly with some canned search selectors for particular projects).


I'm struggling to find any information on this command (it's not an easy name to search for!) Do you have any links you could share please?


Apologies, I forgot to include a link: The toolkit is now at https://github.com/att/ast since AT&T laid off the group a couple years ago.

About half the package consists of evolutions of traditional Unix commands. The parts I use regularly are ksh and tw. tw ('tree walk') is sort of a 'find --exec' replacement with a C-like selector syntax. It's a bit verbose, so for interactive use I generally set up project-specific shell aliases with selection expressions, e.g.

  alias cctw=$'tw -e "select: return (type == REG) && ((name == \'*.c\') || (name == \'*.h\') || (name == \'*.cpp\') || (name == \'*.cc\') || (name == \'*.h\') || (name == \'*.hpp\') || (name == \'*.mm\') || (name == \'*.inc\'));" '
and then use those, e.g.

  cctw egrep -w MyIdentifier


Thanks, this looks interesting!


If I needed to combine a find|grep right now (that is, if the directory recursion and filters of grep by itself weren't enough, which may be a corner case too...), I could do it like this:

    while IFS= read -rd '' file; do
      echo "do whatever with: $file"
      grep whatever -- "$file"
    done < <(find ~/whatevers -print0)
It's like natural language if you do it daily.

Will handle not only spaces, but also

new

lines

on

file

names.

Have a nice day.


You miss my point. Of course, for every example I give, it's possible to build a workaround to handle the spaces. My point is, it's the very fact that you need a workaround that makes it so irritating.


For me, the code I gave is not a workaround.

It's the canonical way to do it.

Other ways, even if they are "expected to work by inexperienced occasional users"... are simply flawed at first sight.

A workaround is to ditch shell script as soon as you face a problem, blame shell script, and turn to doing it in a "more advanced language" that has the same or more caveats. That could be a workaround.

Delimiting file names with null bytes, in case they could be split on any of the $IFS values, is NOT a workaround; it is pure logic.


That there are worse things that exist is no reason not to pluck the low hanging fruit. I see this attitude on HN a lot. Spaces are probably the 80 in the 80/20 rule here. Why not address them?


Please give a single example in shell about spaces that has not already been addressed.

Where do you get that rule from?


It's the Pareto principle. In this case he means "20% of the problems account for 80% of the trouble" -- albeit somewhat strangely worded.


Thanks, I did know about the theory. My question was more about where the numbers are coming from.

I write shell daily, and those are not my numbers regarding "spaces in filenames" issues, sorry.


Not sure why this comment got down-voted.

If it's because I did not explain how to do that, here is how to do it:

    http://mywiki.wooledge.org/BashFAQ/004
If it's because of the comment on the given example...

    $ touch file1
    $ ls | wc -l
    1
    $ touch "file 2"
    $ ls | wc -l
    2
... (?)

If we were talking about "newlines in file names", or "dashes at the beginning of file names", or code injection through file names, then we could be talking about more complex solutions.

But the space issues in shell are simple, and have known solutions. If you're a daily user, or past the learning stage, spaces don't turn out to be an issue.


I'm guessing it was down-voted because you keep missing the point. Yes, there are work-arounds for this scenario, but the overhead of remembering the work-arounds is the problem. He's not saying you can't do it; he's saying the problem is that you have to change your setup for edge cases, which are actually fairly common.


Indeed, edge cases keep hitting you until you know them and how to work around them. Is that a shell-specific issue? That is a universal issue, I think.

A big problem is that people who underestimate an unknown technology do not take the time to learn that technology properly.

Many programmers think they "know" shell (and many are beginners), so they don't invest more time and tests, and then they keep facing corner cases, facing known and documented and solved issues, etc...

You are asserting that learning bash is harder than learning the corner cases of other, more advanced languages. Is that what you say? Do you think that "bash" has more corner cases than... (?)

How can we patch that? With a web shell?

Totally true, I've totally missed the point.


It's not even so much the overhead of remembering the work-arounds, as remembering when exactly you need the work-arounds.


Shell scripts split on spaces (or the values of $IFS) by design.

For example, we use null-delimited values because a variable (like IFS) cannot contain null bytes. Is this a workaround? I see it as pure logic.

Even understanding how to handle them helps you understand the internal design of the shell and common external utilities.

Remembering is hard? Not a "shell"-specific issue... I use it daily, and maybe that's why I don't face the same problems that others see so clearly.


haha, yes, that's what I meant. Your phrasing is better


But, I need to remember to do that.

How about (for example):

    $ for i in $(ls | grep 'cheese\|fish' | head -n 5) ...
I'm sure there is a way to make this work correctly (take 5 filenames containing either cheese or fish), but I'd need to put more thought into making it work correctly.


  ls | grep 'cheese\|fish' | head -n 5 | while read -r i …


With Zsh:

  > touch {dark\ red,green,light\ blue}{cheese,fish\ fingers,fruit}
  > ls -1 *(cheese|fish)*([1,5])
  dark redcheese
  dark redfish fingers
  greencheese
  greenfish fingers
  light bluecheese
  > for i in *(cheese|fish)*([1,5]) ...
Since that's shell globbing, it can cope with any spaces, newlines etc.


Wow, that is quite impressive, I might have to look into this more. The advantage of using head is I only have to learn it once, to get the first n lines or files, but it might be worth learning. Thanks.


I know 25% of the special Zsh syntax, and know of a further 25% of what exists.

I read the whole manpage some years ago, and noted down what seemed useful. I use "man zshexpn" when I forget the syntax for things like this.


wc, in its basic form, is made for counting words. A filename in your case is not a word. I don't see the problem here. Use wc in line mode and the "problem" is solved.

It also has nothing to do with the shell, nor does the general case of pipes you present below. Your shell redirects the output from ls to the input of wc. If you don't like the simple approach that tools consume and produce arbitrary text in a manner each sees fit for its purpose, maybe it's your operating system that you have a beef with.


Just use tmux. It'll take a day to figure out, and then all your problems go away. Well, save the spaces, that's never going to happen.


> all your problems go away. Well, save the spaces, that's never going to happen.

Spaces in filenames are fine. The only strange thing is that, as it turns out, shells have this hidden easter-egg/anti-feature where you can actually leave out the quotes around filenames under certain, special edge-case scenarios. One of those is when the filename contains no ampersands, quotation marks, asterisks, dollar-signs, parentheses, backslashes, newlines, or, you guessed it, spaces! In fact, this special "no quote mode" also contains another special, embedded mode where you can still use those characters, but put backslashes before them! Since this layering of tricks leads to funky, confusing-looking commands, it's obviously much better to just stick with the regular mode of quoting everything.

Of course, the above is quite tongue-in-cheek, but it's the mental model I specifically try to adopt: "occasionally, you might get away with leaving out the quotes"; compared to the seemingly more common "you need to add quotes in these cases".

Still, this doesn't eliminate the problem of single quotes in filenames never going away! Whilst it only requires one character code to be escaped, it's still accomplished in a pretty funky way: "'" becomes "'\''"; we first close the string, then use a backslash to write a literal "'", then we start a new string ;)
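
Concretely:

    $ touch "it's here"
    $ rm 'it'\''s here'    # close the string, backslash-escape the quote, reopen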


You could use a shell that magically handles all the escaping for you, either while typing/pasting for some programs (curl, grep, etc.) when not in quotes, or via tab completion with a selector, so you don't have to think about spaces, special chars, etc.:

http://paste.click/VfUUWv


By and large I agree. As a programmer, I generally feel like the sysadmins I work with "hide" how they do things and aren't transparent enough about their role for me to know why I should respect them. I've seen some do amazing things, and then I recognize the difficulty, but I don't feel there is enough collaboration between the two sides.


>Open issue: how to deal with a command that requires interaction.

Yet this problem is solved very simply in bash and other shells by allowing you to explicitly tell the shell when to run a command in the background and when not to, using the & operator.


For me a next generation UNIX shell needs to catch up with what REPL environment in Lisp Machines, Interlisp-D, Mesa/Cedar, Smalltalk, Oberon(-2), AOS features and capabilities.

For the young HNers, think having something like Swift Playgrounds or IPython as your shell, while having full access to the OS API without relying on external programming languages.

Otherwise the next generation prefix isn't worth mentioning.


And arguably you'd have to call it last-generation as well, if you recall how long ago those innovations happened but never made it into the mainstream. That would label current shells as the dark age of the tty :). I still cannot accept the fact that we use tty emulators.


For me PowerShell feels a bit closer to that model, but it is still text-based, with no inline graphics or data-structure output, and the syntax could surely be improved.


PowerShell actually does have data structures, and that's my favorite part of it. Pipe a directory listing as an object instead of text so you can access properties directly instead of parsing text!


No, I mean displaying them inline in the REPL and allowing you to interact with them.


Well, that's an issue with the REPL, not the design of PowerShell. No reason you couldn't make one that does this.


Sure, I do like it much more than any UNIX alternative and it is the only widespread shell that is closer to the experiences I was referring to.

But that REPL could be improved, that is what I mean.


Smart comment.

The UNIX shell and core-util model of "everything is text", and line-by-line output and tools for line-by-line data manipulation, goes very far, much farther than you'd think, but is limited. Everyone has internalized those limitations.

It reminds me of MATLAB: everything is a matrix (another standard, versatile, tabular data structure), and many language constructs exist to manipulate this common data structure.


Something like TempleOS, written in Holy C?


Terry really did something cool there with his JIT Holy C (or C+ before the TempleOS thing). I'd definitely like to see something like that ported to other platforms.


As you said, something like IPython: https://ipython.org/ipython-doc/3/interactive/shell.html


Having read through the readme, I think some of the issues outlined could be solved by having a cell-based model like jupyter/ipython.

Basically a command (input) and its corresponding output are tied together as a "cell". The UI could be very like the ipython notebook, where you have a text box for command input and then above it you have a chronological list view of cells. You would also have a good keyboard-shortcut language for navigating back and forward in the cell history, so you could, say, go back and focus a cell which has been running for a while and is now prompting for input.

(Edit: imagine a key sequence like `Alt-b, Alt-b, Ctrl-Enter` meaning "go back two cells and give text focus to that cell's input field". In reality you'd probably want a command language that's more vim-like.)

The cell model would also allow you to truncate output, minimize or maximise the cell display, and a bunch of other UI tricks I can barely think of right now.

Basically each cell would be a little parallel shell of its own.

If this sounds even remotely compelling, I wouldn't mind writing up a more formal proposal and submitting it as an issue on the repo.


There are lots of mixed-up goals here that I think might be better served broken out into multiple projects:

1) Terminal UI improvements

2) Interactive shell UI improvements

3) Shell scripting language improvements

4) Userspace utility improvements

Tackling all of these together may well be possible, but it will likely limit the potential impact of any improvements you are able to make. IMO, it's easier to move the status quo with incremental improvements, not wholesale re-imaginings of how the entire text-based Unix user interface works.

Ultimately, the most dangerous thing to mess with is the language itself. The strongly-typed Python-lite described in the linked man pages is not going to replace sh/bash for simple scripts, sorry. For most shell-scripting needs that sort of thing is way overkill. Sure, Bash is ugly, but ultimately there's no way to harmonize quick-and-dirty command-line compatibility with an elegant scripting-language syntax.

An alternative approach here would just be to expand your thought process a bit and don't get too worked up about Bash itself. Pipelines and new tools can solve all your problems if you're willing to think about them in a different way. `jq` is an amazing pipe-friendly JSON munger that is now just as essential to my CLI toolbox as `awk` and `sed` ever were. It solves your structured-data and typing complaints without interfering with Bash one way or another.
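
For instance, flattening a JSON API response into pipe-friendly lines is a one-liner (the endpoint and field names here are invented for illustration):

    curl -s https://api.example.com/repos | jq -r '.[] | "\(.name)\t\(.stars)"'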


> The strongly-typed Python-lite described in the linked man pages

It's not strongly typed, and type annotations can be left out in many places:

    mylist.map(F(elt) "my $elt")

The elt type is not specified.

The command syntax $(...) will cover some of the simple scripting needs. Please provide specific code examples (probably in bash) that you think will look bad.

> `jq` is an amazing

It is, but in my opinion it's much less convenient than built-in support for data structures.


Looks quite interesting! Some feedback:

Yes, a new paradigm for interacting with text commands would be wonderful. But what would that look like?

One strong advantage of the current system is that the input matches the interface. Everything is text, so it all can be typed with the keyboard. But if this shell "displays structured results as real fing structures (YAML, JSON, ...)", that may not be true. If the shell spits out a table, is it a CSV table? A YAML / JSON nested array? How is the user supposed to edit it? The interface isn't clear, and any layers at all are going to take some thought to be able to compose naturally with other tools. (The author addresses part of this with "All operations made via a UI, including mouse operations in GUI must have and display textual representation, allowing to copy / paste / save to a file / send to friend.")

Another thought: from the proposal in the README, it isn't super clear which tasks the author feels the shell ought to handle versus which tasks should be handled by some other toolkit. For example, the "Manage multiple servers at once" subsystem is handled by other devops tools like Ansible or Chef at the "deploy an application" layer and tools like Tmux or 'clusterssh' at the "send input to multiple processes in many screens at once" layer. Another example: the author proposes "Actions on objects that are on screen. Think right click / context menu.", but since these actions must require cooperation by the program being invoked, it's not clear whether this serves the purpose better than simply wrapping that program's operations in some GUI toolkit.

This could be pretty interesting, but I wish the author would formalize some of this thinking into a concrete standard. Having too big of a vision without some set boundaries seems to be hurting the focus of this promising project.


> "One strong advantage of the current system is that the input matches the interface. Everything is text, so it all can be typed with the keyboard. But if this shell "displays structured results as real fing structures (YAML, JSON, ...)", that may not be true. If the shell spits out a table, is it a CSV table? A YAML / JSON nested array? How is the user supposed to edit it? The interface isn't clear, and any layers at all are going to take some thought to be able to compose naturally with other tools. (The author addresses part of this with "All operations made via a UI, including mouse operations in GUI must have and display textual representation, allowing to copy / paste / save to a file / send to friend.")"

In PowerShell, all data is stored as .NET objects. Using this object-oriented approach enables the sort of flexibility I believe you're looking for. Perhaps something similar could be developed for Linux. There has been some activity in this area:

https://github.com/Pash-Project/Pash

https://blogs.msdn.microsoft.com/powershell/2015/05/05/power...

http://www.forbes.com/sites/justinwarren/2016/03/08/is-micro...


> Manage multiple servers at once

IMHO this will be much better if integrated into the shell. For example, it would allow easy access to shell variables holding the lists of hosts on which the command succeeded or failed.
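
For comparison, a rough bash sketch of what this takes today without shell integration (peers.txt is a hypothetical one-host-per-line list):

    # run a command on every host, collecting successes and failures;
    # -n stops ssh from eating the rest of the host list on stdin
    ok=(); failed=()
    while read -r host; do
        if ssh -n -o ConnectTimeout=5 "$host" uptime; then
            ok+=("$host")
        else
            failed+=("$host")
        fi
    done < peers.txt
    echo "succeeded: ${ok[*]}"
    echo "failed: ${failed[*]}"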


Lots of assertions about how "bad" our current shells are, but no substantiation. I'm a developer, and I know just enough shell to write a simple deploy script or two, but reading this, I'm not convinced that what we really need is new shells and scripting languages.


Simple example: with current shells it is not convenient to work with API call results, which are structured data. Yes, there is jq, but if the shell had built-in data structures it would be much better, wouldn't it?
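
Even a trivial extraction means reaching for an external tool, e.g. listing open issue titles from the GitHub API (real endpoint, real jq):

    curl -s https://api.github.com/repos/ilyash/ngs/issues | jq -r '.[].title'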


On UNIX, the "API" is basically that everything is a file and programs are written to handle text streams. After all these years, the shell is still perfect for this.

It sounds like you want to create a new language and shell in order to have something similar to Windows PowerShell, which interacts with .NET objects using their APIs. This does not fit into the Unix "API" described above. This isn't a "Next Generation UNIX Shell" but an alternative shell that suits a certain group of users who don't really understand UNIX.


That's great, but how much data do you work with, day-to-day, that is best represented as text streams? For me, the answer is "virtually none". Heck, most of the stuff I work with, day-to-day, can't even be represented meaningfully in a text stream. (Editing video, for example.)


For me, the answer is "plenty", but that's neither here nor there. Everyone has their own use case and I daren't say yours isn't valid.

Video editing tends to be done in applications; of course, that's not to say video data can't already be piped. I once set up a screencast using GStreamer to convert /dev/video to an HTML5-compatible video format, which I then redirected to a named pipe that could be opened from an HTML page in a web browser. The UNIX shell is already very powerful. You haven't stated what it is you want to do with video editing that is either difficult or not currently possible but would be possible with this new shell, so I'm not sure I really understand the point you are trying to make.
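
Reconstructed from memory, the rough shape of that setup (GStreamer 1.0 element names; the exact pipeline may have differed):

    # encode the webcam to WebM and write it into a named pipe
    mkfifo /tmp/screencast.webm
    gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! vp8enc ! webmmux ! filesink location=/tmp/screencast.webm
    # then point the browser (or a <video> tag) at /tmp/screencast.webm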

There is nothing wrong with creating an alternative shell. I can see the need for something like this. PowerShell, for example, works very well. My main gripe with this project is that the author has named it "Next generation UNIX shell", which suggests that it is intended to be a replacement rather than an alternative. In my humble opinion, it tosses out the window the simplicity that makes UNIX, UNIX.

To quote Dennis Ritchie, "UNIX is simple. It just takes a genius to understand its simplicity." I'm no genius, but I get it.


It is more that Unix shells are built on piping, and piping is systematically broken in Unix shells. This becomes extremely obvious after using PowerShell for any length of time, which doesn't require most of the work of scraping and rewriting output to make it compatible with another program.

Doing piping perfectly is extremely hard and basically requires that all commands expose metadata about their arguments, but doing it better than today wouldn't be all that hard.

You could do the same thing with text streams, but structured piping would be a lot harder. If all programs read and wrote something like JSON or msgpack and shared a unified argument parser and dumper, working with the shell would be a lot easier and faster.
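
For instance (the --json flag here is hypothetical; jq is real):

    # hypothetical: structured filtering instead of column scraping
    ps --json | jq -r 'select(.cpu > 50) | .pid'
    # today's fragile equivalent, which breaks if the column layout changes
    ps aux | awk '$3 > 50 {print $2}'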


I'm not really sure what you mean when you say "piping is systematically broken". It sounds like you might be trying to do something that isn't really meant for piping. The general idea behind UNIX programming is that projects should be broken down into small, simple programs rather than overly complex monolithic programs[1][2].

See also my reply to blakeyrat[3].

[1] http://www.catb.org/~esr/writings/taoup/html/ch01s06.html#id...

[2] http://www.catb.org/~esr/writings/taoup/html/ch01s06.html#id...

[3] https://news.ycombinator.com/item?id=11547241


I don't think there's enough that needs fixing in the current shells/terminals that require a new paradigm at the moment. These things seem to come and go without much adoption.

Sorry, bit negative of me - it's awesome that people are trying.


Just because something has been around for a long time it doesn't mean that it's outdated. On the contrary, it means that it's stood the test of time, which is usually a sign of a good design.


Sorry, by outdated I meant not a good fit for today's problems.


Personally, I don't see how today's problems are that much different. I agree we should always be looking for better solutions, but IMO, it seems better to adjust our current tools than to rewrite them.


Something I would LOVE to see (not only in a shell but in all tools) is some kind of "project mode".

- In Chrome/Firefox I'd like to have a mode for recreational browsing, one for research of webdesign and one for fitness/health stuff. This basically means that I want to click on a browser window and say "This is now my fitness window. Please remember all the open tabs". And then there is a list of open sessions/projects where I can switch between those views/instances. I'd like to be able to close my fitness window (without losing my fitness tabs) and then re-open the same fitness window on the next day.

- For Dolphin/other_filemanager I'd like to see a "normal" mode for everyday browsing, a mode with large icons for sorting pictures and some mode for working with Python scripts in which the nautilus terminal is always open

- for Sublime I want a scratch window where I open and edit random small files, one for my html coding and one for my Python coding

Currently, for almost every program, we either have only one "session" or some really specific project settings (like Visual Studio), but I'd like to see something in between.

For example, I have a lot of HTML files open in Sublime and I need to edit some unrelated config file. If I just open this config file, it will open in the same window as the HTML files. What I'd like to have is a way to say via command line or context menu "open this file in scratch-pad mode".

The same for Dolphin. If I have one instance of Dolphin open for picture management with large icons and I have to do some Python file editing I'd like to have a simple way of saying "Open Dolphin in python mode".

Or sometimes I have a random link. I want to open that link in my "random browsing" instance of Chrome and not in the window which has currently all my fitness tabs open.

And the same is true for video players, music player, ebook programs, ...


> - In Chrome/Firefox I'd like to have a mode for recreational browsing, one for research of webdesign and one for fitness/health stuff. This basically means that I want to click on a browser window and say "This is now my fitness window. Please remember all the open tabs". And then there is a list of open sessions/projects where I can switch between those views/instances. I'd like to be able to close my fitness window (without losing my fitness tabs) and then re-open the same fitness window on the next day.

Sounds like you're asking for 'Tab Groups'. It used to be a feature built into Firefox, but it recently got removed due to lack of use; it is, however, still available as an add-on (mostly using the existing code that was removed from Firefox itself, afaik):

https://addons.mozilla.org/en-US/firefox/addon/tab-groups-pa...


I used to love and use Tab Groups, but it was always amazingly slow on my PC (old-ish Celeron 2.6 GHz).


Firefox's Session Manager (http://sessionmanager.mozdev.org) does almost exactly what you want.


Chrome has this feature already, with the concept of Persons. I have a "Work" person and a "Home" person, which gives me a different set of cookies, extensions, pinned tabs etc...


I know about persons, but I'd like to share my cookies and other stuff between all my "projects". I tried to use it, but it is too separated for my purposes.


You should try Session Buddy for Chrome: it does just what you asked.


> The problem with outdated shells looks pretty clear: they were made with one kind of tasks in mind but are used for other, bigger and more complex tasks. Such scripts usually look as a fight against the language and working around it much more than using it to solve the problem.

Mind you,

> Create a shell (in that language) that is up to date with today's tasks - cloud, remote execution on a group of hosts.

I'd rather create another DSL/tool to solve it.

> Two languages actually.

> Current-shells-like but simplified (called "commands"), $(...) syntax

> Scripting language, "normal" programming language (called "code"), {...} syntax

Good luck.

> The scripts that I've seen (and written in Python and Ruby) look too verbose and show unnecessary effort. Such scripts do not look an optimal solution (at the very least).

Did you check scsh? (I don't use it.) http://scsh.net/docu/html/man-Z-H-3.html


The QNX shell could easily do remote execution. I worked at a company in the mid-90s (porting Unix software to QNX). My machine had a modem on it, and my boss (the owner of the company) would use it from his computer to dial out---all he had to do was reference the modem device on my computer at his command line.

In fact, you could run a program on a local machine referencing a file on a second machine, pipe the output to a program on a third machine and have that redirect the output to a file on a fourth machine, all from the command line. I don't recall the exact syntax, but it was something like:

    cat @2/path/to/large/file | @3/bin/grep 'foo' >@4/tmp/output
(Individual hosts were numbered on a QNX Ethernet-based network, but the default permissions were Unix-like.) All of this was a consequence of the message-based QNX operating system (whether messages were delivered locally or remotely was invisible to programs), but I don't see why something like it can't be done today.


This is quite a neat example, but it looks like the QNX shell mixed several concerns: how do you connect to these machines, how do you secure the communication, etc.

    $ ssh @2 "cat /path/to/large/file" | ssh @3 "/bin/grep 'foo'" | ssh @4 "cat >/tmp/output"
is rather clunky, but separates those concerns well.


Sorry about not getting back sooner (returning from vacation), but the QNX network message passing ran over Ethernet, not IP, so it only worked (to my knowledge) over a local segment. Given that it was transparent to user processes, it could be argued that security could be added at that layer (layer two of the seven-layer OSI model).


I see now. The parentheses don't feel like a convenient syntax.


Scheme could be a good language (for fans?) if it is not used interactively.


Scsh is a very powerful shell based on Scheme 48 plus its own additions. It is worth noting that as of the 0.7 release it runs on x86_64.


Thanks for your input and GitHub stars! You are all very welcome to open GitHub issues with your suggestions.

I'm looking for developers to join the project. There is a lot of work to do :)

Ilya Sher, the author.


Actually rc was the next generation Unix shell.

(Personally, I use ksh, which, while as bloated as bash, is bloated in more useful ways, and not so gratuitously divergent from plain sh.)


If processes were referentially transparent functions, the shell could be wicked fast: it could run commands speculatively as you type them, discard the side effects if you backspace, and memoize the results. Along with a zillion other game-changer benefits like determinism, built-in perfect auditing, and the ability to rewind and restore processes to past states. That's what we should be striving for. But it's a lot harder.


The first few bullet points (regarding job control / status / blocking) sound interesting and useful for anyone. I think it would be cool to have a shell with some sort of visual job control system and an easy way to background anything that was taking significant time. Maybe that wouldn't be as deeply integrated as the system the author is describing, but it would probably be easier to implement and less of a change from a standard shell. These days I hardly ever suspend a process and continue it in the background because it's easier to spawn a new shell, but if that workflow were streamlined... maybe I wouldn't need a dozen shells open all the time.

As for the rest, I dunno. Neat, but complex solutions like this make me pause. Maybe instead of trying to solve this problem, we should instead try to not have this problem. That is, if your tools require this much of your shell, maybe the tools are the problem and not the shell?


IMHO the world does not need another shell language. Please, no... it's sometimes complicated enough with zsh & fish today. A shell which behaves totally differently will give me totally different problems than the ones I have now... no thanks. I know my "enemy and friend" Bash, and it's a good thing that I get it nearly everywhere.


My 2 tinfoil hats: If they can't have your kernel, they try to get your shell...

To me, the points listed really don't seem to be missing. For instance, my shell script for remote execution on multiple hosts needs somewhere around ten lines of code and a peer list. I would never outsource this to an external entity, especially today.


Actually... bash has been a pretty solid language since version 4; it's just that no one takes the time to learn it.


Might you elaborate on what's new or different in version 4?


Associative arrays are one of the biggest v4 additions.
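
A quick illustration:

    declare -A ports            # bash 4+: associative array
    ports[http]=80
    ports[https]=443
    for svc in "${!ports[@]}"; do
        echo "$svc -> ${ports[$svc]}"
    done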


The irony...

> Next generation UNIX shell. See the man page.


I think an OOPish DSL as a shell is pretty wrong[0], to be honest. Most of the time in UNIX what you're doing is chaining commands or automating. So perhaps what's needed is a DSL with a more functional, stream-based approach. Something along the lines of:

> map `rm -rf ` (grep "*.tex" (ls))
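
For comparison, the closest plain-shell spelling of that pipeline today is probably:

    # note grep takes a regex rather than a glob, and this breaks on filenames with spaces
    ls | grep '\.tex$' | xargs rm -rf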

[0]: I haven't had much experience with FOOP, however.


If you're into Scala, Haoyi Li has a great video about his (similar) Ammonite project:

https://www.youtube.com/watch?v=dP5tkmWAhjg

http://www.lihaoyi.com/Ammonite/


Use a general-purpose language with macros or DSLs. Racket and Rebol are older ones, with Red being designed partly to do what the article describes.

https://en.wikipedia.org/wiki/Red_%28programming_language%29


I suppose all of the behavior should be highly customizable. It would be nice if plugins could be written in any language. For example, I expect special CLI editor extensions for editing JSON input to programs (with syntax highlighting if desired). Also, I expect specific key-bindings (e.g. for vi/emacs modes) to be pluggable.


I encourage you and others to open issues or make a pull request with an updated README.


The github page links to a screencast which I found more helpful than the github writeup itself:

https://www.youtube.com/watch?v=T5Bpu4thVNo


But why nodejs?? There are lots of better languages to base this off of.


NodeJS was the first, throw-away implementation. Then Lisp. Now it's in C, and it looks like it will stay this way.


Probably because JavaScript is the world's most popular programming language, it's portable, has the largest package manager, supports the shell's non-blocking requirement, and the current version (ES6) has a better stdlib than previous versions.


You're wrong on about three of those statements, but note that the same hand wave could have been made for Perl ten years ago. That would've been a poor choice then for the same reason Javascript is a poor choice now.


> You're wrong on about three of those statements

OK. Do you have any supporting arguments? "You're wrong" doesn't contribute anything.

> note that the same hand wave

It's not a "hand wave". Those things are easily verifiable facts. You're new here, you might want to read:

https://news.ycombinator.com/newsguidelines.html


Not new. Javascript is not the most popular language, does not have the largest "package manager" (nor standard library, nor package library, which is what I think you mean), and, more importantly, it does not have the most programmers specializing in the problem domain at hand -- shell functions. Moreover, the referenced toolset doesn't even use Javascript! You merely jumped in blind to promote your favorite tool. Also, if you want to play HN-by-the-book, then you should refute my argument: why would JavaScript shell scripting in 2016, starting from cold, be better than Perl shell scripting in 2000, given Perl's lead back then?


> Javascript is not the most popular language

> does not have the largest package library, which is what I think you mean.

Yes, you've said that before. Do you have any supporting arguments?

I'm basing that opinion on:

1. JS is #1 on modulecount and #2 on libraries.io, the two places that track the size of package repositories.

2. JS continually coming up as the #1 or #2 most popular language on StackOverflow eg http://www.r-bloggers.com/the-most-popular-programming-langu...

Happy to talk about Perl once this becomes a dialog rather than you telling me I'm wrong without the courtesy of explaining why.


And the most popular girl in my high school can't ride a motorcycle. I took her to prom, but I'm not inviting her on my next cross-country ride. What's your point?

https://en.wikipedia.org/wiki/Measuring_programming_language...

Regardless, you don't pick a toolset based on popularity. You choose it based on capability. 100,000 front-end HTML developers asking the same "how do I regex?" question on StackOverflow 10,000 times doesn't pre-qualify an ecosystem as the perfect tool for shell automation.

And at this point you're just trying to save face after you jumped in to explain why it's such a great idea that this project uses Javascript. It doesn't. I'm moving on and suggest you do the same.


My point is that while you keep saying something is wrong, the evidence shows otherwise.

I'm pretty sure you're already aware of that. Your account is new; perhaps you've been here before, but you don't seem very capable of participating in technical discussions. I'm going to end this conversation for obvious reasons.


There've been many attempts, and no web-based shell is the next generation - only the next toy. A replacement for Bash, Zsh, and Fish is needed; it just won't be a web-based toy!


Curious if anyone here has adopted the shell mode of iPython as their routine shell (or something similar). Is it a good idea? Experiences?


So, this project starts out as a shell, then I see it also aims to implement a full-blown terminal emulator and a lot of very ambitious new features, and then it invites terminal-based apps to use its new capabilities, so it would only be at its best if a whole ecosystem of compatible apps appeared.

Unfortunately, I'm pessimistic.

Also, I would like to comment on the project's README. Not meaning to be harsh.

> The shells never caught up.

Wrong. To prove this, it is enough to see that almost none of these "features" are new in comparison with traditional terminals.

> Not to block, allow typing next commands even if previous command(s) still run(s).

Got plenty of ways to handle that with a shell: backgrounding, GNU Screen, tmux.

> Open issue: how to deal with a command that requires interaction.

Oh yeah, you've invented a thing and don't know how to use it with real-world apps. Happens.

> Provide good feedback. In GUI for example, this can be green / red icon

An enhanced setting of the shell's "PS1" variable gives you a good indication of the exit code of the last app (color + text + whatever; you can even use audio if you want). I've been using it for years.
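
A minimal sketch of the idea (done via PROMPT_COMMAND so the prompt escapes expand properly):

    # rebuild PS1 before each prompt; show the last exit code in red when non-zero
    PROMPT_COMMAND='code=$?; if [ $code -ne 0 ]; then PS1="\[\e[31m\][$code]\[\e[0m\] \w\\$ "; else PS1="\w\\$ "; fi'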

> All operations made via a UI, including mouse operations in GUI must have and display textual representation, allowing to copy / paste / save to a file / send to friend.

Sure, why not, but all this is possible with shells. Both native consoles (with gpm) and X11 terminal apps support this, and tmux has its own clipboard handling as well.

> Different UI modules must exist

Lots of people were playing around with shells via IRC or XMPP ages ago.

> Web

Good luck handling keystrokes with tricky keycodes and preserving all the other standard terminal features. BTW, I thought this project aimed at another shell, but this sounds like it requires another terminal emulator. I'd be glad to hear we have a powerful web-based terminal emulator, but this feels very laborious.

> allow multiple users to collaborate

Screen/Tmux.

> some rw, some ro

Seems lacking in tmux at the moment, but possible to implement on its own, without reinventing everything else.

> Most of the data dealt with is tables. List of files, list of instances in a cloud, list of load balancers. Amazing that none of current shell tools (I heard of) don't treat the data as such

Parsing the textual output of shell programs is discouraged unless the app clearly guarantees a stable, parsable/consumable format (with the specific options you use). Quite often there's another, more reliable way to mine the information than parsing the output of a general-purpose program.

> Allow editing it and saving to file

    some_program | vim -

Then, in the editor, edit and save. Done. If you add too many specific use cases into the core, you end up with a bloated interface.


"So, this project starts about shell, then I see it also aims to implement full-blown terminal emulator and a lot of new very ambitious features, and then it invites terminal-based apps to use new capabilities of it, so it would be best usable only if the whole ecosystem of compatible apps appear."

This is actually my only objection and the only feedback I'd make to the author. Contra some of the other posters here, I fully agree shell is a mess and that the basic root cause of the mess is that you cannot straddle interactive use and programmatic use with one language. But something a bit smaller needs to be bitten off.

I'd also suggest that most shells have already got the interactive use case sufficiently covered and that the low-hanging fruit is in the command case. Perl is perhaps the best language that functions as a programming-shell replacement, but using it for a while reveals it still has many significant issues in that use case. Even with modules, it pipes surprisingly poorly. It tends to make stream processing harder than it needs to be, unless "one line at a time" works for you. It's got a lot of nasty syntax that is arguably optimized for big-program use cases and doesn't make sense when replacing shell scripts. When used for shell programming, it basically shares C's abject failures when it comes to error handling, ranging from bizarrely difficult (I have to right-shift the result of system to get the actual exit code?) to almost-impossible-to-remember (backticks) to get errors properly handled. It has no concurrency story or any particular "run on multiple targets" ability (though I confess I don't know what that looks like necessarily; in my head it rather becomes puppet or ansible fairly quickly).

There's room here, but it's going to be an uphill battle.


> [...] shell is a mess and [...] the basic root cause of the mess is that you can not straddle interactive use and programmatic use with one language.

Note that some of the world took this notion on board long ago. About a decade ago, Debian Linux and Ubuntu Linux swapped out /bin/sh so that it was no longer the Bourne Again shell. Nowadays, as a result of this, one regularly finds Linux systems where programmatic shell scripts are interpreted by something like the Debian Almquist or Debian Policy-Ordinary (posh) shells and interactive login session shell work is the domain of the Bourne Again, Korn, Z, or other shells.

And of course, having a different "better" shell for interactive use was the reported rationale for the C shell.


But why nodejs?? There are lots of better languages to base this off of.


Because lol, standards.


[WARNING, RANTY]

This is a blue-sky project. Completely new, with no legacy dependencies.

AND YET THE DEVELOPER CHOSE TO WRITE IT IN C. WHY, FOR THE LOVE OF GOD, WHY?

Could someone explain this to me? Is it developer hubris, believing in one's own infallibility that a single exploitable stack frame or buffer overflow couldn't possibly happen "on my watch"?

If you need a native binary, your options are endless. Rust, Go, Haskell, hell, even C++ with smart pointers and runtime bounds-checking would be a step up.

Please, someone educate me on why people still choose to write inevitably-vulnerable software in 2016, when there is no legacy reason to do so.


Do you think it's a bad idea that git and the Linux kernel are written in C? Is the possibility of a buffer overflow important in a program that's not processing input from random people on the internet?


No, because they are both legacy software. I understand momentum in codebases - that's why I'm reserving my vitriol for fresh blue-sky projects like this one.

As for exploitability, privilege escalation and shellcode injection are still very much a thing, internet-facing or not.


Git started in 2005. I'm sure Torvalds knew all about buffer overflows then, but decided to use C. You seem to think this is a crazy decision.


> Git started in 2005. I'm sure Torvalds knew all about buffer overflows then, but decided to use C. You seem to think this is a crazy decision.

Well, is it? Will it be if Linus isn't maintaining it?


Linus Torvalds has not been the git maintainer for years.


I wonder if git would be better served by a more modern language.


I'm sure you've heard of the Shellshock bug.


Shellshock is a flaw in bash parsing and has nothing to do with the fact that bash happens to be written in C. Also, it's only a problem for untrusted input sent to bash. Typically, from the internet.


See the full answer here: https://github.com/ilyash/ngs/issues/3


What is this intended to fix?

I don't think the solution to anything is to make the shell more user-friendly. The only solution is to make the graphical user interface more user-friendly and featureful.

The only time it should be acceptable to be forced to use a shell, in my opinion, is if you are swapping out your desktop environment.

Until that happens, Windows and OS X will rule the desktop and PC market.

I'm fairly certain that Linux-based distributions will win out in the end if we can overcome this clingy gravitation to a command line. It's already happening. Android is winning against all of the major phone platforms, and it's not because of its award-winning terminal emulator.


> The only solution is to make the graphical user-interface more user friendly and feature full.

There's only one feature that matters here, really: composability. Provide an environment in which I can compose simple tools into complicated workflows I can fling data through, and we might have a winner.

> The only time it should be acceptable to be forced to use a shell, in my opinion, is if you are swapping out your desktop environment.

Why should that be true? It's not at all obvious to me. I have yet to see a GUI that lets you express something like `grep '^127' /etc/hosts | awk '{print $2}'` in any sane fashion. The closest you get is something like LabView, but that's counted out here because it's a general purpose(ish) language.

If your response is that this type of task is artificially skewed towards a text-based, shell-centred environment, then you don't have the type of problem that this shell is intended to fix.


> I have yet to see a GUI that lets you express something like `grep '^127' /etc/hosts | awk '{print $2}'` in any sane fashion.

I think we are in a very interesting time in computer science history where we have a huge slew of technology that solves a large set of problems that no one wants to exploit.

I could easily see an "assistant" type window being created in a desktop UI. Since we don't need a "windows" key in the linux world, I'd assume that would be used.

My ideal system would allow me to go into a text document, click the windows key, and say "grab me every line from /etc/hosts that starts with 127"

When the job completed, I'd see a check mark or something, and the result would either be pasted at my cursor or, if my cursor wasn't in a text box, copied into my clipboard or something.

While this might sound like magical science fiction, it isn't. The neural network, NLP, and HCI fields are doing some amazing things. The only problem is these are harder to implement than a simple terminal and pipe.

That is a sane way of interacting with a system that is user friendly.

In the same way, it could also handle other complex things, e.g.: "open my browser", "open the Internet", "list all the things connecting to the Internet", "open the last document I was editing in my text editor".

This is doable. Extremely difficult to implement well, but doable.


I think you might have missed the subtlety here. The interesting bit isn't `grep '^127' /etc/hosts`. It's `|`. There are umpteen ways to launch individual applications, but nobody's yet come up with quite such a flexible, simple and expressive way of getting them to communicate as streams of text.


... although that is a canonical Useless Use of grep. (-:

* http://porkmail.org/era/unix/award.html#grep
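
That is, the grep folds into the awk invocation:

    # one process instead of two
    awk '/^127/ {print $2}' /etc/hosts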



