Sometimes it seems like I'm the only person that doesn't feel like they need "a better shell". This topic comes up a lot on HN.
Maybe if a better shell came along I'd be surprised by what I'd been missing. Perhaps I'm just in the dark, or not imaginative enough to conceive of what one would look like.
With that said, whenever the discussion comes up, it seems to revolve around user-specific workflow cases and "do what I want you to do, not what I tell you to do" design patterns that would (from my perspective at least) be hard to get right for everyone. As a result, we now have lots of different shells that make different trade-offs and take different design approaches. That's great, right?
I don't know -- I'm rambling, I suppose. I just don't ever really feel like Bash (or fish/zsh/whatever) is holding me back or making my work more difficult than it needs to be. We can argue about the esotericisms of shell scripting and how it's silly to emulate old terminal escape sequences in virtual terminals, but I feel like it misses the point.
I don't know, I'm certainly willing to change my mind here, it's just that for all of the problems I have day to day in sysadmin/programming job functions, the shell is the least of my worries most of the time.
Fish shell's joke tagline is "Finally, a command line shell for the 90s".
I believe that the developer experience for all sorts of tooling has improved dramatically over the last 15 years or so, coinciding with GitHub's rise in popularity.
e.g. the plugin ecosystems for Vim and Emacs have improved, to say nothing of options like VSCode. Tools like Vagrant made it easier to work with VMs. Containerised workflows have similarly improved some cases. Languages like Go or Rust seem pretty nice in some cases. Etc.
I don’t even see the need for fish or zsh. Bash is fine for me, and I just don’t see the point. Sure, if you want to use something else, you should feel free to knock yourself out.
I do kinda wish I could filter out all the articles about other shells, though. They seem like a bit of wasted space to me.
Imo the OP isn't really talking about a better shell in the sense that most next-gen shells aim to be. Seems like what they're really asking for is newer standards for terminal emulation and terminal multiplexing that allow shells to handle all the configuration in one place and support better integration for web content and probably multimedia. I agree with the OP that that would be nice, but I agree with you that it seems less urgent than other aspects of my computing environment.
This, though, I don't understand:
> We can argue about the esotericisms of shell scripting [...] but I feel like it misses the point.
Shells are, among other things, programming languages. How can attentiveness to the suitability of that language possibly miss the point?
Thanks for the reply. I think you make a fair point regarding shells as programming languages.
I guess what I meant regarding missing the point is that, from my perspective, the shell was and is a basic way to interface with your computer system. Basic is a keyword here -- there are other ways to do it (native programs, syscalls, etc), but as a basic interface into operating and running programs on a computer, I think most shells do a pretty good job.
As I said in my original comment, maybe they could be doing a better job and I'm just not seeing it, but this is in essence my thought. If Bash is too messy, there is a wealth of other highly portable and maintainable languages for cross-platform scripting. Python comes to mind, as does Rust, and even older things such as Perl.
I'm rambling a bit, I'm not sure my thoughts on the matter are that coherent. I guess in summation I'd just say: I think it's fine if shells are shells, and if they can't be stretched to accommodate hyper-specific workflows and patterns, maybe consider writing it in another language? Do shell languages have to be as capable as general-purpose programming languages, and if so, why?
> I'm rambling a bit, I'm not sure my thoughts on the matter are that coherent. [...] Do shell languages have to be as capable as general-purpose programming languages, and if so, why?
I think your thoughts here are definitely coherent, and in fact you're getting to the heart of the issue. This exact question is pretty central in motivating many next-generation shells. I'll refer to the documentation of a few of them below. Feel free to follow the associated link to tug on any of the threads and see if the overall argument holds in your opinion.
> Python and Ruby aren't good shell replacements in general. Shell is a domain-specific language for dealing with concurrent processes and the file system. Python and Ruby have too much abstraction over these concepts, sometimes in the name of portability (e.g. to Windows). They hide what's really going on.
> Traditional shells use strings for all kinds of data. They can be stored in variables, used as function arguments, written to output and read from input. Strings are very simple to use, but they fall short if your data has an inherent structure. A common solution is using pseudo-structures of “each line representing a record, and each (whitespace-separated) field represents a property”, which is fine as long as your data do not contain whitespaces. If they do, you will quickly run into problems with escaping and quotation and find yourself doing black magics with strings. […] Elvish offers first-class support for data structures such as lists and maps.
> bash does not meet any modern expectations for syntax, error handling nor has ability to work with structured data (beyond arrays and associative arrays which can not be nested). Let it go. You are not usually coding in assembly, FORTRAN, C, or C++, do you? They just don’t match the typical Ops tasks. Don’t make your life harder than it should be. Let it go. (Let’s not make it a blanket statement. Use your own judgement when to make an exception).
>
> Python along with many other languages are general purpose programming languages which were not intended to solve specifically Ops problems. The consequence is longer and less readable scripts when dealing with files or running external programs, which are both pretty common for Ops. For example, try to check every status code of every program you run, see how your code looks like. Sure you can import 3rd party library for that. Is that as convenient as having automatic checking by default + list of known programs which don’t return zero + convenient syntax for specifying/overriding expected exit code? I guess not.
For me, the great thing about a shell language is that it's a programming language I get to _live_ in. Scripts can just grow organically from commands I chain together at my shell prompt, including simple loops and blocks and stuff. The better I get at the programming language, the better I get at navigating my computer and performing routine tasks. The better I get at navigating my computer and performing routine tasks, the better I get at the programming language. This is a really wonderful kind of thing imo, but its usefulness is undermined when one dimension of the shell (interactivity or programming capability) holds back the combination of the two.
The idea with these new shells is to try to find a way to enhance the programming capabilities of shells without making them any less convenient for navigating the filesystem and performing simple tasks with external programs. There definitely are shells that (imo) have tried to improve programmability (usually by embedding in a full-fledged programming language) in a way that undermines the simplicity and freeform character of shell languages, for example Rush (embedded in Ruby), Ammonite (embedded in Scala) and Xonsh (embedded in Python).
But other next-generation shells, mostly following the examples of PowerShell and Fish (and lots of programming languages), (imo) do a remarkable job of improving programmability without making simple, everyday shell navigation workflows feel any weirder or more laborious. In particular, Elvish, Nushell, and Oil Shell strike me as very thoughtful efforts whose very small teams of contributors have been hacking away at them for quite a long time.
One important thing to keep in mind for this particular breed of next-gen shells is that their novelty is mostly in the synthesis of a collection of minor and conservative innovations— things that we've seen in some shells (like PowerShell) and some programming languages (especially Lisps) for a long time. Each of these shells is a substantial leap forward beyond shells like Bash only because the language designs of shells like Bash have been largely frozen for decades. For a concrete example of this perspective:
> Nu takes cues from a lot of familiar territory: traditional shells like bash, advanced shells like PowerShell, functional programming, systems programming, and more. But rather than trying to be the jack of all trades, Nu focuses its energy on doing a few things well:
>
> * Create a flexible cross-platform shell with a modern feel
> * Allow you to mix and match commandline applications with a shell that understands the structure of your data
> * Have the level of UX polish that modern CLI apps provide
That phrase at the end of the second bullet point is key: ‘a shell that understands the structure of your data’. We can see a clear call for shells like that in the growing crop of fancy new CLI tools designed for chopping up, querying, and modifying ‘structured’ (not merely textual or stringly-typed) data. Tools like that frequently show up here on HN: jq for JSON, yq for YAML, xsv for CSV, etc. Developers and sysadmins want to be able to pull in structured data from whatever source and query it and manipulate it right there inside their shells. But each format gets its own utilities for that kind of querying and manipulation, and they don't necessarily know how to talk to each other. The pipelines we use to plug various tools like this into each other, meanwhile, only know about raw text/byte streams. The natural thing to want, when you're faced with this, is to be able to just ingest data of all of these forms into variables in your shell which retain that structure, and query them in a uniform way (that your shell may even be able to assist you with as you type, with some static checking!).
The Elvish shell has a nice example of using this to query the latest Elvish issues from GitHub on their website:
    curl -s https://api.github.com/repos/elves/elvish/issues |
      from-json | all (one) |
      each [issue]{ echo $issue[number]: $issue[title] } |
      head -n 11
which prints, on my system right now:
    1396: Exception control: “try” should rethrow flow control exceptions
    1391: Document forking behaviour
    1385: Feature request: command to find location of rc.elv file
    1381: `use edit` creates a crash situation for elvish
    1380: Issues running `wal -i`
    1377: exit builtin does not trigger program's defer statements
    1376: buildin cmd source !
    1374: Feed stdin to all code blocks in run-parallel
    1373: Recent performance/speed benchmarks?
    1372: Octal Format Specifier
    1371: "autoview", tables, new data formats...
Imo that's pretty neat and concise! Now that JSON is increasingly a de facto standard interchange format, not just for web apps but often (optionally) for CLI programs, I think having idioms like this in your shell makes a ton of sense. And we don't have to give up the convenience of a simple, fast, filesystem-oriented, interactive shell experience to get it!
Wow. Thanks for all of this info and for your very interesting perspective. I have a passing familiarity from HN and elsewhere with some of these new shells, but am not well informed enough to really comment on their usefulness/etc. With that said, you've given an excellent primer in your comment -- thanks very much!
A few things you said stuck out to me:
> For me, the great thing about a shell language is that it's a programming language I get to _live_ in.
> The idea with these new shells is to try to find a way to enhance the programming capabilities of shells without making them any less convenient for navigating the filesystem and performing simple tasks with external programs.
Generally, I agree, and I actually use `eshell` in Emacs for this kind of thing a lot. Being in a pseudo-shell Lisp layer lets me arbitrarily manipulate text I get back from the command line with elisp and allows for some really flexible workflows. With that said, eshell does of course have many trade-offs and limitations, which I won't get into here.
I think where these things always fall apart for me, though, is in the Elvish shell example you provided. To me, that is not functionally better than using Bash and jq, it's just different. Maybe it's better, I'm not sure, but my first impression is that it's neither more concise nor more readable, it's just different syntax.
I think there is a credible argument to be made for "batteries included" type shells, with something like native json parsing, but at what point does it then tip into being like Powershell or Python, where once again you've gotten away from the native/accessible experience because you need to support `n` kinds of structured data inputs/outputs on `n` platforms?
As I was reading your comment, the thought that occurred to me was: why not just make a Python library that can be loaded into the REPL and abstracts over some of the more cumbersome parts of interacting with the OS/filesystem, in a shell-like way? That seems to be the best of both worlds, and from what I gather that seems like what Rush and Xonsh are doing? I need to look into them further.
> Developers and sysadmins want to be able to pull in structured data from whatever source and query it and manipulate it right there inside their shells. But each format gets its own utilities for that kind of querying and manipulation, and they don't necessarily know how to talk to each other.
I agree re: structured data manipulation, but I think the solution that has naturally emerged exists for a reason. I can use pipes, and programs like jq to push and pull data into and out of whatever format I need, and each layer of that ecosystem can be maintained in parallel, adapting to changes in the overall landscape. In a sentence, it's the core of the Unix philosophy. One thing and one thing well and all that.
I dunno. I've thought a lot about it while typing this comment out. I think it sounds like I'm disagreeing with you, but I'm not. I don't even really think we're debating. I think these new ideas for shells are good things, and I will definitely investigate them more and see if they make my life easier. I just can't seem to shake this feeling that we can't have our cake and eat it too. Maybe I'm looking at it the wrong way though.
Maybe in 20 years time shells like Oil and Elvish will be the norm, and we'll be complaining about how they don't handle quantum data structures well without lots of pipes and fd's :D
Either way, this has been a very interesting digression. Thanks again for your thoughtful comment and insight.
> Generally, I agree, and I actually use `eshell` in Emacs for this kind of thing a lot. Being in a pseudo-shell Lisp layer lets me arbitrarily manipulate text I get back from the command line with elisp and allows for some really flexible workflows.
FWIW, Xiaq at least (initial author and lead developer of Elvish) envisions the terminal of the future as somewhat Emacs-like (this I recall from conversation— maybe you'd enjoy dropping by one of Elvish's online chat thingies and asking what folks think of eshell and Emacs as a model for rich, programmable interactivity with text), and looks to functional programming languages (especially Lisps and Schemes) for elements of Elvish's language design. (For example: its approach to numerical types is based on Scheme's; its arithmetic operators use prefix notation and parentheses just like in any Lisp; and lists and maps are immutable, like in Clojure or most functional programming languages of the ML heritage.)
> To me, [using a shell to ingest and manipulate structured data] is not functionally better than using Bash and jq, it's just different. Maybe it's better, I'm not sure, but my first impression is that it's neither more concise nor more readable, it's just different syntax.
One way that it's better is that since your shell is handling the data, it can offer you tab completion, syntax highlighting, and previews of your manipulation and querying of the data, whereas with jq and bash, your jq query is going to be enclosed in quotation marks (probably single quotes). It's going to contain pipes that don't mean the same thing as your shell's pipes. It's also kind of a pain to pull things out of the ‘middle’ of a jq pipeline, since you have to either write a block of code or split your pipeline into multiple jq invocations and do weird things with tee and maybe open extra fds, idk.
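For comparison, here's a rough bash + jq equivalent of the Elvish example above (a sketch); note that the whole query is one opaque single-quoted string as far as the shell is concerned:

    curl -s https://api.github.com/repos/elves/elvish/issues |
      jq -r '.[] | "\(.number): \(.title)"' |
      head -n 11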
That said, jq is still a great tool, and stuff like jiq can be helpful for letting you compose jq pipelines with an interactive experience.
> [A]t what point does it then tip into being like Powershell or Python, where once again you've gotten away from the native/accessible experience because you need to support `n` kinds of structured data inputs/outputs on `n` platforms?
For my part, I think the way PowerShell handles this is actually pretty nice. But basically what you want is a large library ecosystem that includes wrappers for the external tools you might want to use. In easy cases, this will mostly be adding like a `--json` flag or something. This might sound like a lot, but I think it's basically comparable to the existing way we distribute completions for popular command-line programs. But yes, this does push you to prefer a ‘pure’ approach to some extent, or you end up munging text outputs into structured ones yourself, although for serious scripting cases you still end up doing that kind of output processing even in Bash— you just don't have very nice types to put your data into when you're done.
> As I was reading your comment, the thought that occurred to me was: why not just make a Python library that can be loaded into the REPL and abstracts over some of the more cumbersome parts of interacting with the OS/filesystem, in a shell-like way? That seems to be the best of both worlds, and from what I gather that seems like what Rush and Xonsh are doing? I need to look into them further.
My impression is that Xonsh is the most successful of that type of new shell, and that its regular users really love it, so I'd definitely check that one out first, if you like Python at all. Oil also seems aimed at Python fans, being written in Python and having some syntax inspired by it. Maybe those two would be good ones for you to try and compare to get a sense of the two approaches to offering shells with better programmability!
Well, nearly nobody has experience with a REALLY good shell. It's like using C all your life and never knowing what a truly good language is.
I worked in FoxPro for DOS back in the day.
That was another dimension. It was close to Jupyter + IDE + database builder + form builder + code editor + ... and the language for making stuff was NOT a joke.
That is not the only vision from the past that has been forgotten (or rebuilt as a massive micro-service/over-engineered/multi-machine version).
So, to me, we only use the shell because nobody is more stubborn than the people who live in the Unix/C world, who for decades have forced the use of the most arcane and inferior toolchain possible, and who always shut down any attempt at improvement with the excuse "but it works for me".
Have you tried magit? Not a serious user myself, but for many, many folks it is way better than using git on the command line (or via any GUI). When he says he'd like that type of interface for all command line utilities (like tar), I can appreciate what he's referring to. Much more discoverable than reading man pages.
I do use magit, daily, and I really enjoy it. I think what I don't understand about the OP's point is: why is that the shell's problem? Tar is a utility that works great, and is typically pretty easy to use on the CLI. If, however, I don't particularly like it, or would rather use a GUI, that's totally understandable.
The OP could even implement one in elisp for emacs.
I guess my thought is: why is the burden on the author of this magical new shell to write a piece of software that can GUIfy tar, rather than on the user of tar who doesn't like its ergonomics? Maybe that's not what the author is saying, but to me it just seems like something not really related to the shell, or to a discussion of shells.
If you like using GUIs, find a GUI for tar, or write one. I don't say that to be flippant, but more to reiterate my misunderstanding of how this relates to a discussion on shells.
> I think what I don't understand about the OP's point is: why is that the shell's problem?
I think to understand his point, substitute "universal text interface" for "shell".
What he really is describing is something very Emacs-like, but with a better config than you normally find in Emacs. He'd like most command line utilities to automatically have a magit-like interface through some universal spec/language (i.e. so that each utility doesn't have to create the interface independently).
Tar is one of those utilities that many people, including me, can never remember the options. I can relate to him on that one. If I had a magit like interface for tar, I'd be really happy.
Spacemacs[1] is a particularly nifty solution to the emacs defaults being... special. There are other bespoke config systems out there, but it really impresses me. The maintainers take performance and, more importantly to me, interface consistency and discoverability seriously. It supports both Vi-style bindings as well as the traditional Emacs ones. Combined with a distribution like Emacs Plus[2] built with the --with-native-comp option, it's a very nice way to interact with the system.
I'm in the same boat. Even for nontrivial shell scripts, just having shellcheck around, assuming bash availability everywhere, and treating `set -euo pipefail` (optionally the -x bit too for some scripts) as a required header means I'm very happy writing and reading shell scripts.
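For reference, a minimal sketch of that header:

    #!/usr/bin/env bash
    # -e: exit on any error; -u: error on unset variables;
    # -o pipefail: a pipeline fails if any stage fails, not just the last
    set -euo pipefail
    # set -x   # uncomment to trace each command while debugging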
The only thing I've seen after well over a decade of Linux usage on desktops and servers that made me think "this shouldn't have been a shell script" was xdg-open. But even then, when things went wrong I was glad it was a script I could easily tweak and debug, at least.
"Sometimes I feel like I'm the only person that doesn't feel like they need "a better shell"."
You are not the only person. There are HN readers who are not programming language wonks, who actually enjoy using the shell.
Whenever someone today, usually one of said programming language wonks, tries to "improve" on existing shells, it never manages to improve over the relative speed/size/simplicity I am getting with ash already. It never feels like it's being written by someone who likes the shell; it feels like some programming language wonk who dislikes the shell (and thus wants to "improve" it).
Give me a shell that's even simpler, smaller and faster than ash. That would be an improvement. Early-era and deliberately "reduced" shells I have tried in place of ash do not have command history. I have tried to learn to work more "deterministically", without need for history, but have never succeeded. I think it can be done. (So does the maintainer of the Heirloom Project.)^1 In practice I often edit command lines in vi, using the fc built-in. Why can't I do all editing in vi or ed? I suspect it's the addictiveness of the "REPL" workflow.
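For anyone unfamiliar, POSIX `fc` works roughly like this (the edited line is executed when you save and quit):

    $ fc          # open the previous command in $FCEDIT (or $EDITOR), e.g. vi
    $ fc -l -5    # just list the last few history entries, no editing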
For whatever reason, I am more motivated to try shells with reduced functionality than I am ones with increased functionality. For any repeated task, taking a long view, non-interactive use is always faster than interactive use, so although a nifty feature like tcsh-style auto-completion might speed up interactive use, a script that automates navigation is going to save time in the long-term.
1. "[...] interactive use. The Bourne shell provides job control if it is invoked as jsh and runs on a terminal. Of course, it lacks fancy features such as a command history, command line completion, etc. But working with these features tends to distract the user's attention. After a familiarization phase, use of the Bourne shell can lead to a more even-tempered, concentrated working style. Give it a try. Seriously."
I think there's stuff that could definitely be an improvement.
One that I take issue with is the "everything is a string" mentality of bash. It makes working with structured data harder than necessary, and a ton of stuff I do with my shell involves structured data like JSON. A sane array and hashmap implementation would be nice.
Another thing, which I picked up from elvish, is the concept of separating the textual output of a function from the value it returns. Bash lacks that distinction; the textual output is the value, which makes it annoying to do things like log from a function but also return a value.
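A tiny bash sketch of the annoyance (get_name is a made-up function, just to show the shape):

    # stdout IS the return value, so any logging inside the function
    # has to be routed to stderr or it pollutes the "returned" data
    get_name() {
      echo "fetching name..." >&2   # log line: stderr
      echo "alice"                  # "return value": stdout
    }
    name=$(get_name)   # command substitution captures stdout only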
I really like Elvish. It's similar enough to bash that I don't feel like the floor has been pulled out from under me, but it makes it dead simple to do something like pull data from a REST API without me having to pull out Python.
The other big pattern I've seen is object-oriented shells like Powershell and Nushell. They seem neat, but too complicated for me to manage in my head while trying to get stuff done. They're a little further towards "programming language" on the spectrum from shell to programming language.
Same. Every time I hear someone complain about, for example, bash, it's always because they put no effort into learning how to use it / what it can do / best practices. They just get mad that their typo on a filename isn't auto-corrected, and completely ignore the fact that you can tab to auto-complete and avoid that problem in the first place.
Everyone wants to rush to fix these perceived problems without realizing how some sort of fully automated AI guessing ability about which file to rm -rf is not a good idea, and is not what anybody actually wants.
Just fix bash so it saves every line of your history ever, no exceptions and no deletions. (Except for the 'leading space' trick when security considerations are in play.)
It's insane that this isn't the default behavior and can't even be configured in 2021!
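For what it's worth, the closest approximation with stock bash is something like this in ~/.bashrc, and it's still not a guarantee (concurrent sessions race with each other, and the file can always be deleted):

    HISTSIZE=-1                  # unlimited in-memory history (bash 4.3+)
    HISTFILESIZE=-1              # unlimited history file
    HISTCONTROL=ignorespace      # keep the leading-space escape hatch
    shopt -s histappend          # append on exit instead of overwriting
    PROMPT_COMMAND='history -a'  # flush each command to disk as it runs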
Doesn't have all the author's desired features out of the box, but recently I was forced into using PowerShell (Core edition, which is OSS and runs as `pwsh`, as opposed to the Windows-only Windows PowerShell) for a work project and you know what? It's surprisingly capable! It's available from powershell/powershell on GitHub as a built binary for multiple systems and architectures including macOS and Linux, or you could compile it yourself since it's open source. The PowerShell Gallery is the marketplace this guy was talking about, so there's that. And you can distribute or consume fully encapsulated modules as well.

The scripting language itself is fully object-oriented, and while it takes some getting used to, you quickly start seeing real value in your day-to-day from the conventions and patterns established by the community over the years. I haven't switched entirely from zsh yet, but I'm slowly investigating exactly how to do that as time allows. It's a bit more verbose than typical Unix shells so you type a little more, but that's a small price to pay for everything it brings to the table. If you've never tried it (the OSS version, aka "core" edition) I highly recommend it. WAY easier to learn for first-timers than bash!
It builds on top of Xerox PARC ideas regarding REPL and OS integration.
UNIX systems could offer similar integration via IPC mechanisms, KParts- and D-Bus-like communication, exposing shared libraries directly to the shell, and structured data.
Where it falls down is the lack of standardization across the landscape that would make everything actually work together.
I've recently been thrust into using PowerShell at work too, and it's fine. I still wouldn't choose it, given the option. I get why people like the object-oriented nature of it, but I also don't think I get much value from it. 9 times out of 10, if I need a shell script, it's just stringing commands together. If I'm doing something complex enough to really benefit from "everything is an object", then why not just use a more capable language like Python?
There is a trend of people making increasingly complex and expressive shell languages, but I don't really understand what problem they are trying to solve. Maybe what we really need is a less complex shell so people are less tempted to write complex logic with it.
> If I'm doing something complex enough to really benefit from "everything is an object", then why not just use a more capable language like Python?
I've seen this line of reasoning mentioned a lot but never justified.
A few reasons to stick with Powershell over Python:
* The team might not be familiar with Python (and the benefits of learning might be outweighed by the learning curve, shallow as it might be)
* It's hard to judge where the cutover point is (namely, at what point does a script benefit from being written in Python?)
* A lot (some?) of what is out of the box in PS requires libraries in Python, which require installation, which cannot happen during the script (I think, at least; I've not done that much Python)
I did not use PowerShell, but recently I was using a CLI program that output a tabular format, and it was not something well defined that I could easily (with my very limited shell experience) parse. Maybe it was the program's fault for not offering some CSV output format, or maybe there is a natural way of handling it, but it is hard to google for.
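For what it's worth, the traditional answer for whitespace-aligned tables is awk, which splits each line on runs of whitespace; e.g. for `ps aux` output:

    # print user, pid and command, skipping the header row
    ps aux | awk 'NR > 1 { print $1, $2, $11 }'

Of course this falls apart as soon as a field itself contains spaces, which is exactly the problem the structured-data shells in this thread are trying to solve.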
One thing about PowerShell that annoys me is the verbosity and the use of capital letters in commands. While it is fine for scripts that you write once, it feels suboptimal for daily use in a terminal, where I prefer the fewest keystrokes possible. Also, the slow startup often makes me choose an alternative shell on Windows (cmd, bash).
You don't have to type capitals, and you have aliases just as terse and understandable (?) as standard Unix commands: gci, iwr, ri, etc. So you don't have to type the long names if you don't want to.
But if you forget, it's nice to have them. And for commands at the margin of your memory they're entirely discoverable (including the aliases).
It's not necessarily obvious when you start, but it's actually super useful to be able to use and re-use all your skills for selecting and filtering data. Being able to 'get-alias | where definition -like *item' is a great way to discover the terse aliases for file operations, for example.
I know that there are built-in workarounds, i.e. default aliases, and that the case can be omitted. But it creates another set of problems related to documentation and code samples/conventions. The object passing is great, but it is not enough for me, except for writing scripts. I guess there are ways to make it work fine as an interactive shell, but that is probably extra customization, and in some administrative environments it may not be possible.
Fish has taught me that I should strongly prefer shells with batteries included when it comes to interactivity, and Elvish has beautifully demonstrated how that can be taken even further with things like its built-in navigation mode. PowerShell's big weakness, imo, is that it lacks that emphasis on interactivity OOTB, and configuring it back in often results in astonishingly bad performance.
That said, the language is way better than any traditional Unix shell, and the ecosystem of libraries and scripts is insanely comprehensive. Together, these factors make for an absolutely top-notch scripting experience.
This is definitely one of the biggest sources of inspiration for most next-gen Unix shells afaict.
dir works because it's actually an alias of Get-ChildItem, which returns a list of objects (DirectoryInfo or FileInfo). My example was specifically a list of paths that are not objects but just strings.
Okay. How about "echo"? (Aliased to Write-Output, which I assume is strings)
    C:\temp> dir file3

        Directory: C:\temp

    Mode                LastWriteTime     Length Name
    ----                -------------     ------ ----
    -a----         9/10/2021  1:03 PM          0 file3

    C:\temp> echo "file3" | Remove-Item
    C:\temp> dir file3
    dir : Cannot find path 'C:\temp\file3' because it does not exist.
Edit: I see it now. It works with one string, but if I Write-Output "file1`r`nfile2" | Remove-Item it no longer works because it takes the whole string as an object. I'd have to pipe an array of strings rather than cr-lf separated strings.
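A sketch of both forms:

    # an array of strings: each element is piped as its own item
    "file1", "file2" | Remove-Item

    # one CRLF-joined string is a single item; split it first
    "file1`r`nfile2" -split "`r`n" | Remove-Item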
I've found that as you learn, the verbosity kind of fades away, because you adapt your aliases and patterns to your work habits. The small amount of remaining "extra" verbosity that remains is very worth it, because it gives you more discoverability and more consistent ways of handling data in exchange.
I would probably rephrase your statement about object-orientation: I don't think Powershell is particularly object-oriented, but it integrates nicely with object-oriented .NET. What I mean is: when writing extensions to Powershell in Powershell, you may use objects incidentally as containers for structured data, or occasionally to access details in .NET, but you are not creating objects or classes--you're working with functions in a straightforward way, and it's easy to make your functions behave like (take the same options in the same way as) "standard" commands.
I switched (completely) a couple of years ago. The main "use case" that is Powershell's sweet spot for me is handling structured data. A lot of my operations have steps like:
* Do some data-wrangling to calculate a URL and request options for some data repository or integrated system API
* Perform a network call or look something up in a big file of structured data
* Munge and wrangle that data to extract information or calculate summaries
* Perform more network calls based on that data and do further munging and calculating
This is usually discovery-oriented, interactive, ad-hoc lightweight integration, and it's in an environment where I'm doing something with this data (like figuring out a machine list from a couple of CMDBs, cross-referencing that with data from the monitoring system and maybe some lookups for system roles, and then performing commands on them). It's not a great fit for a "real" scripting language because the stuff about processing the data and doing the network calls is intermixed with lots of stuff that the "real" scripting language is not very good at, like interactively discovering what's in the data, invoking a bunch of OS commands and putting things together left-to-right in a pipeline you craft gradually. As your activity matures (you find yourself doing the same thing over and over), you can start factoring your code into reusable units, but not all of it, and Powershell is just super excellent at tasks at this boundary, because it's just fine as a programming language in its own right (much more capable than bash, and with goodies that you reference, like having dependency management) while being at least as good an interactive shell as bash.
As a "Unix guy", I favor a lot of Unix-isms so of course I customize my shell heavily. I can do more powerful things in my prompt much more easily than with bash. I like the more familiar bash-style tab-completion than Powershell's, so I use the PSReadLine module settings to enable that. I use vi keys for command-line editing (with remapped cursor motion keys) and getting this done is easier to accomplish than doing so in bash. I don't like some of Powershell core's default aliases (like curl--I want curl to invoke the curl command, and I'll use iwr instead of curl when I want Invoke-WebRequest), so I disable them.
I do all of my professional work on MacOS and Linux and Powershell is a good shell on both, and almost all of my experience with Powershell is as a Unix shell. I'm not much of a fan of commitment to the so-called "Unix philosophy", but I do think that Powershell reflects this philosophy better than a typical Unix shell and Unix commands. For example, once you learn how to sort the output of ls according to its standard options, all you've learned is something about the ls command. It doesn't help you in sorting a process list from ps. It also doesn't help you in sorting files by something other than size (-S), mtime (-t), atime (-u sort of), "natural version in the name (what?)" (-v), or alphabetically by extension (-X). According to the "Unix philosophy" what is the ls command even doing parsing and sorting version numbers anyway? In Powershell, it's more natural to let Sort-Object sort things, which it can do by any of the item's fields, not just what the ls author provided an option for; and once you know you can do that, you can do it for processes in just the same way, or AWS instances or anything else. Boom. Unix philosophy. (And if you just like using ls, and piping it to sort, that works fine, too, so no custom is imposed on you). It's kind of beautiful.
Our views on what we want out of a shell are always based on the tasks we personally try to accomplish with it, and I don't think the author wants a "better shell" at all, but a better, more scriptable IDE. His "use cases" for the shell are all elementary file management and command invocation and he doesn't really seem clear on which component of his system the shell actually even is. Interestingly, there is a full TUI framework for Powershell and it's pretty cute! I didn't know about it before looking it up after reading this article.
It's designed to work with JSON, YAML, CSV, etc., so you don't need to remember jq syntax for JSON, awk for CSV, something else for S-expressions, and so on and so forth. It's the same commands (like grep) that work with all formats and are aware of the format.
It can work with tmux to do stuff like: if you hit `F1`, it will open up a pane to the right with the man page of the command you're typing. So it's been built around taking UI lessons from IDEs.
But above all else, it's a $SHELL, so it works fine with all your existing CLI core utils. Which is more than can be said for the likes of Powershell.
There is a learning curve though because it's not POSIX syntax. I've tried to keep some POSIX-isms where they've made sense but redesigned things where POSIX was a little counter-intuitive (though that's a subjective opinion).
I've been using it as my primary shell for about 4 years now.
> But above all else, it's a $SHELL, so it works fine with all your existing CLI core utils. Which is more than can be said for the likes of Powershell.
Can you expand on what you mean by this? I've been using Powershell as my primary shell for 2-3 years and this statement surprises me. There's no "core util" I can invoke that behaves differently or surprisingly in Powershell so I probably don't understand what you mean.
Maybe I'm wrong on this point then, but the few times I've tried PowerShell in the past I had issues piping commands together, because some commands expected text while others expected a .NET object, and it all felt terribly like working in Delphi again with the type system actively working against you (I'm exaggerating a little here -- I love Pascal -- but hopefully you get my point).
Don't get me wrong, I love strongly typed languages. But when you're using the CLI you want a language that is optimised for ease of working with CLI tools, and PowerShell just wasn't easy.
But as I said, I might have been doing something wrong. And if that's the case I'll make sure I don't repeat that comment in the future :)
Out of the box, Powershell comes with some aliases matching standard commands, like ls is an alias for Get-ChildItem. I think the intention is to make it more 'familiar' for Unix users on the "home" platform, where you don't usually have native Unix commands, but I found they just got in the way and I removed them (aliases like gci for Get-ChildItem still remain, so you don't need an ugly verbose command line, but you do need to learn some new commands). I'll wager that some of your experience was due to these aliases, which just get in the way trying to use it as a Unix shell.
So far I've actually been surprised at how infrequently I've encountered the problem you describe. I don't doubt that it's a real problem, but I've generally been pretty impressed with how well a list of lines, which come across a pipeline as an array of strings, mingles with objects. It could be that our "daily driver" tasks have different domains so this mismatch surfaces much more frequently for what you need to do.
I've had ideas for better shells for decades, mostly around handling structured data and easy network connections without destroying too much of what makes a shell a shell. So genuinely, respect to you for actually implementing along this path.
When I use PowerShell, I try to use 'pure' PowerShell, so I don't invoke any unwrapped DOS/cmd or Unix (Cygwin, gow, whatever) commands. (This is actually pretty easy to do, in that everything I've ever thought of doing can be done in pure PowerShell.)
The resulting experience is really nice, since everything returned to me is a nice PowerShell object. The cost, of course, is having to learn how to do everything anew.
Something that prevents me from adopting a lot of superior modern tools is that I need to SSH into a lot of servers and other machines that don't have the tools installed. I don't want to maintain the muscle memory and knowledge of 2 separate workflows & toolchains for the same tasks on different machines.
I'd like to see something that runs locally with my aliases, keyboard shortcuts, plugins, etc. and translates the commands to a remote backend, maybe similar to VS code's remote development.
True. And here is where what people think is a virtue of a standard Unix shell (zsh or bash, say) is actually not as good as it seems ("it's already everywhere"): because shells like this don't have dependency management, you have to do it all yourself. Either out of band (make sure curl and jq are installed; which awk is this? oh, and all the other little things that are buried in my scripts and functions but not declared in a standard way -- now I have a full-on configuration management problem to integrate, just to make your shell scripts, snippets, and functions work), or by bizarre horkiness in your scripts (pure-bash JSON libs, anyone?).
Interestingly, if all of your machines were to have Powershell core, you can actually do a lot of what you describe over SSH more elegantly with Powershell remoting.
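A sketch of what that looks like (server01 and admin are placeholders; assumes PowerShell 7+ on both ends with SSH remoting enabled):

    $s = New-PSSession -HostName server01 -UserName admin
    Invoke-Command -Session $s { gps | Sort-Object CPU -Descending | Select-Object -First 5 }
    Enter-PSSession $s   # drop into an interactive prompt on the remote machine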
This is what killed my experiment with an alternative shell: I spent a bunch of time SSH'd into other machines trying to do things, and anything useful I learned didn't translate, whereas a quality bash function can be piped into the environment.
Now my primary interest is essentially whether, through an SSH tunnel using minimal tools, I can pull remote resources back to my machine and use a real environment.
A lot of this has to do with the integration between the Terminal and the Shell. A project that came up recently that seems to take on a few of the issues is https://blog.warp.dev/how-warp-works. I've been meaning to write up a proper critique of this project at some point, but I keep getting side-tracked.
I also want a replacement for `fish`, however it's a pretty tall order to implement what I have in mind, and I got a bit stuck trying to find the right abstractions for background job management in Rust (there's a lot going on). But I genuinely believe a multi-language shell with POSIX support will finally allow us to move forward in the terminal environment. UI/UX issues like ctrl-c, window management, and everything else can be implemented as derivations from POSIX, or additions.
While somewhat tangential to the main thread of this post, I'll still leave my (incomplete) shell here for anyone who's interested. https://github.com/nixpulvis/oursh. The README has a decent description of features I want off the bat, and there's a bunch of design level issues in the tracker.
I'll never accept the death of the Terminal environment.
"#6: Leverage the cloud.
The basics of computing have changed since the advent of the CLI, and no change has been more profound than the rise of the internet. Once you start to think about it, there are a lot of leverage points: why is shell history limited by your local hard-drive? Why doesn’t your shell configuration follow you across machines? Why can’t you transfer a CLI session to the browser so you can keep working on the go? (Don't worry, though, Warp is local-first, and all cloud features will be opt-in. You can find out more in our privacy policy)."
I've felt for a long time that we need a better Shell, but I think the next evolution is combining the Terminal and Shell together.
I've never thought that what was good about the command line was that a terminal and shell are different. That's just an accident of history. In fact, this feels like a huge area where it's inferior.
I don't actually know anyone who uses plain TTYs for their shell. If you use a shell today, it's 99% running inside [iTerm2, terminal.app, GNOME Terminal, urxvt, alacritty, etc.] on top of a window system. It feels obvious to me that instead of trying to emulate a hardware terminal from the 70s, we should be building what is good about that interface into a GUI which makes managing processes, their outputs, and crafting the commands easier. Intelligent IDE-type auto-complete, powerful history, collapsing large program outputs, smooth scrolling of the session, tmux-style session management, built-in file browsing and previewing, dynamic/reactive prompts and status indicators.
Except for Warp, this feels like something few, if any, have ever really considered. I often wonder why.
> I've never thought that what was good about the command line was that a terminal and shell are different... I don't actually know anyone who uses plain TTYs for their shell. If you use a shell today, it's 99% running inside [iTerm2, terminal.app, GNOME Terminal, urxvt, alacritty, etc.] on top of a window system.
Interactive shells can be used without opening a graphical terminal. Simple examples are calling :shell from vim, or S in the `ranger` file manager, or entering a docker container, or using ssh.
Terminals can also be used without a shell. For example, I have keybindings for opening a terminal with a file manager, or with vim, or with vim with the clipboard contents and special vim keybindings to handle it, or with nothing on it (a process that sends a stop signal to itself). This last one is to get a new unused terminal window to which I can redirect stuff. For example, with `gdb -tty` I can interact with a program from one terminal, while having another terminal control it with gdb.
They really are 2 separate types of programs that are useful on their own, and their separation allows for more flexible handling, like letting separate programs take control of the terminal at different times, as happens with vim's :shell, etc.
The "shell" I'm suggesting still would run all those programs they just would be a direct child process of what would normally be just a terminal instead of a child of a bash process. It would absolutely need to have some kind of fallback mode when escape codes are used.
I am not sure how the gdb example works, but one thing I wanted was the ability to manage multiple programs running at the same time. Maybe it'd still be another shell instance running gdb attached to your original process. Maybe the shell could have a feature to execute a `gdb -tty` process with arguments pointing it at the pid. I think it's a solvable thing which would absolutely be different, but an improvement.
> The "shell" I'm suggesting still would run all those programs they just would be a direct child process of what would normally be just a terminal instead of a child of a bash process.
I think you misunderstood. As I understand it, if you run in the shell:
$ yourshell
it would open up a new terminal/shell combination, a new window.
In there, if you run vim, and then you `:set shell=yourshell`, then `:shell`, would that open up a new window or work like the shells of today and give a prompt in the same window? If you run `:!make`, would that open up a new window, print the output there, then close a millisecond later without giving the user a chance to review?
If you `usermod -s /bin/yourshell $your_user`, would logging into that computer via ssh cause it to try to open a new window locally and not provide a prompt to sshd like shells of today do?
If you run vim from an ssh session, then you do `:!make`, would the user see the output anywhere?
> I am not sure how the gdb example works
It redirects the std file descriptors of the child process it's debugging to the terminal specified, like how one would do with `< $empty_tty &> $empty_tty`. This is so that gdb can be controlled from the terminal it was launched from without the child process interfering in terminal input/output, and at the same time allowing one to interact with that child process. E.g. `gdb -tty $empty_tty htop` would have you control `gdb` from the terminal invoked from, and `htop` from `$empty_tty`.
> one thing I wanted was the ability to manage multiple programs running at the same time
Shells have mechanisms for that. Terminals too.
> Maybe the shell could have a feature to execute a `gdb -tty` process with arguments pointing it at the pid.
`gdb -tty` is an example. The idea has uses beyond gdb. If you're debugging a program, you can edit the source at a specific point to run a REPL and have its input and output come from the terminal specified. It could be an already established unused terminal to keep history in the scrollback buffer, or a new terminal. Whatever the case, a forced shell gets in the way.
More than anything, the core issue with the idea is that I see no benefit at all from combining the shell and terminal. That terminal/shell combination still needs to do what terminals of today do so that other programs can use it to interact with the user. I'm not even talking about escape sequences, just read and write. And if it's providing that, why would the shell need to be combined with it? If it can be separated, it's better separated. That's more in line with the Unix philosophy:
1. Write programs that do one thing and do it well.
2. Write programs to work together.
It also allows users to switch shells or terminals as they'd prefer. They don't need to consider giving up terminal features because they don't like the shell or vice versa.
The next evolution of the terminal (text array of characters with positioning drawing escape codes) was the graphics array of pixels with drawing API calls. The problem with graphics arrays is when you try to run them over a remote connection. For low bandwidth situations, you end up with hacks like downgrading to 8 bit color or disabling mouse-move events, both of which make some interfaces less usable. Then you also get the problem of scripting, where there isn’t a consistent way to deliver input to the applications. HTML and HTTP can be seen as yet another implementation of portable networkable user interfaces, and they are easier to script than graphical apps, but in the end, writing a web app or client is still a hundred times more effort than printing text on a terminal.
Many people have thought of many ideas more advanced than terminals, but the terminals persist due to their superiority at just being simple, flexible, and fast.
It is absolutely possible to “make a better terminal” though. But anyone who does has an uphill battle for adoption.
(context: I was bound and determined to make a better terminal 20 years ago, but eventually gave up on the idea after getting familiar with existing tech and deciding it was good enough for me. Same with Bash, though I still think it’s a horrible language and should be replaced eventually)
What I am mostly suggesting is iterative. We wouldn't switch to shipping graphics because as you say they're bandwidth intensive. RDP is not the right answer however useful it is for some things.
But a shell+terminal hybrid focused on running processes, managing outputs, and making it easier for the user to run programs could do a lot of that remotely with a helper program similar to how tmux and git binaries need to be on the remote system. Or how vscode runs a fairly beefy system in order to support remote coding that not only allows editing code remotely with auto complete, but lets you launch multiple terminals to run commands.
In a sense I'm really saying: we have like 80-90% of what a "modern" shell could look like in vs code remote... why can't we just make the shell like that?
One thing which I do not have a good solution for is input, because I don't know exactly how reliably a shell would know it's being asked for input by the application. The other thing being that I think this shell could support a legacy mode which is triggered by those escape codes, but most apps don't do that. As you said, they just print text because it's easy and fast. That's exactly what my Python scripts at work do, and the better shell I suggest could handle plain text pretty easily. The issue is things like Vim, Emacs, tmux, and other curses-like apps. But that could be a fallback rather than the happy path.
This person really just needs to lean into Emacs harder. The real strength of Emacs comes when you use it for everything, and so have a perfectly consistent, extensible, and discoverable user environment.
This may seem unrelated but I'm going with it; I did Emacs for a year or so and while I understood that you CAN go this way, I mean, "difficulty in getting there" matters.
"Do everything in Emacs" is an exceedingly short hop from "Maybe just write your own operating system."
It's possible to do "very extensible" and "easy to onboard" -- Obsidian.md is the "actually usable Org-mode" I've been looking for, YMMV.
Yeah, a shell is just interacting with a bunch of text, and once you've built up some familiarity with Emacs you can reuse everything you've learned (search, autocomplete, cut/paste, and once you have some elisp skills then basically anything).
One of my annoyances with VSCode, which I do think will one day eat the world, is that it still seems the basic level of abstraction is too high. So there's a separate terminal window, which I always have to tell to go away, and which has its own key bindings. What I want is just a text buffer running my shell, that behaves like every other text buffer. Same with the database clients available for VSCode - stop building UIs, just give me text with a process running in the background. The editor is already all the UI I need.
Can I get a parseable ls?

Can I get explicit output-as-json flags for all reasonable commands like ps, lsblk, ls, find, grep, etc?
Can I get a database of commands better than history that shows when they were run and error codes that resulted AND THE OUTPUT, and if it completed for multithreaded/background/etc.
The history-as-database is ultra-important for doing cli multiprocessing/async. Fire off command, and then you can get the output from the database to pipe to another command.
Can I get a reasonable metashell standard across all terminal apps for doing a lot of what .profile does but isn't tied to a specific file on a specific machine?
I think Powershell gets you part of the way down this road, at least on some of these features, without requiring you to figure out everything/adopt everything first. Out of the box it works fine as a Unix shell (you know, invokes commands, pipes output, etc.).
Everything's not rosy-hunky-dory; each of these has limitations, but:
> Can I get a parseable ls?
> Can I get explicit output-as-json flags for all reasonable commands like ps, lsblk, ls, find, grep, etc?
Better than "parseable", you get commands which just return the structured data, for ls (gci), and many other "foundation" commands like ps (gps), etc. I didn't actually think about a structured grep but maybe you wouldn't use it as much with stuff like ConvertFrom-Csv (so you have structured data to select rather than strings to "grep"). Anything you can get JSON or YAML or CSV output from you can just turn into structured data (ConvertFrom-Json) and handle in the same way. Any native Powershell implementation of anything will emit complex data in the same way (e.g. AWS's Powershell tools).
> Can I get a database of commands better than history that shows when they were run and error codes that resulted AND THE OUTPUT, and if it completed for multithreaded/background/etc.
Background job handling is a lot better with Powershell than with bash: it lets you address background jobs by id and get metadata about them, along with the output, via rcjb (Receive-Job). Your job list is manipulable with the same Powershell tools as everything else, so you can easily sort and select on things like status or the host on which a job executed.
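For example (the scriptblock here is arbitrary):

    $j = Start-Job { Get-ChildItem -Recurse $HOME -Filter *.log }
    Get-Job | Where-Object State -eq Running   # the job list is itself structured data
    Receive-Job $j -Wait                       # rcjb: collect the job's output objects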
> Can I get a reasonable metashell standard across all terminal apps for doing a lot of what .profile does but isn't tied to a specific file on a specific machine?
It's an interesting idea, hard for me to conceive. Do you mean shell and command configurations that go along with your shell session to other machines? Powershell remoting allows you to establish a session and then mix local and remote commands while retaining your setup and environment, so it may partly help. The gotcha: you need to enable the SSH configuration which allows this and (of course) have Powershell installed on the target systems.
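A minimal sketch of that (host and user names are placeholders; requires PowerShell plus SSH remoting enabled on the targets):

    $s = New-PSSession -HostName web1, web2 -UserName deploy
    Invoke-Command -Session $s { uname -a }   # results come back tagged with PSComputerName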
I'll expand this more. What I really want and have already partially built for some things is a very powerful metashell for delivering commands across multiple machines and composing them.
As referenced above, a database of commands, their exec status and output, and where each was executed would be central to this.
The ability to define sets of machines as clusters, and to run commands across them in parallel and collect the results, is also essential (rough sketch below).
The ability to pull, push, edit files remotely...
The ability to store access methods to deliver and return the commands: be it ssh, teleport, (ugh) telnet, kubectl, docker run, aws ssm, etc.
I've written about 20% of that for my current job managing various NoSQL database clusters. But it is very ugly and something I'm far from happy with, even after about two rewrites.
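The cluster fan-out piece can start as small as this hedged bash sketch (the hostfile name and the nodetool example are stand-ins):

    cluster_run() {
      local hostfile=$1; shift
      while read -r h; do
        ssh "$h" "$@" 2>&1 | sed "s/^/[$h] /" &   # tag each host's output
      done <"$hostfile"
      wait
    }
    cluster_run prod-cassandra.hosts nodetool status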
I want a better shell but only in that I want a Bash-like language (with builtin Linux userland programs like mv, cp, etc.) that works across macOS, Linux, and Windows. (I cannot reliably get Bash for Windows installed on every Windows device I own.)
I want this for cross-platform build scripts. I build a desktop app for Windows, mac and Linux. I don't want a different set of scripts in a different language for each platform.
And I don't want to write in Python or JavaScript or Ruby or Perl because they're all so much more verbose than Bash.
But Bash on its own isn't enough either. I need the same Linux userland commands to work the same way on Windows. Powershell has mv and cp but they don't work the same way or have the same flags.
I hacked together a version of this language in Python that operates line by line and massages the differences between things like mv and environment variables across Windows, macOS, and Linux. For example:
yarn
setenv UI_ESBUILD_ARGS "--minify"
yarn build-ui
prepend "window.DS_CONFIG_VERSION='{arg0}';" build/ui.js
yarn build-desktop
prepend "global.DS_CONFIG_VERSION='{arg0}';" build/desktop.js
yarn electron-packager --overwrite --out=releases --build-version={arg0} --app-version={arg0} . "DataStation Community Edition"
zip -r releases/{os}-{arch}-{arg0}.zip "releases/DataStation Community Edition-{os}-{arch}"
But this is a massive hack. So I'm starting work on a real language that supports a reasonable subset of Bash on Windows, macOS, and Linux and brings a reimplementation of the few key Linux userland commands (mv, cp, mkdir, rm plus new programs like append, prepend, replace, etc.) built into the interpreter.
A sketch of this more real version implemented in Go is here: https://github.com/multiprocessio/crosh. It doesn't build at all yet but if you're interested you can follow along.
You're not the first, nor do I expect you'll be the last, to reinvent the build scripting wheel. Things like this are why languages like Zig have decided that the build script language should just be the same as the actual programming language.
This is also why I advocate for JS as a scripting language (if it's already a part of your stack, anyway). Curiously, every time I mention it HN gets mad at me.
Truly, distributing more-than-trivial scripts cross-platform (including Windows) to machines you aren't remote-adminning to fit your exact specs is hell, especially if some of the users aren't developers. (If they're trivial, just duplicate the effort and keep multiple scripts.)
The use case is _build scripts_ for developer machines and automated builds. WSL2 is supported on Github Actions which is great. I'm the sole developer but again I just need these scripts to work across Windows and the other devices I need to test/develop on.
> Powershell has mv and cp but they don't work the same way or have the same flags.
The better idea would not be to try running bash scripts in it; that won't work. The idea is to use Powershell for your scripts on all three platforms, where Move-Item and Copy-Item do work the same.
In Powershell 'mv' and 'cp' are aliases to Powershell commands, which is why they behave differently. As a Unix Powershell user, I don't want or like that, so I remove them. Scripts can always spell things the canonical way and so be guaranteed of the correct behavior.
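E.g., a hedged sketch (the paths are made up; Remove-Alias needs PowerShell 6+, and the aliases may not exist on non-Windows builds anyway):

    Copy-Item -Recurse build/ui releases/ui         # identical semantics on all three OSes
    Move-Item releases/ui/index.html releases/
    Remove-Alias -Name cp, mv -ErrorAction Ignore   # interactive sessions: drop the shadowing aliases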
There's a huge caveat to this, but it's no less true for Python or any other language, and that is that lots of platform differences are semantic and not addressable by any shell. In my experience you run into these "other factors" pretty quick, but in the domain of build scripts, they might not matter much.
Have you tried installing WSL2 and using a shell through that? Granted if Bash won't run on a machine reliably, WSL2 may not do much better on the same machine.
Yeah exactly. I cannot reliably get either WSL(1/2) or Bash for Windows installed on every device I own. It worked seamlessly on half of them and I can't figure out anything about why the others don't work at all.
Check out Xonsh. It's cross-platform, and while the language is Python, it's a modified Python, so it's not nearly as verbose: you can write things like "cd directory" directly in your Xonsh script, as well as capture the output of commands, with no need to import os, shutil, subprocess, etc.
Been using it for a number of years. Really happy with it.
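A hedged taste of what that looks like (the variable names are arbitrary):

    cd /tmp                            # shell builtins work bare, no os.chdir
    listing = $(ls -l)                 # capture a command's stdout as a Python str
    $UI_ESBUILD_ARGS = '--minify'      # environment variables are first-class
    for line in listing.splitlines():  # ...and it's still just Python
        print(line)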
I think shells are horrible not only from (today's) programming perspective but also from a UI/UX perspective. In response to commands, shells dump text to your screen: from stdout, from stderr, from different programs, in humanly unreadable amounts. They just don't care; it's "not the shell's job", apparently.
Here is my take: the shell's job is to do everything for my productivity. Imagine programmers arguing that "notepad" is the best, IDEs are not needed, we need to keep text editing pure.
How about shells outputting objects on the screen that can be interacted with? Amazing for shells, right? This has existed for decades everywhere else.
A better shell should support pointer devices/events. It's a shame that you can't move the cursor with the mouse in a Unix shell. There are some hacks in Zsh, but I haven't yet found enough courage to explore them. This is one thing that Command Prompt and PowerShell on Windows can do. :)
Like some other commenters here, I don't think I've found any repetitive tasks that aren't scriptable. Aliases and functions handle every one of my use cases, and they probably would handle OP's as well.
For example, he could install moreutils and then just:
alias pre-commit='chronic cargo test& exa -l; wait'
If you really want splits, just spawn tmux with two panes using Tmuxinator, or script tmux directly:
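A rough sketch of the direct-tmux route (the session name and the trailing reads to keep panes open are my own choices):

    tmux new-session -d -s precommit 'cargo test; read'
    tmux split-window -h -t precommit 'exa -l; read'
    tmux attach -t precommit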
I don't need a better shell, I need a better terminal. On Mac we have Terminal.app which is nice and fast but the feature set is limited and notably doesn't support full color. On the other hand we have iTerm 2 which has too many features (creeping featurism) and is completely ass slow and most annoyingly has high latency. It also leaks memory like it's going out of fashion. Running neovim on fish on iTerm 2 is just an utter pain.
> It doesn’t define out-of-process plugin API (things like hyperlinking output).
The author may want to check out the GNU Hyperbole package.[1] I'm not a user, but those who can get through the strange docs and UI absolutely swear by it.
I just want to invert the shell and the terminal: a programming language that spawns piped text boxes or terminal emulator box for each command run.
screen/tmux is also all wrong: the "rendering" should be entirely client side, with server-side running the commands with pipes and keeping some state and history from which to render scrollback, but that's it.
What features do you gain by architecting tmux/screen that way? It seems like you'd lose a nice feature: the ability to persist sessions' appearance after the client exits.
The ability to render things client side in a way the server doesn't anticipate, e.g. native widgets on various platforms.
I don't see myself losing anything of value? If the data you're talking about is e.g. scrollback, nothing is preventing me from storing a wee bit of extra metadata on the server.
Every so often I yearn for something that would be like a fusion of 4DOS/4NT/Take Command (https://prog21.dadgum.com/74.html), a lisp "listener" (https://common-lisp.net/project/mcclim/excite.html) and a nicely pre-customized unix shell and terminal. Ideally something that could be also set up on a windows box without spending a day installing unix-style tools.
I'd settle for a nicely pre-customized unix shell and terminal - what are the cool kids using these days? fish?
> When my Rust program panics and prints that it failed in my_crate::foo::bar function, I want this to be a hyperlink to the source code of the function.
I had some success with good old Plan 9 plumber (via the Plan 9 from User Space distribution for Linux/Mac). You can define your own rules for which text to match in which context, and it can send the data to other programs, which can be scripts that talk to your editor and translate the clicked text into the right file and location to open.
I'm sure this exists in some form somewhere, but something I really want is a way to pull up my command history, numbered, with a shortcut, and then punch in that number to run a past command, rather than mindlessly hitting the up arrow for several seconds (why do we do this?). If anyone knows anything that has this functionality, please share, thanks.
Yes, and there is also Ctrl+R, which kicks off a reverse incremental search; that's what I use most of the time. But what about the complex two-liner with pipes and dates 19 commands ago? :)
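For what it's worth, plain bash history already covers the numbered part (the entry number and search string here are just examples):

    history 20     # numbered list of the last 20 commands
    !1042          # re-run history entry 1042
    !?backup?      # re-run the most recent command containing "backup"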
> However, I personally don’t use this capability of shell a lot: 90% of commands I enter are simpler than some `cmd | rg pattern`.
> (kbd "C-c") doesn’t work as it works in every other application.
So rebind it? Even terminal emulators allow rebinding keys, and so does emacs.
> I launch GUI version of Emacs: the terminal one changes some keybindings, which is confusing to me. For example, I have splits inside emacs, and inside my terminal as well, and I just get confused as to which shortcut I should use.
IMHE the solution is to decide what you want doing your tiling (my experience is informed by using i3 to tile, and sometimes emacs). 99% of the time I let i3 tile, and emacs only splits in half (C-x 2 or C-x 3, depending on what I need and how wide the emacs window is).
> Why can’t I type cargo test, Enter, exa -l, Enter and have this program to automatically create the split?
What you are looking for is `cargo test &`, `fg`, `bg`, and (kbd "C-z"), no? The ability to suspend and background/foreground jobs enables exactly this "1. run tests, 2. list files" flow. My own experience suggests that how I create the split depends on some, potentially implicit, context.
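i.e., roughly:

    cargo test &    # [1] 4242 -- tests keep running in the background
    exa -l          # list files meanwhile
    fg %1           # bring the test run back (C-z to suspend it again)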
> Additionally, while magit awesome, I want an option to use such interface for all my utilities. Like, for tar?
Firmly agree, and I believe this is what the transient[1] package is for.
> extensible application container, a-la Emacs or Eclipse, but focused for a shell use-case
Emacs /is/ focused on the shell use-case, at least in my understanding and use of it.
> A UI framework for text-based UIs, using magit as a model.
> ctrl+c, ctrl+v and friends should work as expected.
cua-mode enables the "normal" copy and paste bindings.
> A tilling frame management, again, like the one in Emacs (and golden-ratio should be default).
What's wrong with `C-x 1`, `C-x 2`, `C-x 3`?
> A prompt, which is always available, and smartly (without blocking, splitting screen if necessary) spawns new processlets.
Isn't this what M-x is, or would its taking up the bottom line disqualify it as "splitting"? Regarding blocking, I do agree it's very annoying to have emacs become unresponsive.
> An API to let processlets interact with text UI.
I would assume there is something lacking from the emacs lisp API[2] for manipulating emacs, but it's not clear what that is from this author's perspective.
> A plugin marketplace (versions, dependencies, lockfile, backwards compatibility).
What is wrong with melpa, quelpa, emacs-wiki, et al?
> It doesn’t define out-of-process plugin API (things like hyperlinking output).
It sounds like that would require the out-of-process layer (the shell) to be as expressive as the in-process layer. For instance, emacs can link to files and define new custom link styles, but my terminal is loath to support even http(s)/(s)ftp, and I presume is not easily extensible (as emacs is) to new protocols.
> Its main focus is text editing.
Everything already communicates in text; it's not clear there is an obvious alternative, or what would succeed text editing for this use-case.
> Its defaults are not really great (fish shell is a great project to learn from here).
Perhaps, but the gnu emacs community seems to profess a "we-came-first" mentality for conflicts between emacs defaults and the 'wider computing experience'.
> ctrl+c, ctrl+v do not work by default, M-x is not really remappable.
cua-mode, and false: I have M-x remapped to use helm.
I find the current shell world obsolete. grep/pipes/sed/awk are byzantine. Most REPL environments are better. I'd like a terminal that goes straight to a Python REPL and runs just Python commands. That would be much simpler.
Awk and sed are entire programming languages, so like, ok... fair. It hardly seems fair to call grep byzantine to me, since although it is based around a DSL, it's an extremely common one that you'll find in every programming language, including Python. But fine, regexps aren't for everyone.
But pipes?? What's the problem with pipes? Don't you ever compose functions in your code or call map() over your data structures? How are pipes any trickier than the kinds of control flow you might use in Python?
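Concretely, a pipeline is just left-to-right function composition over a stream of lines (access.log is a stand-in):

    grep ' 500 ' access.log | cut -d' ' -f1 | sort | uniq -c | sort -rn | head -5
    # read right-to-left as: head(sort_desc(count(sort(field1(filter(log))))))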