DomTerm approaches this from the opposite end from eshell: at first glance it's a solid, (mostly) no-compromise xterm-compatible terminal emulator, but it also has features that make it a great REPL/shell interface: rather than just plain text, it supports structure (delimiting commands, prompts, input, output); it allows "rich" HTML output: images, styled text, fonts; it has a builtin less-style paginator and readline-style input editing, plus tmux-style panes and session management. DomTerm isn't integrated with Emacs, but there are embeddings for Atom and the Theia IDE.
FWIW when I wrote Emacs term mode my goal was for it to subsume comint mode (the basis for shell mode), but alas no-one else seemed interested in such a merger.
But I think the arguments ring true none-the-less. Our ancient terminals are awful. Colours are basically a hack upon hack. You can't rely on getting text effects working properly. Throwing tmux/screen into the works is almost necessary and as much as I like tmux the complication at the interface between the terminal and tmux is insane. How many times have you used vim or emacs in tmux in a terminal and found that somehow the terminal settings aren't getting through properly? I'm practically an expert in that stuff now (through long hard experience) and sometimes I still run into problems that leave me scratching my head.
We're ripe for something new... but I don't think an application of that kind of girth is going to cut the mustard. Again, I'm super happy that people are tackling this problem and if it works for you, then more power to you. But I think that I'm probably pretty representative of the kind of person that lives in a terminal. I can't see that kind of thing getting popular.
What would be awesome, though, is the generation of a new set of standards. We need "terminal mark 2" that has these kinds of abilities, and we need standards that will allow interoperability between applications running in these terminals. For example, as much as the article asks whether we need terminal capabilities like ncurses -- I think we do. But we also need capabilities like being able to spawn panes in a tmux-style pager (just like a window manager). We need proper history navigation, cut and paste (across an ssh session!), etc., etc. These things need to be environment agnostic so that we can build an ecosystem of tools that will become popular. If people want to live in Emacs, or in Electron, then great. If not, then there are potential options.
I know it's a completely half-formed argument, but I hope it resonates with some people :-)
To be clear, I think it is enticing to imagine a perfect solution. I just don't think it is fair to ignore all of the work that has happened. Nor do I think it is realistic that something will be able to get past the ridiculous cost of entry at this point. There is a reason eshell isn't a complete shell replacement.
To be direct to your points of things we need, though. I think you'd be surprised at just how well eshell does all of them. The only real limit to how well it works is that piping a lot of commands together is limited due to everything going through a buffer. For those cases, "shell" and then doing whatever I was wanting to do, works like a champ. And if I am really wanting to do some fancy stuff, an org buffer is better anyway.
xterm supports sixels, which do real graphics:
The img2sixel program does what you want in terms of catting an image to the screen and having it show up correctly.
(Scroll down on that page to get to the per-terminal documentation.)
These are only a small subset of the features modern terminals have. There is absolutely no need for a "terminal mark 2", or for an awfully slow terminal implementation based on rendering via a DOM.
Many terminals support all or most of these features; one such: https://github.com/kovidgoyal/kitty
It is also worth pointing out that most of the features you mention (except 1 and maybe 3) use protocols that are not widely supported. DomTerm uses familiar HTML wrapped in a trivial escape sequence. A related benefit is that reflow on window resize works - and you can copy/paste or save as HTML.
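For a sense of how trivial that wrapping is, here is a sketch -- with the caveat that the OSC sequence number 72 is from memory of the DomTerm docs and should be treated as an assumption, not gospel:

```python
# Hypothetical helper: wrap an HTML fragment in the OSC escape that
# DomTerm interprets as "insert this HTML". The sequence number (72)
# is an assumption -- verify it against your DomTerm version's docs.
def domterm_html(html):
    return "\x1b]72;" + html + "\x07"

print(domterm_html('<b style="color:red">rich output</b>'))
```

Any program that can print to stdout can emit rich output this way, with no library dependency at all.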
"Awful slow" is relative. DomTerm does take some extra seconds if you 'cat' a large file - but if that is your primary measure of a terminal emulator then our needs are very different.
As for familiar HTML, it is trivial to write a library that accepts "familiar" HTML and converts it to SGR codes for formatting. I could do it in an afternoon.
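The afternoon version of such a converter might look like this minimal sketch, which handles only a few inline tags and silently drops everything else:

```python
# Minimal HTML -> SGR sketch: map a handful of inline tags to their
# SGR on/off escape sequences and strip unknown tags.
import re

SGR = {
    "b": ("\x1b[1m", "\x1b[22m"),  # bold on / bold off
    "i": ("\x1b[3m", "\x1b[23m"),  # italic on / off
    "u": ("\x1b[4m", "\x1b[24m"),  # underline on / off
}

def html_to_sgr(s):
    def repl(m):
        closing, tag = m.group(1), m.group(2)
        if tag not in SGR:
            return ""  # drop unknown tags entirely
        return SGR[tag][1] if closing else SGR[tag][0]
    return re.sub(r"<(/?)(\w+)>", repl, s)

print(html_to_sgr("<b>bold</b> and <u>underlined</u>"))
```

A real library would of course need attributes (colors, nesting) and an actual parser, but the core mapping really is this small.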
And for "awful slow", do the following experiment. Open a large text file in less in your terminal. Then scroll it continuously and monitor CPU usage (of the terminal and X together). Now compare with a real terminal. Think of all the battery life and all the energy you are sacrificing just for the ability to use multiple font sizes and families.
Some sort of common semantic markup/annotation would be useful to allow terminals to offer intelligent click/hover/select actions on urls, file/pathnames, etc, etc.
This can be done with regex or deeper parsers in the terminal program, but that's slow & fragile. If the outputting program has a way of generating '<a href=...>yourlink</a>' the term can just interpret those and save a lot of trouble.
It means you need a) a common markup standard, b) support from enough terminals, c) support from enough output-producing programs to make it all worthwhile.
You'd probably also want any such markup to be backwards compatible so it didn't horribly mangle content on unsupported terms, unless you could sneak it in through feature detection in termcap/terminfo.
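Hyperlinks are actually the one case where this already exists: several terminals (GNOME Terminal, kitty, iTerm2, among others) recognize the OSC 8 convention, and it degrades gracefully -- unsupported terminals just show the plain text. A sketch:

```python
# OSC 8 hyperlink: \x1b]8;;URL ST text \x1b]8;; ST
# (ST = string terminator, ESC backslash). Terminals that support it
# render "text" as a clickable link; others show the bare text.
def osc8_link(url, text):
    return f"\x1b]8;;{url}\x1b\\{text}\x1b]8;;\x1b\\"

print(osc8_link("https://example.com", "example"))
```

This is exactly the "backwards compatible markup" shape: the payload is invisible on old terminals instead of mangling the content.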
Also, I'm not sure 'rendering via a DOM' is the real point of contention here. My understanding is that there's already a DOM of sorts in most terminals, being used to represent the current window in terms of lines, rows, character cells, etc.
Those cells hold attributes for text colour, formatting, and content, etc.
I don't think it would be entirely impossible/impractical to come up with an enhanced representation along those lines that allowed supporting producers/consumers to do more better things.
A fully-fledged HTML DOM and the bulk of a browser engine required to actually render it is where I think the complication and performance comes in. Not to mention a loss of some of the relative simplicity of display generation that a fixed-size character grid affords producers.
The overhead of an HTML DOM vs the typical cell structure used in a terminal is the difference between Jupiter and Mercury.
And yes, I wasn't arguing in favour of using a full HTML DOM, but that potentially some simplicity-favouring middle-ground might allow new and interesting features. As you've just demonstrated :P
As a question to think about, what features does Emacs provide that are unused? I think most Emacs users end up using quite a lot of the features, so why do you think it will be possible to create something more lightweight?
Don't get me wrong, I want this to exist. But it is important to look at existing solutions first. For example, isn't the X11 or Wayland spec an implementation of "terminal mark 2"? A window manager is the shell. Perhaps we are just missing the right kind of utilities to make this environment as effective as a terminal shell?
Another point to consider: are the frameworks massive by themselves? I would argue that the bloat comes mostly from having multiple frameworks. If all apps used the same version of electron then you could have a single electron runtime and then it could be more efficient. Same if all apps agreed on a single Qt or GTK version, or any other framework. If in fact redundancy between multiple frameworks is the problem, then another, new standard will not solve this.
In the end, I want to believe that there's something better, and if you have any examples or arguments to convince me I am eager to hear them.
I’ve often thought this. Who would draft such a spec? Would the IEEE still be the right place (thinking back to POSIX), or GNU, or Google?
terminfo and ncurses seem to solve this problem, assuming you're not just hard-coding escape sequences.
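For anyone who hasn't seen it: the terminfo lookup is accessible even from scripting languages. A sketch using Python's stdlib curses bindings (assumes a terminfo database with an xterm-256color entry is installed, which is true on virtually any Linux box):

```python
# Look up capabilities through terminfo instead of hard-coding escapes.
import curses

curses.setupterm("xterm-256color")   # or use os.environ["TERM"]
bold = curses.tigetstr("bold")       # enter-bold-mode escape sequence
colors = curses.tigetnum("colors")   # how many colors this entry claims
print(bold, colors)
```

The point of the original design stands: the application asks "how do I do bold *here*" rather than assuming one particular terminal's escapes.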
I thought XMLterm was going to catch on, but no one seemed to care. Almost 20 years later, and now scientists seem happy with Jupyter Notebooks.
Wolfram spent that time trying to get everything into a regular model, so any output could be used as a valid input argument. I have tried, and yet it seems only possible to use if your a priori model of the problem domain already matches the Wolfram Language.
We need better terminals. But so far they seem very specialized.
The author of XMLterm (R. Saravanan) more recently developed GraphTerm (https://github.com/mitotic/graphterm) but has not had time to continue work on it.
It's the reason it isn't silly that people read e-mails, chat or manage their files from Emacs. A CLI e-mail client is annoying because of the typing. A GUI e-mail client lacks any sensible interoperability. A curses-based e-mail client is the worst of both worlds - neither interoperable, nor particularly nice looking. An e-mail client in Emacs - and any other application made within it - immediately gains several deep levels of productivity features and interoperability:
- All your usual keybindings work. All your usual searching and editing operations work - that includes not just typical "move around and edit stuff", but also things like advanced autocompletion, interactive or batch regexp search & replace, grepping through everything that's displayed, multiple cursors, and whatever other thing you like.
- Since the application's UI is rendered mostly as text (with some minimal non-text overlays if necessary), you can navigate around, interact with, and copy everything you see. Need to copy the headers of an e-mail to somewhere? If they're displayed, you can. Need to copy a list of e-mails sent from someone? Filter those mails, and then just copy the list from the buffer, as if it was regular text.
- The above solves like 90% of your basic interoperability requirements that let you be extremely productive. Need more? Learn some basic elisp, and now you can script or extend everything, both from outside (using the application's "exposed" API) and inside (augmenting/modifying the application's code at runtime). Emacs exposes lots of functions optimized for working with text in an editor, so it usually takes just a few lines of code to compose together a new productivity feature. Need the list of e-mails mentioned previously regularly, for your weekly report? That's probably one call to generate it in the background, a few more lines to select and copy it as text, and paste it straight into the file you're editing.
Really, in my life I haven't seen any other platform embracing interoperability by default.
Turns out, the author of TFA covers this topic in another post: https://ambrevar.bitbucket.io/emacs-everywhere/.
"orthodox" seems to be more related to file managers
Skimming the article, Emacs seems to be an outlier even for that category. Quoting from the abstract, the set of ideas defining Orthodox UIs:
1. Distinct command set layer with commands that can be entered from the command line and reflected in GUI interface. In this sense vi is a reference implementation and OFM inspired by vi have some interesting, distinct from traditional line of OFM ideas implemented. See ranger and vifm.
2. Tiled, nonoverlapping windows with minimum decorations
3. Stress on availability of all commands via keyboard, not only via mouse clicks, although mouse can be productively used and is used in such interface.
4. Ability to redirect output of commands executed in one window to other windows and processes.
5. Usage of GUI elements to generate commands on command line (macrovariables and such commands as Ctrl-Enter, Ctrl-[ and Ctrl-] in OFM. )
6. Accent of extensibility and programmability (with shell and/or scripting languages) instead of eye candy.
Emacs doesn't seem to meet #1. Sure, you can transfer commands from commandline to a new and/or existing Emacs instance (e.g. through emacsclient -e "some lisp code"), but it's not a common way of using it. It definitely meets #2, #3 and #4. It would fail at #5 if I understand it correctly - sure you can do this, but most of the time you use keys to execute elisp directly, not through shelling out. And as for #6, Emacs blows everything else out of the water.
I'd argue it is more useful to talk about "a language interpreter" rather than a "commandline". A command line is a language interpreter, of course, where the commands make up the language. Emacs is also, at its core, a language interpreter -- but its language is not a simple command set, it is Emacs Lisp.
From that perspective, one can reevaluate 1 and 5:
1. Emacs is at its core a virtual machine that runs elisp, and any modifications you do to that elisp environment shows in the GUI. Change the mode line variable? GUI reflects it. Evaling Emacs Lisp expressions is one of the most common ways to interact with the editor in more fundamental ways than offered by the interactive functions.
5. It is common for some people to use the mouse to click around in Emacs. I do it too, sometimes, with stuff like dired. These clicks are not magic built-ins. They are, just like keys on the keyboard, only bound to execute Emacs Lisp functions. Redefine the function, and you redefine the meaning of the click.
This is the very reason Emacs is as hackable as it is.
If you have more thoughts on that, I'd love to read them. So I want to encourage you to write about it. It's a very much underdiscussed topic.
I thought about it in the context of the "Tech's Two Philosophies" article (https://news.ycombinator.com/item?id=17030339) that showed up a couple hours ago. I look at the overall way Emacs works, and how many other powerful OSS tools work, and then I look at those companies mentioned, and it makes me deeply sad. Those companies, which happen to be trendsetters of our industry, are so up to their ears in moneymaking that they no longer realize most of their practices and recommendations are user-hostile and productivity-minimizing.
On Oberon the whole OS works like that.
Plus it had the benefit of being implemented in a GC enabled systems programming language.
Maybe one day we could get ACME re-implemented in Go, running on something like "Goberon System 3", especially now that it finally has plugin support, essential for how Oberon implemented UI commands.
Do you know if there is a simple wrapper that swallows ansi escapes but yet preserves the illusion the output is a tty (so that the wrapped command doesn't freak out?)
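I don't know of one off the shelf, but the core of such a wrapper is small: run the child under a pseudo-terminal so its isatty() check passes, then filter the escapes out of what it writes. A sketch of the filtering half (the regex covers CSI and OSC sequences, not every exotic escape):

```python
# Strip CSI sequences (colors, cursor movement) and OSC strings
# (window titles, hyperlinks) from text. A full wrapper would run the
# child under pty.openpty() so it believes it's on a tty, then pass
# its output through a filter like this.
import re

ANSI = re.compile(
    r"\x1b\[[0-9;?]*[ -/]*[@-~]"            # CSI ... final-byte
    r"|\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)"   # OSC ... BEL or ST
)

def strip_ansi(s):
    return ANSI.sub("", s)

print(strip_ansi("\x1b[1;31mred\x1b[0m plain"))
```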
See http://no-color.org for more.
I'm happily not using arc for work anymore.
* Long command output is a PITA? fine, Extraterm will 'frame' it and you can easily get to the top of it (Ctrl+Shift+PageUp).
* Want keep that output for later? No problem, you can move the frame into its own tab. Hell, you can even drag and drop it into your desktop file manager.
* You should have filtered that last command output but you forgot? Extraterm's `from` command will let you feed the contents of a frame back into your shell pipe line.
* You did a `cat file-list.txt` and now you want one of those paths? Just go into cursor mode, move up into the frame, edit in the rest of the command and run it. It's a bit like the C=64 again! yay!
* You just want to see that image? Sure, `show` command can show it directly.
* Sick of composing paths for `scp` to move stuff across machines? Use `show` and `from` directly to download and move binaries.
It also integrates with your favourite shell (as long as it's either bash, zsh or fish!), and happily works across SSH. It runs Emacs too, and any other normal terminal application, because it is a proper terminal emulator.
> Long command output is a PITA? fine, Extraterm will 'frame' it and you can easily get to the top of it (Ctrl+Shift+PageUp).
* Shift+Home to scroll to the top
* Shift+PageUp to scroll up
* Ctrl+Shift+F and search for " $" to find the beginning of the output
* ... | less
> Want keep that output for later? No problem, you can move the frame into its own tab. Hell, you can even drag and drop it into your desktop file manager.
* ... &> output.txt
* Select the output with the mouse and copy it
* F10 -> Right -> a (keyboard shortcut for Select All), copy and paste into an editor, remove parts I don't want
> You should have filtered that last command output but you forgot? Extraterm's `from` command will let you feed the contents of a frame back into your shell pipe line.
Use one of the methods above to get the output into output.txt and then use `cat output.txt`.
> You did a `cat file-list.txt` and now you want one of those paths? Just go into cursor mode, move up into the frame, edit in the rest of the command and run it. It's a bit like the C=64 again! yay!
Triple-click to select the path, middle-click to paste it.
> You just want to see that image? Sure, `show` command can show it directly.
`xdg-open` (has the advantage that it opens my image viewer which can also zoom, rotate, show next image, etc.)
> Sick of composing paths for `scp` to move stuff across machines? Use `show` and `from` directly to download and move binaries.
I'm using my file-manager for uploading / downloading files via SSH.
Of course Extraterm will work better for some of these tasks (the first two points are a lot more cumbersome with my methods), but not by that much that I think it's worth it.
I'm not saying that we couldn't do these things in the past. I will say that ease-of-use matters.
Also, a lot of the methods you outlined don't work well across SSH. Extraterm's methods do.
IMHO it's not that awful and 99% of the time you're doing other things anyway.
> e.g. remembering to redirect output to a file.
But when you forgot to do that, it's not the end of the world, just a little inconvenience involving the mouse.
> a lot of the methods you outlined don't work well across SSH.
Which ones and why not? The only one I can think of is xdg-open and that can work with `ssh -X`.
Unfortunately all it does is complain about not having any session types configured, offering precisely no guidance as to what to do about this. The documentation doesn't mention it at all either.
I purchased a Windows laptop a couple of years ago as an additional machine (my main platform is Linux), and honestly it's using Windows which feels masochistic: ads, system prompts which pop up beneath windows, ads, the login screen swallowing my first few characters, ads, massive over-use of the trackpad (but this could be my fault), ads, very poor update experience (compared to Debian), ads, sluggishness, IE, Edge, ads, lack of free software and — oh yes, lest I forget — ads. Did I mention that an OS I paid for shows me ads all the time? 'Cause it does.
By comparison, my Debian machines are a joy to use. Every time I have to use my Windows laptop I physically deflate; every time I return to one of my GNU/Linux machines I sit up straight & smile.
0: When I'm using a site which requires a smart card login, Windows pops the smart card certificate & PIN prompt up beneath the current browser window — so it appears that nothing is happening, unless I move the mouse down to the taskbar & hover, revealing the waiting prompt. This is … odd … behaviour.
Just run Ubuntu, like I do. Almost as easy to maintain as Windows.
Anyway, with Emacs as your OS, whether Linux or Windows is your bootloader is of secondary concern ;).
I feel that people who criticize the platform have yet to try it...
Though to be fair, if you are running Photoshop or Final Cut, Linux is not a great use case.
So clearly no one else has issues, right? None of the people complaining about poor power management, insane application deployment model, poor support for high-end GPUs, general fragmentation, poor backwards and forwards compatibility, or any of the other innumerable problems have ever tried a Linux desktop.
This attitude of "it works for me, therefore it is good enough for everybody" is one of the biggest problems I have with Linux. It isn't so much that there are flaws, it's that the community refuses to recognize that there are flaws.
Doesn't actually solve any of the problems I have with Linux as a Desktop. And if I listed those problems someone would just tell me to use a different distribution, which is itself a problem. Or alternatively try to convince me they aren't actually problems "for most people" as though I should give a damn what is and isn't a problem for someone else.
For Windows users though, the shell integration stuff is only going to work with WSL and cygwin, because anything Windows console based doesn't use or pass VT style escape codes, thus I can't extend the protocol there. This may change in the future as the MS console team is working to add more support for VT and unix style terminals. Making Extraterm into an SSH client itself is also an option for Windows, but I'm having trouble gauging how important a feature that is and whether it should get some prio.
The bizarre twist to this story is that it is going to be easier to add shell integration features to PowerShell Core running on every platform which ISN'T Windows.
Until we had window systems this really was quite practical.
I've usually got a large number of terminals open simultaneously to maximise use of my visual memory and it's useful to be able to context-switch between "editor mode" and "terminal mode". I don't really like all my shell and code views jumbled up.
Anyone got any tips on maintaining separate configurations of eshell windows and code/text windows?
When you use multiple frames, they are still connected to the same Emacs process, so they share buffers, kill rings, etc. Personally, I use frames to have Emacs windows on multiple screens and desktops.
I'm not sure what I want exactly - maybe something like separable workspaces that have independent saveable and restorable configurations.
Bind that to a key:
(defun my/list-shells ()
  (interactive)
  (ibuffer nil "*Ibuffer - shells*" '((or (derived-mode . shell-mode)
                                          (derived-mode . eshell-mode)))))
(global-set-key (kbd "C-x C-b") 'my/list-shells)
This probably could be handled with one of the couple windowing/"desktop configuration" management packages available on MELPA, but I haven't used any of them so I can't recommend any. Myself, I use a tiling WM, so I just place Emacs frames where I need them and keep them there.
On Un*x systems, I use it a fair bit, too, because its integration with emacs is more comfortable than running a regular shell from within emacs. Being able to call elisp functions from within the shell is something I do not do often, but it is very convenient to have anyway.
The ansi-term mode is annoying. It doesn't work like regular Emacs so it messes up all of your keybindings.
It's much easier to just use a decent terminal emulator and live with the cruft than to fight it.
You need to be able to make some assumptions about your environment in order to implement tools like these. All you're doing is specifying the context in which certain commands within your workspace will run.
A common pattern I've seen with some npm packages is to have the globally installed executable delegate to the project-specific version. This is a nice solution, but it's unrealistic to expect lots of existing projects to be rewritten to use this approach.
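The delegation itself is simple; the hard part is rollout across an ecosystem. A sketch of the lookup such a global wrapper performs (names here are illustrative, not any particular npm tool):

```python
# Walk upward from the current directory looking for a project-local
# copy of a tool under node_modules/.bin; a global wrapper would
# re-exec that copy if found, else run its own bundled version.
import os

def resolve_local(tool, start):
    d = os.path.abspath(start)
    while True:
        cand = os.path.join(d, "node_modules", ".bin", tool)
        if os.path.exists(cand):
            return cand
        parent = os.path.dirname(d)
        if parent == d:      # reached the filesystem root
            return None
        d = parent
```

The real wrappers then `exec` the resolved path so the project-pinned version always wins.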
 - https://common-lisp.net/project/asdf/
Now, regarding the distinction terminal vs text. Translated to the GUI world what the author says is basically, "meh, my framebuffer doesn't support me in opening URLs even if I tell it the location of the pixels in its memory".
Of course that can't work. In the same way, the terminal is just an output device. GUI terminal emulators (like xterm) come with some additional features like selecting and pasting text or reporting mouse events.
We can easily make a shell that buffers all the output from the jobs run through it, and possibly stores a copy of them.
However, that would not play nice with programs that actually want a terminal connected to take advantage of advanced capabilities. The shell would have to simulate a terminal for these programs to work. That's not trivial, and basically what programs like screen and tmux do.
So instead, we're forced to re-run jobs in cases where simple mouse selection from the terminal does not work. Or, if we know in advance that we'll want to store contents, we'll just redirect the output explicitly to a file.
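A toy sketch of that buffering approach, using a pseudo-terminal so simple programs still believe they're talking to a tty -- this doesn't approach what screen/tmux do (it ignores cursor movement, resizing, etc.), but it shows the mechanism:

```python
# Run a job under a pty and keep a copy of everything it printed,
# so its output can be reused later without re-running the job.
import os
import pty
import subprocess

def run_and_record(argv):
    master, slave = pty.openpty()
    proc = subprocess.Popen(argv, stdout=slave, stderr=slave)
    os.close(slave)                    # parent keeps only the master end
    chunks = []
    while True:
        try:
            data = os.read(master, 4096)
        except OSError:                # EIO when the child closes the pty
            break
        if not data:
            break
        chunks.append(data)
    proc.wait()
    os.close(master)
    return b"".join(chunks)
```

Because the child sees a tty, ls will colorize, grep will use `--color=auto`, and so on -- which is exactly why the shell would then need escape-aware handling of the recording, and why this rabbit hole ends at screen/tmux.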
It's not perfect, but I've yet to see a GUI that comes close to the comfort I get from a terminal + shell combination. A big part of that comfort comes from the fact that so many programs work trivially in this environment.
I think one question might be, should we make a distinction between "advanced" and "output-only" terminal programs? The former could be started with an explicit "start" command or similar, and speak to the terminal directly. The latter could have their output redirected to the shell, which does whatever thing with it (display it in a text box, have it easily searchable, etc...).
I think this would quickly lead into a rathole, were we want to make more distinctions, like programs that output files that can be clicked on, programs that output URLs etc. So, those programs should probably just indicate what kind of thing they are drawing themselves, by using a complicated API. We've reinvented the GUI!
I think that's sort of the point of the author. Emacs is kind of like GUI terminal emulator (and multiplexer). Kind of, because in reality it's an extremely advanced and powerful GUI for everything text-based.
Of course, it does have its quirks, especially when used to run curses tools. The problem here is a difference in philosophy between curses and how you would write software for Emacs. In fact, it's a very similar difference to what you mentioned about CLI vs GUI programs. Emacs UI conventions assume high interoperability as a fundamental platform feature. Curses applications are like GUIs or modern web pages - they want to have full control over content, instead of giving most of it to the platform. That's why a curses app run under Emacs will suck (assuming it'll even run correctly). An alternative application written for Emacs will excel at productivity and interoperability, because instead of trying to lock you out of what's on screen, it'll yield itself to the full feature set of the platform.
I like eshell, especially since it also runs on Windows giving a consistent CLI across multiple OSes.
But I don't really use it since I don't want to have to keep in mind: "Oh, better open a real term and not do this in eshell."
Right now I'm with "experimenting with shells in Emacs" phase, but I can already report good results. Most of the typical shell work I do in eshell now, but I also keep some shells (regular and M-x async-shell-command ones) in the background, to e.g. connect with openvpn or run node.js servers. I haven't seen any performance issues so far.
I didn't spend much time digging into Emacs and profiling the problem, so I can't tell you much beyond my guesses that it had to do with fontification.
You can run a bunch of commands and then search back over the previous commands and the output.
Cut & paste previous stuff as if you were in a text file.
And you can look at a file in the current folder with C-x C-f filename.
I think almost every GUI terminal app has this already. Ctrl+Shift+F in GNOME Terminal for example.
Famous last words! I think this may be why so many of the terminal emulator projects are abandoned. One thinks, "I could do that," and then trips over Dickey's vttest and Paul Flo Williams' vt100.net. And maybe ECMA-48. And then one remembers about unicode... Anyhow, I was going to start one this morning after reading about eshell, but my research persuaded me this would be a lot of trouble.
I've tried using eshell many times, but there are various quirks that just break it for me. It's also really hard to beat tmux for window/pane/session management.
Terrible I know, but that's my story. I guess Eshell is not for me...
(setq scroll-conservatively 30)

(defun go-up-one ()
  (interactive)
  (scroll-down 1))

(defun go-down-one ()
  (interactive)
  (scroll-up 1))

(global-set-key [S-up] 'go-up-one)
(global-set-key [S-down] 'go-down-one)