Also note that those projects have the (not anymore) secret goal to also produce a web app from the same code base automatically.
I imagine native will follow, or wasm, or something. Nevertheless, having the terminal as the lowest common denominator for UI makes a lot of sense for small utilities. You won't build the next Figma with it, but a lot of scripts could benefit from that.
Indeed, if all the features of modern graphical/web applications are dragged into terminals, then what is the point of using terminal anymore instead of native graphics/web?
Furthermore, the CLI proponent in me is saddened to see this as largely another step away from CLI tools, although strictly speaking it's true that the technology doesn't really define the interaction model.
It's not "all the features", but just being able to build a basic interface without too much trouble is a blessing. Several use cases:
- The most important: you don't have access to native graphics/web. A lot of people still need to work with remote servers where installing X is not viable and you can't access web ports.
- Simplicity. For simple interfaces, it's far easier to just use the terminal with an extra Python dependency, than it is to create a regular UI that will require a graphical server, graphics libraries and such. A web interface would be even harder to do.
- Easily adding interfaces to existing scripts. In fact I used Textualize precisely for this: a script that launched some long-running tasks, and I wanted to check the progress and be able to stop/throttle them. Launching a simple interface was easy enough; the rest of the script is still a regular Python script, but for that situation it just pops up a TUI.
Also, it's not like TUI apps are new. A lot of terminal applications will launch a TUI sometimes because it's just easier. See for example some configuration dialogs for APT.
IMO, the simple input/output and parsable nature of a good CLI app lends itself to chainability, scriptability, and reusability that is still more powerful and flexible than just about anything else.
Put in other terms, you can build a GUI/TUI over one or more CLI apps, but you can’t build CLI apps over one or more GUI/TUI apps.
If developers using these tools realize the need to build these things in layers and not force usage of their app solely through the TUI, then great - everyone wins.
But I’m pessimistic that will be the case.
I've looked periodically for something in this vein, but the only one I can see right now is https://pypi.org/project/matplotlib-terminal/ which I haven't had much luck getting running.
You can do all your text entry by clicking keys on an on-screen keyboard if you like, but saying that a physical keyboard is a more "powerful" way to enter text seems reasonable to me.
(sorry in advance, I know this is needlessly "well ackchually" lol)
No. It's faster and that's all. So call it by that word which is very precise.
If you say 'power' = 'speed' then you've destroyed the meaning of one of those words, just for the modern habit of co-opting a fancier word for a simpler one. I'd prefer people didn't do that, and differentiated 'speed' from a thing you call power which indicates "I can't do that at all, at any speed".
If you don't like bullshit marketing and PR, don't follow it.
Again, I fully acknowledge this is nitpicky to the point of absurdity and we're arguing semantics. But if we're gonna argue semantics, then the definitional meaning of power is on my side.
For "how much crap do I have to write to get it to do something", I call that expressivity. Scala is more expressive than fractran (I guess...).
I would just caution that it's fairly idiosyncratic - plenty of people out there don't use the word "power" the way you do. For example, here is how Paul Graham talks about the "power" of programming languages: http://www.paulgraham.com/avg.html
Just saying, as consistent as your definitions are internally, being too rigid about it may hinder your ability to communicate with others in the field. Good luck buddy <3
Definition of "power" is work done over some unit of time, so OP was using it precisely.
With a mouse I can select text between brackets and delete it, with vim I can tell the editor to do that.
Well, what's the difference?
The difference is the editor won't accidentally delete one of the brackets ever. That's not possible with the mouse. That's possible with the keyboard.
Mouse doesn't have an undo button.
I use the mouse a lot actually, no rodent allergy here. It's not as powerful a tool, it's just not, but it's versatile and literal, the multitool of the UX: I can always click my way to the state I need if all else fails.
Your keyboard does though, as probably does the GUI of your editor in a toolbar or menu item. It isn't like you have to use only the mouse. The mouse was meant to be used together with the keyboard, not as a replacement.
There's a menu item for Undo in most GUI applications. But who has time? If you can undo from TUI vim without a keystroke, I don't happen to know how.
You have too much faith in your editor. Also, a mouse is a precision pointing device with some tools built in. By the time you've counted how many lines and characters you need to jump in a specific direction, I've already placed my cursor there with a mouse and proceeded with the task I needed to perform.
Because there's no dichotomy between a mouse and a keyboard as vim users would have us believe. These are two tools that people use, effectively, and together.
Also C-u allows multiples of whatever you want. I can move say, 19 lines up with C-u C-u C-p C-u C-p C-n (move up 16 lines, move up 4 lines, move down one line) and it can be quicker than you'd think - there is a cost of moving your hand to and from the mouse/trackball.
If you know it's 19 lines exactly, C-u 1 9 C-p.
It can be quicker, depending. For precise selection down to the character where gross markers aren't present, yeah, I deffo go with the mouse.
I'm agreeing with you here, just adding a perspective.
Or, you know, one click with the mouse. Because sane people don't count lines and characters.
For imprecise text scrolling a regular PageUp/PageDown will be faster than calculating the number of lines you need to go up/down. For precise random access nothing beats the mouse. Unless you're in a structured text your editor understands. But then, again, there's no mouse/keyboard dichotomy.
Jesus. I use mouse and keyboard as what's best for the task of that moment. In some cases it's faster to do some complex things using a keyboard; at others, I pick the mouse. Depends on what I'm doing, how well the emacs commands fit that task, how tired I am, etc. I'm talking from many years' experience. Get a couple of years solid emacs under your belt then we can talk because evidently you have little to none.
Yes. In some cases. Not in the bizarre "let's do arithmetic to figure out how to jump to a line", which is exactly what I was responding to. No idea what got you so worked up.
> Get a couple of years solid emacs under your belt then we can talk because evidently you have little to none.
Ah. The person accusing me of dickish comments goes for the jugular and devolves into personal attacks.
I had two years with emacs. In the end I was very unimpressed, and switched to, drum roll, IDEA. That I use primarily from keyboard, by the way.
Type it for me. dib.
I'm not a member of the ratpoison squad, and use vim commands because they are eloquent. Sometimes that's fast, sometimes it isn't.
I use a mouse because it is versatile and I'm not competitive at vim golf or trying to be.
dib is great though, right up there with ddO.
> Type it for me. dib.
Ok. I took the bait and typed it. From the looks of it, it found the next matching set of parentheses, deleted the content between them, and placed the cursor between them.
I'm.... supposed to be impressed by this random action of destruction?
> dib is great though, right up there with ddO.
Delete a line and insert one empty line? (As far as I could deduce from typing it several times).
But at this point we're veering into the territory discussing weird Vim commands :)
--- offtopic ---
IDEA's versions for this are, on a Mac:
- Option + Arrow Up to semantically select groups of words. Where "semantically" means "depends on the language". Attribute value -> attribute value with quotes -> attribute_name="attribute value" -> tag etc. if it's HTML. Value -> declaration -> block -> etc. if it's a programming language (in reality more complex than that)
- Backspace to delete
- Cmd + Backspace to delete line
- Option + Cmd + Enter (insert line above) to insert new line in place of the deleted one
I use di" more often but it's the same thing really. fnCall("a| string"), `di"` lets me replace the string, `dib` to replace the argument (all of them ofc if there's more than one).
If ddO isn't available, granted, Cmd-right-backspace will clean up just as fast. Or Cmd-left-delete if you have both keys.
> `dib` to replace the argument (all of them ofc if there's more than one).
I prefer editors that are aware of context :)
Are there specific actions that might be faster than mouse in vim? Yes.
Does it mean that vim is always faster? No.
Does it mean that there exists a dichotomy between keyboard and mouse? No.
Yes. You don't have to move your hands away from the home row, and you don't have to make random moves with your mouse to locate the pointer. Vim is pretty much always faster. The important thing here is not the speed, though; who cares if you spend 2 vs 5 seconds to select a paragraph anyway. It's just that vim has a much more convenient and, most importantly, precise way to interact with text objects, that's it. Precision is the key.
> Precision is the key.
It's funny how you say this and dismiss an actual precision pointing device. It's also funny we are in a subthread that shows how to do arithmetic just to move up and down a certain number of lines and precisely position cursor where it's needed.
Also, a good article on keyboard vs. mouse someone linked in the thread: https://danluu.com/keyboard-v-mouse/
I believe S or even cc is more succinct
I use ddO because I use dd and I use O, while I've never gotten 'conceptually fluent' with using the c range to do delete-then-edit. I know it's there but I don't use it.
Another benefit is that keyboard input is higher-bandwidth than mouse operations. Imagine a mouse-only interface for everything you can do on the Linux command line, or all the operations you can do in Vim or Emacs (beyond actual text input).
It’s more powerful in the sense of “power user”.
Then again I have RSI, do you reckon that's easier/faster for me?
> Another benefit is that keyboard input is higher-bandwidth than mouse operations
Depends on the job - a mouse is far higher bandwidth for graphics editing, and I know cos I've done that when mice weren't a standard thing.
There's a simplicity of your argument here that misses an important point, use the right tool for the right job.
NB. I use a trackball and IMO that is better than a mouse in all situations I can think of. I recommend them.
Library apps are intended to be consumed by other terminal apps, or by advanced users. User apps are strictly intended for humans.
Take ffmpeg as an example. This is an excellent “library” app, so much that there are many actual libraries that are thin wrappers around it. It’s incredibly versatile.
Do I want to interact with ffmpeg, though? For one-off tasks, no, I’ll just fire up VLC or some other tool instead of reading the ffmpeg man pages.
Friendly, stylized, non-parseable, “user” terminal apps are a comfortable middle ground between hard-core “library” apps and full-on GUI apps.
For the rest of things, you can go a long way by just aligning things nicely, indentation, wrapping/indenting properly according to the terminal width, and maybe adding a bit of bold text which is both easily parsed by machines and human readable.
I generally think these are often considerably more user-friendly than outputting six different colours (half of which don't work well on many background colours, so it's unreadable), assuming the terminal is 290 characters wide (many tools these days seem to think you've got infinitely wide screens), and all the other things these "modern" tools do.
Something like "df" is a simple but classic example; GNU df at least aligns nicely no matter the column sizes (some other dfs, like NetBSD df, don't) and that's still easily used by machines and humans. Maybe -h should be the default though (which would break scripts, so it can't be changed for /usr/bin/df, but in an ideal world...)
Basically, you can have your cake and eat it too.
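The df-style alignment described above takes only the standard library; a minimal sketch (the `format_table` helper and the sample rows are mine, not from any real df):

```python
import shutil

def format_table(rows, gap="  "):
    """Align columns df-style: each column as wide as its widest cell,
    then truncate to the terminal width so nothing wraps."""
    widths = [max(len(str(cell)) for cell in col) for col in zip(*rows)]
    # Fall back to 80x24 when stdout isn't a terminal
    term_cols = shutil.get_terminal_size((80, 24)).columns
    lines = []
    for row in rows:
        line = gap.join(str(c).ljust(w) for c, w in zip(row, widths))
        lines.append(line.rstrip()[:term_cols])
    return "\n".join(lines)

rows = [
    ("Filesystem", "Size", "Use%"),
    ("/dev/sda1", "437G", "61%"),
    ("tmpfs", "7.8G", "1%"),
]
print(format_table(rows))
```

The output stays trivially machine-parseable (split on whitespace) while being readable for humans, which is the whole point of the comment above.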
Hence `--porcelain` and friends.
- clonezilla boots by default to a very simple tui, in the vein of make menuconfig on Linux. You can forward, backward, see all options, expert or simple, and check or uncheck them, and get some contextual help. Once you've set up everything, you can start cloning (or restoring) and it gives you simple progress bars. But before you start it gives you the command-line you can just run next time you want to do the exact same operation, which is quite handy...
- AFL (the fuzzing tool) renewed the interest (for me at least) in 'progress TUIs'. Fuzzing can run for a very long time and there are many things that can go wrong early on, so you want a dashboard, and AFL provides one. Bonus: it's quite fun to look at and you end up trying to optimize some metrics (like the number of fuzzing cases per second...) since the TUI taunts you. For long-term monitoring AFL also maintains TSV files, with gnuplot scripts, but the UI makes the tool fun to use (it doesn't hurt that AFL was so easy to use and landed crashes real quick). Praised be Michał Zalewski, aka lcamtuf.
Both have an optional TUI that make the user experience so much more powerful (sorry I know we're nitpicking the word today...).
CLIs by and large treat the terminal as something to read/write lines to, with maybe some rudimentary interaction support like redrawing the same line for a progress bar, etc.
TUIs use the various positioning and mode-configuration escape sequences of the terminal to (typically) display "full-screen" applications within a terminal.
Vim/Emacs would be obvious examples, or any of the Curses-based menu-ish systems.
Try piping vim (not in batch mode) to a pager, and it doesn't really know what to do.
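Those "positioning and mode configuration escape sequences" are just byte strings; a minimal sketch of the primitives a full-screen TUI builds on (sequences per ECMA-48/xterm, the helper name is mine):

```python
CSI = "\x1b["  # Control Sequence Introducer: ESC [

def move_to(row, col):
    """Absolute cursor positioning, 1-based (CUP: CSI row;col H)."""
    return f"{CSI}{row};{col}H"

CLEAR_SCREEN = f"{CSI}2J"        # erase the whole display
ALT_SCREEN_ON = f"{CSI}?1049h"   # switch to the alternate screen buffer
ALT_SCREEN_OFF = f"{CSI}?1049l"  # back to the normal buffer (what vim does on exit)

# A full-screen app typically brackets its lifetime like this:
# print(ALT_SCREEN_ON + CLEAR_SCREEN + move_to(1, 1), end="", flush=True)
# ... draw frames with move_to(...) ...
# print(ALT_SCREEN_OFF, end="", flush=True)
```

The alternate-screen switch is why vim restores your shell's scrollback untouched when you quit, and also why piping vim somewhere confuses it: these sequences only make sense when the other end is a real terminal.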
There is some middle ground: GitHub's `gh` command will use TUI-lite interactive menu prompts for various parameters if omitted, but can run non-interactively and be piped / folded / mutilated etc. if the appropriate args are given.
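The usual way to get that dual behaviour is to branch on whether stdout is actually a terminal; a minimal sketch (the record format and `render` helper are made up for illustration, not gh's actual mechanism):

```python
import json
import sys

def render(records, interactive=None):
    """Pretty columns for humans, JSON lines for pipes."""
    if interactive is None:
        # When piped or redirected, isatty() is False
        interactive = sys.stdout.isatty()
    if interactive:
        return "\n".join(f"{r['name']:<10} {r['state']}" for r in records)
    return "\n".join(json.dumps(r) for r in records)

records = [{"name": "textual", "state": "open"}]
print(render(records))
```

Run interactively you get aligned columns; run under `| grep` or `> file` you get stable JSON lines. That is the "have your cake and eat it too" pattern in about ten lines.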
Sometimes you need an interactive tool and you may want to run them in a console.
Emacs is an example
Cramming output intended for humans, and output intended for data transfer between processes into the same format makes the lives of both groups of recipients worse.
This isn't it at all. Great example: k9s. Sure, you can manage k8s via kubectl. Or you can manage it via some web UI. And both of those options are a pain in the ass compared to what you can do in k9s in a few keystrokes. The TUI, when done well, is the perfect combination of a GUI and a CLI app. The interface can get out of the way and the usage can be driven by keyboard input vs the duality of click 'n type, or memorizing commands with a myriad of nested subcommands / flags.
Having a common TUI would be a bonus, which briefly existed back in Borland’s heyday.
This stuff is not "new" at all. It's just the latest twist on the ANSI art of old, and the use case is similar.
That's not exactly new, and has been around long enough that I don't think these other projects are moving the CLI any further away.
What is missing is a nice library for high-level languages; ncurses is famous and all that, but who, unless they are coding in C, can interface with it or even understand the documentation?
What they don't realize is that the reason the DOM is bad is because of webdevs like them.
So we will end up with a DOM in the terminal and the locust plague of JS developers will move somewhere else.
More concretely, it's using a totally different tool to get a subset of functionality you want vs other solutions.
Personally I don't see a reason why one would do that, but more power to you if you do. If anything, ordering the taco sometimes is going to give you a more refined palate than if you got the same burrito each time (i.e. more experience in the problem space you're working in).
(I'm getting hungry now...)
Are the original articles available somewhere?
Maybe https://archive.org/details/dflat is best now?
There's something about text interfaces that just fundamentally "flows" from most programming environments.
Turbo Vision was the best TUI framework, with a compiled language, in 1990, 32 years ago!
I really don't get this terminal nostalgia.
Everything is moving to the web or web-like. Most of the apps I use at work are web-based and mouse-heavy. The overall slowness and latency of the UI is killing me.
TUI apps represent a weird niche. People building them are _usually_ into providing a UX that is fast, productive, offline-first and composable (ie: lends itself to automation). It's a breath of fresh air.
Another point for TUI apps is that they are usually keyboard focused unlike GUI and webapps which require using the mouse (and are therefore less efficient)
But there is an issue if lru_cache is used on methods, like in the example given in the article:
1. When lru_cache is used on a method, `self` is used as part of the cache key. That's good, because there is a single cache for all instances, and using self as part of the key avoids sharing data between instances (sharing would be incorrect in most cases).
2. But: because `self` is a part of a key, a reference to `self` is stored in the cache.
3. If there is a reference to a Python object, it can't be deallocated. So, an instance can't be deallocated until the cache is deallocated (or the entry is evicted) - if an lru_cache'd method has been called at least once.
4. Cache itself is never deallocated (well, at least until the class is destroyed, probably at Python shutdown). So, instances are kept in memory, unless the cache is over the size limit, and all entries for this instance are purged.
I think there is a similar problem in the source code as well, e.g. https://github.com/Textualize/textual/blob/4d94df81e44b27fff... - a DirectoryTree instance won't be deallocated if its render_tree_label method is called, at least until new cache records push out all the references to this particular instance.
It may be important or not, depending on a situation, but it's good to be aware of this caveat. lru_cache is not a good fit for methods unfortunately.
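The retention problem described above is easy to demonstrate, along with the common workaround of giving each instance its own cache that dies with it (the class names here are mine, for illustration):

```python
import functools
import gc
import weakref

class Leaky:
    @functools.lru_cache(maxsize=128)
    def render(self, width):          # cache key is (self, width)
        return "x" * width

class PerInstance:
    def __init__(self):
        # the cache now lives on the instance, so it is
        # deallocated together with the instance
        self.render = functools.lru_cache(maxsize=128)(self._render)

    def _render(self, width):
        return "x" * width

leaky = Leaky()
leaky.render(10)
leaky_ref = weakref.ref(leaky)
del leaky
gc.collect()
print(leaky_ref() is None)   # False: the class-level cache still holds self

ok = PerInstance()
ok.render(10)
ok_ref = weakref.ref(ok)
del ok
gc.collect()                 # collects the instance->cache->bound-method cycle
print(ok_ref() is None)      # True
```

The per-instance variant does create a reference cycle (instance -> wrapper -> bound method -> instance), but Python's cycle collector handles that; the key difference is that nothing outside the instance keeps it alive.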
I don't know how many years it's been since I started working on the Terminal application, but it was only within the past week or so that I "bit the bullet" and figured out how to do finger-pad/mouse-wheel scrolling of the output buffer (see the 'main.onwheel' function in the source code for that little tidbit!). Since I required fine-grained control over the rendering process, I could not rely on the "naive" way of doing scrolling on the web (which is to simply let the browser take care of the entire process).
I've done web-based terminal-style renderers in a number of different ways for my roguelike. I've done both DOM and canvas renderers. I found the fastest approach to be:
1. Render each glyph to a canvas.
2. Only re-render glyphs that actually changed.
Doing that was much faster than using the DOM. I imagine I could go even faster using WebGL but at this point, I considered the performance good enough.
For anyone interested, my terminal library is written in Dart and is open source:
But of course, the idea of developing a terminal using a standard DOM-centric approach is usually not going to turn out very well... though Google is an exception here with the hterm js library that underlies the terminal output for Chromebooks: https://chromium.googlesource.com/apps/libapps/+/master/hter.... Most Chromebook users will never have the opportunity to put this to use, though people like me who put their Chromebooks in developer mode the first chance they get use it all the time.
(Jun 25, 2013)
I should fix it to use: https://api.dart.dev/stable/2.17.6/dart-html/Element/request...
I just never noticed because it wasn't causing any problems.
I'll give it a try.
> Unhandled Promise Rejection: ReferenceError: Can't find variable: webkitResolveLocalFileSystemURL
Per the Disclaimer in the Github README (https://github.com/linuxontheweb/LOTW/):
LOTW is developed in the crouton environment, which involves ChromeOS in developer mode. All development and testing is currently done on a Chromebook, using an up-to-date Chrome browser.
The system should basically work in any modern browser and host OS, but there are likely many tiny glitches that degrade the user experience in other browsers and/or operating systems.
The crucial fact of LOTW is that it is based around the concept of a full-featured, sandboxed file system in your browser. Only Chromium-based browsers natively support that kind of thing via 'webkitRequestFileSystem'.
That being said, there is a shim/polyfill that is supposed to load and take care of that (https://github.com/linuxontheweb/LOTW/blob/main/www/js/fs-sh..., created by Eric Bidelman when he was at Google). Last I knew, Firefox seemed to work with it.
I see the ie6 world is coming back
Anytime a software project develops enough complexity, there is no shame in targeting a specific platform. Given enough eyes on a thing, though, those kinds of "supported" issues always take care of themselves.
But I'm just a lone developer trying to do something never before done. I'm still trying to prove a concept...
I too am a lone developer but (not to judge) I consider any website I work on to only be complete if it works on Chrome and Firefox and maybe Edge. I mean, it's easily thousands of times easier to do this now than it was in the early 2000's when we were all forced to do IE5/6/7/8 compatibility, and the tooling wasn't remotely as good... so I won't have any sympathy for you LOL
Also no scrolling areas inside scrolling areas, which is always annoying.
Check out timg to display videos and images in select terminals, or ranger, a file manager with image & video previews!
It would be good if the What we do page answered this.
>At the end of last year I took a year off to work on my Open-source projects and develop an idea that I believe will allow the terminal to eat some of the browser’s lunch. Turns out this idea was compelling enough to attract some sweet sweet VC cash and I am now hiring a third Python developer to join the company.
And? What does “land” mean here, exactly? The project is FOSS, so they won’t sell the software the traditional way. So how will they make money?
However, it's not a high-value business so it is somewhat surprising VCs are funding this.
Then it probably won’t be fully FOSS, will it?
Why do I need the terminal to become an interactive visual app? Why not use the tools that are designed for visual interactive applications- like GUIs ?
Am I completely missing something? Should I start thinking of replacing my desktop environment with a terminal emulator that does everything, including displaying images, videos, windows etc to gain some advantage I am unaware of?
- you may work mostly in the terminal and want to fire up a quick tool: dev work is a lot of that, so it's great for this use case
- it's cross platform
- it works with no X so it works with ssh
- it doesn't eat a lot of resources
- it's fast to launch
- you usually already have a cli entry point, so this is a natural next step. The quick script becomes really nice
- it's very constrained, so devs have to focus on the most important things, which usually makes the UI better
- it's a common denominator, so you can generate a web UI and a native UI from it
- TUIs have a naturally good keyboard workflow for free
- it's just really cool
Why might one use something like tmux instead of relying on Rectangle or a tilling window manager or what have you?
1) The terminal is far easier to gain access to cross-platform than any GUI window manager, so if you can keep most of your workflow in the terminal, you can work just about anywhere. You get it out-of-the-box basically everywhere but Windows (assuming we mean a Unixy terminal, here, not cmd or powershell) but it's very easy to gain access to a unixy terminal there, too, these days.
2) Maybe you are constrained to the terminal for some reason, as in the case of a remote server that doesn't have X installed.
Every OS has a terminal, so every OS can render those apps. Maybe we'll finally get an Electron-less Slack one day.
Because those have become so shit the developers want to leave them.
The problem is that the developers who want to leave them are the ones who made them shit and haven't learned any lessons.
Terminals -> tuis -> guis -> single page apps -> terminals.
It's the cycle of life, the best thing to do is to completely ignore it, in 5 years everyone will have left and you'll be able to pick the two or three good ideas they brought with them.
There is also old research showing that a small delay in response time has a big impact on productivity. A keyboard-only interface and high performance/low latency can be an excellent choice for power users/users who care about productivity.
In theory we could have all that using regular GUIs, but almost no developer/UX designer cares about this today. The TUI's constraints end up being an advantage for getting things done in some situations.
The Economic Value of Rapid Response Time https://jlelliotton.blogspot.com/p/the-economic-value-of-rap...
Cool, but I’m surprised at VC money to be honest.
Something strange that I found was: If you redraw only two characters in the terminal, neither iTerm nor macOS's terminal would render the update. In my solution I always rendered some characters redundantly to get around this.
EDIT: I went back and looked at this code again based on the insights from this blog post and figured out a few more issues I had and fixed them.
It happened to GTK, it will happen to you too.
I've wanted to build a CUA terminal editor with one of these for decades, but micro has recently become good enough. So I am cheering this on in spirit.
Please don’t turn to the terminal as a panacea for browsers. I don’t want my cli apps to suddenly be colorful or worse have css. Go back to making native apps, they are significantly better at doing interfaces than a terminal emulator.
I have been coding for about 40 years. I can't even remember the number of advanced text-based applications I wrote way back when that was all there was. Full menus, trees, scrolling regions, pop-up dialogs, color, even mouse input when it became available. As they say, 'been there, done that.'
I decided to go console-based this time to roll out something quickly. We are working on a full GUI app, which was not going to be ready on time.
Well, I put “quick” in quotes above because it was far from quick. I guess I forgot how much work these things can be. And, to your point, how much you end-up reinventing a perfectly good wheel.
In retrospect, I should have told our customer to wait another week and deliver a far more capable product using wxPython.
Lesson learned. Again.
Don't forget the UNIX philosophy. We build composable programs for the good of other use cases.
Apologies in advance for drifting off topic: as (primarily) a Common Lisp developer, it makes me sad to see great Python projects that will never be replicated in my world. Perhaps there are 10,000 times as many Python developers as CL, so it is understandable.
EDIT: do you mean cl-tui?
Seems like Textualize is coming at it from the right angle, abstract as much as you can away such that a widget is something self contained and your UIs are actually composable à la web frameworks like React or Vue.
Declarative UIs are the future. Now... When will someone make a Go port...
Please tell me you can have window widgets
Like a modern Turbo Vision where you can drag/drop them, even one on top of the other?
We support multiple layers, but we are explicitly not advocating windows that can be dragged around. It wouldn't be hard to build, but I feel TUIs should avoid the requirement to shuffle windows around like a desktop app.
Also, please make them look like actual tabs, which TUI and Web designers avoid like the plague for some reason. :-P
Anyway, I really like what you're trying with Textualize.
So close. Window views would be the last deal breaker to remove for the product I was thinking about.
If eventually later gets implemented I'll definitively take a look/try to make a PoC!
Also reliance on the mouse in the demos made me feel a bit queasy. Anything involving overriding the default mouse semantics in a terminal window can go straight to hell
I'll give a small story from my distant past:
I went to a college with Sun IPCs and IPXs. It turns out, if you make the terminal beep on one, it is the HIGHEST-priority thing the machine can do, as far as we could tell. So someone sent someone else 2 megs of ^Gs via some mechanism usable in the 1990s, and the recipient was sitting in the middle of a cathedral computing center. The machine literally will beep, and beep, and you can't stop it.
That is a lot of bleeping beeping. They had to power the poor machine off. Ever since, I've turned on the visual bell in every terminal emulator I use, and I don't trust much on a terminal.
May a terminal emulator author read my cautionary tale. (Before you ask: No I was not the user involved.)
This doesn't work for all emoji. Some are categorized as ambiguous width and their rendered width is system dependent.
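A sketch of why: Unicode assigns each character an East Asian Width class, and the "Ambiguous" class is exactly the one terminals disagree on (the `cell_width` helper is mine, a simplification; real libraries like wcwidth handle combining marks, ZWJ sequences, etc.):

```python
import unicodedata

def cell_width(ch):
    """Best-effort terminal cell width for a single character."""
    eaw = unicodedata.east_asian_width(ch)
    if eaw in ("W", "F"):   # Wide / Fullwidth: two cells
        return 2
    if eaw == "A":          # Ambiguous: 1 or 2 cells, system dependent
        return None
    return 1                # Narrow / Halfwidth / Neutral: one cell

print(cell_width("a"))    # 1
print(cell_width("猫"))   # 2
print(cell_width("§"))    # None: renders differently across terminals
```

Any layout code that assumes a fixed answer for the Ambiguous class will misalign on some terminal somewhere, which is the pain the comment above is pointing at.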
I've been using terminals for too long and the expectation of what should be possible is ingrained deep.
I want tmux to scroll and have hover states like that.
A bitmap sounds suitably compact and fast. There are likely to be large intervals of double-only or single-only items so it may be even smaller.
Regarding your hiring it would be nice to actually get a reply, and say why if you don't want a person.
It did well until the mid 00's when everything started expecting unicode support.
I started working on a TUI application over Christmas that used it but put it on the back burner until the css branch gets merged.
All requests need to be iterated on, in real time.
So no one asks the perfect question immediately - it takes attempts, where the answer is no, or failure.
So how fast we can interpret what the answer a terminal gives us is important.
And a picture, or immediate visual interpretation, is worth more than a thousand words.
I'm surprised how much has been done with it. The version in master has been stuck in limbo while we've been working on the CSS branch.
I know that you said you wanted to move away from Gitter, but somebody else recently had the same question.
It sounds like at this point you're starting to expect people closer to the code to use the css branch? 
 - https://news.ycombinator.com/item?id=31151315
 - https://gitter.im/textual-ui/community?at=62e799a2b16e8236e3...
 - https://community.textualize.io/t/api-stability-of-css-branc...
This use case is crying out for a Textual-based ASCII art editor.
> The second trick [...]
> The third trick [...]
This is pretty hilarious to consider when coming from Win32API.
It's traditional to clear the window before showing anything on Win32. Clearing the window happens so fast that you shouldn't see a flicker, even as the mouse moves over your window (each pixel your mouse cursor moves, Win32API will clear the window to its background, redraw the window (erasing the old mouse pointer), and draw the mouse pointer in the new location).
This "TUI needs to be overwritten, not cleared" idea seems quaint and slow. Win32API was doing drawRect(background color) for decades on ancient 386 machines, and it was fast enough to deliver a good experience. Why is an 80 x 24 terminal window so much slower?
> In Textual the layout process creates a "render map". Basically a mapping of the Widget on to it's location on the screen. In an earlier version, Textual would do a wasteful refresh of the entire screen if even a single widget changed position. I wanted to avoid that by comparing the before and after render map.
Win32API creates and maintains the "invalidRect", the rectangle that needs to be re-rendered from scratch (i.e. draw calls invoked on the hierarchy of "windows" from background to foreground, in order, to make the overall window look unchanged).
Not only from mouse-cursor movements, but also as other windows move on top of your window, or Clippy's speech bubble disappears (if you remember that little UI from the 90s version of Microsoft Word).
And again, this needed to be done every time the mouse moved one pixel, to erase the old mouse cursor (aka: redraw the entire window from scratch "over" the old mouse cursor, making it look like you've erased it) and redraw the mouse cursor on top of the fresh coat of paint. It was an incredibly common operation even in 20MHz 80386 land of the early '90s.
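The render-map comparison mentioned above can be sketched in a few lines (hypothetical helper with illustrative names; neither Textual's nor Win32's actual code). Both the old and new positions of any moved widget become dirty regions, much like an invalidRect:

```python
def dirty_rects(before, after):
    """Given widget->rect maps before and after layout,
    return the regions that need repainting."""
    dirty = []
    for widget in before.keys() | after.keys():
        old, new = before.get(widget), after.get(widget)
        if old != new:
            # A widget that moved dirties both its old and new location.
            if old is not None:
                dirty.append(old)
            if new is not None:
                dirty.append(new)
    return dirty
```

Everything outside the returned rects is guaranteed unchanged, so the screen never needs a full clear.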
There's just no way a modern terminal is that slow, unless there are a billion layers of vsync / refreshes going on. There's definitely something wrong here IMO.
> Unicode art is good
This is true. Heck, ASCII art / symbols are often good enough to do many, many things.
There's something wrong with the terminal model at a fundamental level if you're a couple of orders of magnitude slower than the 1980s. I can't say I'm an expert on TUIs (or GUIs for that matter), but... this whole blog post is kind of a horror story IMO.
That is not due to speed but due to the Windows API being smart with clipping regions/rectangles, only allowing pixel updates in damaged regions (e.g. uncovered parts of a window as you move it). Note that nowadays this only happens inside windows, as toplevel windows are composed via DWM into an offscreen buffer. Windows also tries to double-buffer window drawing updates to avoid flickering. However in Win7 (with DWM - aka Aero mode - disabled) and earlier you'd be able to see flickering by, e.g., resizing windows with complex UIs.
Also, the "tradition" here isn't to clear the window but to invalidate the regions that were damaged, so that the next update (which does clear the window) only affects the damaged regions instead of the entire window. This was most common during the 90s and early 2000s though; at some point computers and memory became fast and big enough to do double buffering during paint events (which in some cases you still need to do when working with GDI) to avoid visible flickering - and nowadays most common flickering issues are solved by Windows itself via composition.
> even as the mouse moves over your window (each pixel your mouse cursor moves, Win32API will clear the window to its background, redraw the window (erasing the old mouse pointer), and draw the mouse pointer in the new location.
This is what Windows did until Windows 3.1 (and you could see the cursor flickering when, e.g., a control drew itself while the cursor was over it), but with Windows 95 the system composed the mouse cursor to avoid that flickering. However nowadays (and for a long time now, actually) the mouse cursor is drawn by the GPU as a separate hardware plane on top of the screen contents. All Windows does in that case is send the cursor image to the GPU and set the registers that specify the plane (cursor) position whenever the mouse moves - no drawing takes place.
On the contrary. The modern GPU has so many buffers that it's constantly recalculating every pixel with custom pixel-shader code (transparency and everything) every frame, maybe at 60 FPS or faster.
Yeah, the CPU doesn't do any of that bit-blit stuff anymore. But the GPU does it constantly, every frame, over and over again, to compose modern windows. VRAM is 500GBps for a reason on modern GPUs.
In either case, "blanking" the screen and redrawing it was a fundamentally fast operation 30 years ago, let alone today. Today's computers are so much faster that there are layers of custom-programmable parts running on two different processors (the CPU passing data to the GPU over PCIe) every frame.
4GB+ VRAM buffers on the GPU allow the GPU to save off some work of course, but there's still an incredible amount of calculation occurring on every pixel of every frame, 60 times a second, today.
In any case, a pure-text, maybe ASCII (or Unicode) window shouldn't be having these issues.
I was referring to the CPU side, obviously, since the operation you described in the message I replied to was done on the CPU at the time.
Also, the plane I refer to is drawn at a later stage than composition - in fact it was used by older Windows versions (before 8) when DWM was disabled (or didn't exist, as in WinXP) for the cursor, while the rest of the system used the region-based clipping approach.
FWIW it is also used by Xorg nowadays for the exact same reason - and like Windows without DWM, Xorg without a compositor still uses a region-based clipping approach to drawing, but uses the dedicated plane to display the mouse cursor.
Terminals just have the wrong primitives for colour. It’s irredeemably broken, needing complete replacement with an altogether different approach to colours, and I don’t even know quite what that approach would be, quite apart from the improbability of convincing people to implement it.
There are default foreground and background colours, and they could be black and white, white and black, or just about anything, really. (People speak of 16- and 256-colour terminals, but they’re actually 18- and 258-colour.)
You want to make your text stronger? Have fun. You’ll look at your terminal that does #ccc-on-#000 or similar, and try bold. On some terminals, this will also change the colour to #fff, but on others it won’t. You kind of want it to, because bold-but-the-same-colour isn’t drawing the attention you want, so you figure you’ll set the colour to bright white. Well, now your text is completely invisible for many light terminal users. So you begrudgingly roll that back and decide to try yellow. Eh, it’s a bit dull, but not too bad. You’ll still get complaints from light terminal users, though, because although it does have the advantage of distinctiveness, it’s much lower contrast. Don’t even think of bright yellow, because that’s back to being almost invisible in most light terminals.
Light terminal themes have to decide whether colours 8–15 mean “bright” (increase lightness and perhaps saturation) or “higher contrast” (where you decrease lightness). Having played the game, I can report that both choices will break some things, but that “bright” is probably the more reasonable of the two. But know that you can’t rely on any particular direction in the relationship between colours 0–7 and 8–15.
You think you’ll get around all of this by setting background colours? Please don’t, this just guarantees that your app will feel completely out of place, and probably be unpleasant to use.
You’re fed up with light terminal considerations? Well then, perhaps you’d like some blue in your dark terminal. Pity that there are still widely-used terminals out there where blue is so dark that it’s almost invisible and extremely hard to read. And even bright blue is commonly mildly painful to read. So blue’s out for any length of text.
My advice ends up: by default, you should not set any background colours, and for foregrounds you can use the default colour and colours 1 (red), 2 (green), 5 (purple) and 6 (teal), and I will graciously permit you to use colours 3 (yellow) and 4 (blue) for no more than one word at a time (e.g. “warning:” in yellow). Seriously. Until the user opts into anything else, treat the entire thing as a five-and-a-bit-colour terminal, because you’ll cause misery if you go any further. Never use colours 0, 7, 8 or 15 by default without determining what they and the default colours are, because they may be high contrast or zero contrast. You can also use bold, which may give brighter colours, whatever that means, but shouldn’t use colours 7–15 directly.
In many terminals it is possible to determine what the default background and foreground colours and other colours are, and if you have a really good reason why you want to use a bunch more colours you can try reading them and at least doing something simple like switching between a light and dark theme, but I recommend against that, because setting backgrounds is still just generally… non-native is probably a decent way of putting it. Stay colour-neutral by default, fit in rather than standing out.
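The advice above fits in a tiny helper (hypothetical function; SGR codes 31/32/35/36 are the standard escape sequences for colours 1, 2, 5 and 6):

```python
# The "safe" foregrounds: readable on both dark and light default themes.
SAFE = {"red": 31, "green": 32, "purple": 35, "teal": 36}

def colorize(text, name=None, bold=False):
    """Wrap text in an SGR sequence using only the safe palette,
    leaving background and default colours untouched."""
    codes = []
    if bold:
        codes.append("1")
    if name:
        codes.append(str(SAFE[name]))
    if not codes:
        return text
    return f"\x1b[{';'.join(codes)}m{text}\x1b[0m"
```

Note that it never touches the background and never emits colours 0, 7, 8 or 15, so it inherits whatever theme the user chose.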
This is hardly the only place terminals are using a fundamentally bad model. The article does talk about the problems of column widths, seen especially in emoji. We seriously need to burn the current scheme of terminals down and build something sound in its place. Of course, this is extraordinarily unlikely to happen, and only stands any chance whatsoever if compatibility can be maintained in some way.
16.7 million colors gives you a lot more control over emphasis and de-emphasis. You can fade the foreground / background, you can draw subtle borders, and you can boost saturation. Many of the techniques that web developers have enjoyed for years.
It's true we don't make use of the user's theme, but you've laid bare the problems with ANSI themes. Respecting the user's color theme isn't going to guarantee readability (probably the opposite).
Terminals are not the web. As an extensive user of both terminal and web, and often a heavy customiser of both, I would strongly prefer that by default you prefer to fit in, even though it limits your degree of expression.
> There are several “standards” for writing color to the terminal which are not all universally supported. Rich will auto-detect the appropriate color system, or you can set it manually by supplying a value for color_system to the Console constructor.
with options None, "auto", "standard", "256", "truecolor", and "windows".
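For instance (assuming a recent Rich version), forcing the 16-colour system rather than relying on auto-detection looks like:

```python
from rich.console import Console

# Force the 16-color ANSI system instead of auto-detecting;
# Rich downgrades any richer colors to the nearest of the 16.
console = Console(color_system="standard")
console.print("[bold red]warning:[/] disk almost full")
```

Passing `None` disables colour entirely, which is handy when output is piped.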
I can totally understand that support for (say) a Wyse WY60 may be lacking, and understand how the authors don't consider that to be an issue.
It looks like you can style to the terminal capabilities, using https://rich.readthedocs.io/en/latest/reference/style.html .
The technical aspects of colour support are fairly uninteresting. The inconsistency of the actual colours is the problem.
And as I asked in my followup: in which terminals?
In any case, my point was that once you checked them, it looks like Rich can handle the configuration, and you can set up a theme to use, and textualize will let you pass in a Rich configuration.
You can’t guarantee that you can check them, and that’s the problem.
I haven’t directly surveyed terminal support for the relevant OSC sequences (I wish someone would make a thorough terminal feature comparison—in theory terminfo databases would be part of this, but in practice the scheme is thoroughly broken), but from what I’ve heard I would expect a reasonably strong correlation between support for that and support for 24-bit colour. That is, you are unlikely to be able to check the colours on the very environments where you most need to.
As far as static checking of terminal capabilities is concerned, I think it’s reasonable to say that everything’s completely broken by design there too. Everything builds upon the idea of a $TERM variable being right and terminfo databases having relevant entries and matching reality, and they’re just not. $TERM will be missing or wrong extremely often, and terminfo databases are regularly out of date, and if you try to use a “correct” value for $TERM you’ll often find various software (especially old software, but even new stuff sometimes) falls over, normally in the direction of claiming that your terminal is incapable of doing something that it is actually capable of. This got to be such a problem that $COLORTERM was introduced, which is more likely to describe the true capabilities of your terminal, but still not quite always even if it’s there, and much important software doesn’t look at it either. If you’re familiar with web history, it’s a similar sort of issue to what happened in the User-Agent header, and for similar reasons.
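The heuristic fallback chain this forces on applications can be sketched as follows (hypothetical function; the rules are illustrative, and their fragility is exactly the point being made):

```python
import os

def detect_color_depth(env=os.environ):
    """Guess color depth in bits from the environment.
    Heuristic only: $TERM/$COLORTERM are frequently missing or wrong."""
    colorterm = env.get("COLORTERM", "")
    term = env.get("TERM", "")
    if colorterm in ("truecolor", "24bit"):
        return 24          # 24-bit color advertised
    if "256color" in term:
        return 8           # 256-color palette
    if term and term != "dumb":
        return 4           # assume the 16 ANSI colors
    return 0               # no color at all
```

A terminal that supports truecolor but sets only `TERM=xterm` silently gets 16 colours, which is the User-Agent-style failure mode described above.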
So then, you’re left with maybe some notion of the terminal’s capabilities, and maybe the ability to find out what the colours actually are, but you can’t depend on either of these, and so you can’t in the general case depend on the values of any colours.
Understood. I misread your mention of "checking" as meaning for an organization which wants to deploy a TUI internally, where they can check all of the supported terminals.
> That is, you are unlikely to be able to check the colours on the very environments where you most need to.
Which terminals are these? Do modern TUI developers need to support them? Or can they be ignored like how most modern web developers ignore IE 4 support?
That's why I keep asking you which terminals you refer to. Experience from 30 years ago may no longer be relevant.
> and much important software doesn’t look at it either.
Rich does. https://rich.readthedocs.io/en/latest/_modules/rich/console....
The most obvious case of missing support is macOS’s Terminal.app. Years ago I imagine you could theoretically at least query the colours by some side channel, but sandboxing will doubtless have prevented that. And maybe it does support the querying, which to my mind is the more important of the two pieces of functionality when it comes to accessibility.
<console width=109 ColorSystem.STANDARD>
    # Convert to standard from truecolor or 8-bit
    elif system == ColorSystem.STANDARD:
        if self.system == ColorSystem.TRUECOLOR:
            assert self.triplet is not None
            triplet = self.triplet
        else:  # self.system == ColorSystem.EIGHT_BIT
            assert self.number is not None
            triplet = ColorTriplet(*EIGHT_BIT_PALETTE[self.number])
        color_number = STANDARD_PALETTE.match(triplet)
        return Color(self.name, ColorType.STANDARD, number=color_number)
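The `STANDARD_PALETTE.match(triplet)` call above is presumably a nearest-colour lookup; a minimal sketch of such matching (hypothetical helper, squared Euclidean distance in RGB, which is roughly what palette matchers do):

```python
def match_to_palette(rgb, palette):
    """Return the index of the palette entry nearest to rgb,
    by squared Euclidean distance in RGB space."""
    return min(
        range(len(palette)),
        key=lambda i: sum((a - b) ** 2 for a, b in zip(rgb, palette[i])),
    )
```

The catch, as discussed below, is that the palette values you match against are nominal: the terminal's theme may map those 16 indices to very different actual colours.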
There are 16 of these colors, which I visually confirmed using the demo "python -m rich" and counting the number of dissimilar colors.
This is in line with your suggestion to treat the terminal "as a five-and-a-bit-colour terminal" -- though 4 bits in this case.
HOWEVER, you added "they may be high contrast or zero contrast. You can also use bold, which may give brighter colours, whatever that means, but shouldn’t use colours 7–15 directly."
I confirmed that Terminal.app shows reasonable colors for this case.
> Don’t even think of bright yellow, because that’s back to being almost invisible in most light terminals.
I confirmed that bright yellow is clearly visible with a light terminal.
The two other missing features I could spot were lack of italic and strikethrough ANSI styles. Bold, underline, reverse, and blink are supported.
“The standard ansi colors (including bright variants)” are a sham. Basically no one has used those specific values for decades. (Well, actually, I think the Linux framebuffer console might still be? In a quick test, it does look about right. But few ever use it directly.) The whole approach is a crock. What you have to realise is that the first 16 colours are in a sense semi-semantic colours, though that’s not quite the right term for what I want to describe; it’s more a provided palette of named colours, perhaps. Trying to translate from exact colours to semi-semantic colours is… well, it’s generally not a great idea, not how you should treat those colours. For best results, you want to deliberately design for 16-colour and 24-bit-colour, deliberately choosing values that may not be the same value.
There are two classes of light terminal colour schemes: what I will call “true”, where black is black, white is white, bright/high-intensity yellow is brighter and higher intensity than regular yellow, &c.; and what I will call “flipped”, where black is white, white is black, bright/high-intensity yellow is darker and probably less intense than regular yellow, &c. The trouble is that the starting palette is ambiguous, because it was designed for dark terminals, but when you apply things to light terminals, some apps assume “true” treatment of colours (that is, that if I ask for white you’ll give me white, regardless of the default colours) and others “flipped” (that is, that when I ask for white, I actually just want a neutral colour that has high contrast with the default background colour); so both will be broken for at least some schemes. (But I should perhaps perform a survey of actual light colour schemes so that I can determine the prevalence of the two approaches. In my own, I went “true”.)
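That "design for 16-colour and 24-bit-colour separately" approach can be sketched as a dual palette (hypothetical names; the RGB values are illustrative, deliberately not the nominal ANSI values):

```python
# Each semantic role carries both an ANSI index (semi-semantic, so the
# user's theme controls the actual color) and a hand-picked 24-bit value.
PALETTE = {
    "error":   {"ansi": 1, "rgb": (0xD0, 0x45, 0x45)},
    "success": {"ansi": 2, "rgb": (0x4C, 0xA6, 0x4C)},
}

def sgr_for(role, truecolor):
    """Return the SGR foreground sequence for a semantic role."""
    entry = PALETTE[role]
    if truecolor:
        r, g, b = entry["rgb"]
        return f"\x1b[38;2;{r};{g};{b}m"
    return f"\x1b[{30 + entry['ansi']}m"
```

The two values are chosen independently: the 16-colour path defers to the theme, while the truecolor path uses exact values tuned by the app author.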
Though it's probably easier to fully support a different terminal.
Interesting trade offs.
Note that I only converted the UI; I didn't implement the calculator logic, but that shouldn't make any practical difference.
But the big picture holds. I wasn't expecting folks to counter with Xlib or Win32.
I'm not sure what you mean by "oldschool". If anything, working with something like Qt or Gtk directly is a bit more old-school in my mind: you either specify the UI by manually creating widgets/objects, or use a separate UI editing program (that often only has a fraction of Lazarus' features) which only edits an approximation of the UI and stores it in what are essentially resource files (not very dissimilar in concept to the Win16 resource editing tool). Lazarus, on the other hand, edits live objects that are serialized to/from disk, which incidentally is kinda similar to how, e.g., a modern game engine like Unreal works... though Unreal's UI toolkit is worse than any of Qt/Gtk/etc combined :-/.
The Python lib most likely needs more optimization, but my point was that a TUI program can still be more heavyweight than a GUI one (the original question about why rendering a GUI in a terminal after all).
I'd agree if you only referenced Electron, but you also referenced other GUIs :-P.