Learning from Terminals to Design the Future of User Interfaces (brandur.org)
785 points by juancampa 72 days ago | hide | past | web | favorite | 370 comments



I had sort of an "aha" moment reading a non-technical co-worker's conversation on Slack the other day. Someone built a Slackbot to show the menu for whatever food truck is outside our building that day, and people could use it just by typing "/foodtruck". They were blown away and loved it. Rather than opening a browser and navigating some disparate menu pages, they could simply fire off a command in their "console".

I realized that people love Slack because it has introduced them to the CLI paradigm of computing for the first time. The CLI is so much more efficient and powerful if users can somehow be incentivized to put in the time and effort to become productive, which Slack has focused on a lot. As inefficient and silly as it may seem to us developers, Slack, with its endless command-based integrations, has succeeded wildly at that where nothing else has so far.


Most successful interactions with Siri/Alexa/Cortana/GoogleNow also fall under the same paradigm. They have some finite number of templates/recipes that they can respond to, and just enough "NLP" to attempt a fuzzy matching between what you ask for, and the triggers to the recipes.

The key point is that communication is fundamentally serial, in both voice and basic text consoles, whereas GUIs allow parallel communication streams (and hence "exploration") organized spatially.

I find it ironic that the same people who consider GUIs a more newbie-friendly interaction paradigm than the terminal/console then say the same of voice interfaces over GUIs.


Maybe because CLI mode means you need to remember the commands?

In GUI mode, though inefficient, you know exactly where to access help and a few basic operations (like the dropdown menus on top: Save is usually under File, Cut and Copy under Edit), which smooths the learning curve. They have some solid ground from which to start exploring.

Whereas under console mode, there is no unifying paradigm, so each tool has a different way of doing the same thing. There is a lot of inertia at the start. Is the command for help 'h', 'h?', 'help', or 'H'? Is the command for quit 'q', 'quit', 'Ctrl+D', 'Ctrl+C', or 'bye'?

For example, in Gmail, I expected archive to be 'a' and delete to be 'd', or at least "DEL". Very basic operations and basic expectations. Turns out, archive is '#' and delete is some char I can't remember now (I think '{' and '}', to indicate whether to go to the previous mail or the next mail; a 'd' to go to the next mail and a 'D' to go to the previous mail would have been a lot more intuitive). Of course, you can customize the key bindings, but then you have to do it across all your accounts.

While people may be quite happy to be able to wield so much power within the slack environment, they are very quickly going to lose their enthusiasm with the introduction of each new console tool, unless what they have learned in the slack environment translates into easier learning curve in the next console env.

Basically, for users to be able to adopt the console mode, there should be some sort of standardization.


This is related to DWIM (Do What I Mean) https://en.wikipedia.org/wiki/DWIM

It seems like DWIM features gained popularity from the 60s to the 80s, but more recent systems avoid it in favour of explicit (although sometimes cryptic) error messages ("weak types", like `==` in PHP and Javascript, are a remaining example of DWIM).

This might be connected to the rise of GUIs: more "general user" software became GUI driven, or menu driven, rather than using a free-form input language. That reduced the need for DWIM as a way to help new/casual users. It also meant that the CLI software that remained was generally more powerful, and therefore more dangerous (e.g. bash rather than zork), in which case there's a higher chance for a DWIM system to cause problems.


Just so no one accidentally deletes an email on Gmail, 'e' is archive and '#' is delete.


Also ctrl+enter is send email (immediately) which is very annoying after working in google sheets for a while where ctrl+enter is newline...


The most annoying for me is “j” in Outlook. It means “move down” in Gmail and dozens of other apps; in Outlook it means “mark as junk”.


TBH, Outlook hotkeys generally are a mess like that - not only when compared with their "competitors", but also when compared with Windows itself. I can't even tell how many times I wanted to search for text in a long email only to find that "Ctrl+F" forwards the open email.


Weird. I’ve always done ‘y’ for archive.


That's what Ctrl+Z is for, isn't it?


To stop the current job!?

;oP


Press delete accidentally or intentionally and you should see a notification about the delete.

If you want to undo, then Ctrl+Z. At least that's the way computers have worked for eons.

Ctrl+Z is the reason I prefer computers to typewriters. Software without the ability to undo or reset actions is devolving the user experience to the typewriter age.


> If you want to undo, then Ctrl+Z. At least that's the way computers have worked for eons.

You're missing the joke. Way before Ctrl-Z started to mean Undo in GUI programs, it was (and still is) assigned to "stop current job" in terminals.


Thanks. Learned something new.


"In GUI mode, though inefficient, you know exactly where to access help"

I wonder how useful "help" is to the average user. I've personally never been aided in a strange GUI by visiting the help. Usually there is a cryptic search box that returns useless results when I try to look for what I want, or else I'm presented with what is essentially a book to read about the entire philosophy of the interface. *nix manpages/infopages have a similar problem, but Windows help pages seem to be worse for some reason.


Help was the first command that came to my mind. At least in my very early days (this was in the early 2000s), I did use lots of help. Maybe help was better then, or I was so clueless that even the insipid quagmire of the help pages still proved useful.

But in general, I am talking about a certain consistency among UIs. Say you have opened a new program and you do not know where to start. We can go to 'File' and there is sure to be a "New Document"/"New Object"/"New Diagram"/"New <whatever-model-the-program-works-with>" available to start with.

Similarly, Edit, Insert, Preferences, Script, Debug and their visual and keyboard access methods are all paradigms that are consistent across GUIs.

In the worst case, you can do a systematic exploration of the UI to find all possible actions one can perform. This has to be standardized for the CLI too, for a shot at widespread adoption.


Yeah, I guess we're talking less about "help" (F1 in Windows, or at least it used to be) than about the user-interface guidelines that mandate the "File" menu and so on. As for doing a "systematic exploration", this is largely the role that manpages or "-h" fill in traditional Unix programs, but you're right that we need a better solution for widespread adoption of a "modern" Slack-y CLI for nontechnical users. Maybe show a Helm-style autocomplete "menu" of possible subcommands in the input area or sidebar?


> (I think '{' and '}' to indicate whether to go to the previous mail or the next mail. A 'd' to go to next mail and a 'D' to go to previous email would have been a lot more intuitive)

It's "j" for next conversation, "k" for previous - comes from hjkl, vim (and older [0]) keyboard navigation. Then "n" for next and "p" for previous, for emails within the same conversation.

Thing is, just like vim, these are supposed to become short-circuited so when in the right context, you don't think of the letters anymore. It's not "that key is j, which is up", it becomes "that key is up". It's the same newbie discovery vs power-user productivity problem described in the post.

[0] https://vi.stackexchange.com/questions/9313/why-does-vim-use...


I think the parent is referring to “delete and go next” and “delete and go previous” rather than simply “go next” and “go prev”


Just as thinkmassive specified, I was talking about the two keys for "delete this email". One key deletes the currently open email and opens the preceding email; the other deletes the current one and moves to the succeeding email.


> Like what command is it for help 'h' or 'h?' or 'help' or 'H'?

It's all about smart design of commands. In your example all of those options could work, plus options like 'he', 'hel', 'help', 'help!!!', 'Help???', and even 'HALP' or 'WTF?'. Combine this with smart autocomplete and good hints (e.g. "I don't understand 'hlp'. Did you mean 'help'?") and you can mitigate the drawbacks of CLI commands greatly. As for Gmail, you are talking here about keyboard shortcuts, NOT commands, and that's something completely different IMO.
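A toy sketch of that kind of forgiving matching in plain shell (the `suggest` helper and its command list are made up for illustration; a real implementation would use proper edit distance rather than a subsequence glob):

```shell
# Hypothetical "did you mean" helper: build a glob like *h*l*p* from
# the input so a typo such as 'hlp' still matches 'help' as a
# character subsequence (case-insensitively).
suggest() {
  input=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  shift
  # sed turns 'hlp' into 'h*l*p*'; prepend '*' to get the full glob
  pattern="*$(printf '%s' "$input" | sed 's/./&*/g')"
  for cmd in "$@"; do
    lower=$(printf '%s' "$cmd" | tr '[:upper:]' '[:lower:]')
    case $lower in
      $pattern) echo "Did you mean '$cmd'?"; return 0 ;;
    esac
  done
  echo "I don't understand '$input'."
  return 1
}

suggest hlp help quit list   # prints: Did you mean 'help'?
```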


Also, there's no method of discovering the commands in a CLI.

A GUI shows you what you can do, by having all the buttons and widgets displayed.

People google for "how to close Vim". I know I did when I started climbing that learning cliff. GUIs usually don't suffer this problem (except Google's UIs, which for some reason are bizarrely complex and hard to navigate).


It would be interesting to make an Alexa-style device which, instead of natural language, used a set of composable voice commands, like Bash and the Unix utilities but pronounceable.


Yes. At the moment, voice interfaces are a novelty and it makes sense that we're assuming they will use the language we use. But I wonder if that will continue?

Given one of the problems with voice interfaces is false triggering between humans and machines and humans and humans ("You want to know what?" "No, I was just talking to Alexa!" "Sorry, I can't find any results for that.") I wonder if in the far future we will have worked out a "machine language" partly for this reason.

It might be a bit like radio procedure words mixed with "machine syntax". So rather than "Alexa, give me a list of the top ten companies by market capitalisation", or "Alexa, who was the director of the Poseidon Adventure?", you'd say "Chip list market capitalisation companies ten" or "Chip, director Poseidon Adventure", where "Chip" is the universal signal for invoking voice commands (followed optionally by the name of the device you're addressing) and the grammar of the query is ordered from general to specific. This would minimise linguistic ambiguities and allow accurate but imprecise answers if necessary (e.g. if the machine can't order by market cap). It might also be sufficiently different for humans in earshot to screen it out. There might be specialised words too, similar to the Bash and Unix utilities you mention (e.g. for recalling queries and modifying them, stealth-mode invocation, even piping perhaps).


Google Home already kind of has this; I don't know about Alexa. I'm amazed at how simple commands can be when you learn the simplest form of a command.

And you could of course go nuts with IFTTT and create your own syntax.


Are you referring to Google Home being able to stack up a few commands or the routines feature?

e.g. "Google, what's the weather and set a timer for 15min?"

On Alexa you can enable a follow-up mode which listens for the next command for a bit. It's a bit more natural, e.g. "Alexa, what's the weather"... you wait for her to complete, she then listens again and you can say "Set timer for 15min", she completes and listens, "Read news", etc.

Sometimes I just feel like the machines are teaching us how they would like to be talked to vs us teaching them, as some commands require specific verbal markers in proper order to complete as otherwise they just give you a completely random answer or they simply don't process it at all.


Details wrt the first paragraph, please.



It makes one wonder if an Infocom-style VM could be adapted for CLI use. They did have a database, as I recall.


Problem is it doesn't scale well for people. Imagine if everything you did in slack was via CLI - there are dozens, hundreds of commands to learn? Imagine if every app had dozens of commands you had to learn and memorize. Sure you could do it and be blazing fast, but that's a really steep learning curve a lot of users wouldn't adopt.


Look at the VS Code command bar. You start to type what you want to do, and the search function is incredible, helping you find any command you want. Why couldn't something simple like that be applied to the command-line interface of a shell? Heck, with `zsh` I have a lot of autocomplete commands available, including several sub-commands and sub-sub-commands. For example, when I type `git remote <tab>`, a listing shows all of the possible commands, such as `add` or whatever. Just make the console smarter, _with good algorithms that aren't slow_, and people would use it a lot more.
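For what it's worth, the zsh completion listings described above come from its completion system, which just needs to be switched on in `~/.zshrc` (no plugins needed for the basics):

```shell
# ~/.zshrc: enable zsh's completion system, which powers the
# context-aware listings shown for things like `git remote <tab>`
autoload -Uz compinit
compinit
```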


You achieve this by putting the following 4 lines in your $HOME/.inputrc file:

  "\e[A": history-search-backward
  "\e[B": history-search-forward
  set show-all-if-ambiguous on
  set completion-ignore-case on
From now on, when you type part of a command, uparrow and downarrow provide you with incremental search based on your command history.


I've been wondering this for years. It seems like the most obvious way to greatly improve CLIs, yet no one seems to talk about it...


Check out Fish Shell.


Zsh can also do this.


But it needs plugins; fish has it built in. Nothing to configure.


I’ve tried fish shell a few times and ended up preferring ZSH.

To my understanding, Fish shell doesn't have complete bash compatibility and it breaks many scripts.

It also doesn't seem to have nearly as wide adoption as zsh, so there's less overall community support in general.

Sure you can just specify the interpreter with hash bang but it’s something worth noting before considering a switch.


This. Just please don't be so dogmatic, Fish; it's ok to also allow && together with AND and not break my scripts ...


Bash itself breaks compatibility with POSIX sh. You can supply --posix, but you can also just use bash -c from fish.


Yup this. If I wanted to carry around a bag of plugins all day, I'd just use bash.


On the plus side, by picking and choosing what you want, you can get the precise performance and usability ratio that fits you best. Fish is a great shell but nothing beats the zsh plugin ecosystem.


Or in my case: spend a bunch of time messing with and tweaking my shell config. Sometimes the tyranny of default options is helpful to getting things done.


Do you mean it just ships with support for a list of preset commands, or can it provide autocomplete for any command? (I have no idea how that could possibly work, mind)


The latter, to an extent; it parses man pages to provide completion/help for commands.


Sure, but configuration is more powerful. I use the shell a lot; I'm not going to choose a suboptimal solution for my use case just because of a little bit of configuration.


Gosh TOPS-20 had this back in the early 1980s (and this is why bash has readline...)


Yet there's no standard way for system and user programs to expose their parameters and help to the command line, like most TOPS-20 programs did. So bash's completion and help facilities pale in comparison.


PowerShell has this; all cmdlet parameters are parsed by the PowerShell environment, so you don't have to write argument parsing code over and over; parameters to cmdlets are a standard form with many using the same common names where it makes sense.

Because of that design they are long and readable words but tab completable, and ctrl-space will bring up a menu of available matches starting with what you've typed so far.


Indeed Don, why don't you extend gnu getopt to generate a special ELF section with an argument grammar and then modify bash to load this section? Shouldn't take too long and then this feature would become automatic!


Accessibility demands Tourette Syndrome Compatible noise word support.


I am sure that can be controlled by $LOCALE


TENEX and TOPS-20 (TWENEX) had a command line with built-in escape recognition, noise words and help, which most system and user programs supported.

http://tenex.opost.com/hbook.html

User-oriented Design Philosophy

A piece of system design "philosophy" had emerged at BBN and among some of the ARPA research sites that was to have a large impact on the overall feel of TENEX and, ultimately, TOPS-20. At the time, we called this "human engineering" -- we wanted the system to be easy to learn and easy to use, and we wanted the system to take care of as many grungy details as possible so that the programmer or user did not have to deal with them. Beyond that, we were willing to spend real machine cycles and real (or at least virtual) memory to make this happen.

This philosophy led initially to the human interface features of the EXEC, including "escape recognition", the question-mark help facility, optional subcommands, and "noise" words. Few people now argue against the need to provide effective human interfaces, but at that time there were many detractors who felt that it was a waste of cycles to do such things as command recognition. These kinds of things, they said, would "slow the system down" and prevent "useful work" from getting done. Other contemporary systems used short, often one-letter, commands and command arguments, provided no on-line help, and did not give any response to the user other than the "answer" (if any) to the command that had been entered.

[...]

Escape Recognition, Noise Words, Help

One of the most favored features among TOPS-20 users, and one most identified with TOPS-20 itself, is "escape recognition". With this, the user can often get the system to, in effect, type most of a command or symbolic name. The feature is more easily used than described; nonetheless, a brief description follows to aid in understanding the development of it.

A Brief Description of Recognition and Help

Typing the escape key says to the system, "if you know what I mean from what I've typed up to this point, type whatever comes next just as if I had typed it". What is displayed on the screen or typescript looks just as if the user typed it, but of course, the system types it much faster. For example, if the user types DIR and escape, the system will continue the line to make it read DIRECTORY.

[...]

Question-mark Help

Finally, if the user still isn't sure what input comes next in a command, he types question-mark, and the system will provide a list of choices that are legal at that point. The list includes specific keywords (e.g. FILE, DIRECTORY) and generic descriptions (e.g. "input file"). Most importantly, the question-mark request does not destroy the previous command input. After the list of alternatives is displayed, the partial command is redisplayed, and input continues from the point just before where the user typed question mark.

As a result of this feature:

Users never have to go grab a manual and search around trying to find the name of a forgotten reserved word (command or parameter). This eliminates the "I know a word, can you guess it" aspect of many computer interfaces. The user can often figure out from the choice of parameters what an unfamiliar command or option will do. This further eliminates laborious searching of manuals.

Because the context of the current command is not lost when help is requested, the user can go step-by-step through a command, figuring out each field in turn. In systems where getting help is a command itself, the user may have to write down a long unfamiliar command on a piece of paper in order to be able to enter it completely. As menu-oriented interfaces have become more widely used, the advantage of having all choices visible to the user has become obvious. The question-mark help feature can, in retrospect, be seen as a "menu on demand" kind of approach, and it was one that worked even on terminals too slow to support full menu-based interfaces.

Origin of Recognition

The Berkeley Timesharing system for the SDS-940 had an earlier form of recognition. It didn't use escape, however. On that system, recognition was automatic; whenever the user had typed enough to unambiguously identify a command or symbolic name, the system would spring to life and type the rest of it. This made for the minimum keystrokes in the ideal case, but had one major, and ultimately fatal, problem: if the user typed too much in one field, i.e. more than the amount necessary for the system to recognize that field, the input would go into the next field where it wasn't intended. For example, the system would recognize COP as sufficient for COPY and supply the "Y". But if you typed the whole verb, you would get:

        * COPY Y
             |
             |- typed by the computer
Then, you would at least have to erase the extra "Y". If you didn't notice what happened and continued to type the remainder of the command, what you had intended as:

        * COPY OLDFIL NEWFIL
would come out as

        * COPY Y OLDFIL NEWFIL
This would at least produce an error, and in pathological cases could do major damage. [In the foregoing example, note that the old file name winds up in the new file parameter field.]

[...]


I've been working this way in my shell for a couple of decades.


Isn’t that just apropos?


bash has this too.


Actually, natural language scales quite well. Aza Raskin and his father Jef did a lot of good work in this area.

The discoverability of commands in a system like Emacs is much more powerful than in a menu-based system. In fact, I bet most people navigate the web nowadays by typing part of the URL in the omnibar and letting the browser fuzzy-match, versus maintaining large bookmark lists.


I’m accustomed to typing “s” for StackOverflow, “g” for GitHub, “gm” for Gmail, and probably several others I don’t even realize. This sort of “smart” / reachable interface seems like an improvement over having to type entire words over and over.

Is there a terminal with similar ability?


Fish's abbr feature does this. They're aliases that expand in place. For example I have gph abbr'd to "git push origin HEAD", and I can continue editing the command after expansion.
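For anyone who hasn't seen it, the feature is a one-liner in fish (the `gph` name is just the commenter's own shorthand):

```shell
# fish shell (not bash): define an abbreviation that expands in place
# when you hit space or enter, leaving the expanded command editable
abbr -a gph 'git push origin HEAD'
```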


Is there a way to expand special commands (and optionally aliases) in bash without executing them?

Like if I put `ll !$` it would show, say, `ls -al /home/user/downloads` on a new line.

I'd love tab style completion that offered explicit history as well as standard completions. So if I `find` and then press the completion key-combo I get a list of the last 10 unique find commands (and I can choose one and edit before running).
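Bash can actually do the first part through readline: with the `histverify` shell option set, history expansions like `!$` are loaded back onto the prompt for review instead of being executed, and the `magic-space` binding expands them in place as you type. A couple of lines for `~/.bashrc` (interactive shells only):

```shell
# ~/.bashrc (affects interactive bash sessions only)
shopt -s histverify       # '!!', '!$', etc. expand onto the prompt
                          # for editing instead of running immediately
bind Space:magic-space    # pressing space expands history refs in place
```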


This may be possible with fzf but I'm not sure of all the features it's capable of.

This type of behavior would be a great addition to the terminal or popular shells. Instead of having the command prompt always display at the end of output, have it stay in the same place at the top or bottom.

Something like you get with the Emacs mini-buffer with tools like helm but for the terminal.


Actually I got this by turning OFF fuzzy search. I don't use bookmarks anymore. I just type gi<DOWNARROW> or gm<DOWNARROW>, or st<DOWNARROW> and press enter.


I use ctrl+r for this.


Emacs + Smex makes the M-x command discovery even more effective.


I agree about the learning curve, but not about your point that it doesn't scale. I, and many people I know from my generation who basically live in a command-line environment (often including vim), know thousands of commands and their options by heart. It's like another language.

At some point you are fluent and coming up with the right commands and options is effortless. The challenge becomes being efficient and elegant.

Even then, you learn about new tools or options regularly and it's not a problem. It's much like with a natural language: there is no point where you stop learning new words.


What you need then is a compact overview of what is available, like `man intro`, and something like `apropos`.


CLIs work well when the universe of commands is small or the user is very familiar with the application. Discoverability suffers, even with man pages. Where a GUI helps is for the casual user who needs help understanding the options. An interface that can allow both styles of interaction has the flexibility to give both kinds of users what they need.


Ding, ding, ding!!! I've been beating this drum for a few years now. The interesting thing about chat bots is the UI, not natural language processing.

I think people are completely overwhelmed by the massive lack of consistent UI the Web has brought us. I also think the more "apps" that could be brought into platforms like Telegram, the happier users would be.

There's also evidence of this in China, where hundreds of millions of people use WeChat for a large percentage of their needs.


Really? It seems that with the advent of Bootstrap and responsive design, website UIs are more consistent and generic than ever before.

Every corporate landing page is a jumbotron/full-width image, followed by three columns of bullshit, followed by a few rows of random glyphicons and more vague nonsense to cross the minimum text SEO threshold, and a footer.

The early internet was a much wilder place. Frames or no frames? Tables for layout or not? Dare we use an imagemap? Fuck it, let's do the whole thing in Flash. It was chaos.

Now, one thing that has also come to pass is websites with straight up cryptic UIs. Buttons no longer have labels. We have hamburgers and hieroglyphs. I think that might be more of a driving frustration.


I'd agree with you when you're talking about websites that are, for the most part, static and just conveying information. But when you're talking about actual web applications designed to get work done, I think things are wildly disjointed.

If I were to show you the web apps I interact with regularly, both personally and professionally, you'd see what I'm talking about.


Fair enough. I had not considered apps :)


I realize it’s called “WeChat”, but isn’t it super popular because of all the mini UI-driven apps inside of it?

I don’t get the impression that people really talk to bots much using that app. They’re either chatting with humans or interacting with UIs.

http://a16z.com/2015/08/06/wechat-china-mobile-first/


At the very least all those mini UIs are cornered into a more standardized system just from the limitations presented by the chat environment. On the other end, most of the functionality can be driven from textual commands the way SMS "apps" used to be.

Of course the elephant in the room to this entire thread is advertising and analytics, which IMO is the biggest reason the web has devolved so badly. I mean it's much harder to generate revenue if you're just serving me textual content that I'm interested in.


You might check Telegram’s games, services and other bots that let you manage your entire world via chat (with programmatic buttons and menus to ease /cmds a little). I don’t use it much, but my coworker said that he almost lives there, visiting sites only for long, static content.

From this perspective, “we” don’t “need” one, as the article says, since it’s already there and writing a simple bot is a no-brainer. (Downsides of a proprietary protocol are obvious.)


Genuinely curious, why don't you use it much?


I’m not really into social games in my language (maybe I should try English areas). Anyway, discoverability is a hard part of Telegram; you don’t see tg-links on the internet often. As for bots, I just haven’t gone there yet – no particular need. I’ve written two utility bots for my company, though.


In a lot of ways, chatops is even better than a CLI.

It can basically turn a channel into a shared terminal, with history, search, and the ability to comment.


Reminds me of a former coworker's Emacs workflow. He organized his work in orgmode. He used emacs for email, jabber, and irc. He would copy paste new tasks between email and orgmode, and he could copy paste the code snippets he was working on to irc if he needed help.


He'd get a lot of flak unless he gisted his snippet first. But, of course, we have gist.el.


Shout out to termbin for being a usable and simple CLI option here.


This can be, and probably is, automatically targeted by some simple pre-send hook. Basically, if you're trying to send >4 lines, automatically gist them and send URL instead.


And display hypermedia.


I remember being giddy about the potential of Ubiquity to bring the CLI to a wider audience, and bring more power to existing CLI users. Too bad it never played out.

https://wiki.mozilla.org/Labs/Ubiquity


That kind of stuff comes up every so often. I heard rumors in the early 2000s about Microsoft implementing a version of Office that was more commandline oriented (this was when people were getting sick of their "smart" menubar, before the ribbon interface).


My partner is a lawyer and we both agree that rich text editors are massively killing productivity in the space, where plain text is PLENTY powerful enough. The dumb shit they deal with around documents and formatting and PDFs. It's fucking dumb; plain text & markdown for life.


It's the rediscovery of mIRC like interaction workflows.


IRC...

But yes, Slack is basically IRC with a built in bouncer.

And in case people wonder: a bouncer, or BNC, was a personal proxy that would run on some server somewhere and maintain a presence on the IRC networks for you when you logged off.

And when you logged back on, you got a log of all the traffic on the relevant channels and direct messages while you were gone.


Yes. This is why Slack has been more successful than IRC.


Got most of my music through mIRC :p


In cases like that, how can you distinguish loving the CLI paradigm from loving someone else doing the work for them?


I remember on Hipchat you could edit your last message by sending a message using this (sed?) syntax:

s/flock/duck/

I remember hearing someone say that was the worst design they'd seen in their life. I almost agree.
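For context, the notation being borrowed is sed's substitute command, which does to a line of text exactly what the chat shortcut does to your previous message:

```shell
# sed's s/pattern/replacement/ rewrites the first match on each line,
# which is what the chat shortcut imitates for the prior message
echo "what a lovely flock" | sed 's/flock/duck/'
# prints: what a lovely duck
```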

I do agree that text interfaces are better. I really wish that more products were exposed through a purely textual interface.

I think HipChat could have solved the issue with a text interface better.

Also, this is why I really like Emacs. It has a great library for creating text interfaces. I had trouble with it because it is written in elisp/lisp.


I don't think the sed syntax was implemented in an effort to make a good interface; rather, they simply implemented what many (albeit technical) users have been writing for years. I've been using sed-like notation to chat with colleagues for years. HipChat just chose to do something about it.

I believe you could also edit your last message with the mouse, or by pressing "up" (but I don't remember if that was opt-in).


Correcting yourself using sed syntax is a common IRC idiom. In that sense the design is sensible; you build your interface on what a lot of users already do habitually and let it have the desired effect. For everyone else it's just a very terse and simple syntax that anyway has an obvious mouse based equivalent.


Skype supports the regex s/search/replace/ notation as well, which caught me off guard one evening.


Does it? I know it did 7 years ago on Linux clients, but AFAIR it doesn't now. It's definitely the feature I missed the most on the Windows client.


Last time I used that feature was probably a couple of years ago (also on Linux). But I've not used Skype much (if at all) in the last 12 months so maybe things have changed since then?


Discord too.


Thats the same way with architectural CAD design programs like AutoCAD

You can do things with the toolbars but every competent architect relies on CLI tools instead for common things like "move" , "scale", "copy", etc

I've never thought of Slack as a CLI for the masses; this is a great analogy. I should do more research on existing slackbots.


Has Slack innovated in the last 5 years?


I dunno. Do you think that's important? Why?


The buggy mobile client (Android) and dumpster-tier performance are two areas where they've been dropping the ball since release, and fixing them requires innovation in some sense of the word.

Those are both very important. Innovation is generally important and I'm half curious as to why you ask.


> Buggy mobile client (Android) and dumpster tier performance are two areas they have been dropping the ball on since release, and requires innovation in some sense of the word.

Making your program meet what really should be a minimum acceptable standard is considered innovation now? Multimedia chat clients that performed well and weren't crippled by bugs existed in the 90s.

> Those are both very important. Innovation is generally important and I'm half curious as to why you ask.

Has Slack's attraction ever been that it's an innovative product? My understanding is that it's all about convenience. It's like IRC+bouncer with some shiny things and without the hassle.


> Has Slack's attraction ever been that it's an innovative product?

Yeah, it's just IRC with a new hat, but sure, I'm willing to say it was innovative. Nothing like it existed and now many things like it exist. It was an innovation in the smaller parts, which, judging by how things have gone, are maybe not so small.

> Making your program meet what really should be a minimum acceptable standard is considered innovation now?

It's not ~disruptive techmologi~ but it would require genuine innovation in terms of creating a proper cross-platform native UI framework, or at the very least a large shift in their product to move to multiple frameworks (innovation in the company rather than in tech generally).


The animations serve a valuable purpose, though, especially for new users. They show what's happening. Lots of old UIs (like those running in VT100 emulators) had instant wipes from one view to another, but made it impossible to tell what had happened, or why. Even when I wish animation was faster (like with Spaces, sometimes), I rarely wish it didn't exist at all.

I often have people watching me, and with animations they say "you're working fast!", while in environments with no animation (like Emacs), it's just "I have no idea what you did". I want computers to seem efficient, not unapproachably magic.

What if animations started at a slightly slower speed, and gradually increased in speed the more you used them?

We shouldn't need to pick one animation speed for all users, but we also shouldn't make expert users tweak it manually. And I definitely want to see full animations for new applications that I'm not familiar with yet, but not those in old applications which I've seen 1000 times.


We can probably learn a lot about UI animation from video games.

In fighting games there are the concepts of animation priority[0] and cancelling[1], which essentially govern whether the animation for your previous move will block your next move, and whether an animation can be interrupted by a new move.

Most good video game UI animates after the fact, so you can navigate very quickly and the UI animation lags a little behind your navigation. That's mostly only possible with controller input, because with a mouse the animation must complete before the widgets are visible to be clicked on.

[0] https://www.giantbomb.com/animation-priority/3015-7740/

[1] https://www.giantbomb.com/animation-canceling/3015-1568/
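Those two concepts are easy to model in a few lines. A toy sketch, not taken from any particular engine; the class and method names here are made up:

```python
class AnimationChannel:
    """Toy model of fighting-game animation priority and cancelling:
    a new move interrupts a cancellable animation immediately, while
    a high-priority (non-cancellable) one blocks input until it ends.
    """

    def __init__(self):
        self.current = None  # (move_name, cancellable) or None

    def play(self, move, cancellable=True):
        """Try to start a move; returns False if blocked by priority."""
        if self.current is not None and not self.current[1]:
            return False  # previous move has animation priority
        self.current = (move, cancellable)  # cancels the current move
        return True

    def finish(self):
        """Called when the current animation completes."""
        self.current = None
```

UI toolkits could borrow the same distinction: most transitions should be cancellable so rapid input fast-forwards them, and only a few (e.g. destructive confirmations) deserve priority.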


YES! Please, Apple, listen. I hate that Safari scrolling on iOS is blocked during the animation of link opening in background tab. Let the icon fly, just let me use the browser meanwhile!


The classic Mac interface (one of the best interfaces of all time, IMHO) actually had a special animation to show how a folder opens from a small icon into a larger window with contents. The whole interface was very economical; they just could not afford extra cruft in those days, yet they decided that this animation was crucial for understanding.


Agree the Classic Mac animations were really good and didn’t really slow you down.

I’d compare that with the (IMHO too slow) animation when an app goes full screen on macOS.

Or look at Split View: it was clearly designed for iOS touch and then later shoehorned into macOS.

My pet theory is that they left out keyboard shortcuts for Split View because moving windows about with the keyboard would be too fast and would highlight how slow the animations are.

Using a touch pad or mouse is slow anyway so you notice less.


I agree. I've long thought that animation could potentially make it much clearer what's going on in Vim.

Like if the user types 'das' to delete the current sentence, and it quickly highlights the text to be deleted, and then shows it shrinking away, as the text that was following it moves in to take its place. (Obviously, you'd want the animation to happen pretty quickly).

Such animation could make it clearer what the command is doing, and this could also make it easier to tell if you'd accidentally typed a command that didn't do quite what you wanted.


You might like kakoune, a vim-like editor which focuses on interactivity: https://github.com/mawww/kakoune It flips around a few of vim's operations (like for example, to delete a word, you do 'wd' and not 'dw'). This allows it to highlight the text before performing an operation, which makes what you're doing much clearer.


I love Kakoune. Vim muscle memory and not being able to work out how to have per-language settings (for indenting) were the only things that stopped me from switching.


That’s great for someone watching over your shoulder, e.g. if you’re doing a product demo, but less good if you’re on your own. Then it just slows you down when you want to move on to the next task.


I gave reasons why it could help the user themselves. (Here's another: for users learning Vim)

And why should it necessarily make things slower? The animation could be quite fast, and I don't see why such animations would necessarily need to block or hold up future inputs or animations. (I do think, though, that you'd need to actually try it out to see if it works in practice)


I agree it’s possible to have animations that add useful information and don’t block the user.

In practice they are rare, certainly in consumer products. I can think of more products with janky UIs than I can ones with insanely great UIs.


I think they'd be great for new users but we'd need to be able to turn them off. No matter how fast and intuitive an animation is it's never going to be as fast as no animation.


I'd suggest just switching to movement first, by way of visual mode. "v" turns on visual mode, "as" selects/highlights the sentence, then "d" to delete it or escape to return to normal mode.


JIRA pages are "animated" by new widgets popping in, showing loading icons, and then finally displaying their final content. You cannot target any single element on the page while it does that, because everything moves around every 500 milliseconds or so. You just have to sit there and wait until it's all done. And then it has only loaded X of Y items, and you have to click to get the rest, which again leads to more layout changes. It's as if I'm watching the same Rube Goldberg machine over and over again.

Ok, I can see that something is happening. But that renders the whole UI unusable while it does. And it gets tiresome after the first time.



Poorly executed X does not make X bad.


Then again, we're not talking about perfect true X, but the typical execution of X. Typical execution of animations is terrible, so it's a problem.


> What if animations started at a slightly slower speed, and gradually increased in speed the more you used them?

Ubuntu's Unity does this; it keeps track of the number of times you've minimised a window, and as you minimise windows more, it progressively speeds up the animation (maxing out after 100 times).

Source: https://ubuntuforums.org/showthread.php?t=2204321&p=12923084...
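The mechanism is easy to sketch: duration shrinks linearly with use until it hits a floor. The specific numbers below are made-up defaults, not Unity's actual values:

```python
def animation_duration_ms(times_used, initial=300, floor=80, ramp=100):
    """Animation length for a user who has triggered it `times_used`
    times: full length for a first-time user, shrinking linearly to
    `floor` after `ramp` uses, in the spirit of Unity's minimize
    animation. All constants here are illustrative, not Unity's.
    """
    progress = min(times_used, ramp) / ramp
    return initial - (initial - floor) * progress
```

A new user gets the full 300 ms of explanatory motion; by the hundredth minimize the animation is down to the 80 ms floor and stays there.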


> What if animations started at a slightly slower speed, and gradually increased in speed the more you used them?

That's an interesting idea. Or how about if the initiation of a new animated action forced already-running animations to rapidly complete? Then if you're doing a bunch of things that trigger animations in rapid sequence, they would all effectively run at higher speeds, in proportion to your specific speed of working.


If you're very frequently cmd-tabbing between two apps on different spaces, that delay - any delay - becomes infuriating. I tried a 'one app per space' model a few months ago, because I wanted a one-key shortcut to any app, but the animation delay just made it unworkable.


I still think we should let expert users tweak it manually, but as a default I think that's a great idea!


The problem with animations is not that they are bad, per se.

They do provide a visual guide to new users of what just occurred. That has value for the new user.

The problem is that too many systems/programs that provide these animations forget that an animation that was useful to a new user the first few times becomes an irritating time sink by the ten-thousandth viewing. By then, many users already know what is going to occur, and just want it to occur so they can move on. But the system/program provides no "turn off animations" profile setting. So when one no longer needs the animation to inform of what occurred, one is still forced to sit through its delays. That is the big problem with animations.

Real world example. My Android 7.1 phone came with all the standard android animations turned on at the outset. At some point after having it for a short time, I discovered the animation time adjustment settings inside the hidden developer menu. After I set them all to zero (i.e., do not animate) the phone suddenly felt like it was 1000% faster.

So, in this case Google did include a "turn them off" feature (good). But they hid it inside a normally hidden menu inside the settings app (bad). Why was this hiding of this setting bad? Because most users will never find the developer options menu (because it is hidden) and therefore will never know about the "turn off animations" settings that could make their phone feel significantly faster immediately. So most will be stuck watching animations that make their phone feel slow, even when they already know the outcome and no longer need the assist provided by the animation.


What about GP's suggestion: The more a particular animation occurs, the faster it runs. Remove some milliseconds from it each time, down to some very low limit, so new users get the clarity benefit and experienced users get the speed.

I remember reading about such auto-adjusting UIs long ago, but the idea from then was things like shrinking labels to make room for more advanced features showing up. Not good for consistency, but auto-adjusting animations...


To be fair, with some software we are often not even targeting new users at the expense of power users - we are targeting the audience for our product demos.

If our animations make a punter go wow when they see the product for the first (and only - unless we get to stage 2) time, and they buy it, then it has done its job.


The speed of animations is not important.

The latency they produce is.

If you had a 10ms animation that blocks UI, that would be annoying.

If you had a 10 second animation that doesn't block UI, that would probably be fine.

Even if an animation feels like it blocks UI, that is a problem.


A good animation should be barely perceptible. If it’s too slow, the application feels sluggish; too fast, and you may not know what happened. This reminds me of a UX talk from Airbnb, where they have one guy who focuses only on animations. It’s a good idea to provide the ability to skip the transition with the press of a button, much as one can skip the dialogue in video games.


> If it’s too slow, the application feels sluggish.

> It’s a good idea to provide the ability to skip the transition with the press of a button.

It is not really about animation speed, but more about responsiveness; commonly animations block all input because it's easier to implement that way.

A few days ago there was a discussion about how the BEAM/Erlang ecosystem is designed so that everything is as responsive as possible. I think that's really cool.


You just gave me an idea for a vim plugin... I guess it's already done? Recent changes could be highlighted with a background color. For example changes from the last 10 edits would have green background. That way if I pressed a key by accident and caused vim to change who knows what, I could spot it.


If your UI is complex enough that it needs animations to show what’s happening, that’s a red flag.

You could add the animation, or you could remove things from the UI until it’s clearer.

Sometimes a “tool” for getting things done is actually a tool for helping you do bad things longer.


I wrote a comment under a different post just a few days ago. My comment got _way_ out of hand and wasn't as articulate as I had hoped. But the salient point I made with it is simple and applies equally well to this article.

UI is hard.

User interfaces seem really simple. Every programmer I know has looked at a UI and thought to themselves "I can code that in an hour!" and then ended up spending weeks, sometimes _months_, building the UI.

I believe UI is a big, unsolved problem in modern computer science. Just as hard as any other unsolved problem in our field. Right up there with general artificial intelligence and P=?NP. I'm not even joking.

We will eventually solve the problem of UI. But there are a thousand and one articles posted every day either A) complaining about existing UI or B) telling everyone the solution (without providing real, concrete new libraries, frameworks, code, etc). No one wants to admit that this stuff is just plain hard and we shouldn't beat ourselves up over the fact that we haven't solved it yet.

In other words, it's easy to complain. It's easy to display hubris. It's hard to put forth real, practical solutions. Or at least be humble and admit that we _don't_ have the solution.


I think that's absolutely fair. The pace at which the industry is starting to solve really big, hairy problems in standard ways is ever accelerating; the devops space right now is undergoing massive solidification under the Cloud and Kubernetes. UI is one of those problems I hope we're humble enough to admit that we got horribly wrong with HTML/CSS, and maybe we need to go back to square one.

But at the same time, let's pick on Slack. There's no good reason it needs to take more than 5 seconds per team to load. Maybe it's Electron, HTML, and CSS. Maybe they're loading too much data at the start. Maybe their Ruby servers are a little slow.

All of that could be true, but one thing is definitely true: There's a Product Manager or an Engineering Lead somewhere in San Francisco who looked at what their engineers had built and said "Yup, this lives up to the quality our users expect, ship it."

Jira is the same way. It's dog slow. It's full of UI bugs and inconsistencies. But people still buy it, and someone at Atlassian has to have said "we'll worry about this UI bug later; we have more reports for middle management we want to add to the product."

Our best minds will solve the tremendously difficult UI problem one day. But today, the best thing most of us can focus on is the People problem. Expect and Pay for better. Reach out to leaders at these companies on Twitter. A core problem in the software purchasing process is that, often, the people who buy the software aren't the people who have to use it day-in day-out, and often there's obvious lost productivity between the feature bullets on the marketing page and an engineer on an old PC her company won't upgrade for another year.


Will we ever solve UI? UI is the meeting place of tech and user, and in a sense it is THE problem for humanity right now. UI informs how we tend to use the internet, and for example we are currently involved with UIs that tend towards consumption (e.g. discovery mechanisms, feeds, etc). To solve UI would imply a certain optimal way to live our lives, if that exists. I agree with you that UI is unsolved and extremely complex, but I don't think it something to be solved as much as it is the political heart of technology.


There’s an inherent dichotomy in UI in that the most “efficient” interfaces generally have the steepest learning curves. You can be really efficient in a terminal or in Vim, but figuring those out takes a lot of work.

When we design interfaces we try to give them a short learning curve while exposing greater ability as you use them, but it’s still hard.


Completely agree, but we can also figure out how to make it easier. Take a look at http://kakoune.org/why-kakoune/why-kakoune.html#_improving_o... (this is the section that introduces the flip on vim's command structure, but is otherwise a great article to read top to bottom).

There are a lot of ways vim is right or not right, but this lowers the barrier to entry in a profound way. We can keep advancing like this, and stacking those advancements on each other until vim isn't hard. Maybe? What do you think?

I think one of the big ways vim is hard is what Kakoune attempts to fix - visualizing selection. Adding a "layer" to working in vim, we had `u` and `^r` to figure things out - doing something and knowing we could go back. Now with Kakoune we have movement before action. This adds another way the program can converse with the user.

Disclaimer: I don't use Kakoune, but I dig what it's doing and I want to try it out. I think it is a fantastic critique of vim.


I'm not trying to say Kakoune's exploration is without value, but Vim very much does have "movement before action" mode. It's the visual mode, entered by the "v" key. It's the main feature vim has over vi.


That makes sense, for example Reddit has a very simple concept with the upvote/downvote, but also more advanced features if you need them. But I would also argue that UIs also implicitly convey a preference for how data on the web should be organized, and that affects usage.

For example the feed, besides being a UI pattern, also claims that content on the web is for consumption and not exploration, and so we remain stationary on these aggregation platforms instead of 'browsing' the web as we used to. This is efficient but politically bad for society, and we would need to introspect on how we use the internet in tandem with the development of new tech like decentralized platforms.


DuckDuckGo "David Gelernter" and "Lifestreams". He predicted the feed over 20 years ago, but his vision accounted for the exploration aspect as well.


I think this is nowhere more evident than in the world of games. In my opinion, RTS games stagnated and died because they failed to innovate in the interface department, outside of a few like Total Annihilation / Supreme Commander. The classic Dune 2/Warcraft interface only works for selecting single units and squares. So when any bigger engagement happens, it's just people throwing blobs at one another and juggling group hotkeys.

I think RTS games should be controlled primarily by keyboard, not mouse. Select units of X type by pressing a key, target them at units of Z type, make the command 'attack' by pressing A. Vim-inspired.

It's not surprising that games are chiefly aimed at the least patient people - the goal of publishers is to sell to as many people as possible.


Hmm. To me, large-army games and small-army games are just very different games, and I have little interest in large-army games. I don't see Total Annihilation ever replacing StarCraft for me, for instance. If I could only control the game via keys, I'd be very uninterested as that'd be a very different kind of game. Most of these games went turn based for a reason.

I specifically like AoE II for the feeling of grabbing a pack of Mangudai and watching them quickly run somewhere and fire arrows. It's not realistic, but it's understandable and pleasing, in a way that a super serious military simulator isn't.

I think RTS, like many other genres, mostly "died" due to stagnation around clones, which I think has more to do with the industry at large that is not innovating all that much anymore, and RTS is often not the genre of choice for a small indie developer, either.


https://en.wikipedia.org/wiki/Learning_curve

> The familiar expression "a steep learning curve" is intended to mean that the activity is difficult to learn, although a learning curve with a steep start actually represents rapid progress.


Well, progress along the Y axis is only achieved if progress is made on the X axis too. If the curve is too steep, the user is expected to learn too much too quickly, and many will simply give up.


The X axis of an actual-jargon “learning curve” is effort, not time. Steeper slope means higher “learning ROI”—more learning out per unit of effort invested in.


> we are currently involved with UIs that tend towards consumption

Astute observation. Clearly this relates heavily to our culture of consumption and the asymmetry of creation/consumption.

But I wonder if another important part of this is related to the divide between how humans learn to do things and how computers can't teach humans very well. If we could automate showing people how to accomplish their goals, we'd be empowering creatives. So far, this task is relegated to youtube videos (e.g. "How do I make two shapes into one shape in Adobe Illustrator?").


>UI is hard

How many people are actually putting effort into their UIs? How many people do usability tests? I know Microsoft has done them before, but what about Slack? There are a lot of complaints here about their UI. I'm sure not many free software projects do usability tests.

You can't just slap things together and hope for the best, you need to have tests, and you need to run the tests again when you make changes. We've already learned this with the software itself.

I've been reading "Designing User Interfaces for Software" by Joseph Dumas from 1988. It enumerates common usability issues at the time, but many of them are still common: inscrutable error messages, inconsistent terminology, unexpected interactions, the list goes on. These are solvable problems, but they require a departure from "worse is better".

I'm not sure that UI is hard, it seems like it just requires effort to be put forth.


I designed a very convenient interface for my otherwise extremely simple Chess program.

In most chess games, the keyboard interface works like this: one axis has labels 1, 2, 3, 4, 5, 6, 7, 8; the other has a, b, c, d, e, f, g, h. Then you move pieces by inputting something like "b4 d5". This is AWFUL. You have to look at the border of the chessboard to see which row and column your piece is in. It wasn't designed for humans to play; it was for keeping records of matches and tournaments.

My interface works differently: each of your pieces with valid moves is assigned a key on the keyboard. For example, rooks could be labeled 1-8, others a-h. You select a piece to move simply by inputting its symbol. When that happens, spaces reachable with legal moves become labeled with symbols... you see where that's going. You look at a piece you want to move, read its symbol, then read the symbol of the place you want to move to. In each case you look at the pieces or spaces that interest you, not at the X and Y axes.
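The two-keystroke scheme described above is easy to sketch. Illustrative code only; the piece names and move lists are stand-ins for real move generation:

```python
import string

KEYS = string.ascii_lowercase + string.digits

def assign_keys(items):
    """Label each item (a movable piece, or a reachable square) with
    its own keyboard key, so the player reads symbols off the board
    itself rather than coordinates off the board's edge."""
    return dict(zip(KEYS, items))

def pick_move(legal_moves, piece_key, target_key):
    """Two keystrokes select a move: the first chooses a piece that
    has legal moves, the second chooses one of its destinations.
    `legal_moves` maps piece -> list of reachable squares. A toy
    sketch of the interface, not a chess engine."""
    piece = assign_keys(legal_moves)[piece_key]
    target = assign_keys(legal_moves[piece])[target_key]
    return piece, target
```

With two movable pieces labeled `a` and `b`, typing `a` then `b` moves the first piece to its second labeled square, and at no point do you consult the coordinate labels on the border.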


> I know Microsoft has done them before

And they added toggle switches to UWP (https://docs.microsoft.com/en-us/windows/uwp/design/controls...) that are usability disasters IMO.

When such a switch is used alone, you cannot tell what state it is in; you need to remember that "right means on" (and what about RTL environments?).

While there is no such problem with old plain checkboxes.


I've mentioned his name elsewhere in this thread but Aza Raskin had some very interesting thoughts on this subject.

One of his main assertions was that a big problem is the whole app paradigm. The whole notion of siloed apps bound to native widget toolkits is very limiting, and the host platform should really just provide a facility for running commands.


I think a simple data-display UI shouldn't be that hard to code. Every program you run has access to a terminal, which it can use to display and read text. It shouldn't be hard to extend that to displaying and reading structured data. One dream I have is to have a Desktop where each program can send an ioctl to stdout that turns it into a stripped-down browser, so it can then just dump XHTML data and have it be visualised on screen, while stdin feeds you json-serialized user input, e.g. user entered text X in input field Y, or selected file Z. It won't be useful for multimedia or anything like that, but will make building simple one-off UIs much easier.
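A minimal sketch of what that wire protocol could look like. Everything here (the message framing, the field names) is invented for illustration; no such terminal mode exists:

```python
import json
import sys

def render(xhtml, out=sys.stdout):
    """Ask the hypothetical 'browser mode' terminal to display markup
    instead of raw text, by framing it as one JSON message per line."""
    out.write(json.dumps({"type": "render", "body": xhtml}) + "\n")
    out.flush()

def parse_event(line):
    """Decode one JSON-serialized user event the terminal would feed
    back on stdin, e.g. text entered into a named input field."""
    event = json.loads(line)
    return event["field"], event["value"]
```

A program would emit `render("<form>...</form>")` once, then loop over stdin turning each line into a `(field, value)` pair, which is about as simple as a one-off UI can get.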


The difficulty is not just in coding the UI, but in crafting one that's easily and intuitively understood by users. We haven't yet established a universal design pattern for GUIs; to the parent's point, we have just a bunch of conventions that are often tossed in favor of something new.

A 'solved' UI example would be of a car (not the radio, but the operation of the vehicle). Learn to drive one car, and you can pretty much drive any car - same pedals, steering wheel, gear shift, turn signals, etc. I don't know if we'll ever get to something like that for software though.


IMO, intuitiveness is overrated.

As one snarky joke goes: only the nipple is intuitive, everything else is learned.


And as anyone with children can tell you, the nipple is far from intuitive.


Explain.


That's a hilarious joke; it made me LOL. If you want your product to be used, though, you have to think about what users already know how to do and try to build off of that. This is why baby bottles and skeuomorphic design are popular and effective.


Since when does intuition exclude learned information? It's very much relevant to, say, what interfaces people have used before.

You might be thinking of instinct.


One more thing RE car analogy. People are spending 30+ hours learning how to operate a car before being allowed to do it on their own on public roads. Software vendors no longer expect users to spend even 5 minutes learning. This is a cultural problem.


I wouldn't compare all GUIs with the car; the car is more like a particular type of GUI, say, word processors. Plus, the basics may be standard, but everything else (mainly the dashboard) is still quite a mess. I wouldn't say it's solved at all.


If you know how to drive cars, you will still not be able to drive trucks, motorcycles, or tractors. Software in general is more diverse than the types of vehicles in existence.

If you want to make a car analogy, then you should compare cars to browsers. And these have very similar UIs, the underlying stupefying complexity notwithstanding. So we do seem to create standardized interfaces for certain common classes of user interface over time.

I think that people tend to think in terms of discrete, dedicated appliances because this is how the real world is structured. Being faced with a small box with the overbearing set of capabilities of a Swiss Army jackhammer (atomic bombs included) ends in confusion unless a lot of time is devoted to discovering and learning all the magic spells to which that infernally picky and annoying thing will respond meaningfully.


As someone who has only been driving for a couple of years, please don’t make my UI as complicated as my car's 'interface'! I take your point about 'portability' (although different cars are notoriously different to drive - even the gear pattern on the stick varies between different manufacturers) but the initial learning curve is incredibly steep. If I really wanted to, I could learn emacs from a manual. I can’t imagine learning to drive without another human involved.


Building UI/UX demands designing an API for people to use, not for code to implement. The developer must take into account the "existing libraries" available to their end users (people or code), and the human equivalent of "libraries" is generally unspecified, varies greatly, and is slow to teach and learn.


The process to get there takes time. A clever, simple solution always looks easy to come up with.

Creating intuitive interfaces is a solved problem: you need a design-centered culture, a good design system, and user-centered methodologies (look into human factors, cognitive engineering...).

It takes more investment upfront, but it’s worth it in the long run.


I think that one of our biggest problems with UI is that it is immutable.

Users can't really change most UI. Developers have to make one ultimate one-size-fits-all UI for all of their users.


It's interesting to me that nobody has yet mentioned the Bloomberg terminal, which is basically what the author is describing. It seems Bloomberg "got this" a while back, and dug their heels in on their keyboard-driven, terminal-like UI while all the other shops were going Win32, Excel plugins, Web, etc. It's not sexy by any means (it looks like it's from the 80s on a mainframe), but it works well for a lot of people.

Images, check. Fonts, check. Command-driven, check. Lightning fast, check.

Moving traders and other front-office staff away from BBG terminals is next to impossible due to the drop in efficiency and familiarity.

Pretty much every "function" (more similar to what devs might call an "app") can be launched via a command typed into their command bar. And it's incredibly well indexed and fast to search. Everything from reading news, checking messages, seeing a quote for AUDUSD (then subsequently purchasing). No mouse required.

Edit: typos


This is super cool! Is there any OSS alternative to the Bloomberg Terminal?


> Moving traders and other front-office staff away from BBG terminals is next to impossible due to the drop in efficiency and familiarity.

This is what happens when a technology becomes so entrenched. People get so used to it that it becomes nearly impossible to switch. It's why I laughed when people talked about Google Docs replacing MS Office. They just underestimate how entrenched Excel is in finance and corporate America overall (especially with the bosses). Hell, even MS Access is entrenched enough that it'll be around for decades longer, even though MS is desperate to get its customers to switch to SQL Server.


I think this article points out some very valid problems (I relate a lot with Slack's loading, OSX spaces taking too long to switch between, etc), but the conclusions may be a little misguided. To address specific examples first:

1. Slack is slow to start, and, as others have pointed out, uses animations to remind the user that it's working and not just frozen. The fix here is to improve the application's performance, but there will never be 0 network lag.

2. Animations are good here so that you remain spatially aware of a "space" relative to other spaces. I agree that the animations are too long, and I myself have shortened their duration.

My biggest disagreement is with this quote:

"We should stop babying our users and try to raise beginners and the less technical to the bar of modern day power users rather than produce software that’s designed for the lowest common denominator."

I think the best software is intuitive to use for novices, yet leaves room for people to improve. I'm learning how to use OSX's Logic right now, and I've been really happy with the amount of guidance it gives me. There are options to expose advanced controls, and there are a fleet of really convenient shortcuts, but I have no problem getting around.


I think another example of this would be using something like Adobe Photoshop or Illustrator. Switching between tools can be accomplished by clicking an icon, a drop down, or some other GUI element. However, once you get to a certain level of skill, it becomes faster to just use the keyboard shortcuts.

For a novice, it's more forgiving. But for the expert, it's possible to be very efficient.

It's also possible to go too far the other way too. Not to date myself too much, but an old example of too much "expert" levels was back in the DOS days with Word Perfect. Keyboard shortcuts were the only way to do things (like formatting). So you either learned the magic codes (with three extra/meta operations per code) or you kept a cheat sheet on the keyboard itself. That was a learning curve to get to 'power-user' status.


I still use cheat sheets, but I design my own into wallpapers that fit my desktop perfectly, then have them rotating randomly every hour. It keeps me refreshed on shortcuts in my most-used programs, and adding a new one to the rotation is a great way to learn all the shortcuts for a new program.

It's not like I ever leave my desktop visible for a pretty wallpaper to matter. My nice ones are on my lock screen.


Would you mind sharing some?


So to sum up, Terminals Are So Responsive & Fast Users Always Feel It [1]. Animations are bad because they take too long but add no value [2], and terminals render non-english characters super well [3].

I've been researching for a bit, and actually research on how to make a "good" and "accessible" terminal interface is pretty thin on the ground. You can find a lot of opinions but very few with any data backing them.

[1]: They're not. https://danluu.com/term-latency/

[2]: They do add value in helping direct user attention. https://www.nngroup.com/articles/animation-usability/

[3]: Right-to-left is still poorly supported by shells, and the installation and support process for custom glyphs in terminals is often extremely complicated.


I think your criticism misses the mark.

[1] You're answering an article that starts with a 45-second video of Slack opening (surely the worst offender among modern apps) with something about latencies measured in thousandths of a second.

[2] As Nielsen is focused on web apps and applications, this advice is less applicable to UI provided by the OS that you interact with all day in presumably familiar ways. Note that generally "directing user attention" isn't necessary in a shell: output always comes at the bottom. However, be sure to read the "Frequency: Don’t Get in the User’s Way" heading that includes almost verbatim the OP's points against animations that slow the user down.

[3] Yes, internationalization is still hard. At least for Terminal.app, this is basically a solved problem, but of course it's possible for terminal apps (ncurses etc) to need custom support. The situation with web tech is about the same, these things are solved for you if you stick with the basic technologies, but if you get fancy you may need more explicit support.


> 45-second video of Slack opening (surely the worst offender among modern apps

I hate to defend Slack as it's far from snappy, but comparing it to a console app seems hardly fair.

I wonder how much of its slowness is due to network requests? I could make the argument that git is slow because cloning a large repository takes a while.


This goes to the article's point about caching; a chat app -- of all programs -- should cache conversations for a fast boot up. Sure, _updating_ the conversations -- the "cloning" stage -- may take some time, but why should you have to wait for the network requests to complete before seeing your past conversations?

The git equivalent would be if you had to wait for git to do a fetch/pull every time you ran "ls" on a git-controlled folder. It would be insane, and no one would use git (or any other version control).
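As a sketch of that cache-first idea (the file path and function names here are hypothetical, not how Slack actually works): render whatever history is on disk immediately, then reconcile with the server afterward.

```python
import json
import os
import tempfile

# Hypothetical on-disk cache location for illustration only.
CACHE_PATH = os.path.join(tempfile.gettempdir(), "chat_cache.json")

def load_cached_messages():
    """Render whatever we have on disk immediately, even if stale."""
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH) as f:
            return json.load(f)
    return []

def refresh_messages(fetch_remote):
    """Fetch the latest messages (the slow, networked 'clone' step),
    then persist them so the next startup is instant."""
    messages = fetch_remote()
    with open(CACHE_PATH, "w") as f:
        json.dump(messages, f)
    return messages

# Startup: show cached history first, then update from the "server".
history = load_cached_messages()  # instant, no network needed
history = refresh_messages(lambda: history + [{"text": "new message"}])
```

The point is just the ordering: the UI can be populated from the cache before the first network request ever completes.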


Don't get me wrong, Electron has a lot of issues. I think Slack should switch toolkits. But I don't see how we argue that a famously poorly implemented app is somehow indicative of the entire space of GUI apps and that a space equally fraught with UX issues is somehow obviously better.

Especially when we have excellent examples like VS Code, which is cheerfully giving Neovim and Emacs a run for their money and displacing longtime contenders like Sublime, because it's genuinely quite good and plenty fast enough for most folks.


Yeah this was my takeaway as well. Category error - author didn't like electron apps, and therefore wants to dump GUIs in favor of terminals (which would slowly become GUIs with his improvements).


Slack's problems have zero to do with Electron.


I think maybe its security problems have something to do with Electron.

However, I agree Slack's got a lot of issues that are just bad implementation choices.


That's not really a fair comparison. Git changes, but the version you have and are looking at is fixed. With Slack the conversation changes, and you can't let the user respond until they're up to date; if you did, people would be on HN ripping Slack for causing confusion. Not only that, but the messages can literally change from one moment to the next. What happens when a user deletes a message?

The equivalent with git would be having each directory be a submodule and someone is rewriting the history out from under you every few seconds.

For technical people, it's easy to assume we understand the problem, especially when we see what looks like a questionable design choice. But when we do this to applications we don't have experience with, it only hurts us collectively.


It's obvious from the outside, isn't it? Hindsight is everything.

But the reality is that Slack is an application made by a real company, by real people and developers who all faced real constraints. Go build Slack yourself, through the same history they have, and let's see what we come out with.


I hope someone takes your advice so that we can dump the slow bloated resource hog that is Slack :)


I do not see how that comparison is unfair. Terminal IRC clients, for example, accomplish the exact same functionality as Slack in a curses UI in a vastly faster way.


Wouldn't say that IRC has the same exact functionality as Slack. But the Slack mobile app does, and it runs far better than the desktop version (starts up in 3 seconds on a low-powered device instead of 30 on a much faster machine).


I've read this opinion a few times and at one point (before trying mobile slack) I believed it. Is this true for iPhone users maybe?

I've used slack on mobile and I find it worse than the desktop version.

It starts slightly faster (10 seconds vs. almost 20) but navigating through channels is bad (and slower) and threads are completely unusable. It also runs my device hot like nothing else.

The faster startup speed isn't a big victory for me because it's still a factor of 10 away from what I would consider acceptable.

The "best" interface for slack, IMHO, is the web version, provided that you are already using Chrome.


But somehow there's all these people that want to use Slack over Irssi.


Probably because IRC is an outdated protocol with inferior features to Slack and very few good open source options for connecting. For example:

* DCC is still unreliable.

* There is no audio connection option, which is quite popular in 2018.

* Channel history management is ad hoc

* Authentication is done in band, in plain text.

* "secured" channels rely on this bad authentication, and if they don't (perhaps electing to manage it themselves) network flaws can completely steal your channel.

* IRC isn't even particularly open source, many servers and networks have private patches that are not shared publicly.

IRC isn't better than slack. I agree with you that it's confusing why people suggest that it ought to be.

Also, most IRC networks are controlled by entities even more inscrutable than Slack's executive team and board. I can go look up who runs slack, I cannot actually find good details on who runs any given IRC network. I have no idea what I'm dealing with or how they're using my data. I have no legal recourse if I do discover bad behavior, and I'm forced by the ossified protocol to keep using their insecure authentication mechanisms which make abuse trivial.


> ... I cannot actually find good details on who runs any given IRC network. I have no idea what I'm dealing with or how they're using my data. I have no legal recourse if I do discover bad behavior...

Ah, the good old days of the Internet. You make me so nostalgic.


Back when I was a kid this sounded romantic. Now I just know it means that folks spy on me for fun.


As opposed to the non-anonymous internet?


Certainly I use a lot more software products by a lot more diverse sources that all have a lot more accountability to the consumer. This, at least, is a positive trend. One of the few good parts of the commercial software movement is more accountability and higher minimum standards for the consumer. I think that shows in trends in modern computer adoption.


[1]:

Maybe you should read the article and see the subsequent videos (e.g., animation jank), where we are in this time domain, rather than skipping that part? TBF: the videos didn't play for me without opening them in a different tab. I'm not sure how they accomplished this, since typical embedding tags don't have this problem.

[2]: > As Nielsen is focused on web apps and applications, this advice is less applicable to UI provided by the OS that you interact with all day in presumably familiar ways.

Multiple interactions were offered in the article that were native apps that had animation. Principles of UX are not exactly the same between local and web, but the principles of how animation guides user attention and context are more universal than you make them out to be.

For example, I love to make fun of the 1pass animation but I think it does serve a valuable purpose: making sure the user has realized the environment is now authenticated as a result of password entry. A unique cue for that is a good idea.

> Note that generally "directing user attention" isn't necessary in a shell...

This is fantastically wrong!

We have a long history of working to make sure that user attention is directed in shells! From guidance on how to do a menu with highlighting (folks have settled on doing the chosen option in a select menu with inverse text and an extra glyph to keep degenerate cases like 1-tuples and 2-tuples from being ambiguous) to ongoing refinements to midnight commander.
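The inverse-text-plus-glyph convention described above can be sketched with raw ANSI escapes; this is a toy rendering for illustration, not any particular tool's implementation:

```python
REVERSE = "\x1b[7m"  # ANSI SGR 7: inverse video
RESET = "\x1b[0m"    # ANSI SGR 0: reset attributes

def render_menu(options, selected):
    """Render a select menu: the chosen row gets inverse video plus a
    '>' glyph, so even a one-item list unambiguously shows which row
    is active (the degenerate-tuple case mentioned above)."""
    lines = []
    for i, option in enumerate(options):
        if i == selected:
            lines.append(f"{REVERSE}> {option}{RESET}")
        else:
            lines.append(f"  {option}")
    return "\n".join(lines)

print(render_menu(["apply", "discard", "quit"], selected=1))
```

The redundancy is deliberate: if the terminal drops the color attribute, the glyph still marks the selection, and vice versa.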

Another example in a more command-line domain is silver searcher's bash integration, which makes history search a lot better.

And there is a whole world of UI around log aggregation, search and UI that extends onto the console and has seen rapid evolution over the last 5 years.

[3]:

> Yes, internationalization is still hard. At least for Terminal.app, this is basically a solved problem

It's really not. Lots of tooling breaks.

> The situation with web tech is about the same,

CSS makes this dramatically better on the web side, and we haven't even gotten to support for folks with physical differences that make precise keyboard input or visually tracking a screen difficult.

The web is WAY more accessible to non-english-speaking people, people with physical differences, and people with issues focusing in the way terminals must demand you do.


> The web is WAY more accessible to non-english-speaking people, people with physical differences, and people with issues focusing in the way terminals must demand you do.

A bunch of people (including Kay and Victor) have talked about this, but accessibility is only one part of the interface. Pieces of software that are productivity related should be easy to get started with and be accessible, but also allow you to become more productive. A lot of tools (like 1Password) focus on the former (accessibility to the lay user) but don't focus on the latter. This isn't some sort of unattainable ideal though: Excel and Powerpoint are great examples of tools that make it easy to just play around with as a beginner, but also to really reward power users.


And in the spirit of fairness and inclusivity to the console, Lotus 123!


> So to sum up

You start with this, and then make two-and-a-half points (I'll kinda give you animation) that the article doesn't. How is that a summation?


IMO animation is only useful in certain cases like notifications and smooth scrolling.

If the user is the one initiating an interaction though, there's usually no need to animate anything (opening a menu, for example) because he/she is already expecting something to happen

Edit: s/animation/transitions/g


In my opinion, animations should be like reverb in music. If it's noticeable, you're probably using too much (surf music excepted). A little bit of animation can smooth things out and enhance the experience with helpful cues. Too much makes the application a pain to use.

> he/she is already expecting something to happen

Just as an example, if I were to trigger Mission Control [1] in macOS without animations, it would be pretty jarring. In this case, I control the speed of animation with the speed of my mouse gesture, and it's quick enough not to get in my way, but animated enough so that I have a sense of continuity between my full screen web browser and the view of all windows on that desktop.

In contrast, the default minimize/restore animations in macOS are too long and cutesy for me.

[1] https://en.wikipedia.org/wiki/Mission_Control_(macOS)


I think animations are pretty useful for some touchscreen UIs, for example if you have a menu that can be swiped away, or to attract attention to things that happen without user interaction. The only issue is a lack of consistency. Also, animations can hide load times while making things seem fast (see iOS app opening animation)


I obviously disagree with your interpretation of the article. I think the author made these points implicit in their demonstration.


What does "accessible" mean?

Screen readers sure work better on text than on GUI elements, for one.


How do screen readers work with curses interfaces? Genuinely curious.


Poorly, IMO. This became clear to me when I had to talk a blind Windows user through installing Red Hat Linux with the Speakup screen reader in early 2001, using Red Hat's text-mode installer. That was when I realized that Windows had actually become more usable with a screen reader than screen-oriented text-mode UIs in a terminal.


hmm I have a couple of blind friends, and one was an avid redhat user in the late 90s/early 2000s. I didn't reply to the question because I honestly can't remember, but in the late 90s, at least, dealing with Linux was a lot easier than JAWS on windows.


The transition from DOS to Windows in the 90s was difficult for blind users as well as screen reader developers. For some blind techies who were comfortable with DOS, Linux was indeed a more attractive next step. I was heavily involved in the blind Linux community from 1999 through 2001, and helped several newbies get started with Linux.

But that's ancient history. Even as I was deeply involved in the blind Linux community, Windows screen readers were getting good, particularly for everyday tasks like web browsing. Today, there would be no reason for any blind person other than a programmer or sysadmin to use a command-line interface, let alone a screen-oriented terminal interface.

To see why a screen-oriented terminal interface isn't in fact blind-friendly, consider that Red Hat text-mode installer I talked about last time. On screen, you have an approximation of a GUI using line-drawing characters, some ASCII art (for check boxes), and colors to convey where the focus or selection is. Suppose you're in a list of check boxes with buttons below it. What does the screen reader read when you arrow through the list? When you toggle a check box with Space? When you tab to one of the buttons? With the Linux Speakup screen reader in particular, the output wasn't at all intuitive, and one often had to use screen review commands to be sure of what was happening. I wish I still had a copy of a tutorial I recorded in late 2000 where I walked through the installation of Debian with Speakup. (The Debian and Red Hat installers were and are very similar in this regard.)

Contrast that with the Fedora or Ubuntu graphical installer running under GNOME with the Orca screen reader. Like other major GUI platforms, GNOME has an accessibility API. Screen readers and other assistive technologies can get a tree of UI elements, and receive events about those elements. Assuming the application implements its side of the API (and often the UI toolkit takes care of this), a screen reader has easy access to high-level information about the widgets on the screen, what's happening to them, which one has the keyboard focus, etc. So when you arrow through that list of check boxes, the screen reader can say things like, "Web server, not checked". Then when you hit Space, it can just say, "checked". Finally, when you press Tab, it can say something like, "Next, button". It's clearly a much better experience.

I'm happy to answer any questions if anyone is curious.


Generally you fool them into spitting raw text and carefully crafted menu summaries into a socket that speaks text at high speed.


I highly disagree that the animations are superfluous for the vast majority of users. Yes, they can be superfluous, but that's not inherent to their existence.

History shows us that consumers value good UX, of which animation is a key component. The iPhone wasn't the first smartphone, but it was the first one to take UX as seriously as the hardware.

As for the examples:

- Slack: Yes, it takes forever to load and I hate that, but the real problem is performance, not the animation. Would it be better if no animation happened and nothing informed the user about what's going on? Keep in mind that the animation also serves to inform the user that it hasn't "frozen", so a still interstitial would be a regression.

- Spaces: The animation tells the user what is going on! Having the entire screen change instantly would be confusing for the vast majority of users.

I do value choice though, so perhaps there should be a setting for power users to disable/minimize them.


> Spaces: The animation tells the user what is going on! Having the entire screen change instantly would be confusing for the vast majority of users.

That's an interesting point, and it makes me wonder if there's a level of nuance to be found here. For example, animations are acceptable iff they do not extend the time required to perceive the requested action.

In other words, it's already going to take me some fraction of a second to perceive any change; animations within that fraction of a second are perfectly fine. Anything that extends the change past that fraction of a second, however, is eating into my productivity (or at least, so the author would claim).


When I upgraded my phone from a Nexus 5 to an S7, I was initially confused that my new (three years newer, higher-end) phone seemed sluggish compared to my old one.

It turned out that I'd enabled an option to double animation speed on the old phone. Enabling the same option on the new phone made everything as fast as expected.

That said, I really appreciate the (faster) animations. I wouldn't want to do without them entirely.


The user could be told what's going on without breaking the entire flow, though. That's what status bars did. Even if they're not the answer now, there are endless ways to notify a user of an action: the old rotating "e" of Internet Explorer, a progress bar embedded next to the relevant part that's loading, a tick mark that shows up then fades away when an item is done, etc.


> - Spaces: The animation tells the user what is going on! Having the entire screen change instantly would be confusing for the vast majority of users.

The video the author links from Minority Report of an "incredible and desirable" UI has animations seemingly for this exact reason. One is basically the same as Spaces. https://www.youtube.com/watch?v=PJqbivkm0Ms

I think UIs that give intuitive feedback are the best UIs, even if they are a tiny bit "slower". I'm happy to memorize a bunch of VIM commands because I write code everyday and it makes me much more productive, but I don't want to have to do this for every application. Especially once we start physically interacting with them.


The single point that I disagree on:

> "Let’s dig into it by looking at the aspirational interface concept from a great movie: Minority Report. (…) I think we can all agree that the interface of this prospective future is incredible and desirable"

I guess this single scene of a movie has distracted interface development like no other vision. However, it's just a terrible interface: working over prolonged stretches of time, gesturing with outstretched arms, would be simply impossible; you'd have to memorize a complex alphabet of gestures that makes the command set of WordStar shrink in comparison, not to speak of the visual clutter and the (perceptual) bandwidth required. Please, let's stop speaking about Minority Report in this context. (It's a nice visual effect in a movie, but nothing to aspire to in real life, as is true for most movie FX scenes.)


P.S.: If you're looking for inspiration from a movie, have a look at the status screens and their update process in 2001: A Space Odyssey, which were grossly overlooked in favor of the movie's other effects and didn't have much real-life impact. (For the era they came from, in their clarity and economy, they are just in line with the article.)



Interestingly, while the site (which is great BTW, thanks for the link!) features all kinds of minor UIs in 2001, the major status screens, which can be seen in various places and scenes, are not covered in any way. (Again!)


Even more relevant - the Sci-Fi Interfaces blog, which analyzes various science-fictional interfaces in movies from an actual UI/UX standpoint.

https://scifiinterfaces.com/category/ghost-in-the-shell-1995...


I recently got gifted an old Commodore PET. It boots straight into BASIC, so anything you type can be a command or a program, but what’s even cooler is the way the console (they call it “monitor”) works. If you press “up”, rather than scrolling through past commands one by one like in Bash, DOS etc. the cursor simply goes up the screen. You can modify anything you see and hit return to commit. This can be a previous command, a line of code or even the contents of the system’s memory (!).

It’s a really interesting form of direct manipulation that I have only ever seen matched by “document” style interfaces like Matlab, R etc.

If anyone is interested, I’ll do a write up later with some videos or something.


You may like Acme: https://research.swtch.com/acme


IIRC all Commodore machines did this. The C64 did. That's how you edited a program: you LIST'ed it, and then scrolled up and changed the lines on the screen. When you hit ENTER the new line overwrote the original one.

The C64 did not have any way to directly show or edit system memory, though. That's cool.


The C64 was indeed one of the only Commodore machines that did not ship with a Machine Language Monitor. You could however install one: https://www.c64-wiki.com/wiki/Machine_Code_Monitor


Sure. I even wrote my own, as an exercise in learning 6510 assembly.


Are you forgetting about POKE and PEEK?


No. But an actual monitor was a much easier way to directly access system memory.


Emacs’ Dired mode lets you interact with the file system this way. Imagine the output of ‘ls -l’ as an editable document — seeing this in action was one of the killer features that made me an Emacs user.


What's an Emacs? Running that command just seems to reduce my disk quota.

The functionality you're describing sounds pretty much like vidir (from moreutils[1]), though.

[1] https://joeyh.name/code/moreutils/


> What's an Emacs? Running that command just seems to reduce my disk quota.

It's an operating system that offers the sanest capabilities for productive work with everything that's primarily textual, or could be made to be primarily textual.


Ranger is a similar application that I don’t use enough.

http://ranger.github.io


vidir is a low-fidelity imitation of Emacs dired mode. Emacs has had dired since at least the 80's.


Funny thing is that this sounds similar to what was Raskin's big idea for computer interaction.


And many years later, Bret Victor


I used to love this on my Apple 2.

It exists nowadays as Emacs in the built in eshell.

The buffer is the shell is the buffer.


Yes! I am so glad somebody called out the UI animations! The one in 1Password bugs me every time, I don't use Spaces because it takes too long for the animations to play out. There are more examples.

Every time a programmer adds an animation, a settings option should also be added to "disable animations". Advanced users will love you for it!


I have a Wells Fargo auto loan. Their site is separate from the main Wells Fargo site. They have no animations (not even spinners) and basic web forms. The site feels so fast compared to other sites it's jarring. I often feel the need to double check that what I just did actually got applied because it feels like someone is presenting a mock UI demo with static images.


It would not surprise me in the least if the 1Password unlock animation is a deliberate attempt to hide the time it takes 1Password to decrypt your password list.


Stating that they are decrypting (and it takes time) would probably change users' opinion from "this is slow" to "this is secure"


I seem to recall reading this when I investigated before. Article is shooting the messenger in a way, the animation is not the problem. Agilebits (the co who makes 1password) generally aren't the types to introduce superfluous elements.


It's not the decryption process that is slow (computers can do AES really, really fast these days); it's the deliberate slowness introduced by PBKDF2, which attempts to thwart brute-force attacks.
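That deliberate cost is easy to see directly. This toy sketch (parameters are illustrative, not 1Password's actual settings) shows key-derivation time scaling with the PBKDF2 iteration count:

```python
import hashlib
import time

def derive_key(password: bytes, salt: bytes, iterations: int) -> bytes:
    # PBKDF2 runs the underlying hash `iterations` times, so cost
    # scales roughly linearly with the iteration count.
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations)

start = time.perf_counter()
derive_key(b"hunter2", b"some-salt", 1_000)
fast = time.perf_counter() - start

start = time.perf_counter()
derive_key(b"hunter2", b"some-salt", 200_000)
slow = time.perf_counter() - start

print(f"1k iterations:   {fast:.4f}s")
print(f"200k iterations: {slow:.4f}s")  # noticeably slower, by design
```

Once the key is derived, decrypting the vault itself is nearly instant; the perceived unlock delay is mostly the iteration loop above.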


Animations are cool the first time you see them. The tenth time, they are just aggravating.


I'm confused. By Spaces animation, are we talking about the horizontal sliding transition?

On my machine (High Sierra) the transition time between Spaces is dependent on the finger gesture swipe velocity. I'm not really sure I would even call this an animation; the Spaces x-offset is being adjusted as I move my fingers along the track pad in the same way as scrolling up/down in a browser behaves. There's literally no waiting for the "animation" to complete; when I lift my fingers I'm either in one space or the other.


Compared to i3wm on Linux, it takes ages. I could probably switch workspaces three or more times (using the keyboard) in the time that a single Spaces animation takes to complete.

Also, I am on High Sierra too, and there is at least a half-second lag between when the gesture ends and when the animation is complete. Taken together with the time it takes to initiate the gesture, I'd say we're around 0.75s.


There are keyboard-based ways to switch spaces too.

I do find that if you use the keyboard to switch spaces quickly, the animations become faster too. Unfortunately, apps in the other space do not get new keyboard input during the animation.


Incidentally, I am on Xubuntu, and I would love to have the workspace switching transition that macOS has. Switching workspaces on xfce is lightning-fast but I would love to have an indication on whether I moved left or right. (other than the small indicator in a Panel) Ubuntu solves this with a HUD (or at least did with Unity).


Agreed on the gesture, I feel that’s implemented well. But I suppose the author is talking about the animation you get when using the keyboard. I also find myself annoyed switching spaces on macOS vs. dwm, tmux, etc.

Something else: I got myself a tablet a few weeks ago and now find myself disliking the constant scrolling and wishing for (instant) pagination instead.


> Every time a programmer adds an animation, a settings option should also be added to "disable animations".

You can disable every animation in OS X itself via the command line (defaults write). I put them all in a shell script that I run on new installs. You may be out of luck with 1Password.


Unfortunately, not every animation. In particular, the Spaces transition animation that the OP is complaining about is not one of the ones you can disable with `defaults write`.

I re-check if it's been added with every new macOS release; no luck so far :(


One way to make the Spaces animation tolerable is to enable "Reduce motion" in Accessibility preferences.

https://i.imgur.com/zg4gZA7.png


This is amazing - thank you!


A related sort of idea, which has been posted to HN before but never gotten a huge amount of attention is the Arcan project [1].

Basically an interesting implementation of a display server and desktop environment being worked on by a lone dev as far as I know. Really impressive stuff, and in the author's own words: "it is keyboard dominant, building a better and more efficient CLI than the flaccid terminal emulators of yore".

[1] https://arcan-fe.com/


lone dev here, and thanks for noticing - so this is where the traffic came from :-)

The lack of attention (and releases, not representative of the half a million lines of C code and about 100k of Lua it entails) is mostly by design. To a large extent, I prefer obscurity, to the point that productivity dips and lethargy sets in around release bursts; it opens up old mental war wounds from academia (also, getting a PhD wasn't worth it).

The posts etc. so far are mostly in terms of documentation, not dissemination or politics. Coming soon: "Arcan vs. Xorg - far beyond feature parity" and "The Divergent Desktop"; those will show how these things fit together. The latter expands a lot on some of the ideas in this article.


I think it's a fantastic project! Seems like a lot of what I want in a desktop. Looking forward to the new posts.


Well, it's pretty awesome. Good work.


<Slow clap>

I'm switching away from macOS to Linux with the i3 window manager for precisely this reason. But all of his criticisms of terminals are spot on: no multimedia, no support for anything other than monospaced fonts, etc. Lord, somebody give me a terminal program that produces laid-out text and can show inline video.


The old Symbolics Genera had a "terminal" which was extremely interactive and object-based, yet still worked much like a modern text-only terminal. I still love working in that operating system. Pictures, fonts, diagrams, etc., were all supported well. I have not tried video nor seen examples of it, but those computers were responsible for some CGI in movies in their time, so it may have been supported.


I suggest Chrome or Firefox.

I find the conclusion of the article totally off the mark. The author seems to not understand that the problems begin with multimedia support and other "gimmicky" stuff (as he puts it). You want video? Then use your terminal to launch a video player. A tiling window manager is precisely perfect for this (I wonder why you switched to i3 if you don't know that, btw).


Oh, I know that. But what I'd really like is something akin to Jupyter, only for the shell. I think I'd like that, anyway.

For example, right now I can issue a shell command that lists CPU utilization by process (top). I can even have that command autorefresh, showing me changes in real time. But to do that, it takes over the shell. It'd be neat to think about a shell where I could issue a top command that displays its output and exits, giving me a shell prompt again, but where I have the option of asking the shell to update the old output every N seconds.

Yes, I could in i3 spawn a new shell and just keep top running in it. And maybe in the long run that's the better interface. But doing that screws up my carefully constructed window layout.

I guess what I'm saying is that we have two interface paradigms: the gui, and the command line. But interfaces like Jupyter and Mathematica show that there are middle grounds between those two extremes, and that middle ground is interesting.


Personally I use GNU Screen in addition to dwm, but mainly because I use a 15-year-old Celeron PC as a network console, which for some reason puts a one-to-two-second tax on window creation. This setup is quite flexible, but you can get lost easily, in particular if you also use ctrl+Z carelessly.

An alternative is Emacs, which will give you shells and windows and splits and somewhat-interactive documents (org-mode), and has some support for images. If you're ready to sacrifice an hour per day to Emacs configuration for the next twelve years, it will do your bidding eventually...


> screws up my carefully constructed window layout.

To me this is off. With a tiling window manager I don't have to "carefully construct" a window layout. That's the window manager's job!


There's nontrivial mental burden incurred every time the window manager changes where the windows are, at least for me.


How about instead of something merely like Jupyter you just use precisely Jupyter? I'm sure someone wrote a shell kernel, and if not, use ipython's shell magic.


That... could actually work. In fact there is a bash kernel for Jupyter. Not a lot of info about it on the github page, but it's there.

Then the issue becomes how well bash output takes advantage of Jupyter. Research forthcoming.
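For anyone wanting to run the same experiment, the setup is roughly the following (assuming pip and Jupyter are already installed; these are the two steps the bash_kernel package documents):

```shell
# install the community bash kernel and register it with Jupyter
pip install bash_kernel
python -m bash_kernel.install

# then start a notebook and choose "Bash" as the kernel
jupyter notebook
```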


If you use mlterm (and supposedly xterm with the right compile flags) you can have inline pixelmaps via DEC -regis- sixel escapes.


Actually I think ReGIS is for vector graphics; SIXEL (six pixels per character) is the pixel graphics one.

I seem to recall that at least one terminal browser can pull some tricks to insert images into the window in X as well.
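For the curious, a sixel image is simple enough to emit by hand: `ESC P q` enters sixel mode, `#n;2;r;g;b` defines palette color n as RGB percentages, each data character encodes a vertical strip of six pixels (character code minus 63 gives the bitmask, so `~` = 0x7E sets all six), and `ESC \` ends the sequence. A minimal sketch that draws a small red bar, which only renders in a sixel-capable terminal such as mlterm:

```shell
# a 10x6-pixel red bar as a raw sixel sequence; terminals without
# sixel support will just print it as escape garbage
sixel=$(printf '\033Pq#0;2;100;0;0#0~~~~~~~~~~\033\\')
printf '%s\n' "$sixel"
```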


Oops, you're right.

The browser you're thinking of is w3m.


mlterm also works great with non-monospaced fonts (but doesn't, to my knowledge, have a way to switch between fonts within terminal output, which seems to be what the OP wants).


I've always wondered what a tiling window manager provides over tmux. This isn't a rhetorical question, I've never used a tiling window manager.

Anyway, a separate space with full-screened iTerm2 + tmux on macOS is my preferred way to work. I can swipe left to check Slack or a browser, then swipe right to go back to my terminal.


A window manager manages windows. All of them. Tmux can't manage your PDF viewer or Firefox windows.


With i3, all windows benefit from the splitting/tiling behavior. Not just terminal windows.


You can get all of that with Electron-based terminals. Perhaps take a look at Hyper.


Ugh, I can't see any advantage that would justify my terminal emulator running on Electron.


I downloaded Hyper on your recommendation. I can't see how it supports inline multimedia. What am I missing?


It doesn't.


Looks like you are ready for TempleOS.


Huh. Just went and read a review.

Not for everyday use, of course. Network connectivity (and application support) are nonstarters. But that shell interface does look <ahem> inspired.
