I can see this becoming one of those canonical pages that is still being referenced 10 years later. Being short enough to share and simple enough to understand makes this a great resource. I have tried to convey similar feelings to people who love the applications you've used as examples. Maybe this will help.
Great use of images to demonstrate your points. Many articles lately seem to just add in unrelated images for no reason.
> HTML and CSS gave developers perfect visual control over what their interfaces looked like, allowing them to brand them and build experiences that were pixel-perfect according to their own ends
I'm not sure this is quite right. It takes a ton of work to get an HTML/CSS page to display properly in every browser. I think my response is specifically related to the use of the word "perfect" -- maybe something else like "total" would be more appropriate.
All I could think of while watching the clip from Minority Report was how tired my arms would get from all that full-range motion.
I like the submitted title better than the title on the actual page. You should consider revising it!
Calling out Slack in particular may have been a little incendiary, but I hope that it's adequately conveyed that it's a general problem and not meant to be a particular slight to them.
> Many articles lately seem to just add in unrelated images for no reason.
Totally. This drives me nuts :)
> I'm not sure this is quite right. It takes a ton of work to get an HTML/CSS page to display properly in every browser. I think my response is specifically related to the use of the word "perfect" -- maybe something else like "total" would be more appropriate.
Yes you're right. "Total" seems more apt in this case.
> I like the submitted title better than the title on the actual page. You should consider revising it!
+1. I've been told by a few people (including you) now that my titles could use work — and they're right. Thanks.
Haha, thanks. It wasn't meant literally, but more in the form of "igniting" an emotional reaction. I think I'm using this one properly.
I agree with the general thrust of this piece, and think we're in a bit of a dark age of interface design right now. Too much attention is paid to visual design and not enough to interaction design.
But while speed of response in a UI is certainly a factor in usability, it's not as significant as things like mode, navigation, habituation, vocabulary or consistency. So to that extent I think the article isn't really addressing the main problem, which is that time spent on visual design should be better spent on designing for usability.
I'm also not sure what to make of the idea of calling for the terminal to be revised and considered the way forward in user interfaces. Apart from speed, what problem would that solve?
And I'm intrigued when it says interfaces should be "composable by default so that good interfaces aren’t just something produced by the best developer/designers in the world, but could be reasonably expected from even junior people in the industry".
I'm afraid I don't understand what that means.
It means that you should be able to re-use (compose) interface elements, so that even a junior developer could create a great interface (UX-wise) by assembling one from parts that are made to work well together.
Sort of like anybody can make a command line app and trivially have it work with other cli tools like grep, tail, awk, sort, uniq, cat, ps and the like.
Or like anybody could throw together a perfectly good hypercard UI.
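A minimal sketch of that Unix-pipe style of composition, using only standard tools (the input data here is made up; the point is that any program writing one record per line slots into the same pipeline):

```shell
# Each tool does one job; anything that writes lines to stdout composes
# with the rest. Here: sort a raw list, count duplicates, rank by frequency.
printf 'cat\ndog\ncat\nbird\ncat\n' \
  | sort \
  | uniq -c \
  | sort -rn
```

A junior developer who writes a new tool that emits plain lines gets interoperability with `grep`, `sort`, `awk`, and the rest for free; that's the "composable by default" property.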
I don't think "compose" == "reuse" as you suggest. Reusing well-designed interface elements gets you very little in terms of usability, because the important parts of UX design - like page structure and navigation - cannot be handed to developers in ready-made toolkits.
I do agree with the second half of your comment though. Mozilla's Ubiquity project is the best example I know of on-the-fly composability in modern GUIs. Admittedly Ubiquity is somewhat underdeveloped, but the core idea is solid, in my view.
There's also things like IFTTT, Zapier, and Slack integrations, but:
1. They involve up-front configuration, and,
2. You need to redo this configuration for every pair of apps you want to compose together, which is obviously not scalable.
> I'm also not sure what to make of the idea of calling for the terminal to be revised and considered the way forward in user interfaces. Apart from speed, what problem would that solve?
So I didn't mean to imply exactly that this is definitively the way forward. What I meant to imply is that the terminal programs we have today are flawed, but overall closer to a better model compared to other interfaces we're producing — mainly the web.
Interfaces in web browsers are decently okay, but they have some fundamental problems that are unlikely to ever be tractable. For example:
* Speed. Even the fastest websites are slow compared to native applications. The median speed of a web application (for say your bank, credit card company, or local utility) is _terrible_ because that's the default given current frameworks. You need a high level of mastery and knowledge beyond what most developers have to build something better.
* Consistency. Every web app looks and behaves differently. Instead of learning common conventions once, users learn everything afresh over and over again.
* Usability. You'll never get better at using most web applications because there's no framework for advanced usage at all; instead all of them cater to the lowest common denominator. There are a few exceptions like Gmail's keyboard shortcuts, but they're rare, and not very powerful compared to something like Vim, where the more you learn the greater your productivity becomes.
* Composability. I try to show in my GitHub copy + paste video that even copying things out of web pages is hard. (This one is addressed further below.)
> And I'm intrigued when it says interfaces should be "composable by default so that good interfaces aren’t just something produced by the best developer/designers in the world, but could be reasonably expected from even junior people in the industry".
> I'm afraid I don't understand what that means.
I might have mixed a couple different ideas there, but when I'm talking about composability, think like pipes in a shell. Just imagine if I could say something like: "okay Credit Card App, pipe the list of charges that I've tagged with 'corporate' into Concur and file expense reports for each one".
The closest we can get to something like that today is for someone to build a third-party app that uses the APIs of both your credit card and Concur and does this for you, but even then, you're still operating along the fixed rails provided by another app. Imagine if you had flexibility on your own terms, available even to non-power users, because your credit card and Concur provided standardized primitives that your web shell could hook into and use.
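To make the "pipe charges into Concur" idea concrete, here is a rough sketch with ordinary shell tools. Everything here is hypothetical: the `charges.csv` export, its column layout, and the imagined `concur` primitive are all invented for illustration; only the filtering step is real, runnable shell.

```shell
# Hypothetical export from a credit card app: date,amount,tag
printf '2017-01-03,42.50,corporate\n2017-01-05,9.99,personal\n2017-01-09,120.00,corporate\n' > charges.csv

# Select the charges tagged "corporate". This stream is what an imagined
# standardized primitive would consume next, e.g.:
#   awk -F, '$3 == "corporate"' charges.csv | concur file-expense
awk -F, '$3 == "corporate" { print $1, $2 }' charges.csv
```

The interesting part isn't the `awk`; it's that both apps would have to agree to emit and accept a common line-oriented format, which is exactly the standardization that doesn't exist today.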
I hope that helps to clarify some things!
> I might have mixed a couple different ideas there, but when I'm talking about composability, think like pipes in a shell. Just imagine if I could say something like: "okay Credit Card App, pipe the list of charges that I've tagged with 'corporate' into Concur and file expense reports for each one".
This, I think, is too generous to the command line, even as I am a vim/grep/etc fan. When it comes to real life data coming through grep, for example, cleaning that data, iterating through it with bash, and passing it along is often not worth the bother and I end up manually processing it. Unless it's a recurring script and I can reliably parse and clean the data, handle failure, etc, it's not worth automating.
Incidentally, Aza Raskin, one of the main developers of Ubiquity is the son of Jef Raskin who led the work on Apple's Macintosh.
At one point in college I was fascinated enough with Ubiquity to try to continue the work on it (since the project was shelved), but my programming skills were just not up to it. Perhaps I'll get back to it sometime soon :)
I've been thinking, with all the huge advances in AI/ML in the last few years, now might be exactly the right time for an ambitious project like Ubiquity, since it relies heavily on natural-language processing. Thoughts?
Direct manipulation is great at discoverability, when I'm exploring the choices available to me, but it sucks if I'm looking for something specific (for software with a minimum level of complexity, like Photoshop or Excel). Interface agents have the opposite characteristics, as the article explains. So, they're complementary techniques.
To provide a simpler example, not involving AI: if the user is a domain expert (say, a graphic designer), the ability to search for and perform a specific action quickly is far more important than discoverability. For example, the user may already know that GIMP provides a feature for drawing a path but have only used Photoshop in the past and now they just can't find that action.
Aside: it is telling that established design software like Photoshop / GIMP has such a bewildering maze of menus even today. But if you take Google Chrome, for instance, it provides a stellar searchable user interface.
Moreover, this was way back in 2008 vs I think Alfred in 2011, and the others are recent clones. I think Ubiquity was a few years ahead of its time - with today's AI advancements, they might actually be able to achieve a large part of their original mission. This has got me thinking...
Software interface design and HCI in general is a very young field. The web itself is a teenager. Now is not the time to start pushing to fossilise interaction conventions, because many of those we have right now are demonstrably bad and have been born of laziness and ignorance. As Isambard Brunel once said "I am opposed to the laying down of rules or conditions to be observed in the construction of bridges lest the progress of improvement tomorrow might be embarrassed or shackled by recording or registering as law the prejudices or errors of today."
> "I'm talking about composability, think like pipes in a shell"
This is in fact a very old idea in UI design; the last major push for a concept based on it was a joint project between IBM, Apple and Microsoft called OpenDoc: https://en.wikipedia.org/wiki/OpenDoc That effort became mired in technical difficulty, but even if it had overcome those issues, there would have still been a problem, which your ideal scenario leads to: that outside the F/LOSS world, software of any kind is essentially a platform to sell things on. And to get investment, forecast sales and generally plan ahead, you need to make sure that the use of your software is predictable. This is why open APIs and things like Facebook Apps have had a rough history. The minute users start hacking out their own features or making a tragedy of the commons, things get difficult for the participating companies. They are forced to respond by shutting down that behaviour or bringing it within their own platform as new functions. For this reason, we will never see significant scenarios such as "'Credit Card App, pipe the list of charges that I've tagged with 'corporate' into Concur and file expense reports for each one'." Maybe in F/LOSS, in which case I suggest you join Richard Stallman.
So as I said before, I think you are idealistically on the right lines - UIs are in general awful, and for reasons that their designers often cannot recognise or appreciate. The discipline of UX design in particular is extremely immature and is populated largely by idiots (I know, for I am one of them). It learns very little from its mistakes. For example, Bruce Tognazzini's design principles are barely ever observed; his findings of fundamental user interface issues over 30 years ago have yet to make it into mainstream design at all. This is despite many designers professing to be familiar with the work of Don Norman and others in this field. In the end, design has been hijacked by the desire to create an emotional reaction, to be "visual", and to regard usability as suspect if it counters those desires.
That is, I think, the real problem. Yours is fundamentally a technical approach to countering it. But it's the wrong one.
It's not a conventional terminal, in the sense of a monospaced grid of characters, but the architecture is the same: server and client are separated, all "business" logic resides or originates in the server, and the client is generic and reusable. I could offer public services over SSH, and anyone with an SSH client could connect. (Makes me think of BBSes.)
A specific application offered over SSH requires no pre-provisioning on the client; likewise web applications require no pre-provisioning. Of course, the web wasn't really intended as a terminal system, it was a hypertext system theoretically divorced from any given user interface, but it's certainly become an applications platform, and the way it's managed to inherit some benefits from terminal systems is a major reason why.
Nowadays probably most line-of-business, intranet applications are web-based, but there appear to be exceptions. I did see one business that kept its orders in some unknown application accessed via PuTTY, used quite proficiently by salespeople who AFAIK were otherwise nontechnical.
Since it was entirely keyboard driven it seemed pretty productive, probably moreso than the average webapp where even if you use Tab a lot there are cases where you're switching between keyboard and mouse a lot.
I find it frustrating that in 2017 I still spend plenty of time waiting for the computer to do something. Occasionally even typing into a text field in a web browser is laggy on my high-end late-model iMac. For every extra cycle the hardware engineers give us, we software engineers figure out some way to soak it up.
The terminal is not for everyone, but lately I've found it's the one environment where things can be instantaneous enough that my flow is not thrown off. For kicks, I installed XUbuntu on a $150 ARM Chromebook with the idea of mostly just using the terminal (and having a throwaway laptop that I'm not scared to use on the bus/train). I expected to mostly be using it as a dumb terminal to ssh into servers, but amazingly, most local tasks are still pretty instantaneous.
> I find it frustrating that in 2017 I still spend plenty of time waiting for the computer to do something. Occasionally even typing into a text field in a web browser is laggy on my high-end late-model iMac. For every extra cycle the hardware engineers give us, we software engineers figure out some way to soak it up.
I totally agree. In a very subjective sense, it feels like despite our massive advancements in hardware, computers aren't getting any faster.
I have vivid memories of using web browsers around 2000, or WinAmp back in the late 90s, and they felt like about the same speed as what I get today. Obviously the complexity of our apps has increased by an order of magnitude or two, but the things we're doing with them are not an order of magnitude more complex. In a very real sense it's like you say: we're soaking up all the advances that new hardware is providing, and mostly just because we can.
Wirth's Law: software gets slower faster than hardware gets faster. And the hardware is getting faster slower than it used to, too.
And then we have to periodically (like weekly) do a hard power-off and re-init, else the bloody thing gets slower and slower every day after a while.
This seems to be true in way too much of the tech field these days. My go-to example is drive storage. Every year, our hard drives get increasingly larger, by pretty big margins even, but game developers just make bigger and less compressed assets, even though they're not needed.
Imagine if instead of building apps inside a browser, you instead have a good OS framework that allows you to build a native app for which you can ship updates easily and which simply talks to your backend's API. It would be a very similar model to what most of us use today in our browsers, but would open a lot of doors around what we're currently getting wrong with interfaces on the web.
It also already exists in a few limited forms: apps on iOS or Android for example.
When I look at games in particular, I'm amazed that you can have AAA titles that get hotfixes for bad bugs on pretty much the day they're released. This is made possible by sophisticated content distribution networks like Playstation's.
Twenty years ago you pressed a master disk, crossed your fingers, and tossed it to the wolves. If a serious problem was ever discovered after release, it would be a huge hit to your bottom line to try to recall everything that went out.
Testing must have been much more comprehensive before to make the old system workable.
For one example, Apple designers are considered amongst the best in the world, but every animation on the iPhone could stand to be 10x faster. It's frustrating for me even just waiting for it to move from an app to the home screen after I hit the "home" button. Even though the animation is relatively short, there's a non-negligible effect on my workflow, and that adds up as I do it a thousand times a week and thousands of times a year.
I'd personally rather see all animations disabled rather than what we have today (or even just an option available to us to do that).
Right above in "Increase Contrast" you can also trigger a "Reduce Transparency" setting which may speed up animations (but admittedly alters the UI style).
It's not yet what we'd want but it's what we have so far.
I'd also like to address "big fonts": I grew up with UIs from the turn of the millennium, and back then I thought professional software had to show a lot of things on the screen and have extensive menus. But I also noticed I could burn a CD much more easily using a free wizard-like version of the burning program than with the paid full version. Thanks to mobile, at the price of shrinking applications into apps, the world got easier, friendlier, cleaner UIs. While we previously thought that reducing the font size on our websites made them look cooler, we now see that using big fonts makes text look good while actually being readable.
So maybe let us not go back to terminals. Let us use the right (visual) tool for the right job.
Text terminals are terrible for something I need to do every single day: Showing others what I am doing on the computer.
Without animations you are like a person on the autism spectrum: it works for you, but nobody watching can understand what you are doing; you are in your own world. You press a key combination you know (but others do not) and the screen changes instantly.
Repeat this a few times and you have a lot of confused people in your audience.
I agree that we need everything: something extremely fast, intuitive while beautiful and useful, and extremely easy (and cheap and fast) to program.
But nobody has done it because designing something simple is as hard as playing the violin, let alone making it fast. In the real world you need to pick your priorities, to constantly triage and make decisions.
Side note: I expected to be downvoted heavily for my original comment given the bad tone (animations are a sore spot for me, I get emotional about it) but somehow the opposite happened. Obviously I was 100% correct :) but I should have been less of a dick about it.
I used to sit next to my boss at a previous job, when I was first hired, so she could show me the ropes. She used a Mac.
Sometimes she'd get mail, but because she was working with me, she wouldn't answer it. And the icon would bounce up and down and bleat, over and over. Microsoft had Clippy, and Apple has Claptrap.
I think a useful way to accomplish a nice compromise between beautiful interfaces and usability is to allow the user to skip animations. For example, if I start typing after an action was performed the animation should skip to the end. In this case the user is clearly familiar with the interface and knows what they want to do next.
As a designer, it's concerning that so much of ux design right now is focused on facade. Even among teams that care holistically, the surface level things still take priority.
Nielsen's usability heuristics from 1995 are still extremely relevant today.
As an aside, you can actually turn off the sliding animation in MacOS spaces. There's a "reduce motion" setting in the accessibility preferences. Although, reducing motion replaces sliding with a glitchy fade animation, so it's swings and roundabouts.
Although when reading ebooks I have page animation disabled, I guess because you mostly proceed to the next page, and there's no context switch it's all reading.
But I also prefer having two monitors for multiple contexts, no amount of virtual desktops with or without animation will beat that.
But I'm not sure I agree with most of the examples.
For example, Slack isn't really a power user tool. It's a tool that does its job best if everyone in the organization can understand and use it, and making it more like a terminal isn't going to help with that. Speeding it up would still be beneficial, of course. (Also, it looks like there are plenty of terminal clients for Slack if you're in to that.)
Things like animations can actually be very helpful in giving you an almost visceral understanding of the spatial logic of the UI. Without animations it becomes very abstract. It's about balance obviously, using repeated slow animations for branding purposes is not a good idea in a tool like a password manager that you unlock 20 times per day.
I would argue that this started much earlier and was because of the problems with distributing, installing, and updating desktop apps. We even had names like "fat client" (it was meant to be pejorative) to refer to traditional desktop apps and "client-less" (it was meant to sound magical) to refer to web apps. There wasn't a problem with desktop development frameworks. There was only one, Windows, and those people who used it enjoyed it.
> HTML and CSS gave developers total visual control over what their interfaces looked like, allowing them to brand them and build experiences that were pixel-perfect according to their own ends. This seemed like a big improvement over more limiting desktop development,
This isn't how I remember it. Developers didn't want total control, but publishers did. Browsers let you select background colors, font colors, sizes, and types. A website was never meant to render exactly the same. But then the publishers entered the picture and they expected a website to behave like a magazine. That's why we had whole websites that were made up of images only. CSS was invented to put a stop to this madness. However, it institutionalized the publishers mindset that websites should render the same everywhere.
Overall a lot of this article can be summed up as "just because you can, doesn't mean you should." As an industry, we do self-restraint very poorly.
Of course it is also a very cleanly designed system and it takes advantage of the parallelism today's processors give us.
If you work for a little while in Haiku, you will find everything else intolerable and slow.
I also frequently think back to the OS we had on the Nokia 3310 generation of phones; how easy it was to navigate to exactly what you needed (with shortcuts like menu button > keypad 3 > keypad 2 or something like that). There were no animations to slow down that navigation either.
I've often wondered about the history of the terminal emulators we use. From xterm and onward, they have all emulated a basic DEC VT-100. A historical accident, or inertia? The VT-100 wasn't very sophisticated, and most emulators don't even emulate it fully. There were much more featureful terminals succeeding it, with colour and graphics, yet we didn't add support for them. What caused this whole aspect of computing to become stuck in the late 1970s? There are also specifications like ECMA-48 which standardise control for font and size/spacing/justification and a number of layout features, colour selection and much more. It also defines separate data and presentation layers. Mostly unimplemented except for a minimal number implemented by xterm. Some emulators have also implemented rudimentary graphics, wider colour selection, unicode support and mouse reporting, but nothing truly ground breaking.
It strikes me that what's really missing here is the development of a new class of terminal emulator which implements a much more advanced presentation layer. For example a full PS/PDF-style drawing model, and/or OpenGL-style graphics facilities. Combined with an extended set of control codes to manipulate the data layer, you could effectively have a browser rendering engine and DOM equivalent in terminal form. Which could be driven by any language capable of using stdin/stdout, from a shell script to Python and C++. No reason it couldn't be xterm compatible either; there's lots of ways to extend the control code space, and we already have termcap/info to add support for new functionality.
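As a small taste of the control-code space already standardised by ECMA-48, here are a few SGR (Select Graphic Rendition) sequences that virtually every emulator honors; any program that can write to stdout can drive them, which is exactly the extension path described above:

```shell
# ECMA-48 SGR sequences have the form: ESC [ <params> m
# 1 = bold, 31 = red foreground, 4 = underline, 0 = reset.
printf '\033[1mbold\033[0m \033[31mred\033[0m \033[4munderline\033[0m\n'
```

A richer terminal could extend this same in-band scheme with drawing or layout codes without breaking xterm compatibility, since unrecognized sequences are simply ignored by older emulators.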
That being said, I agree with the opinion about superfluous animations. More programs need to have the ability to turn them off.
I love Material Design's principles in this regard, where animation is used to convey how the app works (where a menu came from, what will happen if you click on something).
As for the examples, the author needs a faster computer or connection; none of the apps mentioned that I've used feel sluggish at all. This includes Slack, which, although it takes a while when first connecting, runs great after that.
I guess the way I see it is: animation isn't bad, but like anything else, it can be used ineffectively.
Once the apps are loaded, it's a simple alt-tab.
The other day I learned of the existence of feh, the image viewer.
It seems to straddle between being a CLI and a GUI program.
And it got me thinking that while for Apple and Microsoft it kinda made sense to sideline the CLI and focus on the GUI as their CLI offerings were anemic to be polite, this hard split feels misplaced on unix derived platforms.
Instead the GUI on _nix can be used to enhance the CLI.
That said, I'm super in favor of getting some modern takes on rich terminals going.
Some lightweight animations might work to cover actual loading times.
> The learning curve is steep, but rewarding
That's almost a contradiction, at least considering parts of the curve. If you average an upward curve to a linear function, of course the slope will look less steep over time than it actually is at the time.
Citation needed. Seriously. People state it as fact when it is quite debatable.
I only use a monospaced font because using Vim is more important to me. Otherwise I’d probably switch to the Poly variant of Triplicate, which makes it not-quite-mono.
A 2nd word: regularity.
(If we had a monospace font here, the colons of the two lines would be aligned...)
I suspect in the Slack example, it's to cover for the fact that there's a bunch of network calls being made behind the scenes. A terminal version of Slack wouldn't be that much faster since it would still need to make those network calls.
Also, adding animations can improve the trust that a user has in the product. The CoinStar example is a great one: when they just immediately displayed the count of coins to users, people didn't trust the count because it was too quick. When they added a delay and played the sound of lots of coins bouncing around inside the machine for a while, people started to trust the count. And that's not unique. I've worked on at least 3 projects now where we'd done something we felt differentiated our product but, in testing, our users didn't notice. After adding a delay and animation, we retested it and our users were much more impressed and happy with our product. By making it slow, it was much more apparent that the system was doing something impressive. Never mind the fact that we'd optimized the hell out of queries and made the execution time snappy; it needed to be slow for them to see the value.
Also, animations can be useful to draw the eye to a change that's happening in the system. When something changes in a UI, you can't just expect the user to notice it. The human visual system isn't good at noticing those small deltas without some visual cue to make the change pop.
None of this means that all usages of animations improve the user experience. But nowhere in the article does the author acknowledge that animations can serve an important purpose. We need to take a balanced approach to animations and make sure we test each and every animation we use with users to ensure that it's better than the non-animated alternative.
One other small point, as someone who has developed software for people over the age of 70: I believe the author will be singing a very different tune with regard to "overly-large font sizes" once his eyesight starts to deteriorate. I'd say the tendency is actually worse in the other direction: developers make font sizes too small, since they're young and have good eyesight. Apps should be optimized for large font sizes, with a setting to allow users who want smaller fonts to choose that. But the number of times I've seen my mother be unable to find the setting to increase the font because the default font is too small is non-trivial. And even when I increase it for her, it's a good bet that the app is unusable, since that configuration hasn't been tested.
Though on the engineering guilt side, add to that dozens of layers of bloat, VMs, interpreted languages, and, "performance doesn't matter, scrum deadlines do" attitudes, and yeah, I bet you end up with 45 second load time for your chat client.
> Somewhere around the late 90s or early 00s we made the decision to jump ship from desktop apps and start writing the lion’s share of new software for the web. This was largely for pragmatic reasons: the infrastructure to talk to a remote server became possible for the first time,
> good cross platform UI frameworks had always been elusive beasts,
Technically true, but misleading. This sentence seems to imply that this mattered, and that the web was somewhat better. Both are false. Back then no one cared about non-Windows systems, and the amount of effort required to display a site properly in all major browsers was staggering. It was way, way easier to make a desktop app that worked on 99% of all computers than a web app that worked in 99% of browsers.
> and desktop development frameworks were intimidating compared to more approachable languages like Perl and PHP.
This was the era of VB6, Java, Delphi, and later this fancy .NET thing. Designing a desktop app was drastically simpler than creating a website of the same complexity.
> The other reason was cosmetic: HTML and CSS gave developers total visual control over what their interfaces looked like, allowing them to brand them and build experiences that were pixel-perfect according to their own ends.
This is so false it's not even funny. Desktop apps were trivial to make pixel-perfect; the web took a LOT of work (I still remember the countless nested tables with 1x1 images in them).
Here is winamp 1, released in 1997: https://upload.wikimedia.org/wikipedia/en/0/09/Winamp1.006.P...
Here is web in 1997: http://royal.pingdom.com/2008/09/16/the-web-in-1996-1997/
now, which one is more customized?
> This seemed like a big improvement over more limiting desktop development, but it's led us to the world we have today where every interface is a different size and shape,...
And of course, the author misses the most important reasons why people spent all that effort to make web apps. Spolsky said it back in 2004:
> Today I installed Google's new email application by typing Alt+D, gmail, Ctrl+Enter. There are far fewer compatibility problems and problems coexisting with other software. Every user of your product is using the same version...
Then the article goes on to tout the advantages of terminals and "terminal programs": fast startup, no animations, "interface elements are limited", optimized for the advanced user, "output that I can process in some way to get into another program".
This is accompanied by a picture of emacs running in a terminal.
The problem, of course, is that those properties are not bound to "terminal" programs at all. Much of the software that comes from the Linux/Unix world has all of these properties even when it does not require a terminal to run. Even graphics editors like GIMP start up fast, have no animations, etc.
Conclusion: the only way this article makes sense is if the author equates "terminals" with "apps without animation". The author does not seem to be aware of what "terminal software" actually means: all communication goes through a single bidirectional pipe.
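To make that definition concrete, here is a minimal sketch (my own illustration, not from the article) of what "a single bidirectional pipe" means in practice: a terminal program's entire user interface is just bytes flowing through one pseudo-terminal channel, which any other program can capture and process.

```python
import os
import pty
import subprocess

# Allocate a pseudo-terminal: one bidirectional channel.
# Everything a "terminal program" does -- its whole UI -- is just
# bytes written to and read from this single pipe.
master_fd, slave_fd = pty.openpty()

# Run a trivial program with the pty as its only connection to the world.
subprocess.run(
    ["echo", "hello from a pty"],
    stdin=slave_fd, stdout=slave_fd, stderr=slave_fd,
    close_fds=True,
)
os.close(slave_fd)

# Read back everything the program "drew" on its interface, as raw bytes.
output = os.read(master_fd, 4096)
os.close(master_fd)
print(output)
```

This is exactly the property the article's list of advantages falls out of: because the interface is a byte stream, it is trivially scriptable and composable, which GUI apps (however fast and animation-free) are not.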
But before I get to that, there are a few places where non-developers have user interfaces that reward expertise. One prominent example is the Bloomberg terminal: http://graphics8.nytimes.com/images/2013/05/13/business/sub-...
Notice that the interface (which is extremely customizable; feel free to look up other images, each will be rather different) is more or less a tiling window manager with terminals in each window that have rich media, nice fonts, and non-ASCII UI elements (albeit ones that seem somewhat stuck in the '90s). Quite a bit to learn from there.
So here is the flaw: I am afraid the reason most interfaces that non-technologists use cater to intuition and a pleasant appearance, rather than rewarding expertise, is all too simple: no one wants to spend time becoming an expert at a gajillion specialized but infrequently used software interfaces, each of which would, according to your ideal, be optimally designed to let an expert perform the associated task efficiently and well.
The average person pays off their credit card once a month and pays their taxes once a year. The incentive for them to learn to do these things more efficiently isn't very compelling, and the number of people with 20 active credit cards, for whom it would be compelling, isn't large enough to be worth creating an expert UI for (that may change as another couple of billion people get online, if software markets don't fragment further).
Now, all that said, there is certainly a lot of room for improvement in web-based user interfaces: animations can be faster and more subtle, the use of whitespace can be reduced, typography can be more restrained, decoration and color can be used only when it conveys information (basically, everything Tufte has been telling us for a couple of decades).
Windows' Metro and Modern, and Google's Material, are both nice steps in that direction (with the exception of animation), and each represents a lot of difficult design work by large design and development teams. Less certainly is more, both in the return it offers and in the investment it requires. The simpler and less cluttered a user interface is, the more you have to sweat the tiniest of details. This post on redesigning bits of the Chrome browser's, uh, chrome is a good case in point: https://medium.com/google-design/redesigning-chrome-desktop-...
You can expect user interface redesign churn to slow down only once display resolutions stop climbing (because they have exceeded what can be distinguished by the human eye) and form factors stop changing (because the only remaining meaningful constraints are ergonomic).
> First, there are a few places where non-developers have user interfaces that reward expertise. One prominent example are Bloomberg terminals:
I didn't mean to say that it was just developers that have access to these sorts of power tools, but it is the most common case. I would have actually used Bloomberg terminals as an example to support my arguments, and in general am hugely in favor of this sort of app that rewards the time invested in learning it all the way up to advanced levels.
> The average person pays off their credit card once a month. The incentive for them to learn to do it more efficiently isn't very compelling, and the number of people who have 20 active credit cards for whom it would be compelling isn't large enough to be worth creating an expert UI for.
Yes totally, but what if you had just one common UI that was pretty standard and which your credit card company could easily plug into while building interfaces for their users?
Modern native apps for smartphones are probably the best example here because, even though they're not perfectly consistent, at least they have standardized toolbars, navigation, and controls (far beyond what you get on the web). I think this idea could be taken even further.
Yes, well, the reason that is unlikely is that banks need to differentiate their offerings and... you know what, scratch that. Just observe that this hasn't happened on the desktop in the way you describe, though every so often someone does try to reintermediate an industry that way. It usually only works if you (a) control the distribution platform, so you can bundle the standard UI into the platform, and (b) manage to fool the industry into going along (e.g. iTunes, Kindle). Banks, specifically, have yet to fall for any of the attempts along those lines, and there have been quite a few.
The situation is unlikely to change until/unless newly formed banks embrace splitting out the "dumb money pipe" as shared infrastructure, with modern standardized interfaces, so that their value-added services are separate (but integrated). To some extent you see glimmers of this in medicine with (almost, but not quite) portable electronic medical records. What progress has come about is solely due to the government ratcheting up the carrots and sticks to get "meaningful use" to happen, and those may get rolled back now.
See also ASCII POS terminals.
And there was an article some time back about Norwegian doctors getting floppies mailed to them because they refused to upgrade from their keyboard-driven patient journals. This was because, once they had internalized the keystrokes, they could use them while maintaining eye contact and conversation with the patient.