GUIs provide discoverability, so you often don't need a manual. If you don't know how to use a command-line tool, you must open the manual. In that sense I'd consider the man pages part of the UI, and those definitely get cluttered.
For a desktop paradigm, there is always a way to do better with a GUI as you add more features; but you'll refactor and redesign a lot along the way, and that has a cost in familiarity, which directly impacts user productivity. It's a trade-off.
The dirty secret of discoverability is that only a tiny subset of features is relevant at any time in the UX, and identifying that subset + triggering user awareness about it is the "art" part. We learned a lot with mobile in that regard, because the harsh limit on real estate becomes an incentive to identify the essentials.
Discoverability doesn't have to suffer from the complexity of the software, but it's not just UI design; it's UX.
For some reason, designers and some users don't like it, but a hierarchy (folders, menus, etc.) is something many people find useful and effective.
But it seems pretty common to declare that people can only handle a few options plus search.
There are great domain apps (e.g. medical, games, professional audio, amateur astronomy to name a few I've seen) that really break open the format:
- feels like a tricorder from Star Trek, where small is good because it doesn't try to be bigger than it is, but rather embraces its limitations as features, not bugs (and to their credit, Jobs + Ive successfully translated these ideas into the iPod and later into the early iPhone iterations).
- feels like a great videogame UI where, despite the complexity, once internalized it's a breeze (even a joy, physically) to manipulate. It takes a lot of testing and feedback, and is admittedly a huge QA cost in video game making, which is why most companies don't do it for 'normal' products whose UI/UX is 'good enough' (or so they think, for now).
- clever integration with features, e.g. a slider on a phone with tiny motor feedback when you cross a unit bar, or even scaled to magnitude; a bigger, deeper 'womp' around the center, coupled with a UI magnet that makes you 'feel' and 'see' the slider snap to the 0 position. All of this designed so you can interact while looking at something else, just like with a physical button.
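The "magnet" behavior described above is easy to sketch in code. This is a minimal, illustrative model only: the detent positions, radius, and `haptic_pulse` stub are all assumptions, not any real platform haptics API.

```python
# Sketch of the "magnet" slider: values near a detent snap to it,
# and crossing (or landing on) a detent fires a haptic pulse.
# All names here are illustrative, not a real haptics API.

DETENTS = [-1.0, 0.0, 1.0]   # positions with a physical "womp"
SNAP_RADIUS = 0.05           # how close counts as "on" a detent

def haptic_pulse(strength: float) -> None:
    """Stand-in for a platform haptics call (a tiny motor tick)."""
    print(f"buzz({strength:.2f})")

def snap(value: float) -> float:
    """Return the detent position if value is within SNAP_RADIUS of one."""
    for d in DETENTS:
        if abs(value - d) <= SNAP_RADIUS:
            return d
    return value

def on_drag(old: float, new: float) -> float:
    """Handle a drag update: snap the value, buzz when a detent is hit or crossed."""
    snapped = snap(new)
    for d in DETENTS:
        if (old - d) * (snapped - d) < 0 or (snapped == d and old != d):
            # stronger pulse for the center detent, the bigger-deeper 'womp'
            haptic_pulse(1.0 if d == 0.0 else 0.4)
    return snapped
```

The point of the sketch is that the eyes-free quality comes from two small pieces working together: the visual snap and the tactile pulse fire on the same event.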
I don't know. I feel like we made giant strides in the 2000s and then progressively dropped the ball once we realized the simplest UX on mobile yielded the highest engagement (think how "simple" Twitter and Instagram are, how their limitations actually enhance user activity; in a very questionable way, but that's beside the point in terms of profit, and thus research).
I sincerely hope someday a disruptor kills everyone else with awesome UX (just like Apple did in the 2000s) and forces the market to rethink its abysmal UX standards; and let's not talk about ads, because that would be extra snarky.
If I were snarky I'd tell you that. But I'm not, so I'll leave these remarks for someone else to make.
The market is allowed to exist because profit is a fairly decent proxy for utility. But whenever you optimize against an imperfect proxy, it's possible to overoptimize: make the proxy metric look better at the expense of the actual goal it was supposed to stand for. I believe software (along with many other market sectors) crossed that point a while ago.
The economy to me is a vast "optimization space", so I can only agree. It's the 'trap' of a local minimum, I suppose, wherein the friction of moving away to find another possible (but uncertain) minimum (hopefully lower overall, or more acceptable ethically) is too high. So you've indeed overoptimized your way into a local pit (e.g. the ad model now taking over the news like a virus, resulting in this new "hybrid" object sometimes called the "infomercial" or "infotainment").
It seems that only "disruption" (of a magnitude worthy of the name) gives enough momentum to escape the steepness of a local historical/economic minimum. Unless you've got some new S-curve (which may quite literally represent the mathematical surface of this "escape path"), history tells us we remain stuck, moving in circles around the center of 'gravity' of this local minimum.
Dunno if that makes sense to you, pardon the loose physical analogies.
ZBrush feels like that to me. Most people think it looks like a dog's breakfast at first view, and that it's desperately in need of an overhaul.
Yet when you just want to sculpt 3D clay, it feels remarkably intuitive once you know where everything is.
You mean like Apple did in the 1980s? They regressed in the 2000s.
The UI is one input and the "computer" figures out the rest.
Regular users don't want, or don't understand, hierarchies; they get easily confused. They also don't want to organize things themselves; see the many failed attempts at tagging files, websites, images, etc. manually.
I am probably not a "regular user", but I got a job about a year ago defined by and intended for regular users, and one of the major parts of it was managing emails in a departmental account, using (you guessed it) hierarchical folders, and another was managing project documents in SharePoint, using (what appear to be) hierarchical folders.
I'm aware that people will tell you not to do the latter in SharePoint, that it's not really designed to be used like a filesystem, just use lists, etc. But purely from an anthropological viewpoint, I see how people shove everything on a shared drive, and then they try to move it to something like SharePoint and recreate a folder structure.
So, you know, it's not about what I want; it's an objective fact that people do want hierarchies, even when other people don't want them to. Google is pervasive, but it's not all of computing.
A GUI can much more easily emphasize the most common options, it can visually group related options, and it can use common GUI widgets to, for example, show that some options are mutually exclusive with each other.
But the same could be said of CLIs tbh. After a couple dozen options, man pages or `--help` listings become a dense read, whereas a simple form on a webpage might be easier to grok. YMMV
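For what it's worth, some CLI toolkits can express that grouping too. A small sketch with Python's `argparse`, whose mutually exclusive groups are rendered as `[-q | -v]` in the generated `--help` (the `-q`/`-v` flags here are just an example, not from any particular tool):

```python
import argparse

# argparse can encode "these options are mutually exclusive", and the
# generated --help renders the pair as [-q | -v], which is at least a
# textual hint of the grouping a GUI would show visually.
parser = argparse.ArgumentParser(prog="demo")
group = parser.add_mutually_exclusive_group()
group.add_argument("-q", "--quiet", action="store_true", help="less output")
group.add_argument("-v", "--verbose", action="store_true", help="more output")

args = parser.parse_args(["-v"])
print(args.verbose)  # → True
# parse_args(["-q", "-v"]) would exit with an error explaining that
# --verbose is not allowed together with --quiet.
```

So the machinery exists; it's just that most tools lean on prose in the man page instead of structured hints in the help output.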
You don't read a man page top to bottom, you skim and skip around, discovering the information you need.
The trend in the command line is in an orthogonal direction - more and more complex function-like options with minimal mnemonic labels, optimised for text representations and text processing.
Which is a problem of its own, in a way. Reading a man page top to bottom takes a few minutes. Reading a user guide or the Info pages (for the good software that still ships one) for a more complex tool (e.g. gdb) takes a couple of hours. If you're going to use the application frequently, or for more than an hour in total, the time spent reading the manual will pay for itself: in reduced frustration, much quicker discovery, and far fewer Stack Overflow searches (and less getting confused by wrong or misguided answers).
Reading manuals top to bottom: an ancient, forgotten superpower.