Hacker News

> that's the beauty of command line options -- unlike with a GUI, adding these options doesn't clutter up the interface

GUIs provide discoverability, so oftentimes you don't need a manual. If you don't know how to use a command-line tool, you must open the manual. In that way I would consider the man pages part of the UI, and those definitely get cluttered.




The discoverability of a GUI deteriorates as you add more options.


I've worked with music software GUIs; these are notoriously plagued by feature creep.

For a desktop paradigm, there is always a way to do better with a GUI as you add more features; but you'll refactor and redesign a lot along the way, and that has a cost in familiarity, which directly impacts user productivity. It's a trade-off.

The dirty secret of discoverability is that only a tiny subset of features is relevant at any time in the UX, and identifying that subset + triggering user awareness about it is the "art" part. We learned a lot with mobile in that regard because the harsh limit on real estate becomes an incentive to identify the essentials.

Discoverability doesn't have to suffer from complexity of the software, but it's not just UI design, it's UX.


The other incentive with mobile is to reduce the functionality of either the entire app, or any particular view presented by the app. That's not actually the same as identifying the essentials, any more than saying that "cutting is the essential action" really allows you to distill the _essential_ between an axe, a pair of scissors, a chainsaw and a cooking knife.


"The dirty secret of discoverability is that only a tiny subset of features is relevant at any time in the UX"

For some reason, designers and some users don't like it, but a hierarchy (folders, menus, etc) is something many people find useful and effective.

But it seems pretty common to declare that people can only handle a few options plus search.


And search, something that Ubuntu HUD/Unity (now on the path to extinction) solved in a very elegant way: hundreds of menu options nested in remote menu folders, made available by typing their names. Best of both worlds, if you ask me.
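The core of the HUD idea is just flattening a nested menu tree into searchable "File > Export > PNG"-style paths. A minimal sketch, using made-up menu contents and plain substring matching (Unity's real matcher was fuzzier):

```python
# Sketch of HUD-style menu search: flatten a nested menu tree into
# full label paths, then filter by whatever the user types.
# The menus and action names below are invented for illustration.

def flatten(menu, path=()):
    """Yield (label_path, action_name) pairs for every leaf item."""
    for label, item in menu.items():
        if isinstance(item, dict):          # submenu: recurse deeper
            yield from flatten(item, path + (label,))
        else:                               # leaf: an action name
            yield (" > ".join(path + (label,)), item)

def hud_search(menu, query):
    """Return menu paths whose text matches the query, case-insensitively."""
    q = query.lower()
    return [p for p, _ in flatten(menu) if q in p.lower()]

menus = {
    "File": {"Export": {"PNG...": "export_png", "SVG...": "export_svg"}},
    "Edit": {"Preferences...": "open_prefs"},
}

print(hud_search(menus, "png"))   # ['File > Export > PNG...']
```

The point is that search sits on top of the hierarchy rather than replacing it: the path in the result still teaches you where the option lives.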


Some Android devices have that in their system Settings screen... and nowhere else.


If I were snarky I'd tell you that UX on both mobile and desktop is, in general, painfully bad compared to what it could be.

There are great domain apps (e.g. medical, games, professional audio, amateur astronomy to name a few I've seen) that really break open the format:

- feels like a tricorder from Star Trek, where small is good because it doesn't try to be bigger than it is, but rather embraces its limitations as features, not bugs (and to their credit, Jobs + Ive successfully translated these ideas into the iPod and later the early iPhone iterations).

- feels like a great videogame UI where despite the complexity, once internalized it's a breeze (even a joy, physically) to manipulate. It takes a lot of testing and feedback, and is admittedly one huge QA cost in video game making, so that's why most companies don't do it for 'normal' products whose UI/UX is 'good enough' (or so they think, for now).

- clever integration with features, e.g. a slider on a phone with tiny motor feedback when you cross a unit bar, or even scaled to magnitude; a bigger, deeper 'womp' around the center coupled with a UI magnet that makes you 'feel' and 'see' the slider fix itself on the 0 position. This is all designed so you can interact while looking at something else, just like with a physical button.
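That "UI magnet" around the center is essentially a dead zone in the value mapping. A minimal sketch of the snapping logic (the range and snap radius are arbitrary illustrative numbers, and the haptic pulse is left out):

```python
# Sketch of a "magnetic" slider: raw positions near the center snap
# to exactly 0, so the user can find the neutral position by feel.
# SNAP_RADIUS is an arbitrary illustrative value.

SNAP_RADIUS = 0.05  # fraction of the slider's range

def snapped_value(raw, lo=-1.0, hi=1.0):
    """Map a raw slider position to its value, snapping near-zero to 0."""
    value = max(lo, min(hi, raw))       # clamp into the slider's range
    if abs(value) < SNAP_RADIUS:        # inside the magnet: lock to center
        return 0.0
    return value

print(snapped_value(0.03))   # 0.0  (caught by the magnet)
print(snapped_value(0.2))    # 0.2
```

In a real implementation the same threshold crossing would also trigger the motor feedback, so the eyes-free interaction works both ways.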

I don't know. I feel like we made giant strides in the 2000s and then progressively dropped the ball when we realized the simplest UX on mobile yielded the highest engagement (think Twitter, Insta, how "simple" they are, how limitations actually enhance user activity; in a very questionable way, but that's beside the point in terms of profit, thus research).

I sincerely hope someday a disruptor kills everyone else with awesome UX (just like Apple did in the 2000s) and forces the market to rethink its abysmal UX standards. And let's not talk about ads, because that would be extra-snarky.

If I were snarky I'd tell you that. But I'm not, so I'll leave these remarks for someone else to make.


Fortunately, I'm in a very snarky mood today. I absolutely agree with those things you didn't tell me. And I do blame the "when we realized the simplest UX on mobile yielded the highest engagement" thing. I believe this to be the cause - people figured out that the ROI on shiny is better than ROI on building useful things, so most wide-audience software is now just garbage (and quite often a vehicle to scam users out of their personal data).

The market is allowed to exist because profit is a quite decent proxy for utility. But whenever you are optimizing using an imperfect proxy, it's possible to overoptimize - to make the proxy metric look better at the expense of the actual goal it was supposed to stand for. I believe software (along with many other market sectors) crossed that point a while ago.


Indeed. The underlying economics (the harshness of no business model, i.e. no profit = wrong idea insofar as reality/markets won't have it) make it a very difficult proposition — sometimes I'm in awe at the fact that we could pull off "open source", that the idealism of that survived collision with reality long enough to demonstrate its value.

The economy to me is a vast "optimization space", so I can only agree. It's the 'trap' of a local minimum, I suppose, wherein the friction of moving away to find another possible (but uncertain) minimum (hopefully lower overall, or more acceptable ethically) is too high. So you've indeed overoptimized your way into a local pit (e.g. the ad model now taking over the news like a virus, resulting in this new "hybrid" object sometimes called the "infomercial" or "infotainment").

It seems that only "disruption" (of a magnitude worthy of the name) gives enough momentum to escape the steepness of a local historical/economic minimum. Unless you've got some new S-curve (which quite literally may represent the mathematical surface of this "escape path"), history tells us we remain stuck and move in circles around the center of 'gravity' of this local minimum.

Dunno if that makes sense to you, pardon the loose physical analogies.


> feels like a great videogame UI where despite the complexity, once internalized it's a breeze (even a joy, physically) to manipulate

ZBrush feels like that to me. Most people think it looks like a dog's breakfast at first view, and that it's desperately in need of an overhaul.

Yet when you just want to sculpt 3D clay, it feels remarkably intuitive once you know where everything is.


That's what differentiates a tool from a toy. A tool is designed to maximize the user's productivity in regular use. A toy has to look easy and shiny in the first few minutes of use, so that it secures a subscription.


"I sincerely hope someday a disruptor kills everyone else with awesome UX (just like Apple did in the 2000s)"

You mean like Apple did in the 1980s? They regressed in the 2000s.


That ability to search is helpful, but it's particularly helpful because the organization is terrible...and maybe searchability eliminates the incentive to fix the organization. In theory, it should allow rearranging things with less friction.


Hierarchies suck. That's why the early web "search engines" (the old hierarchical portals) failed, and Google won (once it added a good sorting algorithm to what the initial search engines returned).

The UI is one input and the "computer" figures out the rest.

Regular users don't want/understand hierarchies, they get easily confused. They also don't want to organize things themselves, see the many failed attempts at tagging files, websites, images, etc. manually.


Every time this is mentioned, people say "shut up shut up shut up, nobody wants hierarchies". Obviously it's dogma. But if you have to keep saying it...

I am probably not a "regular user", but I got a job about a year ago defined by and intended for regular users, and one of the major parts of it was managing emails in a departmental account, using (you guessed it) hierarchical folders, and another was managing project documents in SharePoint, using (what appear to be) hierarchical folders.

I'm aware that people will tell you not to do the latter in SharePoint, that it's not really designed to be used like a filesystem, just use lists, etc. But purely from an anthropological viewpoint, I see how people shove everything on a shared drive, and then they try to move it to something like SharePoint and recreate a folder structure.

So, you know, it's not about what I want, but it's an objective fact that people do want hierarchies, even when other people don't want them to. Google is pervasive, but it's not all of computing.


Yes, and so does the --help page or man pages of a CLI program, the more options you add. Soon, --help takes additional arguments for which category of options you want to list. Often, options are listed alphabetically, which means if you don't know exactly what you want to do, you have to read the entire list to find the option you actually wanted.

A GUI can much more easily emphasize the most common options, it can visually group related options, and it can use common GUI widgets to, for example, show that some options are mutually exclusive with each other.
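A CLI can borrow some of these GUI tricks, though. In Python's argparse, for instance, related options can be grouped into sections of the --help output instead of one flat alphabetical list, and mutual exclusion (the CLI analogue of radio buttons) can be declared explicitly. A sketch with an invented tool and invented options:

```python
# Sketch: grouping related CLI options so --help reads like a GUI's
# sectioned preferences rather than one flat alphabetical list.
# The "convert" tool and all its options are hypothetical.
import argparse

parser = argparse.ArgumentParser(prog="convert")

common = parser.add_argument_group("common options")
common.add_argument("-o", "--output", help="output file")
common.add_argument("-f", "--format", help="output format")

advanced = parser.add_argument_group("advanced options")
advanced.add_argument("--threads", type=int, default=1, help="worker threads")

# Mutually exclusive flags: passing both raises a usage error.
mode = parser.add_mutually_exclusive_group()
mode.add_argument("--fast", action="store_true")
mode.add_argument("--accurate", action="store_true")

args = parser.parse_args(["-o", "out.png", "--fast"])
print(args.output, args.fast)   # out.png True
```

This doesn't solve discoverability, but it narrows the gap: the help text gains the visual grouping and constraint information a GUI gets for free.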


I prefer the command-line over the GUI most times - it's a standardized interface for most interactions, and it's always scriptable.

But the same could be said of CLIs, tbh. After a couple dozen options, man pages or `--help` listings become a dense read, whereas a simple form on a webpage might be easier to grok. YMMV


I would argue the discoverability of a man page is equal to the discoverability of a gui, when both are well designed.

You don't read a man page top to bottom, you skim and skip around, discovering the information you need.


I would argue that's untrue. Limited screen space discourages the proliferation of obscure GUI options and pushes the UX towards simple packaged tasks optimised for graphic representations - or at least representations with a strong graphic organisation, even if there's plenty of text involved.

The trend in the command line is in an orthogonal direction - more and more complex function-like options with minimal mnemonic labels, optimised for text representations and text processing.


> You don't read a man page top to bottom

Which is a problem of its own, in a way. Reading a man page top to bottom takes a few minutes. Reading a user guide/Info pages (for the good software that still comes with one) for a more complex tool (e.g. gdb) takes a couple of hours. If you're going to use the application frequently or for more than an hour in total, the time spent reading the manual will pay for itself - in reduced frustration, much quicker discovery, and far fewer Stack Overflow searches (and less getting confused by wrong/misguided answers).

Reading manuals top to bottom: an ancient, forgotten superpower.


There used to be curses-based launchers for some of the command-line utilities.



