It's this: you're displaying a list of things that have various properties. It used to be the case that you'd be allowed to sort and filter by any property. Many web apps nowadays seem to 'curate' my sort and filter options, and in many cases a particular use case is crippled because the property I want to sort or filter by is one that the author deemed a minority use case.
Now, the programming cost of allowing filtering by anything is minimal. The cognitive load on the user is minimal too (once you've got the UI for one property, adding more doesn't make much difference).
I've found this deeply frustrating on many occasions.
Not only is this not true except in your imaginary app, but it's also disregarding the incredibly bad UI that can come from allowing everything to be filtered.
A good design has constraints and thoughtful choices. A shitty design has everything possible thrown on the screen because "allow filtering by anything is minimal".
For example, here's Newark's product chooser. You can sort and filter by any field. It remains uncluttered by populating only the most commonly-used fields by default, but it avoids sacrificing any capability by making the hidden fields easy to access (click the "extended" button right above the table header).
I'd like to point out that in lists like these, the parent comment is 100% correct in saying "[the] cost of allowing filtering by anything is minimal". And while I'm sure an experienced designer could improve the UI, this is a scenario in which sacrificing capability for beauty is absolutely an inappropriate thing to do.
That kind of language isn't going to further the discussion.
Arbitrarily adding constraints doesn't make a design better. The default container for tabular data probably should allow sorting by any column. Give the OP the benefit of the doubt.
Not impossible, but some applications weren't designed to support it from the beginning.
Remember, "give me the next 100 of X less than Y" is an O(N) operation.
Not really true. Users aren't understanding when "sometimes the list page takes over 10 seconds to load". They don't understand (or care) that they are running a non-indexed query, or that the alternative is not getting to do it at all. Removing a feature is often not acceptable, so often the only resolution to a problem like this is to add an index.
Of course, indexes increase the write times to your database. Indexes on every field increase the write times by a lot.
Except sometimes you can't simply add an index, because you're displaying a calculated value and only calculate it for the page you're displaying. In order to sort by that value, you have to calculate for the entire system, so now you're into O(n^2) or worse complexity. You can also denormalize or pre-cache calculated values in some way (even going as far as adding something like ElasticSearch on top), but that introduces another dimension of problems (expensive updates, more potential for bugs and/or stale data).
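A toy sketch of that problem (all names here are hypothetical, not from the app being described): displaying a derived value costs one calculation per visible row, but sorting by it forces the calculation for every row in the system before the sort can even begin.

```python
# Hypothetical expensive derived value (e.g. a summary over child rows).
def derived_value(item):
    return sum(item["samples"]) / len(item["samples"])

items = [{"id": i, "samples": list(range(1, i % 7 + 2))} for i in range(10_000)]

# Displaying one page: calculate only for the rows shown, O(page_size) calls.
def render_page(items, offset, page_size=100):
    page = items[offset:offset + page_size]
    return [(it["id"], derived_value(it)) for it in page]

# Sorting by the derived value: must calculate it for *every* row, O(n) calls,
# before the O(n log n) sort even starts.
def render_page_sorted(items, offset, page_size=100):
    ranked = sorted(items, key=derived_value)
    return [(it["id"], derived_value(it)) for it in ranked[offset:offset + page_size]]
```

Pre-caching `derived_value` trades that per-request cost for the update and staleness problems mentioned above.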
Do this for several fields, and you're now talking about quite a significant development effort, both up-front and for on-going maintenance, to support sorting that wasn't ever an actual requested feature and that only a small minority of users actually use.
I know this because I have inherited exactly this problem before. Even as thousands turn to millions (let alone billions), anything worse than O(n log n) becomes painfully obvious rather quickly, and it tends to happen to the entire system at once. Suddenly you have to fix a whole bunch of "this feature is unusable slow, you broke my business" problems at the same time, while getting further and further behind your actual planned schedule.
> Remember, give me the next 100 of X less than Y is an O(N) operation.
...IFF you spent the time to implement caching (and deal with the additional cache-invalidation complexity that comes with it).
Many apps do paging in the database, or only do expensive joins after paging. Heck, many apps are even stupider: they query the entire dataset from the DB, do paging in code, then throw it all away afterwards (so every time the client asks for the next page, the entire result set is transferred from the DBMS).
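The two approaches can be sketched in a few lines; this is a hedged illustration using sqlite3, with a made-up `items` table, not any particular app's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO items (price) VALUES (?)",
                 [(i * 0.5,) for i in range(1000)])

# Paging in the database: only one page ever crosses the wire.
def page_in_db(last_id, page_size=100):
    return conn.execute(
        "SELECT id, price FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size)).fetchall()

# The "stupider" approach: fetch everything, slice in code, discard the rest.
def page_in_code(page, page_size=100):
    rows = conn.execute("SELECT id, price FROM items ORDER BY id").fetchall()
    return rows[page * page_size:(page + 1) * page_size]
```

Both return the same page; the second transfers the full result set from the DBMS on every request.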
I would only expect to see something like this at a company with glacial bureaucracy because that's a technique that was already garbage in 2007 when I had to babysit an app whose devs decided to retrieve a user's entire object graph whenever doing a User#show.
10-second load times are similarly the domain of the DMV and trivial hobby apps on a free Heroku dyno.
At the risk of exposing myself as an FB user and thus older than 25, I think a better example for discussion is FB's wall sort: "Top Stories" vs. "Most Recent." If you're like me, you like "Most Recent," but Zuck has his own ideas about how I should live my life and will switch me back to "Top Stories" anywhere from 6 to 24 hours after my last visit. You cannot hard-set your wall to "Most Recent." Educated me would think that sorting by date descending is the O(nlogn)-iest of sorts, yet here we are, with Big Data Massage turning out to be a forced default.
Point being, I hope: it's not just a choice between 10s loads and the elimination of features. The difference is rarely that stark, and with simple attribute sorts (even multiple) should be nigh-on unnoticeable.
I have a heuristic that if you're doing temp tables and derived values and whatnot, you're edging into reporting, so denormalizing and aggregating makes a lot of sense for that.
Apologies if I've ranted entirely past your point!
As an FB nonuser much older than that, I'm amused... The internet wins eventually.
> 10-second load times are similarly the domain of the DMV and trivial hobby apps on a free Heroku dyno.
Unfortunately, the app I was talking about:
1) Allows customization of several views, including adding several columns that are expensive (and users can add all of them, if they want). This was not a big deal with hundreds of items, but is with hundreds of thousands.
2) Has a hierarchy above these objects, but doesn't enforce any context. So literally you can load this view for the entire system (which can be hundreds of thousands of objects, with some columns that summarize data from tens of millions of rows)
3) Some of the expensive calculated values are time-dependent, eg "past hour", "past day" etc, which also makes pre-calculating a very expensive operation (not impacting user time directly, but adding overall load to the system).
4) Was built on top of MS SQL server, deployed both as a "cloud" model and as a product on-premise at customer sites. This meant several customers were heavily invested in SQL licenses, and beefy hardware to run it all. Even if we overlooked the challenge of updating our own product (half a million lines? give or take) and data migration (dozens of ten to hundred-GB databases, and hundreds of smaller ones), changing to another DBMS and telling our customers that their tens-of-thousands of dollars investments in hardware/licenses were useless was unacceptable.
So yes, in this case it was possible to get multiple-minute load times in the worst cases. Why would anyone want to see many of these columns for hundreds of thousands of rows at once? I have no idea, but they could, and it was slow, and they complained.
To your point: this absolutely should have been considered 'reporting', but the app didn't make that distinction. This was the main view of data, essentially, and it was possible for users to make it really slow.
We ended up redoing part of the UI at one point, and during that time removed sorting from some of the columns. Users complained at the time, but got over it. We also put a hard limit of 20,000 objects: more than that and it just says "please pick a group" first. Again, users complained, but got over it. Going through that was crappy, but the constant complaints about the app being slow stopped. I should note, there is a separate reporting UI that does allow sorting by everything.
I wanted to go much further than that, and completely re-think the UI and some of the design concepts (such as not allowing a single list to be viewed for the entire system at once, and instead either summarizing multiple groups, or displaying detail for one) but at the time the business side (including up to the CEO) was unwilling to do a change like that, and we also had no formal product management to really push the issue.
It has an old search/filter interface that allowed filters on arbitrary fields, and indeed filtered results would sometimes take 10-30 seconds to display.
Now it is being phased out in favor of a new, prettier, faster interface that only allows filters on some fields and, on top of that, has a much lower limit on the number of filtered results displayed.
It does load a page of results on screen faster, but leads to never-ending frustration when trying to do anything useful: now, instead of filtering on what I actually want, I have to iteratively do a zillion filters on irrelevant items, trying to fit within the row limit, combine the output, and then filter that again manually, sometimes taking hours to do what is possible to do within minutes in the old version.
On top of that, for the longest time an absolutely essential filter was hardcoded to be on in the "new" version, making the whole thing entirely useless :(
The whole discussion reminds me of discussions of "feature creep" in products like Office. Indeed, most people use only ~20% of the features -- the problem is that these 20% are not the same for everyone.
Yep. The trick is walking the line: if you don't develop the features in the first place, you may not get some of those customers because you don't meet their needs. If you try to be everything to everyone, you end up with everyone being upset later when it's slow/buggy and you're inevitably forced to change it.
There's also going to be some portion, call it 20% or 5% or whatever, of your features that almost nobody uses, and that's the stuff that really can bog you down. I think it's healthy for a company to really take a hard look at this from time to time: take into account the costs of those features to calculate the profitability of those customers, consider their future potential, the market reaction/perception of having them (or losing them), and really consider if those features (and customers) are truly worth keeping.
Very well could be, and this is actually a very good example of why unbounded features like this are terrible to have.
Using a main interactive view in the UI is not the way to achieve this. Data export, reporting systems, and/or APIs are the ways, and the big difference is performance is less of a factor. Over a second for a UI is a long time, but several seconds, or even several minutes, can be acceptable for data dumps/exports (that usually happen in the background, not user-interactive).
One way to look at this: write a user story of the form "As a _____, I want to ____ so I can _____", and then use it to rationalize modifying a primary UI (in a performance-negative way). If it's truly compelling enough, then spend the time to make the UI work. In my experience, that type of thing is better left to background exports (where performance is not critical).
As to removing features to avoid slow pages when using the feature, that's a horrible tradeoff.
Only in simple CRUD applications where the data being displayed maps 1-to-1 to how the data is stored.
In any non-trivial CRUD application this is not the case, and paging/filtering/sorting on a field that is being displayed quickly becomes non-trivial if you are trying to do it for every field.
The programming cost of allowing sorting/filtering on any field (especially if performance is not a concern) is small. Depending on the dataset, the resource (and programming, though this is usually less of an issue) cost of supporting efficient filtering and sorting on every column rather than a limited subset may not be.
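As a sketch of why the programming cost is small (column names are hypothetical): a single whitelist-driven helper covers every field at once, while making each one *fast* remains a separate per-column indexing decision.

```python
import sqlite3

ALLOWED = {"name", "price", "created"}  # whitelist prevents SQL injection

def list_items(conn, sort_by="name", descending=False):
    if sort_by not in ALLOWED:
        raise ValueError(f"cannot sort by {sort_by!r}")
    order = "DESC" if descending else "ASC"
    # Identifier was whitelisted above, so interpolating it is safe here.
    return conn.execute(
        f"SELECT name, price, created FROM items ORDER BY {sort_by} {order}"
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT, price REAL, created TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?, ?)",
                 [("b", 2.0, "2024-01-02"), ("a", 1.0, "2024-01-01")])
```

Adding a fourth sortable column is one line in `ALLOWED`; whether it also needs an index depends entirely on the data.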
> Only a small percentage (6.1%-16.7%) of configuration parameters are set by the majority of users; a significant percentage (up to 54.1%) of parameters are rarely set by any user.
The necessary question is, how many installations used at least one rarely-set parameter? Would those installations have happened without that parameter? How much effort went into developing that parameter, versus the profits of those installations?
The pull-out quote being:
A lot of software developers are seduced by the old "80/20" rule. It seems to make a lot of sense: 80% of the people use 20% of the features. So you convince yourself that you only need to implement 20% of the features, and you can still sell 80% as many copies.
Unfortunately, it's never the same 20%. Everybody uses a different set of features.
Does "| while read line" not do what you want here?
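For reference, a minimal sketch of the construct in question: it feeds each line of a pipeline into the loop body, one line per iteration.

```shell
# Each iteration receives one line of stdin in $line.
# IFS= and -r preserve leading whitespace and backslashes.
printf 'alpha\nbeta\ngamma\n' | while IFS= read -r line; do
  echo "got: $line"
done
```

One caveat worth knowing: in most shells the loop body runs in a subshell, so variables set inside it don't survive past the `done`.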
I'd suggest the proper response to this is, "Make the common case easy, make the uncommon case possible."
At any rate, it's useless to talk about this in absolutes. There are times when your option is better and times when it's not.
I'd imagine it's when you're in full control of the input and output format for each tool, each tool can be configured to work with varying formats, when there are times you want to skip steps in the processing pipeline...
But personally, I find composing separate tools helps me learn and debug because things are compartmentalized.
I was messing with a tool (where the equivalent at other places I've worked was 5 or 6 separate command-line tools) whose command-line help is 1,441 lines long. I dread having to look through it and still don't really understand how to use most of it.
The problem is that the Windows way seems prevalent (or maybe I'm overestimating it). Perhaps, much to my joy, the focus will shift soon, but it doesn't look like it will, given the abundance of new installs of the Win10 nagware.
You can still replace one tool with another, if you like. I think the idea of many simple tools is good in theory, but seeing that most important codebases last for ten or twenty years, using many tools increases the likelihood that you've introduced a dependency on a product that may be discontinued or unmaintained. "Easy to replace" is fine if you're very familiar with the code and all the assumptions it makes about the product's behavior. But if a tool that some module of your system depends on, one that no one has touched in more than five years, suddenly disappears, replacing it may be almost as big an undertaking as replacing the one big tool, and the chances of that happening are greater (because you have more such tools, and small tools are more likely to be discontinued than big ones).
: Say, by no longer supporting your new hardware/OS.
Imho, tools should be treated just like code. You wouldn't write one big function that takes half a dozen parameters to change its behavior. You refactor it into multiple functions that are easier to understand and improve when requirements change.
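A minimal sketch of that refactor (the function names are made up for illustration): instead of one function whose behavior is steered by a pile of flags, each behavior gets its own small, obvious function.

```python
import csv
import gzip
import io
import json

# Before: one big function, behavior controlled by flags (sketch only).
def export(data, as_csv=False, as_json=False, compress=False):
    ...  # branches on every flag combination

# After: each variant is its own small function, composed as needed.
def export_json(data):
    return json.dumps(data)

def export_csv(rows):
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

def compressed(text):
    return gzip.compress(text.encode("utf-8"))
```

Callers now compose exactly the behavior they want, e.g. `compressed(export_json(data))`, with no flag interactions to reason about.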
A lot of configuration options come into existence not because they are actually useful, but because the team had a disagreement that they couldn't settle, or because writing code that just figured out the right thing to do would be too complex. Options should be a way of empowering the user, not simply offloading work to them.
It's very important to realise that our goals are not just related to the concrete problem at hand, but include things like "get out of the way and don't make me feel stupid".
The only way to make something both very simple and very useful for a domain is to cunningly cut out parts of the domain. The end result is a pretty, simple and opinionated tool that doesn't really solve anything for anyone, but sure looks good on screenshots.
> I'd rather have confusing software that does what I want, than simple software that doesn't do what I want.
Of course you would, or I would, or anyone who browses HackerNews would -- but it's not necessarily the best UX.
This was in XP era. Today the likes of Ubuntu/Gnome try their best to dumb down Linux.
That's how you get shiny toys, not useful tools.
The advice here is good if your primary goal is to successfully sell glorified interactive ads in a market where people buy software by looking how pretty it is and making their choice in 2 seconds. If that's your goal, then ok, one has to make a living, but in this case I don't want your software, and I'll discourage everyone from using it.
Sane defaults + flexibility are the way to go. Software should empower the user, and part of this empowerment may be suggesting a particular workflow, but it should not force one to use that workflow and never stray from it. And don't give me the excuse that "more options == much more complexity == much harder to add new features". That's only true if you write utter spaghetti code, and fuck it, programmers get paid so much precisely so that they do this right.
One issue is that "sane" changes over time, but it's hard to change defaults without upsetting people who suddenly find their defaults changed because of an update.
(Although I'd argue that's best solved by letting them be upset until they understand why the default changed.)
E.g. a video capture which previously defaulted to 320x240 might later default to 640x480 and now default to 1440x720.
It has a simple UI to guide the user into finding and setting the most common options. That UI must be as simple as possible for new users and can be changed regularly, according to the changing needs of the majority of users.
It also has an advanced, generic but type-safe UI (about:config) for finding and setting any possible option present in the software.
Configuration files are the traditional way to achieve this second level, but they don't always help the user in finding the options and are generally not type-safe.
For instance, I was wondering if I could have one tab connect over a VPN while the rest were over my normal connection. I have no idea if that's an option somewhere in the millions of toggles.
EDIT: You know how applications used to have an "Advanced" button that'd give you a nicely organized display of all the "extra" toggles? Whatever happened to that paradigm?
I think your example of about:config is also an explanation for why that went away. It used to be possible to reasonably put all of the more esoteric settings on a single page (or a small, single-digit number of pages), but that's no longer the case for many pieces of software. Also, realistically, the overwhelming majority of users probably already weren't going into the advanced settings, and those that were are probably more capable of handling about:config.
I can't possibly fathom how Firefox tests all of their settings. There's just no way. And that leads to edge cases and buggy software.
A very low percentage of users even change the advanced settings, so any issues which might exist have a very low impact.
If a user does hit a problem with some combination of settings, it is more likely they will report it with some useful reproduction information, since they are likely more of a power user than the average user.
Finally, anyone changing advanced settings is more likely to understand that they may break something and are less likely to be upset, when compared to something fundamental breaking for all users.
Just recently one of the updates for chrome broke the ability to interact with extensions if the user had enabled the flag "Material design in the browser's top chrome". There were tons of angry posts everywhere about how unusable chrome was because of that change, and how they should test their software better...
As for only "advanced users" knowing what happens in there, just do a web search for "chrome://flags" and look at any one of the hundreds of blog posts, news articles, and tutorials showing you how to go in and turn on these few flags that "make chrome better".
Evidence of the stupidity of the masses is not evidence that there is anything wrong with leaving advanced options behind a scary warning and letting people break things sometimes.
In fact, I'd wager that if you took away those advanced options, people would find some more dangerous way of achieving their goals, and there would probably be even more breakage and more blog spam complaints over the status quo.
Things are fine. The sky is not falling.
Removing low-use features is a dangerous game; much of the best software out there (Excel, Photoshop, Viz, etc) is defined by an enormous feature set and a high skill ceiling.
Yes we developers and engineers as a class of human beings love being able to set things to the nth degree as can be seen by the comments here but that's not necessarily the right choice for the vast majority of projects.
Most normal human beings want simple tools that do what they say on the box and don't require much cognitive load at all.
Every setting, every "knob" introduces to the average user a feeling of frustration, confusion, complication.
To an engineering minded person they might find all those options exciting but to someone who just needs to GET SHIT DONE those extra options are hellish and anxiety inducing.
In a few rare cases an application is best designed with more knobs, but that's very definitely the exception, and it's almost always a clear sign of shitty design made by engineers for people they imagine are like them, rather than by professional designers who have actually worked with the general public and know what people actually want, how they think, and what they need.
More knobs is almost always a sign of lazy thoughtless design.
I'll put forth an economic justification: specialization. Experts are happy to cope with the explosion of features, so happy to take the tradeoff of less usable software (if their one additional feature is in there) that they're willing to pay a premium for it. They spend a lot of time with the software, learning the keyboard shortcuts and tuning the knobs.
Conversely, novices like you and me don't like any of that. We want to get shit done, even if that shit isn't exactly what we wanted, or held to the high standards of a professional. The software they need is easier to write -- you don't have to implement obscure features, and you don't have to make the UI cope with all those features.
So the fact that Photoshop exists isn't a sign of lazy design, but of a market for professional users whose niche is being catered to.
Also, the dataset the paper in question focuses on is Apache httpd, MySQL, and Hadoop. These are not technologies for average users or the general public. These are clearly the domain of experts. It's not entirely clear to me that the fact that few people use given Apache config option means I shouldn't be allowed to change it. Sure, most Chef recipes don't use ModRewrite rules, but that doesn't mean I would be better served if Apache removed those knobs.
IME, people advocating for simple UI end up advocating for the removal of features. That's often fine, but it's also why people like Torvalds bounce between desktop environments as things are Simplified with no secret handshake to undo them.
Even the research suggests that over 50 percent of Apache config setting points can be removed without affecting more than 1 percent of the customer base. I'm not clear if that means each feature is 1 percent or 1 percent total. It's also not clear how they can feasibly disentangle people who use obscure features and don't need help with it. (Also, I'd love to see concrete suggestions for improvement from that pool to better understand what counts as obscure here.)
That said, the Apache config has plenty of room for improvement. Why do we have to write chef scripts to calculate MPM settings and avoid OOM'ing the machine?
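The MPM arithmetic alluded to above is simple enough to sketch. The per-child size and OS reserve below are assumptions you'd measure on your own host, not Apache defaults:

```python
def max_request_workers(total_ram_mb, per_child_mb=30, reserved_mb=1024):
    """Rough cap on prefork children so they can't OOM the machine.

    per_child_mb: measured resident size of one httpd child (assumption).
    reserved_mb:  RAM kept back for the OS, DB, caches, etc. (assumption).
    """
    usable = total_ram_mb - reserved_mb
    return max(1, usable // per_child_mb)
```

E.g. on a 4 GB VM this gives `(4096 - 1024) // 30 = 102` workers; the point is that the server could plausibly compute something like this itself instead of leaving it to chef scripts.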
Every single function in Excel, or Photoshop, is there because someone needs it. Should we leave those people out in the cold? Should we fragment major applications into dozens of sub-versions, each with a single specialized feature set, and force people who need more than one to juggle multiple programs?
I can get behind the idea of a "lite" program and a full-featured program in any given category. And, indeed, there are plenty of simple image manipulation programs for people who don't need all of Photoshop. (I'm not sure if there's an equivalent for Excel, but I'm also not sure Excel needs any changes to be accommodating to very basic spreadsheet use.) What's the benefit in removing options that someone is using?
Other options can live comfortably behind buttons like "One Time Configuration Settings" or "PROBABLY NOT WHAT YOU WANT, DON'T CLICK" - there's no reason that Insert mode in Word needs a bumpable hotkey. But they should still be there. There's nothing worse than a program that obviously supports an operation, but has removed your ability to do it any way except a few presets.
Of course, all of this conflates 'professional' software like Excel and Photoshop with 'casual' software, but even in the casual case there's a lot to be said for not actually removing features.
It's absolutely true that "more knobs" is often evidence of laziness, or bad design, or sloppy thinking. Many (most?) programs should have fewer knobs than they do, and should elevate a few of their knobs while burying the rest. Knobs are distracting and confusing even for power users, and there's a lot to be said for putting 90% of the content behind a big red button reading "ALMOST CERTAINLY NOT WHAT YOU WANT".
I primarily objected to two things here:
One is the failure to differentiate general-use software from poweruser software; Photoshop and its ilk are hard to use, but their feature-rich approach is what makes them employment-worthy skills. Glibly applying usage statistics to Photo Viewer and Photoshop equally feels like a category error to me.
The other is bad statistics. "many features are used by <2% of users", well great. But that isn't grounds for hiding/removing them unless it's the same set of users! Most of the data gathered in this article overlooks questions like "will you break 2% of workflows, or 90%?" and "among rarely-used features, how many are one-time preference settings?"
I don't mean to imply that most software is good, we all know better than that. Fewer knobs should be a goal, when all too often more knobs is a goal. But that doesn't mean I'm especially impressed with this treatment of it.
I use photoshop maybe once every two months and have found that its interface is completely inaccessible if you don't use it full time.
I understand that it's great to have full control over everything, but there's a point where it's overkill in Getting Shit Done. Hell, it's gotten to the point with me where I'd rather use cv2 over photoshop/gimp for most things...
It doesn't make Photoshop a bad tool or overkill. It makes your choice in using it bad, and makes it overkill for your specific task.
A nail gun isn't a bad tool because a hammer is simpler and more intuitive, and a hammer isn't a bad tool because a rock is simpler and more intuitive. We don't expect roofers to use rocks, and we don't expect someone hanging a picture up to use a nail gun. Specialization is not a bad thing.
Really, I'm tired of being told "read the documentation" and "You just haven't used it enough" when documentation is given 5% of the energy it should be given.
I'm convinced that the only reason so many SV projects exist is bad config management in OSS projects.
It is possible that these children might have some useful ideas about how to use their tools, which you did not anticipate.
I would encourage you to use this analysis to decide 'what to put on the basic config knob box', rather than 'what freedoms will I permit my users'.
this is the important bit to me.
i don't like using software that feels like it could fall over because i turned the wrong combination of knobs the developer didn't anticipate.
opinionated, fixed configuration is a nicer experience than an app that can do anything, if you bend it to your will.
Also, nothing really prevents one from having the settings hidden in the "advanced" menu. It's not either/or.
In a program I develop, I sometimes remove an option and people complain, then it turns out they were randomly tweaking that option to try to solve some problem that it was never going to solve in the first place. They just liked the feeling of being able to control _something_. So those are cases of bad options that should have been removed because they made things harder for people. People just can't help but try changing settings when something doesn't work, and hiding them in an "advanced" menu doesn't deter them. That's where all the most powerful settings are, after all! It encourages frustrating time-wasting.
Even when those opinions are wrong?
1. How much is the problem of having too many configuration options mitigated by having sensible defaults?
"a significant percentage (up to 48.5%) of configuration issues are about users' difficulties in finding or setting the parameters to obtain the intended system behavior; a significant percentage (up to 53.3%) of configuration errors are introduced due to users' staying with default values incorrectly."
The former means the configuration options provided do not match the ones desired by the user, not that there were too many or too few. If anything, it encourages software authors to provide more knobs. The latter doesn't have a strong enough correlation to the number of configuration parameters at all. For example, say all these errors happened at Google because of high load, while they were using Apache with a default configuration built for small and medium-scale websites.
2. How does having many configuration options affect the software update process?
3. What percentage of users are unhappy due to having too many knobs (decision fatigue, fear of missing out)?
17.3%–48.5% of user calls to the technical support center and questions posted on forums. I assume this is a conservative estimate.
4. Does having too few knobs cause software to be forked and cause fragmentation?
5. Does the result differ when applied to application software (vs. system software)?
Not in scope.
Need more research.
I use VLC. 99% of the time all I need is the standard ff, play, rew and the cue bar. But I'm also one of those users that uses many of the other buried features. If they weren't there I'd switch players, but they're buried, and so the basic UI is uncluttered.
I've always assumed that enterprise software has complex configuration options because it's more bespoke than the supplier would have you believe. So rather than delivering a piece of software tailored to the needs of one or a few customers, the new behaviour just becomes a setting in a much, much larger system. The goal being to have just one code base.
For example, we have 5-6 different settings for password policy. Most customers use the defaults, but the ones in banking and government usually have very specific security rules that require them to tweak the settings. One allows 10-char passwords but requires them to expire in 60 days. Another is 12 characters and 90 days. And so on.
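That pattern tends to look like this in code (a hedged sketch with hypothetical field names, not the actual product): the vendor ships one checker, and each tenant's rules are just configuration values.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class PasswordPolicy:
    min_length: int = 8     # product default
    expiry_days: int = 180  # product default

    def is_valid(self, password: str) -> bool:
        return len(password) >= self.min_length

    def is_expired(self, last_changed: date, today: date) -> bool:
        return today - last_changed > timedelta(days=self.expiry_days)

default = PasswordPolicy()
bank = PasswordPolicy(min_length=10, expiry_days=60)        # one customer's rules
government = PasswordPolicy(min_length=12, expiry_days=90)  # another's
```

Every new customer requirement becomes another field with a default, which is exactly how the setting count grows while the code base stays singular.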
Their solution was a UI change, from menus and toolbars, to the oft-maligned Ribbon. Advanced settings were kept almost the same as in previous versions, and the button to access those was an arrow in the section containing common settings.
That's my problem with the oft-maligned Ribbon: it changes its appearance based on the window width.
So the first time an option might be a large-icon-with-text.
The next time you look for it, it may be a small icon, have no text, or be completely hidden. That's 4+ visuals for the same option.
Do it frequently enough and you can pretty much do it blind.
Most UX designers are not as good as David Smith; they worship Apple/Jobs' “function follows form” misunderstanding of the Star GUI, and think you need to remove functionality in order to remove complexity.
² video: https://www.youtube.com/watch?v=_OwG_rQ_Hqw text: http://archive.computerhistory.org/resources/access/text/201...
As I recall, product management forced the issue by making it so the dev team didn't have access to their own machines to tweak parameters. It didn't take long in VAX years for the OS to become more self-configuring. It was a draconian approach, but helped customer satisfaction.
But too many knobs? How about Too Many Buttons?!
(It's a DJ parody video and actually hilarious):
"User-friendly" depends entirely on the user in question. I find Gnome 2 to be considerably more "user-friendly" than Gnome 3, because Gnome 3 frustrated my attempts to do what I expected to be able to do.
For that matter, I consider bash (and coreutils and family) considerably more user friendly than Gnome, for a given value of "user" (me).
This in turn lessened the transition friction, as people tried to do something but did not find it where they expected it from memory.
"That which _can_ be configured _must_ be configured",
corr: "Defaults Aren't".
Inspired by my early experiences with early Windows 3.