I remember getting into a debate about this subject with a self-described UI expert. It was part enlightening and part frustrating. He failed to see that so many of his ideas were US-centric and didn't make sense in any context outside the English-speaking world; to be honest, they barely made sense in English. The author brings up Gmail, which has an icon for "archive" that makes zero sense no matter how I try to connect the dots, but at least Gmail gives you an out: turn the confusing icons into text.
Despite being a programmer -- though not one doing front-end work -- I find myself struggling more and more with UI, and especially with icons. I think so much of it reflects the current trend of zero empathy for the end-user. Fortunately, I know enough about computers to get around many of these issues, but icons are the one area that I still struggle with.
Unless you have some site with several million users, teaching end-users is a wasted effort, and it does well to either piggy-back on other ideas or use text. Even Facebook is using text, and it seems a little odd that anyone smaller would feel they have some lessons to teach the end-user about UI. UI, in my opinion, doesn't mean "pretty," it means "usable," which is sort of implied by U meaning "user." If a significant portion of your user-base is computer illiterate, which will often be the default, it does well to design the UI for the lowest common denominator. Once your user presses the back button because your icons made them feel stupid, you've lost a customer, and that is a very high price for "pretty."
I don't think this is "zero empathy for the end-user", I think this is desperation. As companies and "product managers" get more statistical data on real-world clicks and usage and do more usability studies, they see that 90% of users don't know how to do anything. And they get desperate to "fix" that.
They go for drastic re-designs, get rid of all the text because "nobody" reads it, hide all the features because they "confuse" and "intimidate" users and "nobody" uses them, and end up with something that requires some real sleuthing from a young professional software developer like me to figure out, on behalf of friends and family.
Then, when 95% of users don't know how to do anything, they get more desperate, and the result is even worse...
Is that a thing? I can't speak in general, but I know that for the large mobile app I'm working on, we are slowly migrating to our own solution, and I highly doubt that users could deactivate it.
I think it's meant in the sense that power users are by far the most likely people to disable telemetry or "Send usage data to help improve this software" functions, if such a setting is offered.
Especially if the option is buried somewhere deep in the settings menu. Also the most likely users to have adblockers, though that's getting more and more common.
Depends. If you use a third-party, client-side telemetry provider, it's probably on blocklists. If you track usage yourself on the server or integrated into your JS, then yes, you'll probably get good data.
if your user tracking is served from a 3rd-party domain, you won't see me or my family members in your stats. ublock, ghostery, requestpolicy. aggressive settings.
Most users don't change any settings. Which is why you want sane defaults. I first realised this (very much to my chagrin) when screen-resolution settings were first reported via JS. It was obvious (from the low and poor values selected) that the vast majority of users, under Windows, never right-clicked their desktop and tweaked display settings (800x600 @60Hz on CRTs, or worse, being the most common values).
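Roughly, all a page had to do was read a few properties and report them home -- a minimal sketch, with the logging only there for illustration:

    // Sketch: what a page can read about the user's display via JS.
    // window.screen.width/height reflect the configured desktop resolution,
    // colorDepth the configured colour depth -- exactly the settings most
    // users never touched.
    const display = {
      width: window.screen.width,           // e.g. 800
      height: window.screen.height,         // e.g. 600
      colorDepth: window.screen.colorDepth, // e.g. 16
    };
    console.log(`${display.width}x${display.height} @ ${display.colorDepth}-bit`);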
The only users who (in significant number) change settings are power users. A small minority in any regard, but also the ones who then are more likely to defeat various tracking and feedback systems -- disabling feedback or preemptively blocking sites and systems which profile system performance, including Google Analytics, New Relic, etc.
Yes, if you can instrument within your tool (Web page, app) for response, great, but that's More Work For You, so it's far too easy to fall back on standard services. Which your best users are most likely to block....
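To be fair, the More Work isn't that much work. A minimal first-party sketch (the /usage endpoint and the event names here are made up for illustration) could be:

    // Sketch: first-party usage instrumentation. Events go to a same-origin
    // endpoint ("/usage" is hypothetical), so generic third-party blocklists
    // are less likely to catch them than Google Analytics, New Relic, etc.
    function recordUsage(action: string, detail: Record<string, string> = {}): void {
      const payload = JSON.stringify({ action, detail, at: Date.now() });
      // sendBeacon survives page unloads; fall back to fetch if it's
      // unavailable or refuses the payload.
      if (!navigator.sendBeacon?.("/usage", payload)) {
        fetch("/usage", { method: "POST", body: payload, keepalive: true }).catch(() => {});
      }
    }

    // Wire it to the widgets you actually care about, e.g.:
    recordUsage("toolbar-click", { icon: "archive" });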
I'm inclined to declare ploxiln's Law of Negative Usability Cascades.
This dynamic does seem to strongly reflect how marginal UIs get progressively worse.
Don't design for idiots.
Oh: and there's some truth in the observation that 90% of users don't know how to do anything. People are far less rational and sensate than we typically believe.
Claims that users don't know anything make me really frustrated; they just need to learn something to use a new application, and even a change of color can be confusing. I usually end up with the most common design my target users already use.
My bank did a redesign 3-4 years ago, replacing labels with icons. To do anything, even now, I keep clicking the stupid icons until I land on the proper page.
What if the answer is that humans weren't meant to use computer interfaces, and that this is the slow realization of that fact. What if the rampant increase in computation power and software has outpaced the human mind's ability to adapt (limited) and evolve (takes millions of years, considering these types of attributes don't usually select before breeding age), and this is the end for humans and the beginning for machines?
This is a huge issue with open source software - everything seems to be named Libre- these days and I haven't found many Americans who can pronounce it...let alone people in China or India.
More than that, Libre- is a very inward-facing name. It's not AwesomeOffice, FriendlyOffice or PowerfulOffice. It's "LibreOffice", something that simply doesn't make any sense to most English speakers.
The implication is "we choose this as our adjective, because we care more about our ideology than the quality of your user experience". Ok, the ideology says that "because we build our software like this, you get a better user experience", and I have a lot of sympathy with that. But by the time you have to explain the name like this, you've lost the potential user's attention.
(OTOH, I also suck at software naming so probably shouldn't preach.)
> The implication is "we choose this as our adjective, because we care more about our ideology than the quality of your user experience".
So how does your implication theory explain the fact that TDF is currently hiring a UX mentor and having a tender to develop and incorporate usability metrics collection for LibreOffice?
They didn't just rename the project for fun - Oracle held on to the OpenOffice trademark and the forkers had to think of something to replace it.
Since it's not particularly awesome, friendly or powerful, it would be a bit of a stretch to name it in such a fashion IMHO. It is in fact free though, and I never wondered about the "libre" part, but then again, I'm from Europe...
IMO, NGINX has taken the blue ribbon from GNU for stupid free software name (I say this as someone who loves both). How many people here actually know that NGINX is supposed to be pronounced engine-X?
I always just assumed it was like 'enjinks'; even the back-end devs who deal with it much more regularly than me call it that as well. The more you know. 'SQL' has that issue as well I suppose.
The problem with the text-only approach is that you can't change the layout or the names of the functions visible to the user in your app, or you'd face a severe backlash or blowout from your users.
But in the case of icons, you could still move things around and the user would find it less demanding to locate the function's new place by identifying the pictogram, taking into account that we humans are better equipped to identify pictograms and shapes quickly and efficiently than text.
PS: Assuming a conventional method or approach to this problem and not a hybrid or innovative one.
> But in the case of icons, you could still move things around and the user would find it less demanding to locate the function's new place by identifying the pictogram
FYI, this is almost certainly not true. Research suggests that users only recognise a very limited set of icons in an application, and where there are extensive toolbars full of different icons, it is often the position that the user is recognising more than the icon itself. Thus reorganising things like toolbars can have a profound negative effect on usability, and it seems likely (though I'm speculating now) that this would be much worse than something like reordering text items in a menu.
> "UI, in my opinion, doesn't mean "pretty," it means "usable," which is sort of implied by U meaning "user.""
In frontend land this is often the distinction between UI vs. UX. UX means usability, UI is the more aesthetic side of the exercise, though of course there is considerable overlap.
Part of the issue - at least from my corner of the industry in mobile app dev - is that a lot of good UI designers have renamed themselves UX designers - because UX designers are in greater demand - but are poorly qualified to actually assess and design for usability.
There are also more and more designers crossing over from other design disciplines - graphic designers and print designers are often transitioning to UX design and the results are not always good. To some degree they can bring fresh ideas to the table, but often it results in a lot of designs that aren't competent at a usability level.
> "If a significant portion of your user-base is computer illiterate, which will often be the default, it does well to UI to the lowest common denominator."
I agree with the gist of what you're saying - but I think there needs to be a bit more nuance here.
We're no longer in the 90s; the user base for most websites and apps is largely not computer illiterate. They are in fact quite technically savvy overall - the issue isn't that your users are technologically ignorant, it's that they're not skilled in your software.
If you look at the modern smartphone user they have a lot of learned expectations and behaviors and they know full well how everything works. The issue comes only when you try to break the established knowledge and do your own thing - which is exactly what the blog post here is about, icons that aren't universally established and have vague meaning to non-experts (and even some experts).
One of the hardest things as frontend people is retaining the first-time-user mindset. You use your own software day in and day out and become experts at it, and your designs and considerations start swimming around that - you are more inclined to build power-user features and implement power-user shortcuts, and you gradually lose the ability to assess your own product from the perspective of a new (or even old, but irregular) user.
The issue isn't that people are technologically illiterate, but that they are not specialists in a very particular expert-user UI you may have designed.
This is made worse when designers start openly copying competitors' UIs, so now not only are you pursuing a confusingly non-standard UI, but this non-standardness starts becoming a meme in your specific niche.
The distinction is important IMO - with the exception of a few demographics (retirees?) it's actually pretty safe now to expect a reasonable amount of tech savvy from your users, but you have to recognize what you think is universal trained user behavior vs. actually widespread user behavior.
No, the term "UI design" has always meant the “usability” side of the design in professional circles. The aesthetic side has been called visual design or graphic design. However, the history of the whole field has been a fight to find authority inside organisations, and words has been weapons and casualties of this battle.
UI design was always about the “usability”, the “how” and “why” of the design, not the looks. For example, the first edition of Alan Cooper’s About Face (1995) had a subtitle "The Essentials of User Interface Design”. It was more or less the UI design bible back in the 90s for the practitioners.
However, back then, most managers and technical people incorrectly thought that the job of a UI designer was to “make this ugly thing we built pretty”. This caused a problem for UI design as a profession, and people inside the field started to use the term "Interaction Design”. It was also a better term because it highlighted the temporal part of the design exercise: the user interaction flow. This is often the hardest part to design right, and the term clearly separates interaction design from designing visual surfaces. This change was reflected in the 2nd edition of About Face (2003), whose subtitle was now “The Essentials of Interaction Design”. Also, IxDA, the Interaction Design Association, was founded the same year.
At the same time, there was another rising term, user experience design. It was used especially by Don Norman. While Interaction Designers tried to drive home the point that interaction design was a separate profession from visual design, Norman took a more holistic approach. Instead of separating the fields, he actually highlighted the importance of aesthetics for the user experience, which culminated in his book Emotional Design (2005).
By the late 2000s, most of the leading software organisations had already understood the importance of UI design. Also, the rise of the startup culture and smaller teams meant that more and more people in our field worked on the product-level decisions, instead of working on the nitty-gritty of technical architecture. User experience design grew to mean a more holistic take on how the user experiences the product and became more or less the umbrella term for the field.
> In frontend land this is often the distinction between UI vs. UX. UX means usability, UI is the more aesthetic side of the exercise, though of course there is considerable overlap
I've never understood how the concept of "UI" ever evolved to not encompass usability at its core.
I was happy to cede UI territory to the designers who said they'd do it better. Even when I was insulted to my face: because I'm a programmer and like my arcane typing-based interfaces for myself, I couldn't possibly understand making usable software for others. I bit my tongue, put my head down and went back to the things I wanted to focus on anyway.
In the time since, user interfaces have experienced gradual incremental improvement just like everything in computing. But rather than going for "nothing else left to take away" design (or perhaps due to a shortsighted version of it), visual clutter was just turned into mental clutter. The number of WIMP nouns and verbs the average user has to understand is higher than ever, with less consistent behavior, and overflowing with one-off, slightly different implementations to remember.
Rather than any grand improvements, I've only seen degenerate phenomena for making user experience worse. My favorites being: A well-managed brand name, good aesthetics and software reputation means that when someone (eg. Apple) ships a bad interaction, users that would (rightly) blame the software first before working around it, start blaming themselves first for "not getting it". Or that products evolving quickly (such as the early years of Facebook) are bewilderingly unusable to people that don't log in often enough to keep up with the UI changes as they happen in small increments.
Then after waking up from their bender the UI folks start talking loudly about UX. I thought that was the goddamn point all along! Oh, but what do I possibly know? I only have formal training in what we used to call Human-Computer Interaction, and I quite like my console shells. And shells have bad aesthetics, so they must have bad UX too. And at least the UI guys managed to get 1 of the 2, I guess I'll just go back to holding my tongue now.
"UI" is the user interface. It's the facility through which the user interacts with a tool -- both receiving information and inputting their own responses.
"UX" is the user experience, which is the totality of the user's interaction with the tool. Not just how it looks, but inclusive of goals, success or failure in accomplishing them, frustrations or joys in the process.
UI used to mean the totality of the user's interaction with the tool, including things like usability, accessibility, learnability, and so on. After all, how can you possibly build a good UI without an awareness of these related topics?
Today, I think UX::UI as Agile::programming or Lean::startup.
That is, people who knew what they were doing did most or all of the things the buzzword implies before, and they had probably also a broad skill set that covered most of the useful areas implied by the modern buzzwords. However, a lot of other people feel the need to attach buzzwords or build silos to subdivide a field where they don't have a comprehensive skill set, because of all the usual motivations.
Here's one argument you can use against icon advocates: "The ancient Egyptians already tried hieroglyphics. They didn't work. What makes you think they will this time?"
Still waiting for a good answer to this one. When someone does successfully counter it, I imagine the rebuttal will involve Chinese pictograms or something else that has worked well for a while but isn't likely to survive the next thousand years.
Your main point is correct — no general-purpose writing system has ever been pictographic, not even Egyptian hieroglyphs. But all of your supporting points are wrong.
Egyptian hieroglyphs were in use for about 3600 years, and as other commenters pointed out, Egypt stopped using them because they were conquered by Rome, which then imposed Christianity — which banned a different kind of "icon", killing the hieroglyphs as a side effect. Also, far from being "icons" in the UI sense, Egyptian hieroglyphs were primarily phonetic, although they did have ideographic components — less so than Chinese characters (which are logographic, but only about 4% of them are pictograms) but more so than the Latin alphabet.
The Latin alphabet, which is a modified subset of the Egyptian hieroglyphs (via the proto-Sinaitic and Phoenician abjads and the Greek, Etruscan, and Old Italic alphabets) has been in use for 2100 years, about 1500 years less than the hieroglyphs.
If we're still using the Latin alphabet around AD 3400, your argument from observed adoption that alphabets work better than pictograms would make sense — if hieroglyphs were pictograms, which they mostly aren't.
Every invention of writing except the khipu started out pictographic: Chinese characters (which have been in use since the Shang dynasty a bit over 3000 years ago, contrary to what you seem to think), Sumerian cuneiform, Mayan hieroglyphs, and Egyptian hieroglyphs all have clear pictographic origins. But all of them developed phonetic components in order to expand the range of language that could be written, and those phonetic components came to dominate the script almost immediately, to the point that none of them have an identifiable pictographic-only period in archaeology. In fact, as the numerical nature of the decoded khipus and much of the early Sumerian tablets suggests, logograms for abstract concepts such as numbers may have been in use as early as pictograms or even earlier. So, the actual history supports your idea that systems relying entirely on pictograms have very limited applicability, even though it has nothing in common with the history you imagined supported that idea.
On the gripping hand, maybe computers are not the same medium as clay tablets and printed paper, and so maybe past experience is not entirely applicable.
Have you ever actually studied a language involving "pictograms" or are you just making assumptions? I didn't take Chinese long enough to get a feel for it one way or the other but in Japanese kanji will remain in the language from here until the end of the Japanese language. Japanese has a limited set of sounds and Japanese written completely phonetically is not only extremely long, but once you learn the kanji it becomes much more difficult to read and make sense of a sentence without the kanji.
I actually have studied Japanese for a while and have no clue what you are describing as extremely long.
In hiragana you have 48 letters for sounds which in most cases are a consonant + vowel. Hence Japanese written in hiragana or katakana is shorter than it would be if it were written with a western alphabet. Japanese is probably not the language with the shortest words, but neither are they exceptionally long.
Romaji is not in use for the most part in Japanese. Romaji would of course be much longer but I was talking about a sentence written out in only kana vs a sentence written out normally with a mixture of kanji and kana.
We stopped using them because we changed the language we speak due to external factors like being conquered and dominated by foreigners who brought their language and writing system with them.
It wasn't a voluntary act, trust me. And with the rise of emoji as a medium of communication, I can say that we -- or the ancient Egyptians -- had it right all along in using hieroglyphs.
I'm afraid that the only one not making any sense on this thread is you. We didn't drop the use of our indigenous writing system in favor of that of the Romans or the Arabs because we thought theirs were superior to ours; we dropped them because we were conquered and dominated and we had to adopt the cultural norms of the invaders.
Also, you're EXTREMELY overestimating the individual agency of the ancient Egyptian in this matter. Most of the citizens of ancient Egypt were illiterate and only a certain class or caste had the privilege to be able to read and write but the rest of the population were completely oblivious to those icons and therefore your argument is just baseless.
Colonial conquest is not the primary methodology of linguistic propagation. Trade is.
English is the primary programming language, not because a team of dedicated Anglo-Saxons is pointing rifles at the heads of brown people, but because American trade provided the most profit motive between 1974 and 2008.
>Colonial conquest is not the primary methodology of linguistic propagation. Trade is.
A "primary methodology" only applies in "most" cases, not all of them.
Lots of languages have perished due to outside forces other than trade -- Hebrew wasn't abandoned because a better trade language came around, but because Israel was conquered and the locals scattered. Other languages were persecuted by decree. Others died because their native speakers were eliminated. And several other ways (e.g. Mussolini tried to smooth out local dialects of Italian using the powers of government plus radio and early cinema).
Aramaic was the primary language of trade for the first civilizations during their initial rise. As other trade empires blossomed, more languages appeared, and as trade empires collapsed, those languages vanished.
Religion can and does preserve memes and linguistics, but only trade can scale it out.
>Religion can and does preserve memes and linguistics, but only trade can scale it out.
If it was just for trade, language learning would be relegated to merchants in those countries and few others.
Government, bureaucracy and occupation matter more than trade in this regard. That's how Latin became the norm in a large area during the Roman Empire -- and not because everybody in those regions traded directly with Romans or couldn't agree on pricing otherwise.
Same e.g. for French -- it's not because of trade that it got big as a language from the 18th to the early 20th centuries, but because the French ran a big colonial empire.
And English, beyond trade, was the language of the big British colonial empire, and afterwards of the culturally dominant power that was America (Hollywood, rock, pop culture, etc.).
>Oh, OK. Those damn foreign imperialists again, always oppressing our hieroglyphics
That would make sense if you had many debates with the parent and he always blamed unrelated stuff on imperialism.
In this case he talks about a specific historical example, where we have a specific historical account, which happens to agree with the parent.
They didn't just go out of style, they were phased out when the country was invaded, along with other aspects of the local culture. For centuries after Alexander, in Egypt you wouldn't get promoted to the higher ranks unless you were Greek (or Greek-speaking), for example, and the local population was kept as second-rate citizens. Then came the Romans, then the Arabs, ...
But clearly you have a specific self-made explanation, that they just "couldn't work" and were abandoned for that, history be damned.
From your link, "pictograms" are glyphs that depict the objects they represent (for example, 下 directly depicts the abstract concept of "down").
Egyptian hieroglyphs, in their ornate form, clearly depict various items like reeds, birds, arms, and snakes. But those glyphs don't actually refer to the reeds, birds, arms, and snakes they show; rather, the ancient Egyptian writing system is largely alphabetic. Their formal letters were just much prettier than ours.
Really, you describe a script which has been around at least three thousand years and is currently being used by over a billion people as "worked well for a while"? What's your basis for saying it "isn't likely to survive the next thousand years"?