The best icon is a text label (thomasbyttebier.be)
603 points by ZeljkoS on Dec 18, 2015 | 203 comments



I remember getting into a debate about this subject with a self-described UI expert. It was part enlightening and part frustrating. He failed to see that so many of his ideas were US-centric and didn't make sense in any context outside the English-speaking world; to be honest, they barely made sense in English. The author brings up Gmail, which has an icon for "archive" that makes zero sense no matter how I try to connect the dots, but at least Gmail gives you an out: turn the confusing icons into text.

Despite being a programmer -- though not one who does front-end work -- I find myself struggling more and more with UIs, and especially with icons. I think so much of it reflects the current trend of zero empathy for the end-user. Fortunately, I know enough about computers to get around many of these issues, but icons are the one area I still struggle with.

Unless you have some site with several million users, teaching end-users is a wasted effort, and it pays to either piggy-back on established ideas or use text. Even Facebook is using text, and it seems a little odd that anyone smaller would feel they have some lessons to teach the end-user about UI. UI, in my opinion, doesn't mean "pretty," it means "usable," which is sort of implied by U meaning "user." If a significant portion of your user-base is computer illiterate, which will often be the default, it pays to design the UI for the lowest common denominator. Once your user presses the back button because your icons made them feel stupid, you've lost a customer, and that is a very high price for "pretty."


I don't think this is "zero empathy for the end-user", I think this is desperation. As companies and "product managers" get more statistical data on real-world clicks and usage and do more usability studies, they see that 90% of users don't know how to do anything. And they get desperate to "fix" that.

They go for drastic re-designs, get rid of all the text because "nobody" reads it, hide all the features because they "confuse" and "intimidate" users and "nobody" uses them, and end up with something that takes a young professional software developer like me some real sleuthing to figure out on behalf of friends and family.

Then, when 95% of users don't know how to do anything, they get more desperate, and the result is even worse...


UI usage statistics are often biased (power users deactivate tracking). A/B testing and watching real end users use your product is better.


>power users deactivate tracking

Is that a thing? I can't speak in general, but I know that for the large mobile app I'm working on, we are slowly migrating to our own solution, and I highly doubt that users could deactivate it.


I think it's meant in the sense that power users are by far the most likely people to disable telemetry or "Send usage data to help improve this software" functions, if such a setting is offered.

Especially if the option is buried somewhere deep in the settings menu. Also the most likely users to have adblockers, though that's getting more and more common.


If it's a web app, you can't do that.


Depends. If you use a third-party, client-side telemetry provider, it's probably on blocklists. If you track usage yourself on the server or integrate it into your own JS, then yes, you'll probably get good data.
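
As a rough sketch of the first-party option (assuming Ruby with Rack; the /events path, the event fields, and the log file are made up for illustration), the idea is simply that usage events are posted to your own domain, so third-party blocklists never see them:

    # config.ru - minimal first-party usage-event collector (illustrative only)
    require 'rack'
    require 'json'
    require 'time'

    run lambda { |env|
      req = Rack::Request.new(env)
      if req.post? && req.path == '/events'
        event = JSON.parse(req.body.read) rescue {}
        # A real system would write to a database or log pipeline;
        # here we just append one JSON line per event to a local file.
        File.open('usage_events.log', 'a') do |f|
          f.puts({ name: event['name'], at: Time.now.utc.iso8601 }.to_json)
        end
        [204, {}, []]
      else
        [404, { 'Content-Type' => 'text/plain' }, ['not found']]
      end
    }

Served from the same origin as the app, requests like this are much harder for content blockers to single out than a call to a well-known analytics domain, which is the point being made above.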


If your user tracking is served from a 3rd-party domain, you won't see me or my family members in your stats. uBlock, Ghostery, RequestPolicy. Aggressive settings.


Most users don't change any settings. Which is why you want sane defaults. I first realised this (very much to my chagrin) when screen-resolution settings were first reported via JS. It was obvious (from the low and poor values selected) that the vast majority of users, under Windows, never right-clicked their desktop and tweaked display settings (800x600 @60Hz on CRTs, or worse, being the most common values).

The only users who (in significant number) change settings are power users. A small minority in any regard, but also the ones who then are more likely to defeat various tracking and feedback systems -- disabling feedback or preemptively blocking sites and systems which profile system performance, including Google Analytics, New Relic, etc.

Yes, if you can instrument within your tool (Web page, app) for response, great, but that's More Work For You, so it's far too easy to fall back on standard services. Which your best users are most likely to block....


It's about blocking google analytics (or whatever else).


I'm inclined to declare ploxiln's Law of Negative Usability Cascades.

This dynamic does seem to strongly reflect how marginal UIs get progressively worse.

Don't design for idiots.

Oh: and there's some truth in the observation that 90% of users don't know how to do anything. People are far less rational and sensate than we typically believe.

P.T. Barnum had something to say on that.


The claim that users don't know anything really frustrates me. They just need to learn a little to use a new application, but even a change of color can be confusing. I usually end up with the most common design my target users already use.


My bank did a redesign 3-4 years ago, replacing labels with icons. To do anything, even now, I keep clicking the stupid icons until I land on the proper page.


What if the answer is that humans weren't meant to use computer interfaces, and that this is the slow realization of that fact. What if the rampant increase in computation power and software has outpaced the human mind's ability to adapt (limited) and evolve (takes millions of years, considering these types of attributes don't usually select before breeding age), and this is the end for humans and the beginning for machines?


Actual UI experts have been advocating Text+Icon for ages:

> always include a visible text label. As Bruce Tognazzini once said, “a word is worth a thousand pictures.”

https://www.nngroup.com/articles/icon-usability/


+1

This is a huge issue with open source software - everything seems to be named Libre- these days and I haven't found many Americans who can pronounce it...let alone people in China or India.


More than that, Libre- is a very inward-facing name. It's not AwesomeOffice, FriendlyOffice or PowerfulOffice. It's "LibreOffice", something that simply doesn't make any sense to most English speakers.

The implication is "we choose this as our adjective, because we care more about our ideology than the quality of your user experience". Ok, the ideology says that "because we build our software like this, you get a better user experience", and I have a lot of sympathy with that. But by the time you have to explain the name like this, you've lost the potential user's attention.

(OTOH, I also suck at software naming so probably shouldn't preach.)


> The implication is "we choose this as our adjective, because we care more about our ideology than the quality of your user experience".

So how does your implication theory explain the fact that TDF is currently hiring a UX mentor and having a tender to develop and incorporate usability metrics collection for LibreOffice?

They didn't just rename the project for fun - Oracle held on to the OpenOffice trademark and the forkers had to think of something to replace it.


Since it's not particularly awesome, friendly or powerful, it would be a bit of a stretch to name it in such a fashion IMHO. It is in fact free though, and I never wondered about the "libre" part, but then again, I'm from Europe...


There is a tendency to make names that are okay for print but hard to pronounce like X.org or XFCE.


IMO, NGINX has taken the blue ribbon from GNU for stupid free software name (I say this as someone who loves both). How many people here actually know that NGINX is supposed to be pronounced engine-X?


I always just assumed it was like 'enjinks'; even the back-end devs who deal with it much more regularly than me call it that as well. The more you know. 'SQL' has that issue as well, I suppose.


"ex dot org"; "ex ef see ee".

At least by me.


I thought the first one was trying to be fancy and be pronounced 'zorg'


"ex org" and "ex face" for me


I've been worrying over clicking archive vs delete for ages in Gmail now.

Even though you've always got undo I'd much rather not have to click undo.

I let out an internal comforting sigh once I realised I could just replace the icons with text.


> Even Facebook is using text,

The problem with the text-only approach is that you can't change the layout or the names of the functions visible to the user in your app, or you'd face a severe backlash or blowout from your users.

But in the case of icons, you could still move things around and the user would find it less demanding to locate the new place of the function by identifying the pictogram quickly, given that we humans are better equipped to recognize pictograms and shapes than text.

PS: Assuming a conventional method or approach to this problem and not a hybrid or innovative one.


> or you'd face a severe backlash or blowout from your users.

https://xkcd.com/1172/

UIs undergo changes, them staying the same for all eternity is not a promise anyone wants to make.


But in the case of icons, you could still move things around and the user would find it less demanding to locate the new place of the function by identifying the pictogram quickly

FYI, this is almost certainly not true. Research suggests that users only recognise a very limited set of icons in an application, and where there are extensive toolbars full of different icons, it is often the position that the user is recognising more than the icon itself. Thus reorganising things like toolbars can have a profound negative effect on usability, and it seems likely (though I'm speculating now) that this would be much worse than something like reordering text items in a menu.


> "UI, in my opinion, doesn't mean "pretty," it means "usable," which is sort of implied by U meaning "user.""

In frontend land this is often the distinction between UI vs. UX. UX means usability, UI is the more aesthetic side of the exercise, though of course there is considerable overlap.

Part of the issue - at least from my corner of the industry in mobile app dev - is that a lot of good UI designers have renamed themselves UX designers - because UX designers are in greater demand - but are poorly qualified to actually assess and design for usability.

There are also more and more designers crossing over from other design disciplines - graphic designers and print designers are often transitioning to UX design, and the results are not always good. To some degree they can bring fresh ideas to the table, but often the result is a lot of designs that aren't competent at a usability level.

> "If a significant portion of your user-base is computer illiterate, which will often be the default, it does well to UI to the lowest common denominator."

I agree with the gist of what you're saying - but I think there needs to be a bit more nuance here.

We're no longer in the 90s; the userbase for most websites and apps is largely not computer illiterate. They are in fact quite technically savvy overall - the issue isn't that your users are technologically ignorant, it's that they're not skilled in your software.

If you look at the modern smartphone user they have a lot of learned expectations and behaviors and they know full well how everything works. The issue comes only when you try to break the established knowledge and do your own thing - which is exactly what the blog post here is about, icons that aren't universally established and have vague meaning to non-experts (and even some experts).

One of the hardest things as frontend people is retaining the first-time-user mindset. You use your own software day in and day out and become experts at it, and your designs and considerations start swimming around that - you are more inclined to build power-user features and implement power-user shortcuts, and you gradually lose the ability to assess your own product from the perspective of a new (or even old, but irregular) user.

The issue isn't that people are technologically illiterate, but that they are not specialists in a very particular expert-user UI you may have designed.

This is made worse when designers start openly aping competitors' UIs, so now not only are you pursuing a confusingly non-standard UI, but this non-standardness starts becoming a meme in your specific niche.

The distinction is important IMO - with the exception of a few demographics (retirees?) it's actually pretty safe now to expect a reasonable amount of tech savvy from your users, but you have to recognize what you think is universal trained user behavior vs. actually widespread user behavior.


No, the term "UI design" has always meant the “usability” side of the design in professional circles. The aesthetic side has been called visual design or graphic design. However, the history of the whole field has been a fight to find authority inside organisations, and words have been weapons and casualties of this battle.

UI design was always about the “usability”, the “how” and “why” of the design, not the looks. For example, the first edition of Alan Cooper’s About Face (1995) had a subtitle "The Essentials of User Interface Design”. It was more or less the UI design bible back in the 90s for the practitioners.

However, back then, most managers and technical people incorrectly thought that the job of UI designer was to “make this ugly thing we built pretty”. This caused a problem for UI design as a profession, and people inside the field started to use a term "Interaction Design”. It was also a better term because it highlighted the temporal part of the design exercise: the user interaction flow. This is often the hardest part to design right and the term clearly separates the interaction design from designing visual surfaces. This change was reflected in the 2nd edition of About Face (2003), the subtitle was now “The Essentials of Interaction Design”. Also, IxDA, the Interaction Design Association, was founded the same year.

At the same time, there was another rising term, user experience design. It was used especially by Don Norman. While Interaction Designers tried to drive home the point that interaction design was a separate profession from visual design, Norman took a more holistic approach. Instead of separating the fields, he actually highlighted the importance of aesthetics for the user experience, which culminated in his book Emotional Design (2005).

By the late 2000s, most of the leading software organisations had already understood the importance of UI design. Also, the rise of the startup culture and smaller teams meant that more and more people in our field worked on the product-level decisions, instead of working on the nitty-gritty of technical architecture. User experience design grew to mean a more holistic take on how the user experiences the product and became more or less the umbrella term for the field.


In frontend land this is often the distinction between UI vs. UX. UX means usability, UI is the more aesthetic side of the exercise, though of course there is considerable overlap

I've never understood how the concept of "UI" ever evolved to not encompass usability at its core.

I was happy to cede UI territory to the designers who said they'd do it better. Even when I was insulted to my face: because I'm a programmer and like my arcane typing-based interfaces for myself, I couldn't possibly understand making usable software for others. I bit my tongue, put my head down and went back to the things I wanted to focus on anyway.

In the time since, user interfaces have experienced gradual incremental improvement just like everything in computing. But rather than going for "nothing else left to take away" design (or perhaps due to a shortsighted version of it), visual clutter was just turned into mental clutter. The number of WIMP nouns and verbs the average user has to understand is higher than ever, with less consistent behavior, and overflowing with one-off, slightly different implementations to remember.

Rather than any grand improvements, I've only seen degenerate phenomena making the user experience worse. My favorites being: a well-managed brand name, good aesthetics, and software reputation mean that when someone (e.g. Apple) ships a bad interaction, users that would (rightly) blame the software first before working around it start blaming themselves for "not getting it". Or that products evolving quickly (such as the early years of Facebook) are bewilderingly unusable to people that don't log in often enough to keep up with the UI changes as they happen in small increments.

Then after waking up from their bender the UI folks start talking loudly about UX. I thought that was the goddamn point all along! Oh, but what do I possibly know? I only have formal training in what we used to call Human-Computer Interaction, and I quite like my console shells. And shells have bad aesthetics, so they must have bad ux too. And at least the UI guys managed to get 1 of the 2, I guess I'll just go back to holding my tongue now.


Your understanding of UI/UX doesn't match mine.

"UI" is the user interface. It's the facility through which the user interacts with a tool -- both receiving information and inputting their own responses.

"UX" is the user experience, which is the totality of the user's interaction with the tool. Not just how it looks, but inclusive of goals, success or failure in accomplishing them, frustrations or joys in the process.

https://m.fastcompany.com/3032719/ui-ux-who-does-what-a-desi...


UI used to mean the totality of the user's interaction with the tool, including things like usability, accessibility, learnability, and so on. After all, how can you possibly build a good UI without an awareness of these related topics?

Today, I think UX::UI as Agile::programming or Lean::startup.

That is, people who knew what they were doing did most or all of the things the buzzword implies before, and they had probably also a broad skill set that covered most of the useful areas implied by the modern buzzwords. However, a lot of other people feel the need to attach buzzwords or build silos to subdivide a field where they don't have a comprehensive skill set, because of all the usual motivations.


Here's one argument you can use against icon advocates: "The ancient Egyptians already tried hieroglyphics. They didn't work. What makes you think they will this time?"

Still waiting for a good answer to this one. When someone does successfully counter it, I imagine the rebuttal will involve Chinese pictograms or something else that has worked well for a while but isn't likely to survive the next thousand years.


Your main point is correct — no general-purpose writing system has ever been pictographic, not even Egyptian hieroglyphs. But all of your supporting points are wrong.

Egyptian hieroglyphs were in use for about 3600 years, and as other commenters pointed out, Egypt stopped using them because they were conquered by Rome, which then imposed Christianity — which banned a different kind of "icon", killing the hieroglyphs as a side effect. Also, far from being "icons" in the UI sense, Egyptian hieroglyphs were primarily phonetic, although they did have ideographic components — less so than Chinese characters (which are logographic, but only about 4% of them are pictograms) but more so than the Latin alphabet.

The Latin alphabet, which is a modified subset of the Egyptian hieroglyphs (via the proto-Sinaitic and Phoenician abjads and the Greek, Etruscan, and Old Italic alphabets), has been in use for 2100 years, about 1500 years less than the hieroglyphs.

If we're still using the Latin alphabet around AD 3400, your argument from observed adoption that alphabets work better than pictograms would make sense — if hieroglyphs were pictograms, which they mostly aren't.

Every invention of writing except the khipu started out pictographic: Chinese characters (which have been in use since the Shang dynasty a bit over 3000 years ago, contrary to what you seem to think), Sumerian cuneiform, Mayan hieroglyphs, and Egyptian hieroglyphs all have clear pictographic origins. But all of them developed phonetic components in order to expand the range of language that could be written, and those phonetic components came to dominate the script almost immediately, to the point that none of them have an identifiable pictographic-only period in archaeology. In fact, as the numerical nature of the decoded khipus and much of the early Sumerian tablets suggests, logograms for abstract concepts such as numbers may have been in use as early as pictograms or even earlier. So, the actual history supports your idea that systems relying entirely on pictograms have very limited applicability, even though it has nothing in common with the history you imagined supported that idea.

On the gripping hand, maybe computers are not the same medium as clay tablets and printed paper, and so maybe past experience is not entirely applicable.


Have you ever actually studied a language involving "pictograms" or are you just making assumptions? I didn't take Chinese long enough to get a feel for it one way or the other but in Japanese kanji will remain in the language from here until the end of the Japanese language. Japanese has a limited set of sounds and Japanese written completely phonetically is not only extremely long, but once you learn the kanji it becomes much more difficult to read and make sense of a sentence without the kanji.


I actually have studied Japanese for a while and have no clue what you are describing as extremely long.

In hiragana you have 48 letters for sounds which in most cases are a consonant + vowel. Hence Japanese written in Hiragana or Katakana is shorter than it would be if it was written with a western alphabet. Japanese is probably not the language with shortest words, but neither are they exceptionally long.


Romaji is not in use for the most part in Japanese. Romaji would of course be much longer but I was talking about a sentence written out in only kana vs a sentence written out normally with a mixture of kanji and kana.


Usually the difference is about 30% by character count, but you can write the kana smaller before the text is too small to read.


(a) they're not pictograms and (b) hieroglyphs worked for thousands of years.


(a) Yes, they are (https://en.wikipedia.org/wiki/Chinese_character_classificati...).

(b) So why'd the Egyptians stop using them? What makes them right for today's needs?


We stopped using them because we changed the language we speak due to external factors like being conquered and dominated by foreigners who brought their language and writing system with them.

It wasn't a voluntary act, trust me. And with the rise of emojis as a medium of communication, I can say that we, the ancient Egyptians, had it right all along in using hieroglyphs.


Oh, OK. Those damn foreign imperialists again, always oppressing our hieroglyphics.

I'm out. This thread is 200TB of crazy on a 10TB drive.


I'm afraid that the only one not making any sense in this thread is you. We didn't drop the use of our indigenous writing system in favor of that of the Romans or the Arabs because we thought theirs was superior to ours; we dropped it because we were conquered and dominated and had to adopt the cultural norms of the invaders.

Also, you're EXTREMELY overestimating the individual agency of the ancient Egyptian in this matter. Most of the citizens of ancient Egypt were illiterate and only a certain class or caste had the privilege to be able to read and write, but the rest of the population was completely oblivious to those icons, and therefore your argument is just baseless.


> only a certain class or caste had the privilege to be able to read and write

And in fact that's why they're HIEROglyphics, they were the priestly writings. (compare to demotic, as also seen on the Rosetta stone).


Hieroglyphics and demotic are the same writing system using the same letters. They're like print and cursive.


Colonial conquest is not the primary methodology of linguistic propagation. Trade is.

English is the primary programming language, not because a team of dedicated Anglo-Saxons is pointing rifles at the heads of brown people, but because American trade provided the most profit motive between 1974 and 2008.


>Colonial conquest is not the primary methodology of linguistic propagation. Trade is.

A "primary methodology" only applies in "most" cases, not all of them.

Lots of languages have perished due to outside forces other than trade -- Hebrew wasn't abandoned because a better trade language came around, but because Israel was conquered and the locals scattered. Other languages were persecuted by decree. Others died because their native speakers were eliminated. And there are several other ways (e.g. Mussolini tried to smooth out local dialects of Italian using the powers of government plus radio and early cinema).


Aramaic was the primary language of trade for the first civilizations during their initial rise. As other trade empires blossomed, more languages appeared, and as trade empires collapsed, those languages vanished.

Religion can and does preserve memes and linguistics, but only trade can scale it out.


>Religion can and does preserve memes and linguistics, but only trade can scale it out.

If it was just for trade, language learning would be relegated to merchants in those countries and few others.

Government, bureaucracy and occupation matter more than trade in this regard. That's how Latin became the norm in a large area during the Roman empire -- and not because everybody in those regions traded directly with Romans or couldn't agree on pricing otherwise.

Same for French, for example -- it's not because of trade that it got big as a language from the 18th to the early 20th centuries, but because the French ran a big colonial empire.

And English, beyond trade, was the language of the big British colonial empire, and afterwards of the culturally dominant power that was America (Hollywood, rock, pop culture, etc.).


On the other hand, if not for the previous several hundred years of western europeans pointing rifles...


Mostly at each other.


>Oh, OK. Those damn foreign imperialists again, always oppressing our hieroglyphics

That would make sense if you had had many debates with the parent and he always blamed unrelated stuff on imperialism.

In this case he talks about a specific historical example, where we have a specific historical account, which happens to agree with the parent.

They didn't just go out of style; they were phased out when the country was invaded, along with other aspects of the local culture. For centuries after Alexander, in Egypt you wouldn't get promoted to the higher ranks unless you were Greek (or Greek-speaking), for example, and the local population was kept as second-rate citizens. Then came the Romans, then the Arabs, ...

But clearly you have a specific self-made explanation, that they just "couldn't work" and were abandoned for that, history be damned.


Are you familiar with Alexander, son of Philip, of Macedon? There was a song about him.


From your link, "pictograms" are glyphs that depict the objects they represent (for example, 下 directly depicts the abstract concept of "down").

Egyptian hieroglyphs, in their ornate form, clearly depict various items like reeds, birds, arms, and snakes. But those glyphs don't actually refer to the reeds, birds, arms, and snakes they show; rather, the ancient Egyptian writing system is largely alphabetic. Their formal letters were just much prettier than ours.


My understanding is that they were both, depending on context.


Really, you describe a script which has been around at least three thousand years and is currently being used by over a billion people as "worked well for a while"? What's your basis for saying it "isn't likely to survive the next thousand years"?


For someone who follows HN somewhat, I really don't keep up on any current cool-and-hyped stuff. I'll just see websites become more and more useless with three-stripe icons for menus and then just random mysterious icons, and I'll tell myself that no doubt these no-context, meaningless icons must be derived from whatever iPhones are doing this week.


Today, I went to log into Evernote on the web. My password manager fills out the registration form, of course, but it won't submit because it wants an email address, not a username. So I go hunting for the "log in" option. Is it one of these attractive green buttons? oh, no, those are also sign up buttons. Is it in some weird corner like Tumblr for some reason? Nope. Maybe it's after this sales junk? Nope, just scrolly pictures and then some footer junk.

OH! I FOUND IT! it's a little small button under the register form!

I love how Evernote has managed to seamlessly combine the stupid web design tropes of 1) the hidden log-in form and 2) the incomprehensible hamburger. They've got a little something for everyone to hate. It's certainly not just them, though. My own employer's public website just hid all of its useful links behind a hamburger (I'm steering clear of the communications department for a while for their safety) and about every three days I start filling out the registration form on Github before realizing it's not the one I want.


"My password manager fills out the registration form, of course, but it won't submit because it wants an email address, not a username."

Big UI pain: sites that are not compatible with browser-stored login info.


My bank just broke compatibility with my password manager (could be the plugin's fault, I suppose) and another financial institution specifically told me that their site was not compatible with password managers. They have this 'nifty' feature which turns all but the last few chars of your username into stars on blur and I suspect they just didn't want to bother to do their js event handlers properly. Fortunately, it turns out I can make it work with the right sequence of clicking inside and outside the input boxes.


Even though the login behavior on a web app's home page can cause hiccups with password manager compatibility, the alternate login form on the "You are now logged out" page is usually barebones enough to be compatible.

Compare the login forms for Verizon Wireless:

Home page - https://www.verizonwireless.com (upper right corner)

Logged out page - https://login.vzw.com/cdsso/public/c/logout

Tweaking your password manager settings to log you in through a site's logged out page can often bypass the compatibility headaches.

Using 1Password's Chrome extension, this trick has worked for me so far, although your mileage may vary.


I used to use that method for the one that "didn't support password managers" before figuring out the click dance. Just tried with my main bank and they don't have a login form on the logout page.


I agree about the increasingly useless direction of web interface design. The hamburger icon (the three-stripe icon, as you call it) is particularly egregious, akin to the travesty that is the 'start' button on Windows.

However, to be fair, Apple designers are very much against this trend as well (not to defend the recent downturn in Apple UX design).

I recommend this 2014 WWDC talk on UX[1]. The entire talk is worth watching even for non-iOS UX design, but it goes into the problems with hamburger menus specifically at 32 minutes.

1: https://developer.apple.com/videos/play/wwdc2014-211/


Calling the start button a travesty is ridiculous. The start menu is a useful and appropriate menu where programs and other shortcuts are listed that nobody really has a problem with outside contrived complaints on the internet that were made for the sake of complaining. The real travesty was the abrasive and confusing replacement they made for it in Windows 8.


You weren't around for the release of Windows 95, I take it. The OS tried very hard to point out that you have to click the start button to, well, start doing things—but people usually just Didn't Get It anyway until someone guided them through the process.

Probably the best thing Microsoft could have done back then was make a little pseudo-video† walkthrough, showing people what's in the Start Menu and what happens when you click on a few of the items in there.

† What do you call a video stored as a sequence of automation triggers for software, rather than as pixels? Are you allowed to call it "machinima" if it's not a game?


The Start button may not be brilliant, but it came about through actual usability studies. Having just the one button drastically reduced the time it took people to find things.


But you only have to learn it once. Just like the hamburger.


Start button is where you go to start a program. Hamburger button is where you go when you've tried everything else... i.e. it's a placeholder for random stuff, which is why it's bad design.


> Start button is where you go to start a program.

Or to shut down the computer.


Start button is where you go to initiate an action.

I hate this joke.


> Or to shut down the computer.

That should be "Start to shut down the computer". (grin)


> Probably the best thing Microsoft could have done back then was make a little pseudo-video† walkthrough, showing people what's in the Start Menu and what happens when you click on a few of the items in there.

They did that. Windows 95 came with a whole bunch of tutorials. IIRC they worked like you describe. There was also the notorious "Tip of the day" via the welcome.exe.


"Streaming is available in Safari, and through the WWDC app."

It's 2015! I didn't think 'Best Viewed with Internet Explorer' was a thing anymore.


It's a thing when they do any product launches as well, I end up having to paste the stream into VLC.

Don't own any Apple stuff but I like to watch what they are up to (often because I'll end up supporting their stuff at some point).


I always think "Am I turning into a stereotypical old person? Unable to use a computer?" I always feel like I'm just out of touch.


Rose colored glasses. Remember Win 3.1's inscrutable "staple" icon? What the hell does a "yellow" traffic light mean in the context of Window controls?

And don't forget: software was shit when we were younger. Win 9x, Mac pre-OS X. Buggy crap, all of it. I can't remember the last time my desktop hardlocked, though Android and iOS these days are similarly shitty to how Windows 9x was.


It was shit because it had to be, though.

Windows 3.1, for example. It had a very elegant Virtual Machine Manager with pre-emptive multitasking and true process isolation... and then GUI programs were all placed into a single cooperatively-multitasked event loop, because anything else would have killed performance. And the OS also had to continue to support IO device drivers written for DOS—that could disable interrupts system-wide and thus freeze the computer!—because otherwise your (horrible, un-QAed) scanner wouldn't work.

I fully believe that software hasn't gotten much better over the last 40 years. The hardware constraints have simply relaxed to the point that we're no longer forced to make as many Faustian bargains.


And Windows 95 had to run on 4 MB of RAM.


The DOS drivers and disabling interrupts applies to Windows 95 too.


I guess we gain perspective. We have the stereotype of old people being cranky and stupid (I come from the era of CD-ROM cupholder jokes, faxing images of floppies, etc.), but probably there were intelligent old people out there telling us about missing context, and we didn't listen, or by its nature what they had to say couldn't get press, or something.


I think you're on to something huge and important.

Some years ago I was working for an educational software company. We sold directly to schools. As expected, we'd get a lot of feedback from teachers using the product. For a while, everyone from the execs on down would ignore this feedback with a laugh. The majority of teachers we dealt with were absolutely clueless about anything involving software. Their reports reflected this ineptness.

One day, I had an epiphany that maybe, just maybe, these people were worth listening to. We went over the written feedback, translated their ramblings into something approximating proper bug reports, and it turned out they were doing a wonderful job of pointing out many of the problems (especially UI) hiding in our blind spots.

We had dismissed them because of a perceived cluelessness, and probably because of some internalized ageism & sexism (most of these teachers were middle aged+ women and most of my team was not). And yet when we finally listened to them, we put out an update which led to several accolades, a sudden drop-off in complaints, and probably our best selling product.


Do you remember whether your epiphany seemed to stem from something, or just happened?


We were in a crunch to get version 2.0 out. We had the weekly progress meeting with the director. He had received some feedback from our remote sales team, and he read one out loud that he thought was particularly funny. And it was, at least at first. It sounded like an old person flailing around with computers. But then it hit me: this teacher was describing an edge case where a certain sequence of buttons would cause the whole program to lock up.

It was funny in part because the sequence wasn't something any of us would consider logical. So we had never tested for it. But I was able to duplicate the crash on my system and fixed it.

That was the epiphany. I convinced the director to push out the release date and we went through the backlog of ramblings and rants. Over half of them ended up being very useful.


CD-roms and disc media don't change for years, though. Popular apps change their interfaces every few months, it seems, and don't come with a manual. Learning the basics of a computer is more or less once-and-done, whereas keeping up with idiot designers is a constant task and a huge pain in the ass even when you're trying. And I say all this as a college student, which makes me one of the prime demographics for most of these apps.


> I'll just see websites become more and more useless with three-stripe icons for menus

Hilariously enough, this is now known to be a UX nightmare. HN has had multiple articles on why it is a UX nightmare: even when users know to click the hamburger menu (and everyone does by now), they don't engage or explore as much as they would with a layout that doesn't hide half (or more) of the user's choices.


I think the first sighting of the hamburger was actually from the flat/sans Google redesign of 2013. It popped up on Android first, then the web apps, then everywhere.


Should replace the hamburger icon with the letters "MENU". Problem solved.


No. Problem will be solved when you put UI in logical places so that you don't need the 'either try clicking here as a last resort or we probably don't have that feature' trash can of a menu.


I think we forgot many of the lessons from the 90s in a naive attempt to emulate Apple. Here's a list of standard things that every application in the 90s had that are frequently missing from current UIs:

- Menu - Typically at the top of the screen or window; you can go there and find everything you can do with the application, in a consistent place. Moreover, as you go over menu entries, a helpful explanation of what does what appears at the bottom of the screen.

- Toolbar - A place which has most commonly used tools. They are represented by icons, text, or both (and you can actually make a choice in settings). Typically, you can configure the toolbar to your heart's content. Toolbars can also depend on the context.

- Context menu - Right clicking on some object will give you menu of things that you can do with that object. Again, explanation of what each of these actions does appears helpfully at the bottom of the screen.

- Tooltips - As you mouse over any UI element, it will helpfully explain its purpose.

- Buttons - Things that are clickable look visually different from things that are not. For example, they have different shading. Buttons may also give feedback that they are clickable when you mouse over them. Also, if you click a thing that is clickable, it will give feedback that it was clicked by changing the shading.

I think the big problem here is that UX/UI people want to be artists and so create art, not useful applications for end users (which often means following some standard!). The end result is even more disastrous than when programmers design the UI (at least they are rational about it, in some sense), but the cause is the same - putting your own ego (behold my artistic creation!) in front of actual usability.


While some of these are obvious - removing borders from buttons on iOS made everyone confused, in my personal observation - others are not necessarily "a naive attempt to emulate Apple". Eg.: you can't have (useful|easy to figure out) tooltips or "right click" on touchscreen devices.


That's because touchscreen devices suck for real work! Their popularity is actually mostly due to Apple, too. Still, that's not an excuse; think outside the box, come up with some other way to do the same thing.

There are ways. Some applications, for example, had a "query" button which let you examine the UI and see what each element does. I don't see why touch devices couldn't have the same thing as a (hardware) button.

Steve Jobs hated buttons, but they have their place. If you eliminate all of them, of course you end up less productive.

And yeah, unfortunately, my list is very obvious.


A long touch often brings up an explanation and/or context menu on iOS. Not very fast, but it does the job...


> Don’t use an icon if its meaning isn’t a 100% clear to everyone. When in doubt, skip the icon. Reside to simple copy. A text label is always clearer.

The article is misguided, in that it assumes that the meaning of an icon only exists in the lines/color/visual form of the icon. Icons are visual language. You have to teach the user what the icon means. Either the user has seen the icon before (such as in airports), or if the user hasn't, your UI has to accommodate that.

Once that happens, then icons are way faster.

Icons are like visual acronyms. The sequence of letters 'T', 'C', 'P', 'I', 'P' means nothing to someone who doesn't already know what 'Transmission Control Protocol/Internet Protocol' is, but once you do, TCP-IP is way faster to recognize, speak, type, and to share.


> You have to teach the user what the icon means.

Recognition speed presumably goes up with familiarity. I have Age - 6 years of experience reading, I have far less than that with whatever icon you come up with.

Let's take starting an app as an example.

On my Smartphone I use the email client "Nine". I probably open up the app manually (versus tapping on a notification) about 5 or 6 times a day.

It took me ~3-5 months of use to be able to look for the icon first versus looking for the text. Note that this is WITH keeping the icon in the exact same place on my home screen the entire time.

So that is what, 450 impressions for, well, no real speed improvement.

In a sea of words, a good icon can help an app stand out. But I'd argue that a good textual name helps more.

If you have multiple products, having your company name start off the title of all your products just means I cannot find your product in my alphabetized app list.

Changing app names is another huge problem. Google's constant re-branding of how I send SMSes on my phone has been very unconducive to me remembering anything. Half the messaging apps out there have the same type of icon (a speech bubble); the textual names make them unique.

That said Facebook cheats by calling theirs Messenger. :)

Also, enough blue icons. Stop it. Everything is one of 3 shades of blue; gross color differentiation kicks in WAY before shape matching does. The three easiest apps to find on my phone are the three apps that still have "out of fashion" black icons.

> Icons are like visual acronyms.

I disagree that the analogy is 100%. Our brains have this giant section dedicated to nothing but recognizing characters in our native written language. Sure we have a lot of brain power going to shape recognition in general, but character recognition is so incredibly trained in our heads that it should be our go to.


> I have Age - 6 years of experience reading, I have far less than that with whatever icon you come up with.

When I say icons are a language, I literally mean that they're a language, in that there's a lineage and history to these languages.

For example, I bet you have decades of experience recognizing the 3.5" diskette 'save' icon, and a similar familiarity with the landline phone receiver as symbolizing 'phone'. These are icons that are part of a shared language, so it's safe to say that using them will be instantly recognizable, despite the fact most people don't use floppy disks or landlines anymore.

If a company decides to use a new icon that's not easily recognized, it's like trying to introduce a new language; it has to be done smoothly, otherwise most users will not understand what it means. This is not a problem unique to icons; it's a problem across languages.

If I had a new command-line tool where, instead of `./cmtool --verbose`, I implemented `./cmtool --consult` or `./cmtool --magicify`, you'd have to `man cmtool` in order to understand what the commands mean. This is because '--verbose' is a flag that's common enough and used often enough to be generally understood, while 'magicify' or 'consult' are ambiguous and rarely used. This confusion has nothing to do with the fact that the UI is text-based (vs. icon-based), and everything to do with the fact that user interfaces, like a language, involve shared conventions.
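
To make the analogy concrete, here's a minimal sketch using Ruby's standard OptionParser (the tool name and flags are hypothetical, borrowed from the example above): the conventional flag reads at a glance, while the invented one only makes sense after a trip to the docs.

    require 'optparse'

    options = { verbose: false }
    OptionParser.new do |opts|
      opts.banner = "Usage: cmtool [options]"

      # Conventional name: most users can guess what this does without reading anything.
      opts.on("-v", "--verbose", "Print detailed progress output") do
        options[:verbose] = true
      end

      # Invented name: same behavior, but meaningless until you read the help text.
      opts.on("--magicify", "Identical to --verbose, just unguessably named") do
        options[:verbose] = true
      end
    end.parse!

    puts "running verbosely" if options[:verbose]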


> character recognition is so incredibly trained in our heads that it should be our go to.

This is also a problem. You can't avoid reading text when it's presented to you, so extra text makes the page cluttered and makes it hard to find the text you really want to read. It is very useful to replace an ever-same-looking label like "Close", "Open", or "Save" with a recognizable icon.


I don't think you're correct. Yes, you can't avoid reading text, but you spend far less effort reading it than you would trying to decipher icons, which you also can't avoid doing if you want to know how to use the program.


>Our brains have this giant section dedicated to nothing but recognizing characters in our native written language. Sure we have a lot of brain power going to shape recognition in general, but character recognition is so incredibly trained in our heads that it should be our go to.

Yeah, but you still have to read them, i.e. focus this "machinery" on a single word/sentence. So you're working in sequence when trying to identify a bunch of unrelated labels. By contrast, you can probably differentiate an entire group of icons just by a glance in their direction (provided they are appropriately designed, of course).

Plus reading speed is pretty much the same no matter how many times you repeat the task. Recognition on the other hand improves with repetition.

If you're using only text, you're pretty much stuck with differentiating your controls either by location or by actually reading them. Icons add another component to this.

Totally agree with the color-related comment though.

IANANS btw, so your mileage may vary.


> If you have multiple products, having your company name start off the title of all your products just means I cannot find your product in my alphabetized app list.

Oh great, now I'm flashing back to hunt-and-peck through the Windows XP Start Menu trying to find a particular piece of software in a list categorized sometimes by company, sometimes by software, sometimes by built-in Windows category. An absolute mess.


> You have to teach the user what the icon means

Who's in charge here? The software works for me, not the other way around.

I don't want to be learning your glorious new UI, I want to be using its essential feature and getting out ASAP.

I understand that for progress to be made, ideas must be explored and risks must be taken.

But leaving out a label is not progress, it's just someone trying to save screen space on a cluttered design, and it doesn't help me.


It's kind of silly, but Google changed the "share" button icon in their Android YouTube app a while ago and I couldn't find the share button for MONTHS, even though it was in the same place. That's because the icon now looks like a "reply" icon, and I never guessed that that button is now used to 'share' something.


Adding text adds strain, as you can't avoid reading text. It also produces huge clutter.

If you replace all the icons with text, the site will feel like it's from the 90's. Try it out on any major site in the browser.

Text label is not a silver bullet.


You can't look up an icon. You can't google an icon. You have to hope there's a tooltip for it.

I can look up a word written in a phonetic alphabet.


How invested is the user in your app? If this is an enterprise scenario and your user makes a living using your app he'd probably be willing to expend the time to learn the meaning of your icons, and eventually he will be more productive for having done so.

In a consumer app, especially one which the user only uses occasionally, the bar is much higher, so your icons should be much more obvious, because the user will just delete your app if he can't figure out your icons.


> Icons are visual language. You have to teach the user what the icon means.

Which is the first problem, we don't care, we don't want to learn your crappy custom little visual language. We already know how to read, use words.

> Once that happens, then icons are way faster.

That never usually happens; use words, icons suck and we're all tired of them. Words work better; we all know how to read and don't need you to "teach" us anything.


Great comment. This reasoning is probably intuitive to anyone who's familiar with a logographic writing system.

I feel like you can also draw a parallel with programming languages. Consider:

  def sum1(array)
    sum = 0
    array.each { |e| sum += e }
    sum
  end

  def sum2(array)
    array.reduce(:+)
  end
sum2 isn't just a different way to write the same thing. It's a construct that expresses a specific pattern of evaluation and aggregation. It doesn't describe what it's doing like sum1 does. It's more expressive than that. It's "clearer" once you learn it.
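
A quick check (assuming any reasonably recent Ruby) that both definitions agree on a non-empty array; note that they do differ on the empty array, where sum1 returns 0 but reduce(:+) returns nil:

    sum1([1, 2, 3])  # => 6
    sum2([1, 2, 3])  # => 6
    sum1([])         # => 0
    sum2([])         # => nil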

Similarly (at least in the US) green means proceed/go/forward/continue and red means cancel/go back/stop/error. This is something that was learned, not something that was known by instinct.


> red means cancel/go back/stop/error. This is something that was learned, not something that was known by instinct.

Well...[1][2] There are some precedents for red being a no-go colour even in nature, which is likely why we use it as a stop colour. It would be interesting if someone knows of a significant number of cultures that use it as a "go ahead" colour. I mean, the samples are probably skewed because everyone has been a colony at some point, but still.

[1]: http://www.noaa.gov/features/resources_0109/images/fire1.jpg [2]: http://museumvictoria.com.au/spidersparlour/images/en000008....


Traffic lights were standardized during the 20th century. You won't find a place that uses them differently.

But there are cultures where red means good things.


I think the author does address this - further down, he gives examples of icons that have little meaning to outsiders, but that regular users will understand.


I always laugh when I see that "unclear laundry icons" image, because I recently had the experience of trying to use a combined washer/dryer machine in a foreign country. The icons truly are asinine; you can't even get close to guessing what you want.

For fun, I looked it up. Some of the icons are obvious, a few make sense once you know the basics, and still others seem almost sillier once explained.

http://www.textileaffairs.com/lguide.htm


Well, I'm not sure I'd call them "asinine". Like any icon set, they require a bit of learning—they're not necessarily meant to be immediately recognizable to a first time user. They address a different problem: it's simply impractical to print laundry + care instructions in all 24 official EU languages (or even just a few of the most common ones) on a tag the size of a postage stamp.

European road signs[1] are another example where text labels wouldn't be practical. Many of the symbols are meaningless in and of themselves, but you of course pick up their meaning pretty quickly when you're learning to drive. If you're an American driving over here for the first time, you wouldn't expect to automatically understand the system without first at least glancing at a travel guide or some other reference.

All that said, I do think it sucks that the whole laundry symbol system is copyrighted.[2]

[1] https://en.wikipedia.org/wiki/Vienna_Convention_on_Road_Sign...

[2] https://en.wikipedia.org/wiki/Laundry_symbol#National_and_in...


I encountered this machine, which had a manual on top, hooray! But it didn't include explanations of any of the symbols:

https://imgur.com/Sdwrvqv

I ended up going for "40 degrees shirts and pants" since that matched what I was actually trying to wash, but God help anyone trying to use that machine with any specific requirements.


"aws 51012 labels" brought this as the first result:

http://docs.whirlpool.eu/_doc/W10802912EN.pdf


The standardized laundry icons are way better than the alternative (no standard icons, or verbose text in a potentially foreign language).


If you're in a location where that language is spoken, it's probably reasonable to assume you have a method of translation available (like a dictionary). It's much harder to look up icons. You could try to make an exception for laundry, but you'll encounter so many different interfaces over the course of the day, and if they all had their own icon language it would be hopeless. Text is much more universal.


Doing laundry in Poland, I was grateful for the standard icons, whose meaning a quick Google search revealed.


Thanks, I came in here looking for that. Here are some more explanations: http://www.davisimperial.com/resources/care_lables.html

I assume the "No Christmas Crackers" symbol means "Do Not Wring Out".


One of the best things I've read lately. At my last job, I voiced exactly the same concerns over the clarity of icons, and was always told that the users would get used to it after training (it was an enterprise product). It was really frustrating as a (back-end) programmer to load up my work in the browser and have to hover over each icon every time to test it. I didn't get used to it, and no one else ever was going to.


The fundamental problem with icons is that they can mean so many different things, as the example with washing shows. I've heard this summed up as "a picture is worth a thousand words, but you don't want a thousand words when one will do."

I don't use Apple Mail, so this is an example of what a completely new user --- albeit one who has used computers for a long time --- thinks when they see those icons along the top:

- It's a closed envelope. Mail? Send? Close?

- Write? Edit? Compose? Sign?

- No idea what this is.

- Trash can. Deleted items? Delete?

- Left-pointing arrow --- but coming from bottom and looking like it expands outward. Back? Open message in separate window?

- Two left-pointing arrows. Rewind?

- Forward to next message? And why is this arrow not coming out of the bottom, unlike the two to its left?

- Flag. This is probably the clearest of them all.


"It's a closed envelope. Mail? Send?"

On web sites, it used to be "send mail to site operator". Now it's "spam this page to someone else so we can monetize it."


Most of those are pretty standard on phones.

- Closed envelope: No idea
- Compose
- No idea either
- Delete
- Reply
- Reply to all
- Forward
- Flag. The meaning of this changes, but not the overall idea.

The problem is, WTF are those doing in a desktop application?


I'd love to see some empirical data, but my intuition is that text + colored icon is the ideal (in terms of how quickly you can find what you want) for both new and experienced users. Every little clue helps navigation, and makes me feel more confident about using an application.


When I see a pushback against icon-only buttons, I think a lot of it has to do with the fact that the mobile space is changing so rapidly that things don't have a chance to "settle down" like they did for previous technologies.

I mean, I don't think it's inherently obvious that a triangle pointing to the right means play, a square means stop, and a circle means record. However, cassette players and VCRs hadn't been out that long before everyone knew what they meant. In that case, I think having more info (like the text 'Play') is unnecessary.

In the mobile space, I feel like there is pushback because it's also not immediately apparent that 3 horizontal lines means "menu". I wonder how long that will be the case, though.


I feel like the hamburger button is a special example, since the fact that your mobile app has a "here's where we shoved the features we couldn't fit in anywhere else" menu is usually a smell that you have deeper design issues at the information architecture level.


Hamburger button? Is that what people are calling the three lines button? I think that's one that will catch on. I didn't get it at first, but I think now it's here to stay. It also isn't necessarily a sign of bad design. I think it's mostly used as the "main menu" on a small mobile screen interface where you typically use the majority of the screen for information display. Not a bad design at all.


I'd much rather apps include a menu of extra features that don't quite fit into their interface paradigm, rather than omit them until they can find the One True Interface that can incorporate them all.

Or even worse, make the common case harder just so the interface can include all the features.


Or you know... it's a symptom of needing a menu.


I only have an anecdote, but: I've discovered that I have no ability to differentiate between icons while working. On my own computers, I use WMII, a minimalist tiling window manager; that is: text and no icons. Before that, I'd used "traditional" window managers that showed the icon and program name in the bottom bar (Gnome, Vista, XP, 95). Anyway, at my last job, IT decided that the computer on my desk should run Windows 7, which only has the icon in the bottom bar. It was totally unusable to me; if I was working on something and wanted to switch windows, my brain would never know which one to click; more often than not, I'd just try clicking on each of them until I got the window I wanted.


> Anyway, at my last job, IT decided that the computer on my desk should run Windows 7, which only has the icon in the bottom bar.

It's generally a good idea to look through the settings or search on the internet if you don't like something on your PC.

Windows 7/8/8.1/10 all allow using labels and text for the taskbar buttons.


I was forced to use Windows 8.1 recently and had a similar experience - I decided to go with icons-only to see how long I could stand it before I got used to it, or reverted to icons+text. I reverted after a few days because it was really difficult to find the right button. Even icons for apps that I'd surely have seen and used many times, like Visual Studio, were much slower to find amongst the ~dozen others without the text.

The other real irritation comes from the fact that the selected button no longer has a "pushed in" effect like Windows Classic used to have, but only a very subtle colour change (and the colour varies depending on the colour of the icon too... so instead of just one "active" colour for everything, I now have to figure out what each app's active colour is to find it quickly):

http://answers.microsoft.com/en-us/windows/forum/windows_7-d...


I don't think it was meant as a complaint about Windows as much as it was an anecdote about icons vs text.


This has been considered industry best practice for quite some time, and there are studies proving its effectiveness (although I can't cite them at the moment). However, the current design trend is minimalism, and we're stuck with that for a while. It will eventually adjust itself back to a new baseline normal, somewhere in between.


If you decide to use icons, offer a hover-tooltip.

Even if you decide to use a textual button, offer a hover tooltip if there's a keyboard shortcut. (Hello phpStorm, I'm looking at YOU!)
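
For what it's worth, here's a minimal Swing sketch of that idea -- an icon-only button and a text button, each with a tooltip that names the action and its keyboard shortcut. The icon key is a stock Swing one; the shortcuts are just placeholders:

    import javax.swing.*;
    import java.awt.BorderLayout;

    public class TooltipToolbarDemo {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JToolBar bar = new JToolBar();

                // Icon-only button: the tooltip spells out both the action and the shortcut.
                JButton save = new JButton(UIManager.getIcon("FileView.floppyDriveIcon"));
                save.setToolTipText("Save (Ctrl+S)");
                bar.add(save);

                // Text button: still worth a tooltip, if only to surface the shortcut.
                JButton build = new JButton("Build");
                build.setToolTipText("Build project (Ctrl+F9)");
                bar.add(build);

                JFrame frame = new JFrame("Tooltip demo");
                frame.add(bar, BorderLayout.NORTH);
                frame.setSize(400, 120);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            });
        }
    }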


I can't for the life of me figure out why Apple is so insistent on not using tooltips in OS X (for example, in Safari).


This doesn't work for touch-based devices, unfortunately.


In Android apps you can long-press action-bar icons to get a tooltip -- assuming the developer provided one.
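
Concretely, that "tooltip" is just the title you pass when building the menu item, so icon-only action-bar items still need a real title string. A rough Java sketch (the item id and drawable are hypothetical resources, not from any real app):

    // Inside an Activity; imports: android.view.Menu, android.view.MenuItem.
    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        MenuItem archive = menu.add(Menu.NONE, R.id.action_archive, Menu.NONE, "Archive");
        archive.setIcon(R.drawable.ic_archive);
        // Shown as an icon in the action bar; the "Archive" title is what a long-press reveals.
        archive.setShowAsAction(MenuItem.SHOW_AS_ACTION_IF_ROOM);
        return true;
    }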


Yeah but I for one won't do coding on a system where I'm typing on the screen. IDEs massively benefit from keyboard shortcuts because you can leave your hands on the keyboard.


Don't we have "ForceTouch" or something now that can provide more information about the thing being touched?


I can't stand tooltips. Either they have a delay and it takes forever for you to get them to appear so you can read them all or they appear instantly and are always obstructing your view when mousing over tools. Besides that, a great many programs use hotkeys for their various functions/tools and NEGLECT to include those hotkeys within the tooltip description. Ugh.


Tooltips aren't going to be much help on mobile though.


> Tooltips aren't going to be much help on mobile though.

Not today, but if pressure-sensitive touch becomes widespread, a "hover"/"click" distinction could become a thing on touch interfaces as well -- bringing back some of the depth of interaction that touch has lost in comparison to the desktop.


I recall there also being experimental touch sensors that can detect your finger hovering over the screen. That way one could move the cursor (hover) without clicking.


I hope not for the sake of old people everywhere.


I thought about this a lot while designing my latest iPad app. On the one hand, icons make everything look beautiful and cohesive, and give you an almost intuitive sense of the UI if done right. On the other hand, it's hard to convey the meaning of a complicated tool using an icon alone, and I hate guess-and-checking the meanings of icons in other apps. My solution was to keep unlabelled icons for the obvious UI elements (in my case, undo/redo, play/rewind/record, erase, and metronome) and add small text labels underneath the icons that referred to more complicated or unique actions. For visual coherence, the labeled icons were kept in their own section: http://i.imgur.com/EeLzlVH.png

So yeah, I think the article is spot on.


I agree. Whenever the option to change icons to text labels is available, I will use it. It's a shame that Apple's Finder is frankly shit when it comes to text label support. No forward button, no change/shadow when `pressed`, etc. It's been maddening trying to use most websites these days that load 20MB of JavaScript and abstract their interface elements behind incomprehensible picons and boxes.

http://i.imgur.com/YyOYqxV.png


I have no idea what that screenshot is of, but it sure as hell ain't the proper Finder. Maybe it's a look-alike skin for Gnome or something?


I have skinned my OS X install with custom fonts[0], traffic lights[1], and icons[2][3].

[0]: https://github.com/dtinth/YosemiteAndElCapitanSystemFontPatc...

[1]: https://github.com/alexzielenski/ThemeEngine

[2]: http://freemacsoft.net/liteicon/

[3]: https://www.xs4all.nl/~ronaldpr/emaculation/Mac_Icons_and_Te...

as you can see, I'm obviously a SCARY elite hacker that should not be messed with ;)

http://i.imgur.com/bl5hdAN.png


Do you experience any performance issues with a skinned OS X? I'm a graphical junkie and would love to start hacking away at my GUI.


Uh, no. Maybe if you use that awful Flavours program that hooks into the system graphics API and swizzles all of the changes, but I patch the system files themselves (ThemePark, [S]ArtFileTool, ThemeEngine, whatever AZielenski comes up with next). In fact, the GraphiteAppearance and SystemAppearance.car files are /smaller/ than they were when I started. All you need to do is log out/back in, restart, or simply kill and restart the program for the changes to take effect. You probably need to disable that incredibly stupid rootless `feature` on 10.11, though.


I'd agree that many icons are context-sensitive and many require the user to learn their use, but they do, mostly, provide a language-agnostic approach to navigation. An icon is the same size in any language, whereas the text could vary widely in length.


> I'd agree that many Icons are context sensitive and many require the user to learn their use, but they do, mostly, provide a language agnostic approach to navigation.

They are "language agnostic" in that an icon system is its own language.


And it sometime appears that almost every icon system is its own special snowflake, unique more for its creator's convenience, with more creativity than reusability.

The result is more like having dozens or hundreds of different dialects of Esperanto - each system intends to be rational and useful, but the overall effect on users of multiple icon systems is more like cacophony than expressive consistency.

OTOH, the core challenge of using text labels is finding texts that work and are meaningful and reasonably consistent across dozens of written natural languages.


Non-text icons are one of the reasons I don't use IDEs all that often. I'm always wondering "damn, what did that icon do again?" or "I wonder where I can find the button to run this thing", but then I sometimes click the wrong thing and totally screw up my layout, then spend a while looking for another button that will get me partially back to the way I was before I can even start looking for the button I wanted again. I guess I would get used to it eventually, but it bugs me.

(I'm looking at you Eclipse)


On the one hand, text labels are convenient when you can read them; on the other hand, when you can't (say, because you're in Japan, and Japan loves the hell out of its text labels), icons you might be able to decipher are better than text you definitely won't.


The best icon is one you can opt-into. Don't eliminate them.

For instance I liked OS X's original toolbars, in that you could easily shift between modes that displayed text or did not display text. (Then they entered their phase where toolbars couldn't be customized and all icons were the same shape, which is far less sensible.)


I agree; e.g. word processors, photo editing software, etc. can have vast numbers of options, and at some frequency of use, having a bunch of little icons on the screen is a reasonable choice vs. the alternative. Having the traditional "text, icons, or text and icons" option seems like a good deal there, defaulting to something with text.

I also set up my browser such that half of the tab bar is favicon-only bookmarks; I can very easily get to the few sites that I look at most days or want a reminder to look at. I also use favicons to identify tabs. I remove most of the standard browser icons in the interface and just leave the few I use regularly, so I can remember what they all do.

So maybe it is best to think of icons as visual interface optimization that is best performed by the user. Especially for web sites or default application modes I agree with the author that there should be text involved most of the time.


No mention of the red | yellow | green buttons at the top of a window in MacOS.


Amen to the Apple Mail icon problem he mentions. It gets me every time, too.


Ctrl-click anywhere on the toolbar and select either "Icon and Text" or "Text Only". Problem solved!


Great to know, but I wonder "how on Earth is anyone supposed to know they can do that?"


Indeed; I've noticed often while observing people of older generations that one of their big problems using apps today (especially mobile) is that they don't have the same understanding of iconography as younger generations. It's usually not intuitive for them.


Is it really a matter of intuition or just familiarity with the interface? I suspect it's the latter: likely the older people you've encountered aren't "early adopters" of technology compared to their younger contemporaries and simply haven't yet learned the meaning of the symbols.

OTOH a younger person confronting a piece of "antiquated" equipment, e.g., an old camera where everything had to be set manually might well feel totally lost vs. the old-timer who knows how to use it from prior experience.

It's not really a generational thing, more associated with environment, exposure to the device in question, education and similar factors. It's all too easy to assume everyone else has the same background and knowledge base as we ourselves, but that's in fact rarely the case. When others aren't similar to us, it's not safe to believe their apparent deficits are due solely to factors like age or obvious personal characteristics.


I agree with you.

There's a series which includes examples where kids are asked to use old technology, like a VCR ( https://www.youtube.com/watch?v=kesMOzzNBiQ ) or rotary phone ( https://www.youtube.com/watch?v=XkuirEweZvM ). They fumble, rather like their grandparents might have fumbled with the unfamiliar interface when it first came out.

The major difference is that children don't worry so much about making mistakes and feeling dumb.


Icons, but not text, can be language agnostic and probably more resilient to cultural variation. Obviously well-recognized icons like arrows fit this property more so than time and culturally dependent things like floppy disks for save buttons.


This English-centric viewpoint of the article really bothers me.

No mention of i18n, no language variation. Just try using labels with a more verbose language like German. Welcome to hell.

A good icon works across cultures and languages, and allows an optimized and pixel-perfect UI.


As someone who speaks both languages fluently, it doesn't bother me at all. The iconography could use some tweaking, but this false sense of being offended is ridiculous.


What I am saying is that once your app uses multiple languages and is available in multiple geographies, this "just use labels" stuff goes out of the window.

Cancel becomes what, "Zurücksetzen"? "Abbrechen"? All longer strings, good luck with your tiny button.

Not even mentioning Mandarin or Japanese yet.

Or maybe this is simply an enterprise-software problem, and the consumer-app monkeys can keep playing with their toes.


Who's offended? We're talking about good design.


I used to flash different ROMs on my HTC Desire frequently a while back. I was experimenting with the MIUI ROM and accidentally flashed a Chinese ROM. And, as it happened, I had to travel and wasn't able to flash an English ROM back for a couple of days. I was surprised that I could manage just fine for a couple of days with a Chinese ROM (I don't know Chinese). I remember thinking about the UX, because even though I couldn't read the menus, I could get by using the icons; for example, a trash can in the messages menu is definitely "delete message".


On Windows Phone 8.1 the settings screen is "text labels" and is completely unusable. They added icons in 10 and this resulted in a huge improvement.

Text-only is as bad as icon-only. Combine the two.


I always say that a picture, plus a few words, is worth a thousand words.


I personally prefer Icon+Text over Text-only. And icon-only is often confusing on websites/apps - it only works if one uses the website/app regularly like the Office <=2003 toolbar.

Also, the recent trend for black and white icons makes them sometimes harder to understand. Colorful icons worked fine in older Office (<=2010) and elsewhere. Though finding a good icon set with 500+ generic icons that fits one's needs means making compromises.


The best icon is a text label, written in the language of the user? Sure.

But the easiest icon is that weird squidgy thing that looks like a ... well, I don't know, I'm expecting users will click on it and eventually figure it out. Squidgy thing takes maybe an hour to create. The internationalization team's SLA is not an hour.


I have two modes of thinking when I use a computer program: text mode and graphics mode. I am in text mode while I am reading or writing. I find icons confusing as I have to change from text mode to graphics mode to figure out what the icon means.

The only time I prefer icons is when I am already in graphics mode while using graphics software.


Definitely agree with this. This is also something we're looking to update, along with the great article about how the hamburger icon fails on mobile: http://deep.design/the-hamburger-menu/


In browser UX, you simply mouse over an icon to get its tooltip. But screens are big, so there's almost always room for labels. In mobile UX, there's just not much room for labels. I don't see why more apps don't use a tap-and-hold approach for showing a tooltip.
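
A sketch of what tap-and-hold could look like on Android, with a plain Toast standing in for a tooltip (the view id is hypothetical):

    // Inside an Activity's onCreate(), after setContentView().
    ImageButton archive = (ImageButton) findViewById(R.id.btn_archive);
    archive.setOnLongClickListener(new View.OnLongClickListener() {
        @Override
        public boolean onLongClick(View v) {
            // A short Toast naming the action is a cheap stand-in for a tooltip.
            Toast.makeText(v.getContext(), "Archive", Toast.LENGTH_SHORT).show();
            return true; // consume the long press
        }
    });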


I doubt a lot of users would know that you can even do this, as handy as it is.


The paradox here is that the most effective UI would be one where you'd have minimal "signs" (text or otherwise) telling you what to do. The problem with this is that you'd have to _learn_ how to master it. What we have is a compromise. Something that tries to cater both to casual as well as power users.

I'm thinking, for example, of a swipe-based touch interface. A lot of functions now are click here, then click there, when they could be done in a single gesture. But the problem is that one would have to learn the gesture somehow in the first place. And people don't read manuals, even if there were any nowadays.


Icons can transcend different languages though, a huge boon for small devs.


And using text labels means your labels are different dimensions in different languages.


So? Use an automatic layout algorithm that can deal with that. Users will want different font sizes for your labels, too.
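
As a small illustration (a Swing sketch; the "labels" resource bundle and its keys are made up): pull the strings from a locale-specific bundle and let preferred sizes plus pack() absorb whatever length the translation turns out to be.

    import java.util.Locale;
    import java.util.ResourceBundle;
    import javax.swing.*;

    public class LocalizedButtons {
        public static void main(String[] args) {
            // Hypothetical bundle: labels.properties, labels_de.properties, ...
            ResourceBundle labels = ResourceBundle.getBundle("labels", Locale.getDefault());

            JPanel panel = new JPanel(); // FlowLayout sizes each button to its preferred width
            panel.add(new JButton(labels.getString("ok")));
            panel.add(new JButton(labels.getString("cancel"))); // "Cancel" or "Abbrechen" -- the layout adapts

            JFrame frame = new JFrame();
            frame.add(panel);
            frame.pack(); // window sized to fit whatever the translated labels need
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        }
    }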


also known as Mystery Meat Navigation

https://en.m.wikipedia.org/wiki/Mystery_meat_navigation


The article took a wrong turn for me. It seemed to say that there was a fallacy in assuming frequent users would understand your iconography, then went on to praise exactly that for popular sites.


Usability testing with eye tracking software. It's the only way to objectively show designers how crap their design ideas are. It's not even expensive to get this done anymore.


I think companies need to test their UIs with senile people first. Nothing has given me more insight into user experience than watching my grandma trying to use an iPad.


Also on (Windows) desktop you can (could) always hover over to see a description.

On mobile you have to try and hope it is not the "irrevocably delete thread" button.


I was an original Macintosh Beta tester back in 1983, and I raised this exact same issue. I built all my apps with text + icon buttons and got penalized for it. One of several reasons why an early Mac beta developer left Mac development, not writing code for the OS for a decade afterwards. (PC users were lucky to have Windows 1.0 around that time, and I did well writing GUIs for corporations that wanted better.)


Wait, I've been googling gcal for the last 2(?) years when my calendar tab needs to be populated, when I could have just been clicking on the chessboard thing in Gmail??! I was wondering where the hell gcal went!

>> Google decided to hide other apps behind an unclear icon in the Gmail UI, they apparently got a stream of support requests, like “Where is my Google Calendar?” <<


The best icon has a text label.

I think icons next to a text label are very useful because they guide the eye so you can quickly see where your buttons are.


Icons are little symbols that are equally incomprehensible in any language.


Discussed at length a couple days ago:

https://news.ycombinator.com/item?id=10738891


You level criticism at Twitter for being unclear to new users, then excuse that same behaviour from Tumblr because its meaning is clear to existing users. Isn't that a bit inconsistent?


  > Facebook as a final example: they lately traded their
  > unclear hamburger menu icon for a frictionless navigation
  > that combines icons with clear copy. Well done
No, not well done. They went from almost conforming to the OS's design guidelines (on Android) to totally ignoring them.

The 'hamburger' button is not "unclear", because it's so commonplace that at the very least it has an intuitive meaning in the context of an Android app - it's where I expect more options, settings, etc. How do I know that those are now kept behind the much less clear icon that seems to show a man moving quickly?


This comment has been unexpectedly contentious - points have been going up and down in about equal amounts (currently 0) - but nobody wants to comment on why they disagree?


Various icons without text often make me feel dumb...


And then there are those of us who block so much crap that icon fonts sometimes don't load. Hover text is usually enough, though.


Yes, BUT, you have to look no further than at the top of this page to see how utterly unclear text labels can be.


I came here looking for tips on how to turn on text labels in various UIs, and have not been disappointed.


Amen, the ribbon is the worst thing to happen to MS office.


In Latin, for genericity.


Actually, Latin was once the international language, but has been largely supplanted by English. Hence if I use English labels in my app, I'm being cosmopolitan!



