The use of metaphors worked when people didn't know what a computer was, and the only way to make its UI make sense was to mimic real-world objects. Now, this is no longer necessary. It's been 40 years.
While I agree that new macOS icons aren't great (see the Game Center icon), the old ones were silly. I'm 35 and I have probably seen an actual contact book only once when I was little. It doesn't make sense to have a skeuomorphic contact book as an icon for Contacts. The old icon for Pages? I don't even know what that is, I'm not into calligraphy.
The only thing I'd agree with is hiding UI controls. Apple has been making its apps less usable to make them look pretty, hiding important elements in an attempt to declutter the interface. I hate how Safari hides the full address in the address bar, for instance. Or, how they removed scrollbars and force people to actually scroll every piece of the interface to check if there's something more to see, while before you could tell just by looking at scrollbars. Of course, there are settings to go back to the old behavior for both my examples, so power users are fine, but I fail to see how these moves improve things for regular users.
I also disagree that Steve Jobs's death was detrimental to macOS's UI. He was the one who kept Apple looking outdated with his obsession with skeuomorphism; I'm glad they went for a flatter look right after his death.
Of course, everyone's taste is different, but I still think this is a bad article.
There was a rhetoric not too long ago where OS X fanboys would make fun of Windows' interfaces for still using the disk icon as the "save" button. Now that guy is complaining that the Photos app icon doesn't look like a point-and-shoot camera, when fewer and fewer people use those.
Skeuomorphism doesn't make sense anymore when the real world items it's based on are vanishing.
Case in point – telling a young kid how to answer a call on the iPhone: press the green banana button.
Does that really matter? A perfect case in point was the old Pages icon in OS X: a quill pen in front of a cup of ink. Do you know any person who has ever used a pen like that in real life? Have you ever even seen a pen like that? I'm willing to guess not. But everyone still recognized the contents of that icon, and correctly inferred that the app was for writing things.
Furthermore, the use of ye olde imagery in that icon was playful, like the app was inviting you to an older time when writing was simpler and you could just focus on your words. The app was aiming for that kind of simplicity too. The use of way outdated imagery in the icon did not prevent Apple from conveying deep meaning to a modern audience. If anything, it enhanced the point they were trying to make with that icon.
You point out that while the quill pen was long outdated when it was first used, "everyone still recognized the contents of that icon", but that's exactly the problem that the GP and GGP are pointing out: more and more of the old skeuomorphic icons reference real-world objects that younger users (and indeed some older users) actually don't recognize. Notepads, sure, we've still got those; contact books, eh, you'll see them once in a while, but tbh when I see a bare "contact book" icon without a label it occasionally takes me a second to figure out what I'm looking at; floppy disks, as has been argued to death, are entirely a thing of the past, with the exception of old systems and archives still in use in dusty university basements. Young users today essentially just know the image of a floppy disk as "the save button" without any skeuomorphic rationale backing it up.
The skeuomorphic link between computers and the physical objects we use is constantly degrading, to the point that using skeuo icons can sometimes actually inhibit the user experience and slow the user down while they try to figure out what they're looking at. We have common patterns emerging with no or very little connection to the real world; a great example would be the "hamburger" menu button. If there's any metaphor there in the user's mind, it's to the row of items that will appear when you click on it, not to anything physical, yet it's perfectly comprehensible to anyone who's been using digital devices for any length of time.
Yeah, but so what? It is, nevertheless, recognizable to nearly everyone. In a world where cars still advertise their "horsepower" and pencils have graphite compound "leads" we can probably live with an icon that is well understood but whose original referent is no longer familiar.
This isn't an argument against vestigial iconography. It's not even an argument against skeuomorphism. It's a recognition that skeuomorphism increasingly fails to serve its intended purpose of conveying a meaning. Once we recognize that the old icons are dead metaphors, that we often keep them only because of inertia and not because they have any intrinsic value, we can build momentum on the necessary work of establishing the visual language of the digital age on its own terms.
Language is full of dead metaphor. Words and symbols have meaning because they have been given meaning; most of the associations might as well be arbitrary. Doesn't matter; humans are very good at recognizing these associations and deriving the intent.
As for the rest of your comment, I'm not sure what you think I was trying to make sound difficult. I was arguing that it's entirely possible to replace the floppy disk with an arbitrary symbol, and it's just inertia (user training and a strange sacredness afforded this one random icon) that really keeps it around. I think associating a distinct symbol with an action is pretty damned easy, actually. (Designing a good symbol can be hard, though.)
Of course it's possible to pick a different symbol to represent the action of saving data to permanent storage: but why bother? We have a symbol, and it's as good as any other arbitrary symbol might be. Changing it would create confusion for no benefit, since it's ultimately the association of the symbol with the action that matters, and not the history of that symbol's origin.
The younger ones don't seem to know what one is, but the older ones do.
If stop signs showed the picture of the brake lever from a Ford Model T, you'd have a point.
I agree that the octagon is arbitrary. But that's beside the point, because it's self-evidently arbitrary. Nobody assumes there must be a deeper meaning behind a basic shape. The iconography of a floppy disk is not a basic shape.
I guess I don't see the point; you'll just confuse people who already know what the old icon means. Our letters aren't particularly brilliantly chosen (arguably other systems are more logical or easier to learn), and yet who wants to replace them?
Importantly, the user has no reason to directly contrast the new icon to the old one. The user doesn't answer the question "Is this icon as effective as that old one?", they simply have to answer the question "Do I know how to perform the action I want to perform?", and as long as your save icon clearly communicates its meaning, there shouldn't be much/any confusion when the user tries to save. They'll look for something that seems to say "Click me to save", they'll see your icon, they'll say "Hey, that looks like it means 'Save'!", and they'll try it. (Aside: I don't say this randomly, I'm speaking from experience here; there have been plenty of applications in recent years that have tried out new "Save" icons, and I can't say I've ever had a problem figuring out how to save with any of them.)
As far as what "the point" is in changing out the icon, the point is that the entire reason for using action icons on buttons, etc. is to give the user an intuitive sense of what action will be performed when they click it, and as time goes on, the link between the floppy disk and digital storage will become weaker and weaker. And while it may be true that we could drag that symbol with us by convention, my question would be, why bother? If we can come up with something better, especially if we can find something that isn't tied to any specific technology (and I'd argue that we have), isn't this an improvement? I can't think of any advantage the old icon has over new ones other than the small advantage that it's familiar, but, as I said above, I don't think that's enough.
In other words, instead of asking "Why should we get rid of the floppy disk icon?", I honestly think the better question is, "Why not get rid of the floppy disk icon?"
So in summary, while I agree that it's sometimes fine to use an old icon if enough people understand what it means by convention, this is not a good reason to avoid using the newer, less skeuomorphic icons that the linked blog post was trying to argue against.
skeuomorphism <-> devoid of metaphor
flat <-> three dimensional
So what you're saying is that we should have skeuomorphism, but mostly with things we've seen in TV shows and movies, especially period ones. This would tend to make everything look like stuff from pirate/fantasy/sci-fi/superhero movies. Looking back on indie developer graphics, that does seem to be a trend.
Surprising and excellent talk.
So it might make more sense to ask "to what degree are people used to this icon?" rather than "how many know where this icon came from?".
In this sense, changing established icons (like, I would argue, the Photos app's) doesn't seem such a good idea.
We've seen other benefits like folks have mentioned. It draws speakers out from behind the podium.
The theater we use is popular and we often didn't have the stage setup until 2AM the night before a 9AM start time. As organizers/hosts we couldn't be up that late.
Sometimes you come in and the stagehands at the venue have read the notes about putting down the carpet and decided to put down two carpets instead of one. And the carpets are secured with gaffer tape. And you don't really see an issue with it, so you leave it be.
Lis' talk is excellent though.
The old icon also had a photo on it, though, which made it clear it was the Photos app, even if you don't know what the other thing in the icon is at all.
It's a really difficult problem. So many of the things we do today are done on a rectangle with a screen. That doesn't make a good icon. Maybe icons are obsolete altogether?
And yet, people do grow up knowing that the green banana button means "phone" and the notched rectangle means "save", and often never even question "is there a real world analog for this icon?". It isn't that icons are obsolete: it is that we are graduating from a language of pictograms to a language of ideograms, yet for some reason we have large numbers of people telling us that that is somehow a horrible thing to do.
One explicit advantage that these sorts of icons have is that they allow for a nice symmetry between Save and Open icons and upload/download icons (Glyphicon is again a good example; see glyphicon-open and glyphicon-cloud-download). This ties into another, perhaps more arguable advantage: a blurring between local and remote save actions. As applications become increasingly web-based, device-independent, and portable, it makes more sense to me to intentionally separate the "save" action from its destination; I don't care so much where or how my data is saved, I only care that it's saved and that I can get it back later.
I'd love to hear responses to my thoughts here; they sort of developed as I wrote the comment, so they're rather fresh at the moment.
I don't follow all the points in the article, but I can't count the times that I overlooked Photos, for example.
I once read that one country decided to keep the steam train on its road signs based on such results, because it makes the sign easier to distinguish from other signs.
Edit: https://www.quora.com/Who-designs-the-icons-on-the-UK-road-s... claims this was indeed intentional.
Actually, a problem with this icon is that it is misleading. The iPhoto app is not a camera app. It is a photo library with some minor editing capabilities. You can't use it for creating photos, which is what the icon suggests.
A better icon would be a photo album or a stack of photos but that's hard to fit in an icon.
It makes sense for older people who remember them, who also are the folks who are more likely to have trouble understanding and interacting with computers. If your target audience was just young kids with neuroplasticity and hipsters who live this stuff, you could deploy a Brainfuck UI/UX and not only would they get it, they'd love it.
You have no idea how much this change makes the URL field palatable for the regular Joe. Previously: hhtpttpp://faceblablah.com/techmumbo/jumbo?not=h4ck3r. Now: <lock icon> facebooook.com.ru => "wow, this really is not Facebook!". I basically witnessed such a behaviour change first hand.
> Or, how they removed scrollbars and force people to actually scroll every piece of the interface to check if there's something more to see, while before you could tell just by looking at scrollbars.
No need to scroll, you can check just by resting fingers on the touchpad (except in Chrome) and look at the scrollbars.
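For what it's worth, the scroll-bar behaviour can also be forced from the command line on macOS; this is just a config sketch of the same setting that lives under System Preferences > General > "Show scroll bars":

```shell
# Always show scroll bars system-wide.
# Accepted values: Automatic, WhenScrolling, Always.
defaults write NSGlobalDomain AppleShowScrollBars -string "Always"
# Relaunch affected apps for the change to take effect.
```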
Never knew this, really handy, thanks! Any idea if it is possible on a Magic Mouse, or only the trackpads?
Anyone who does that is uneducated and ignorant; the proper response to that is to educate, not to remove the ability of those who are educated to do other things.
Hang on to your hat, because every other browser has been playing with the exact same feature, for the exact same set of reasons.
It should be obvious just by looking at how social engineering works; it has been using the same tricks for literally thousands of years. Classic children's literature, before modernisation (like Disney's), gives you a good idea of the same lessons that needed to be taught back then.
Well, yes — they should. Would you rather communicate with speech or by pointing and grunting?
We do need better languages for conversing with our computers, but we can't get rid of the need for language.
As a software developer, only one of those things is in your power.
I disagree strongly. If you were new to a machine, is there anything in the new Photos icon that tells you that it has anything to do with images, photos or image manipulation? No.
The new icon for Pages is fine, by the way.
Looking at my desk and my habits, there are actually very few objects that I use for work; most of it happens on screen. All in all, I think good icon design will become harder.
Someone that, right now, actually honestly has no idea what a camera looks like.
EX: http://www.buttonsoundbook.com/site/595626/product/ISBN%200-... (Recommended for ages 3 years and older.)
Also, young people don't live in some sort of post-technical isolation. When they go to school, they will probably see the photography students walking around with cameras. And kids still burn music CDs for each other. Because it's cute and sentimental, and it feels a lot better than texting someone a link to a playlist.
Who are these supposed people who are buying Macs but have never seen a camera? This set of people must be minuscule if not empty.
Cameras are still in use, even if most people have replaced them with smartphones. Walk around any city and you'll see gobs of people taking selfies on their phones, but you'll also see a decent number of people carrying dedicated cameras.
Now the question is: should the icons represent something "iconic" or something that most people use?
I don't know what you mean by "iconic". If you mean "abstract and without independently-identifiable meaning", then I think in general, no. Icons should be immediately identifiable when possible. The overlapping rainbow paint chips that make up the current Photos icon seem utterly meaningless. The camera was meaningful. A stack of photos would be meaningful. An abstracted "photo" like Windows 10 uses (and like Google uses in the sidebar of their online photos app) would be meaningful.
For the record, I think Android/Google's photos icon is also utterly meaningless to most people. I think it's supposed to represent a camera aperture (same as Apple's icon?), but most people probably don't know what a mechanical aperture is.
For example, in the current world maybe 10% of the people are taking pictures with actual cameras, but we still represent the camera as a "box with a huge lens". Few people use landlines, but the phone icon is still a banana. To send an e-mail we generally use an envelope, but we send way more electronic messages than snail mail.
So you either use something "iconic", or every icon is just a small screenshot, or I guess you make something up that is meaningless.
Yes, it says "Photos" below it.
Going back to the original article, it doesn't really matter what the icon looks like, as long as it's unique. You're not looking at each icon and asking yourself "is this a camera?" to get to the Photos app. You know it's a circular array of colors, and your finger goes there.
Back in 'teh day' when my icons were all cutesy skeuomorphs, my computer came with maybe a dozen apps and most of them were on my desktop. Now, I have over a hundred sitting in my global Apps folder alone - never mind user Apps, sub folders, etc.
This gets compounded even more on phones. On my iPhone, I've accidentally placed 1Password next to the Settings app; both have a grey background with a circular center. Settings' center is grey gears, 1Password's is a blue ring with a keyhole. On examination, they're not similar at all, but I can't even begin to tell you how many times I clicked on one, meaning to click the other, even _knowing_ the differences and kicking myself each time.
I think a lot of what the designers are trying to do is create an icon that stands out visually, and is easily found from amongst a large set of other icons, rather than trying to impress upon us what its functionality is from a metaphor.
Really? Because when I used an iPhone, I most definitely didn't know where most of my apps' icons were. I knew most of them on the first screen and the rest of the screens were a mix of random stuff.
I'm far faster at picking text labels from a list than picking the icons. I see the text label as an icon.
The text "icon" is simple and unchanging. I can pick it out much faster than funny pictograms that change with each OS version.
Tell that to my muscle memory that types ⌘A > iP for iPhoto and ⌘A > iCal for iCalendar.
But what has actually happened is a mess of inconsistent designs. Older apps still use skeuomorphism. Newer apps copy the flat look.
Skeuomorphism implemented a design principle - discoverability. You could argue with the choices made for specific icons, but there was a consistent goal - to make the OS as usable as possible.
The flat look is an anti-pattern - fashion over discoverability. The point seems to be to allow Sir Jony to impress everyone with his aesthetic skillz. User experience has become secondary to internal politics.
That's a huge problem for a company like Apple, because it's a fundamental shift in focus.
And there have been obvious effects. Watch isn't doing brilliantly, precisely because the user experience is nothing special. iOS is creaking under the weight of new features apparently added with no overall strategy, many of which remain invisible to users because they literally never find them.
MacOS is going the same way. Some of the new features are certainly welcome. But the Apple UX generally is suffering badly - from bugs, from questionable design choices, from popular products like Logic and FCP that have been cut down, then more or less abandoned.
There's no one in charge to obsessively fine tune the conflicting needs of product momentum, reliability, UX, and aesthetics.
Jobs was very good at that. Cook doesn't even seem to realise there's an issue.
An icon can help later in quickly locating a program, but I do not really care what it depicts. Does the Firefox, Chrome, or Internet Explorer icon show you that it will display websites? The Internet Explorer icon became the "Internet" icon for a lot of non-technical users, I think.
The rest of the article's arguments seem like personal, subjective distaste for flat/minimal design and a yearning for the good ol' days of skeuomorphism.
I really don't see why visual languages should not evolve along similar patterns. Visual communication would hardly get easier if we switched the existing symbols over to black, rectangular screens for all the tasks that have recently been consolidated in the smartphone as a single, multifunctional tool.
Design language is still distributed, but not nearly to the same degree. If Apple decides that an old skeuomorphic logo is ambiguous, or not understood by younger users, or some other complaint like that, they can change it and the new one will enter the design vocabulary very quickly.
Often these are helpful. A little logo with just a person is _much_ clearer to me than a contact book. Over time it would be best to shed all the strange associations with real world objects in favor of a design language that actually describes what the software does, instead of making a poor analogy to a real world object.
Even 'digital natives' live in the physical world. We start learning how it works before we ever touch a computer, and even the most dedicated nerd spends more time interacting with physical objects than with digital interfaces. It doesn't take additional learning to know that an object casting a shadow on another is in front of that other, for example. Failing to leverage that existing knowledge is tantamount to shutting down whole swathes of users' brains.
My first computer had a text display and a manual, in 1983. I had no trouble teaching my college house mates how to use it for word-processing.
Everything else that I did with the computer involved text symbols such as GOTO. ;-)
Today, when I use most software, beyond a few familiar icons like the floppy disc symbol, I have to hover the cursor over each icon to figure out what it does. When I learn keyboard shortcuts for those icons, I use them, to reduce eyestrain and wrist fatigue. A great feature would be a single button that reveals all of the icon descriptors at once.
Enter the smartphone.
The smartphone does everything. It makes calls, takes pictures, receives messages, tells time, shows TV shows (we'll have to stop calling them TV shows, won't we?), etc. Since icons resemble the tools used to carry out given tasks, what are we to do when one tool does all tasks?
That's the problem for skeuomorphism. What are we to do for icons when one tool does everything? Create icons depicting people doing those different things, a bit like emoji? A selfie icon for the camera, a person talking on the phone for voice functions, a message bubbles icon that mimics message interfaces for messages?
I don't know. I guess a language to replace the skeuomorphism language will emerge.
The Photos icon "metaphor" doesn't work for me, for example.
- Chrome is a brand, and people have been educated to expect that from it (same with Ps and Ai - "everybody" knows what's that)
- Photos is a utility; it does not exist outside of Mac OS X. Like the fuel icon in a car gauge, it is supposed to be discoverable
There's an option to show the full URL under Preferences -> Advanced -> Show full web address
These historically grown metaphors make a lot of sense where it comes to how we relate to objects. The word "camera" isn't only short and distinctive, it's also evoking a sense of the history of the medium, and it may provide some emotional bonding to the object which may hardly be achieved by a "multi-image" – while the latter may actually sound a bit fancier in a 1970s SF movie. I think there are some valid points made by the article, and, besides, on an empirical level, I'm losing time in the newer, uniformed MacOS GUIs, too. (Not to speak of Miller's magical number 7 or 3.7 bits of working memory, which are still valid in the age of flat design. But this is another story.)
This can be changed on MacOS from Safari -> Preferences -> Advanced -> Show full website address.
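The same preference can also be flipped from Terminal; a one-line config sketch (key name as used by recent Safari versions):

```shell
# Show the full URL in Safari's address bar instead of just the domain.
defaults write com.apple.Safari ShowFullURLInSmartSearchField -bool true
```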
This just seems weird to me. Contacts books were replaced in a very limited way by PDAs for early adopters, but it wasn't until smartphones and their easy address books that they really started to die off. So post-2008, really.
Were you not in professional settings until your late 20s? I'd still be surprised if you are both 35 and genuinely unaware of what an address book is.
In the 90s I knew people who'd set their browser homepage to a 'quick reference info' page on their server which included frequently needed numbers.
The only tech savvy people I've known to use contact books in the last twenty years seemed a bit eccentric to me.
I had a contacts section in my DayTimer back in the 90s, but I've used a contacts program ever since I switched to using a computer-based calendar.
I disagree strongly. It is now more needed than ever. 40 years ago, the majority of people using computers knew what they were doing. Now, they're everywhere, and people interact with them every day. It's very common to have to interact with a computer we have never interacted with before, and the need to give visual clues to the user is far more important than ever before. It's also very common now for people to simply reach out and grab new software; a well-designed interface that uses visual clues and cues to help people achieve their goals is much more important now than the days when people would actually sit down with the manual to work out how to operate their new piece of software.
I am routinely unable to immediately answer a phone that isn't mine because none of the pictures drawn on the screen are obviously about answering a phone.
You think that 10- and 20-year-olds, growing up with computers and even smartphones available for all of their lives, know less of computers than people in the past?
Even a 7 year old that plays games can almost run circles around a 1980s propellerhead computer programmer in using a GUI. Compared to regular people (e.g. office workers) from 40 years ago that just used DOS and some word processor or POS or accounting program, there's just no comparison.
The difference is of course that today computers are used by more of the general public for communication, office, and media consumption, rather than (only) bunch of tech savvy specialists.
Most of my generation and the generation that followed barely have a concept of a computer beyond "press the thing to make things happen". They know no more about their telephones now than did the people using the landlines of 20 years ago, or how much most people understood the internals of TV or radio or fridges.
As for running circles around people while using a GUI, while I accept there are specialized fields and exceptions that disprove the rule, outside of marketing presentations I just don't see it happen. As much as any generalisation can be valid, GUI use does not let anyone run circles around someone who can actually program, and is generally the mark of the computer novice/consumer...
That is patently false. There are hard-core GUI users, from VFX and 3D artists to DAW and NLE operators, graphic designers and many more, that run circles around any "command line" person for the tasks they actually do.
Just as there are tons of programmers using Visual Studio and other GUI platforms that can program far more efficiently with the intelligent autocomplete, integrated debuggers, profilers, and such, than some CLI jockeys who think they are more efficient with their pimped Emacs or Vim.
Is Rob Pike and his GUI editor/environment a "novice/consumer"? What about tons of excellent Windows programmers? What about Notch?
I wonder. Every time I see a colleague using the command line to do Git operations I get itchy and think to myself, "oh, come on! I could've done this in a GUI in seconds". Each interface has its time and place, but to me using a GUI is more of a mark of valuing one's time than that of a novice.
What operation could you possibly perform significantly faster in a GUI than on the command line? I can think of tons that would be far, far slower in a GUI.
> to me using a GUI is more of a mark of valuing one's time than that of a novice
I would argue the exact opposite. GUIs are there to make things accessible to non-power users. A command line is just infinitely more expressive and will let you be much more efficient if you learn to use it effectively.
With nearly every program that I use I start by depending heavily on the GUI and then transition to using almost exclusively keyboard shortcuts as I become a power user, as GUIs are fundamentally inefficient.
Sticking to the Git example:
- visualising history (in a GUI it is just there)
- opening old versions of files
- visualising a complete log of a file and then jumping to individual diffs/commits
The fact that in a decent GUI everything that could possibly be a link is, is very useful. I do not need to go around copy-pasting SHA1 sums. I drop down to the command line when I need an occasional filter-branch or some arcane incantations. But maybe Git is a bad example, because it has a notoriously bad CLI.
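For comparison, the three GUI features listed above do have rough CLI approximations; here is a sketch run in a throwaway repo (the file name `notes.txt` and commit messages are made up for the demo):

```shell
# Set up a disposable repo with two versions of one file.
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email demo@example.com
git config user.name demo
echo "first draft"  > notes.txt && git add notes.txt && git commit -qm "first"
echo "second draft" > notes.txt && git commit -aqm "second"

git log --graph --oneline --all   # visualise history as an ASCII graph
git show HEAD~1:notes.txt         # open an old version of a file
git log -p --follow notes.txt     # complete log of one file with per-commit diffs
```

Whether this counts as "just there" the way it is in a GUI is, of course, exactly the point under debate.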
Some other example, debugging.
For me it seems that you can actually tell whether a programmer uses a visual debugger or a CLI. If they have to drop down to GDB, then their code will most probably be sprinkled with useless debug macros.
Setting breakpoints and jumping from function to function is easier with a visual debugger and a good IDE. (Note that the IDE can be Emacs or Vim running in a terminal session for all I care.)
> With nearly every program that I use I start by depending heavily on the GUI and then transition to using almost exclusively keyboard shortcuts as I become a power user, as GUIs are fundamentally inefficient.
Keyboard shortcuts are awesome, of course, but I think they are so efficient because there is a GUI around. In a GUI you can always see more state at the same time. This is because graphics can sometimes pack more than text into the same space (e.g. a visualised Git tree or a graph spat out by callgrind).
Need to pull some new code though, or commit changes? I think "git pull" or "git ci -m 'Message'" is faster than opening up a complicated GUI window with lots of decisions to make.
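As an aside, `git ci` isn't a built-in subcommand; it's typically a user-defined alias, along these lines (shown repo-local in a throwaway repo; use `--global` to make it permanent for your user):

```shell
# Disposable repo for the demo.
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email demo@example.com
git config user.name demo

# Define 'ci' as a shorthand for 'commit' (a common personal alias).
git config alias.ci commit

echo hello > file.txt && git add file.txt
git ci -m "Message"        # behaves exactly like: git commit -m "Message"
```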
To me, using CLI is like having a conversation, with much richer vocabulary, than GUI. That's just pointing at things.
I think your conversation metaphor is good, but in a GUI the answer can be richer and interactive. A CLI can only manipulate text, a GUI can manipulate text where necessary and use a better medium (images, graphs, tables) where necessary.
A CLI does not necessarily need to manipulate text only; see the command line in AutoCAD, for example.
But yes, a GUI in many cases is a better way; I wouldn't want a CLI-only Photoshop-like app (or even a CLI-only CAD modeller). The point is to recognize the effectiveness of both approaches for a given problem, at the given abstraction level.
There would be no point in micromanaging the car using a limited vocabulary.
Sometimes it is better to mass delete files with a find, grep and rm, sometimes an auto-filtered search and cmd+a cmd+delete is better.
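The find/grep/rm combination mentioned above might look like this; a sketch run in a throwaway directory (file names and the `ERROR` marker are made up for the demo):

```shell
# Set up a sandbox with one matching and one non-matching file.
tmp=$(mktemp -d)
printf 'ERROR: boom\n' > "$tmp/a.log"
printf 'all good\n'    > "$tmp/b.log"

# find selects candidate files, grep -l keeps only those whose contents
# match, and xargs rm deletes the survivors of the filter.
find "$tmp" -name '*.log' -exec grep -l 'ERROR' {} + | xargs rm

ls "$tmp"          # only b.log is left
rm -rf "$tmp"
```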
A lot of the CLI usage feels fast because it's busywork.
Most of them don't know a thing about computers. It's a magic box with a small selection of shiny buttons on. They use it for passive media consumption.
Even a 7 year old that plays games can almost run circles around a 1980s propellerhead computer programmer in using a GUI.
Well, they clearly forget by the time they're twenty.
You'd be surprised.
Except by "know about computers" you mean they know about interrupts, and cache lines, and filesystem design, and other stuff that is completely inconsequential to using their computers.
>Well, they clearly forget by the time they're twenty.
Actually the linked article states the opposite:
>"This current generation of young people has never lived without tech," said Linda Rosen, CEO of Change the Equation. "It's second nature to them." Yet, using technology for social reasons doesn't make a person adept at using it in other settings, she said.
So it's not that they "forgot" something, it's that they never bothered to learn it in the first place, e.g. Excel or whatever. But computers, for what they do like to use them for, are "second nature" to them according to TFA.
I disagree. The catalogue of complaints listed in this article is the norm.
It seems that this happened somewhat recently.
Suffice it to say - just like you felt when you were younger - most thoughts that boil down to "damn kids these days!" are almost certainly not true.
Of course it's silly because of the global productivity gains. And it may just be my innate misanthropy which made me a nerd when I was younger now manifesting in this new form.
I'm already an old man annoyed by people my age and annoyed by older people. But truth is, I'm annoyed by most non technical people.
PS: I do like discussing technical matters with people from non-tech fields, though: luthiers, masons, cooks, etc.
Yes. Aptitude with screwing around with a GUI isn't very relevant.
40 years ago, computers were at best used by clerk-type people for specific tasks at a terminal. Accountants were using tabulation machines; written materials were produced on IBM Selectrics.
The people engaged in professional work with computers were mostly programmers or others doing "data processing", or working with business-analyst types to model business processes around workloads that could live with the available computing resources.
It is relevant to actual stuff they want done.
Unless they are programmers, aptitude in screwing around with CLI commands isn't very relevant.
er, what? i'm 32. i and everyone i knew used contact books until i was probably age 20, especially in offices.
Maybe internationalization was a consideration here? Yellow paper doesn’t read as anything recognizable internationally. (Yellow sticky notes are probably internationally known, though.)
My overall point would also be that taste colors opinions in this case, or at least leaks into them. I think it’s important to be very careful with that and to try to avoid letting taste color too much of what you think. (My taste is very different from the author’s, and as such I think many of his points are just plain wrong-footed. There certainly are some good points in there, but taste plays too much of a role.)
I think the bigger issue with the new Notes icon is the weak branding. Previously, you could tell Notes and Reminders apart by just looking at the colours. This was really important when you told Siri to "remind me to buy milk tomorrow" - you would either see a bright-yellow note, or a black-red-white reminder. Now, everything is "almost white", making it really hard to tell what content lives in which app. (Notes and Reminders have a lot of conceptual overlap, especially now that Notes supports checklists.)
The Notes app is far better than that; the white it uses now implies a more permanent, organised feel that better reflects the app. The texts I store within it are important to me; they're not final documents, but something I'd treat better than a disposable scribble on a yellow pad. With the app's formatting, cloud and folder abilities, this seems a good fit.
I'd always assumed it was dyed yellow to hide poor quality paper. Is there any truth in this?
Perhaps yellow pads are the future.
When I buy cheap recycled-paper notebooks, the paper is often off-white with a yellow look to it. At least to me, yellow implies disposability or cheapness.
In 1844-45, two individuals invented the wood paper-making process: a Canadian, Charles Fenerty, and a German, Friedrich Gottlob Keller, both involved in the lumber industry, who recognized the cost and durability advantages that wood pulp provided over cotton. Within thirty years, wood pulp paper was all the rage on both sides of the pond. While wood pulp paper was cheaper and just as durable as cotton or other linen papers, there were drawbacks. Most significantly, wood pulp paper is much more prone to being affected by oxygen and sunlight.
Wood is primarily made up of two polymer substances – cellulose and lignin. Cellulose is the most abundant organic material in nature. It is also technically colorless and reflects light extremely well rather than absorbs it (which makes it opaque); therefore humans see cellulose as white. However, cellulose is also somewhat susceptible to oxidation, although not nearly as much as lignin. Oxidation causes a loss of electron(s) and weakens the material. In the case of cellulose, this can result in some light being absorbed, making the material (in this case, wood pulp) appear duller and less white (some describe it as "warmer"), but this isn't what causes the bulk of the yellowing in aged paper.
Lignin is the other prominent substance found in paper, newspaper in particular. Lignin is a compound found in wood that actually makes the wood stronger and harder. Lignin is a dark color naturally (think brown-paper bags or brown cardboard boxes, where much of the lignin is left in for added strength, while also resulting in the bags/boxes being cheaper due to less processing needed in their creation). Lignin is also highly susceptible to oxidation. Exposure to oxygen (especially when combined with sunlight) alters the molecular structure of lignin, causing a change in how the compound absorbs and reflects light, resulting in the substance containing oxidized lignin turning a yellow-brown color in the human visual spectrum.
Since the paper used in newspapers tends to be made with a less intensive and more cost-efficient process (since a lot of the wood pulp paper is needed), there tends to be significantly more lignin in newspapers than in, say, paper made for books, where a bleaching process is used to remove much of the lignin. The net result is that, as newspapers get older and are exposed to more oxygen, they turn a yellowish-brown color relatively quickly.
As for books, since the paper used tends to be higher grade (among other things, meaning more lignin is removed along with a much more intensive bleaching process), the discoloration doesn't happen as quickly. However, the chemicals used in the bleaching process to make white paper can result in the cellulose being more susceptible to oxidation than it would otherwise be, contributing slightly to the discoloration of the pages in the long run.
Today, to combat this, many important documents are now written on acid-free paper with a limited amount of lignin, to prevent it from deteriorating as quickly.
Is it? I know yellow is the default colour for sticky notes, but for everything else -- at school, university, work, home -- the paper is white.
Ryman stock over 500 notepads. Three have yellow paper, and are described as "Being Yellow in colour [they] may appeal to people with dyslexia as the coloured paper can aid reading and writing in people with this condition".
Not sure how well known that brand is outside of Europe.
My brand new notepad (DIN A5, 5mm grid, spiral bound at the top) for my board game evening today doesn’t say where it was made, just that it’s from a German company (it’s labelled predominantly in German, though somewhat prominently also in English – language designated as “UK” – and Turkish, as well as French, Italian and Dutch in much smaller print).
Googling the company doesn’t tell me where and even whether they produce the notepads. Maybe they just relabel someone else’s notepads and resell them?
All I know that the text on their website makes me want to vomit: “Kyome’s target group is primarily women between 30 and 50 who want to combine the practical with the attractive. SoHos (Small Office or Home Office) are increasingly finding their place in living rooms and kitchens. For this reason, kyome products are surprising, as according to the brand promise, with nice, clever ideas, are pleasantly functional and a long way from grey, everyday office life.”
Firstly that’s some really bad English, secondly that’s insultingly sexist.
But back to the topic at hand: I think the important point is that white paper with some ruling (lines or grid, with margins or without) and bound in some way (left or top, spiral or glue) is a widespread internationally recognizable look for notepads. The details then don’t matter that much.
That said, I was thinking along similar lines a few weeks ago. I was at a home improvement store browsing the power tools for a jig saw when I came across a hot pink drill kit.
"Hot pink?" I thought to myself. "Did they see that the number of women interested in home improvement is rising and figure that women are simple enough to fall for that? To buy your shitty drill just because it's pink?"
Feeling grumpy, I told my mom about it over lunch the next day. My mom bought an old foreclosed-on house in BFE Appalachia last year, and took it upon herself to renovate it -- it went from complete, unlivable dump to nice, cozy home as she replaced the floors, the ceilings, the roof, the cabinets, all of the bathroom fixtures, all of the doors, etc. My mom is no girly-girl and has never been afraid to get her hands dirty, and she's physically stronger than most men I know (including myself). To my surprise, upon hearing about the pink drill, she declared, "I want one!"
Let me tell you, my mother is far from simple. Beyond being strong, dedicated, and resourceful, she's also very intelligent. I know that she knows that the company doesn't actually care about women doing home improvement and is just trying to make a quick buck by "tapping" a market that's already been tapped by your typical orange or yellow or black unisex drill. But you know what? If you like something, you just like it, even if it happens to be stereotypical for you to like it. Stereotypes exist for a reason, and businesses would not be constantly wielding them in attempts to appeal to their target demographics if they didn't work in the market. There's clearly no ill-intent behind it, just business.
If I were out buying eyeglasses and saw an advert for some new sort of lens or coating to suit people who stare at computer screens all day, and the advert featured a nerd typing furiously on a computer with a fake lightsaber mounted on the wall and a set of D&D books on the shelf, should I be insulted? Or should I be glad that someone is finally making glasses for me? Chances are, I'd be excited. I might even wait around for a bit to see if I can find a new cleric for my party...
The world needs diverse things and something for all tastes. What I dislike very much, however, is strict bucketing or stereotypically selling those things.
Pink drills? Why not, though maybe blue, green, orange, magenta and so on drills would also be cool to have. And please don’t write “drills for the female renovator” above them.
Also, if there is only one shitty pink drill and the rest of the stuff is not available in pink, wouldn’t you say that sends a message, too? It says something about how normal it is for women to do e.g. home renovations. It says something about their status and role and as such is pretty shitty. See the wider context.
(Also, your assumption that you are somehow uniquely positioned for glasses for people looking at screens all day is itself somehow weirdly sexist. So many people look at screens all day for all kinds of reasons, irrespective of their gender.)
> I think people should make diverse aesthetically pleasing things (even if only some people find some of those things to be aesthetically pleasing).
I agree completely.
> The world needs diverse things and something for all tastes.
I agree here as well.
> What I dislike very much, however, is strict bucketing or stereotypically selling those things.
I understand why someone might find that distasteful. The problem is that marketing budgets are only so big, and companies need to identify some well-defined subset(s) of the population in order to effectively advertise and (hopefully, to them) sell their products. Perhaps it's unfortunate, but the straightforward way to advertise to some group of people is to identify things that some large percentage of them have in common, and appeal to those things. If the selected strategy doesn't work, it's time to abandon it and come up with a new one. If Kyome's advertisements have been along the same lines for some time, then it's likely that they've been effective. If the adverts aren't working, Kyome will eventually ditch them in favor of something else. For what it's worth, there is (usually) no ill intent behind it -- it all just comes down to trying to advertise effectively without spending a fortune creating tailored advertisements for everybody. If you let it get to you, then you're going to be constantly offended by all the advertisements that (unfortunately) fill the modern world.
> Pink drills? Why not, though maybe blue, green, orange, magenta and so on drills would also be cool to have. And please don’t write “drills for the female renovator” above them.
Again in agreement.
> Also, if there is only one shitty pink drill and the rest of the stuff is not available in pink, wouldn’t you say that sends a message, too? It says something about how normal it is for women to do e.g. home renovations. It says something about their status and role and as such is pretty shitty. See the wider context.
I didn't notice any other pink tools, but if they were there, it's likely I overlooked them. I'm not the most observant person in the world, especially when I'm locked on target. The only reason I even noticed the pink drill was because it was out of place, not with the other drills, but on the counter with the "display models" of a bunch of handsaws rather than on a shelf.
The thing is, up until recently, it hasn't been normal for women to do home renovations in the US. There has been growing interest in DIY home improvement and construction projects among women in just the last few years. Of course, there have long been some women interested in it (my mom, for example), but they didn't constitute a large enough segment of the market to convince companies to produce demographic-targeted tools. That's apparently beginning to change.
> (Also, your assumption that you are somehow uniquely positioned for glasses for people looking at screens all day is itself somehow weirdly sexist. So many people look at screens all day for all kinds of reasons, irrespective of their gender.)
I've read and re-read what I wrote here, and I couldn't at first figure out where you got the idea that I think I'm somehow "uniquely positioned" for such glasses. I gather that you're German, though I'm not sure German is your native tongue (your written English is very good). If it is, it may be a "language barrier" type thing, and I think the misunderstanding likely comes from this phrase: "someone is finally making glasses for me". I can see how that might be taken to mean that I thought the manufacturer literally had me specifically in mind when creating their product or their advertisement. However, this is a common figure of speech (a hyponymic form of synecdoche) in US English that I suppose could be read "for me and people like me in some relevant way", with the subtext that it feels as though they might as well have had me in mind while creating it. One alternative, "for us", is too nonspecific and ambiguous -- who's "us" in this case? me and you? unspecified people I happened to be with when the event was occurring? everyone in the whole world? glasses-wearing people who look at computer screens all day?. Compared to "for us", "for me" also puts more emphasis on the fact that I myself am a member of the group (to the point that it might even be the only reason why I care). Another alternative, "for people who wear glasses and spend all day looking at computer screens", while it has the virtue of specificity and unambiguity, is just way too verbose. One middle-of-the-road alternative would be to say "for me and people like me", which itself is perhaps ambiguous enough to lead to similar confusion because it doesn't specify in what way(s) the referents are similar to myself. I hope that helps clear that up. Please let me know if I've made any wrong assumptions here or if I'm not making sense.
Finally, I really don't know how you're able to read sexism into that little anecdote. I didn't mention gender or sex in it anywhere. I didn't even use any words that connote or otherwise imply gender. Regardless, you don't know me and you don't know anything about my gender, but your assumptions here have left me too confused to be offended.
If you have suggestions on how I could make my use of language better or more clear in the future, please let me know. I'm always striving to improve my communication skills.
I'd imagine Rhodia is pretty correlated with the popularity of fountain pens.
Photos of typical yellow pads: https://www.google.com/search?q=yellow+legal+pad&tbm=isch
History of the yellow legal pad: http://www.legalaffairs.org/issues/May-June-2005/scene_snide...
That's a bit funny, since it's France's revolutionary units which have been imposed on almost the entire world.
I'm an American who's worked in offices the last 10+ years and I have stacks and stacks of yellow legal pads from years of note taking in my closet.
I think it is probably to do with it being cheap recycled discolored paper that was cheaper to dye than bleach. Might be better for the environment than bleach as well.
I switched to dot paper about 4 years ago, though, and I'm relatively happy with the decision.
Some say yellow is better because it doesn't change color over time. I personally prefer yellow, but it's certainly odd.
My law firm in the USA uses white now.
In my view, the reason behind many of these user interface changes is not really improved usability. The simple reason is that no matter how good an interface you design, after some years it just starts to feel old and boring. Old and boring is hard to sell; fresh and exciting sells better. Therefore we keep changing stuff, even though from a pure usability perspective it would be better to stick with the old and boring but familiar.
Easy-to-use systems make happy customers, but they don't necessarily win the customer's heart at the point when the purchase decision is made. Maybe this is one issue for Apple? Maybe the "old Apple" was happy giving up xx% of its sales partly for ideological reasons, but the current one needs to find growth where it can. One can see hints of this in the product lineup: back in the day it was pretty opinionated, while now there are 4 different iPad models (and countless variations).
Thank you for trying to introduce thoughtful conversation to the embers of a flame war.
When Apple first put forth their ideology it -was- new and shiny. It worked in their favor. I agree with you that now it's "boring" and "old."
Which is why it makes no sense as the icon for a photo management application. The iOS Camera icon still looks like a camera, but having a camera as the icon for a gallery application that isn't actually capable of taking pictures is a confusing concept to many users.
In such a world, if you didn't choose to use an icon of a camera to denote "photo", what the hell else could you possibly use that would make any natural sense?
Yes, usability has degraded during the recent 'flat design' craze, but not so much because skeuomorphism was tossed out as because of the many little visual design changes that kill discoverability.
The mobile operating systems started this trend, hiding a lot of advanced functionality behind 'magic' touch and swipe gestures that go way beyond the simple and intuitive tap, zoom and rotate: 2-, 3- and 4-finger swipes, long and short touches, etc. Important features cannot be visually discovered (how do I close an application again, on iOS, Android and Windows 8? How do I flip between applications? How do I take a screenshot?).
It's the many small things that kill usability for the sake of visual design:
- the famous shift-key on iOS, what the hell were they thinking?
- buttons are often indistinguishable from non-interactive labels, leading to idiotic trial-and-error clicking to find out which UI elements do something
- scrollbars that are hidden by default, losing the information of how far I am into a document (OS X)
- changing and moving things around just for the sake of confusing existing users, not making anything more intuitive (especially Windows is guilty of this)
And so on and on... icon design is the least of the problems (and every OS worth its salt should let you replace the icon theme anyway).
One important reason I'm going back to the command line more and more is because UIs have become so unusable for anything that goes beyond browsing an image collection. Change itself is only good if it results in improvements, but in the area of UI design, things that have been working just fine for 20 years have been broken for superficial visual effects.
It's as if 90's web designers took over and are building operating system UIs now (and maybe there's a bit of truth in that).
It's not like the past was perfect of course, I mean... Alt-F4, Alt-TAB, ... but that was on Windows which was always laughed at for its poor usability (at least from view of AmigaOS and MacOS users).
> - buttons are often indistinguishable from non-interactive label, leading to idiotic trial and error clicking to find out which UI elements do something
How come everyone understands hyperlinks on the web, but when the OS follows the same pattern you can't figure it out? I know a few apps have gotten this wrong on occasion, but as long as the text for an action is coloured differently than regular text then it's pretty obvious.
> scroll-bars that are hidden by default, loosing the information how far I am into a document (OSX)
On small screens (remember, MacBook Airs have only 11" screens, which I consider small), this saves precious screen space. If you really need to know where you are in a doc, resting your fingers on the trackpad and moving slightly brings the bar back. I thought this was a brilliant design for reducing clutter on the screen, not unlike how our browsers have been reducing their chrome to give the content more room.
I'm all for removing clutter from my screen if it helps me focus on the content.
Christ, yeah, I still remember, before Microsoft attempted to patch Windows 8 into a somewhat less unusable OS, opening one of those Metro apps to see what it was like. And then exactly that happened: I sat there for probably at least ten minutes not knowing, once again, how to close it.
Eventually, through pure chance, I moved the mouse to the top of the screen, and that then unhid what was essentially a window titlebar with a close-button on it.
But no indicator for it, no tutorial shouting at me how to do it, there was simply no reason for me to ever move my mouse to the top of the screen.
This is not surprising, because our sense of beauty comes from the physical world.
So what is the problem with skeuomorphism?
Tech enthusiasts would like their phones to look like something from the future, not something from the past. But ordinary everyday people prefer for it to look like things they are already familiar with, or can relate to.
Tech enthusiasts worry that the skeuomorphism was getting totally out of hand, particularly where the UI metaphor started limiting functionality (e.g. an address database that's limited to what a Rolodex can do, rather than exploiting what is possible with a computer). But this is not really true. For example, iBooks has instant search, something only possible with a computer.
Some people point out that many skeuomorphic elements reference things that a large part of Apple's audience hasn't used in a long time, if ever. True, but here's the thing: It doesn't matter whether the user has ever seen a reel-to-reel tape. What matters is whether the visuals depict a physical object that the user can model in his mind. If it is too abstract (that's the opposite of physical) then non-tech-enthusiast users will find it hard to intuit.
Some people say skeuomorphism looks tacky. This is partly true. Skeuomorphism is hard to do. When done poorly it does look tacky. But when done well it looks very beautiful.
By removing all skeuomorphism Apple is throwing the baby out with the bathwater.
Worst part of the article by far was
> OS X packaging, once very elegant and eccentric (and printed on a physical box), has become thoroughly unremarkable.
This is 2016; no one uses CDs anymore. And that leopard-print box design looks like packaging for some kinky underwear.
It's funny witnessing certain people who a couple of years ago considered Apple's designs the pinnacle of design, far ahead of everything else, now using the same arguments that Windows fans once used.
The cycle is really tired, and their wild claims have worn out their novelty to non-Apple people.
Haha, exactly my thoughts :)
The author must be 90 years old.
Buttons still look pushable, input fields still look editable. The Dock didn't lose any functionality whatsoever by having the 3D effect removed.
In my opinion, the El Cap UI requires just as much talent as the overdesigned (but very pretty) icons and graphics from the previous era. I don't miss the brushed metal and pinstripes, though.
The old "Pages" icon was instantly recognisable. The new one looks, at first glance, exactly the same as text edit and notes.
I don't think that the UI actually needs to get out of the way. We humans are perfectly able to ignore even the most obnoxiously designed mess (cf. banner blindness). But we are not very good at picking from many similar-looking things. Replacing the colorful sidebar icons with simpler monochrome versions now requires us to actively look for the icon we need, instead of just picking it out intuitively.
Contrary to popular belief these days, contrast is not the enemy.
I'm getting a bit fed up with listening to fellow designers preaching about "cognitive overload" over a few button shapes and icons, then proceeding to ship designs where all the controls are unstyled blue text with the occasional semi-abstract line-art icon.
Things looking better actually has a positive effect on usability.
I'm a long-term Inkscape user, they recently 'improved' the icons; it all looks wrong (but handsome in a minimalistic, low-visibility of chrome, sort of way) and disturbs my workflow considerably.
I'm not sure why the parent comment was downvoted. If the above statement was intended to mean that being consistent and intuitive is more important than aesthetics then that is almost certainly true, in my experience designing and testing UIs. Of course, the ideal is to have it all by using the aesthetics to support the functionality. Being attractive and being functional aren't mutually exclusive.
This is where, IMHO, a lot of generic minimalist/flat designs following the current trend go wrong: they sacrifice so much detail and so many possible ways to be visually distinctive or interactive that what remains inevitably all looks very similar and loses some of the visual cues that could help to guide the user in how the system works.
But being fashionable and being functional are often opposing forces.
Seems to me flat web design was a reaction as the antithesis of an over-indulgence in skeuomorphism. We just appear to have thrown out a lot of affordance and visibility in that reaction.
I sort of understand, but no-one outside my family uses my desktop and when I worked in an office it was just me or the IT support people, so how is it about branding in the general situation. For media stuff everyone is likely to be using the default.
I recently had to use a computer with OS X Mavericks on it (10.9 I think). I was struck by how beautiful the interface was. I'm having trouble finding a good screenshot illustrating this, but compare
Everything just looks better on Mavericks. The gradients may be over-the-top but they're at least consistent. Transparency on El Capitan is pointless and ugly. Maybe I like the system font a little better.
Firefox is also a really good example; it looked great on Mavericks but has not been able to fit in since.
Usually, I prefer simple UIs: i3, terminals, etc. But the look they've gone for in Yosemite/El Capitan just doesn't work.
It looks out of place on El Capitan and on macOS Sierra it's downright disturbing. I don't want tab titles in Times New Roman (at least it reminds me of that).
Really? The tab titles on my firefox are very clearly a sans-serif font, and don't look at all out of place.
The problem with this approach is that the web has no guidelines whatsoever, beyond user-agent defaults. So each and every site does their own thing (whether 'good' or 'bad') and Apple (+ Google, etc.) decides to cherry-pick what is 'popular' or thought to 'look good', seemingly without thinking through the impact on usability. Or, possibly worse, they have considered the usability impact but deem the tradeoff worthwhile.
I'm glad they removed all the silly shadows, 3d effects and animations and defined more strict UI guidelines.
I don't need my OS to look like a Christmas tree.
The puzzle background runs contrary to the text labels and makes my eyes/mind jump while trying to read them.
The first button makes me think of Wikipedia, the second one of Facebook. It's precisely the kind of cognitive dissonance you get when you read the word "blue" written in red and are asked to name the color rather than the word (the Stroop effect).
The last button has embossed text for some reason, possibly in an attempt to make it readable in face of the noisy background, but in turn it makes it stand apart from the other buttons.
The whole theme of the design reminds me precisely of the design language of Mac OS X from its origins through Leopard. It's not bad per se, but people don't need as many not-so-subtle hints in the UI as they used to, and such hints now get perceived as distracting noise. It's not the zeitgeist anymore.
I count at least 2 unnecessary textures, 4(or 5?) different fonts and a color palette that makes no sense to me.
I know some people are enraged about how many "generic 3-column flat UI" websites there are, but give me one of those any time instead of something that can't make a decision over what unnecessary decorations to use around a button that's already cluttered with a background texture and 3d effect.