Skeuomorphic realism certainly didn't start with the iPhone. Anybody who has dabbled in music production knows it's been around since the 90s. Here are some pieces of software I've been using for the better part of the last decade:

https://farm1.staticflickr.com/16/92028519_6c7f4a6d50.jpg

http://www.migmusic.com/bin/pro52.jpg
Unlike apps for the iPhone, these virtual knobs and sliders don't benefit from a touch screen. No keybindings. No tabbing between them. Just reduced functionality and wasted screen real estate for the sake of looking "cool" to people with crappy taste.
For this reason, I've come to despise so-called realism in technology. Even the tried-and-true desktop metaphor seems like an outdated crutch for helping old people understand computers. How does imagining manila folders help the next generation of computer users understand directory structures when they've seen more computers in their time than file cabinets? It's time to can the metaphors and let computers be computers.
Ableton Live has toned down the realism considerably but kept the knob-based interface. When you interact with these on-screen knobs using a mouse or a touch screen, they seem skeuomorphic, but you can 'keybind' each one to a physical knob on your hardware (e.g. on your USB MIDI controller).
The 1:1 mapping between the hardware and on-screen knobs is a great design decision.
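That mapping is, at bottom, just a linear scaling of 7-bit MIDI Control Change values onto a parameter range. Here is a minimal sketch of the idea in Python; the `KnobMap` class and all its names are invented for illustration, and this is nothing like Ableton's actual API:

```python
def cc_to_param(cc_value, lo, hi):
    """Map a 7-bit MIDI CC value (0-127) linearly onto [lo, hi]."""
    if not 0 <= cc_value <= 127:
        raise ValueError("MIDI CC values are 7-bit: 0-127")
    return lo + (hi - lo) * cc_value / 127.0

class KnobMap:
    """Hypothetical table binding (channel, CC number) to a parameter setter."""

    def __init__(self):
        self.bindings = {}

    def bind(self, channel, cc_number, setter, lo, hi):
        self.bindings[(channel, cc_number)] = (setter, lo, hi)

    def handle(self, channel, cc_number, cc_value):
        entry = self.bindings.get((channel, cc_number))
        if entry is None:
            return  # unbound knob: ignore the message
        setter, lo, hi = entry
        setter(cc_to_param(cc_value, lo, hi))
```

So turning hardware knob CC 74 fully clockwise (value 127) would drive a bound filter-cutoff parameter to the top of its range; the 1:1 feel comes from the fact that one physical control owns exactly one on-screen control.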
> Unlike apps for the iPhone, these virtual knobs and sliders don't benefit from a touch screen. No keybindings. No tabbing between them. Just reduced functionality and wasted screen real estate for the sake of looking "cool" to people with crappy taste.
I strongly disagree; where apps are emulating the sounds of particular pieces of musical hardware (which plugins frequently are) there's a distinct plus in having the same layout as the emulated device.
While there's a lot of value in providing more direct controls in this context (eg envelopes, EQ curves, and so forth), many of these are not ideal for performance (as opposed to composition) because they either constrict the performer by requiring use of a mouse, or provide insufficient tactile feedback because of the limitations of glass. Despite the limitations of abstraction, knobs, sliders, and buttons still work incredibly well for physical performance - hence the popularity of generic MIDI controllers which map to virtual controls in the software plugins.
Now it's also true that one could use physical controls to manage the sort of manipulations that are hard to abstract in hardware while being relatively easy to do in software, such as additive or granular synthesis; and there have been a few applications that combine excellent sonic potential with excellent UX (such as Borderlands Granular [1] or AudioGL [2]), but making the most of these requires a good deal of domain knowledge that makes them inaccessible to other musicians.
I eventually (mostly) stopped using software plugins, but not because of dissatisfaction with the UX paradigm; I just went back to using hardware. I like having ~100 physical controls, plus my deck looks like the Enterprise :-)
PS in general I hate skeuomorphism and strongly prefer flat design... but I think there's much more justification for it in domains like this, not least because one is often blending a mix of old and new technology in music production; it's not unusual to combine digital audio manipulation with analog or acoustic sources and treatments.
It is. The first hour or so is sort of tough to get with because the interface is so minimal, but it's very powerful. He's financing development through the beta and although I don't do much composition on the computer now I think I'm going to buy a license from him because it's so different from anything else out there. It sounds filthy, and I mean that in a positive way.
The author seems to confuse UI patterns with skeuomorphism, and both of you overlook the many years of UX/UI evolution behind the original device. Skeuomorphism is there to make an interface seem familiar. UI patterns are often repeated because previous layouts are optimal.
As for folders it's simple. They look like folders today because early folder icons looked like folders and that's what people look for to find them.
Can you clarify? I would say some UI patterns are skeuomorphic (on/off toggle switch) while some aren't (dropdown menus). But I don't see where there's a confusion between the two?
An on / off switch has two states and the current state should be clear and obvious. That's a UI pattern that applies to both real world switches and on-screen switches.
Making an on-screen switch look like a light switch, or a metal toggle switch from an old piece of equipment - that's skeuomorphism.
Actually, when I wrote that "it all started with the iPhone" I meant the current trend of realistic toggles, checkboxes, wood textures, etc. in mobile and web apps.
I completely agree that skeuomorphism and realism are nothing new, but I do think the iPhone gave them a newfound popularity among designers.
But at least with music interfaces the levels are quite different. As a long-time user of music UIs, I can say that pianos tend to have glossy white and black keys, knobs may have shading, and there may be fake screws in the faceplates. Those elements are meant to look like and mimic their physical counterparts.
There is no debate between skeuomorphism and flat design. There are people who think that there should be a debate and thus spend 5 pages spewing personal opinion, but in the world of business and making products there is no debate.
There is nothing wrong with either, and both can be overdone. The reality is you have to meet somewhere in the middle, and most importantly, you have to do user testing for your specific application. Reading every blog post ever written about skeuomorphism vs flat design isn't going to help you build a more usable product, or a product that sells, for that matter.
And for god's sake there's no such thing as "visual realism". The Internet and mobile phone apps are "real". This is what skeuomorphism is: https://en.wikipedia.org/wiki/Skeuomorphism . Anytime anyone feels compelled to define it, don't. Just link to the wikipedia page.
I think it's a great first picture for the article as it shows the concept existed before and has a meaning expanded past Internet related technologies.
You don't just design for a medium, you design for a culture. If you don't understand the cultural context of the design elements and terms you're using, it's going to show in your work.
Seems to me like people are agonising over skeuomorphic details as basic as using drop shadows and gradients to make a button look 'real'.
In recent months I've come to realise that (excluding so-called 'flat' design) such processes have sublimated and that we have actually converged on a new visual language for communicating a control's affordance.
Let's look at the basic button: we no longer make a button look like a button, per se, so much as we simply add contrast to the control's edges (according to taste), and perhaps overlay a low spatial frequency over the entire control (otherwise known as a 'gradient').
Then, when a user activates the control, we rotate the phase of the control's spatial frequencies through 180º to communicate its state (equivalent to reversing all shadows and flipping the gradient).
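The 'phase rotation' described above can be made concrete: the pressed state is just the resting state with the gradient stops reversed and the shadow offset negated. A toy sketch in Python, where the dictionary-based style representation is entirely invented for illustration:

```python
def pressed_style(style):
    """Derive the 'active' look from the resting look: reverse the
    gradient and mirror the shadow offset, i.e. the 180-degree flip
    of the control's spatial frequencies."""
    return {
        # light-to-dark becomes dark-to-light
        "gradient": list(reversed(style["gradient"])),
        # the shadow jumps to the opposite side of the control
        "shadow_offset": (-style["shadow_offset"][0],
                          -style["shadow_offset"][1]),
    }

# A resting button: light top, dark bottom, shadow cast downwards.
button = {"gradient": ["#fafafa", "#d0d0d0"], "shadow_offset": (0, 2)}
```

Calling `pressed_style(button)` yields the inverted gradient and an upward shadow, which reads to the eye as the button sinking into the surface.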
I think flat design sucks because it effectively excludes you from using visual frequencies this way - it's a bit like asking someone to record a song, but they have to run everything through an extreme high-pass filter, or a bitcrush, or something (I dunno, someone help me out with this metaphor here).
If you reversed all shadows and gradients, wouldn't your active control's lighting become completely inconsistent with the rest of your interface? Or did I misunderstand?
But you're right that flat design takes away some of these tools. That's why it's important to look at each situation as its own thing.
For example, a big call-to-action button on a homepage cannot be toggled and lives by itself, so making it flat works. On the other hand, making an "on/off" toggle flat can potentially create a lot of ambiguity.
But even then, it's up to the designer to find workarounds and new solutions, such as http://drbl.in/giHY
You're probably right, I'm still trying to figure out how to express this idea properly.
I like the good old-fashioned red and green in your example, but I feel any toggleable control should convey two independent state axes: unpressed/pressed and on/off (e.g. in Cocoa Touch, UIButton can be highlighted, and/or selected), which is typically difficult to do with flat shapes and colour alone. If you can do it, awesome.
Now that computers are ubiquitous in our culture, I think people have forgotten how non-intuitive graphical user interfaces can be. Twenty years ago, an "introduction to computers" class or video would focus on how to interact with menus, open and close windows, and how to use a mouse with more than one button. If you want a good laugh, search YouTube for introductory videos to Windows 95.
I've found that people who are uncomfortable with computers have problems with flat design, often because it is harder for them to find buttons. It seems like a trivial thing to people who are comfortable surfing the web, but for those who are not, it makes for a frustrating experience.
On another note, I found it curious that the author didn't mention Microsoft Bob and the other contemporary "real object" graphical shells.
Most notoriously, one controlled the volume not with a slider element, but with a rotating dial that appeared to require a circular motion to operate, not an easy feat with a mouse; only with experimentation did one learn that a linear motion also worked.
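The two drag styles differ only in how mouse movement becomes a value change, which is why many later knob widgets support both at once. A rough sketch, with invented parameter names:

```python
import math

def angular_delta(cx, cy, x0, y0, x1, y1):
    """Value change from a circular drag: the angle swept around the
    knob's centre (cx, cy), as a fraction of a full turn. This is the
    motion the dial appeared to demand."""
    a0 = math.atan2(y0 - cy, x0 - cx)
    a1 = math.atan2(y1 - cy, x1 - cx)
    return (a1 - a0) / (2 * math.pi)

def linear_delta(y0, y1, sensitivity=0.005):
    """Value change from a linear drag: vertical motion only, so the
    user can just pull up or down instead of circling the knob."""
    return (y0 - y1) * sensitivity
```

The linear mapping is what made the dial usable with a mouse, but nothing in the dial's appearance advertised it, which is exactly the skeuomorphism complaint.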
> one of the earliest Apple forays into both skeumorphism and what the post terms "realist visual design"
You're not serious, are you? Maybe you should look back to day 1 of the Macintosh:
Trash Can, Files, Folders, Desk Accessories, UI controls such as sliders and buttons, and almost every single icon that existed in those days. They all tried to model the real world and were critical in the success of the Macintosh.
Basically it was a good exercise in responsive design, and was also a chance for me to try out publishing a book on Amazon.
And since this is just static HTML, it was also nice to write something completely free of the constraints of a blog layout and see how the content shaped the design.
In fact, I'm now thinking I might reuse that layout for my blog itself.
It's been around for a while; Jason Santa Maria probably did it most famously with his blog way back when, and more recent CMSs like Visual Idiot's "Anchor" were intentionally created to let people design for the post rather than for the site.
Is anyone else entertained by the fact we finally have decent support for rounded corners for web design, and everyone is moving towards right angles? :)
I'm not.. moving towards right angles, that is :P (okay, to be honest, I find myself toning the border radius down after the initial excitement, but I still find it useful)
We see this curve with most new software tools: we get them and start using them for everything, because we couldn't before. Then we realise this is bad, and stop using them at all. Finally a balance is struck.
Only in nanotechnology can we create a rectangle with a single molecule defining the perfect corner (excuse the hyperbole). That, and on digital displays, where a single pixel can occupy that position. In nearly every other manufactured object the corner has at least a small rounding.
"Flat design also forces you to really care about typography and layout, two areas where web design has traditionally lagged behind its more established print cousin.
And on the mobile side, flat design can make it easier to focus on animation and interaction design, as apps like Letterpress and Clear have shown us."
This bit is a good summary. I think flat design on mobile really frees designers up to look into animation and interaction design. Letterpress definitely proves that. It also frees designers up from focusing on detailed effects and stylized UI. Focusing on typography and layout will create much better and more usable applications.
I disagree with the general notion that 'skeuomorphism is bad'. It is just harder to focus on the texture and the other subtle parts of it without losing focus on the usability, transitions and flow.
In Metro's flat design it becomes easier to focus on typography and interactions, which makes things easier for designers. I think both styles are equally exploitable by a good designer.
I'm certainly not saying that skeuomorphism is bad, and I don't think anybody who knows what they're talking about is saying that. It's also important to keep in mind that skeuomorphism and realism are two different things.
But it's definitely true that if you do decide to go the route of realism, you impose a lot of constraints on your design that will A) take up a lot of your time and effort and B) close the door to a lot of options because they would break realism.
I didn't mean that you implied it. I meant the general assumption; the Google designers especially discourage you from being skeuomorphic. They call it bad.
I agree with your point completely.
I recently read three books: one on the Kindle iOS app, archive.org's PDF scan of a real book, and finally, a real, printed-on-paper book.
I used to dislike skeuomorphism as exemplified by Apple faux leather. So it came as a surprise when I found I enjoyed the PDF scan much more than the Kindle version. (The paper version came out way on top, because it was an old book much enjoyed by bookworms of human and insect types, and was a pleasure to hold and smell as well as read)
I now appreciate that natural variations and noise inherent in paper and other real-world materials have a calming aesthetic effect, while ultra-Spartan black-on-white is perhaps _too_ antiseptic for comfortable human consumption.
The design tension then, is between
- being true to the medium, in the sense that the design should do its job within the abilities and limitations of the medium, rather than gratuitously imitating and importing look/feel from previous mediums just for the sake of familiarity.
- being human, respecting the human aesthetic sense, honed over millennia of exposure to fractal and noisy nature.
Extreme skeuomorphism results in campy, tacky faux leather. Extreme minimalism results in bland, "inhuman" interfaces. At the golden mean lie things like "noisy" linen backgrounds and subtle shadows, which assuage the human need for variation without imitating real objects or grandfathering in their functional limitations.
I agree with the author - Google has found its design feet, and they're not bad!
This generalisation of interface visual styles as either 'flat' or 'skeuomorphic/not-flat' concerns me.
I've experienced interfaces both good and bad that sit at either ends of the spectrum. It's hard to say which is better than the other because in reality most interfaces seem to land somewhere in the middle.
I find the whole debate rather shallow. As designers we should be educating others that a style is the result of a variety of factors such as branding, fashion, originality, time constraints, content, function, hardware, software etc etc.
I've already been asked by clients for "flat design" and it makes me cringe every time.
Soulver seems mindblowing; I wish the article included more examples of radical divergence from dominant UI trends to reach something even more usable.
Mostly I wish Soulver was available on Android or PC...
The comparison is blatantly unfair, though: the first two examples are iPhone apps that require large touch targets, while Soulver, an OS X app, requires none. A Soulver-like app for iPhone would benefit from history and extended functionality, but for optimal usability, it would still need a screen mostly full of buttons.
The normal on-screen keyboard (or physical keyboard, for the few phones that still have one) works just fine for text input to an app like Soulver. There's no need for huge dedicated buttons when normal text input does the job.
This led to the development of several calculator replacements that take advantage of the fact that a desktop has a perfectly good keyboard hooked up, can afford to show a good history, etc. I wrote one (which is now being maintained and ported to QML by someone else), but the most popular ones seemed to be SpeedCrunch (http://code.google.com/p/speedcrunch/) and Qalculate (which KDE's KRunner supports for inline expression evaluation nowadays, IIRC).
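The core of that inline expression evaluation can be sketched in a few lines. This is nothing like SpeedCrunch's or Qalculate's actual parsers, just the general idea of walking a parsed expression tree with a whitelist of arithmetic operators so arbitrary code can't run:

```python
import ast
import operator

# Whitelisted arithmetic operations; anything else is rejected.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def evaluate(expr):
    """Evaluate a plain arithmetic expression like '2 * (3 + 4)'."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))
```

Wire something like this to a text field that re-evaluates on every keystroke and you have the Soulver/KRunner interaction: a keyboard-first calculator with no on-screen buttons at all.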
No one wants to be using the same app for 10 years. Or the same UI guidelines for 10 years. So naturally, there's always going to be some direction that people tend to flock towards.
I don't think it's always primarily for the better, although that is usually the goal. It's better and different. I'm sure in the vast open space of design, there are quite a lot of different equally good directions. Most of them just require people growing used to them.
That's why it seems we've come full circle in design. Simple flat colors, sharp lines -> fancy transparent effects -> complex -> back to minimalistic yet functional.
I would say that designers and early adopters like change, people only like improvements. My mom is still using her 10-year-old XP machine and would hate it if it did change just for the sake of change. My Mac running TextMate also looks pretty much the same as my iBook did in 2005. Or look at the public reaction to Windows 7 vs Windows 8, or the way Apple keeps basic designs the same (only geeks can tell an iPad 1 from an iPad 4 on the subway). Apple's only products that are still changed for the sake of change are cheap MP3 players that people buy on a whim.
Interesting, this might be where one of the major issues comes from. "We" like change; I suspect more casual users do not.
I'm trying to think of a good example -- I probably catch a plane every 18 months or so, and I'm glad that after a few attempts, I feel I've now mastered the process of getting through my local airport. At this point, I'd be annoyed if they made any changes to the process, throwing me back into a position of knowing nothing. If in the long term it made things better I would cope; if it was just a change which might end up being reversed in another 10 years, I'd get really annoyed.
What surprises me is that many people see Microsoft's Flat/Metro/whatever design aesthetic as something of a recent development. Just look at the UI of essentially anything from Microsoft around the turn of the century (Encarta, Office XP, what was called .NET UI then and so on, and importantly: Pocket PC). One would say that it is a return to old roots after a short detour into the land of gradients and semi-realistic textures.
Regarding the iPhone's role in all this: in the early 2000s, Apple's OS X HIG used to make the distinction somewhat clear when to use the "more" skeuomorphic/realist brushed metal style and when to stick to the core Aqua interface with its realist textures, yet abstract meaning. The developer was supposed to ask herself/himself the question that this post is invoking as well: is this specific interaction more easily understood if it maps to a physical object? They used QuickTime as an example, since it related to a real world object: a VHS deck/DVD player.
Then Apple started breaking with its own HIG. The Finder was the prime example, once it adopted brushed metal.
The main shift, however, came not with the iPhone but two years earlier: Dashboard widgets were encouraged to have extreme visual richness, independent of whether there was a real world equivalent or not. Take the Weather widget, for instance. Previous to the slight adjustments in iOS6, the Weather app spoke the exact same visual language as the 2005 widget. You even got the grey linen when you flipped widgets to get to their settings.
Of course, iOS devices were the factor with enough impact to proliferate this style into so many designers' visual language, but this distinction in the timeline slightly takes apart the nice logic behind Apple's decision to push this type of design. Mobile devices had less to do with it than Apple's desire to create a wow factor in their UI. The touch target argument doesn't apply to widgets, neither does the "device becomes the software" point, and of course Apple was completely fine with having 100% inconsistent, heavily colored, heavily textured interfaces be on screen side-by-side, since that was the whole point of putting these widgets together on a Dashboard: they all came and went at once.
Now, I am actually not saying this to criticize the style. In my opinion, it showed guts, curiosity, and showmanship. And it worked.
Nor do I want to attack the article - lots of great points in there. However, I thought it would be interesting to underline the typical Apple manner in which this stuff came to be.
(Of course, if anyone really wanted to stick to their guns, they could point out that Cupertino was already working on the iPhone at the time Dashboard came around. Maybe they were just giving the style a test run with full knowledge of its role in their future. The timelines only overlap sparsely from all we can tell, but it's possible. Still. That doesn't invalidate the wow-focus its implementation in Dashboard implies.)
For the people arguing over how long skeuomorphism has been around in software design, all based on different thresholds of "realism": any graphical interface you can think of has skeuomorphic properties to some extent. Even a bare terminal window is (if only for purely practical reasons) analogous to the strip of paper pushing out of old printer terminals.
The bare concepts of buttons, mouse pointers and icons are all analogous to physical objects, which makes sense from a usability perspective since people mostly relate to new things metaphorically.
The idea of shaded and highlighted buttons on a leather textured surface, however, makes no particular sense from a material or metaphorical perspective. I recognize it as an aesthetic choice more than anything, and an ugly one to boot.
I agreed with much of what the article said, and I've been leaning towards a more minimalistic approach recently.
A nice font, some decent shadow, and maybe a gradient with a border.
That being said, I'm a strong believer of "you want a design? Code it"
It's not enough to be able to sling it together in Photoshop; you have to learn the code to make it happen.
One thing this author didn't mention is the reduced computational effort of rendering flat design. Is there any data showing a meaningful difference in battery life?
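I don't have battery data either, but the difference in per-pixel work is easy to illustrate: a flat fill writes one constant per pixel, while even a simple vertical gradient has to blend a colour per row. A toy CPU sketch (this says nothing about GPU compositing or real-world battery life; buffer sizes and colours are arbitrary):

```python
W, H = 320, 240  # arbitrary toy framebuffer size

def flat_fill(color):
    """Flat design's best case: one constant per pixel."""
    return [color] * (W * H)

def gradient_fill(top, bottom):
    """A vertical gradient: blend a colour for each row, then fill it."""
    buf = []
    for y in range(H):
        t = y / (H - 1)  # 0.0 at the top row, 1.0 at the bottom
        row_color = tuple(round(a + (b - a) * t) for a, b in zip(top, bottom))
        buf.extend([row_color] * W)
    return buf
```

Measuring these with `time.perf_counter` shows the gradient costing noticeably more per frame, though on real hardware both are usually offloaded to the GPU, where the gap may be negligible.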