Maybe I can make a counter-point: a lot of these patterns are commonplace right now! And much more so than in whatever golden era we want to imagine existed long ago.
- Gestures in a lot of applications have made things more confusing by hiding functionality that you now need to stumble into to discover.
- Sound cues are used all over the place. Anyone who's ever worked in a kitchen hears the godforsaken ubereats alert sound in their nightmares.
- About ten minutes ago, I got startled by my phone deciding that the "you should stand up" vibration pattern should be three long BZZZZ-es... amplified by it sitting on my hollow-sounding printer.
- If another fucking god damn website asks me to chat with an AI agent in its stupid little floating chat bubble, only appearing AFTER I interact with the page so it's allowed to also make an annoying "chirp!" sound, I WILL become a chicken farmer in some remote forest eating only twigs, berries, and improperly-raised chicken eggs.
All of these things annoy me, and actively make me hate computers. A silent glass brick can go in my pocket because I know it's not going to bother me or beg me to talk out loud to it. If it were some sensory-overload distraction machine (which, by default, it is) it would find itself over the side of a bridge rather quickly. It's getting in the way of my human experience! The one where I'm the human, not the computer!!
Also, just to add to that because it's on my mind now: I think there's a ratchet effect to "UI that screams at you", or at least "UI that tries to tap into my senses". The more of it becomes commonplace, the more people expect to be able to annoy you via your devices.
It doesn't matter that I can force my phone's vibration motor to only output an anemic "buhhhh..." no matter what coefficient of bothersomeness some app sends to it. The person causing my phone to make that API call still expects the cacophony of pain to emit from it. We all become numb to how annoying this all is because it becomes the standard TO BE annoyed and distracted.
The uber eats sound is annoying because it conveys nothing except "whatever you're doing is unimportant!!!! PAY ATTENTION TO UBER!! UBER THINGS ARE HAPPENING!!!". There's a million other better ways to do that, so *I* find the information. *I* go to the stupid glass brick when *I* can take on a new order. But because we already set the expectation that the user is allowed to set off an alarm in any kitchen in the city for the low-low price of overpaying for food, the stupid glass brick tells ME when it's time to deal with it.
Spatial computing (like the example of the note-taking app) now introduces all the extra work of tidying up to digital notes. The computer wants me to sort my own notes now. It opens up the potential of being an e-slob for no reason other than my ability to make it just as messy as my desk.
I don't know why we would expect this even-more sensory-focused model of computing to not also ratchet up the stress and dread of being alive.
I'm 27 going on 95 I guess, just send me to the old folks home now lol
This is how I live, and I cannot comprehend how people live otherwise.
I have no work email or slack on my phone. The only notifications that appear on my lock screen are texts, calls, and when there’s a new crossword available. Seeing other people’s phones buzz every time they get an email, or every time their news apps have a new article… seems utterly insane to me. Why would you give every one of those the opportunity to demand your attention, without warning, at any time? How do you live like that?
News apps are baffling, I've worked in the business and couldn't comprehend why colleagues needed notifications for news and socials.
Can't see how it's possible to be effective whilst so reactive in a professional context. For a person whose job doesn't involve keeping tabs on the world it makes even less sense. Damaging and stressful.
I think the 24-hour news cycle in general has gotten too obsessed with "fast" and "breaking" over "news worthy" and "attention worthy". I've realized as much as anything that what I want is slow news and that the old models were possibly best: daily morning paper, maybe an afternoon rag. That's it. Real news doesn't seem to me to actually move faster than that, we've just sort of let "entertainment" and "anxiety" and "engagement clickbait" substitute for "news worthy" for long enough that people think they need the constant attention to it.
I like to disable most notifications, but I'm confused about the Uber eats thing. If I'm a restaurant and I take orders via Uber eats, I probably want to be notified about new orders. That seems quite important to me. It's not the kind of thing you can leisurely discover an hour or two later. It's on the same level of importance as a customer walking into the restaurant. I can't really imagine many things more important than that.
Similarly if I'm on the receiving end I need to know that the delivery is here. I want my food and the courier wants to get going. I live in an apartment building, so I like being downstairs ready to pick up my order when they arrive.
So that's something I want to get notified about. I don't use Uber eats but I use Wolt, where I can monitor the couriers location and get a notification when they're close. I don't turn those off because these are actually important in my eyes. There's a real person potentially waiting for me right now.
Snapchat on the other hand gets muted immediately. Whoever came up with notifications for someone typing should be locked up. Not to mention all the bullshit like "some influencer or whatever posted something" notifications. I barely want to know when my friends sent me something.
I like Apple's growing approach of distinguishing "Time Sensitive" notifications and "Live Activity" notifications from all the other types of notifications from an app.
An Uber Eats delivery is a "Live Activity" with a tracker that is sticky on the lock screen (and Apple Watch "activity area"). That makes a lot of sense. "Deals" and other random garbage Uber Eats wants to send to me aren't "Live Activity" so can be filed to later/slower delivery.
A buzzer notification from my condo buildings front door is a "Time Sensitive" notification that gets priority. But that same app's "weekly neighbor updates" isn't and can be filtered differently. Those things are great.
I can send most notifications to "Notification Summaries", which give me digests of all the notifications in roughly four-hour chunks. There are very few things that I need faster than that (esp. when apps properly support "Time Sensitive" and "Live Activities").
Of course, with great power comes great responsibility. I've caught LinkedIn abusing "Time Sensitive" notifications (because of course it would, LinkedIn loves notification spam too much not to) and entirely revoked its privilege to send them.
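From the developer's side, opting in is genuinely small. A minimal sketch, assuming the iOS 15+ UserNotifications API (the titles and text here are made up):

```swift
import UserNotifications

// Minimal sketch (iOS 15+): the only thing separating a "Time Sensitive"
// notification from a regular one is its interruption level. The app also
// needs Apple's Time Sensitive entitlement, and the user can revoke the
// privilege per app, as described above.
let content = UNMutableNotificationContent()
content.title = "Front door"
content.body = "Someone is buzzing unit 4B."
content.interruptionLevel = .timeSensitive // others: .passive, .active, .critical

let request = UNNotificationRequest(
    identifier: UUID().uuidString,
    content: content,
    trigger: nil // nil trigger = deliver immediately
)
UNUserNotificationCenter.current().add(request)
```

The cheapness of that one line is exactly why the per-app revoke switch matters.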
That sounds pretty great, especially if it's handled from the developer's side and we don't have to customize it.
I know I can customize notifications for my apps, but when I go to look, an app has like 50 different types, and I just don't care enough to deal with that, so I mute the whole app instead.
When iOS had a couple gestures to get away from needing physical buttons, things were pretty good.
However once you realize that you can add new gestures without having to defend adding a physical or screen real estate button, it takes a lot of discipline to avoid adding more. I like to think that Steve would have told most of their people to fuck off and we’d have one or two new gestures now, instead of twice as many. They would have found some other way.
For me personally a similar thing was when Ableton Live transitioned from having a more "direct" interface to having popup menus for absolutely everything, and it took time for me to adapt to it for live performances. To be fair I never really adapted and just moved to something else.
Rather than coming up with creative solutions like they did before they just kept adding things to those popup menus. The app went from magic (by enabling me to perform live effortlessly) to frankly difficult (by having the interface become difficult to memorize and getting in my way).
Coincidentally, that was also when they started racking up so many bugs that they needed a couple of years without new features just to clean them up.
I think it’s a damn shame that so few apps have made serious use of dual screen modes.
If you’re doing something like a sound mixer you should be able to move more things to a second screen. Run the main app on your new tablet and the ancillary functions on your old one. Or a small monitor if it’s view only.
Even Google ended up adding more stuff to their homepage in the end.
For a long time they tried to keep it super minimal, and it still is, but there are footer links, a signed-in header, and a whole bunch of other links as well.
Mind you, in the early days pages used to have hundreds if not thousands of text links all over the place. The only sites that do this now are the hardcore conspiracy sites where the author just adds several new links a day.
So in this dimension at least web UIs have changed for the better.
When divisions are rewarded with prestige, and that prestige is denominated in public visibility, you either need a site per division or a very, very busy homepage, where the links aren’t organized by user need but by political clout.
It’s almost like your aunt and uncle who always bicker at family reunions. Keep that drama shit out of public spaces.
This is interesting: chickens mostly sleep at night, and the roosters crow at daybreak. I wouldn't call it calm or considerate, but at least they know when to keep it down!
Yes. The only exceptions I can think of are intrusions in their enclosure and surprising events (fireworks etc), in which case it's probably justified.
Now, "day break" can mean 5AM during the summer for instance. It can be tough at times.
If there are multiple roosters on a farm, living in the same coop or different coops within hearing distance, they will trigger each other to crow earlier, like it's a competition.
Especially young roosters will try to establish themselves by being first. The big old rooster who knows he is the rooster in the henhouse can afford to wait, with his big testes energy.
Yeah, my roosters get started 1-2 hours before dawn, and they'll crow now and then throughout the day for various reasons, usually something like, "Hey, stay away from my hens, buddy."
Hens are pretty quiet. They'll do some clucking after they lay an egg or when one of them finds a worm, but you'd have to be a very sensitive neighbor to be bothered by their noise.
This is a beautifully designed and illustrated page.
But I couldn't disagree more with the premise. It complains that computers have been reduced from physical, tactile, hulking mainframes to neutered generic text interfaces, but I've watched the opposite happen over the past two decades.
My phone is physical -- I swipe, pinch, and tap. It buzzes and dings and flashes. I squeeze my AirPods, I pay by holding my wrist up to a sensor, I tilt my iPad to play video games and draw on it with a pencil.
Everything the article complains about, we've already solved. All of its suggestions, we already have. It wants "multi-modality" but we already have that too -- I can change the volume on my iPhone with physical buttons while I dictate. I can listen to music while I scroll.
Our interfaces haven't lost their senses. Our interfaces have more senses than they've ever had before.
> This is a beautifully designed and illustrated page.
Hard disagree. It's incredibly distracting; the constant movement of text and the appearance and disappearance of images make it very difficult to concentrate on the message.
It screams 'look at me, I'm really smart with all these neat effects'. But you know what interface for articles like this has served us pretty well for > 1000 years? Just the words. Please, just display the words rather than this conceit.
In 1985, after a year of finding that pretty but unlabeled icons confused customers, the Apple human interface group took on the motto "A word is worth a thousand pictures."
This is advice many modern designers need to hear. I don't like seeing an icon, having no idea what it does without clicking it, and having to guess what it might mean, when a label could easily fit next to the icon (or replace it) and be a vastly better UX. But "looking good" is more important to most designers.
I've seen some contexts where this is deliberate. IKEA instructions would be one; I've also seen it in some board games, where things like cards will use icons so only the instruction book needs to be translated.
But in UIs you usually have to have some text equivalent somewhere, on hover or long-press or in a menu or just as text for screenreader users, so you don't generally get to avoid translation even if you take visible labels away.
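As a concrete illustration, here's a minimal SwiftUI sketch of an icon-only control that still has to carry its word somewhere (the action is hypothetical):

```swift
import SwiftUI

// Minimal sketch: an icon-only button that still ships a text equivalent,
// read aloud by VoiceOver and shown as a hover tooltip on macOS.
struct ShareButton: View {
    var body: some View {
        Button(action: { /* hypothetical share action */ }) {
            Image(systemName: "square.and.arrow.up")
        }
        .accessibilityLabel("Share")
        .help("Share") // tooltip on macOS
    }
}
```

So the word gets written either way; the only question is whether sighted users get to see it too.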
The images are also irritating and jarring when you notice that the bokeh is fake and that they're all AI generated (and AI generated images have really headache inducing depth of field effects).
Thanks for pointing this out. I'd never thought of it like that, but I guess you're right; I'd sensed something off in those kinds of images, beyond the sugar rush of saturated colors and the overall soft blur that always seem to be there with AI images.
That struck me as well. Whatever interesting message the author may have been trying to convey was lost on me, and probably many others, because of the visual distractions. Visual distractions are precisely the problem that we're facing with modern interfaces.
I'm currently stuck on LTE due to a power outage. The page is horrible to try to read due to most of the images being either in the process of being loaded or not loaded at all.
If it were a book I wouldn't buy it. The theme does seem to fit a kids' book.
What I find hilarious is that I can't tell if this took years to draw and compose or if it was five minutes' worth of prompts. Did they knit everything? I sometimes see my art on low-effort articles; I'm 99% sure they think it's AI.
The only detail I really liked was how the arrow representing the computer communicating to the user has a ring on the back so that the user can be roped in like a whale.
I have great news for you. The article is also perfectly structured, which means it shows flawlessly on reader mode.
Reader mode is a standard feature on all major browsers, on both desktop and mobile. Given you're so vocal about how articles should work by just "displaying the words", I'd suggest that you acquaint yourself with the one feature that does exactly that.
Thanks to reader mode, you get to concentrate on the message. And we get to keep our joy.
I have bad news for you. This is cut-and-paste directly from reader mode in Firefox mobile.
"Then came terminals and command lines. Physical knobs turned into typed commands—more powerful, but our digital world became less embodied. Then came terminals and command lines. Physical knobs turned into typed commands—more powerful, but our digital world became less embodied. Then came terminals and command lines. Physical knobs turned into typed commands—more powerful, but our digital world became less embodied. Then came terminals and command lines. Physical knobs turned into typed commands—more powerful, but our digital world became less embodied. Then came terminals and command lines. Physical knobs turned into typed commands—more powerful, but our digital world became less embodied. Then came terminals and command lines. Physical knobs turned into typed commands—more powerful, but our digital world became less embodied."
I stopped reading after that. There are also missing full stops, which means it's difficult to understand what's happening.
Reader seems to be broken on iOS Safari: after the first few paragraphs, sentences start repeating eight or so times in a row.
Plus the longer paragraphs, confined to the height of their parent image, are cut off on my iPhone mini, leading to sections reading e.g.:
> controls with GUIs—graphical user interfaces. We skeumorphed the heck out of our screens, with digital switches, flat sliders, and folder icons. But we kept some of the the functionality in the physical world, with slots to stick disks into and big
This complaint is like visiting a flower garden and complaining that it is an inefficient use of space because it doesn't grow enough root vegetables.
The style and emotional feeling of the page is the message. An article consisting of only words is not an "article like this", and if you are starting from that premise you already totally missed the author's point.
They might have gotten the point but disagreed. In particular, if the style and feeling of the page is the message, and they are saying they don’t like the message and page feels bad… then, it seems like the premise was understood and rejected.
> The style and emotional feeling of the page is the message.
And for many people, that message is, "go away, this is not for you".
Which is a valid take if that's what the author intended, but generally speaking, when people take the time to pen manifestos, they expect them to be read and heeded.
The question is whether the text on the page is supposed to match the message. If it’s purely an artwork, then perhaps the text doesn’t matter - but that’s a bit confusing. The problem with the site is that it’s making a claim in its text and, for many people, refuting that claim with its presentation.
Our interfaces have more modalities than before, but they are disconnected from both physical and emotional reality. Buzzes and dings and flashes are nothing like hearing a happy shout from a friend or feeling the 'clunk' of an actual motor starter engaging.
I totally agree with OP that the 'flat' visual style is appalling. (And gray-on-gray text is an obscenity.)
I don't want my computer's interface to be overtly "emotional". I don't want it to have the same effect as "a happy shout from a friend". I want it to be unobtrusive and functional, so that I can pay attention to the message my friend recorded with their happy shout that is actually real. And I prefer my cars quiet because I want to keep my focus on the environment.
And what do you mean, disconnected from physical reality? I listed the examples where I pay with my wrist, squeeze my earbuds, draw with a pencil. I also snap photos and take videos, record with voice memos, send a pin with my location. I track AirTags, I identify plants by pointing at them, I learn constellations by aiming at them. Computing is more connected to physical reality than ever.
The two best experiences I had with touch were the Sony WH-H900N [0] and Procreate on iPad. The headphones had a touch surface where a tap was play/pause, swiping up/down manipulated volume, and swiping left/right changed track. Thanks to the large surface, these actions were quick, easy, and natural.
Another good experience was shaking my phone (an Android Motorola) to turn the flashlight on and off. Another great natural movement is transferring the playing music from my iPhone to a HomePod by tapping the two together.
For almost everything else I loathe touch devices. While older devices may be clunky visually, they are far more ergonomic. Yes, my phone can do a lot of stuff, but the process for any specific task is always clunky, and companies go out of their way to block you from altering the UX.
> This is a beautifully designed and illustrated page.
The artwork on these is stunning. It's hard to imagine someone spending so much time and effort illustrating an article with so little content. (The other articles on that site are similarly well illustrated.)
If this article had been created five years ago, I would be downright impressed. Sadly, these days I default to assuming the artwork was generated by AI (even though I have no evidence of that).
It doesn't help that the author claims they "empower devs with AI" in their home page, and their older webpage from 2019 [0], while still very beautiful, isn't illustrated to the same standard.
I just wish authors were upfront that they generated the artwork with AI (with a little caption, footnote, something; the same kind of thing newspapers use to credit their photojournalists). I really have nothing against using AI for this kind of thing and regardless of whether AI was used, the author of the article for sure has a lot of artistic merit for the composition as a whole.
The article reads like a description of personal computing in the late 90s to early 2000s. It also reads very similarly to Apple’s early marketing around multitouch displays.
There have been a number of attempts at making screens create tactile bumps and provide direct feedback. None have worked yet, but they might eventually improve physical interaction somewhat, giving us buttons and switches and knobs in a programmable way that isn't hardware-specific to the task. We aren't there yet.
Having no tactile interaction with computers saddens me very much. A couple of decades ago, a colleague and I did a thought experiment on pixel-level tactile interfaces, imagining all the affordances that would provide, including of course for those who are sight-challenged. All humans have a very, very strong tactile aspect to their neurophysiology. Cortical Man shows that clearly.
It would be sad if in 10,000 years we evolved to lose our tactile senses.
I'll also add that buttons and switches and knobs are not all that tactile. They are a modern human creation.
This, by the way, is partly why MacBook trackpads are so good - they have excellent haptic feedback for clicking that is superior to most (all?) physical trackpads.
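For what it's worth, the software side of that haptic click is a deliberately tiny API. A minimal AppKit sketch:

```swift
import AppKit

// Minimal sketch: fire the Force Touch trackpad's haptic actuator.
// macOS only exposes a few canned patterns, e.g. for when a dragged
// object snaps into alignment.
NSHapticFeedbackManager.defaultPerformer.perform(
    .alignment,           // also: .generic, .levelChange
    performanceTime: .now
)
```

The restriction to a handful of patterns is arguably why it works: the clicks stay meaningful instead of becoming yet another notification channel.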
> buttons and switches and knobs are not all that tactile
But they are, though. You can touch them without pressing, and, once you get used to them, they have different textures.
Back in the ball-mouse days I had this vague notion of including some kind of togglable magnetic resistance on the mouse rollers, to make physical tactile ridges and divisions in the user interface. Probably completely impractical, but it would open up some interesting possibilities.
Beautiful? It looks like utter garbage to me. I really can't abide that twee visual style. The designer is trying way too hard and completely lost the plot.
Honestly using GenAI slop pictures to illustrate the article about the soullessness of modern computing clashes with the message in a way I don't think the author intended.
As far as GenAI goes, this ain't slop. I guess people wouldn't be so hostile to GenAI used as stock footage if it were of this quality and consistency. But this sort of output is hardly "type a prompt and press a button"; not sure what was used, but I imagine style transfer or LoRAs were involved, or at least a few rounds of prompt refinement.
> Our interfaces haven't lost their senses. Our interfaces have more senses than they've ever had before.
Hard disagree. Let's take a very simple example: Wikipedia[0]. It took way too long to build in a dark mode, and when they finally did, they offered the options "light", "dark", and "automatic". YET the default value is "light". WHY THE FUCK IS THERE AN AUTOMATIC IF THIS ISN'T THE DEFAULT!? Obvious stuff like this is everywhere.
I find a lot of interfaces INFURIATING. My car wants to do things with touch screens while I want to feel the controls, because I want to keep my eyes on the road. My iPhone won't capitalize the letter I, and will change not just the word I'm typing but the word previous to it, making swipe-style texting painful to use. Speaking of the iPhone, it's 2025 and there's no universal back. I still don't know how to exit the YouTube popup asking me to activate my free trial of premium, other than swiping closed the whole app and reopening[1]. Or I scroll through an app with threads (e.g. Twitter), move slightly left or right, and bam, I'm on a different tab; when I move back I'm not where I left off but somewhere completely new.
You may say "well that's a 'you' problem, I'm happy with the way things are" and my point is that humans are all different. There's no one size fits all. Maybe that swiping thing happens because our thumbs are different sizes or our phones are different sizes. Maybe you like light mode and don't open any websites with the lights off. But that difference is what makes us human. The problem is that things are converging to things that are bad for everyone. Design matters a lot and getting used to a design is very different than designing things around people. A well designed product needs no instructions (obviously not absolute), just see the "Norman Door."[2] We shit on backend developers for making shitty UIs (as a 'backend' person, I agree, this deserves criticism) but I don't think the front end people are at all concerned with design now a days either. There's a special irony with Apple, considering the magic was the interaction between Jobs and Woz. The magic is when the good backend meets good frontend. Yet now we're just doing both like it is a competition of who can create the worst thing the fastest.
I don't think these are so different. The reason a lot of UIs feel like they've lost their minds is that they are not adapting to humans, which is exactly the argument for using more senses. I mention Norman Doors because this is that intersection. I could definitely have communicated better, but I think these things are fundamentally related.
Disagree: Our malaise is not boredom from simplicity, but fatigue from inconsistency.
"Flat" interfaces aren't bad because they lack an ineffable whimsy of embodied human experience, they're bad because they threw out the baby the bathwater, tossing decades of conventions and hard-learned accessibility lessons in the name of supporting a touchscreen.
Compared to 20 years ago, everyone is shipping half-website-half-desktop abominations (e.g. with Electron[0]) and reinventing UX wheels. Too many apps/sites impose "their own look" instead of following what the user has already learned. [1] Often users must guess whether certain things are even clickable, how a certain toggle looks when enabled, whether certain settings are a single-select option or a multi-select tickbox... And memorize those rules with per-app or per-website granularity.
> You can talk while clicking, listen while reading, look at an image while spinning a knob, gesture while talking.
Those are all things people do after "make computer do what I want" has become automatic.
Contrast that with, for example, trying to find the 21st item you just added inside a list that is vertically limited to 20, where the custom grey-on-grey scrollbar is always hidden unless you're currently hovering the mouse in exactly the right 5-pixel-wide strip between two columns of the interface.
[1] That may be due to deliberate "remember us" branding, whatever was fastest to ship, things needing to look new to get somebody a promotion, a desire to create a switching cost so current users feel bad trying to use a competitor's product... Or because someone like the blog-poster has misguidedly tried to make a "richer experience."
To add insult to injury, not only is everything inconsistent thanks to incessant wheel reinvention, nearly all of the reinvented wheels are halfassed at best and missing functionality compared to what they’re replacing. When a company writes a new widget to match their theme they only build the bare minimum necessary to visually match mockups. UI controls have become vapid and devoid of function.
Agree. The constant UI reinvention of late is super strange to me. Companies want to save money, yet in simple apps most developer time is spent on exactly this.
I remember 20-25 years ago mostly using Windows widgets to make enterprise apps. They were fast to make, fast to run, and users back then knew how to use them. Didn't look the best, but at least they were consistent.
For the next 5 years we sort of tried to do our best, but most things were still sort of standard-ish.
Then for about 5 years, things like Bootstrap, Material, etc. dominated. It was nothing special, but at least consistent between apps.
But in the last 10 years pretty much every company I worked at had a custom UI built ENTIRELY from scratch by a designer and a small army of developers to implement it. It looks "the same but different", in an uncanny-valley way.
I honestly feel like this is the worst possible use of frontend developers, period. Not only from a financial perspective but also from an end result.
If most software followed DDD, UI would be a generic domain. But they bind themselves to Electron and bring the whole kitchen sink with them. And then a note app brings along an entire audio and video ecosystem, among other things.
Before, software only needed to be useful. Now the C-suite thinks it needs to be engaging and isolating, like a casino.
The "great flattening" fashion has made computers harder to use for many people.
Older UIs separated computer output from where user input is expected. Compare Windows 7 or 2000 to Windows 10 or 11 and the older Windows versions win. It's the same with Android.
UX designers follow fashion as much as the garment manufacturers on the catwalks of Milan, even if the fashion is uncomfortable.
Just because something is new, that doesn't mean it is better.
I'm currently using Gnome and their UI may not be the most beautiful or complete, but they've gone all in on consistency. I don't mind software like Blender, Audacity, and others having their own design systems as their domain is much more complicated. But a lot of software only needs a few controls and the native ones should suffice.
I don't think it's a coincidence that out of the Linux DE ecosystems, GNOME has probably the biggest presence in little third-party utilities made to match the environment. The DE itself is quite flawed in my opinion, but its consistent and opinionated design system catches the eyes of devs and would-be devs and motivates them to build things.
A similar effect, I believe, is what's been largely responsible for the healthy boutique indieware scene on macOS too.
I think what motivates people to patch over Gnome deficiencies is its position as the de facto standard "enterprise" DE, where you basically have no choice but to use it.
There’s a handful of third party apps that serve that function, but that’s really more the domain of GNOME shell extensions.
What I’m talking about are apps built not because there weren’t serviceable options in their categories prior, but because there weren’t any that made an effort to be at unity with the larger GNOME desktop. Apps like Errands[0], Folio[1], Shortwave[2], and Newsflash[3].
There is also a big elephant in the room that we are sort of ignoring with the whole AI stuff: when flat design came about, a lot of designers who weren't really good suddenly had jobs, because everybody could put a flat thing on the page and call it a "button".
Good designers still exist but they are simply crowded out.
The same is happening today with AI-generated apps. In another 10 years, most front ends will be filled with them. Good design and good applications will still be around, but they will be crowded out.
And you see this in almost all other industries as well. For example, architecture has simply gotten worse. A building from today looks much, much worse than, say, a building from even 300 years back.
So in the near future we will simply have worse, worse-performing software that breaks down all the time, and we will all suffer, but there is no way out of this.
> A building from today looks much, much worse than let's say a building from even 300 years back
You don’t even have to go back that far. They still knew how to build decent buildings just 80-90 years ago. I think it all kinda changed after the Second World War. Maybe they needed to conserve money and build as much as possible?
I'm not sure how accurate it was in its argument for causality, but I recently saw a video arguing that architecture worsened after the invention of caulk as a building sealant. With the ability to fill all gaps easily, buildings no longer needed to be designed and built with overlapping layers to prevent water and other elements getting in. This significantly reduced complexity and led to very simplified buildings.
The argument is that pre-caulk, the aesthetics of a building came in part from design requirements. Those design requirements disappeared post-caulk, since you can just fill the gap between any two panels with caulk. And as with all things, we seem to regress to the lowest common denominator.
These beautiful images (AI generated, perhaps?) make for a great showcase, but I find myself disagreeing with almost everything here - except for the core desire to make interfaces more engaging.
The real challenge is that UI designs are ultimately constrained by their hardware. This means major interface innovations often limit where the software can actually be used.
Take tablet-optimized apps, for instance. They can fully embrace touch interaction, but this leaves desktop-only users completely out of the loop.
So unfortunately, truly revolutionary interfaces tend to require equally revolutionary hardware to match.
What are those floating letters? Does the keyboard have 3 rows of keys, or 4? What's going on near where the esc key should be? Why does the screen look like the back of a park bench?
> The real challenge is that UI designs are ultimately constrained by their hardware.
Sure, but part of designing a product is recognizing this and the author seems to be making that point. Surely they aren't saying you should have sound and haptics in devices with no speakers or motors. Certainly I think the author would argue that cars should have physical knobs and not touch screens.
The problem is what you mean by "UI".
UI means "User Interface". It does not mean "Software defined User Interface".
User interfaces are composed of one or more layers, including a human–machine interface (HMI) that typically interfaces machines with physical input hardware (such as keyboards, mice, or game pads) and output hardware (such as computer monitors, speakers, and printers).
https://en.wikipedia.org/wiki/User_interface
> truly revolutionary interfaces tend to require equally revolutionary hardware to match
The prime examples given were about mixing and matching capabilities that most hardware already has. Most computers and tablets already have a microphone and some kind of tactile input (touch or keyboard).
So, I wouldn’t say that you’re wrong in tying UI innovations to hardware, but it feels like perhaps you didn’t read the whole article. We can innovate by remixing existing functionality without having to wait on entirely new paradigms being adopted and universally available.
Fantastic design. Normally pages with funky scrolling behavior and boxes whizzing all over the place and all that are annoying but it really works here. Not to mention the adorable visuals.
That being said I think it misses what made the old physical interfaces so appealing and useful. It's not that there's something inherently superior about multimodality; it's that physical interfaces are permanent, with defined edges and definite shape. Unlike screens you know exactly what's where, building muscle memory every time you use it. There are no hidden menus or moving parts.
Multimodality - such as being able to see the position of a slider at a glance, or feel its position by touch - is useful because it reinforces the absolute existence of a control and its state across multiple senses. Interfaces using voice and gestures like suggested are the exact opposite of that, because each point of interaction becomes even more disconnected and vague.
>Fantastic design. Normally pages with funky scrolling behavior and boxes whizzing all over the place and all that are annoying but it really works here. Not to mention the adorable visuals.
On my phone, there are several pictures that erratically resize themselves while scrolling past, and that card-stack section completely flips out when scrolling back up. Aside from being very visually noisy, I'd say it just doesn't work.
This kinda reminds me of how, in the wake of the smartphone, for a few years every company thought they needed to boost engagement with their product. Even if their product was something in the background that people are happiest not thinking about. Do we need to engage with our oil filters? With our clothes washers? With our insurance policies?
Some things are best if they stay simple, efficient, reliable, stable, and quiet. Not needy, demanding, high-maintenance, attempting to ensnare us through as many of our senses as they can get their claws on.
Some things are an experience, other things should just be quietly useful. Do we ask ourselves which we should be, before adding another colorful icon, with a red dot in the corner, with a number inside the red dot, to the poor user's screen?
And I hate haptic feedback. I keep my phone on silent 24/7 just to not feel my phone creepily zapping my fingers, and for some reason silent mode is the only way I can accomplish that.
It didn't help that bigger companies were hungry to integrate vertically and build the biggest moat they could, to the point that it became an "eat as much as you can" landscape.
When Steve Jobs said Dropbox was a feature, not a product, it surely didn't reassure Dropbox that they could keep making simple and transparent services that the platform owner would gently play nice with.
This is a true gem of a thought that every designer should consider at the outset of a project: "Some things are an experience, other things should just be quietly useful."
I think the article laments a lack of something that interfaces have legitimately embraced for some time. Gestures, audio control, interactivity, visualizations, and so on are all things we've seen increase over the decades, not vice versa. Whether it's done to a degree and in a manner that suits the author is another matter. That in itself leads to another rebuttal: as someone who is easily overwhelmed by their senses, I think simplicity and accessibility should be the priority. Surely, there are times when a rich interaction can be extremely useful (why just talk about physics when you can also let the reader interact with the concepts[0]). On the other hand, it's easy to become flustered when someone imposes their artistic flair or conceptual model, or worse, when businesses weaponize interfaces against the user. I look at the author's note-organization mockup and I feel legitimate anxiety; it looks like little more than chaos on the screen.
It's a lovely set of sentiments. I think another aspect of UI that has been lost is discoverability - finding out how to do things in a new interface seems harder than it used to be when there was one app-level menu bar. Too many things are hidden in context menus, found only by right-clicking or long pressing on just the right spot. A set of multi-modal interfaces might just make discoverability even worse.
Consistent use of context menus would actually be a boon, because it’s a single mechanism that can be applied everywhere, and just opening a context menu is a benign interaction (no fear of triggering some undesired action). The disappearance of context menus is one thing that I lament about modern UIs (another is tooltips). There may be “share” or “ellipsis” or long-press menus, but they are highly inconsistent, and you never know where to look for desired or possible actions.
But don't you love buttons with ad-hoc icons and no text and no explanation of what they do and they don't even have any visual indication that they're buttons? :)
Ah yes, often found within a Fisher Price user interface, wherein standard UI controls are thrown out in favour of a custom "artistic" and often unintentionally infantile interface. Good thing that never happens in today's world...
I was reflecting on something similar to this while photographing the recent lunar eclipse with a Fujifilm X-T5, a highly tactile camera that is just an absolute joy to operate.
I was on my roof in the dark at 1:30 in the morning in the cold and wind. I'm tired, can't really see much, but still need to actively work with the camera's controls. Thankfully, the X-T5 is covered in physical dials, switches and buttons. Without looking at the camera's screen, I can quickly change shooting modes and the majority of the settings I care about and be confident that I changed the right things.
The same cannot be said about a large number of modern cameras, which opt instead for a more digital approach.
In terms of modern "computing" devices, my cameras are an absolute joy to use compared to most of my other hardware.
So much so that I've recently been finding myself looking to recreate this tactile experience on my general purpose computers. I've been looking at weird bespoke dials, switches and various input hardware to make processing the photos (among other tasks) feel more tactile.
>Thankfully, the X-T5 is covered in physical dials, switches and buttons. Without looking at the camera's screen, I can quickly change shooting modes and the majority of the settings I care about and be confident that I changed the right things.
> The same cannot be said about a large number of modern cameras, which opt instead for a more digital approach.
I feel you. I can only imagine the horror of looking at a small but very bright touch screen in the dark, eyes adjusting, etc.
> So much so that I've recently been finding myself looking to recreate this tactile experience on my general purpose computers. I've been looking at weird bespoke dials, switches and various input hardware to make processing the photos (among other tasks) feel more tactile.
A physical knob on my desk to precisely control the volume of the speakers is a very handy one for me. I don't know how I'd live without it, especially because one big yank on the knob mutes unexpected annoying sounds.
I got agitated looking through that due to the excess of flourishes. Fancy elements should punctuate focal points. If there's too many, the focus is lost.
There is a certain beauty in a webpage about user interfaces failing to load under the strain of traffic volume. I couldn’t read much, but it would appear the best interfaces are the ones that work!
Yes, flat design is too flat, and AI chat is too devoid of friction.
But mobile and tablets are better at certain things [1], and we shouldn't get rid of that either.
I saw somewhere (Bret Victor?) that tools have two parts: the part that fits the problem, and the part that fits the human. The example was a hammer; the head fit the problem (the nail), and the handle fit the human (the hand).
Notably, the two parts must fit their respective things, but they also have to work together.
That is what we should be doing: creating harmonious tools that fit the problem and the human. What that looks like will be different for every tool.
Our interfaces currently have two problems:
* Because they can have any appearance, appearance gets more attention than being a good tool. Example: flat design (good appearance) overriding skeuomorphic design (human fit).
* No one wants to redesign everything, so we all reuse the same base stuff (Electron, Qt, etc.) even if the result won't fit (one or both ends) or harmonize.
I would love to fix both of those problems, but because people are lazy, it essentially means creating a GUI framework that is flexible enough to fit almost any problem and any human (accessibility included) while making sure that flexibility does not destroy harmony.
While I am working on that, it is a tall order, and I am almost certain I will not succeed.
> Compare the feeling of doomscrolling to kneading dough, playing an instrument, sketching... these take effort, but they're also deeply satisfying. When you strip away too much friction, meaning and satisfaction go with it.
Kneading dough sucks if you do it a lot. It's monotonous and tiring. That's why frequent bakers use mixers with bread hooks.
Instruments are designed to be as friction-free as possible, given physical constraints. Friction makes expression more difficult. A violin is the most expressive because it has the least "friction" of valves and hammers and buttons getting in the way.
Sketching similarly is low-friction. That's why it's so much easier than oil painting. You can express yourself hundreds of times more easily, which is why oil painters start with tons of preparatory sketches.
I fundamentally disagree with the premise that friction is desirable. It's not.
> I fundamentally disagree with the premise that friction is desirable. It's not.
I agree that too much friction is terrible.
But what I was saying, and the article seems to be implying, is that too little friction is terrible too.
Using Stable Diffusion is lower friction than sketching or painting. But the latter two are better.
The difference is that there is friction that leads to a good outcome, and friction that does not. Mixers with bread hooks are eliminating bad friction, whereas Stable Diffusion is removing good friction.
And in fact, there's actually more friction when using Stable Diffusion if you have an end in mind; trying to get it to output what you want is high in bad friction.
> Using Stable Diffusion is lower friction than sketching or painting. But the latter two are better.
No, the latter two aren't "better". All three are totally different tools for achieving different purposes. I'm going to use Stable Diffusion to raise engagement on my blog with a hero image and a relevant thumbnail, I'm going to sketch to explore visual ideas and improve my skill of seeing, and I'm going to oil paint to carefully craft something designed to hopefully hang on someone's wall for a long time. (Well, not me because I don't know how to oil paint, but you get the idea.) I'm certainly not going to oil-paint something to illustrate my blog. Oil painting isn't "better".
And when I use ChatGPT to ask questions about math or physics or history or culture, the last thing I want to do is to make the process more difficult. I already spend enough time typing a prompt the AI can clearly understand. There's no way in which it would be made better with "good friction".
I mean, I literally don't know what you mean by "good friction". I don't think I've ever encountered it in my life. Life in general is challenging enough without having to add more challenge for no reason.
Friction with growth is good friction, so long as the friction is minimized for the amount of growth.
And by "growth," I mean anything that helps people to "level up," such as learning, gaining a skill, becoming more Christ-like, whatever.
Growth cannot happen without friction. Your use of AI is stunting whatever growth you could have gained from those processes.
"So what?" you may say. However, someone who applies friction to growth consistently in their blog/code/whatever will find that growth compounds like interest, and though they may be less "productive," their productivity will be better in the long run because they will have the skills to go beyond anything you could ever dream of doing.
Then I'm just gonna say, my life has more than enough "good friction" and growth already. The last thing I need is more.
Since you use the example of a blog, just writing blog entries has plenty of friction inherent in thinking and researching and writing.
I don't want more friction in generating hero images. That's not something I want to level up in. AI is not stunting anything, because if it weren't for AI, my blog wouldn't have images at all, or they'd be stock images that were even worse. But images help your posts reach an audience, so they're necessary. So Stable Diffusion is great.
I'm not living my life to build all the skills. Skills are a means, not an end. If I can choose between spending quality time with friends vs. building skills illustrating hero images by hand, I'm going to choose the quality time with my friends, because I already don't have enough of that.
Also, the very first example of supposedly "good friction" was kneading dough. That's not leveling you up each time you knead. Just use a dough hook if you've got one.
I specifically said that kneading dough was bad friction.
Anyway, as someone with a blog: people have actually complimented me for not using images unless they support the point of the post. I see plenty of negative comments about blog posts with header images.
> I specifically said that kneading dough was bad friction.
Ha, quite right; that's what happens when you get the bad friction of spreading an exchange out over an hour or two... ;)
I think I understand what you're saying, but I don't think it's what TFA was calling friction, and I don't think it should be called friction at all. I think you're just talking about investing in building skills if they will be beneficial in the long run, and I don't think anyone will argue with that. But calling that "friction" is confusing.
And that's why your original comment that "AI chat is too devoid of friction" or later saying "too little friction is terrible too" still makes no sense to me. You should never add friction, or complain that something's friction is too low. AI chat is a tool like a dough hook. It gives you the answers you need faster than driving to a library or trawling through Google links. Adding friction to ChatGPT makes as much sense as saying you should use a card catalog rather than a library terminal to find a book's shelf. There's enough friction in life already. Don't add more.
Even kneading dough can be good friction. If you’re making one loaf of bread at home it can be a very nice part of the process. If you’re baking hundreds of loaves in a commercial bakery of course it’s neither practical nor desirable.
Hence the whole point of the article. We've reduced our UIs down to minimal friction ("easy to read") where, like creating a drawing, a higher friction ("harder") interface could well be more rewarding.
I think it ends up proving the opposite. I did not feel like scrolling through all this pointless flair was in any sense rewarding. Quite the opposite, it's distracting enough to be annoying, so if anything, it feels like a counterpoint to the message in the text.
I think of this trend every time I try to connect my bluetooth headphones to a third device. They'll tolerate two just fine but if you want a third you have to puzzle out which other two they're connected to, go find one of them and disable bluetooth on it. Then you can power cycle the headphones and your third device will now be your second.
I want some kind of magical piece of string which I can touch to both devices as a way of saying:
"you two, communicate now"
And then later, to break the spell, I'll just touch the ends of that string together.
I don't want to have to dig through settings, I want to manipulate physical objects around me.
What would make sense is tapping the devices together and using NFC to pair them. The hard part would be figuring out which parts of the devices to tap together, so it would probably be more like rubbing them together to find the sweet spot. Also, it needs to be seamless and automatic, not involve downloading any apps.
There is a reason we are using a keyboard to interface with this stuff. We’ve been writing to think for millennia. Using a keyboard to do it is just marginally more efficient and less of a strain on your wrist.
In The Great Flattening section of the post the author literally argues that the way we interacted with computers back in the 50s-70s was better because it was more of a full-body experience. That's a silly argument to make. As far as the status quo HCI paradigm goes, we've obviously made a lot of progress over the last 50 years.
However, I think the post is striking a chord because it's pointing to a deeper truth: after 70 years, we are still only scratching the surface of all the ways that humans and computers can potentially interact with each other.
Flat design is much more of a blank canvas than old desktop UI toolkits were, but companies are loath to invest in "Juice" because it requires a level of attention to detail that works against fast iteration.
It might just be me but I find the thesis of the article to be very confusing.
> but we should have made typing feel like painting.
Maybe painting should feel like painting and typing should feel like typing? I don't know about others, but when I type, I just want to type, as efficiently and quickly as possible. I definitely don't want typing to feel like painting.
By the way, loading 92 MB of images to make me read 6 KB of text is brutal!
> By the way, loading 92 MB of images to make me read 6 KB of text is brutal
That's what I get for wanting to read the article before the comments here. Waited minutes only to be greeted with mediocre AI-generated images too, to add insult to injury.
Seems to be a call for the return of skeuomorphic UI, and combining it with things like haptics (actually, fairly classic).
TBH, I'm not especially against the idea, except that, if you make something look like a real-world object, it's important to make it behave like one.
There are a hell of a lot of digital interfaces (not just touchscreen stuff; digital dials and switches can also have the issue) that look like they should behave a certain way, but don't actually do it.
First of all: this is utterly beautiful, thank you!
There is an important thing about computer interfaces which does not hold for playing an instrument, or woodworking, or dancing, etc: it very often needs to be done privately and discreetly enough, both to avoid publicizing your private affairs, and to allow several people work near each other, or work near people who are not working, without bothering them.
This significantly limits the use of body language (finger gestures are OK; gestures, so-so; poses, no), sound (listening is OK provided headphones; speaking or singing, much less so), and haptics (some force feedback is OK; noisy vibration and clicking, less so).
This does not mean that all these things are useless! Not at all; audio feedback may be welcome, haptic feedback may be welcome, but both should be optional (e.g. in a game), unless sound is what you work with anyway (e.g. in a DAW).
What I miss is the richness of erstwhile interfaces, with their colors, shapes, textures, pseudo-3D effects, and the occasional skeuomorphism. They gave so many clues, helpful hints that encouraged exploration, and provided large amounts of subtle feedback. Sadly, they were considered "noisy" and removed by æsthetics purists, likely the same people who designed the sharp, wrist-biting "clean" edges of certain MacBook models. This regress needs to be reversed.
Something has to be said about non-keyboard controls. They are few and non-standard, with the exception of touchpads/touchscreens and mice. Those allowed direct manipulation and gestures to flourish! But, unlike knobs, they cannot give reasonable haptic feedback. Musicians have all the best devices like that, and those who need such controls repurpose MIDI devices with their knobs and faders; GIMP, a raster graphics editor, even has a special MIDI device configuration section in its settings. Another good source of haptic controls are mice with their wheels and, more rarely, gamepads with joysticks. Neither is assumed to be connected to an average laptop, though, which limits their use in interfaces, however optional.
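Incidentally, wiring such a knob box into your own software takes very little code. A minimal sketch using the CoreMIDI packet API on macOS (what you map the values onto is up to you):

```swift
import CoreMIDI

// Minimal sketch (CoreMIDI, macOS): print Control Change messages from
// every connected knob/fader box, ready to map onto any app parameter.
var client = MIDIClientRef()
MIDIClientCreateWithBlock("KnobListener" as CFString, &client, nil)

var inPort = MIDIPortRef()
MIDIInputPortCreateWithBlock(client, "In" as CFString, &inPort) { packetList, _ in
    for packet in packetList.unsafeSequence() {
        let len = Int(packet.pointee.length)
        let bytes = withUnsafeBytes(of: packet.pointee.data) { Array($0.prefix(len)) }
        if bytes.count >= 3, bytes[0] & 0xF0 == 0xB0 {  // 0xB0 = Control Change
            print("CC #\(bytes[1]) -> \(bytes[2])")     // controller, value 0-127
        }
    }
}

// Subscribe to every MIDI source currently plugged in.
for i in 0..<MIDIGetNumberOfSources() {
    MIDIPortConnectSource(inPort, MIDIGetSource(i), nil)
}
```

Keep a run loop alive (e.g. RunLoop.main.run() in a command-line tool) and every twist of a knob arrives as a CC value you can bind to zoom, volume, brush size, whatever.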
> Compare the feeling of doomscrolling to kneading dough, playing an instrument, sketching
The whole article felt disconnected from reality to me, and this might be the core part that underlies it all.
The author seems to be hostile enough to smartphones that they don't really see how much more physical using them has become. In particular, in the last decade or so we got rid of so much frequent polling to see whether something needs attention, and notification management has become leaps and bounds better, at least on Android.
But the author's focus seems to be more on social media?
And they also idealize what people were doing before smartphones.
People scrolling Twitter rage thread today would probably not have been sketching birds in their backyard. Some perhaps, most surely not. TV was already widely popular for a reason.
There is a niceness to more kinesthetic input devices, to dials, knobs, and pens. I'm always experimenting with trying more. The unfortunate thing is they tend to be niche and unsupported. Try finding a nice dial to control things like zoom or volume; it's harder than it should be, and what you find costs over $100 or is not a great experience.
A Brief Rant On The Future Of Interaction Design [1] was great. The comments in this thread are my first time hearing about that blog post. Send me more blogs/books/videos/etc. like that, please.
The problem I see here is that the more senses you engage - really, the more coordination you demand of the user - the harder it is to make the interface feel intuitive.
I think what UI needs is talented people to come up with new ideas.
When it was purely utilitarian, only a minority of people could use it.
When it became tailored to a wider audience, it was bland and uninspired.
The further it goes towards art/skeuomorphism, the more niche the userbase.
The further it goes towards corporate capture, the angrier the userbase.
I think we shouldn't pretend there's a big, obvious solution that everyone else has ignored.
Shit, I remember reading the Windows 8 designer blogs. Microsoft did serious testing to come up with the most hated interface in their history. From an academic perspective they did a lot of things correctly. Still shit. In fact, Microsoft have form here: I remember being one of the few people who enjoyed the original Xbox controller, which was fairly rapidly redesigned due to market angst.
It seems, from the finer details, that a lot of it might be AI-generated, which, at least to me, would undercut the message about how our interfaces are less human. I'd be very glad to be proven wrong and told it's just the style.
As an evocation of adding "texture" to a "flat" experience, I think it does the job quite well, and is pleasant to look at - it feels well thought out and crafted to me!
It just feels strange to muse about the embodied experience of drawing while having a computer spit out a semantic average of your general idea with little creative control.
I agree with other commenters that it seems AI-generated (patterns/noise in fine detail, nonsense text, and inconsistency in style, particularly in the faces of the "dolls").
The first few images are impressively well curated and consistent though. I think the "rotoscoping"/"compositing" helps quite a bit to make it feel more cohesive and intentional - certainly it's a step above the usual "blog post header image" slop.
Yeah, I'm having a hard time telling as well, but I think it is AI. Following the threads, there's a surprising amount of consistency and coherency that we don't commonly see in AI art. I would expect AI art to be VERY bad at felt imagery. The image here[0] looks too consistent, though there are definitely parts that are edited. And here[1] the blurring is very consistent (other than the stitching you can see when scrolling). Although this one[2] really makes it look like the text on the computer screen is AI-generated. A lot looks good, but the coat on the main character flattens out. But here is a really good one where you can see the AI errors pop up if you look at the bow[3] (the blurring is also inconsistent there).
There are definitely issues that pop up, but I find it weird that people are calling this "slop". "Slop" is that bullshit people post when they take the first output and don't pay attention to details. Clearly this person played around with the parameters a lot. The problem with AI art is taking the art out of the art (and the data theft). But if you're using AI, iterating, and focusing on details, then I'm not sure how it is different from any other medium where you do the same things. Art takes time and makes you feel. Considering how it added to the message here, and how clear it is that this person put time into the details, I find it hard to call it "slop" just because AI was used.
This is a beautiful article with great visuals, like many other comments have said. But the actual point being made is worth paying attention to:
> Computers used to be physical beasts.
> We programmed them by punching cards, plugging in wires, and flipping switches. Programmers walked among banks of switches and cables, physically choreographing their logic. Being on a computer used to be a full-body experience.
It’s about working in a physical environment rather than just through isolated digital interfaces, which is how many different jobs (not just programming) work today. The personal touch is lost. But I’m not sure it can be fixed. There is no commercial justification for making computers or phones “enjoyable” to use.
I can get behind the vision of computers being more physical. That's a potent vision.
The claim that current technology is a regression compared to past technology is hard to swallow, though. I have a family friend who took CS classes in the early 70s. Punching out 0s and 1s, waiting a day for a chance to run the program, etc. I do not get the impression that she views this as the pinnacle of enjoyable HCI.
My mom also has a funny story about visiting this friend during the chaos of finals. The friend showed my mom the computer area. My mom vividly remembers seeing students frantically trying to get their final programs done and punch card papers being scattered EVERYWHERE.
Back to the main point, I don't see a lot of extensive, enjoyable physical interaction in this past paradigm. Punching holes in paper would probably get tedious. Carrying the stack of papers over to the mainframe operators would also get annoying. And then you read out the results of your program one page at a time on physical paper. Sure, it's more physical, but is this really more enjoyable in the long run?
So exactly what point in the past is the author reminiscing about?
Computers were never "physical beasts" in terms of their connection to the human senses. If anything computers are vastly improving in the way they interact with the human senses. I'm an optimist. I think we're in the early days and in 100 years the current computers will seem terribly primitive from a sensory standpoint.