The Decline of Usability (datagubbe.se)
996 points by arexxbifs 40 days ago | 694 comments



Ubuntu got worse at 18.04. Logging in on desktop now requires "swiping up" with the mouse to get the password box. The "swiping" thing is to avoid problems with unwanted activation when the device is in your pocket. It's totally inappropriate to desktops.

Then there's icon mania. I've recently converted from Blender 2.79 to Blender 2.82. Lots of new icons. They dim, they change color, they disappear as modes change, and there are at least seven toolbars of icons. Some are resizable. Many icons were moved as part of a redesign of rendering. You can't Google an icon. "Where did ??? go" is a big part of using Blender 2.82. Blender fanboys argue for using keyboard shortcuts instead. The keyboard shortcut guide is 13 pages.

Recently I was using "The Gimp", the GNU replacement for Photoshop, and I couldn't find most of the icons in the toolbar. Turns out that if the toolbar is in the main window (which is optional), and it's too tall to fit, you can't get at the remaining icons. You have to resize the toolbar to give it more space. Can't scroll. There's no visual indication of overflow. It just looks like half the icons are missing.

(I have no idea what's happening in Windows 10 land. I have one remaining Windows machine, running Windows 7.)


> Logging in on desktop now requires "swiping up" with the mouse to get the password box. The "swiping" thing is to avoid problems with unwanted activation when the device is in your pocket. It's totally inappropriate to desktops.

Anecdote: The first time this happened, I had no idea why it wasn't working and naturally started clicking on things and pressing buttons to try to get it to do the thing. I thereby discovered that you can get the password prompt by pressing Enter.

Having used it this way for two years now, your description of this behavior is the first time I'm learning that it is also possible to do it by dragging the mouse upwards. The discoverability of this behavior apparently does not exist -- I assume if pressing Enter hadn't worked I would have had to use a different device to look it up on the internet.


> Anecdote: The first time this happened, I had no idea why it wasn't working and naturally started clicking on things and pressing buttons to try to get it to do the thing. I thereby discovered that you can get the password prompt by pressing Enter.

One more anecdote. I found this screen for the first time when I got my laptop out to demonstrate something to a student. I didn't know what to do and started doing random things, trying to figure out what had happened, and the student interrupted my attempts, took the mouse from my hand and swiped with it. I felt old and stupid. It's like 20 years ago, when I taught my parents to use a standard UI. Only now it is me who needs help.

I asked; she had never seen Ubuntu before, and nevertheless she managed it better than me. I think I'm growing old and just can't keep up with the pace of change.


I think it's the familiarity which is actually hurting you there. If you come up to a device which, as far as you know, is alien technology, you don't know if it should behave like a Mac or an iPhone, so the thing that works on an iPhone feels like more of a valid possibility. If you come up to it knowing exactly what kind of device it is and that it isn't at all like an iPhone, it doesn't.

Because any hope of guessing it comes from knowing that phones do that, so the less like a phone you know it to be, the less likely you are to try that. Notice that even on phones it isn't intuitive if you've never done it before. If you tap the lock screen on Android it shows a message that says "Swipe up to unlock" -- because how else would you know that for the first time? But at least there it serves a purpose.


I still remember the first time my young niece sat at my traditional desktop computer a few years ago. She must have been about five or six years old. She immediately started using her hands to try and interact with the screen and was utterly confused as to why nothing was happening.


Like Scotty in that iconic scene..."Hello, computer...........Hello, computer......" someone hands him the mouse, which he holds like a microphone "Hello, computer...."


For those unfamiliar, from the second greatest Star Trek film: https://m.youtube.com/watch?v=xaVgRj2e5_s


> I think I'm growing old and just can't keep up with the pace of change

On the other hand, when my iPhone suddenly would connect with a caller, but neither party could hear the other, redialing didn't help, turning it off/on didn't work, I remembered the ancient trick of "cold boot". Which resolved the problem.


You put an iPhone in the freezer and booted it, and that fixed it? Wow. I thought that was just for saving spinning disk drives and preserving the contents of RAM for an attack.


A cold boot means shutting off all power to the device. The normal "off" button on the iPhone puts it in standby mode, not off. This is so it can still listen for calls.


I tried to power off an iPhone today for 5 minutes, failed, and just gave up. Why Apple, why would you make this simple action so damn obscure.


My iPhone 10 locked up hard a few days ago and I couldn’t get it to turn off using the standard method.

After some digging, I found a new trick that I guess is implemented at a lower level: press and release volume up, then volume down, then press and hold the main button until it powers off.


Wait, is it not press-and-hold power like on Android?


No, on the newer iPhones press and hold simply brings up Siri. Gotta press and hold main button and either volume button simultaneously. If that doesn’t work you should be able to use the trick I describe above.


you have to press the "power" button and a volume button afaik


In my experience Ubuntu shows a little animation of up arrows, and maybe even says "swipe to start" if you look long enough at the login screen.

It is pretty confusing the first time, and annoying every time after that. I didn't consciously know about the "Enter" trick before now.


Windows 10 has the same mechanism, she might have learnt it from there first and just applied it here.


I’ve been actively using computers since CP/M. On Windows, which I use daily, I just randomly click, press keys and shake the mouse until I get a password prompt to log in. Since there is a slight delay before anything happens after each action, I’ve never been patient enough to figure out what exactly it is that works.


The period between wake-up-from-sleep and getting a usable desktop has been a UX nightmare for a long time. There are so many awful things and question marks that happen in the flow:

1. What do you need to do to invoke a wakeup? Press a key? Are there any keys that don't wake the machine? Move the mouse? Click a mouse button?

2. Multiple monitors: During the wakeup sequence, you first have one display turn on, then you think you can log in but surprise! Another display turns on, and the main display briefly flickers off. For some reason, after 25+ years, display driver writers still can't figure out how to turn a second display on without blanking and horsing around with the first display.

3. Once the displays are on, some systems require some kind of extra input to actually get a login prompt to display. Is it a mouse click? A drag? Keyboard action? Who knows?

4. Some systems allow you to type your password as soon as the computer wakes. But there is some random delay between when you invoke wakeup and when it can accept input. What this usually means is I start typing my password, the computer only recognizes the last N characters and rejects it, I wait, then type it again.

These are some irritating bugs that affect everyone who uses a PC every time they log in. Yet OS vendors choose to spend their development time making more hamburger menus and adding Angry Birds to the Start menu.


Windows 10 allows a normal mouse click as well, unless Microsoft changed that afterwards.


It still does. When you click the mouse, the lock screen moves up very fast - the quick animation also hints at the gesture for next time. If you have a touch-enabled device and touch-press the lock screen, it jumps up a little and falls back down, suggesting the swipe movement. On top of that, just typing on your keyboard works.

So MS has managed to make an interface that works just the same on desktops, laptops and touch-enabled devices, and the UX isn't bad on either.


Generally yes, except I seem to run into an annoying bug where if you start typing your password too quickly after the initial unlock keypress, the password input control sometimes decides to select everything you've managed to type so far when it initialises, so your next key press overwrites everything. Plus at home I've kept the full lock screen because I actually like those random landscape pictures, but at work they've set the computers to directly boot into the password entry, which of course rather plays havoc with your muscle memory.


It hasn't changed.


Windows 10 has the same stupid UX, they likely learned it there. It also supports hitting Enter, or any other key. I don't know if Ubuntu supports other keys too, I use Xubuntu to avoid these exact pointless changes.


I don't know about Ubuntu but on Windows you can also just click with the mouse to open it, no need to swipe up.


I’m 20 and it took me literally minutes to realise I needed to swipe up. I thought the computer had frozen.


It's not your fault.

It's the shitty ergonomics that have been pervading software UI design for several decades now.

From the huge number of responses to this article, it's clear the software industry has a very major problem. The question I ask is why haven't users/user complaints been successful in stopping these irresponsible cowboys.

Seems nothing can stop them.


> The question I ask is why haven't users/user complaints been successful in stopping these irresponsible cowboys.

Most users don't complain, because technology is magic to them and they have no point of reference; they assume things must be the way they are for a reason. From the remaining group that does complain, many do it ineffectively (e.g. complaining to friends or on discussion boards that aren't frequented by the relevant devs). And the rest just aren't listened to. It's easy to dismiss a few voices as "not experts", especially today, when everyone puts telemetry in their software (see below), and doubly so when users' opinions are in opposition to the business model.

Finally, the software market isn't all that competitive (especially on the SaaS end), so users are most often put in a "take it or leave it" situation, where there's no way to "vote with your wallet" because there's no option on the market that you could vote for.

The problem with telemetry is that measuring the right things and interpreting the results is hard, and it's way too easy to use the data to justify whatever the vendor (in particular, their UX or marketing team) already thinks. A common example is a feature that's been hidden increasingly deeply in the app on each UI update, and finally removed on the grounds that "telemetry says people aren't using it". Another common example is "low use" of features that are critical for end-users, but used only every few sessions.


I would like to add the use of "taps" as an engagement metric to your list of misuses of telemetry. There used to be a rule of thumb in UI design that important actions should be as few clicks away as possible. Measuring engagement through taps encourages the opposite.

I also don't like things measuring "dwell time" when scrolling, as it encourages attention-grabbing gimmicks and rewards things that are confusing as well as things that are useful.


An organizational problem seems to be that UX decisions are owned by the UX team, who tend to be extremely tribal in their opinions.

As in, if you are not a UX professional, your opinion is inconsequential.

See: replies on most Chrome UX feature requests over the last decade


> Seems nothing can stop them.

Of course. People complain. The developers say: fork it and change it yourself, or use what we give you.

The people that can't do that just suffer through it. The ones that know enough use something else.

I log in on a Linux tty, and startx starts dwm. No fancy login screen for me.


I don't care how "user friendly" you think you are, not being able to log in is an absolute fail. Even getty has a better user interface; imagine that: a regression all the way back to the 70s.

This is why I don't touch GUIs from the major binary distros or gnome3 with a 10 foot pole. If I can avoid it I don't ever install anything from those projects.


I have avoided gnome as much as I can since the gnome2 days. The entire project is rife with UX decisions that leave a bad taste in my mouth.

[0] is the example that always comes to mind. I guess this made sense to somebody at the time, but it adds overhead to a process that was simple before, and it isn't enabled just for "Enterprise" deployments; it's just dumped on the user to figure out how to configure screensaver hack settings by creating/modifying a theme.

[0] https://wiki.gnome.org/Attic/GnomeScreensaver/FrequentlyAske...


The problem with gnome is that they don't seem to validate their ideas at all. As a result, gnome users are directly subjected to the unfiltered whims of gnome's UI "designers".

Instead of spending the substantial donations they received[1] on who knows what, the GNOME foundation should have spent some of it conducting proper focus groups.

1: https://www.gnome.org/news/2018/05/anonymous-donor-pledges-1...


I don't know if that's true. GNOME 3 is just a dark enlightenment experiment, like an attempt at neomonarchy or the Empire from Elite. Instead of feature creep and user accommodation to Windows idioms, they slash and burn despotically to ease developer burden and try to flesh out their own vision. Do you use Dropbox? Too bad. :3 There are extensions that break, but whatever, we don't care, fuck the tray.

I think it's an interesting and worthwhile experimental path; I just wish it wasn't the "default" as much as it is. But I also feel that way about Ubuntu. And Windows. xD


This page isn't new enough to include more recent failures.

One of my least favourite was when it was not possible to configure the screensaver timeout to never turn off the display. IIRC you had a choice of several fixed times, from 5 minutes to 4 hours, but no "Never" option.

Not useful for systems which display information and are infrequently interacted with. That use case was completely ignored, and for no good reason.


> This page isn't new enough to include more recent failures.

oh no doubt, I have another comment from 4+ years ago about the same topic https://news.ycombinator.com/item?id=10883631 and even then it was ancient history IIRC

man, just looking at that page again reminded me that Windows Registry for Linux^W^W^W^W gconf exists.


Between this and https://stopthemingmy.app/ and the various other systemd/freedesktop “anti-hacker” initiatives, I’ve been finding Linux to be more and more becoming the opposite of the operating system I’ve used for the last 20 years.


Still, I try to avoid GUI stuff and freedesktop stuff as much as possible (I do use a window manager, but not GNOME or KDE or whatever), and write better programs when possible, but a few things don't allow that. I don't use any desktop environment at all. The web browser is one thing that isn't so good. Really, many things should be moved out of the web browser and made to work with simply nc, curl, etc. Some things work better as local programs, others as remote ones, with whatever protocol is applicable, e.g. IRC, NNTP, SMTP, etc., please.


Man, there is not enough space in this comment box, or time, for all the criticism that link deserves.

>Icon Themes can change icon metaphors, leading to interfaces with icons that don’t express what the developer intended.

Icons were never sufficient metaphors to start with which is why we have text labels.

>Changing an app’s icon denies the developer the possibility to control their brand.

What does this even actually mean.

>User Help and Documentation are similarly useless if UI elements on your system are different from the ones described in the documentation.

This is only true if the user is completing an action that is solely based on clicking an icon with no text, which we have already established is bad.

>The problem we’re facing is the expectation that apps can be arbitrarily restyled without manual work, which is and has always been an illusion.

Why has this worked generally fine in lots of ecosystems including gnome?

>If you like to tinker with your own system, that’s fine with us.

Earlier discussion seemed to suggest that lots of GNOME developers were in fact not fine with this because it hurt GNOME's "brand identity".

>Changing third-party apps without any QA is reckless, and would be unacceptable on any other platform.

Reckless?

> we urge you to find ways to do this without taking away our agency

Your agency?

> Just because our apps use GTK that does not mean we’re ok with them being changed from under us.

Nobody cares if you are OK with it.


It's really just the popular distros that are following redhat. If you build an OS from scratch or use something like alpine or gentoo it's not so bad.


A neat idea that I hadn’t thought of until your comment:

Because it’s now possible to run multiple VMs at once (containers, etc) perhaps it’s time to run a simple, minimal, admin friendly hacker vm inside Ubuntu desktop?

Let Ubuntu configure all that it needs to get a good functional machine out of the box (working sleep mode for laptops, WiFi management, GPU support, systemd if that’s what it wants.) I then deploy the minimal VM I actually want to poke around with inside that installation.

This is pretty much what many people do in macOS. Apple’s OS supports the bare metal, vagrant / VirtualBox give me my tractably scrutable dev environment.

It’s not a particularly ground breaking concept but it might cheer me up a bit when battling with the volatility of user facing Linux distributions.


> Because it’s now possible to run multiple VMs at once (containers, etc) perhaps it’s time to run a simple, minimal, admin friendly hacker vm inside Ubuntu desktop?

> Let Ubuntu configure all that it needs to get a good functional machine out of the box (working sleep mode for laptops, WiFi management, GPU support, systemd if that’s what it wants.) I then deploy the minimal VM I actually want to poke around with inside that installation.

If there's anyone like me here they might be happy to know that KDE Neon exists and is something like:

- Stable Ubuntu base.

- Infinitely (almost) customizable KDE on top.

- And (IMO unlike Kubuntu) sane defaults.


Thanks for the tip about KDE Neon. I use Kubuntu, but it definitely misses the mark.


Just being pedantic: containers aren't VMs. Containers use the native Linux kernel namespacing and segregation facilities to run multiple applications in a somewhat segregated way.


sounds a little bit like https://www.qubes-os.org/intro/


Gnome and KDE are both worse than useless - They poison other, useful projects.

There is never going to be a unified GUI for Linux; that requires a dictator. KDE tried to provide the carrot of development-ease, Gnome tried to generate some reality distortion, but nobody cared. Carrots don't work. As far as I'm concerned, the experiment is over and it is time to embrace the chaos.

Now, this is easy for me to say, I'm mostly a command-line person anyway, and have spent most of my working life dealing with horrible UI. But it does have a lot of implications for Linux that I think a lot of people are not ready to accept.


Honestly the problem with all software is people trying to “innovate” too much. They made this thing called a book once upon a time and those have worked for centuries. Same thing with UIs: the stacking window managers from Windows 95 and XP work well, so why change them?


"Honestly the problem with all software is people trying to “innovate” too much."

You are spot on, and your 'book analogy' is perfect. If it works perfectly don't change it — that is unless an innovation arrives that offers a significant improvement and that's just as easy to use.

Unfortunately, most so-called UI improvements over the last 20 or so years are not improvements at all, in fact many have been quite regressive. They've annoyed millions of users who've collectively wasted millions of hours relearning what they already knew (and in the end nothing was added by way of new productivity)—and that doesn't include the developer's 'lost' time developing these so-called improvements. It's time that would otherwise have been much better spent fixing bugs, providing security improvements and or developing software for altogether new applications that we've not seen before.

The question I keep asking over and over again is what exactly are the causes behind all this useless 'over innovation'. Why is it done on such a huge scale and with such utter predictability?

Is it marketing's wish for something new? Are developers deliberately trying to find work for themselves or to keep their jobs or what?

It seems to me that many a PhD could be earned researching the psychological underpinnings of why so many are prepared to waste so much money and human effort continuing to develop software that basically adds nothing to improve or advance the human condition.

In fact, it's such an enormous problem that it should be at the core of Computer Science research.


> Why is ('over innovation') done on such a huge scale and with such utter predictability?

Promotion & NIH management syndrome.

New shiny gets a promotion. Fixing a niche bug in a decades-old stable system does not.

And by the time all the new bugs you've introduced are found, you'll have a new job somewhere else.

So essentially, project managers' bosses not pushing back with a hard "Why should we change this?"


GNOME, Mate, Pantheon, XFCE, KDE, Deepin, UKUI, LXQt etc. created an unmaintainable mess of competing forks _while all using stacking window managers_. It's maddening how similar they all are to each other. Someone should build a dating site where understaffed Linux projects can find a matching project to merge with.


Well, it’s great to experiment. And GNOME 2, for example, worked really well. I guess I am thinking more in the realm of things like “force touch” gestures, multi-touch swipes and such. They could be useful as an added bonus for power users, but I think the traditional paradigm for the OS should work by default: 1) on desktop, single/double click, drag and drop, tool tips, mouse wheel; 2) on mobile, quick tap, tap and hold, basic swipe gestures (on mobile these work well but are sometimes not intuitive).

I’m probably missing some stuff, but I think people ought to at least be able to “feel” their way around a UI. Lately there’s been so much push for minimalism, like omitting scroll bars and such, that it makes things confusing.

But, again that experimentation will root out what works and doesn’t. And new devices like VR of course have yet to be discovered paradigms.


Before codexes – the kind of book we use today – scrolls had worked for centuries.

> the stacking window managers from Windows 95 and XP work well, so why change them?

To get something that works better.


> To get something that works better.

Despite all evidence to the contrary.


"Experiments are bad. We've tried them once, didn't work"


> embrace the chaos

Well said. Is your machine shop stocked by a single brand of tools all in the same color, or is it a mix of bits and pieces accumulated, rebuilt, repainted, hacked, begged-borrowed-and-stolen over the course of your development as an engineer?

A free software Unix workstation is exactly the same. It’s supposed to look untidy. It’s a tool shed.

Apologies if I’ve touched a nerve with the Festool crowd with my analogy.


"There is never going to be a unified GUI for Linux; that requires a dictator."

Agreed, but I can never get to the bottom of the reason why developers do not provide alternative UI interfaces (shells) so that the user can select what he/she wants. This would save the user much time relearning the new UI (not to mention a lot of unnecessary cursing and swearing).

For example, Microsoft substantially changes the UI with every new version of Windows—often seemingly without good reason or user wishes. This has been so annoying that in recent times we've seen Ivo Beltchev's remarkable program Classic Shell used by millions to overcome the problem of MS's novel UIs.

Classic Shell demonstrates that it's not that difficult to have multiple UIs which can be selected at the user's will or desire (in fact, given what it is, it has turned out to be one of the most reliable programs I've ever come across—I've never had it fault).

It seems to me that if developers feel that they have an absolute need to tinker or stuff around with the UI then they should also have at least one fallback position which ought to be the basic IBM CUA (Common User Access) standard as everyone already knows how to use it. If you can't remember what the CUA looks like then just think Windows 2000 (it's pretty close).


> Agreed, but I can never get to the bottom of the reason why developers do not provide alternative UI interfaces (shells) so that the user can select what he/she wants.

It's because everybody wants you to use their thing and not some other thing. If people have a choice then some people will choose something else.

This is especially true when the choice is to continue using the traditional interface everybody is already familiar with, because that's what most everybody wants in any case where the traditional interface is not literally on fire. Even in that case, what people generally want is for you to take the traditional interface, address the "on fire" parts and leave everything else the way it is.

Good change is good, but good change is hard, especially in stable systems that have already been optimized for years. Change for the sake of change is much more common, but then you have to force feed it to people to get anyone to use it because they rightfully don't want it.


> embrace the chaos

Linux was all about chaos and herding cats until just a few years ago.

It's the "standardisation at all costs" brigade who have killed the goose that laid the golden eggs. It's now far worse than Windows in many aspects. Freedesktop and GNOME deserve the lion's share of the blame, but RedHat, Debian and many others enabled them to achieve this.


Linux GUIs have always been worse than Windows as far as I remember (going back to the mid 90s).


That's very subjective, and not at all related to the point I was making. It wasn't about the niceness of GUIs.

Over the last decade, we have experienced a sharp loss of control and had certain entities become almost absolute dictators over how Linux systems are permitted to be run and used.

Linux started out quite clunky and unpolished. It could be made polished if you wanted that. But nothing was mandatory. Now that's changed. A modern mainstream Linux distribution gives you about the same control over your system that Windows provides. In some cases, even less. Given its roots in personal freedom, ultimate flexibility, and use as glue that could slot into all sorts of diverse uses, I find the current state of Linux to be a nauseating turn-off.

And I say that as someone who has used Linux for 24 years, and used to be an absolute Linux fanatic.


People have had similar complaints for a very long time: https://www.itworld.com/article/2795788/dumbing-down-linux.h...


True. In that article though, they thought that the commercialisation wouldn't seriously affect true free distributions like Debian. Shame that did not turn out to be the case. The fear of even slight differences has effectively forced or coerced everyone to toe the RedHat line, even when severely detrimental. What we lost was any semblance of true independence.


I'd be fine with a "UI dictatorship" or standardization if it gave us better UI and UX. GNOME's "dictatorship" has only brought bad experiences and inappropriate interfaces.


"This is why I don't touch GUIs from the major binary distros or gnome3 with a 10 foot pole"

Exactly, but what perplexes me is why these issues that are so obvious to us are not obvious to them. Why do they think so differently from normal users?


I actually like Gnome 3 a lot. I feel like it’s the first DE I’ve tried that I could interact with with the appropriate mix of 90% keyboard 10% touchpad on a laptop.


This behaviour has been removed in the next gnome release


I dislike it too, but doesn’t Windows 10 do the same thing?


Windows 10 is even worse. It swipes up for you when you press a key, but it won't pass that key on to the password box. So you have to press a key and then type your password. At least with gnome you can just type your password and it works as expected...


It also requires you to wait until the animation has finished. I regularly lose the first character or two of my password, because I start typing too soon, while it’s still animating away


iOS has been this way for a couple versions, and I can't imagine how it ever passed testing.

Animations that block user input are the sort of stupidity that becomes evil in its own right. There was even a calculator bug caused by this. This is a massive failure at the management level; somebody actually codes these things, but that it's not caught anywhere before shipping shows that the wrong people are in charge.


It's the sin of losing the user input.

Don't make your user repeat something twice :)


I do not have this experience on the screen-lock screen (on Ubuntu 18.04 at least) — just typing a password will put the full password into the box once the animation is gone.

I do not log-in frequently enough to remember how it behaves on log-in (even my laptop has ~90 days of uptime).

FWIW, moving away from GNOME 2.x to either Unity or GNOME 3 was a hard move to swallow, though in all honesty, Unity was better (though pretty buggy and laggy until the end)!


When Ubuntu removed GNOME 2, I looked at Unity, got really unhappy with it, and installed GNOME 3. ... Within a few minutes, I started to think that Unity wasn't THAT bad after all, and went back to it.

Now that it is gone, I'm using Xfce, which seems to be the last decent desktop environment.


Another one to add:

Besides waiting for animation, in Windows 10, if you type the password fast enough the first character gets selected and the second character you type will replace it.

This happens frequently.


It's not just the logon screen. In many apps if you type CTRL+O followed by a file name to bring up the File Open dialog and populate the name field then the application frequently loses the first few characters of the file name. Or type CTRL+T followed by text. This opens a new tab but the text appears in the old tab if you don't pause.

These things used to work reliably. I think most of the problems are caused by introducing asynchronicity into apps without thinking about how it affects keyboard input. Keyboard input should always behave like the app is single-threaded.
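To illustrate the point, here is a minimal sketch of the "buffer, then replay" behaviour being argued for (all names are hypothetical, not any particular toolkit's API): keystrokes that arrive while a control is still initialising asynchronously get queued and delivered once it exists, instead of being silently dropped.

    from collections import deque

    class InputBuffer:
        """Queue keystrokes until the target widget is ready, then replay them."""
        def __init__(self):
            self.ready = False      # flips to True once async init completes
            self.pending = deque()  # keystrokes received before that point

        def on_key(self, key, deliver):
            # deliver(key) hands the key to the focused widget
            if self.ready:
                deliver(key)
            else:
                self.pending.append(key)  # hold it instead of discarding it

        def on_widget_ready(self, deliver):
            # called when the dialog/tab/password box finally exists
            self.ready = True
            while self.pending:           # replay in original order
                deliver(self.pending.popleft())

    # Usage sketch: two keys typed "too early" still end up in the field.
    buf, typed = InputBuffer(), []
    buf.on_key("s", typed.append)
    buf.on_key("e", typed.append)
    buf.on_widget_ready(typed.append)     # the field appears; "s", "e" replayed
    buf.on_key("c", typed.append)
    print("".join(typed))                 # -> "sec", nothing lost

Whether that buffering lives in the toolkit or in the app hardly matters; from the keyboard's point of view the app then behaves as if it were single-threaded.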


Application developers have ceased to care about input focus for keyboard entry.

Here's an example I encounter whenever I use Microsoft Teams at work. I go to "Add Contact", and the entire screen becomes a modal entry box into which I have to enter a name. There's a single entry field on the screen. It's not in focus, even though that's the sole action that I can perform at this time. I have to explicitly select the entry field with the mouse and then type. It's such a basic usability failure, I really do wonder what both the application developers and the testers are actually doing here. This used to be a staple of good interface design for efficient and intuitive use.


It is the same in Chrome or Firefox: an empty window with only a URL bar, and you still need to click in the URL bar to write something. Writing software is hard. Using your brain while writing software is even harder.


Learn this habit: press backspace a couple times first, to wake up the screen and show the password input.


Lol, you've just described such a beautiful regression...


On my Windows 10, if I press a key nothing happens. My theory is that something else has taken the focus away from the "login app". If I press Alt-Tab, then suddenly another key will wake up the login app and give me a password prompt.


I always just press a CTRL key on both Windows 10 and Ubuntu.


Windows seems to respond to almost any input to display the password box.


Probably, but Windows is terrible at everything UX so that doesn't mean much.


Windows 10 with Windows Hello does face recognition logon without any need to swipe or press key combinations. On a touchscreen, it responds to a swipe upwards. On a machine with a keyboard it responds to apparently any mouse click or keypress.

Presumably Gnome copypasted it from Windows, because otherwise where did that idea come from into multiple distinct projects simultaneously? Windows has always had ctrl+alt+del to logon, Ubuntu hasn't had a precedent of having to do something to get to the logon prompt, IIRC.


Windows hello is such a nice experience (at least just for local login), but I will never use it because I don't trust Microsoft with my data.


"the info that identifies your face, iris, or fingerprint never leaves your device." - https://support.microsoft.com/en-us/help/4468253/windows-10-...


> "the info that identifies your face, iris, or fingerprint never leaves your device." - https://support.microsoft.com/en-us/help/4468253/windows-10-....

Of course not.


Of course not what?


> I thereby discovered that you can get the password prompt by pressing Enter.

This is the same thing on Win10. I was super annoyed that they added one more step to a very frequent action, for no benefit on a PC. Hopefully I don’t have to use Win10 too much, but this is symptomatic of the mobilification of computers.


You can just start typing the password and it will input it correctly, the Enter is not actually needed (at least on 19.10)


Gimp is a great example. If you look up screenshots from the late 90s of gimp 1.0 you think

"Hey wow, that looks pretty great! I know where the buttons are, I can quickly scan them and it's clear what they do! It isn't a grey on grey tiny soup, they are distinct and clear, this is great. When is this version shipping? It fixes everything!"

Apparently almost everyone agrees but somehow we're still going the wrong way, what's going on here? Why aren't we in control of this?


What's crazy is Gimp has always felt user-hostile to me. Loads of other people share this complaint. With every new release, there are people recommending giving it a second/third/fifty-third shot saying "They finally made it easy to use this time!"

At this point it feels like a prank that's been going on for a quarter century.


"What's crazy is Gimp has always felt user-hostile to me."

You're not wrong, I wish you were but you're not. By any measure GIMP is a dog of a program. I wish it weren't so as I stopped upgrading my various Adobe products some years ago, but alas it is. It would take me as long as this 'The Decline of Usability' essay to give an authoritative explanation but I'll attempt to illustrate with a few examples:

1. The way the controls work is awkward; increasing or decreasing, say, 'saturation' is not as intuitive as it is in Photoshop, and the dynamics (increase/decrease etc.) just aren't as smooth as they ought to be. Sliders stick and don't respond immediately, which is very distracting when you're trying to watch some attribute in your picture trend one way or the other.

2. Most previews are unacceptably slow; they really are a pain to use.

3. The latest versions have the 'Fade' function removed altogether. I use 'Fade' all the time and I don't like being told by some arrogant GIMP programmer that the "function never worked properly in the first place, and anyway you should use the proper/correct XYZ method". You see this type of shitty arrogance from programmers all the time[1].

4. GIMP won't let you set your favourite image format as the default save type; you're forced to save to GIMP's own XCF format and then export your image into, say, the .JPG format you actually need. (I understand the reason for this but there ought to be an option to override it; if the GIMP developers were smart they'd provide options with various scopes, for instance 'session only'.)

5. As others have mentioned, there's icon and menu issues, menu items aren't arranged logically or consistently.

Essentially, the GIMP's operational ergonomics are terrible and there's been precious little effort from GIMP's developers to correct it. (GIMP's so tedious to use I still use my ancient copy of Photoshop for most of the work, I only then use the GIMP to do some special function that's not in Photoshop.)

[1] The trouble is most programmers program for themselves—not end users, so they don't see any reason to think like end users do. (I said almost the same thing several days ago in my response to Microsoft's enforcing single spaces between sentences in MS Office https://news.ycombinator.com/item?id=22858129 .) It doesn't seem to matter whether it's commercial software such as Microsoft's Office, or open software such as the GIMP or LibreOffice, etc., they do things their way, not the way users want or are already familiar with.

Commercial software is often tempered by commercial reality (keeping compatibility etc.) but even then that's not always so (take Windows Metro UI or Windows 10 for instance, any reasonable user would have to agree they're first-class stuff-ups). That said, GIMP is about the worst out there.

"At this point it feels like a prank that's been going on for a quarter century."

Right again! GIMP developers seem not only to be hostile towards ordinary users but there's been a long-standing bloody-mindedness among them that's persisted for decades; effectively it is to not even consider ordinary users within their schema. Nothing says this better than the lack of information about future versions, milestones, etc. All we ever get are vague comments that don't change much from one decade to the next.

Perhaps it would be best for all concerned if GIMP's developers actually embedded this message in its installation program:

"GIMP is our play toy—it's prototype software for our own use and experimenting—it's NOT for normal user use. You may use it as is but please do not expect it to work like other imaging software and do not bother us with feedback for we'll just ignore you. You have been warned".


I've only hacked on GIMP a small bit, so I'm by no means an authority, but the sad truth is that GIMP is an extremely small project driven mostly by volunteers. There is a desire to correct these issues but there are many, many other issues to prioritize them against. It's been a very infrequent occurrence for them to have the resources to work with UI/UX designers. I'm not trying to dismiss your complaints, but I think you would see some better results if you didn't wait for someone else to fix it for you. My only suggestion is that GIMP actually has very complete Python/Scheme scripting interfaces that can be used to make a lot of little UI tweaks, although the APIs are not well-documented.
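For the curious, here is a minimal sketch of what such a tweak could look like in GIMP 2.x Python-Fu. The procedure name, the menu location and the idea of approximating the removed Fade by lowering the active layer's opacity are my own illustrative assumptions, not anything the GIMP project recommends:

    #!/usr/bin/env python
    # Minimal GIMP 2.x Python-Fu sketch: add a custom menu entry that crudely
    # approximates the removed Fade by lowering the active layer's opacity.
    # Procedure name, menu location and behaviour are illustrative assumptions.
    from gimpfu import *

    def fade_layer(image, drawable, opacity):
        layer = image.active_layer
        layer.opacity = float(opacity)   # 0-100, like the layer opacity slider
        gimp.displays_flush()            # refresh the canvas

    register(
        "python-fu-fade-layer",                    # hypothetical procedure name
        "Fade the active layer",
        "Approximate Fade by lowering the active layer's opacity",
        "example", "example", "2020",
        "<Image>/Filters/Custom/Fade Layer...",    # hypothetical menu path
        "*",                                       # any image type
        [(PF_SLIDER, "opacity", "Opacity", 50, (0, 100, 1))],
        [],
        fade_layer)

    main()

Dropped into the user's plug-ins directory and made executable, something like this shows up as an ordinary menu entry, which is roughly the scale of "little UI tweak" the scripting interface allows without touching GIMP's C code.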

>they do things their way, not the way users want or are already familiar with

In my experience, a program that has never had a feature removed (or unintentionally broken) is an exception, not the rule. It takes a lot of effort to keep things working over the years, and if there is no will to maintain that, then those things will disappear.


If you read my post underneath then you'll appreciate I'm quite in sympathy with your views. What I said above I did with my imaging hat on.

Users show precious little allegiance to any app when it balks them or they cannot find an easy way to do what they want (run a help desk for a week and you'll get that message loud and clear).


it’s almost as if good design is not free


Tragically, with few exceptions, the evidence confirms that's true. I say this with great reluctance as I'm a diehard open/free software advocate (ideologically, I'd almost align myself with RMS [Stallman] on such matters, that's pretty hard-line).

As I see it, there are great swathes of poor and substandard software on the market that shouldn't be there except for the fact that there's either no suitable alternative, or if reasonably good alternatives do exist then they're just too expensive for ordinary people to use (i.e.: such software isn't in widespread use). I base this (a) on my own long experience where I encounter serious bugs and limitations in both commercial and open source software as day-to-day occurrences; and (b), data I've gathered from a multitude of other reports of users' similar experiences.

(Note: I cannot possibly cover this huge subject or do it reasonable justice here as just too involved, even if I gave a précised list of headings/topics it wouldn't be very helpful so I can only make a few general points.)

1. The software profession has been in a chronic crisis for decades. This isn't just my opinion; many consider it fact. For starters, I'd suggest you read the report in the September 1994 edition of Scientific American titled Software's Chronic Crisis, Wayt Gibbs, pp 86-95: https://www.researchgate.net/publication/247573088_Software'... [PDF]. (If this link doesn't work, then a search will find many more references to it.)

1.1 In short, this article is now nearly 26 years old but it's still essentially the quintessential summary on problems with software and the software industry generally (right, not much has changed in the high-level sense since then, that's the relevant point here). In essence, it says or strongly implies:

(a) 'Software engineering' really isn't yet a true engineering profession such as chemical, civil and electrical engineering are and the reasons for this are:

(b) As a profession, 'Software engineering' is immature; it 'equates' [my word] to where the chemical profession was ca 1800 [article's ref.] (unlike most other engineering professions, at best it's only existed about a third to a quarter the time of the others).

(c) As such, it hasn't yet developed mandatory standards and consistent procedures and methodologies for doing things—even basic things, which by now, ought to be procedural. For instance, all qualified civil engineers would be able to calculate/analyze static loadings on trusses and specify the correct grades of steel, etc. for any given job or circumstance. Essentially, such calculations would be consistent across the profession due to a multitude of country and international legally-mandated standards which, to ensure safety, are enforceable at law. Such standards have been in place for many decades. Whilst the 'Software Profession' does have standards, essentially none are legally enforceable. Can you imagine Microsoft being fined for, say, not following the W3C HTML standard in Windows/Internet Explorer to the letter? Right, in this regard, software standards and regulations are an almighty joke!

(d) Unlike other engineering professions, software engineers aren't required by law to be qualified to a certain educational standard [that their employers may require it is irrelevant], nor are they actually licensed to practice as such. When 'Software engineering' eventually becomes a true profession then these requirements will almost certainly be prerequisites for all practitioners.

(e) With no agreed work procedures or mandated work methodologies across the profession, 'software engineers' are essentially 'undisciplined'. As such, the SciAm article posits that software programmers work more in the manner of artists than in that of professional engineers.

(As a person who has worked in both IT/software and in an engineering profession for years, I have to agree with Wayt Gibbs' assessment. There are practices that are generally acceptable in software engineering which, if I attempted to carry them over to an equivalent circumstance with my engineering hat on, would likely land me in court, even if no one was killed or injured by what I'd done. Here, the rules, the structure—the whole ethos is different, and both ethics and law play much stronger roles than they do in software-land.)

2. You may well argue that even though Computing Science is not as old as the other engineering professions, it, nevertheless, is based on solid mathematics and engineering foundations. I fully agree with this statement. However, without enforceable standards and licensed/qualified software practitioners, the industry is nothing other than just 'Wild West' engineering—as we've already seen, in software just about anything goes—thus the quality or standard of software at best is only that of the programmer or his/her employer.

3. As a result, the quality of product across the industry is hugely variable. For example take bloatware: compare the biggest bloatware O/S program ever written, MS Windows, with that of tiny, fast and highly efficient Kolibrios OS, built on Assembler https://kolibrios.org/en/ (here I'm referring to methodology rather than functions — we can debate this later).

4. The commercial software industry hides behind the fact that its software is compiled, thus its source code is hidden from public view and external scrutiny. Its argument is that this is necessary to protect its so-called intellectual property. Others would argue that in the past loss of IP was never really the main issue, as manufacturing processes were essentially open—even up until very recent times. Sure, it could be argued that some manufacturing had secrets [such as Coca Cola's formula, which really is only secret from the public, not its competitors], but rather, industrial secrets are normally concerned with (and applied to) the actual manufacturing process rather than the content or parts of the finished product. That's why up until the very recent past most manufacturers were only too happy to provide users with detailed handbooks and schematics; for protection from copying they always relied on copyright and patent law (and for many, many decades this protection process worked just fine). It's a farce to say that commercial 'open source' isn't viable if it's open. Tragically, this is one of the biggest con jobs the software industry has gotten away with—it's conned millions into believing this nonsense. More likely, the true reason is that the industry couldn't believe its luck when it found that compilers hid code well — a fact that it then used opportunistically to its advantage. (Likely the only real damage that would be done by opening its source is the embarrassment it'd suffer when others saw the terrible standard of its lousy, buggy code.)

4.1 'Software engineering' won't become a true profession until this 'hiding under compilation' nexus is broken. There are too many things that can go wrong with closed software—at one end we've unintentional bugs that cannot be checked by third parties, at the other we've security, spyware and privacy issues that can be and which are regularly abused; and there's also the easy possibility of major corruption—for instance, the Volkswagen scandal.

5. Back to your comment about 'good design not being free'. I'm very cognizant of the major resource problems that free and open source software developers face. That said, we shouldn't pretend that they don't exist, nor should we deliberately hide them. I accept that what we do about it is an extremely difficult problem to solve. My own suggestion to up the standard of open software is a sort of halfway house where cooperatives of programmers would be paid reasonably acceptable remuneration for their contribution to these major open projects. In turn, there would be a small nominal fee (say $5 to $20) levied on large scale open software programs such as GIMP, LibreOffice, ReactOS etc. to ensure that development could go ahead at a reasonable pace (the projects otherwise would be revenue neutral—there would be no profits given to third parties).

Let me finish by saying that whilst commercial software has the edge over much free/open software (for example MS Office still has the edge over LibreOffice), that edge is small and I believe the latter can catch up if the 'funding/resource' paradigm is changed just marginally. Much commercial software such as MS Office is really in a horrible bloated spaghetti-code-like mess and with better funding it wouldn't take a huge effort for dedicated open software programmers to beat their sloppy secretive counterparts at their own game. After all, for many commercial programmers, programming is just a job; on the other hand, open software aficionados are usually doing it for the love of it—and that's a true strategic advantage.

I firmly believe that for open software to really take off it has to be as good as and preferably better than its commercial equivalent. Moreover, I believe this is both possible and necessary. We never want a repeat of what happened in Munich where Microsoft was able to oust Linux and LibreOffice. With Munich, had it been possible to actually demonstrate that the open code was substantially and technically superior to that of Microsoft's products, then in any ensuing legal battle Microsoft would have had to lose. Unfortunately that was not possible, so the political decision held.

One thing is for certain, we urgently need to raise the standard of software generally and it seems highly unlikely that we can do so with the way the industry is currently structured.


> When 'Software engineering' eventually becomes a true profession then these requirements will almost certainly be prerequisites for all practitioners.

This wouldn't work. Most software isn't life-and-death. That's a big difference from bridge engineering, nuclear engineering, and aeronautical engineering.

If you're hiring someone to do Python scripting, there's little point insisting they have a grounding in formal methods and critical-systems software development. You could hire a formal methods PhD for the job, but what's the point? The barrier-to-entry is low for software work. Overall this is probably a good thing. Perhaps more software should be regulated the way avionics software is, but this approach certainly can't be applied to all software work.

If your country insisted you become a chartered software engineer before you could even build a blog, your country would simply be removing itself from the global software-development marketplace.

> compare the biggest bloatware O/S program ever written, MS Windows, with that of tiny, fast and highly efficient Kolibrios OS

I broadly agree, but in defence of Windows, Kolibri is doing only a fraction of what Windows does. Does Kolibri even implement ASLR? One can build a bare-bones web-server in a few lines, but that doesn't put Apache out of a job.

> My own suggestion to up the standard of open software is a sort of halfway house where cooperatives of programmers would be paid reasonably acceptable remuneration for their contribution to these major open projects. In turn, there would be a small nominal fee (say $5 to $20) levied on large scale open software programs such as GIMP

This doesn't work. It means a company can't adopt the software at scale without implementing licence-tracking, which is just the kind of hassle Free and Open Source software avoids. If I can't fork the software without payment or uncertainty, it's not Free even in the loosest possible sense.

The way things are currently is far from ideal, but we still have excellent Free and Open Source software like the Linux kernel and PostgreSQL.

> open software aficionados are usually doing it for the love of it—and that's a true strategic advantage.

Agree that this can be an advantage. Some FOSS projects are known for their focus on technical excellence. That said, the same can be said of some commercial software companies, like iD Software.

> One thing is for certain, we urgently need to raise the standard of software generally and it seems highly unlikely that we can do so with the way the industry is currently structured.

Software firms today are doing a good job of making money. If the market rewards regressions in UI design, and using 50x the memory you really need (thanks Electron), what good would it do to regulate things?

Apparently most people don't care about bloat, and they prefer a pretty UI over a good one. That doesn't strike me as the sort of thing you can tackle with regulation.


Krita matured nicely over the years and last time I found it quite easy to use.

UI is hard. It got replaced by "UX", but nobody agrees what that really is. So it boils down to whatever impracticality designers dream up. When UI was easy, there was real research, data backing up claims of improvement, and laid-down rules to enforce some consistency. This became "unfashionable" and was removed.


It was a hard, structured science: Hick's law, conservation of complexity, GOMS analysis, Fitts's law ... we've tossed these decades of hard work in the garbage can because somebody in marketing didn't like the colors.

It was like during the VCR wars of the '80s, when consumers wanted the most features but the fewest buttons. Then they complained about how you had to basically play Rachmaninoff on their sleek minimal interface to set the clock.

We need to be like other industries; "that's too bad". Seatbelts are inconvenient? "that's too bad". You don't want to stay home during a pandemic because the weather's nice? "that's too bad" ... you want a bunch of incompatible UX goals that leads to trash? "That's too bad".

Sometimes the opinion of an uninformed public shouldn't matter. We don't go to a doctor and pass around ballots to the other people in the waiting room to democratically decide on a diagnosis. Knowing what to not listen to is important.


The uninformed public is pretty vocal about what is going on in most apps.

The UX propellerheads come back with statistics from user telemetry that always agree with them.

UX is the problem — designing “experiences” geared around an 80/20 approach is substituted for the harder task of building tools that work.


Having rarely seen a VCR that wasn't flashing "12:00", I came to the conclusion that a clock was simply feature bloat.


One of the arguments given in the Supreme Court by Mr. Rogers to permit the VCR to be legally owned (I know how crazy this sounds, but it's real) was to time-shift programming ... which requires a functioning clock.

Fred Rogers, 1984: "I have always felt that with the advent of all of this new technology that allows people to tape the 'Neighborhood' off-the-air ... they then become much more active in the programming of their family’s television life. Very frankly, I am opposed to people being programmed by others. My whole approach in broadcasting has always been ‘You are an important person just the way you are. You can make healthy decisions’ ... I just feel that anything that allows a person to be more active in the control of his or her life, in a healthy way, is important."

see https://en.wikipedia.org/wiki/Sony_Corp._of_America_v._Unive....

There's definitely non-crazy ways of doing this ... but it requires what at first blush, would appear to be a complicated interface.


Just when I thought I couldn’t be thankful enough to Fred Rogers for saving public television in America as he transformed what it could be, now I find out that he’s also pivotal in standing up for fair use rights and inadvertently supported the analog hole before the digital one was even invented to be closed. He truly was a person who stood up for his beliefs on behalf of the fellow person and an example of goodness in the world without pandering or compromise.

https://en.wikipedia.org/wiki/Analog_hole


"https://en.wikipedia.org/wiki/Sony_Corp._of_America_v._Unive.... "

Ha, reading that link now one feels a delicious sense of irony. Imagine how Sony would react today seeing that it has become one of the biggest purveyors of video/movie content. ;-)


Time shifting doesn't require a clock, if you are home to start recording, but a clock helps with programmatic recording.


That's an extreme nitpick. The whole argument for time shifting is to be able to record something while you're not at home.


Let’s hear them out in the spirit of debate. I’m curious how this hypothetical VCR is programmed, what the remote and interface might look like. I might even like it, or at least want parts of it as concepts to integrate with other things that already exist. Could shake loose some ideas.

Honestly I don’t know why VCRs are so hard to program but all of the buttons can’t help. I might be getting old but the Roku remote seems about right as far as complexity in the device goes and I can see how a nice interface with relative timekeeping could do what you need without a clock per se. Inertial guidance for timekeeping? A self winding DVR?


I remember what setting the time on a VCR was like and it's interesting to think of all the assumed knowledge you actually need in order to have it seem intuitive.

Two things off the top of my head: 1) knowing that a blinking number is indicating some kind of selection and, more generally, 2) seeing the UI as a glimpse into a larger abstract space that can be navigated. Or in other words, having used computers for many years, what my parents saw as just a blinking word, I would see as a menu where up/down/left/right had meaning.

There's also some more abstract thinking involved there - for me it's very spatial so I think of it as being able to keep track of your place in this 'abstract map'. You had to learn some non-obvious things like "if the down button stops working, it probably means I'm at the 'bottom' of my available choices" or "if I start seeing the same choices again, it means I have 'wrapped around' and in a logical sense I'm back to where I've been before".

I actually remember thinking something like this as a child when we got a VCR. I think I remember that realization that "this is a menu I can explore". The exploratory skills you pick up when you have to figure out how to use something technical generalize really well to other technical things.

TL;DR: I think VCRs were hard to program because the limited UI of buttons and a tiny screen meant that you actually needed a fairly built-up mental model of the process to keep track of what you were doing.


I really like how you brought to the fore this concept of intuition as it relates to UI/UX in technology products. There’s a certain cachet in being able to operate technical devices. There’s similar social capital to be gained in creating useful results using technology. If only the embedded intuition of operating the device worked with the goal of creating useful results with the device.

The biggest “what were they thinking” part for me is why they cram a whole GUI with config options and menus into a clock when almost every use case for a VCR is already connected to a perfectly workable display which is much better suited to a GUI in the form of the TV. Later VCRs had onscreen rather than on-device GUIs but by then institutional momentum was too far along to redesign the remote when they moved the GUI out of the device and onscreen. Truly a missed opportunity.

I don’t know anyone involved in any VCR product. If I did I’d be asking them a lot of questions. But I have a hard time thinking they meant to make it so hard. They probably were clapping each other on the back and congratulating each other. They were inventing future ways of using content and for that they deserve praise. They just sucked at understanding how hard it is for non experts to put themselves in the mind of experts, someone whose inner mental world has jarringly different contours and whose mental model of reality may have little to no correspondence whatsoever with their own.


This is a great observation! The blinking-indicates-editable-via-buttons mode is a mental model you either have or you do not. It is certainly not axiomatic and needs some experimentation to learn. Digital wristwatches with those standard three buttons also relied on this mental model.


Not really, that's more a cliche. I set the clock on mine to tape programming when I wasn't there, and if anything it was probably easier than setting clock radios and watches now.


I was a child during the heyday of VCR, but I don't think any of my family was aware of timed recording. The whole concept just didn't exist in our lives until Tivo. Non-obvious features plus buying things second hand just meant you never learned everything your stuff did back before you could find manuals online.


My family could afford most things new, we had the manuals. I think my parents read them too, as they knew how to use the timed recording feature. My grandma knew how to use hers!

Many people would keep the manuals near the TV, so they could remind themselves how to use the rarely used features.

The Panasonic VCR we had included a barcode reader in the remote. The printed TV guide has barcodes for each program. This interface was very easy to use -- scan, and press "transmit".

Edit to add a link to an advert: https://www.youtube.com/watch?v=tSGUbE1v5RA -- the sheet of timecodes was necessary if you didn't have a TV guide with them printed, as shown here: http://www.champagnecomedy.com/panasonic-barcodes-saving-you...


"Non-obvious features plus buying things second hand just meant you never learned everything your stuff did back before you could find manuals online."

That's the outrageous point. You shouldn't need manuals for operation of ordinary domestic appliances! If you do then you automatically know its design is substandard!

(The only reason you should need a manual is for some info or maintenance function that's not normally associated with its user-operated functions.)


Anecdote: a little while after the start of this covid thing, I broke the band on my wristwatch. My first impulse was to run to the store, or order a new one from Amazon. Then I realized: I'm working from home, there are clocks all around me, and I don't need to be anywhere at any particular time (and my computer warns me about upcoming Zoom meetings).

So now my wristwatch is sitting on the desk.


I thought of programming the Betamax (VHS rival) every time I scrolled through the interface on 1990s/2000s printers. I built the nested file structure in my head rather than reading it on the screen. It has made navigating the digital world so much more natural (for me).

*The hard part of programming the Beta (and early VHS) for me was getting the family to leave the tuner on the channel I/we wanted to record.


Speaking of feature bloat...

I hate that ovens and microwaves have clocks on them. I don't need two devices in my kitchen to tell time. It's ridiculous since they're usually next to each other, and most of the time have different displays. Just because there is an LCD/whatever doesn't mean it always has to display something!

At least after the latest power outage, my microwave stopped showing the time. The oven still flashed, so I set that time and only have one clock in my kitchen now.

Even my vehicle has two clocks in it, one on the instrument cluster and one on the infotainment system. So stupid!!!


> The oven still flashed,

What's even more crazy, increasingly often I've started to encounter ovens that don't work until you set the clock. I.e. if the clock was reset and is blinking, the heater won't turn on. Took me a while to figure it out the first time I saw it.


A lot of ovens have a delayed bake feature that uses the time. I’ve never seen a microwave with that feature, though, and it’s also the less essential device.


If they couldn't make it easier to set I think the clock should have been less prominent. It's necessary if you're doing a scheduled recording.

It's too bad time sync over power lines didn't catch on widely (or broadcast over the radio). It would still be saving everyone from changing their digital clocks during DST.


They tried compromises like VCR Plus+[0]. It was basically a 6 digit code that would be printed next to the show name in places like TV Guide. You would enter the code into your VCR instead of a time, and it would figure out how to record it. I think it still required a working clock, though.

[0] https://en.wikipedia.org/wiki/Video_recorder_scheduling_code


Do you not have a radio clock?

They're common in Europe, on a midrange bedside clock for example, and typical office/school clocks.

I remember we were foiled by one at school, when someone set the clock 15 minutes forward when the teacher wasn't looking. The hands could only move forward, so a few minutes later they started spinning 11½ hours further forward to set the correct time.

https://en.wikipedia.org/wiki/Radio_clock


Interesting. I think I've seen those things, but I've never bought one. I was expecting this tech would be built into microwaves, ovens, and cars by now.


Time over analog TV signals was supported in EIA-608, the standard for sending closed captions and other data in one line of the vertical retrace interval. PBS stations used to send it. Few things used that data.

In the 1990s I encountered a hotel TV with that feature. It had a built-in clock with hands (not on screen), which was also the alarm clock for the room. No one had set it up, and I spent about ten minutes with the remote getting it to find a station with time info and set the clock. Then the "alarm set" function on the remote would work and I could set my wake-up time.


Time codes inside of analog terrestrial NTSC sounds really easy and obvious.

Given that nobody did it, it would appear that even though legally people like Mr. Rogers were making the case for time-shift programming, the industry must have assumed it was a minor use case.


The main reason I mentioned it is that I know I've seen various implementations--they're just not widely adopted. I guess nobody has the business interest to make it all work?

https://en.wikipedia.org/wiki/Extended_Data_Services (NTSC) looks like a 2008 standard and most PBS stations provide "autoclock" time data

https://en.wikipedia.org/wiki/Radio_Data_System (FM radio) I figured this had an implementation considering text has been around for years. Amazingly, I don't think I've ever seen a car stereo use it to set the time!

https://en.wikipedia.org/wiki/Broadband_over_power_lines I know this has been around but has had a lot of hurdles. I figured the current time might be a simpler thing.

The only reliable time-setting tech I've seen integrated is GPS--I'm not 100% sure how time zones work with it, but it does know your location.


https://en.wikipedia.org/wiki/Extended_Data_Services

Autoclock setting was done for VCRs. It just happened much later than the case in question.


> the industry must have assumed it was a minor use case.

You mean the same industry that was trying to make time-shifting (and VCRs in general) illegal?


The problem was everyone did it once, and then lost power at some point, and it went into "minor task not ever important enough to be worth taking the time" territory.

If they'd included a backup battery to retain the clock, I suspect it'd have been less of a thing.


In the days before power strips were ubiquitous, my VCR got unplugged whenever I played the Sega. There was no way anyone was setting the clock daily.


I still think that's part of the bad UI =) That setup was bad unless it included a battery or the time was likely to set itself.


> If they couldn't make it easier to set I think the clock should have been less prominent. It's necessary if you're doing a scheduled recording.

On the contrary, the clock needs to be super obvious precisely because it's a pain to set. Otherwise you wouldn't notice until your recordings were messed up.


I think context is key. It's only necessary if you have a scheduled recording. So it should only be obvious if you're setting up a scheduled recording or have one queued up. In those cases it should force you to set the time, or alert you in an obvious manner that the time is not set.


"Sometimes the opinion of an uninformed public shouldn't matter."

Correct, that's the 2000+ year old axiom of ignoring the lowest common denominator and seeking the best advice available.

That said, if you're designing software for use by users who are 'lowest common denominator' then, a priori, you have to make it to their measure. If they cannot understand what you've done then you've wasted your time.


Hamburger menus are symptomatic of this for me. I spent /way/ too long not understanding this completely new element everybody was suddenly jamming in everywhere.


Agreed, 1000%. I have the traditional Firefox menu bar turned on, but I can't get rid of the lousy hamburger menu. Plus I have ten icons up there, most of which I have no idea what they do. I should probably get rid of them. (When was the last time you used the "Home" icon in a web browser? What is "home", anyway?)

(I just now cleaned it up, although there are some icons you can't get rid of.)


My impression is that modern UX is data-driven alright, it just follows radically different paradigms and goals.

It's not at all anymore about presenting consistent mental models, it's solely about the ease or difficulty with which particular isolated tasks can be performed.

It's also not automatically the goal to make all tasks as easy as possible. Instead, discoverability and "friction" are often deliberately tuned to optimize some higher-level goals, such as retention or conversion rates.

This is why we have dialogs where the highlighted default choice is neither the safe one nor the one expected by the user, but instead the one the company would like the user to take. (E.g. "select all" buttons in GDPR prompts or "go back" buttons if I want to cancel a subscription.)

You can see that quite often in browsers as well, often even with good intentions: Chrome, for a time, still used to allow installing unsigned extensions but made the process deliberately obscure, and in both Chrome and Firefox, options are often deliberately placed in easy- or hard-to-discover locations. (E.g. a toggle on the browser chrome, vs the "settings" screen, vs "about:config", vs group policies)


Data driven ux seems to put all users in a single bucket.

I will readily admit that in collective number of clicks and screen time, 37-year-old men with advanced degrees in computer science are a super small minority.

But who is the majority then? Who spends the most time on say Reddit and YouTube? Children! Yes, people who we know are dramatically cognitively different than adults.

Why does YouTube keep recommending videos I've watched? That's what a child wants! Why does Reddit's redesign look like Nickelodeon?

There isn't one user and one interface that's right for everyone when we're talking about 5 year olds, 50 year olds, and 95 year olds.

We can make interfaces adapt to the screen; we should also do the work to make them adapt, at fundamental interaction levels, to the person using the screen.

And not in a clever way, but in a dumb one.

For instance, here's how you could ask YouTube: "We have a few interfaces. Please tell us what you like to watch:

* Cartoons and video games

* Lectures and tutorials

* Other "

And that's it. No more "learning", that's all you need to set the interface and algorithms.

Let's take Wikipedia, it could be broken up into children, public, and scholar. Some articles I'm sure are correct but are way too wonky and academic for me to understand and that's ok. There's nothing to fix, I'm sure it's a great tool for professionals. However, there should be a general public version.


> Let's take Wikipedia, it could be broken up into children, public, and scholar.

"Simple English" does a pretty good job. Obviously it's a mix of children/public but for science/mathematical topics where I'm looking just to verify my basic understanding of something, swapping over to Simple English usually gives me what I was looking for if the main article is immediately going down into technical rabbit holes.


> here's how you could ask YouTube: "We have a few interfaces. Please tell us what you like to watch: [...]

This proposal quickly falls apart because your categories are ill-defined based on your preconceptions. I watch a ton of lectures about video games on Youtube (e.g. speed run breakdowns or game lore theories). Do I choose the "Cartoons and video games" bucket or the "Lectures and tutorials" bucket?


yeah it was off the cuff. If you ask a 9 year old online if they're an adult, some will say "yes". I mean I guess it's their loss. Maybe a more direct approach is better.

"We've found adults and teens like different parts of youtube and use it differently. We want to make it the best for you. You can switch at any time, but tell us what best describes you:

* I'm an adult

* I'm not an adult.

"

YouTube has this "for kids" app which came out after I first started pointing out this difference in earnest around 2013 (https://play.google.com/store/apps/details?id=com.google.and...), but it's not right, and they clearly still cater their main interface to the habits of children who watch the same video hundreds of times - the insane repetition is part of learning nuance and subtlety in the context of content they don't have to actually pay attention to. It's all about learning the meta, super important. They know what happens; it's the silence in between they're excited about - that's the nature of play.

This app instead silos the kids into a playskool interface, great for people under 7 or so, but like our playground reform, we've made it completely unappealing for the 8-22 or so demographic (when I was a kid and there were ziplines into a bank of tires, you bet there were 20 year olds lining up to have a good time on those, we all have a need for play; freedom to err wrapped in relative safety).

Instead, it's data-driven UX for adults and data-driven UX for children - it's about separating the data, not a PTA-acceptable UX for overprotective parents.


The best thing a parent could do is download a set of approved videos and use a local playlist.

The easiest thing to do is just allow them on youtube no filter.

The middle ground is the play app. Weird stuff sometimes gets through, but usually it's more someone dressed as a pretend princess. The good thing is it's never really a murder scene or something equally horrible (which could pop up on youtube.com).

What would you do as a parent?

I would avoid YouTube, unless you set up the videos yourself, until age 7 or 11. After that it depends on the child.


The one big thing "For Kids" has going for it is the pro-active identity. Rather than feeling like they are missing out by not being an adult, they instead feel like they're picking the thing that's special for them.


> Let's take Wikipedia, it could be broken up into children, public, and scholar. Some articles I'm sure are correct but are way too wonky and academic for me to understand and that's ok. There's nothing to fix, I'm sure it's a great tool for professionals. However, there should be a general public version.

It kinda has this for specific subjects:

https://en.wikipedia.org/wiki/Introduction_to_quantum_mechan...

https://en.wikipedia.org/wiki/Category:Introductory_articles



options are often deliberately placed into easy or hard to discover locations. (E.g. a toggle on the browser chrome, vs the "settings" screen, vs "about:config", vs group policies)

Yes, and Mozilla has become much worse about this. Turning off "Pocket Integration", or "Shared Bookmarks", or "Mozilla Telemetry", or "Auto update" becomes harder in each release.


I mean, at least for the "go back" case, it seems like good sense for any non-reversible action (delete, overwrite, buy, send, etc.) to highlight the option that ensures people who are just mashing their way through prompts without looking at what's going on won't screw themselves over by doing something irrevocable they didn't mean to do.

Native macOS apps get to be a bit clever for this, in that there are two kinds of button-highlight state per dialog (the "default action" button, which is filled with the OS accent color; and, separately, the button the tab-selection starts off highlighting, which has an outline ring around it). This means that there are two keys you can mash, for different results: mashing Enter presses the default-action (i.e. colored) button -- which Apple HIG suggests be the "Confirm" option for dangerous cases; while mashing Space selects the initially-selected (i.e. outlined) button -- which Apple HIG suggests be the "Cancel" option for dangerous cases. I believe that, in cases where the action isn't irrevocable, Apple HIG suggests that the default-action and initially-selected attributes be placed on the same button, so that either mash sequence will activate the button.

I really wish that kind of thinking was put into other systems.
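
To make the two-key idea concrete, here is a minimal sketch of the same pattern, written in Python/tkinter rather than Apple's actual AppKit API (the dialog title, button labels and handlers are invented for illustration): Return always fires the dialog's default, destructive action, while Space activates whichever button has keyboard focus, which starts out on the safe choice.

    # Sketch of "Enter = default action, Space = initially focused button".
    # Not Apple's API -- just the same idea expressed with tkinter.
    import tkinter as tk
    from tkinter import ttk

    def confirm():
        print("Deleted (default action: Return or click)")
        root.destroy()

    def cancel():
        print("Cancelled (initially focused: Space or click)")
        root.destroy()

    root = tk.Tk()
    root.title("Delete 3 files?")

    delete_btn = ttk.Button(root, text="Delete", command=confirm)
    cancel_btn = ttk.Button(root, text="Cancel", command=cancel)
    delete_btn.pack(side="right", padx=8, pady=8)
    cancel_btn.pack(side="right", pady=8)

    # Return triggers the default action no matter where focus is...
    root.bind("<Return>", lambda e: confirm())
    # ...while Space activates the focused ttk button (a built-in binding),
    # and focus starts on the safe choice.
    cancel_btn.focus_set()

    root.mainloop()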


This distinction is used by all CUA-derived GUI toolkits. Unfortunately, by default Windows uses the same outline style for both default and focused buttons, so there is no visual distinction. (There is an alternative button style on Windows that distinguishes between these two states, but it tends to be used to mark buttons as two-state, and anyway it looks distinctly ugly and non-native.)


Windows distinguishes default and focused, it's just a bit subtle. A button in focus has a dotted rectangle around the contents (immediately adjacent to the actual border, which is why it's kinda hard to see). A button that's the default has a thick blue outer border in Win10, and used to have a black border in the classic Win9x theme.

What is different in Win32, however, is that if any button is focused, it is also made the default for as long as focus is on it (or, alternatively - Enter always activates the focused button). Thus, there's no visual state for "focused, not default", because there's no such thing.

The distinction still matters, though, because if you tab away from a button to some other widget that's not a button, the "default" button marker returns back to where it originally was - focus only overrides it temporarily.

This can be conveniently explored in the standard Win32 print dialog (opened e.g. from Notepad), since it has plenty of buttons and other things on it. Just tab through the whole thing once.


And even that concept (the "defaultness" of a button) is IMO wrong: it introduces "modes" of operation -- you have to look to see what the default is before you can press Enter. Which also means that you can't have any general expectation of what the Enter key will do at a given moment.

There were computer keyboards which had a distinction between the key that finished entering a field and the key that, for example, performed the action behind the whole dialog. Just as today it is common to expect that Esc will cancel the dialog (or the entry form), there was a key that one knew would "proceed" (GO) regardless of which field the cursor was in. On those operating systems Enter always did just the non-surprising "end the entering of the current input field, skip to the next", and GO signaled the end of that process and the wish to use everything that had been entered up to that point. It's particularly convenient when entering a lot of numerical data on the numeric keypad, where Enter also just moves to the next field.

I think that concept was right, and better than what we have today. Entering what are basically "forms" in any order (filling the dialogs) and proceeding from any point is a basic task and could have remained less surprising.
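
For what it's worth, the split is easy to play with today; here is a rough sketch (plain Python/tkinter, not any particular historical system, with F2 standing in for the old "GO" key): Return only finishes the current field and moves to the next one, while "GO" submits the whole form from wherever the cursor happens to be.

    # Sketch of the Enter-vs-GO idea: Return = "done with this field",
    # F2 = "proceed with everything entered so far" (stand-in for GO).
    import tkinter as tk
    from tkinter import ttk

    root = tk.Tk()
    root.title("Order form")

    fields = {}
    for row, name in enumerate(("Name", "Quantity", "Price")):
        ttk.Label(root, text=name).grid(row=row, column=0, sticky="w", padx=4)
        entry = ttk.Entry(root)
        entry.grid(row=row, column=1, padx=4, pady=2)
        fields[name] = entry

    def next_field(event):
        event.widget.tk_focusNext().focus_set()  # Return just moves on
        return "break"

    def submit(event=None):
        print({name: e.get() for name, e in fields.items()})
        root.destroy()

    for entry in fields.values():
        entry.bind("<Return>", next_field)
    root.bind("<F2>", submit)  # "GO" works from any field

    fields["Name"].focus_set()
    root.mainloop()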


> It's not at all anymore about presenting consistent mental models, it's solely about the ease or difficulty with which particular isolated tasks can be performed.

IOW, following metrics and optimising for local maxima instead of looking at the big picture in a non-zero-sum game. Each task is made easier by itself but in doing so creates a model in conflict with everything else, making everyone miserable. Nash would be sad.


Correct. I remember traditional UI, watching users do things behind 1-way mirrors and grinding irritating and inconclusive statistics to try to get to a better interface. This used to be a speciality.

Now all you have to do is stick a bone through your beard and pronounce yourself a "UX Guru" and off you go.


UX to me is finding a compromise between designers who want it as sparse as possible and users who want "homer cars".


Gimp is probably the only software I use somewhat regularly in which I can absolutely never figure out how to do something new, and if I don't do something for more than a month, I have to google again for how to do it. The level of stupidity in its design surely deserves an award for unusability.


GIMP has had so much powerful functionality while at the same time having an absolutely awful UI for going on 15 years or more now.

And it still hasn't been fixed.

I'm not a big believer in conspiracies, but if there's one I'd not dismiss out of hand, it's that Adobe or some other company has been ensuring that GIMP has never been improved or become a viable replacement for some PS users.

There is obviously a large potential market for a lower cost option for light users of Photoshop who don't want a monthly subscription to Creative Cloud.

Maybe they secretly paid off open source devs to obfuscate the code so much that any potential volunteers would have too much trouble finding a way to re-architect the UI without years of unpaid work.

When I see so many great improvements to complex software released to the community on GitHub, along with the potential for some startup to fork GIMP, fix its UI, and charge some sort of support fee like a lot of companies do with OSS, I just find it very strange that GIMP's UI is still in such bad shape, after two decades of constant complaining by users.

It wouldn't surprise me if Microsoft did or does something similar with the OpenOffice code base. So many compatibility and usability problems just seem to languish for decades, while you'd think some company could find a way to make money fixing some of the biggest issues that keep away light users of Office 365 who don't want to pay for subscriptions.


Part of that is also down to people knowing that competitors exist, and to those competitors staying alive. For instance, the other day I downloaded from the Mac app store a fork of GIMP from way back called Seashore (that's since been updated and barely any GIMP code survives). I've not had much chance to use it yet but so far it's what it claims to be, simple to use, which is a breath of fresh air after using GIMP. But who knows about it? It's been around for years and I'd not heard about it.

I read an interview with the maintainer[1] and it sounds like he's put in a lot of work but as he says it's a "labour of love". I wish someone was paying him, even surreptitiously!

[1] https://libregraphicsworld.org/blog/entry/meet-seashore-free...


Good designers don't work for free. It's kind of strange that programmers have a culture of working for free (or at least of employers agreeing to open-source contributions), but we do, so gnarly algorithms get open-sourced all the time.


We developers write something for free because we need it. Even if we build something awful we still use it and maybe open source it. Then maybe we improve the UI but it's not our job so we're not good at it.

A designer that can't code will never start a software project so I guess that it's uncommon for them to get involved in one for free.

Then there are developers and designers involved in open source because their companies pay them for that. Gnome's designers are listed at https://wiki.gnome.org/Design#Team_Members

Two of them work at Red Hat, one at Purism, I didn't find any immediate affiliation for the other two.


In addition: If you know a tool well enough that you can design an intuitive UI for it, you don't need it.


> Good designers don't work for free.

Is there any company employing them? Because I find the user interfaces from the 80s and 90s, even the 00s, much more usable than today's crap. Remember the help button? Remember buttons? Why does Windows 10 look the same as, yet behave worse than, Windows 1.0?


I must be the rare person here who finds that Gimp becomes better and nicer with each release.

Yes, I had to explicitly set the way I want the icons to look in settings. It wasn't hard, and one of the bundled sets worked for me.

Maybe it's because I'm a long-time user and I know my way around, and where in the settings to look.

One of the problems of shipping UIs is setting good defaults. Maybe Gimp does not do a great job here; I should try a clean installation.


I'm dating myself, but I really liked late 90s GIMP, where almost everything was available via right click menu. GIMP was simpler then, though.


I thought you were exaggerating but holy moly it's 100% true.

What happened? search for images: "gimp 1.0" vs "gimp 2020". Wow.


Hah, I see that I'll be in for a surprise myself once I upgrade to Ubuntu 20.04.


You can change the UI skin in the options. I've been using GIMP for years and I don't have any major complaints.


This is about usability, so I don't think referring to a setting buried in the options (that you have to know about first) is a valid point.

> I've been using GIMP for years

I think usability to users experienced in the software and to new users are two different things. I believe an important part of usability is discoverability which is probably better judged by new users than by experienced users.


>You can change the UI skin in the options.

Holy cow! There's even the "classic" theme right there. Wish I knew this a year ago.


Yup. Edit -> Preferences -> UI -> Icons -> Legacy. Done.


sure, you can make the icon bar more sensible with some effort, but not as sensible as it was in 1998: https://scorpioncity.com/images/linux/shotgimp.png


Long time Adobe user. I tried Gimp. Shut it down, and merrily went back to my Adobe subscription plan.


I haven't used Gimp in perhaps a decade, but I can't imagine a worse UI than Adobe. At home, I have a free PDF reader, but at work (...when I was going to work...), I had to use Adobe's PDF editor. (Not that I edit the PDFs, I mostly just read them; occasionally a highlighter would be nice.) Ugly huge icons that I don't need and which take up lots of space. And next to nothing in the menu, except ways to turn on icon bars at the top and/or side, hopelessly emsmallening the page I actually want to read.


Long time emacs user. I tried vim. Shut it down, and merrily went back to my motor memory.


I have only used GIMP a handful of times in my life. I recently had to download it to do something beyond MS Paint's abilities. I had a hard time understanding why many things behaved the way they did. I don't remember it being this hard the last time I used it.


Whenever UI/UX is brought up, GIMP inevitably enters the discussion.

I've used it a number of times and do not find it any harder than any other piece of software — doing complex operations where you are not sure what you want to do (or especially, what it's called) is hard, but that's hard in an IDE as well.

I do not do much, but I do not do little with it either — I am perfectly happy with layers, selection tools, simple painting tools and the rudimentary colour correction I may want to do. And one can claim that the hamburger-menu-like approach started with Gimp, fwiw (right click on your image to get to a full menu, though you still had that menu at the top of your image window).

Two things have always been a requirement for proper Gimp use: "virtual desktops" — a dedicated one for Gimp — and IMO, "sloppy focus" (window under the pointer gets the focus right away), but I've been using those since at least 2001 or something when I first saw them on Sun workstations, so I probably never had trouble with extra clicks required to focus between toolbars and other dialogs.

For creating original artwork, I find any graphical approach too limiting — I _do_ want an easy approach of UIs, but I frequently think in terms of proportions and spatial relationships when drawing ("I want this to be roughly twice the size of this other thing and to the left") — I always try to imagine this combined tool that would fit my workflow, but then I remember that I am probably an outlier: I may have been spoiled having done mathematical drawings as a primary schooler in Metafont and later Metapost (for colour, or rather, grayscale :)), and being a developer since 7th grade, where it's hard for you to come to grips with how suboptimal doing precise drawings in any software is (I've done some minor uni work in AutoCAD too).


> Apparently almost everyone agrees

I very much do not.

Gimp used to be horrendous to use. It still has some usability issues, but it's become something I can use without risking my mental health.


The UI changes in Blender 2.80 were exactly to get rid of the awful non-standard non-discoverable UI that has plagued the program from the start. It actually fixes a whole ton of issues that the article complains about! For instance, mouse button assignments are now sane (select on left button, not right) and Ctrl+S brings up an actual save file dialog with pretty unsurprising layout and behaviour (instead of that abomination that replaced the application window contents when pressing F12 and required an enter keypress to save - there was no button). There are many, many more of these changes and they were absolutely necessary.

The unfortunate side effect of this is that grumpy old users that were trained to accept the previous highly idiosyncratic UI started to complain because they have to relearn stuff. But it's worth it. And it opens up blender for more users.


As I recall history, using icons was a main way the rest of the industry tried to copy the usability of the Mac UI, as it conquered a lot of mindshare in the 80s and 90s.

But the Mac almost never had just icons in the ui. There would usually be an icon and a text. With little space you'd revert to text only.

Apple had a team of usability experts. Others... did not. So they just copied something that looked cool and was easy to implement.

That it cut down on internationalization efforts surely didn't hurt either.


The old interfaces (I guess we can say that now, talking about a quarter century ago) tended to have several modes.

The menu bar was always just text.

The toolbars offered several options: large/medium/small/no icons, text/no text.

(Not all of those options were always available.)

This let you progress as a user of a system. When you first experienced it you could use the large icons with text because it made the things you were searching for stand out. As you learned the icons you could start shrinking them, and eventually remove the text. This opened up the toolbar to fit many more actions (often less frequently used). And the tool tip from hovering remained throughout, so in the worst case of an ambiguous (or unknown) icon, you could hover over it and learn what it did. Additionally, you'd often get the shortcut for the action when hovering over the button (or viewing it in the menu bar).

Many contemporary applications don't provide their users with this notion of progression.


It really depends on the period. On Windows, Microsoft eventually unified menu bars and tool bars - the widget was called "cool bar" or "rebar", and it was basically a generic container for certain specific kinds of children that organized them into floatable and dockable "bands", with automatic layout: https://docs.microsoft.com/en-us/windows/win32/controls/reba...

The widgets that can be placed on that are buttons with icon & text (either of which could be hidden) that can be regular, toggle, or drop-down; and text and combo boxes. Well, and custom widgets, of course, but the point is that they were different from regular widgets in that they were toolbar-aware. IIRC this all first came as part of IE "common controls" library, even before it was shipped as part of the OS.

So then a top-level menu is just a band with a bunch of buttons with drop-downs that have hidden icons. A regular Win9x-style toolbar is a band with a bunch of buttons with icons but hidden text, and an occasional text box or combo box. And so on.

But the real nifty thing about these is that they could be customized easily, and it was traditional for Win32 apps to expose that to the user. At first it was just about showing/hiding stuff, but Office especially gradually ramped things up to the point where the user could, essentially, construct arbitrary toolbars and menus out of all commands available in the app, assign arbitrary shortcuts to them etc. So if you wanted icons in your main menu, or text-only toolbars, you could have that, too! This wasn't something that regular users would do, but I do recall it not being uncommon for power users of a specific app to really tailor it to themselves.

Visual Studio has it to this day, and takes it to 11 by further allowing to customize context menus through the same UI: https://docs.microsoft.com/en-us/visualstudio/ide/how-to-cus...


I get the point about extreme customisability but I still think the "rebar" is ugly as fuck and inconsistent (non uniform look, a salad of different types of elements). As a mac user I didn't have to put up with it and was disgusted when I first saw it.


What was particularly non-uniform about it? Most toolbars looked very similar in most apps, because there were certain UI conventions, similar to main menus. Once customized, sure, it's no longer uniform, but that's the whole point.


Oh, the joy of using an application that acknowledges your subtle interactions, such as hovering over an unknown button, like a friendly old-time barber who knows just how to cut your hair. No explanations needed, he knows just what you need.


> Others... did not.

While there is no disputing Windows copied heavily from the Mac UI, the actual feel of that interface was also strongly influenced by the IBM Common User Access (CUA).

https://en.wikipedia.org/wiki/IBM_Common_User_Access

Not only did Windows try to follow those CUA rules, Microsoft encouraged Windows applications to also follow those rules.

That meant that from a user perspective, the Windows experience was fairly consistent irrespective of which application was being used.


The "slide up" is gone in the next LTS, due in a few days.

https://www.omgubuntu.co.uk/2019/10/ubuntu-20-04-release-fea...


About half way through the article: “The new lock screen is easier to use, no longer requiring you to ‘slide up’ to reveal the password field (which now sits atop a blurred version of your desktop wallpaper):”

Great! That UI was horrific. Hitting a key like spacebar didn’t unlock.

Aside: I completely broke the Ubuntu login screen yesterday. I did `apt remove evolution*`. Unfortunately the login screen depends on evolution-data-server, so I couldn’t login from lock screen, and after reboot it dropped me to TTY!! Gnome is just getting crazy - it would be like Windows login depending on MS Outlook! Gnome is a big ball of interdependencies, becoming more like Windows. I get it, but I don’t like it. Edit: FYI: fixed in TTY3 (ctrl-alt-F3) by `apt install --reinstall ubuntu-desktop` from memory.


apt automatically removing dependencies by default is such a trap; I much prefer pacman's behaviour of refusing to remove a package that breaks dependencies unless explicitly told to do so.


apt will not automatically remove dependencies either.

I suspect the OP already had ubuntu-desktop package removed for some other reason, and there was no direct dependency on evolution-data-server for gdm: it will only remove dependencies which no other still-installed package depends on. That might still mean a packaging bug (but at least on 18.04, attempting to remove evolution-data-server prompts me how it's going to remove gdm3 too — sure, it's short and easily missed; attempting to remove evolution does not attempt to remove evolution-data-server since a bunch of other stuff depends on it like gnome-calendar too).

In any case, apt will prompt you about all the packages you are about to remove (unless you pass "-y(es)" to it).


And it is a GNOME-ism, Ubuntu went out of their way to modify it.


That swiping up thing makes me so angry. What an absolute waste of effort. No way to disable it. If this is what they're doing in the most visible bits of the system, what on earth is happening in the rest?


Just hit the Escape key. You can also just start typing your password. I often type my password and press enter before the monitor is even awake.


...just to then realize that the computer was still logged in and the focus was on a chat window, and it was only the screen that had been in power saving mode. :-}


And this is why I bring my machine out of sleep by tapping the shift key.


Your post needs a trigger warning. My heart is racing.


Good thing no one on HN would ever reuse a password, right?


Good news, it is gone in the next version.


That slide-up thing has to be some ultra-clownish design. Who proposed it, who reviewed & approved it, and on what grounds? Is there no easy option to get rid of that irritant?


I remember when I was playing around with building an HTPC for my car in the mid-2000s and I got to trying to put a frontend skin on the touchscreen. And I found all the existing skins completely awful because they were wall to wall arbitrary icons, in a setting where I needed at a glance functionality.

Eventually made my own, and the key element? No icons at all. Just text - potentially small text - on the buttons. Turns out, being something you spend your entire life reading, text works great - within a sparse set, you can resolve exactly what a word is from letter shapes even if you can't directly read it, and if you don't know what something is you can just read it.

No one ever had any problem using it, even if they'd never seen it before, because every button said exactly what it did.


Yes, but now you have to write gobs of internationalization code. ;)

Actually, that sounds completely sane.


"You can't have the same UX for both handheld touchscreen devices and M+K laptops/desktops" seems like an absolutely 101 level no brainer to me. How are big projects/companies still attempting to make stuff like this happen?


> How are big projects/companies still attempting to make stuff like this happen?

"None of us are as stupid as all of us."


Cost saving?


> You can't Google an icon.

I've been complaining about that since 1983, when an Apple evangelist came to my workplace to show off the Mac's icons. Nobody was able to guess what the box of Kleenex icon was, much to the frustration of the evangelist. Of course, there wasn't a Google then, but how do you look up a picture in a dictionary?

We've reverted to hieroglyphic languages. (Ironically, hieroglyphs evolved over time into phonetic meanings.)


> Ironically, hieroglyphs evolved over time into phonetic meanings.

No, they didn't. The original creators of hieroglyphs knew how to use them to spell phonetically, but they didn't do it that often; they were satisfied with the old system (just as the Chinese are now). It was the job of other people, who actually didn't know how to use hieroglyphs properly, to build a functioning alphabet on top of them. The two systems coexisted for some time, and then hieroglyphs went into decline with the whole culture that supported them.


The Egyptian and Mayan ones did.


What I wrote about was about Egyptian hieroglyphs. I can only repeat that they did not _evolve_ into alphabetic writing. The Semitic alphabet was a fork; alphabetic use by the Egyptians themselves was very marginal. The Mayan system AFAIK was a mixed logo-syllabic one to begin with.


But the hieroglyphs are corporate logos crossed with app icons (that change every so often to stay "fresh")


>Blender fanboys argue for using keyboard shortcuts instead. The keyboard shortcut guide is 13 pages.

Well to be fair Blender is a professional tool. It is expected that users read the manual and learn the shortcuts, etc. Discoverability is something that should not be optimised for in a tool for professionals like Blender.


That is such a lame, played-out excuse. A real favorite of apologists for shitty UI design.


You are absolutely right. It doesn't hold water once you look at other professional programs. They all use the same math underneath and live or die to a huge degree based on their interface. Blender being free and open source opens up possibilities for people with no access to professional tools and for researchers, but compared to the commercial tools out there its interface is a mess of inconsistency.


It's really not. Take a tool like after effects. The interface is not obvious, the icons are unlabeled (until you mouseover and get a tooltip). You've got to learn it as you would learn to use any other tool.


Icon mania... and pretty much every single icon is only possible to understand after you have learned what it means.


That's deliberate. The saying "a picture is better than a thousand words" doesn't apply when the picture is 16x16 pixels.

Every single icon that makes sense to you now (the floppy disk, the binoculars...) does so because you learned it a long time ago; it's funny how you can now find YT videos that explain where that weird blue icon for the "save" function comes from.

The images are just a mnemonic device - in the sense that the sign is partly related to the meaning (the binoculars could very well mean "zoom in" in an alternative world). Certainly a stronger connection works better because it helps to remember, but they are not meant to help with "understanding" what the button does.

It is the same deal as with keyboard shortcuts. ctrl+S is Save, but you know that ctrl+V is Paste and it has absolutely nothing to do with spelling.


Actually the floppy disk made perfect sense when I first saw it: "we're going to do something involving the floppy disk, and there's a put-onto arrow superimposed over it" (the load-file icon had a take-off-of arrow). It makes less sense now, but only because computer storage no longer has a single, iconic (heh) form factor.

ZXCV is positional, [C]opy having a nice mnemonic is more of a happy coincidence than a design decision in its own right.


> ZXCV is positional, [C]opy having a nice mnemonic is more of a happy coincidence...

ZXCV is actually half positional/mnemonic, half graphical-as-letters, a bit like old-fashioned smileys: X for cut looks like opened scissors, and V for insert looks like the downward-pointing “insert here” arrow-like mark editors and teachers use(d?) to scribble onto others’ texts.


Even worse: you have to try out each icon. Then the next version replaces many of them, and the entire design, again. Rinse & repeat.


Ubuntu 19 is terrible. I was forced to use KDE (Plasma) instead because they broke so many basic UI concepts (to give you a tiny taste: changing tabs went from Ctrl+Tab to Meta+Tab, the keyboard layout switcher takes 2 seconds or more because it pops up a stupid little window to show the language selected, and many other things just like this I thankfully erased from my mind and now only the frustration remains).


I use the keyboard layout switcher only with the keyboard which is pretty much instant, so I wonder how was it broken in any of the Ubuntu 19.x releases — that sounds worrying if it carries over to 20.04? (I am on 18.04)

"Any Ubuntu 19.x" because non-LTS Ubuntu releases come out every six months, so there was Ubuntu 19.04 and Ubuntu 19.10, but never an "Ubuntu 19": they are never as polished or as stable as LTS releases, and are only supported for 9 months, forcing you to update regularly.

If you are looking for a more stable experience and you are not chasing the latest features, you would probably be better served with LTS releases which come out every 2 years (they have hardware updates with newer kernels included every once in a while, so you do not have to worry about using them on latest laptops either).

If you want the most stable route, go with LTS point (.1) release. Eg. I only update my servers when 18.04.1 or 20.04.1 is out.


The language switcher was broken in 18 already : https://bugs.launchpad.net/ubuntu/+source/gnome-shell/+bug/1...

People who switch between several languages just can't use Ubuntu because of this, read some testimonials on the bug linked above.

This is just one of the pain points though. There were many.


That's quite interesting — I do not get that behaviour at all, both on newly installed and upgraded systems from older LTSes (3 laptops and a desktop). They are all on 18.04, and I do have at least 2 layouts on all of them (Serbian Cyrillic, Latin and sometimes US English) which I switch between hundreds of times a day.

Of the non-standard settings, I've got "make Caps lock an additional Ctrl" and "each window keeps its own layout" on. The rest, including keyboard switching shortcut (Meta+Space) is using the defaults.

I simply press the shortcut and start typing and works as expected — if I find the time, I'll debug further and report on the LP bug, but just wanted to report that I do not experience any of the problems mentioned.


I just noticed that the above bug refers to switching input methods vs. just keyboard layouts.

Input methods are a separate concept from keyboard layouts (XKB), and I only ever use XKB stuff. Input methods load separate libraries which interpret keycodes, and are commonly generic enough to allow "external" complex input definitions (think Arabic or CJK) — not sure how fast they used to be before, but perhaps combining IM and layout selection is the culprit.


One tip for Blender at least... It's very hard to discover, even if you know it exists, but you can drag out from the side of the main icon bar and turn on text labels as well.


One nice thing about recent Ubuntu is that even though they hide the password box, you can start typing on the unlock screen and your text will be entered into the password box.


Why hide the password box, then, unless you think that users learning to type and have magically it go somewhere invisible is a good UI design…

(Interesting aside: I complained to someone on the Mac Safari team that it was difficult to search open tabs, and he told me that apparently this feature already exists! You go into the tab overview, and…just start typing. A little search bar will pop into appearance in the top right corner. Why it couldn't just be there and have keyboard focus from the start, I have no idea…)


Worse, this creates a bad habit. What if the UI changes and now it's the username that pops up first, not a password box?

So it's a hard-to-discover feature, and a misfeature unless you elect to keep this behavior forever.


Because this only happens on the lock screen when there's no user to be entered/selected. There's no slide-up on the initial login.


Wow, I would’ve never guessed that. Apple has a terrible habit of burying obscure UX features.

It’s a shame really because these undocumented features mean 99% of people WON’T ever use them. Isn’t that counterproductive to engineers?

Why aren’t employees speaking out against this?


macOS hides the password field to encourage people to use Touch ID instead, but Ubuntu probably doesn't even support fingerprint login...


Every mainstream Linux distro supports fingerprint login natively just by virtue of using PAM.


Funnily enough, so does every Mac by virtue of using PAM ;)


Ubuntu does support fingerprint login if you have supported hardware.


Unless slack steals the focus. Happened to me a few weeks back. Then slack gets your password and enter key, and the login screen doesn't.


The irony being that there's a vocal group of GNOME users who complain that newer versions full-stop prevent windows from taking focus, and who install an extension to bring the behavior back.

Can't please everyone I suppose.


It wouldn't be a problem if it were an option which could be configured, I think.


Steam remote streaming did something similar, while on the road with my laptop it presented me with my Kubuntu login screen... so I logged in, fine... and now my computer at home is sitting unlocked and unsecured.


Blender icons are also microscopic on a 1920 x 1080 monitor of usual size. We don't all have astronaut vision.


windows 10 is the same, there is lag between when the screen with the input box appears and when it accepts input. so if you start typing too quickly, fail. and don't get me started on their auto update policy (i agree with automatic updates; i do not agree with them forcefully hijacking my computer with no means to stop it. it's actually what brought me back to linux as my dd. no regrets!)


In Ubuntu 19 I can just type my password and hit enter to log in. In windows 10, I have to hit any key before typing my password and hitting enter to log in.


> Logging in on desktop now requires "swiping up"

No, it required pressing some button to show the password box. Just like windows. If you don't have a keyboard you can also swipe up instead.


I've always guessed that the infinite feature creep that is plaguing modern UIs is caused by salaried visual designers who aren't being given new projects to work on. Once the visual templates are fleshed out, the icon sets are put together, the palettes completed, there's not all that much to do. Unless you start playing around with NEW icon sets, palettes, mouse-aware buttons, etc, etc... I would be so much happier with a consistent, self-explanatory UI than anything that is "sleek" or "dynamic". Those properties, excepting very specific situations, really detract from UX. Visuals improve with user familiarity, not with constant changes. Imagine if the Department of Transportation decided to redesign stop signs to be yellow, triangular, with a sleek looking matte finish... Traffic accident deaths would go up 10x in the first week.


Whenever these discussions come up, I don't know how much of this response is just confusing familiarity with usability. I kid you not, I knew people who bemoaned Python because of tab spacing, and this wasn't 2003, this was maybe a few years ago (2016 or so), which I'd argue is just being unfamiliar with it. C isn't really that intuitive--I still remember learning it at 14 and finding the syntax confusing vs BASIC--it's just familiar to those who use it.

I do feel that some modern changes are annoying and unnecessary though and the instances I see that increase as I get older. I just always try to check myself and analyze how much of that anguish is just from being used to something or not.

I will finally say the examples here of inconsistency between evince and MPV is just inexcusable. You can't both break expectations AND be inconsistent, it's like the worst of both worlds.


The author completely failed to mention discoverability. Which is a huge part of usability because it allows _new_ people to sit down at your applications and gradually work up their expertise until they are keyboard wizards (or whatever).

So, no, just having a bunch of magic unlabeled buttons and saying "use the keyboard" isn't good usability. The part that kills me, is that a lot of these applications don't even really have shortcut key guides. So, you don't even know the magic key sequence for something until you discover it by accident.

Worse are the crappy programs that break global key shortcuts by using them for their own purposes... Firefox, for example, seems to manage this on a pretty regular basis. Want to switch tabs? Ctrl-Tab, oh wait, it doesn't work anymore; cross fingers and hope someone notices and fixes it (which they did).


> a lot of these applications don't even really have shortcut key guides

Not even Windows itself has, anywhere that I could find, a list of all the keyboard shortcuts. I find multiple lists, each with a different random subset of whatever the actual set is.

Sometimes I'll hit wrong keys, and something interesting happens. I don't know what I did, so it's "oh well".


Tab spacing isn't just "used to it". It's the "hidden whitespace has meaning" problem. Copy and paste some code indented with spaces into some that's indented with tabs, and if you're unlucky it will look like the indentation matches but it won't actually match. If you're even more unlucky the code won't just crash: it will appear to run, but the result will not be what you expected.

You try to get around it by using an editor that shows tabs or an editor that re-indents the code but plenty of editors (notepad, nano, vi, emacs) don't show tabs as different from spaces by default.

https://wiki.c2.com/?SyntacticallySignificantWhitespaceConsi...
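To see the failure mode concretely, here's a minimal sketch (the file name demo.py and the shell one-liner are just an illustration): the two indented lines look aligned in many editors, but the first uses a tab and the second eight spaces. Python 3 refuses to guess and aborts with a TabError, while Python 2 silently treated the tab as eight spaces and ran with whatever grouping that happened to produce.

    # write a file whose indentation mixes one tab with eight spaces
    printf 'if True:\n\tx = 1\n        y = 2\n' > demo.py
    python3 demo.py   # TabError: inconsistent use of tabs and spaces in indentation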


Is there a sane mode or config for vanilla Ubuntu 18.04? I'm considering upgrading from my trusty old 16.04 LTS (both home and office laptops) and I dread the usual pointless UI changes that come with all the reasonable bugfixes/improvements.


20.04 is due out in a few days. I hope it's good. 18.04 has been total crap for me. The lock screen randomly won't authenticate, and I am forced to reboot. My USB dock suddenly started randomly disconnecting and reconnecting, after it worked fine for months.

The "Save/Open" button in the file dialog boxes is in the title bar, which is the dumbest thing I have ever seen. Dialog boxes get tied to windows, so when I try to move the dialog out of the way to see my work, it drags the whole damn window. (Some of this is mentioned in the TFA.) I think a lot of these decisions were Gnome-driven, but still... stick with 16.04.


I think Xubuntu is a good alternative. XFCE doesn't have as much eye candy, but it certainly surprises me far less and usually is very pleasant to work with.


I used Xubuntu for a number of years and it's a great lightweight environment overall. The main problem I experienced is that its handling of hot-plugging multiple displays (especially between sleep states) has always been poor and crashy.

And I like the cohesiveness and integration of GNOME, although I had to do a hell of a lot of customization to mold it into something I could tolerate.


I've also noticed that XFCE's handling of hotplug monitors leaves much to be desired, and it also cheerfully ignores my preference of not suspending when I close the laptop lid (so I can use only the external monitor and an external keyboard when I have it plugged into my TV). Close the lid under XFCE and "BEEP!" it goes into suspend mode instead of doing nothing.

Fortunately, Cinnamon is just an apt-get away, handles both monitor hotplugging and closing the laptop lid sanely, and works the way I expect a desktop to work. I've settled on Xubuntu+Cinnamon as my go-to when setting up a desktop or laptop.
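If you'd rather stay on XFCE, one workaround for the lid-close suspend described above is to tell systemd-logind to ignore the lid switch entirely. A sketch, under the assumption that logind (rather than the XFCE power manager) is what actually triggers the suspend on your machine; the file and key are the standard systemd ones:

    # keep the machine awake when the lid closes, regardless of the DE
    echo 'HandleLidSwitch=ignore' | sudo tee -a /etc/systemd/logind.conf
    sudo systemctl restart systemd-logind   # or just reboot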


Can you elaborate on Xubuntu+Cinnamon? That sounds interesting.

I also just noticed that there's an Ubuntu Cinnamon which might be right up my alley as well.


The setup I did was to just install Xubuntu, then:

apt install cinnamon

Once installed, I logged out, and then picked Cinnamon from the session selection menu (the little gear near the upper right corner). It comes right up, though it won't pick up your preferences from XFCE.

I hadn't realized there was now a Cinnamon spin - is that still in testing? It's not on Ubuntu's list of flavors.


I'm on 19.10, and I'm quite disappointed. Exposing all windows as thumbnails is a big regression compared to Unity. It feels heavyweight and just doesn't work for window navigation anymore. Pressing an app icon on the side bar brings up small, equal-sized window previews stacked on top of each other without spatial information. Consequently I now have tens of open terminal sessions and browser windows.

The global menu is gone, wasting precious vertical space. Or it would, if apps such as Firefox hadn't switched to stupid hamburger menus. The place where the global menu used to be now displays the time, which is absurd. What's not gone is the chopped display of the focused app's name on the left. I frequently make errors on window decorations (might be me being used to Unity, but it's still the case after two months). Search isn't as useful as it used to be, but it has a pompous animation.

Apps are beginning to use snaps, which I have no use for - all I want is a driver for running the same old programs I've run for the last twenty years. It's not as if there's been a boom of new desktop apps for Linux lately.

Installation was rock-solid, though. I only had to install synaptics over libinput (libinput was causing me physical pain because I had to press the touchpad hard all the time, and because it lacks kinetic scrolling).

Seriously considering alternatives.


I was in the same boat and decided to install KDE Plasma. Very, very happy with the result: the dark UI theme is very beautiful, and all shortcuts and UI elements work the way "old" UIs did (unlike freaking Ubuntu 18 and 19, which changed everything for no reason).

https://kde.org/plasma-desktop

It was quite easy to install and everything worked out-of-the-box. I just customized some widgets, the dark theme, icon colors and now it looks amazing. Best Linux desktop I've used so far.
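In case it helps anyone else: on an existing Ubuntu install, Plasma should be just a package away. The package names below are the usual Ubuntu ones, so double-check them against your release before running anything:

    sudo apt install kde-plasma-desktop    # minimal Plasma session
    # or: sudo apt install kubuntu-desktop # the full Kubuntu app selection

Then log out and pick Plasma from the session menu on the login screen.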


I've been on the brink of installing KDE many times over the years. What held me back was that screenshots always showed bulky window decorations, which I have no use for on a 13/14" notebook. Also, for space efficiency, I want global menus, period. I think Ubuntu pre-18 with Unity got a lot of things right - a lean UI getting out of your way and yielding space to actual apps. I can do without the collection of desktop apps for either GNOME or KDE - on GNOME the file manager is particularly anemic, on KDE it seems too Windows-y for my taste. And I don't use GNOME's mail app or video player but rather "best-of-breed" apps like Thunderbird and VLC anyway. So I guess I'll be looking at lightweight DEs/non-"DE"s going forward (but I'll give KDE a try for sure).

It's sad, because GNOME has worked very well for me so far, and I've actually seen Ubuntu become the mainstream choice for freelance dev teams at many places over the last couple of years. I feel guilty criticizing other people's hard work, given to me for free, without offering anything constructive, but as far as I'm concerned GNOME is aiming at a future and an audience of desktop users that just isn't there, when IMHO the goal should be to preserve what precious free desktop apps we have and not make app development any more difficult or GNOME-centric.


You can create a Unity-like look and feel in KDE Plasma 5 relatively easily[1]. Although the look and feel is only part of the experience, KDE Plasma 5 is so customizable that I'm sure it could be adapted to fit most workflows.

Despite its bad reputation in its early days, KDE Plasma 5 is very lightweight nowadays. As in, the resource usage is pretty much on par with Xfce.

[1]: https://userbase.kde.org/Plasma/How_to_create_a_Unity-like_l...


Just to make it clear: after I posted this, I did some research and it turns out I am actually using Cinnamon... I installed KDE, and now I have options in the login page to select which "session" I want: Ubuntu, Unity, Plasma, Cinnamon, Wayland... I have Cinnamon by default, and it is indeed my favourite... Plasma seems very cool, but too different from what I am used to... I might play with it later when I have more time though.


Do not upgrade.

Switch.

A lot of Ubuntu software is now (version 19.xx) only available as "snaps". They make some sense for IoT machinery (the user does not control updates, so they are deploy-and-forget), but I do not want to lose control.

Final straw for me. I am test driving Arch now....


Did you install Arch from scratch or are you using Manjaro, etc.? I love Arch but I've never installed it from scratch. It's on my bucket list. I'm currently using Endeavour OS, which is easy-install Arch Linux with way less bloat than Manjaro. It's awesome. I'll never install a *buntu type system again.


Ha ha.

From scratch I think. My aptitude is no use with pacman


I highly recommend trying out other flavours of Ubuntu: Xubuntu (XFCE), Lubuntu (LXDE), Kubuntu (KDE), Ubuntu MATE, Ubuntu Budgie. Next week they should all get the 20.04 LTS release.


I don’t understand why you need different distributions for different DEs.

The only distro I’ve used since my teenage years is Ubuntu. I alternate stints of maybe 2 years with Windows, 2 years with Ubuntu. The first thing I do after installing the most recent Ubuntu LTS is “apt install spectrwm”.

Spectrwm is not even particularly good — everyone tells me to use xmonad instead — but I know how to get it in usable shape in about half an hour. This after many moons of exclusively using Windows.


They are not different distributions but more like a different edition or "spin" of Ubuntu. They just install a different set of packages.


For KDE you're better off with KDE Neon. Still based on Ubuntu, but that's the official KDE distro if you can speak of one.


No, or at least I haven't found one. I was recently forced to upgrade from 14.04 and tried the default configuration in 18.04 (clean install) for a while before I gave up and installed Unity -- which, thankfully, is in the package manager. That at least got me back to the point where I could configure things critical to my workflow like having a 3x3 workspace switcher. The system has fought me every step of the way though -- especially with things like global hotkeys which have about half a dozen different places they can be configured and it's completely inconsistent what works where.

I've literally spent weeks trying to get back to the level of usability I had on my 14.04 setup -- compiling old/patched versions of software from source because the "improved" versions removed features I depend on or otherwise fucked up the interface (I cannot understand why anyone thought removing typeahead from Nautilus was a good idea!), trying every damned thing I can think of to debug the global hotkey problems (still can't get IME switching to work right reliably... it works for a while after I fiddle with it then just stops working and I have no clue why), and just generally having a bad time.


I've always considered Debian to be the sane, vanilla Ubuntu :-)


On the GNOME login screen you can press [Enter] or [Space], or click (on the latest GNOME, 3.34) or swipe up (on the previous versions) to get to the password entry box, or you can just start typing your password. It's extremely easy and discoverable (because there are so many options, nearly anything you do will take you to the password box). I really don't think there's an issue with it.

This is with stock GNOME (on Arch); I think Ubuntu may ship a skinned / modified / older version of it (which can create UI problems).


So what you're saying is that there's no way to discover it by visual inspection alone; and so my elderly family members could never discover it for fear of breaking something by pressing the wrong button. That's bad UI.


GIMP's UI/UX has been garbage since long before 18.04. Not contradicting anything you're saying, just noting that it might not be the representative example you're looking for.


Ubuntu's shell was a disaster and I can't imagine it's gotten better - MATE on Ubuntu is the answer for that.

And Gimp is a mess. Enabling single window mode makes it better.


I had the hidden window problem the first time I used Win7.

It was on one of those tiny netbooks with 1024x600. I think I was trying to add a user, and for the life of me I couldn't figure it out. It turned out the updated add-user control panel at the time put the Add button on the lower right of a window with a minimum height > 600px and about 400px of whitespace above it, and no resize or scroll bar, so there wasn't any visual indication that there was more to the window.

But there is a flip side too. I have ~6kx5k of desktop resolution (portrait-mode 5k monitors) and very few applications know how to handle that in any reasonable way. Web pages are frequently the worst: nothing like a column of vertical text consuming 10% of the browser's horizontal resolution that still manages to scroll for a page or two. I guess no one reads newspapers anymore, so the idea of having multiple vertical columns of text is foreign.


Ubuntu got worse because of GNOME


> Ubuntu got worse at 18.04 ..

Give Lubuntu a try.

https://lubuntu.net/


The official site is https://lubuntu.me/ btw.


Are they really mixing themes like this?

https://lubuntu.me/wp-content/uploads/2017/09/video.png

from the front page


LOL nOmegalol.

LXDE is dead, and jankier than Xfce.

Use Xfce (if you like jank) or KDE or Cinnamon.


Lubuntu is no longer on LXDE, having switched to LXQt in version 19.04. LXQt is pretty good from my perspective: minimal but intuitive.


> Logging in on desktop now requires "swiping up"

That's not how it works here. From boot I am presented with a list of users, I click or press enter and type the password. When it's locked/suspended, all I need to do is to start typing.


I recently came across Glimpse [0] which is a fork of Gimp. They state usability as one of the reasons for the fork.

[0] https://glimpse-editor.org/


> Ubuntu got worse at 18.04. Logging in on desktop now requires "swiping up" with the mouse to get the password box.

You can also just start typing the password if you want to unlock the machine.


>now requires "swiping up" with the mouse to get the password box.

Oh, that has to be some ultra clownish design. I would punch through the display within a week. Who proposed this, who reviewed and approved it, and on what basis? Is there no easy option to get rid of that irritant? If not, I'll have to stay with 16.04 for a lot longer.


> Ubuntu got worse at 18.04. Logging in on desktop now requires "swiping up" with the mouse to get the password box.

I don’t know about you, but I just start typing on my keyboard.


Took a while to figure out I could do that. And I only figured it out by accident.


The best thing about Ubuntu desktop usability might be mnemonics, where Alt + an underlined letter is your keyboard shortcut, but it seems they're dying :-(.


> Logging in on desktop now requires "swiping up" with the mouse to get the password box.

What? I just type my password without swiping anything. I think I've upgraded through pretty much every version of Ubuntu for the last few years, I haven't customized it to speak of, and I've always been able to do this on both my desktop and my laptop.


Moreover, I’m specifically running 19.04 and I click anywhere on the screen to get to the password prompt. Maybe they changed it over time.


> Ubuntu got worse at 18.04. Logging in on desktop now requires "swiping up" with the mouse to get the password box.

Have you tried pressing a key?


Is there any indication that pressing a key is an option? I've been swiping up with the mouse the whole time as well.


> (I have no idea what's happening in Windows 10 land. I have one remaining Windows machine, running Windows 7.)

I use a laptop with an external monitor. When the monitor is not connected, Adobe Reader windows have their title bar off the screen. The only way to maximize them: Alt + Space + X.


> Ubuntu got worse at 18.04.

Only if you use the default UI, which I think is an important distinction to make: I use Window Maker and had no regressions.

The ability to choose your own UI is an important strength of Unix, and one which distinguishes it from macOS and Windows.


> The ability to choose your own UI is an important strength of Unix, and one which distinguishes it from macOS and Windows.

It's a strength as well as a curse for Linux distros, not just Ubuntu, since the developer of a GUI program must test whether their app works on your unique setup; if it doesn't run on a given distro or your specific configuration out of the box, that is already a usability issue.

The Linux ecosystem gives the app developer no guarantee that a KDE-, GNOME- or Xfce-built application will consistently work on your setup if the user changes the DE, display manager, etc., so it is harder for the app developer to support accessibility features in whatever DE the user runs, and it's harder for them to track down the source of an issue, which could be anywhere in the Linux desktop stack.

The inability to "choose your own UI" on Windows and macOS guarantees that GUI programs will be consistent with the OS's accessibility and look-and-feel features, which makes it easier for app developers to test on one type of desktop OS rather than X * Y * Z configurations of one Linux distro.


I can't find the icons in GIMP on a recent Ubuntu because they all look the same these days: every icon is just a light grey geometric shape on dark grey, with no visual distinctiveness at all.


Ubuntu is a hodgepodge. I was always accidentally maximizing windows at the screen edge. wtf. (Use dconf to turn off edge-tiling.)
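For anyone looking for the exact switch: the key meant here is, I believe, under org.gnome.mutter in dconf, so a one-liner via the gsettings front-end should do it (assuming GNOME/mutter is the window manager in use):

    gsettings set org.gnome.mutter edge-tiling false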

(also lots of phone home crap you have to hunt to turn off)

By comparison, Arch Linux + GNOME is relatively unencumbered.


Space bar (or possibly any key) works.

The up-swipe to log in induces UX rage for me. I haven't yet tried to hunt down a way to shut it off because I just hit the space bar and forget it ever happened.


Press escape at the "lock screen" and you'll get the login UI. It works the same way on Windows too.


I’m on 19.04 and haven’t experienced the swiping up. Could it have to do with me running i3 instead of Gnome?


UI design lately sounds more and more like a Monty Python sketch.


18.04, <enter> works for me to get the password box up.


At the Ubuntu login screen, just start typing your password and it will automatically scroll up.


Yes, let's train users to type their passwords in with no visual indicator of where it's being input, and only faith that it will go well. Great idea. /s


> It’s totally inappropriate to desktops.

I don’t agree. It’s important for the user to know a login UI is the real thing. For example, Windows NT used to have you hit Ctrl+Alt+Del to make the credential dialog appear so that any fake lookalike was impossible.


Ctrl+Alt+Del cannot be caught by any program, and is therefore a reasonable way to identify the login UI. Swiping up can be detected by any program, so it does not improve security, and it is ridiculous to have on a desktop UI.


Unlike the Ubuntu mystery experience, Windows actually tells you to press Ctrl+Alt+Delete:

https://troubleshooter.xyz/wp-content/uploads/2018/08/Enable...


That's a bit different. I can fake swipe-up on a GUI but I can't fake Ctrl-Alt-Del.


Actually the idea behind Ctrl-Alt-Del is the other way around. An application can cause the same effect as the user pressing Ctrl-Alt-Del (although doing so is somewhat complex due to interactions with UAC), but there is no way an application can prevent that effect (essentially switching virtual desktops) from happening when the user presses Ctrl-Alt-Del.


The implementation may be bad, but it seems like the same idea to me: “the user must interact with the UI before entering credentials”.


Except you don't have to. Just start typing your password.


Yes, that’s why it’s a bad implementation of a good idea.


Even if you had to, it's still a bad idea. Ctrl+Alt+Delete works, because no normal Win32 app can intercept this - so if you do it, and you see a login box, you know that this is the real thing.

But any app can go fullscreen and draw a fake login screen that you can swipe up to show a fake login form.

