Then there's icon mania. I've recently converted from Blender 2.79 to Blender 2.82. Lots of new icons. They dim, they change color, they disappear as modes change, and there are at least seven toolbars of icons. Some are resizable. Many icons were moved as part of a redesign of rendering. You can't Google an icon. "Where did ??? go" is a big part of using Blender 2.82. Blender fanboys argue for using keyboard shortcuts instead. The keyboard shortcut guide is 13 pages.
Recently I was using "The Gimp", the GNU replacement for Photoshop, and I couldn't find most of the icons in the toolbar. Turns out that if the toolbar is in the main window (which is optional), and it's too tall to fit, you can't get at the remaining icons. You have to resize the toolbar to give it more space. Can't scroll. There's no visual indication of overflow. It just looks like half the icons are missing.
(I have no idea what's happening in Windows 10 land. I have one remaining Windows machine, running Windows 7.)
Anecdote: The first time this happened, I had no idea why it wasn't working and naturally started clicking on things and pressing buttons to try to get it to do the thing. I thereby discovered that you can get the password prompt by pressing Enter.
Having used it this way for two years now, I'm learning from your description of this behavior, for the first time, that it is also possible to do it by dragging the mouse upwards. The discoverability of this behavior is apparently nonexistent -- I assume that if pressing Enter hadn't worked, I would have had to use a different device to look it up on the internet.
One more anecdote. I came across this screen for the first time when I got out my laptop to demonstrate something to a student. I didn't know what to do, started doing random things to figure out what had happened, and the student interrupted my attempts, took the mouse from my hand, and swiped with it. I felt old and stupid. It was like 20 years ago, when I taught my parents to use a standard UI, only now it's me who needs the help.
I asked; she had never seen Ubuntu before, and nevertheless she managed it better than me. I think I'm growing old and just can't keep up with the pace of change.
Because any hope of guessing it comes from knowing that phones do that, so the less like a phone you know it to be, the less likely you are to try that. Notice that even on phones it isn't intuitive if you've never done it before. If you tap the lock screen on Android it shows a message that says "Swipe up to unlock" -- because how else would you know that for the first time? But at least there it serves a purpose.
On the other hand, when my iPhone suddenly would connect with a caller, but neither party could hear the other, redialing didn't help, turning it off/on didn't work, I remembered the ancient trick of "cold boot". Which resolved the problem.
After some digging, I found a new trick that I guess is implemented at a lower level: press and release volume up, then volume down, then press and hold the main button until it powers off.
It is pretty confusing the first time, and annoying every time after that; I didn't consciously know about the "Enter" trick before now.
1. What do you need to do to invoke a wakeup? Press a key? Are there any keys that don't wake the machine? Move the mouse? Click a mouse button?
2. Multiple monitors: During the wakeup sequence, you first have one display turn on, then you think you can log in but surprise! Another display turns on, and the main display briefly flickers off. For some reason, after 25+ years, display driver writers still can't figure out how to turn a second display on without blanking and horsing around with the first display.
3. Once the displays are on, some systems require some kind of extra input to actually get a login prompt to display. Is it a mouse click? A drag? Keyboard action? Who knows?
4. Some systems allow you to type your password as soon as the computer wakes. But there is some random delay between when you invoke wakeup and when it can accept input. What this usually means is I start typing my password, the computer only recognizes the last N characters and rejects it, I wait, then type it again.
These are some irritating bugs that affect everyone who uses a PC every time they log in. Yet OS vendors choose to spend their development time making more hamburger menus and adding Angry Birds to the Start menu.
So MS has managed to make an interface that works just the same on desktops, laptops, and touch-enabled devices, and the UX isn't bad on any of them.
It's the shitty ergonomics that have been pervading software UI design for several decades now.
From the huge number of responses to this article, it's clear the software industry has a major problem. The question I keep asking is why users and their complaints haven't been successful in stopping these irresponsible cowboys.
Seems nothing can stop them.
Most users don't complain, because technology is magic to them and they have no point of reference; they assume things must be the way they are for a reason. Of the remaining group that does complain, many do it ineffectively (e.g. complaining to friends or on discussion boards that aren't frequented by the relevant devs). And the rest just aren't listened to. It's easy to dismiss a few voices as "not experts", especially today, when everyone puts telemetry in their software (see below), and doubly so when users' opinions are in opposition to the business model.
Finally, the software market isn't all that competitive (especially on the SaaS end), so users are most often put in a "take it or leave it" situation, where there's no way to "vote with your wallet" because there's no option on the market that you could vote for.
The problem with telemetry is that measuring for correct things and interpreting the results is hard, and it's way too easy to use the data to justify whatever the vendor (in particular, their UX or marketing team) already thinks. A common example is a feature that's been hidden increasingly deeply in the app on each UI update, and finally removed on the grounds that "telemetry says people aren't using it". Another common example is "low use" of features that are critical for end-users, but used only every few sessions.
I also don't like things measuring "dwell time" when scrolling, as it encourages attention-grabbing gimmicks and rewards things that are confusing as well as things that are useful.
As in, if you are not a UX professional, your opinion is inconsequential.
See: replies on most Chrome UX feature requests over the last decade
Of course. The people complain. The devs say: fork it and change it yourself, or use what we give you.
The people who can't do that just suffer through it. The ones who know enough use something else.
I log in at a Linux tty, and startx starts dwm. No fancy login screen for me.
This is why I don't touch GUIs from the major binary distros or gnome3 with a 10 foot pole. If I can avoid it I don't ever install anything from those projects.
is the example that always comes to mind. I guess this made sense to somebody at the time, but it adds overhead to a process that was simple before, and it isn't enabled just for "Enterprise" deployments; it's dumped on the user to figure out how to configure screensaver hack settings by creating or modifying a theme.
Instead of spending the substantial donations they received on who knows what, the GNOME foundation should have spent some of it conducting proper focus groups.
I think it's an interesting and worthwhile experimental path; I just wish it wasn't the "default" as much as it is. But I also feel that way about Ubuntu. And Windows. xD
One of my least favourite examples was when it was not possible to configure the screensaver timeout to never turn off the display. IIRC you had a choice of several fixed times, from 5 minutes to 4 hours, but no "Never" option.
Not useful for systems which display information and are infrequently interacted with. That use case was completely ignored, and for no good reason.
oh no doubt, I have another comment from 4+ years ago about the same topic https://news.ycombinator.com/item?id=10883631 and even then it was ancient history IIRC
man, just looking at that page again reminded me that Windows Registry for Linux^W^W^W^W gconf exists.
>Icon Themes can change icon metaphors, leading to interfaces with icons that don’t express what the developer intended.
Icons were never sufficient metaphors to start with, which is why we have text labels.
>Changing an app’s icon denies the developer the possibility to control their brand.
What does this even actually mean?
>User Help and Documentation are similarly useless if UI elements on your system are different from the ones described in the documentation.
This is only true if the user is completing an action based solely on clicking an icon with no text, which we have already established is bad.
>The problem we’re facing is the expectation that apps can be arbitrarily restyled without manual work, which is and has always been an illusion.
Why has this worked generally fine in lots of ecosystems, including GNOME?
>If you like to tinker with your own system, that’s fine with us.
Earlier discussion seemed to suggest that lots of GNOME developers were in fact not fine with this, because it hurt GNOME's "brand identity".
>Changing third-party apps without any QA is reckless, and would be unacceptable on any other platform.
> we urge you to find ways to do this without taking away our agency
> Just because our apps use GTK that does not mean we’re ok with them being changed from under us.
Nobody cares if you are OK with it.
Because it’s now possible to run multiple VMs at once (containers, etc.), perhaps it’s time to run a simple, minimal, admin-friendly hacker VM inside the Ubuntu desktop?
Let Ubuntu configure all that it needs to get a good functional machine out of the box (working sleep mode for laptops, WiFi management, GPU support, systemd if that’s what it wants.) I then deploy the minimal VM I actually want to poke around with inside that installation.
This is pretty much what many people do in macOS. Apple’s OS supports the bare metal, vagrant / VirtualBox give me my tractably scrutable dev environment.
It’s not a particularly groundbreaking concept, but it might cheer me up a bit when battling with the volatility of user-facing Linux distributions.
> Let Ubuntu configure all that it needs to get a good functional machine out of the box (working sleep mode for laptops, WiFi management, GPU support, systemd if that’s what it wants.) I then deploy the minimal VM I actually want to poke around with inside that installation.
If there's anyone like me here they might be happy to know that KDE Neon exists and is something like:
- Stable Ubuntu base.
- Infinitely (almost) customizable KDE on top.
- And (IMO unlike Kubuntu) sane defaults.
There is never going to be a unified GUI for Linux; that requires a dictator. KDE tried to provide the carrot of development-ease, Gnome tried to generate some reality distortion, but nobody cared. Carrots don't work. As far as I'm concerned, the experiment is over and it is time to embrace the chaos.
Now, this is easy for me to say, I'm mostly a command-line person anyway, and have spent most of my working life dealing with horrible UI. But it does have a lot of implications for Linux that I think a lot of people are not ready to accept.
You are spot on, and your 'book analogy' is perfect. If it works perfectly don't change it — that is unless an innovation arrives that offers a significant improvement and that's just as easy to use.
Unfortunately, most so-called UI improvements over the last 20 or so years are not improvements at all; in fact, many have been quite regressive. They've annoyed millions of users who've collectively wasted millions of hours relearning what they already knew (and in the end nothing was added by way of new productivity)—and that doesn't include the developers' 'lost' time spent building these so-called improvements. It's time that would otherwise have been much better spent fixing bugs, providing security improvements and/or developing software for altogether new applications that we've not seen before.
The question I keep asking over and over again is what exactly are the causes behind all this useless 'over innovation'. Why is it done on such a huge scale and with such utter predictability?
Is it marketing's wish for something new? Are developers deliberately trying to find work for themselves or to keep their jobs or what?
It seems to me that many a PhD could be earned researching the psychological underpinnings of why so many are prepared to waste so much money and human effort continuing to develop software that basically adds nothing to improve or advance the human condition.
In fact, it's such an enormous problem that it should be at the core of Computer Science research.
Promotion & NIH management syndrome.
New shiny gets a promotion. Fixing a niche bug in a decades-old stable system does not.
And by the time all the new bugs you've introduced are found, you'll have a new job somewhere else.
So essentially, project managers' bosses not pushing back with a hard "Why should we change this?"
I’m probably missing some stuff, but I think people ought to at least be able to “feel” their way around a UI. Lately there’s been so much push for minimalism, like omitting scroll bars and such, that it gets confusing.
But, again, that experimentation will root out what works and what doesn’t. And new devices like VR, of course, have paradigms yet to be discovered.
> the stacking window managers work well from Windows 95 and XP why change it?
To get something that works better.
Despite all evidence to the contrary.
Well said. Is your machine shop stocked by a single brand of tools all in the same color, or is it a mix of bits and pieces accumulated, rebuilt, repainted, hacked, begged-borrowed-and-stolen over the course of your development as an engineer?
A free software Unix workstation is exactly the same. It’s supposed to look untidy. It’s a tool shed.
Apologies if I’ve touched a nerve with the Festool crowd with my analogy.
Agreed, but I can never get to the bottom of the reason why developers do not provide alternative UIs (shells) so that the user can select what he/she wants. This would save the user much time relearning the new UI (not to mention a lot of unnecessary cursing and swearing).
For example, Microsoft substantially changes the UI with every new version of Windows—often seemingly without good reason or user wishes. This has been so annoying that in recent times we've seen Ivo Beltchev's remarkable program Classic Shell used by millions to overcome the problem of MS's novel UIs.
Classic Shell demonstrates that it's not that difficult to have multiple UIs which can be selected at the user's will or desire (in fact, given what it is, it has turned out to be one of the most reliable programs I've ever come across—I've never had it fail).
It seems to me that if developers feel that they have an absolute need to tinker or stuff around with the UI then they should also have at least one fallback position which ought to be the basic IBM CUA (Common User Access) standard as everyone already knows how to use it. If you can't remember what the CUA looks like then just think Windows 2000 (it's pretty close).
It's because everybody wants you to use their thing and not some other thing. If people have a choice then some people will choose something else.
This is especially true when the choice is to continue using the traditional interface everybody is already familiar with, because that's what most everybody wants in any case where the traditional interface is not literally on fire. Even in that case, what people generally want is for you to take the traditional interface, address the "on fire" parts and leave everything else the way it is.
Good change is good, but good change is hard, especially in stable systems that have already been optimized for years. Change for the sake of change is much more common, but then you have to force feed it to people to get anyone to use it because they rightfully don't want it.
Linux was all about chaos and herding cats until a short number of years ago.
It's the "standardisation at all costs" brigade who have killed the goose that laid the golden eggs. It's now far worse than Windows in many aspects. Freedesktop and GNOME deserve the lion's share of the blame, but RedHat, Debian and many others enabled them to achieve this.
Over the last decade, we have experienced a sharp loss of control and had certain entities become almost absolute dictators over how Linux systems are permitted to be run and used.
Linux started out quite clunky and unpolished. It could be made polished if you wanted that. But nothing was mandatory. Now that's changed. A modern mainstream Linux distribution gives you about the same control over your system that Windows provides. In some cases, even less. Given its roots in personal freedom, ultimate flexibility, and use as glue that could slot into all sorts of diverse uses, I find the current state of Linux to be a nauseating turn-off.
And I say that as someone who has used Linux for 24 years, and used to be an absolute Linux fanatic.
Exactly, but what perplexes me is why these issues that are so obvious to us are not obvious to them. Why do they think so differently from normal users?
Animations that block user input are the sort of stupidity that becomes evil in its own right. There was even a calculator with this. This is a massive failure at the management level; somebody actually codes these things, but that it's not caught anywhere before shipping shows that the wrong people are in charge.
Don't make your user repeat something twice :)
I do not log-in frequently enough to remember how it behaves on log-in (even my laptop has ~90 days of uptime).
FWIW, moving away from GNOME 2.x to either Unity or GNOME 3 was hard to swallow, though in all honesty, Unity was the better of the two (though pretty buggy and laggy until the end)!
Now that it's gone, I'm using Xfce, which seems to be the last decent desktop environment.
Besides waiting for the animation, in Windows 10, if you type the password fast enough, the first character gets selected and the second character you type replaces it.
This happens frequently.
These things used to work reliably. I think most of the problems are caused by introducing asynchronicity into apps without thinking about how it affects keyboard input. Keyboard input should always behave like the app is single-threaded.
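To make that concrete, something like the following is all it takes (a rough TypeScript/DOM sketch of the idea, not any vendor's actual code; the ids and the fake delay are made up): buffer whatever the user types while async startup work is still running, then replay it once the field exists, instead of silently dropping it.

    // Minimal sketch: keystrokes that arrive before the password field is ready
    // are queued and replayed, so input behaves as if the app were single-threaded.
    const pendingKeys: string[] = [];
    let inputReady = false;

    window.addEventListener("keydown", (e: KeyboardEvent) => {
      if (!inputReady && e.key.length === 1) {
        pendingKeys.push(e.key); // remember early keystrokes instead of losing them
        e.preventDefault();
      }
    });

    async function initLoginForm(): Promise<void> {
      // stand-in for whatever slow async setup a real lock screen does
      await new Promise((resolve) => setTimeout(resolve, 500));

      const field = document.querySelector<HTMLInputElement>("#password");
      if (!field) return;

      inputReady = true;
      field.focus();
      field.value += pendingKeys.join(""); // replay what the user already typed
      pendingKeys.length = 0;
    }

    initLoginForm();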
Here's an example I encounter whenever I use Microsoft Teams at work. I go to "Add Contact", and the entire screen becomes a modal entry box into which I have to enter a name. There's a single entry field on the screen. It's not in focus, even though that's the sole action that I can perform at this time. I have to explicitly select the entry field with the mouse and then type. It's such a basic usability failure, I really do wonder what both the application developers and the testers are actually doing here. This used to be a staple of good interface design for efficient and intuitive use.
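And the fix is a staple for a reason: it's tiny. Something like this (TypeScript/DOM sketch; the element ids are invented, not Teams' real ones):

    // When a dialog whose only actionable control is a text field opens,
    // put keyboard focus there so the user can just start typing.
    function openAddContactDialog(): void {
      const dialog = document.querySelector<HTMLDialogElement>("#add-contact");
      const nameField = document.querySelector<HTMLInputElement>("#contact-name");
      if (!dialog || !nameField) return;

      dialog.showModal();  // native modal dialog
      nameField.focus();   // the step that's missing
      nameField.select();  // optional: typing replaces any stale value
    }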
Presumably Gnome copy-pasted it from Windows, because otherwise where did that idea come from, into multiple distinct projects simultaneously? Windows has always had Ctrl+Alt+Del to log on; Ubuntu hasn't had a precedent of having to do something to get to the logon prompt, IIRC.
Of course not.
It's the same with Win10. I was super annoyed they added one more step to a very frequent action, for no benefit on a PC. Hopefully I don’t have to use Win10 too much, but this is symptomatic of the mobilification of computers.
"Hey wow, that looks pretty great! I know where the buttons are, I can quickly scan them and it's clear what they do! It isn't a grey on grey tiny soup, they are distinct and clear, this is great. When is this version shipping? It fixes everything!"
Apparently almost everyone agrees but somehow we're still going the wrong way, what's going on here? Why aren't we in control of this?
At this point it feels like a prank that's been going on for a quarter century.
You're not wrong, I wish you were but you're not. By any measure GIMP is a dog of a program. I wish it weren't so as I stopped upgrading my various Adobe products some years ago, but alas it is. It would take me as long as this 'The Decline of Usability' essay to give an authoritative explanation but I'll attempt to illustrate with a few examples:
1. The way the controls work is awkward: increasing or decreasing, say, 'saturation' is not as intuitive as it is in Photoshop, and the dynamics (increase/decrease etc.) just aren't as smooth as they ought to be. Sliders stick and don't respond immediately, which is very distracting when you're trying to watch some attribute in your picture trend one way or the other.
2. Most previews are unacceptably slow; they really are a pain to use.
3. The latest versions have the 'Fade' function removed altogether. I use 'Fade' all the time and I don't like being told by some arrogant GIMP programmer that the "function never worked properly in the first place, and anyway you should use the proper/correct XYZ method". You see this type of shitty arrogance from programmers all the time.
4. GIMP won't let you set your favourite image format as the default; you're forced to save in GIMP's own XCF format and then export your image to, say, the required .JPG format from there. (I understand the reason for this, but there ought to be an option to override it; if the GIMP developers were smart they'd offer the option at various scopes, for instance 'this session only'.)
5. As others have mentioned, there are icon and menu issues; menu items aren't arranged logically or consistently.
Essentially, GIMP's operational ergonomics are terrible, and there's been precious little effort from GIMP's developers to correct them. (GIMP's so tedious to use that I still use my ancient copy of Photoshop for most of the work; I only use GIMP for some special function that's not in Photoshop.)
 The trouble is most programmers program for themselves—not end users, so they don't see any reason to think like end users do. (I said almost the same thing several days ago in my response to Microsoft's enforcing single spaces between sentences in MS Office https://news.ycombinator.com/item?id=22858129 .) It doesn't seem to matter whether it's commercial software such as Microsoft's Office, or open software such as the GIMP or LibreOffice, etc., they do things their way, not the way users want or are already familiar with.
Commercial software is often tempered by commercial reality (keeping compatibility etc.) but even then that's not always so (take Windows Metro UI or Windows 10 for instance, any reasonable user would have to agree they're first-class stuff-ups). That said, GIMP is about the worst out there.
"At this point it feels like a prank that's been going on for a quarter century."
Right again! GIMP developers seem not only to be hostile towards ordinary users, but there's been a long-standing bloody-mindedness among them that's persisted for decades; effectively, ordinary users don't even figure in their schema. Nothing says this better than the lack of information about future versions, milestones, etc. All we ever get are vague comments that don't change much from one decade to the next.
Perhaps it would be best for all concerned if GIMP's developers actually embedded this message in its installation program:
"GIMP is our play toy—it's prototype software for our own use and experimenting—it's NOT for normal user use. You may use it as is but please do not expect it to work like other imaging software and do not bother us with feedback for we'll just ignore you. You have been warned".
>they do things their way, not the way users want or are already familiar with
In my experience, a program that has never had a feature removed (or unintentionally broken) is an exception, not the rule. It takes a lot of effort to keep things working over the years, and if there is no will to maintain that, then those things will disappear.
Users show precious little allegiance to any app when it balks them or they cannot find an easy way to do what they want (run a help desk for a week and you'll get that message loud and clear).
As I see it, there are great swathes of poor and substandard software on the market that shouldn't be there except for the fact that there's either no suitable alternative, or if reasonably good alternatives do exist then they're just too expensive for ordinary people to use (i.e.: such software isn't in widespread use). I base this (a) on my own long experience where I encounter serious bugs and limitations in both commercial and open source software as day-to-day occurrences; and (b), data I've gathered from a multitude of other reports of users' similar experiences.
(Note: I cannot possibly cover this huge subject or do it reasonable justice here as just too involved, even if I gave a précised list of headings/topics it wouldn't be very helpful so I can only make a few general points.)
1. The software profession has been in a chronic crisis for decades. This isn't just my opinion; many consider it fact. For starters, I'd suggest you read the report in the September 1994 edition of Scientific American titled Software's Chronic Crisis, Wayt Gibbs, pp 86-95: https://www.researchgate.net/publication/247573088_Software'... [PDF]. (If this link doesn't work, then a search will find many more references to it.)
1.1 In short, this article is now nearly 26 years old but it's still essentially the quintessential summary on problems with software and the software industry generally (right, not much has changed in the high-level sense since then, that's the relevant point here). In essence, it says or strongly implies:
(a) 'Software engineering' really isn't yet a true engineering profession in the way that chemical, civil and electrical engineering are, and the reasons for this are:
(b) As a profession, 'Software engineering' is immature; it 'equates' [my word] to where the chemical profession was ca 1800 [article's ref.] (unlike most other engineering professions, at best it's only existed for about a third to a quarter of the time of the others).
(c) As such, it hasn't yet developed mandatory standards and consistent procedures and methodologies for doing things—even basic things, which by now, ought to be procedural. For instance, all qualified civil engineers would be able to calculate/analyze static loadings on trusses and specify the correct grades of steel, etc. for any given job or circumstance. Essentially, such calculations would be consistent across the profession due to a multitude of country and international legally-mandated standards which, to ensure safety, are enforceable at law. Such standards have been in place for many decades. Whilst the 'Software Profession' does have standards, essentially none are legally enforceable. Can you imagine Microsoft being fined for, say, not following the W3C HTML standard in Windows/Internet Explorer to the letter? Right, in this regard, software standards and regulations are an almighty joke!
(d) Unlike other engineering professions, software engineers aren't required by law to be qualified to a certain educational standard [that their employers may require it is irrelevant], nor are they actually licensed to practice as such. When 'Software engineering' eventually becomes a true profession then these requirements will almost certainly be prerequisites for all practitioners.
(e) With no agreed work procedures or mandated work methodologies across the profession, 'software engineers' are essentially 'undisciplined'. As such, the SciAm article posits that software programmers work more in the manner of artists than of professional engineers.
(As a person who has worked in both IT/software and in an engineering profession for years, I have to agree with Wayt Gibbs' assessment. There are practices that are generally acceptable in software engineering which, if I attempted to equate them to an equivalent circumstance with my engineering hat on, would likely land me in court (even if no one was killed or injured by what I'd done). Here, the rules, the structure—the whole ethos is different, and both ethics and law play much stronger roles than they do in software-land.)
2. You may well argue that even though Computing Science is not as old as the other engineering professions, it, nevertheless, is based on solid mathematics and engineering foundations. I fully agree with this statement. However, without enforceable standards and licensed/qualified software practitioners, the industry is nothing other than just 'Wild West' engineering—as we've already seen, in software just about anything goes—thus the quality or standard of software at best is only that of the programmer or his/her employer.
3. As a result, the quality of product across the industry is hugely variable. For example, take bloatware: compare the biggest bloatware O/S program ever written, MS Windows, with the tiny, fast and highly efficient KolibriOS, built in Assembler: https://kolibrios.org/en/ (here I'm referring to methodology rather than functions — we can debate this later).
4. The commercial software industry hides behind the fact that its software is compiled, thus its source code is hidden from public view and external scrutiny. Its argument is that this is necessary to protect its so-called intellectual property. Others would argue that in the past, loss of IP was never really the main issue, as manufacturing processes were essentially open—even up until very recent times. Sure, it could be argued that some manufacturing had secrets [such as Coca-Cola's formula, which really is only secret from the public, not its competitors], but industrial secrets are normally concerned with (and applied to) the actual manufacturing process rather than the content or parts of the finished product. That's why, up until the very recent past, most manufacturers were only too happy to provide users with detailed handbooks and schematics; for protection from copies they relied on copyright and patent law (and for many, many decades this protection process worked just fine). It's a farce to say that commercial 'open source' isn't viable if it's open. Tragically, this is one of the biggest con jobs the software industry has gotten away with—it's conned millions into believing this nonsense. More likely, the true reason is that the industry couldn't believe its luck when it found that compilers hid code well — a fact that it then used opportunistically to its advantage. (Likely the only real damage that would be done by opening its source is the embarrassment it'd suffer when others saw the terrible standard of its lousy, buggy code.)
4.1 'Software engineering' won't become a true profession until this 'hiding under compilation' nexus is broken. There are too many things that can go wrong with closed software—at one end we've unintentional bugs that cannot be checked by third parties, at the other we've security, spyware and privacy issues that can be and which are regularly abused; and there's also the easy possibility of major corruption—for instance, the Volkswagen scandal.
5. Back to your comment about 'good design not being free'. I'm very cognizant of the major resource problems that free and open source software developers face. That said, we shouldn't pretend that they don't exist, nor should we deliberately hide them. I accept that what we do about it is an extremely difficult problem to solve. My own suggestion to up the standard of open software is a sort of halfway house where cooperatives of programmers would be paid reasonably acceptable remuneration for their contribution to these major open projects. In turn, there would be a small nominal fee (say $5 to $20) levied on large-scale open software programs such as GIMP, LibreOffice, ReactOS etc. to ensure that development could go ahead at a reasonable pace (the projects otherwise would be revenue neutral—there would be no profits given to third parties).
Let me finish by saying that whilst commercial software has the edge over much free/open software (for example MS Office still has the edge over LibreOffice), that edge is small and I believe the latter can catch up if the 'funding/resource' paradigm is changed just marginally. Much commercial software such as MS Office is really in a horrible bloated spaghetti-code-like mess, and with better funding it wouldn't take a huge effort for dedicated open software programmers to beat their sloppy, secretive counterparts at their own game. After all, for many commercial programmers, programming is just a job; on the other hand, open software aficionados are usually doing it for the love of it—and that's a true strategic advantage.
I firmly believe that for open software to really take off it has to be as good as and preferably better than its commercial equivalent. Moreover, I believe this is both possible and necessary. We never want a repeat of what happened in Munich where Microsoft was able to oust Linux and LibreOffice. With Munich, had it been possible to actually demonstrate that the open code was substantially and technically superior to that of Microsoft's products, then in any ensuing legal battle Microsoft would have had to lose. Unfortunately that was not possible, so the political decision held.
One thing is for certain, we urgently need to raise the standard of software generally and it seems highly unlikely that we can do so with the way the industry is currently structured.
This wouldn't work. Most software isn't life-and-death. That's a big difference from bridge engineering, nuclear engineering, and aeronautical engineering.
If you're hiring someone to do Python scripting, there's little point insisting they have a grounding in formal methods and critical-systems software development. You could hire a formal methods PhD for the job, but what's the point? The barrier-to-entry is low for software work. Overall this is probably a good thing. Perhaps more software should be regulated the way avionics software is, but this approach certainly can't be applied to all software work.
If your country insisted you become a chartered software engineer before you could even build a blog, your country would simply be removing itself from the global software-development marketplace.
> compare the biggest bloatware O/S program ever written, MS Windows, with that of tiny, fast and highly efficient Kolibrios OS
I broadly agree, but in defence of Windows, Kolibri is doing only a fraction of what Windows does. Does Kolibri even implement ASLR? One can build a bare-bones web-server in a few lines, but that doesn't put Apache out of a job.
> My own suggestion to up the standard of open software is a sort of halfway house where cooperatives of programmers would be paid reasonably acceptable remuneration for their contribution to these major open projects. In turn, there would be a small nominal fee (say $5 to $20) levied on large-scale open software programs such as GIMP
This doesn't work. It means a company can't adopt the software at scale without implementing licence-tracking, which is just the kind of hassle Free and Open Source software avoids. If I can't fork the software without payment or uncertainty, it's not Free even in the loosest possible sense.
The way things are currently is far from ideal, but we still have excellent Free and Open Source software like the Linux kernel and PostgreSQL.
> open software aficionados are usually doing it for the love of it—and that's a true strategic advantage.
Agree that this can be an advantage. Some FOSS projects are known for their focus on technical excellence. That said, the same can be said of some commercial software companies, like id Software.
> One thing is for certain, we urgently need to raise the standard of software generally and it seems highly unlikely that we can do so with the way the industry is currently structured.
Software firms today are doing a good job of making money. If the market rewards regressions in UI design, and using 50x the memory you really need (thanks Electron), what good would it do to regulate things?
Apparently most people don't care about bloat, and they prefer a pretty UI over a good one. That doesn't strike me as the sort of thing you can tackle with regulation.
UI is hard. It got replaced by "UX", but nobody agrees what that really is. So it boils down to whatever impracticality designers dream up. When UI was easy, there was real research, data backing up claims of improvement, and laid-down rules to enforce some consistency. This became "unfashionable" and was removed.
It was like during the VCR wars of the '80s, when consumers wanted the most features yet the fewest buttons. Then they complained about how you had to basically play Rachmaninoff on their sleek minimal interface to set the clock.
We need to be like other industries; "that's too bad". Seatbelts are inconvenient? "that's too bad". You don't want to stay home during a pandemic because the weather's nice? "that's too bad" ... you want a bunch of incompatible UX goals that leads to trash? "That's too bad".
Sometimes the opinion of an uninformed public shouldn't matter. We don't go to a doctor and pass around ballots to the other people in the waiting room to democratically decide on a diagnosis. Knowing what to not listen to is important.
The UX propellerheads come back with statistics from user telemetry that always agree with them.
UX is the problem — designing “experiences” geared around an 80/20 approach is substituted for the harder task of building tools that work.
Fred Rogers, 1984:
"I have always felt that with the advent of all of this new technology that allows people to tape the 'Neighborhood' off-the-air ... they then become much more active in the programming of their family’s television life. Very frankly, I am opposed to people being programmed by others. My whole approach in broadcasting has always been ‘You are an important person just the way you are. You can make healthy decisions’ ... I just feel that anything that allows a person to be more active in the control of his or her life, in a healthy way, is important."
There are definitely non-crazy ways of doing this ... but it requires what, at first blush, would appear to be a complicated interface.
Ha, reading that link now one feels a delicious sense of irony. Imagine how Sony would react today seeing that it has become one of the biggest purveyors of video/movie content. ;-)
Honestly I don’t know why VCRs are so hard to program but all of the buttons can’t help. I might be getting old but the Roku remote seems about right as far as complexity in the device goes and I can see how a nice interface with relative timekeeping could do what you need without a clock per se. Inertial guidance for timekeeping? A self winding DVR?
Two things off the top of my head that I can think of: 1) knowing that a blinking number indicates some kind of selection, and more generally 2) seeing the UI as a glimpse into a larger abstract space that can be navigated. Or in other words, having used computers for many years, what my parents saw as just a blinking word, I would see as a menu where up/down/left/right had meaning.
There's also some more abstract thinking involved there - for me it's very spatial so I think of it as being able to keep track of your place in this 'abstract map'. You had to learn some non-obvious things like "if the down button stops working, it probably means I'm at the 'bottom' of my available choices" or "if I start seeing the same choices again, it means I have 'wrapped around' and in a logical sense I'm back to where I've been before".
I actually remember thinking something like this as a child when we got a VCR. I think I remember that realization that "this is a menu I can explore". The exploratory skills you pick up when you have to figure out how to use something technical generalize really well to other technical things.
TL;DR: I think VCRs were hard to program because the limited UI of buttons and a tiny screen meant that you actually needed a fairly built-up mental model of the process to keep track of what you were doing.
The biggest “what were they thinking” part for me is why they cram a whole GUI with config options and menus into a clock when almost every use case for a VCR is already connected to a perfectly workable display which is much better suited to a GUI in the form of the TV. Later VCRs had onscreen rather than on-device GUIs but by then institutional momentum was too far along to redesign the remote when they moved the GUI out of the device and onscreen. Truly a missed opportunity.
I don’t know anyone involved in any VCR product. If I did I’d be asking them a lot of questions. But I have a hard time thinking they meant to make it so hard. They probably were clapping each other on the back and congratulating each other. They were inventing future ways of using content and for that they deserve praise. They just sucked at understanding how hard it is for non experts to put themselves in the mind of experts, someone whose inner mental world has jarringly different contours and whose mental model of reality may have little to no correspondence whatsoever with their own.
Many people would keep the manuals near the TV, so they could remind themselves how to use the rarely used features.
The Panasonic VCR we had included a barcode reader in the remote. The printed TV guide has barcodes for each program. This interface was very easy to use -- scan, and press "transmit".
Edit to add a link to an advert: https://www.youtube.com/watch?v=tSGUbE1v5RA -- the sheet of timecodes was necessary if you didn't have a TV guide with them printed, as shown here: http://www.champagnecomedy.com/panasonic-barcodes-saving-you...
That's the outrageous point. You shouldn't need manuals to operate ordinary domestic appliances! If you do, then you automatically know the design is substandard!
(The only reason you should need a manual is for some info or maintenance function that's not normally associated with its user-operated functions.)
So now my wristwatch is sitting on the desk.
The hard part of programming the Beta (and early VHS) for me was getting the family to leave the tuner on the channel I/we wanted to record.
I hate that ovens and microwaves have clocks on them. I don't need two devices in my kitchen to tell time. It's ridiculous since they're usually next to each other, and most of the time they have different displays. Just because there is an LCD/whatever doesn't mean it always has to display something!
At least on the latest power outage, my microwave stopped showing the time. The oven still flashed, so I set that time and only have one clock in my kitchen now.
Even my vehicle has two clocks in it, one on the instrument cluster and one on the infotainment system. So stupid!!!
What's even more crazy, increasingly often I've started to encounter ovens that don't work until you set the clock. I.e. if the clock was reset and is blinking, the heater won't turn on. Took me a while to figure it out the first time I saw it.
It's too bad time sync over power lines didn't catch on widely (or broadcast over the radio). It would still be saving everyone from changing their digital clocks during DST.
They're common in Europe, on midrange bedside clocks for example, and on typical office/school clocks.
I remember we were foiled by one at school, when someone set the clock 15 minutes forward when the teacher wasn't looking. The hands could only move forward, so a few minutes later they started spinning 11½ hours further forward to set the correct time.
In the 1990s I encountered a hotel TV with that feature. It had a built-in clock with hands (not on screen), which was also the alarm clock for the room. No one had set it up, and I spent about ten minutes with the remote getting it to find a station with time info and set the clock. Then the "alarm set" function on the remote would work and I could set my wake-up time.
Given that nobody did it, it would appear that even though people like Mr. Rogers were making the legal case for time-shift programming, the industry must have assumed it was a minor use case.
https://en.wikipedia.org/wiki/Extended_Data_Services (NTSC) looks like a 2008 standard and most PBS stations provide "autoclock" time data
https://en.wikipedia.org/wiki/Radio_Data_System (FM radio) I figured this had an implementation considering text has been around for years. Amazingly, I don't think I've ever seen a car stereo use it to set the time!
https://en.wikipedia.org/wiki/Broadband_over_power_lines I know this has been around but has had a lot of hurdles. I figured the current time might be a simpler thing.
The only reliable time-setting tech I've seen integrated is GPS--I'm not 100% sure how time zones work with it, but it does know your location.
Autoclock setting was done for VCRs. It just happened much later than the case in question.
You mean the same industry that was trying to make time-shifting (and VCRs in general) illegal?
If they'd included a backup battery to retain the clock, I suspect it'd have been less of a thing.
On the contrary, the clock needs to be super obvious precisely because it's a pain to set. Otherwise you wouldn't notice until your recordings were messed up.
Correct, that's the 2000+ year old axiom of ignoring the lowest common denominator and seeking the best advice available.
That said, if you're designing software for use by users who are 'lowest common denominator' then, a priori, you have to make it to their measure. If they cannot understand what you've done then you've wasted your time.
(I just now cleaned it up, although there are some icons you can't get rid of.)
It's not at all anymore about presenting consistent mental models, it's solely about the ease or difficulty with which particular isolated tasks can be performed.
It's also not automatically the goal to make all tasks as easy as possible. Instead, discoverability and "friction" are often deliberately tuned to optimize some higher-level goals, such as retention or conversion rates.
This is why we have dialogs where the highlighted default choice is neither the safe one nor the one expected by the user, but instead the one the company would like the user to take. (E.g. "select all" buttons in GDPR prompts, or "go back" buttons if I want to cancel a subscription.)
You can see that quite often in browsers as well, often even with good intentions: Chrome, for a time, still allowed installing unsigned extensions but made the process deliberately obscure, and in both Chrome and Firefox, options are often deliberately placed in easy- or hard-to-discover locations. (E.g. a toggle on the browser chrome, vs. the "settings" screen, vs. "about:config", vs. group policies.)
I will readily admit that, in collective number of clicks and screen time, 37-year-old men with advanced degrees in computer science are a super small minority.
But who is the majority then? Who spends the most time on, say, Reddit and YouTube? Children! Yes, people who we know are dramatically cognitively different from adults.
Why does YouTube keep recommending videos I've watched? That's what a child wants! Why does Reddit's redesign look like Nickelodeon?
There isn't one user and one interface that's right for everyone when we're talking about 5 year olds, 50 year olds, and 95 year olds.
We can make interfaces adaptable to the screen; we should also do the work to make them adaptable, at fundamental interaction levels, to the person using the screen.
And not in a clever way, but in a dumb one.
For instance, here's how you could ask YouTube:
"We have a few interfaces. Please tell us what you like to watch:
* Cartoons and video games
* Lectures and tutorials
And that's it. No more "learning", that's all you need to set the interface and algorithms.
Let's take Wikipedia: it could be broken up into children's, general public, and scholar versions. Some articles, I'm sure, are correct but are way too wonky and academic for me to understand, and that's OK. There's nothing to fix; I'm sure it's a great tool for professionals. However, there should be a general public version.
"Simple English" does a pretty good job. Obviously it's a mix of children/public but for science/mathematical topics where I'm looking just to verify my basic understanding of something, swapping over to Simple English usually gives me what I was looking for if the main article is immediately going down into technical rabbit holes.
This proposal quickly falls apart because your categories are ill-defined based on your preconceptions. I watch a ton of lectures about video games on Youtube (e.g. speed run breakdowns or game lore theories). Do I choose the "Cartoons and video games" bucket or the "Lectures and tutorials" bucket?
"We've found adults and teens like different parts of youtube and use it differently. We want to make it the best for you. You can switch at any time, but tell us what best describes you:
* I'm an adult
* I'm not an adult.
YouTube has this "for kids" app, which came out after I first started pointing out this difference in earnest around 2013 (https://play.google.com/store/apps/details?id=com.google.and...), but it's not right, and they clearly still cater their main interface to the habits of children who watch the same video hundreds of times - the insane repetition is part of learning nuance and subtlety in the context of content they don't have to actually pay attention to. It's all about learning the meta, super important. They know what happens; it's the silence in between they're excited about - that's the nature of play.
This app instead silos the kids into a playskool interface, great for people under 7 or so, but like our playground reform, we've made it completely unappealing for the 8-22 or so demographic (when I was a kid and there were ziplines into a bank of tires, you bet there were 20 year olds lining up to have a good time on those, we all have a need for play; freedom to err wrapped in relative safety).
Instead, it's data-driven UX for adults and data-driven UX for children - it's about separating the data, not a PTA-acceptable UX for overprotective parents.
The easiest thing to do is just allow them on YouTube with no filter.
The middle ground is the play app. Weird stuff sometimes gets through, but usually it's more like someone dressed as a pretend princess. The good thing is it's never really a murder scene or something equally horrible (which could pop up on youtube.com).
What would you do as a parent?
I would avoid YouTube, unless you set up the videos yourself, until 7 or 11. After that it depends on the child.
It kinda has this for specific subjects:
It already exists.
Yes, and Mozilla has become much worse about this. Turning off "Pocket Integration", or "Shared Bookmarks", or "Mozilla Telemetry", or "Auto update" becomes harder in each release.
Native macOS apps get to be a bit clever for this, in that there are two kinds of button-highlight state per dialog (the "default action" button, which is filled with the OS accent color; and, separately, the button the tab-selection starts off highlighting, which has an outline ring around it.) This means that there are two keys you can mash, for different results: mashing Enter results in pressing the default action (i.e. colored) button–which Apple HIG suggests be the "Confirm" option for dangerous cases; while mashing Space results in selecting the initially-selected (i.e. outlined) button—which Apple HIG suggests be the "Cancel" option for dangerous cases. I believe that, in cases where the action isn't irrevocable, Apple HIG suggests that the default-action and initially-selected attributes be placed on the same button, so that either mash sequence will activate the button.
I really wish that kind of thinking was put into other systems.
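The closest analogue I know of on the web (admittedly looser, and the markup/ids below are invented) is the split between a form's default submit button, which Enter triggers while you're typing in a text field, and whichever control you explicitly focus when the dialog opens, which is where Tab and Space start:

    // Sketch: Enter from inside the text field still fires the form's default
    // (submit) button, while initial focus is parked on the safe Cancel button.
    function showRenameDialog(): void {
      const dialog = document.querySelector<HTMLDialogElement>("#rename-dialog");
      const cancel = document.querySelector<HTMLButtonElement>("#cancel-btn"); // type="button"
      if (!dialog || !cancel) return;

      dialog.showModal();
      cancel.focus(); // keyboard starts on the non-destructive choice
    }

It's not the same two-keys-two-meanings guarantee AppKit gives you, but it at least separates "what Enter does" from "where the keyboard starts".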
What is different in Win32, however, is that if any button is focused, it is also made the default for as long as focus is on it (or, alternatively - Enter always activates the focused button). Thus, there's no visual state for "focused, not default", because there's no such thing.
The distinction still matters, though, because if you tab away from a button to some other widget that's not a button, the "default" button marker returns back to where it originally was - focus only overrides it temporarily.
This can be conveniently explored in the standard Win32 print dialog (opened e.g. from Notepad), since it has plenty of buttons and other things on it. Just tab through the whole thing once.
There were computer keyboards which distinguished between the key that ended entry in the current field and the key that, for example, performed the desired action behind the whole dialog. Just as today it is common to expect that Esc will cancel the dialog (or the entry form), there was a key that one knew would "proceed" (GO) independently of which field the cursor happened to be in at the moment. On those operating systems, Enter always did just the non-surprising "finish entering the current input field, skip to the next", and GO signaled the end of that process and the wish to use everything that had been entered up to that point. It's particularly convenient when entering a lot of numerical data on the numeric keypad, where Enter also just moves to the next field.
I think that concept was right, and better than what we have today. Entering what are basically "forms" in any order (filling the dialogs) and proceed from any point is a basic task and could have remained less surprising.
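For what it's worth, that behaviour is easy to recreate today. A rough TypeScript/DOM sketch (using Ctrl+Enter as a stand-in for the old GO key; the function name is mine):

    // Plain Enter just finishes the current field and moves to the next one;
    // the "GO" chord submits the whole form with whatever has been entered.
    function wireDataEntryForm(form: HTMLFormElement): void {
      const fields = Array.from(form.querySelectorAll<HTMLInputElement>("input"));

      form.addEventListener("keydown", (e: KeyboardEvent) => {
        if (e.key !== "Enter") return;

        if (e.ctrlKey) {
          form.requestSubmit();  // GO: use everything entered so far
          return;
        }

        e.preventDefault();      // plain Enter never submits
        const i = fields.indexOf(e.target as HTMLInputElement);
        const next = fields[i + 1];
        if (next) next.focus();  // just advance to the next field
      });
    }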
IOW following metrics optimising for local maxima instead of looking at the big picture in a non-zero sum game. Each task is made easier by itself but in doing so creates a model in conflict with everything else, making everyone miserable. Nash would be sad.
Now all you have to do is stick a bone through your beard and pronounce yourself a "UX Guru" and off you go.
And it still hasn't been fixed.
I'm not a big believer in conspiracies, but if there's one I'd not dismiss out of hand, it's that Adobe or some other company has been ensuring that GIMP has never been improved or become a viable replacement for some PS users.
There is obviously a large potential market for a lower cost option for light users of Photoshop who don't want a monthly subscription to Creative Cloud.
Maybe they secretly paid off open source devs to obfuscate the code so much that any potential volunteers would have too much trouble finding a way to re-architect the UI without years of unpaid work.
When I see so many great improvements to complex software released to the community on GitHub, along with the potential for some startup to fork GIMP, fix its UI, and charge some sort of support fee like a lot of companies do with OSS, I just find it very strange that GIMP's UI is still in such bad shape after two decades of constant complaining by users.
It wouldn't surprise me if Microsoft did or does something similar with the OpenOffice code base. So many compatibility and usability problems just seem to languish for decades, while you'd think some company could find a way to make money by fixing some of the biggest issues holding back light users of Office 365 who don't want to pay for subscriptions.
I read an interview with the maintainer and it sounds like he's put in a lot of work but as he says it's a "labour of love". I wish someone was paying him, even surreptitiously!
A designer that can't code will never start a software project so I guess that it's uncommon for them to get involved in one for free.
Then there are developers and designers involved in open source because their companies pay them for that. Gnome's designers are listed at https://wiki.gnome.org/Design#Team_Members
Two of them work at Red Hat, one at Purism, I didn't find any immediate affiliation for the other two.
Is there any company employing them? Because I find the user interfaces from the '80s, '90s, even '00s much more usable than today's crap. Remember the Help button? Remember buttons? Why does Windows 10 look the same as, yet behave worse than, Windows 1.0?
Yes, I had to explicitly set the way I want the icons to look in settings. It wasn't hard, and one of the bundled sets worked for me.
Maybe it's because I'm a long-time user and I know my way around, and where in the settings to look.
One of the problems of shipping UIs is setting good defaults. Maybe Gimp does not do a great job here; I should try a clean installation.
What happened? Search for images: "gimp 1.0" vs "gimp 2020". Wow.
> I've been using GIMP for years
I think usability for users experienced in the software and usability for new users are two different things. I believe an important part of usability is discoverability, which is probably better judged by new users than by experienced ones.
Holy cow! There's even the "classic" theme right there. Wish I knew this a year ago.
I've used it a number of times and do not find it any harder than any other piece of software — doing complex operations where you are not sure what you want to do (or especially, how is that called) is hard, but that's hard in an IDE as well.
I do not do much, but I do not do little with it either — I am perfectly happy with layers, selection tools, simple painting tools and the rudimentary colour correction I may want to do. And one can claim that the hamburger-menu-like approach started with Gimp, fwiw (right click on your image to get to a full menu, though you still had that menu at the top of your image window).
Two things have always been a requirement for proper Gimp use: "virtual desktops" — a dedicated one for Gimp — and IMO, "sloppy focus" (window under the pointer gets the focus right away), but I've been using those since at least 2001 or something when I first saw them on Sun workstations, so I probably never had trouble with extra clicks required to focus between toolbars and other dialogs.
For creating original artwork, I find any graphical approach too limiting — I _do_ want an easy approach of UIs, but I frequently think in terms of proportions and spatial relationships when drawing ("I want this to be roughly twice the size of this other thing and to the left") — I always try to imagine this combined tool that would fit my workflow, but then I remember that I am probably an outlier: I may have been spoiled having done mathematical drawings as a primary schooler in Metafont and later Metapost (for colour, or rather, grayscale :)), and being a developer since 7th grade, where it's hard for you to come to grips with how suboptimal doing precise drawings in any software is (I've done some minor uni work in AutoCAD too).
I very much do not.
Gimp used to be horrendous to use. It still has some usability issues, but it's become something I can use without risking my mental health.
The unfortunate side effect of this is that grumpy old users who were trained to accept the previous, highly idiosyncratic UI started to complain because they had to relearn stuff. But it's worth it. And it opens up Blender to more users.
But the Mac almost never had just icons in the UI. There would usually be an icon and text. With little space you'd revert to text only.
Apple had a team of usability experts. Others... did not. So they just copied something that looked cool and was easy to implement.
That it cut down on internationalization efforts surely didn't hurt either.
The menu bar was always just text.
The toolbars offered several options: large/medium/small/no icons, text/no text.
(Not all of those options were always available.)
This let you progress as a user of a system. When you first experienced it you could use the large icons with text because it made the things you were searching for stand out. As you learned the icons you could start shrinking them, and eventually remove the text. This opened up the toolbar to fit many more actions (often less frequently used). And the tool tip from hovering remained throughout, so in the worst case of an ambiguous (or unknown) icon, you could hover over it and learn what it did. Additionally, you'd often get the shortcut for the action when hovering over the button (or viewing it in the menu bar).
Many contemporary applications don't provide their users with this notion of progression.
The widgets that can be placed on that are buttons with icon & text (either of which could be hidden) that can be regular, toggle, or drop-down; and text and combo boxes. Well, and custom widgets, of course, but the point is that they were different from regular widgets in that they were toolbar-aware. IIRC this all first came as part of IE "common controls" library, even before it was shipped as part of the OS.
So then a top-level menu is just a band with a bunch of buttons with drop-downs and hidden icons. A regular Win9x-style toolbar is a band with a bunch of buttons with icons but hidden text, plus an occasional text box or combo box. And so on.
But the real nifty thing about these is that they could be customized easily, and it was traditional for Win32 apps to expose that to the user. At first it was just about showing/hiding stuff, but Office especially gradually ramped things up to the point where the user could, essentially, construct arbitrary toolbars and menus out of all commands available in the app, assign arbitrary shortcuts to them etc. So if you wanted icons in your main menu, or text-only toolbars, you could have that, too! This wasn't something that regular users would do, but I do recall it not being uncommon for power users of a specific app to really tailor it to themselves.
Visual Studio has it to this day, and takes it to 11 by further allowing to customize context menus through the same UI: https://docs.microsoft.com/en-us/visualstudio/ide/how-to-cus...
While there is no disputing that Windows copied heavily from the Mac UI, the actual feel of that interface was also strongly influenced by IBM Common User Access (CUA).
Not only did Windows try to follow those CUA rules, Microsoft encouraged Windows applications to also follow those rules.
That meant that from a user perspective, the Windows experience was fairly consistent irrespective of which application was being used.
Great! That UI was horrific. Hitting a key like the spacebar didn't unlock it.
Aside: I completely broke the Ubuntu login screen yesterday. I did `apt remove evolution*`. Unfortunately the login screen depends on evolution-data-server, so I couldn’t login from lock screen, and after reboot it dropped me to TTY!! Gnome is just getting crazy - it would be like Windows login depending on MS Outlook! Gnome is a big ball of interdependencies, becoming more like Windows. I get it, but I don’t like it. Edit: FYI: fixed in TTY3 (ctrl-alt-F3) by `apt install --reinstall ubuntu-desktop` from memory.
I suspect the OP already had the ubuntu-desktop package removed for some other reason, and that gdm had no direct dependency on evolution-data-server: apt only removes dependencies that no other still-installed package depends on. That might still mean a packaging bug (though at least on 18.04, attempting to remove evolution-data-server prompts me that it's going to remove gdm3 too; sure, the warning is short and easily missed. Attempting to remove evolution does not try to remove evolution-data-server, since a bunch of other stuff, such as gnome-calendar, depends on it).
In any case, apt will prompt you about all the packages you are about to remove (unless you pass "-y(es)" to it).
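If you want to see the blast radius before saying yes, a dry run costs nothing. A rough sketch, reusing the package names from the anecdote above:

    # simulate the removal: prints everything apt would remove, changes nothing
    apt-get -s remove evolution-data-server
    # list the installed packages that depend on it
    apt-cache rdepends --installed evolution-data-server

Whether it's gdm3 itself or something like gnome-shell sitting in between, the dependency chain shows up there before anything actually gets uninstalled.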
Eventually made my own, and the key element? No icons at all. Just text - potentially small text - on the buttons. Turns out, being something you spend your entire life reading, text works great - within a sparse set, you can resolve exactly what a word is from letter shapes even if you can't directly read it, and if you don't know what something is you can just read it.
No one ever had any problem using it, even if they'd never seen it before, because every button said exactly what it did.
Actually, that sounds completely sane.
"None of us are as stupid as all of us."
I've been complaining about that since 1983, when an Apple evangelist came to my workplace to show off the Mac's icons. Nobody was able to guess what the box of Kleenex icon was, much to the frustration of the evangelist. Of course, there wasn't a Google then, but how do you look up a picture in a dictionary?
We've reverted to hieroglyphic languages. (Ironically, hieroglyphs evolved over time into phonetic meanings.)
No, they didn't. The original creators of hieroglyphs knew how to use them to spell phonetically, but they didn't do it that often, they were satisfied with the old system (just as Chinese are now). It was a job of other people, who actually didn't know how to use hieroglyphs properly, to build a functioning alphabet on top of them. Two systems coexisted for some time, and then hieroglyphs went into decline with the whole culture that supported them.
Well to be fair Blender is a professional tool. It is expected that users read the manual and learn the shortcuts, etc. Discoverability is something that should not be optimised for in a tool for professionals like Blender.
Every single icon that makes sense to you now (the floppy disk, the binoculars...) does so because you learned it a long time ago; it's funny that you can now find YT videos explaining where that weird blue icon for the "save" function comes from.
The images are just a mnemonic device - in the sense that the sign is partly related to the meaning (the binoculars could very well mean "zoom in" in an alternative world). Certainly a stronger connection works better because it helps to remember, but they are not meant to help with "understanding" what the button does.
It is the same deal as with keyboard shortcuts. ctrl+S is Save, but you know that ctrl+V is Paste and it has absolutely nothing to do with spelling.
ZXCV is positional; [C]opy having a nice mnemonic is more of a happy coincidence than a design decision in its own right.
ZXCV is actually half positional/mnemonic, half graphical-as-letters, a bit like old-fashioned smileys: X for cut looks like opened scissors, and V for insert looks like the downward-pointing “insert here” arrow-like mark editors and teachers use(d?) to scribble onto others’ texts.
"Any Ubuntu 19.x" because non-LTS Ubuntu releases come out every six months, so there was Ubuntu 19.04 and Ubuntu 19.10, but never an "Ubuntu 19": they are never as polished or as stable as LTS releases, and are only supported for 9 months, forcing you to update regularly.
If you are looking for a more stable experience and you are not chasing the latest features, you would probably be better served with LTS releases which come out every 2 years (they have hardware updates with newer kernels included every once in a while, so you do not have to worry about using them on latest laptops either).
If you want the most stable route, go with the LTS point (.1) release. E.g. I only update my servers when 18.04.1 or 20.04.1 is out.
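If you want the upgrade tooling to follow the same policy, I believe the knob is the Prompt line in /etc/update-manager/release-upgrades; a sketch, worth double-checking on your release:

    # only offer LTS-to-LTS upgrades
    sudo sed -i 's/^Prompt=.*/Prompt=lts/' /etc/update-manager/release-upgrades
    # with Prompt=lts, the next LTS is typically only offered once its .1 point release is out
    sudo do-release-upgrade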
People who switch between several languages just can't use Ubuntu because of this, read some testimonials on the bug linked above.
This is just one of the pain points though. There were many.
Of the non-standard settings, I've got "make Caps lock an additional Ctrl" and "each window keeps its own layout" on. The rest, including keyboard switching shortcut (Meta+Space) is using the defaults.
I simply press the shortcut and start typing, and it works as expected; if I find the time, I'll debug further and report on the LP bug, but I just wanted to note that I do not experience any of the problems mentioned.
Input methods are a separate concept from keyboard layouts (XKB), and I only ever use XKB stuff. Input methods load separate libraries which interpret keycodes, and are commonly generic enough to allow "external" complex input definitions (think Arabic or CJK) — not sure how fast they used to be before, but perhaps combining IM and layout selection is the culprit.
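For reference, those settings are plain dconf keys, so they can be set (and inspected while debugging) from a shell. A sketch of my setup; the key names are the stock GNOME ones, but verify them with `gsettings list-keys` on your version:

    # Caps Lock as an extra Ctrl
    gsettings set org.gnome.desktop.input-sources xkb-options "['ctrl:nocaps']"
    # every window keeps its own layout
    gsettings set org.gnome.desktop.input-sources per-window true
    # layout-switching shortcut (Super+Space here)
    gsettings set org.gnome.desktop.wm.keybindings switch-input-source "['<Super>space']"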
(Interesting aside: I complained to someone on the Mac Safari team that it was difficult to search open tabs, and he told me that apparently this feature already exists! You go into the tab overview, and…just start typing. A little search bar will pop into appearance in the top right corner. Why it couldn't just be there and have keyboard focus from the start, I have no idea…)
So it's a hard-to-discover feature, and a misfeature unless you elect to keep this behavior forever.
It’s a shame really because these undocumented features mean 99% of people WON’T ever use them. Isn’t that counterproductive to engineers?
Why aren’t employees speaking out against this?
Can't please everyone I suppose.
No, it required pressing some button to show the password box. Just like windows. If you don't have a keyboard you can also swipe up instead.
I do feel that some modern changes are annoying and unnecessary though and the instances I see that increase as I get older. I just always try to check myself and analyze how much of that anguish is just from being used to something or not.
I will finally say that the examples here of inconsistency between evince and MPV are just inexcusable. You can't both break expectations AND be inconsistent; it's like the worst of both worlds.
So, no, just having a bunch of magic unlabeled buttons and saying "use the keyboard" isn't good usability. The part that kills me is that a lot of these applications don't even really have shortcut key guides, so you don't even know the magic key sequence for something until you discover it by accident.
Worse are the crappy programs that break global key shortcuts by using them for their own purposes; Firefox, for example, seems to manage this on a pretty regular basis. Want to switch tabs? Ctrl-Tab... oh wait, it doesn't work anymore; cross your fingers and hope someone notices and fixes it (which they did).
Not even Windows itself has, anywhere that I could find, a list of all the keyboard shortcuts. I find multiple lists, each with a different random subset of whatever the actual set is.
Sometimes I'll hit wrong keys, and something interesting happens. I don't know what I did, so it's "oh well".
You try to get around it by using an editor that shows tabs or an editor that re-indents the code, but plenty of editors (notepad, nano, vi, emacs) don't show tabs as different from spaces by default.
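When I don't trust the editor, I let the shell tell me. A quick sketch, assuming GNU grep/coreutils and a hypothetical file name:

    # flag every line containing a literal tab, with line numbers
    grep -nP '\t' offending_script.py
    # or dump the file with tabs rendered as ^I and line ends as $
    cat -A offending_script.py

(In vim, `:set list` does the same job inside the editor.)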
The "Save/Open" button in the file dialog boxes is in the title bar, which is the dumbest thing I have ever seen. Dialog boxes get tied to windows, so when I try to move the dialog out of the way to see my work, it drags the whole damn window. (Some of this is mentioned in the TFA.) I think a lot of these decisions were Gnome-driven, but still... stick with 16.04.
And I like the cohesiveness and integration of GNOME, although I had to do a hell of a lot of customization to mold it into something I could tolerate.
Fortunately, Cinnamon is just an apt-get away, handles both monitor hotplugging and closing the laptop lid sanely, and works the way I expect a desktop to work. I've settled on Xubuntu+Cinnamon as my go-to when setting up a desktop or laptop.
I also just noticed that there's an Ubuntu Cinnamon which might be right up my alley as well.
apt install cinnamon
Once installed, I logged out, and then picked Cinnamon from the session selection menu (the little gear near the upper right corner). It comes right up, though it won't pick up your preferences from XFCE.
I hadn't realized there was now a Cinnamon spin - is that still in testing? It's not on Ubuntu's list of flavors.
Installation was rock-solid, though. I only had to install synaptics over libinput (libinput was causing me physical pain because I had to press the touchpad hard all the time, and because of the lack of kinetic scrolling).
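For anyone wanting the same switch, it was a single package on my install; a sketch, assuming an X session (as I understand it, the synaptics xorg snippet is ordered after libinput's, so it takes over the touchpad once installed):

    # classic synaptics touchpad driver; log out of X or reboot afterwards
    sudo apt install xserver-xorg-input-synaptics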
Seriously considering alternatives.
It was quite easy to install and everything worked out-of-the-box. I just customized some widgets, the dark theme, icon colors and now it looks amazing. Best Linux desktop I've used so far.
It's sad, because GNOME has worked very well for me so far, and I've actually seen Ubuntu become the mainstream choice for freelance dev teams in many places over the last couple of years. I feel guilty criticizing other people's hard work, given to me for free, without being constructive, but as far as I'm concerned GNOME shoots for a future and an audience of desktop users that just isn't there, when IMHO the goal should be to preserve what precious free desktop apps we have and not make app development any more difficult or GNOME-centric.
Despite its bad reputation in early days, KDE Plasma 5 nowadays is very lightweight. As in, the resource usage is pretty much on par with Xfce.
A lot of Ubuntu software is now (version 19.xx) only available as "snaps". They make some sense for IoT machinery (the user does not control updates, so they are deploy-and-forget), but I do not want to lose control.
Final straw for me. I am test driving Arch now....
From scratch, I think. My aptitude is no use with pacman.
The only distro I’ve used past teenage is Ubuntu. I alternate stints of maybe 2 years with Windows, 2 years with Ubuntu. First thing I do after installing the most recent LTS Ubuntu is “apt install spectrwm”.
Spectrwm is not even particularly good — everyone tells me to use xmonad instead — but I know how to get it in usable shape in about half an hour. This after many moons of exclusively using Windows.
I've literally spent weeks trying to get back to the level of usability I had on my 14.04 setup -- compiling old/patched versions of software from source because the "improved" versions removed features I depend on or otherwise fucked up the interface (I cannot understand why anyone thought removing typeahead from Nautilus was a good idea!), trying every damned thing I can think of to debug the global hotkey problems (still can't get IME switching to work right reliably... it works for a while after I fiddle with it then just stops working and I have no clue why), and just generally having a bad time.
This is with stock GNOME (on Arch); I think Ubuntu may ship a skinned / modified / older version of it (which can create UI problems).
And Gimp is a mess. Enabling single window mode makes it better.
It was on one of those tiny netbooks with 1024x600. I think I was trying to add a user, and for the life of me couldn't figure it out. Turns out the updated add-user control panel at the time put the Add button at the lower right of a window with a minimum height > 600px and about 400px of whitespace above it, and no resize/scroll bar, so there wasn't any visual indication that there was more to the window.
But, there is a flip side too. I have ~6kx5k of desktop resolution (portrait mode 5k monitors) and very few applications know how to handle that in any reasonable way. Web pages are frequently the worst, nothing like a column of vertical text consuming 10% of the horizontal resolution of the browser that manages to scroll for a page or two. I guess no one reads newspapers anymore, so the idea of having multiple vertical columns of text is foreign.
Give Lubuntu a try.
from the front page
LXDE is dead, and jankier than Xfce.
Use Xfce (if you like jank) or KDE or Cinnamon.
That's not how it works here. From boot I am presented with a list of users; I click or press Enter and type the password. When it's locked/suspended, all I need to do is start typing.
You can also just start typing the password if you want to unlock the machine.
Oh, that has to be some ultra clownish way. I would punch through the display within a week.
Who proposed this, who reviewed and approved it, and on what grounds?
Is there no easy option to get rid of that irritant? If not, I'll have to stay with 16.04 for a lot longer.
I don’t know about you, but I just start typing on my keyboard.
What? I just type my password without swiping anything. I think I've upgraded through pretty much every version of Ubuntu for the last few years, I haven't customized it to speak of, and I've always been able to do this on both my desktop and my laptop.
Have you tried pressing a key?
I use a laptop with an external monitor. When the monitor is not connected, Adobe Reader windows have the title bar out of the screen. The only way to maximize: Alt + space + x.
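Another escape hatch when the title bar is stuck off-screen is to move the window from a terminal; a sketch, assuming wmctrl is installed and the window title contains "Adobe":

    # drag the first window matching "Adobe" to the top-left corner
    wmctrl -r "Adobe" -e 0,0,0,-1,-1
    # or simply maximize it
    wmctrl -r "Adobe" -b add,maximized_vert,maximized_horz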
Only if you use the default UI, which I think is an important distinction to make: I use Window Maker and had no regressions.
The ability to choose your own UI is an important strength of Unix, and one which distinguishes it from macOS and Windows.
It's a strength as well as a curse for Linux distros, not just Ubuntu: the developer of a GUI program has to test whether their app works on your unique setup, and if it doesn't run on one distro or on your specific configuration out of the box, that is already a usability issue.
The Linux ecosystem gives the app developer no guarantee that a KDE-, GNOME-, or Xfce-built application will work consistently on your setup if you change the DE, display manager, etc., so it is harder for the developer to support accessibility features in whatever DE the user runs, and harder to track down the source of an issue, which could be anywhere in the Linux desktop stack.
The inability to "choose your on UI" on Windows and macOS guarantees that GUI programs will be consistent with the accessibility and look and feel features in the OS which makes it easier for app developers to test on one type of desktop OS, rather than X * Y * Z configurations of one Linux distro.
(also lots of phone home crap you have to hunt to turn off)
By comparison, Arch Linux + GNOME is relatively unencumbered.
The up-swipe to log in induces ux rage for me. I haven't yet tried to hunt down a way to shut it off because I just hit the space bar and forget it ever happened.
I don’t agree. It’s important for the user to know a login UI is the real thing. For example, Windows NT used to have you hit Ctrl+Alt+Del to make the credential dialog appear so that any fake lookalike was impossible.
But any app can go fullscreen and draw a fake login screen that you can swipe up to show a fake login form.