- Clear indication of whether something was clickable or not, vs. everything flat and featureless
- Good contrast
- Normal-sized widgets which left enough room for content vs. the huge widgets we use today for some reason (it's weird for me to say this, but Linux desktops are the worst offenders here). I get why they're important on touch-enabled systems but that's no reason to use them anywhere else.
- Useable scrollbars, no hamburger menus
- And, although this one is Windows-specific: the Start menu was something you could actually use.
The state of testing, examination and debate about user interfaces was also light years ahead of what we see today. I was genuinely fascinated by what my colleagues who did UI design were doing, and by the countless models they developed and metrics they used. If it was the same bikeshedding we see today, they sure as hell knew how to make it look like they were having a real debate...
I suspect the reason behind this drop in quality is largely economic. Fifteen years ago, you needed a great deal of understanding of perception, semiotics, and computer graphics, and a remarkable degree of mastery of your tools, in order to produce an icon set. This made icons costly to develop, to the point where it was pretty hard to explain to managers why you needed to pay a real designer a heap of money for a real icon because, dude, just look at every other successful app on an OS X desktop!
Agreed. I strongly dislike the Windows operating system, but I very much miss the Windows user interface, which was very well designed, consistent, and optimized for real work.
One of the things I miss the most is the consistent, universal and wide-ranging keyboard shortcuts. Not just key shortcuts for menu items, but keystrokes that allowed you to move around dialog boxes, resize windows, etc. OSX is largely terrible in this regard, with many common menu items lacking shortcuts ...
I actually find that's what I miss when I use Windows. Windows' universal keyboard shortcuts seem to be limited to window management and some basics like new files, save, cut, copy, paste, etc.
On the other hand, just about every macOS app has the same shortcuts for actions within applications as opposed to without. For items without shortcuts, you can define your own quickly and easily in System Preferences, and those custom shortcuts can apply to just one app or every app.
Not only that, every macOS app puts the shortcuts in the same place, the menu bar. They're all searchable via the Help menu. On Windows, there are more often than not keyboard shortcuts that aren't listed in any menus, so one would have no idea what they are without looking them up. Further, because Windows software tends to be developed in any ol' framework with any ol' user interface, lacking in any consistency, quite a lot of programs don't even implement the standard, universal shortcuts.
The idea of a drop-down main menu with all commands comes from IBM's CUA (Common User Access) guidelines.
If you realized you were doing the same thing frequently and used the keyboard action, then after the first couple of times you learned the muscle memory of how to do it the quick way.
I was going to say "not on macOS, they aren't" because this isn't true for any apps that use Cocoa or Carbon (which is to say, all of them). The only apps that exhibit any weirdness are ones that use cross-platform frameworks like Qt, GTK+, or wxWidgets.
But nowadays, there are loads of Electron and React Native apps that don't even present anything in the global menu bar; if the developers don't think to add them in themselves, basic functions like copy and paste don't work without secondary clicking!
Counterpoint: 99.5% of everything ever created with Visual Basic.
I wish Alfred had a plugin to do this on Mac OS. I remember Quicksilver had something.
Mouse support is generally terrible in OSX.
I'm kinda out of the loop, do they still sell mice with only one button?
I'm not sure where you're getting that from, since Magic Mouse only has one button: https://www.ifixit.com/Teardown/Magic+Mouse+2+Teardown/51058...
It honestly reminds me of the experience of navigating Windows menus with Alt- and progressive keyboard shortcut learning, but flexible enough to handle things like Electron applications that don't even have a Menu bar or settable shortcuts.
Does anyone else use this or something similar?
I would welcome a kit made of, say, 120 key modules (one key per module, real clicky keys, please!) all with the same pinout, with normal + 2x/3x/4x keycaps, and special "key" modules such as trackballs, Thinkpad "nipples", analog knobs, etc., plus a breadboard-style carrier board where I can stick them as I want, plus a decoder board. Then, once the keyboard is ready, I could send a file describing its details to some high-quality 3D-printing/assembly service to purchase a PCB and case for my prototype keyboard and turn it into a real one.
The MacBook Touch Bar is pretty much exactly this: it shows e.g. dialog options as keys. I know it's not what you want (I like mechanical keys too), but it's the first actual extension of keyboards in a long time, and it's convenient sometimes.
There's also the Optimus keyboard, which sounds more or less like what you want, already built.
>Why is there no contextual Info key to show information/help pertaining to the current task?
Usually F1 brings up relevant help on Windows. The actual implementation in most apps is useless though.
I’ve had that page bookmarked for like 5 years now, and it’s never changed/stopped being out of stock. I’m starting to doubt it ever physically existed...
Wouldn't "yes" and "no" be redundant with "enter" and "escape"? Most situations I can think of where you can confirm or abort an action, those actions will map to enter and escape.
As an exercise, I very seriously embraced touch a few years back.
I will not give up keyboard plus mouse, but I am also amazed at what really can be done with touch keyboard and pen.
Lots of people want to expand on that because it carries over to phones and tablets.
I have a Note 8, and can create content on it to a level I did not think possible a few years ago.
I use touch keyboard, voice and pen.
At times I carry a small bluetooth keyboard with touch mouse pad. It is not as fast, but is more robust.
Just because a thing has a CPU and operating system does not mean it needs to have a unified user experience.
Have spent the vast majority of my time on OSX for the past several years, but still reminisce about the wonderful keyboard shortcuts from Windows.
Right next to ctrl+q (something I end up hitting once per session of an app, by definition) is ctrl+a (select all, something I hit tens of times a session), ctrl+w (hit in the browser a good amount), ctrl+1, ctrl+2 (also good browser shortcuts)
I actually don’t need to close the entire program that often! Please don’t make it so easy to hit accidentally at such a high frequency. Bonus points: I switch between QWERTY and AZERTY keyboards a lot. That’s just me, but it causes a lot of extra pain.
Alt-F4 is very nice in that regard.
Ctrl+F4 is a shortcut that closes the current child window/tab. For apps with tabs, this was often duplicated as Ctrl+W.
There's no standard shortcut for exit. I believe that this is deliberate, to avoid people hitting it accidentally. However, the accelerator keys to do it via the File menu are always Alt+F ("File") -> X ("eXit").
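For the curious, the Ctrl+F4/Ctrl+W bindings mentioned above weren't magic: Win32 ships accelerator tables for exactly this. Here's a minimal sketch in C of how an app wires them up (IDM_CLOSE is a made-up command id for this demo, and routing it straight to DestroyWindow stands in for closing the active child window or tab):

    #include <windows.h>

    #define IDM_CLOSE 1  /* made-up command id for this demo */

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
        switch (msg) {
        case WM_COMMAND:
            /* Both Ctrl+F4 and Ctrl+W arrive here as the same command.
               A real app would close its active MDI child or tab. */
            if (LOWORD(wp) == IDM_CLOSE) DestroyWindow(hwnd);
            return 0;
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }

    int WINAPI WinMain(HINSTANCE inst, HINSTANCE prev, LPSTR cmd, int show) {
        WNDCLASS wc = {0};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = inst;
        wc.lpszClassName = TEXT("AccelDemo");
        RegisterClass(&wc);
        HWND hwnd = CreateWindow(TEXT("AccelDemo"), TEXT("Accelerator demo"),
                                 WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                                 CW_USEDEFAULT, CW_USEDEFAULT, 400, 300,
                                 NULL, NULL, inst, NULL);

        /* The accelerator table: two keystrokes, one command id. */
        ACCEL acc[] = {
            { FCONTROL | FVIRTKEY, VK_F4, IDM_CLOSE },
            { FCONTROL | FVIRTKEY, 'W',   IDM_CLOSE },
        };
        HACCEL haccel = CreateAcceleratorTable(acc, 2);

        MSG m;
        while (GetMessage(&m, NULL, 0, 0)) {
            /* Accelerators only fire if translated in the message loop. */
            if (!TranslateAccelerator(hwnd, haccel, &m)) {
                TranslateMessage(&m);
                DispatchMessage(&m);
            }
        }
        return 0;
    }

The Alt+F -> X path is different machinery, by the way: those are menu mnemonics, which the menu system handles for free as long as the item labels contain an ampersand ("E&xit").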
These are all platform UX guidelines, or at least they were back in the day. Of course, apps can and did ignore them, but that was also possible on macOS.
This is much less likely, given that they are set up by default for every new application.
When I used to do tech support, from Win 98 all the way through Win 7 days, I would consistently find that reasonably intelligent human adults had difficulty understanding the Start Menu. If something wasn’t on the first menu, they weren’t going to interact with it. It seemed like such a no-brainer to me — just expand the folders! — but a staggering number of people found it alien and never adapted. Even the idea of a right-click vs. left is too much for many people.
I think the desktop designers should have made a fixed set of top level menus. Only show the non-empty ones, but at least make everyone put apps in them. I'd propose a set including: games, programming, engineering, design, media, office, entertainment, audio-visual, system tools.
I'd also suggest subcategories, particularly for games. If there is only one category, or not that many programs total, it could omit the subcategory level when showing that menu.
Put users first and stop sticking your company names in their menus. Add a little structure and some reasonable heuristics. Done.
I think Linux distros could do this since they have packaging guidelines and huge software repositories.
Edit: While we're talking about laggy keyboard interfaces, AAARGH the Win10 logon screen is awful. You have to hit a key, then wait for the screen to load and be shown on the screen, before you start typing your PIN, or it'll eat the first character of your PIN going from "press any key to log on" to "enter your pin". What the hell is so hard about understanding that keyboard interfaces shouldn't be laggy even if graphics don't quite keep up? Electric typewriters from 1990 got this right!
There's no reason search has to do this, though. Spotlight (the macOS version of this) is able to find results pretty much instantly, and it doesn't show any results until it's searched everything, so the top result never changes. I really don't understand why Windows hasn't been able to do the same thing (and if anything, the win10 search is worse).
I don't see how that follows. If it also searches files, the top result can change if any indexed file changes, which would still break the app launching scenario.
Or is it always showing apps on top?
Result: search, wait 10 seconds for the damn UI to stabilize, tap the result and NOPE, THERE COMES ANOTHER LATE RESPONSE MESSING IT UP AGAIN.
It's sometimes not obvious which category a program belongs to, so you have to meticulously search all categories that could apply.
And you'd only do that if you didn't exactly remember the program name in the first place; otherwise you would have had quicker ways to launch it.
Here the old windows way offers more 'breadcrumbs' to find something.
Many already do such things (most Desktop Environments provide a shell with some kind of software category menu).
Screenshot from my system: https://i.imgur.com/csWWl9Y.png
And this is on Arch, which means that almost all software is unmodified from upstream.
I agree that using Company Name as the top level folder might be the worst option. If I'm looking for some random utility app I downloaded and installed, why should I have to first remember what the name of the company was who made it?
The concern is namespace conflicts. Users may be surprised to find a program has 'changed' when in fact another user installed a similarly named program.
I can't remember the rationale, but it does make it hard to find things.
That was part of the Windows 3.1 guidelines. On Windows 95, Microsoft's guidelines specified that your applications were to appear as single icons under the Programs menu. Unfortunately, almost all developers ignored that and continued doing things the Win3.1 way.
So not only would you have the program, but you'd also have the readme and the uninstaller and anything else that goes along with the program.
When Windows 95 came out, Microsoft wanted to hide all of that. The new guideline was to just put the program in the root level of the Programs menu, leave the uninstaller up to the new Add/Remove Programs control panel, and if you want the readme or whatever, go hunt for it yourself. But unfortunately most developers ignored it. They continued building Windows 3.1-style Program Groups the same way they always did.
And, yes, the Programs menu used the same backend as Program Manager. What appeared in one appeared in the other.
It was even worse than that. The app name usually was a subfolder too. It went like: 'Start -> Programs -> Adobe -> Photoshop -> Launch Photoshop' (Plus 'Open Readme' and 'Uninstall' and whatnot)
If you shave a modern OS down to just the features that you'd need to do your work, the economics of carefully designing a desktop and its icons become more "reasonable" again.
I still think that a clone of the classic Mac desktop, or Windows 3.11 would be great on Linux.
It's a purposeful mess in that regard. They know the control panel is unmanageable for the average user, so they've been building out the Settings app over the last few versions. The settings app is much more like the Settings in iOS and Android and purposefully laid out.
It makes perfect sense to have both of those applications exist while the transition is in progress.
Lots of work remains, but they're getting there step by step. E.g, there's no link to modify the colour profiles yet, but at least being able to quickly check and change from previously defined ones is a good little addition.
A version of Windows 10 that is completely free of bloatware (no Microsoft Store, no preinstalled apps like Candy Crush, no tiles in the Start menu, no Cortana, only Windows search, no Edge, only Internet Explorer). It only gets security updates instead of feature updates, and has support for 10 years. Best version of Windows 10 IMO. FYI, it was originally called LTSB (Long Term Servicing Branch) and was renamed to LTSC (Long Term Servicing Channel).
I put it on my new machine and it's the first version of Windows I've been OK with since 7. Buying it is basically impossible for a normal person, but it's readily available on TPB.
(also a KMS activator)
Start menu takes > 0.5 seconds to even begin animating. Disabling animations actually makes the lag subjectively worse.
I installed W7 a few hours later to compare, and I was amazed at the difference in responsiveness. I'm talking like 10-20x faster.
I bought a 5-year-old netbook recently and it didn't have the resources to open W10's start menu. (Well, it could, but it took three minutes.) Did a fresh install of W7, and like magic, you can actually use the computer again.
I don't use any version of Windows anymore, but I'd like to be able to. I'm sad support for W7 will be ending soon.
With classic theme gone, and everything so slow, the best Windows desktop experience for me in this day and age is XFCE.
- you aren't running off an SSD and Windows Defender is choking your disk I/O out from under you
- you haven't allocated enough memory to the VM
- you've just installed the VM and a Windows Update is going on behind your back and dragging things down
Opening the Task Manager and seeing what is choking the life out of your machine would be very helpful.
I agree with the parent poster that LTSC is generally pretty great if you've got to run Windows. For everything else, there is KDE :-)
Back to usable basics.
I wonder if this project is already out there... (Rather than try to start another side project)
It takes a bit of work to set up but once you do, it genuinely feels like you're using Windows 95 but with modern Linux apps.
Now the prevailing trend is this Fisher-Price children's-toy minimalism, with bright shiny colors and cute mascots. It's insulting.
Do you have any links to such studies/focus groups/articles on this? I'd be genuinely curious to read about them.
At my last job we brought in focus groups at times, and that was exactly their feedback, for an embedded UI on the machine that we sold. It should be noted that that machine required about a week of operator training to use, which included interacting with our UI.
What I see a lot is UI professionals and artists simply declaring that interfaces must be minimal and abstract and that visual complexity is bad. Whether this is just current fashionable dogma, or if there is actually research to support this, I have no idea, but none of the product decision makers I've worked with have ever asked for evidence.
A lot of Windows programs have really cluttered, disorganized configuration menus and confusing workflows; PuTTY and ConEmu are two particularly egregious offenders. Seems more like a "developer lacking good design sense" issue, rather than something inherent to the Windows environment.
However, no conforming app should have had any functionality in the toolbar that was not available in the menu bar. This isn't to say that nobody ever did it, but the OS design guidelines were very clear on that point.
Added: I'd be interested in putting together a Wayland desktop environment around the strengths of Windows 98's interface designs (along with some modern discoveries about human interface, and at least whole number resolution scaling). I feel like there should be at least one well-maintained toolkit which doesn't attempt to support full CSS styling on widgets.
On Unix, I reached a similar point with SGI's Indigo Magic Desktop.
New Start menu:
1. Press the Windows key
2. type (typically) three letters of the program you want to run
3. Press <Enter>
That's five keypresses in all, bound only by the time it takes the user to physically make them.
User doesn't need any knowledge of how programs are categorized, nor know a hierarchy of categories.
The process by which the system displays programs matching what the user is typing happens as fast as possible.
You can even speak to your damned computer and the start menu will probably react accordingly.
None of these actions require the user to even know they have a harddrive, or a system path, etc.
So modern users have a discoverable, accessible, realtime-responsive start menu that requires minimal cognitive load.
Remind me: how does Windows 98 Start Menu compare to that?
Unless what you want to run is, for example, "Internet Explorer": "Inte" will auto-complete to nothing useful if you have a bunch of apps, "Interne" will auto-complete to Internet Explorer, but "Internet" will auto-complete to Edge. Not exactly convenient.
Besides: it's full of ads, and it takes bloody ages to find an application in a list where every item is touch-sized and which doesn't expand to fill your screen. The keyboard entry became necessary because the new structure is impossible to navigate visually (for bonus points, while this structure is supposed to be better for touch interfaces, that's precisely where it sucks even more, because "just type three letters of the program you want to run" isn't too convenient on touch-only devices).
There are environments which manage to get this surprisingly right, such as LXQT: you have a hierarchical menu which is easy to navigate, but if you're faster with keyboard-based search, you can do that as well.
Plus, you know, to us ol' Unix farts, not having to type stuff in order to launch a program is what progress is supposed to look like. If thirty years of UX research gave us the equivalent of bash and tab completion, we might as well go all the way and replace the start menu thingie with a terminal and call it a day.
Edit: also, I don't know what kind of super workstation hardware you're on, but I'd hardly call that thing "realtime-responsive" :-).
In the Linux world I think that Plasma nailed that feature.
I realize this may not be very intuitive for most people, but in the case of Internet Explorer I would instinctively just type "ie" because I know the name of the executable.
You can turn all of these off, with a combination of registry/GP fixes. Granted, they should be off by default, especially on Professional/Enterprise, but at least you can do it. - https://superuser.com/a/1348759/100543
I've never wanted to search for anything on the web from my start menu, and to make matters worse, it always performs the search using the IE/Edge browser and Bing as the search engine. To date, I've yet to see any way of customizing (or disabling) this behavior.
If I had a list of the "most annoying things my computer does", this would certainly be near the top.
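For reference, the fix people usually pass around (including in the superuser answer linked above) boils down to a couple of per-user registry values. A sketch in C, with the caveat that these value names are build-dependent and have moved around across Windows 10 releases, so treat it as illustrative rather than authoritative:

    /* Disable Bing/web results in Start menu search -- illustrative only;
       value names vary by Windows 10 build. Link against advapi32. */
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        HKEY key;
        DWORD zero = 0;
        LSTATUS rc = RegCreateKeyEx(HKEY_CURRENT_USER,
            TEXT("Software\\Microsoft\\Windows\\CurrentVersion\\Search"),
            0, NULL, 0, KEY_SET_VALUE, NULL, &key, NULL);
        if (rc != ERROR_SUCCESS) return 1;
        RegSetValueEx(key, TEXT("BingSearchEnabled"), 0, REG_DWORD,
                      (const BYTE *)&zero, sizeof zero);
        RegSetValueEx(key, TEXT("CortanaConsent"), 0, REG_DWORD,
                      (const BYTE *)&zero, sizeof zero);
        RegCloseKey(key);
        puts("Done -- sign out and back in (or restart Explorer).");
        return 0;
    }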
Launchy is really great as well.
That's the problem. It was far easier to browse through what's available. You can't search for something you don't even know the name of, but you can certainly read through a list.
Some things were in the start menu under a hierarchy of company name -> program name.
Others went just by the program name.
Some were (company name) (program name).
Some were just a start menu entry, not a folder.
It is notable that you can create your own such menuized views by right-clicking on the taskbar -> Toolbars -> New Toolbar.
You can if those search results aren't limited to literal name matches, but also consider the intentions the user expresses with their search terms. Maybe associate a bunch of keywords with the result. Terms like "backup", "update" or "presentation" should lead to relevant applications/settings regardless of what they are actually called.
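Conceptually that's just a synonym table sitting next to the literal names. A toy sketch in C of what such a lookup might look like (all the entries and keywords below are invented for illustration):

    /* Keyword-augmented launcher lookup: match the visible name OR a bag
       of intent keywords attached to each entry. Entries are made up. */
    #include <stdio.h>
    #include <string.h>

    struct entry {
        const char *name;      /* what the menu displays */
        const char *keywords;  /* space-separated intent terms */
    };

    int main(void) {
        const struct entry index[] = {
            { "File History",   "backup restore copy"      },
            { "Windows Update", "update patch security"    },
            { "PowerPoint",     "presentation slides deck" },
        };
        const char *query = "backup";
        for (size_t i = 0; i < sizeof index / sizeof index[0]; i++)
            if (strstr(index[i].name, query) ||
                strstr(index[i].keywords, query))
                printf("%s\n", index[i].name);
        return 0;
    }

Searching "backup" would then surface "File History" even though the string appears nowhere in the program's name.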
As someone else pointed out, the discoverability isn't as good, but I think this has more to do with the fact that the start menu items are no longer all neatly collected in one location on your disk, and with the amount of space the new menu uses: if I always have to scroll to find what I want, it has already lost the race against a list that shows most if not all applications at once.
Discoverability is, to me at least, a nightmare on most modern operating systems, mobile included. I don't think it was much better on older operating systems, but at least they had a manual and less stuff to worry about.
It takes several seconds for the results to load - I know performance may degrade in a VM, but come on! Searching through a list of strings is an interview question.
Not only that, all the key-presses of "Update" show the Java update program... only when I add the final "s" does it show the "Check for updates" system applet.
And with the amount of cpu/memory SearchIndexer.exe consumes, it's not even a bad joke.
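To put numbers on the "interview question" point: a dumb case-insensitive scan over every installed program name is microseconds of work, no index required. A toy version in C (the app list is made up, and "inte" is the query from upthread):

    /* Naive case-insensitive substring filter -- the entire "hard part"
       of matching a query against a few hundred program names. */
    #include <stdio.h>
    #include <ctype.h>

    static int ci_match(const char *hay, const char *needle) {
        if (!*needle) return 1;
        for (; *hay; hay++) {
            const char *h = hay, *n = needle;
            while (*h && *n &&
                   tolower((unsigned char)*h) == tolower((unsigned char)*n)) {
                h++; n++;
            }
            if (!*n) return 1;  /* needle exhausted: match found */
        }
        return 0;
    }

    int main(void) {
        const char *apps[] = { "Internet Explorer", "Microsoft Edge",
                               "Notepad", "Paint", "Windows Update" };
        const char *query = "inte";
        for (size_t i = 0; i < sizeof apps / sizeof apps[0]; i++)
            if (ci_match(apps[i], query))
                printf("%s\n", apps[i]);
        return 0;
    }

Even a pathologically slow linear scan like this one finishes long before a single screen refresh, which makes the multi-second Start menu search all the more baffling.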
This is one reason why, on Linux, I prefer KDE/Qt applications above all others.
Ever since I first started using Linux in the early '00s, I've noticed that GTK+ applications have excessive padding, and the widgets just look huge, regardless of what GTK theme you're using. Qt, on the other hand, has a number of theme engines with small, tight widgets.
Personally, I'm a huge fan of QtCurve. You can customize it exactly how you want it, and it's a godsend. I just wish the GTK devs didn't torpedo the possibility of making a GTK3 version available.
For some examples, I opened up a KWrite window and took a couple screenshots of the main UI and the settings dialog: https://imgur.com/a/bx1dk8h
Get the Tenebris theme.
- Even purely from a UI perspective, I much prefer modern Chrome's flat and minimal UI over Internet Explorer 4, and I think most other people do too. At the time, Microsoft claimed that IE4 was an integral part of Windows 98, so I'm going to consider it as part of the Windows UI rather than just a standalone app :)
- The taskbar doesn't scale very well, and once you get more than a dozen or so windows, each entry with the same icon becomes indistinguishable. This was particularly bad because browsers at the time didn't have tabs. As I type this, I have 14 tabs open in various browser windows, and this just wouldn't have fit in a Windows 98 style taskbar. Note that I consider the non-tabbed single document interface ("SDI") to be an integral part of the Windows UI here. Both SDI and MDI (multi-document interface) were part of Microsoft's UI guides, and MDI was even worse than SDI.
- The Start menu also doesn't scale very well, and could get very deep, which confused users. It lacked a search function like Windows 10, Mac (via Spotlight) and most Linux DEs today have.
- No support for virtual desktops, which Windows 10, Mac and Linux DEs today all have.
- Network Neighborhood was slow af, and was confusing for users to configure. Apple's AirDrop has a much better UI for sharing files on a network.
- Active Desktop.
- At the time, Microsoft was experimenting with integrating the web with Windows, and one of the things they did was put hyperlinks all over the place. They even experimented with making desktop icons and icons in Windows Explorer behave like links, and this caused a lot of confusion due to the inconsistency between single-click and double-click.
Some of the problems here are that we have much more computing power now, and (at least I) tend to keep more things open at a time. There were also numerous Internet UIs that Microsoft was experimenting with at the time (NN, AD, MSN, etc.) and many of them didn't work out.
At least when they started doing animated menus and such you could still turn those features off.
- Start menu was crap without search. Search, and being able to just start typing, is extremely important for day-to-day usability.
- The toolbars of many applications used to contain far too many buttons that almost nobody ever clicked on.
- I consider the taskbar in Windows 7+ to be the best way to handle multitasking and switching between applications (including the previews on hover, the wheel click for new window or closing an existing one - just like browser tabs work, etc.). No other OS/environment even comes close in this regard.
Windows 98 may have been the height of desktop UI.
The Windows file dialog is still, puzzlingly, the very best out there. No idea why other platforms are so resistant to copying it.
Whatever I did on Solaris, or even early OS X, felt like I was doing real work, important stuff, even if I was just messing around.
I don't know what changed, I use both Linux (Gnome 3) and macOS Mojave daily but they both lack that polished "workstation" feel. Maybe it's all in my head or I'm just getting old :/
That's the sort of phrasing you don't want to inflict on everyday users.
And right here is why modern tech is so condescending.
The language you use for this is important because it shapes the way you think about the difference. The way it is often phrased is in the form of "we're special, better, smarter people than those dumb people who have no hope of understanding the arcane magicks we are naturally attuned to". Which is of course bull. We have specialized knowledge and familiarity from spending years working with this stuff. That's it.
> [UNIX]... seems to say "Oh, you don't belong to the super-secret cabal of users who know these arcane commands? Fuck you, then!"
It seems to say that because that's exactly what UNIX says. They don't even name commands sensibly, not even in 2019. Discoverability basically doesn't exist.
Um, you do know that Unix used to come with user manuals? Like, oh I dunno, the vast majority of software in the 1980s and early 1990s? The designers of Unix and comparable systems were perfectly aware that command-line incantations cannot be figured out simply by sitting at the system and playing with it; this is very much not what it was designed for!
If discoverability by novice users is a priority, then that is an argument for menu-driven, interactive interfaces and UIs - which could well be built on top of something like UNIX. But documentation is always going to be important.
The kind of documentation that Unix comes with is of little use to people who already have some specific training in computing disciplines.
I learned the command line from a book that came in a Redhat boxed set.
(Also, the word "oops" was chosen because it connotes "something went wrong and it's our fault" -- probably chosen to avoid implying that it was the user's fault. Really ingenious, again, if your goal is to keep users comfortable rather than fully informing them.)
But most of us who have been around for a while can imagine a modern computing environment that still treats desktop computing as desktop computing (and not just large form factor mobile computing).
That's why Microsoft made an "always on top TWM/FVWM IconBar" -- that is, the taskbar.
Tiling WMs (which I tried 10 years ago?) would always break on some programs (say, Gimp); then you had to run that program in "floating mode", and it's already too much overhead for me...
Who cares about having a file explorer on their mobile device? Who needs advanced networking options on their laptop when they're just using coffeeshop wifi? It'll probably get more and more segmented.
I've recently had the fortune of talking at length with my mom about her past, and one thing she brought up was how she felt when my dad brought that first desktop computer into the house. To her, it was kind of like a typewriter (which she understood), and kind of like a television (which she also understood). You type things, and they appear on the screen, but -- and this is the spooky bit -- other things may appear on the screen that you never typed. It's something she got used to quickly enough, but never totally came to grips with.
I think most people -- even very smart people -- are like that. They don't know how to deal with a machine that works semi-autonomously, in ways that don't obviously correspond with their input, nor how to form an internal model of how it works, nor how to engage with the machine transactionally in order to successfully operate it to complete a task ("if I do A, the machine's internal state will become B and I can expect its future behavior to look like C"). This comes naturally to us, because we're techies and this is what we do. Some people can sit at a piano and play it like nothing. I can't!
The insight of the GUI was to draw a representation of the machine's internal state (or a highly simplified model of it) on the screen in terms that humans readily understand, along with the available options for a human response (in the form of buttons and pull-down menus). Early GUIs prioritized mapping machine models onto aspects of the real world, leading to things like the spatial Finder, which presented the file system in such a way that we could use our instincts for finding things in real space to navigate it.

This approach gets you some leverage, but there are limits to how far you can go with it, and as time went on, we ran harder and harder against those limits. Typical office users may have fared okay, but then computers started to enter the home in a big way AND started to be networked in a big way, leading to a whole new base of inexperienced users -- who might've otherwise never touched a computer in their daily lives -- confronted with an overwhelming tidal wave of possibilities. And they became baffled, mystified, and frustrated by even the easier-to-use, Windows 9x era interfaces we had. And then, a decade later, smartphones created a whole new base of confused users.

So the designers of today, having exhausted all the good ideas for solving the problem, resort to the UI equivalent of shouting at a deaf person: dumbing down the UI, removing elements considered too distracting, enlarging and spacing out the ones that remain, and replacing specific error messages with meaningless but inoffensive blobs of text ("Something went wrong", "There was a problem", etc.).
Even more maddeningly, some of these changes were inspired by corporate communications. Some of these new error messages ("We're sorry, but...") resemble the old broadcast-TV error message of "We are experiencing technical difficulties. Please stand by." But the thing you have to understand is, this sort of communication works on normies. They don't need specific details of what went wrong, what they need is to be reassured that everything, in fact, will be okay. From an appealing-to-normies standpoint, "We are experiencing technical difficulties" would have been a vast improvement over a common Windows 9x error message -- "This program has performed an illegal operation and will be shut down." To a normie, "illegal" means criminal! The Feds put people in prison for a long time for computer crime; imagine the panic that would set in if you, knowing nothing about how a computer works, were suddenly told that it had done something illegal!
So really UI designers are just prioritizing soothing users over giving them actionable information and fine-grained control. The next revolution in UI design will be in making users well informed and capable without alarming them. I'd prefer that everybody toughen up a little, and basic understanding of how these machines work becomes a part of our civilization's literacy requirements, but that's nearly impossible to achieve given current market forces.
Take object persistence. It’s innate to assume that objects don’t go away simply because we can’t see them. Documents don’t vanish in real life simply because you stop looking at them.
Many people don’t understand why a document on a computer screen can vanish, because they don’t understand that that document has to be assembled from data and code every time it’s opened. They don’t understand why it should look different in a different version of word (or worse, in some other program), because objects shouldn’t change when you view them somewhere else.
They don’t understand why you can’t just put a Word document in an email, or a website, or in ‘the cloud’ and edit it in-place. To many people the functionality of the editing is inherently in the document, (not the system) and don’t understand that, without the system, it’s just a series of bytes with no inherent meaning or functionality.
And that's largely the fault of the developers, since they build on layers upon layers of utility libraries which are not exposed to the user but inevitably pop up in the form of a broken metaphor or an unintelligible error message.
User-facing systems should be defined around powerful data & workflow metaphors, with all the layers in the system built around supporting those metaphors in coherent ways.
There is a tradition of people trying to build user systems around simple concepts that are easy to combine (starting with the Memex, then Smalltalk, HyperCard, and nowadays mobile OSs). But there's always been a great deal of friction in adopting them:
- first because their experimental nature can't compete with the more polished nature of commercial systems based on legacy conceptual metaphors;
- and second, because up until recently, end-user hardware was not powerful enough to support the complex graphical and computational requirements for the heavy environments required to support these novel interfaces.
Now that computers are powerful enough to build novel experimental interfaces on top of all the legacy libraries required to run generic hardware, we're starting to see again a lot of experimentation of those system-encompassing alternative metaphors for interaction.
> I don't know what changed
You got more experienced. When you’re looking at your second, third, etc. system, there are always cases where you think “This is so easy on ‘Foo’, why does ‘Bar’ make it so difficult?”, and feel like you’re wasting time, even if it isn’t really difficult on that system, but just different, or if it is difficult because you are working on step A, but the new system has a better workflow that does steps A through Z in one go.
If you ask people what’s the most fondly remembered or impressive OS, computer game, word processor, mobile phone, music player, etc., it often is the first one they really used.
The new version of iCal’s only purpose was to look pretty and offer very basic functionality. The older version might have started looking dated, but I could use keyboard shortcuts and see details about my appointments easily at a glance. The new version didn’t even want me to know details existed.
The same story played out in Mail.app, Address book, iWork, etc.
MS’s new “Modern” apps show that the same influences have driven Windows development in recent times as well.
I didn't understand Lion, at all. As much as I loved skeuomorphism on iOS, it felt out of place on the desktop.
I feel like this is an odd statement to make with no data.
I can only speak for myself and my partner, but our current systems hold much more love than anything that came before.
For me i3 on Linux is mature enough to not be intrusive into my life, mpd as a music player and so on.
For my partner, she uses a Mac/iPhone/Apple Watch, and after coming from windows 7 she finds it “much better”, and “I would never go back”
Games are another example. I played hundreds of computer games in my youth, from Donkey Kong on the Commodore 64 to Rayman on the PlayStation. And my most fondly remembered game is almost certainly Grand Theft Auto: Vice City, which is a much later title.
I don’t think it rings true that people love the first thing they learn on. I’m not keen on MS Windows 3.1 today, or MS operating systems in general; in fact, quite the opposite.
Neither is inherently bad. The problem comes when you're a power user forced to use a casual product, or vice versa.
The strange thing to me, though, is that once smartphones and tablets took over as the preferred platform for internet consumption, the desktop OSs didn't start reverting to targeting the market that still wants them. Instead they doubled down on trying to turn desktops into smartphones.
Anecdata: I used Windows for the first 10 years of my computing life, and today I'd rather use any obscure Unix over any Windows. The "Unix philosophy" as an attempt to produce a consistent UX has held up pretty well over 40+ years.
They did, because usually only adults used PCs.
Now everyone from kids to elders uses PCs, and there's nothing wrong with making the UX more friendly to people unaccustomed to working in tech.
It's in your head because I think you're missing the roles PCs now play for everyone in society.
There's nothing wrong with making error messages less intimidating. There is something wrong with not giving any information about the problem or not even displaying an error message.
As with all abstractions, though, they tend to leak. Software design tries to minimize those leaks, but they have to prioritize which ones to fix.
Advanced users like you or me don’t need those abstractions nearly as much, so we’re not prioritized. Which is probably fine. We end up seeing the leaks in the abstractions a lot more because of it, though.
That's part of the problem, though, and not something that should be brushed aside. The old designs were good partly because they operated one abstraction level lower, where the leaks were inherently much smaller.
I stick with KDE and have been happy.
It works perfectly, for the same values of "works" and "perfectly" as on commercial Unices. In other words, it sits in the middle between a lightweight WM and a full desktop environment, mainly because there aren't any applications that meaningfully integrate with CDE apart from all the dt* stuff (text editor, terminal, calculator...) included with the CDE distribution. For CDE's design to be meaningful, you really want CDE applications that integrate with its object model, not just plain X applications; otherwise it is only a somewhat mis-designed window manager.
There was nothing in common widget libraries or development processes that helped users learn how to operate the system. Merely exposing all the functions is of no use if you don't already know what they mean and how you're supposed to use them.
People learned more in those days not because the interface made it easy, but because they had no choice if they wanted to use the system at all.
The Solaris UI certainly does, though!
A: You didn't use to have a "workstation" at your house.
B: The machine you had at your house was a completely different platform than say, Solaris machines or terminals/mainframes etc.
C: The UI/UX of the work machine and the home machine are now the same -- so it's easy to do the "home stuff" on the work machine now.
D: Fewer people than ever have a dedicated "work machine" and do a lot of personal stuff on that "work laptop" regardless of whether they're supposed to.
It borrowed some technologies and ideas from NeXT but the final product from a UI/UX perspective was more a continuation of what is now Classic Mac OS.
Today that would roughly correspond to looking at Windows Mobile CE interfaces or OSX Panther/Safari 1.0. Anything older and it starts coming back into fashion.
The rise of the windows 95 ＡＥＳＴＨＥＴＩＣ a couple of years ago, and now this, seems to confirm a trend. Certainly so, if you throw in some art projects like Windows ‘93, recent fashion and music trends around vaporwave, and renewed interest in PC-9800 emulation.
Everyone is copying the typefaces and color schemes from the magazines of the '70s.
>The rise of windows 95 ＡＥＳＴＨＥＴＩＣ
Most of the kids with the ＡＥＳＴＨＥＴＩＣ meme didn't even use Windows 98, or weren't even aware of computers. I remember w9x not as a fashion trend, but as a shitty OS that was a nightmare to manage just to keep it from crashing while installing a driver. Installing games took ages, and viruses were a real thing.
Also, everything was shareware. Libre software and Linux/BSD were not known outside academia until the very late '90s.
If they had known it and lived through it, they wouldn't be so fake-nostalgic.
I suppose it's similar to the situation with newer cars, where the engine is so quiet that one sometimes forgets whether it's even on, and attempts to start it again. There have even been laws introduced to make sure that cars can be heard: https://news.ycombinator.com/item?id=8925126
And to your comment about cars, it seems to be more about pedestrians that can know about an oncoming vehicle and less about whether the user thinks it's running. The latter seems to be something that can be easily fixed.
I mean, swapping used to be a fact of life in the late 1990s and early 2000s, even on Linux - RAM was just too cramped back then. But then we got machines with lots and lots of RAM even at the low end, and Linux became snappy and quiet-- while Windows is still as bad as ever.
RAM is, I feel, one of the more precious commodities on a machine, still. I have spinning rust in my machines (more space/$), and I've not regretted it, or really had a need for the speed an SSD could bring. (If anything, I think I'd do a hybrid install, with a small SSD and a large HDD.) But I've never once regretted upgrading RAM on a machine, and I definitely miss it on my work MBP.
But the grind of the hard drive when something happens. I never would have thought of that again had you not made this comment. Crazy nostalgia there.
So there's this notion in video gaming where you're strolling through a forest or in a cave or factory or some sort of level with no enemies, no battle music, but you suddenly find yourself upon ammo crates and health packs.
Indicator that a big fight was about to happen.
For me and my early voyages through computing, learning how to write little programs and messing about with settings to see what they did, if I ever got stuck on a problem it was THAT noise that told me "hey you're onto something here".
What a time.
I think we're in a transitional phase where we're halfway between old-style GUIs and something more fluid that approximates real life to a greater degree. Consider the "UI" of a kitchen appliance or the packaging of a new iPhone, or a TV remote control, or just a plain old door. Everyday objects vary wildly in what "idiom" is provided to the user. Some doors have a handle, some have a knob, some have a bar you push. We have the same kind of annoying lack of standards and consistency in the real world, though it's usually evident that you can turn a knob and push down on a handle.
One can imagine a future where UIs are gesture-based, for example. Think of the 3D UI from Spielberg's Minority Report. Some of these UIs may need to offer completely new ways of interacting with objects (grab and make a fist to copy, open your hand wide to paste, or something) that will be difficult to standardize, much like the real world.
Except MusicMatch Jukebox, Sonique, and zillions of "who made this?" shovelware.
These days, such shovelware became the norm, to the point where even built-in apps often look like that.
It's interesting that every time I bring it up, such comments get a lot of upvotes. Clearly there's some demand for this sort of UX, at least in this community.
Moreover, if you guys haven't read it yet you should definitely check out Raymond Chen's Old New Thing, which talks about the reasoning behind some of the design choices that went down in earlier Windows desktops.
(In fact, I wish modern desktop environments did this automatically on HiDPI screens while keeping the original pixel art as their source-- especially for its improved usability on lower-res displays, which are still widely used, both on desktop and mobile. Instead we tend to get SVG, which while extremely crisp on high-res displays is a mess for the original 16x16 or 32x32 use case.)
The "point" of pixel art-- what makes it so convenient for graphicians, even amateur ones-- is not the blocky appearance (what you call "pixelated" - but in fact these icons did not appear "blocky" on the CRT screens that were in common use at the time!), but to set a uniform constraint on fine detail (and sometimes color depth) within the image, and then to maximize quality while staying within that constraint. It is perfectly consistent to want a means of rendering these images that preserves whatever level of detail was in the original while not introducing blocky artifacts.
I can kind of see this argument if you're talking about playing Nintendo on mom's old dog-eared TV with the UHF adapter... but frankly I prefer to see pixel art in its original, unmolested, pixellated form.
Edit: on that note, I remember very clearly that 320x240 games had a blocky appearance in the 640x480 era. That was one of the biggest reasons to get a 3D card!
1. it's amazing this game actually runs at 640x480
2. there's no point in having resolutions any higher than that, as you can't see the individual pixels at that size anyway (I had a 14" CRT, viewable area probably around 13").
At least in the early to mid 90s you definitely still had "CRT fuzz" on computer monitors.
(And yes, 320x240 did use 2x nearest neighbor interpolation on later video cards/monitors that could only display higher resolutions natively. But I assume that back in the early 1980s, you would actually get a "native" 320x240 screen, just like on a home computer or console.)
I remember that many gamers (myself included) resisted LCDs for a long time even beyond that, precisely because they could only do one resolution well. If you played old games, this wasn't satisfactory because those were often hardcoded to the resolutions they supported - typically 320x200 or 640x480. And if you played new games, you'd often have to dial the resolution down to get them running reasonably fast.