Windows: Interface Guidelines (1995) [pdf] (uci.edu)
165 points by tosh on March 3, 2020 | 101 comments



Windows 95/98/2000 and Office 95/97/2000 are in many ways my “native interface”, probably because those were the platforms I grew up using most during my late-teenage formative years in high school and in the early years of university.

I have to say that those interfaces are clunky in retrospect, but they are undeniably clear and do not place form over function as many of the modern ‘flat’ and touch-orientated interfaces seem to.

The other two graphical interfaces I remember most fondly are NeXT’s and BeOS’, which are also, probably not coincidentally, OSes I used frequently over the same period of time.

(Just to give you some context: I remember avidly reading the Windows 95 Resource Kit in the run-up to the Windows 95 release in August 1995, because I had no internet access and therefore no way of downloading and testing the many “Chicago” betas that everybody had been raving about... and that is how I know that radio buttons on the interface were originally intended to be diamond-shaped rather than round.)


I honestly still refer to the Windows 2000 User Experience book I found online years ago whenever we have to add a new form to our old Winforms applications. Funny thing is, back when that document was written, application skinning was all the rage and I spent considerable time masking that clunky old interface.


In what way are those interfaces clunky?


I agree - I'd say clean rather than clunky. The old and (by modern standards) spartan appearance of Windows 95 applications doesn't mean the UI design is no good. Similarly, command-line interfaces can be very effective, even if they lack GUI gloss.

Somewhat related: long live the FOX Toolkit and its hard-coded Windows 95 theme http://fox-toolkit.org/screenshots.html


The 90s were definitely a time when people thought deeply about how to make computer applications more usable. Apple also had excellent guidelines. Problem was that back then the hardware and the operating systems sucked. Now it’s the opposite. Hardware and OS are very stable now but applications are getting worse.


> but applications are getting worse

So many people say this but I fundamentally disagree.

Applications are so much more complex today, supporting more combinations of OS and input method and data storage and accessibility and display modes and whatnot.

Applications are harder to use, yes, but because they do so much more. Your interface has to work and be responsive whether your file sits on a local disk or in the cloud, or maybe has to be synced. It needs to work with mouse and touch and a screenreader. And so on ad infinitum.

Relative to their complexity, applications are doing just fine today I think. (Also don't forget there were so many terribly designed applications in the 90's. It's not like everybody was even remotely following established UX guidelines.)

The same kind of clear UX standards just don't exist anymore because there are so many different apps that do so many different things, and there's no obvious best answer.

But the good news is that applications do slowly converge on best practices. Think of how things like hamburger menus or swipe-to-refresh or pinch-to-zoom have become expected standards.


> Applications are harder to use, yes, but because they do so much more

I use Microsoft Office 2000, because since then, no new features have been added to Word or Excel that I care about. In fact, I couldn't even name a single feature added since then. What they did add is the Ribbon instead of the toolbar, which makes it impossible to find things you need, and a whole lot of bloat.

On modern machines, Office 2000 opens faster than I can release my mouse button from clicking its icon.

That is to say, I entirely disagree with your statement.


Me too! With the exception of Outlook, I stick with Office 2003. I'm so much faster and more productive with it.

The problem with the ribbon is I find myself constantly having to click back and forth between different tabs. It's annoying, and takes twice as many clicks to get things done. Microsoft lost sight of the purpose of a toolbar: to make commonly used functions ONE click away.

When the rest of the industry followed suit and emulated them, the result was a tragic loss of precious vertical pixel space for the content I actually cared about: whatever I was working on.

I also miss the elegant discoverability of classic menu bars. I loved being able to open a program and quickly become familiar with what tools are available.

They did a great job surfacing keyboard shortcuts. Hints were right there beside the menu items, subtly advertised every time you clicked them. You naturally learned the ones you used most. I worked alongside a younger guy for a few months who was blown away by how quickly I navigated around my PC and got work done, for many sequences using like 90% keyboard and 10% mouse.


Exactly. Modern UX seems to split everything into two categories: things that happen automatically, and things that take ten times as many inputs as they did two decades ago. For example, it's nice that modern Windows automatically switches between wifi networks. It's not nice that instead of being a checkbox on a settings menu, turning off the superfluous Lock Screen requires creating registry keys.
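
For reference, that checkbox's worth of behavior ends up as a registry value under the Personalization policy key. A minimal sketch in TypeScript/Node, assuming the documented NoLockScreen policy value and an elevated prompt (verify the key on your own build):

    // Set the "do not display the lock screen" policy value.
    // Assumes Node.js on Windows, run elevated; reg.exe ships with Windows.
    import { execSync } from "node:child_process";

    const key = String.raw`HKLM\SOFTWARE\Policies\Microsoft\Windows\Personalization`;

    // /f overwrites any existing value without prompting.
    execSync(`reg add "${key}" /v NoLockScreen /t REG_DWORD /d 1 /f`, {
      stdio: "inherit",
    });

One checkbox's worth of behavior, several layers of ceremony to reach it.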


Menu bars are still alive and well on the Mac, but only as a design holdover from a previous era. Really appreciate how the menu bars on the Mac centralize all application functionality, while also surfacing keyboard shortcuts.

Would love to see menu bars make a return to the iPad and mobile, such as in this concept. You can really get a sense for how quickly a touch-based UI can be navigated using menus and toolbars: https://twitter.com/stroughtonsmith/status/12339911770803609...


> It's annoying, and takes twice as many clicks to get things done.

It's very strange that Microsoft does not care about this. It's very annoying to me as well.

I don't mind the Ribbon's vertical space that much, but the stupid back and forth between tabs is really annoying.


I just found out yesterday that you can import a picture into Word and then remove the background, à la the selection tool in PS. Now, while I'm sure most technical people would use PS or GIMP or Paint.NET, the fact that for 99% of other people there's an easy way to quickly edit photos is pretty incredible.


That feature is super handy, but it has been in Word for a long time - I'm guessing about a decade.


So well after the alleged high-water marks of usability and feature completeness in the 1990s, 2000, or 2003. Still an example of latter-day improvement.


I distinctly remember using that feature on Windows 98


I always say MS Office reached its peak right before they introduced the Ribbon in Office 2007. Since then I haven’t seen many interesting new features. They just keep moving stuff around.


I think Power Pivot was only added in Excel 2010? Pretty much every megacorp has people power-pivoting all day long.


> that I care about

Well there's the rub.

There are tons of users who do require critical new features like cloud integration. And the Ribbon was designed specifically because more people find it easier to use, as Microsoft's user research showed.

Office 2000 may very well be better for you. But it certainly isn't for everybody.


And the ribbon was designed specifically because more people find it easier to use, as Microsoft's user research showed.

Not exactly. What Microsoft's user research showed was that people who were unfamiliar with Office found the Ribbon easier to use than the traditional menu bar. They did not test whether the same held true for experienced users. They also did not test whether the Ribbon had a higher "skill ceiling" than a traditional menu bar (i.e. if you take two users, one who is proficient with the Ribbon and one who is proficient with the traditional menu bar, and ask them to complete the same task, who is faster?).

Most of the complaints about the Ribbon came from the audiences that Microsoft failed to adequately test it on. Microsoft, as far as I can tell, figured that experienced users would quickly get used to the new UI paradigm and adapt. That was not the case, due to the Ribbon's tendency to "helpfully" move things around in order to put the most recently used tools front and center. This broke many people's muscle memory and, more importantly, inhibited the formation of new muscle memory. It's the latter that was especially galling for experienced users. Changing the UI is bad enough. But changing the UI and replacing it with a constantly shifting toolbar that gives the user no indication of where their controls are, nor any consistency in their positioning, is intolerable.

Imagine if your car radio shifted its buttons around every time you started it to put the last selected station in the first position.


Their research probably found people LIKED it more, not that it performed better. Computers went mainstream and functionality became secondary to seeming high tech.


>Their research probably found people LIKED it more, not that it performed better.

This internal survey seems to suggest otherwise. It asked a range of questions, not just "do you like it?".

http://video.ch9.ms/slides/mix08/UX09_Harris.pptx slide 140


I don't understand what you gain by asking people questions about how they think usability has improved. Users are notoriously bad at actually knowing what they want. If I were testing this sort of thing I'd give them tasks to do and watch what they do, when they look frustrated etc.


There was a lot of real-world usability research that went into the Office 2007 ribbon-- both qualitative and quantitative. Jensen Harris went into some of this in his blog; see, for example, https://docs.microsoft.com/en-us/archive/blogs/jensenh/more-... .

More here: https://docs.microsoft.com/en-us/archive/blogs/jensenh/table...


Would've done this too, but then LibreOffice came along and IMHO, it's basically the same minus the proprietary issues, so it's even less hassle!


Recent Excel has a much more convenient interface for formatting pretty graphs than I remember from ~10 years ago. The available options are mostly the same, there indeed aren't really any new features in that, but it has live preview and more "visual" ways to modify the graph, compared to the gruesome modal interfaces that I remember where you have no idea what will happen until you "OK" it.


Excel 2007 introduced multithreading, which is very useful on large spreadsheets.


We are talking mainly about UI here. They could have added multithreading to the old UI.


> But the good news is that applications do slowly converge on best practices. Think of how things like hamburger menus or swipe-to-refresh or pinch-to-zoom have become expected standards.

Hamburger menus are literal garbage with a little bit of everything and zero organization. Give me a menu bar instead. Swipe-to-refresh is completely useless for well-behaving software and pinch-to-zoom often activates when I wanted to press a button instead.

Mobile device features were shoved into desktop UI without regard for desktop users. Desktop users' productivity has suffered as a consequence.


And you will have to click on that hamburger menu, since even on a device with a keyboard there will be no shortcut for using it. It's not merely nostalgia that has me pining for those bygone days of being able to drive almost every feature of every application from the keyboard, and having a clear culture of right-click-gives-context menu ... now on a Mac I'm resigned to apathetically pulling down menus with a mouse and playing guess-which-keyboard-modifiers-combine-for-this-menu-option.


At least on macOS, if you don't like a particular combination, you can reassign it — for any or all application(s).

Also, what stops you right-clicking (or in Apple parlance, secondary clicking) on macOS? Context menus are plentiful, either by:

- Holding Ctrl and clicking with the primary mouse button

- On the Mighty Mouse and Magic Mouse, enabling secondary click in System Preferences

- On the Magic Trackpad, enabling secondary click as either a click in one of the lower corners or a two-finger tap in System Preferences


It's not about right-clicking. It's about not having to use the mouse in the first place. Learn to use your keyboard more instead of your mouse. Tabbing around, for example, is a hell of a lot faster than moving your mouse across all the fields to type in. But many hamburger menus can't be tabbed into nor are they brought down with the alt key (like on Windows) nor are they available by the context key (since... if they were, they could be tabbed into).


At least on macOS, that's because hamburger menus should be an accelerator; the functionality that they expose should also be available via the global menu bar.

It's badly-designed apps, likely cross-platform or designed by people who don't know the macOS HIG, that bring mobile-style hamburger menus to macOS, not the hamburger itself.


>Applications are so much more complex today, supporting more combinations of OS and input method and data storage and accessibility and display modes and whatnot.

But do they need to be that complex? Oftentimes we are solving the same problems (most CRUD apps aren't doing anything we didn't do in the late 90s), but devs have convinced themselves that all of this abstraction and over-engineered layering is necessary. It's often not.


Nobody asked for applications to support "more combinations of OS, input methods and data storage and accessibility and whatever else"; developers decided to shove all that in because reasons.

If applications are worse because they are doing so much more then they should stop doing that "much more", focus on doing one thing and leave the rest to other applications.

> Think of how things like hamburger menus or swipe-to-refresh or pinch-to-zoom have become expected standards.

That isn't a great example of best practice IMO, since the only expectation I have about hamburger menus is for them to die in a fire.


> Nobody asked for applications to support "more combinations of OS, input methods and data storage and accessibility and whatever else"

Really?! Just look at all the effort people have spent in their personal time contributing to projects like WINE for Linux, for example. MANY many people want cross-compatibility. I for one have certainly done a lot of waiting for things to become available for Linux, and very much appreciate all of the cross-platform frameworks that exist and have existed in the past (WINE, Adobe AIR, Electron, Cordova etc.) to enable cross-platform software. I think they have been GREAT for the whole ecosystem, both for consumers and developers.

Making computers more accessible for the visually or motor impaired has also been one of the great victories of modern software... and certainly not because developers have shoved it down people's throats. In fact, it has had to be REGULATED in order to drag developers kicking and screaming into building accessible software.

As for input methods, that is just a natural evolution required from the proliferation of devices (desktops, smartphones, etc).

So sorry to say I don't think your statement is rooted in any kind of reality beyond what exists in your head.


Wine is a compatibility layer and in fact supports what I was saying on two fronts: first, it is something developers decided to do, and second, the Windows application does not need to care about the 1% that uses Linux on the desktop and force the cross-platform complexity and bloat on the Windows users; instead, the Linux users (who, as you seem to imply, do not mind that extra bloat) can use Wine and run the same application on their systems.

Accessibility is something that can be necessary, but certainly not in all situations. Though of all things, it is the one bit that the OS itself can support the most and relieve applications from having to do it themselves (of course this assumes that applications do not try to use the lowest common denominator of OS functionality in order to be cross platform and actually take advantage of the functionality their OS provides).

About the proliferation of devices, this is a great case where what I wrote about leaving things to other applications applies: instead of having a single application try to cater to a bunch of different devices and input methods (and thus provide a mediocre experience on all of them), it is better to have one application tailored for each device and input method.

And note that I'm not writing about what is being done, but about what should be done. Of course this is in my head; otherwise I would say that things are actually nice now.


If things are so complex now why does Google Maps constantly shift buttons and menus around without offering new functionality? To me it seems designers are just spinning their wheels. The whole data driven UX stuff reminds me a little of Agile with its story points and velocity charts. Looks “scientific” but if you take a closer look it’s just BS.


Also, Google Maps' interface for EDITING routes, its main purpose, is utterly broken. If you misclick, you must start over. Most late-90s offline map editors were a billion times better.

But these cool kids will never understand functionality vs ubiquity.


I really hate how the order of "Images Video Maps .." changes on Google depending on the query. The UI shuffling around unpredictably makes it that much harder to find anything from positional memory.


I often click on stuff by position and not by text, so moving stuff around is really bad for me.


It seems like a lot of changes in software now are made just for the sake of change.

But in a way it makes sense for the developers: if there is nothing left to change or for them to work on, there is no reason for them to still have their jobs.

Why fix bugs when instead you can just move things around and make things more flashy, in an attempt to make management think you are making the app more 'responsive' or 'increasing user engagement'?


"Applications are harder to use, yes, but because they do so much more."

But do they? The move to web browser apps and the loss of rich native desktop functionality means that many web apps offer far less functionality than native desktop apps. The companies that offer these web apps sell them on their easy sharing capability and collaboration features.

An example: thirty years ago (or more), you could use any desktop word processor and perform basic tasks like spell check, change the colour of text, choose fonts and change their size.

Or today, in 2020, you can use Dropbox Paper with no spell check, no way to change the colour of text, and no ability to choose fonts or even alter their size. But it does run in a web browser. This is apparently progress.


I don't know why you're using Paper as an example. In 2020, you can use Google Docs which has spell check, text color, the entire collection of fonts at fonts.google.com available for instant selection, and so on. But I can also collaborate instantly.

That is real, apparent progress.


Applications are harder to use, yes, but because they do so much more. Your interface has to work and be responsive whether your file sits on a local disk or in the cloud, or maybe has to be synced.

Why? Why does every application need to be "cloud connected"? What's wrong with having a normal desktop application that saves files to the filesystem like every application did for thirty-odd years? The only reason for this that I can discern is that it's an easy way to lock users into paying a monthly or annual recurring fee, rather than a one-time fee for the software.

Users themselves are not asking for cloud connectivity. People understand files. They can save files, copy files to a thumbdrive (or Dropbox), and e-mail files as attachments. Files are an interface that people have figured out. We don't need to reinvent that wheel.

It needs to work with mouse and touch and a screenreader.

In my experience, older applications are far more screenreader friendly than new applications. Moreover, not all visually impaired people are so visually impaired as to require screenreaders, and the more skeuomorphic designs that were favored in the '90s and 2000s were far easier for them to use than today's flat designs where one can't tell what is and is not a button. Heck, even I get confused sometimes on Android UIs and don't notice what is a plain text label and what is an element that I can interact with. I can only think that it's far worse for people who have sensory and cognitive deficits.

As for "it needs to work with a mouse and touch", my answer is once again, "No it does not." Mouse and touch are different enough that trying to handle both in one app is a fool's errand. Mice and trackpads are far more precise than touch, and any interface that attempts to both mouse and touch with a single UI ends up being scaled for the lower precision input (touch), which results in acres of wasted space in the desktop UI.

The same kind of clear UX standards just don't exist anymore because there are so many different apps that do so many different things, and there's no obvious best answer.

Of course there's no obvious best answer if you're trying to support everything from a smartwatch to a 4k monitor with a single app. So why are you trying to do that? Make separate UIs! Refactor your code into shared libraries and use it from multiple UIs, rather than attempting to make a single mediocre UI for every interface.
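
As a sketch of that split (all names hypothetical), in TypeScript:

    // core.ts -- platform-agnostic logic, no UI assumptions baked in.
    export interface Doc {
      title: string;
      body: string;
    }

    export function wordCount(doc: Doc): number {
      return doc.body.split(/\s+/).filter(Boolean).length;
    }

    // desktop-ui.ts -- dense layout, hover states, keyboard shortcuts.
    // touch-ui.ts   -- large targets, gestures.
    // Both import { wordCount } from "./core" and render it their own way;
    // only the thin presentation layer is duplicated, never the logic.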

But the good news is that applications do slowly converge on best practices. Think of how things like hamburger menus or swipe-to-refresh or pinch-to-zoom have become expected standards.

The problem is that all of these new "best practices" are far worse, from a usability perspective, than the WIMP (windows, icons, menus, pointer) paradigm that preceded them. Swipe to refresh is much less discoverable than a refresh button, and much more difficult to invoke with a mouse. Pinch-to-zoom is impossible to invoke with a mouse. Hamburger menus are far more difficult to navigate than a traditional menu bar.

When today's best practices are worse than yesterday's best practices, I think it is fair to say that applications are getting worse.


100% agree with your comment. One caveat though:

Mouse and touch seem like completely different things, but they're more similar when you consider pen/stylus input. 2-in-1 devices running a proper desktop-grade OS[0] are amazing devices, and one thing they're missing is properly designed apps, which are few and far between. 2-in-1s made me actually appreciate the ribbon a bit more - though an overall regression in UX, it shines with touch/pen devices, which I'm guessing was MS's intention all along[1]. 2-in-1s with a pen are really magical things; I use one (a Dell Latitude) as my sidearm, and have started to prefer it over my main Linux desktop on the grounds of convenience and versatility.

The best pen-oriented apps actually allow you to use keyboard + finger touch + pen simultaneously. You use the pen for precise input (e.g. drawing, scaling, selecting), fingers for imprecise input (e.g. panning/rotating/scaling, manipulating support tools like rulers) and the keyboard for function selection (e.g. picking the tool you'll use with the stylus).

--

[0] - Read: MS Surface and its clones.

[1] - For instance, Windows Explorer would be near-unusable as a touch app without a pen, if not for the ribbon that makes necessary functions very convenient to access using finger touch.


I agree with your caveat, but reply with a caveat of my own. The key difference between a mouse and pen/touch is the ability to hover. With a mouse, I can put the cursor over a UI element without "clicking" or otherwise interacting with it. That's difficult to do with a pen and impossible to do with touch. The key use case that hover enables is the ability to preview changes by hovering over a UI control and confirming changes by clicking. A pen/touch UI would have to handle that interaction differently.


Thank you for your caveat to my caveat, and let me add a caveat to your caveat to my caveat: while you're spot on about the hover feature being an important differentiator, it's not in any way difficult with a pen. It works very well in practice. On Windows, even with old/pen-oblivious applications, it works just like moving the mouse - you gain access to tooltips and it reveals interactive elements of the UI. That's another reason I prefer pens over fingers.

(Tooltips usually show when you hold the mouse pointer stationary over a UI element. With a pen, that's somewhat harder to do unless you're in a position that stabilizes your forearm, but there's an alternative trick: you keep the pen a little farther from the screen than usual and, once over an element you want to see the tooltip for, you pull the pen back a little, so that it goes out of hover detection range. It's simpler than it sounds, and it's something you stop thinking about once you get used to it.)
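
On the web side, Pointer Events expose exactly this: a hovering pen fires the same enter/leave events a mouse does, and only touch lacks them. A rough sketch (element id and helpers hypothetical):

    const target = document.querySelector<HTMLElement>("#tool-button")!;

    function showTooltip(el: HTMLElement) { /* position and reveal a tooltip */ }
    function hideTooltip() { /* hide it again */ }

    target.addEventListener("pointerenter", (e: PointerEvent) => {
      // pointerType is "mouse", "pen", or "touch"; pen hovers like a mouse.
      if (e.pointerType !== "touch") showTooltip(target);
    });

    target.addEventListener("pointerleave", () => hideTooltip());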


That's very interesting! My only experience with a pen UI is from using the iPad with an Apple Pencil, which doesn't seem to work the same way. As far as I could tell, the Pencil works like a very precise finger. It's great for writing, but I didn't notice any further enhancements beyond that. Of course, it might just have been that the app I was testing it with (Microsoft OneNote) didn't fully support the Pencil at that time.


On a Windows 2-in-1 it works more like a mouse with pressure sensitivity. It maintains its separate pointer (which is only shown when the pen is near the screen), so apps that aren't designed around a pen just behave as if you worked with a regular mouse.


> Why does every application need to be "cloud connected"? What's wrong with having a normal desktop application that saves files to the filesystem like every application did for thirty-odd years? ... Users themselves are not asking for cloud connectivity.

Of course they absolutely are. I keep literally all my documents in the cloud. I'm constantly editing my documents from different devices -- my phone, my laptop, my tablet. Users like myself are absolutely asking for cloud connectivity. I simply won't use an app if it doesn't have it. Your argument makes as much sense as "why does every skyscraper have to have elevators? Users aren't asking for anything more than stairs!"

> Mouse and touch are different enough that trying to handle both in one app is a fool's errand.

Except you don't have a choice. Many apps these days are webapps, and absolutely require both interfaces to work. Many laptops also support both. That's just how it is.

> The problem is that all of these new "best practices" are far worse, from a usability perspective, than the WIMP (windows, icons, menus, pointer) paradigm that preceded them... When today's best practices are worse than yesterday's best practices, I think it is fair to say that applications are getting worse.

Except WIMP doesn't work on mobile. So it's an apples-to-oranges comparison.


Of course they absolutely are. I keep literally all my documents in the cloud. I'm constantly editing my documents from different devices -- my phone, my laptop, my tablet. Users like myself are absolutely asking for cloud connectivity.

If by "cloud" you mean a filesystem-like abstraction that's synchronized across multiple systems (e.g. Dropbox or OneDrive), I have no objection to that. Heck, I even called out Dropbox as a viable alternative to "cloud connectivity". What I am objecting to is the tendency that many apps (especially mobile apps) have of locking your data away in their cloud, making it impossible to get at your data, back it up, or share it with a different application.

Many apps these days are webapps, and absolutely require both interfaces to work.

That's a non sequitur. It's entirely possible to detect the size and capabilities of the device the user is using and display a UI that's appropriate to that device. What I'm militating against is the lazy approach of designing the UI for mobile first, and then using CSS media queries to scale it up to fit a desktop viewport. That results in acres of wasted space and a poor user experience, because the user doesn't have the same interaction expectations that they would have if they were using the UI on a mobile/touch device.
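
Detecting capability rather than viewport size is cheap, too; the CSS "pointer" and "hover" media features report the primary input's precision. A sketch:

    // "pointer: coarse" = imprecise primary input (a finger);
    // "hover: hover"    = primary input can hover (mouse, most pens).
    const coarse   = window.matchMedia("(pointer: coarse)").matches;
    const canHover = window.matchMedia("(hover: hover)").matches;

    // Choose a UI density instead of stretching one layout over everything.
    document.body.classList.toggle("dense-ui", !coarse && canHover);
    document.body.classList.toggle("touch-ui", coarse);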

Except WIMP doesn't work on mobile.

And mobile UIs don't work on desktop. Trying to make a one-size-fits-all UI is a fool's errand. Much better to design each UI for the platform that it will be displayed on (laptop, tablet, phone, smartwatch, etc) than trying to scale a single UI across multiple devices.


> a time when people thought deeply

As opposed to "completely succumbed to metrics". Conventions like shift to range-multiselect/ctrl to toggle-multiselect cannot evolve from a series of a/b tests. It's not as if people never tested UI ideas back then, but it was a tool, not the entire process.


There has to be some gallery of horrible 90s Windows program interfaces...

It was an experimental time, that's for sure.




It was also a time when most people's experience with using a new operating system (never mind different software) was their first; UI guidelines had different requirements because the use case was different.


Today most people's experience with using any single app or webpage is also their first, UI-wise, because nothing is consistent with each other anymore. So I'm not sure what this is an argument for.


> applications are getting worse

Citation needed


"Anecdata"

Today's rush to simplified web interfaces typically means that common keyboard scenarios have been completely forgotten about. A market leader in a niche sector rebuilt their UI in Electron; its primary purpose is to selectively migrate items from one technology to another.

While it does provide a hierarchical treeview with a checkbox next to each item, toggling a checkbox via, say, the spacebar, or selecting multiple items using Ctrl+Shift, does not function. As well, its non-native scrollbar does not accurately reflect your position and does not allow finely-grained repositioning.

This is $5,000/seat software that has glowing reviews and has essentially captured the market for what it does - yes a small market, with approximately 6-8 competitors - almost all of whom have copied their user-interface and even Electron implementation.
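
For perspective, the missing spacebar behavior is a handful of lines in any web stack; roughly the WAI-ARIA tree pattern (selector hypothetical):

    const tree = document.querySelector<HTMLElement>('[role="tree"]')!;

    tree.addEventListener("keydown", (e: KeyboardEvent) => {
      if (e.key !== " ") return;  // only handle the spacebar
      e.preventDefault();         // keep the page from scrolling
      const item = e.target as HTMLElement;
      const checked = item.getAttribute("aria-checked") === "true";
      item.setAttribute("aria-checked", String(!checked)); // toggle the box
    });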


- Atrocious input lag

- Lagging menus and widgets

- You misclick in Google Maps? Better start the route over

- 30x more resources for something once done in under 80 MB, such as Discord vs Kopete. The latter had inline LaTeX. And video previews. In 2007.

- Invisible scrollbars with no intuitive use

- Flat design not being able to distinguish a button from the background layer. Compare it with Motif, W9x, BeOS, KDE3 with Keramik.


You can cite me.


How are you notable?


Very


Some good stuff in there! I really like this one:

Forgiveness

Users like to explore an interface and often learn by trial and error. An effective interface allows for interactive discovery. It provides only appropriate sets of choices and warns users about potential situations where they may damage the system or data, or better, makes actions reversible or recoverable.


Yes! And if software lets a user do something that lands them in an error condition, then there should be a way to recover from that condition in the software.


For me, undo is probably one of the greatest inventions in computing next to the compiler and the internet.


Haha!

I’m home on sick leave today, and a colleague just called because he’d unchecked some boxes in the CAM software, resulting in the license being disabled and the checkboxes disappearing.

I’ve remoted in; there's no obvious and no hidden way to get the checkboxes back, so he has to call the support line.


One of my fondest memories of learning to program as a kid in the late 90s was writing a Windows 98 UI clone in QBasic.

I would screenshot the start menu, buttons, window borders, and various other UI components and try to recreate them in QBasic by zooming in and inspecting all the pixels.

I had subroutines to create windows, buttons, menus, various fonts, 255 colors and mouse support. It was coming together incredibly well given I had no idea how any of these were actually built. I had a working version of Minesweeper and a text editor.


> One of my fondest memories of learning to program as a kid in the late 90s was writing a Windows 98 UI clone in QBasic.

I did the same, although trying to create a Unix GUI (in a purely visual “I’ve seen this in the movies” sense), and I did it in AMOS Basic.

Needless to say it wasn’t a great success, but it provided me with the foundation for making a couple of neat-looking applications which actually did useful things (to me).

It was slow as heck, but I had great fun doing it.


That's funny, I did the exact same thing, although I didn't make it as far as you. I had a working mouse cursor (reading the mouse data directly from the serial port) and buttons. At that age, I didn't know about subroutines and had gotos all over the place.


It was a common rite of passage at the time. I did something very similar, first cloning Borland's TUI, then Win3.1.

The nice thing about Windows of that era - its widgets and their default color scheme were designed to still work with just the original 16 EGA colors (since that was the baseline for video cards back then). To be even more precise, everything other than window title and selection was done in 4 colors - white, black, and two shades of gray. Window title/selection added a fifth. Things like selection rectangles and resizable window borders were done using XOR. This all was readily accessible in a DOS app, pretty much regardless of the language.
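
If you want to play with the XOR trick today, "difference" blending against white is the same self-inverting operation in canvas terms. A sketch, not how the DOS code actually did it:

    const canvas = document.querySelector("canvas") as HTMLCanvasElement;
    const ctx = canvas.getContext("2d")!;

    // |pixel - white| inverts each channel; applying it twice restores the
    // original, so the same call both draws and erases the rectangle.
    function toggleSelectionRect(x: number, y: number, w: number, h: number) {
      ctx.save();
      ctx.globalCompositeOperation = "difference";
      ctx.fillStyle = "#fff";
      ctx.fillRect(x, y, w, h);
      ctx.restore();
    }

    toggleSelectionRect(10, 10, 100, 60); // draw: interior shows inverted
    toggleSelectionRect(10, 10, 100, 60); // erase: pixels back to original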


Did you build a GUI toolkit or just hard-code everything? I remember creating a GUI paint program in Turbo Pascal (the only language I could get my mouse to work in) and I quickly got in over my head as I didn't abstract anything out.


It was abstracted out, but I don't know if it qualified as a GUI toolkit. I had subroutines for creating the various components and placing them anywhere on the screen. I don't remember how I handled the events. One of my biggest regrets is losing all my work from around that time.


The best Windows UX, except for one thing that to this day I never liked: minimized windows in MDI applications having a "button" form instead of an icon form. I always found Windows 3.1's approach of using icons much better. Though I guess they tried to mimic minimizing top-level windows to the taskbar, a real inner taskbar would work better IMO - mIRC did it best there - and would be functionally closer to what most applications do nowadays with tabs (but without losing the functionality of also having unmaximized windows, like opening multiple views of an image side by side at different zoom levels in an image editor - or just having multiple documents visible at the same time in general, instead of being forced to view only one).


Opera had pretty much the perfect MDI interface - with a tab bar mimicking taskbar, but otherwise all MDI features were still there, like resizable windows.

And hey, MDI is still there, and often still the easiest way to organize things in a desktop Windows app.


90s-era HCI research was so excellently focused on details. Apple's Human Interface Guidelines from 1993 should also be mandatory reading for anyone building human-facing applications: https://woofle.net/impdf/HIG.pdf


Oh hey, that's my web site. :) Those PDFs were generated through a rather nasty process (print to PDF on a Mac OS 9 system), and the quality is a little uneven.

There's a nicer version of that PDF at:

http://mirror.informatimago.com/next/developer.apple.com/doc...

as well as a 1997 update for some newer interface elements:

http://mirror.informatimago.com/next/developer.apple.com/doc...


Wow, the Apple guidelines are done so much better than the Microsoft ones. Not only is the guideline document better written and styled, with a focus on details, but the guidelines themselves are also more concise.


In some ways interfaces were richer at that time. I can't wait for the flat interface fad to go away and some older style to reemerge.


Many seem to think like us on this front (at least, in the HN comments). Now, what can we do concretely besides implementing those concepts in our own apps?


Convince designers people will hire them if they see designs like that in their portfolios. As far as I can tell, designers favor whichever design will produce screenshots likely to make their next job search easier, actual usability or cost of implementation be damned, which makes perfect sense.


I actually worked on the followup to this book for the release of Windows XP at Microsoft. You can find it on Amazon if you're interested. https://www.amazon.com/Microsoft-Windows-Experience-Professi...

It was written largely by Tandy Trower (inventor of Clippy) and has many similarities to Apple's original Human Interface Guidelines though very different too.


Yeah, I've been consulting this a lot lately. I've been writing a data-dense desktop application and trying to make it as good for keyboard-only users as it is for mouse users.

I figure the closer I get to that, the easier the port to gui.cs will be.

Another good UX book I found was "The Definitive Guide to the .NET Compact Framework" by Larry Roof and Dan Fergus. Yes, it had mostly back-end stuff, but the UX concepts taught the reader to consider his audience.

Is the person using your app likely to be using it in a dock hooked to a full keyboard like you, Mr. Dev?

No, he will be standing next to a cellphone tower wearing gloves and trying to get the Falcon x3 out of the sunlight enough to see what the screen is showing him.

Okay then: make the buttons big enough for a gloved finger to mash, and use combo boxes everywhere you can stand it. So what if it's ugly - if it's functional and the user never has to use the SIP, then fine.


In all fairness to Tandy's 28-year run at Microsoft (Clippy aside), including his final stint with the Robotics group: https://en.wikipedia.org/wiki/Tandy_Trower


I've had some wonky PDF version of this for years; I don't know why it never occurred to me that this was a book I could just buy. I can't wait to have a physical copy to reference!


I think in some regards, classic GUI interfaces peaked in about 2002/2003 with KDE2. The influence of Win95/98/NT4 on KDE was definitely there, but they took it in its own unique direction. Definitely some inspirations from NeXT as well.

I had a really nice FreeBSD+XFree86+KDE setup at that time. The closest I can come now is something based on XFCE4.


KDE3.


I'm mostly surprised by the number of pen input elements 95 had. I was still a kid at the time so I didn't have any exposure to more advanced hardware. How common was it?


Windows 95 was developed at a time when people thought that Pen computing would be the next big thing so they put in a lot of pen stuff that eventually was barely used. It always stayed a niche.


Maybe now is the time for it to come back? Windows 2-in-1 devices with a pen are magical these days, but there are far too few well-designed pen-oriented applications.

(I'm worried this won't improve until web folks fix the broken pointer events APIs, and even then it'll only lead to proliferation of pen-oriented Electron apps.)


It was used heavily on PDAs and similar devices until the iPhone came on the scene and introduced "proper" touch.

Of course, those ran WinCE usually. But I don't think pen input code was any different.


A special build of Windows 3.1 called "Windows for Pen Computing" was made in the early 90s for very early tablet PCs. I'm guessing they rolled that stuff into the mainline build of 95.



Wow, "user centered". It was a refreshing read. Something quite contrary to modern: it looks purty to me and if it is not functional the rest can go eff themselves.


Now I see the Control Panel icon is a hammer and screwdriver, and not hot and cold water taps as I always thought.


Thank you for posting this! I grew up in this era of computing and I’m working on recreating it for myself.

This looks like a fantastically good resource for inspiration :)


I just realized I've had the hard copy of this book on my bookshelf for 25 years!


I don't miss the child windows. They were so confusing.


Oh my. This looks half like a research paper, not a design guideline :p


Back then HCI involved actual research, and not misinterpreting telemetry data to justify sales goals.


Best: Okay

Acceptable: OK

WTF: Ok



