What we can learn from vintage computing (github.com/readme)
101 points by ecliptik on Dec 13, 2022 | 75 comments



The lesson I wish OS designers would learn from vintage computing is that when it comes to an interactive user interface, responsiveness matters far more than throughput, and so nothing required for interactivity should be swapped out, ever, under any circumstances.


I'm not sure I'd weigh latency as "far more" important than throughput (a more intelligent tradeoff, rather), but I do think an exploration of low-latency systems for day-to-day use would be very interesting. I've thought about trying this myself, starting simple with something like RT Linux and trying to present a low-latency UX for viewing books, media, and certain webpages. Almost every step of the computation pipeline has been optimized for throughput over latency, so it would take some "radical" rework, but I think it would be a very fun and informative experiment.

I've spent a lot of time archiving and scraping my own content and optimizing my home network (and ingress points, routing, etc.) to access all of these things with minimal (< 3 ms) retrieval latency, and the effects are joyful. The UX is bad, as it's something I've written for low-latency use and my UX skills aren't very good, but it makes me feel like I have "instant" access to content, which feels like a superpower. I'd love to explore topics like this for a broader audience, beyond just the nerdy media I consume.


In human-facing systems, responsiveness is everything.

Somehow games can run at 240fps on a modern PC, but desktop user interfaces (not to mention web apps) are still sluggish.


This is it exactly. Opening up File Explorer in Windows can sometimes take 1-2 seconds from click to usable window. Or worse, the context menu on a file can also take multiple seconds. It's absolutely amazing.

Still, it could be worse, at least desktop environments aren't as bad as the mobile world where it seems like everything has to have a transition and animation.


In some old computers, the OS was simple and on ROM and responded/loaded immediately. An upgrade just meant you swapped the ROM chip. HDDs and FDDs were then just for user data or application programs.


The games are not doing much OS interaction at all. That's where most of the delays come from (when it's not just poor internal architecture). The games do physics, logic and video rendering. They typically have a relatively direct path to the GPU, bypassing most of the kernel.

This is not a model for applications in general.

That doesn't mean that the sluggish desktop apps should not be fixed, but "be more like games" is not really the right advice.


Games and applications aren't that far apart: most titles are developed on existing engines that are predisposed to certain genres/constraints and need to work within those limits, the same way that UI frameworks limit what an application can do (which is quite a lot).

I've spent a decent amount of time in both spaces in my career; it is totally possible to build performant, fluid applications, but it requires a degree of care and investment. Usually "good enough" suffices and that's where things land, but there are also developers who prioritize responsiveness, and short of very restrictive environments you can build responsive applications.

Besides, at the end of the day all UI is doing is rendering, with a direct path to the GPU just like games do :).


> They typically have a relatively direct path to the GPU

That makes me wonder... how much of an OS could be offloaded to a GPU? Especially for GPUs built into the processor that share the same memory bus, the data wouldn't need to go through PCIe to get to the GPU and back to main memory.

Does any modern OS offer a standard and architecture-neutral way to run workloads on GPUs?

> That doesn't mean that the sluggish desktop apps should not be fixed, but "be more like games" is not really the right advice.

If we can offload as much of the GUI rendering to the GPU, that's still a win because it frees the CPU for other things.


> The games are not doing much OS interaction at all. That's where most of the delays come from

...really? My PC is orders of magnitude more powerful than the 166 MHz Pentium PC my family used in the 90s, yet the interface is no faster. Was interacting with the OS somehow orders of magnitude faster back then? I can run Win98 in a virtual machine with all the associated overhead... hell, I can do that on top of f'ing JavaScript in a web browser, and it is still more responsive than modern OSs.


There are certainly times when a heavy weighting in favour of latency makes sense and times when it doesn't - and interactivity is the defining factor for me. If I'm synthesizing an FPGA core or rendering a video, I don't want latency optimisation to slow it down if I've wandered away from the computer and left it to do its thing. If, on the other hand, I want to get on with something else while the long-winded task is progressing, I want what I'm doing now to have priority, to be regarded as the "foreground" task, and I don't mind if the long-winded process takes longer as a result (within reason!). And, crucially, if that long-winded task eats all the RAM and I decide to cancel it, I want the user interface to remain responsive so I can do that.

I'm glad you've recognised the delights of a low-latency interface. The ideal interface is invisible, unnoticed by the user - my own thoughts are that an unpredictable response time is jarring, causing the interface to be noticed in a way that it shouldn't be - I've heard it called "jank" in the context of cellphone apps. So I believe people do recognise it, but probably underestimate how much it detracts from a pleasant user experience.


I remember a cute trick Windows did (at least in the ~XP days) where if a window had focus, it got more CPU cycles. You could, for instance, create two identical console windows running a program that just does math in a loop, and if you give focus to one of them and let them churn, it will start to outstrip the other.

Of course, the underlying problem is that it's not always easy to know where the user's attention is (or if they have walked away, as you mention), but it was a simple "somewhat right" trick.
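

For anyone who wants to reproduce that experiment, a throwaway loop like this is all it takes (a minimal, untested sketch in plain C; the file name, reporting interval and the math are arbitrary). Build it twice, run each copy in its own console window, give one of them focus, and compare the counts they print:

    /* spin.c - hypothetical "math in a loop" test program.
     * Roughly once a second it reports how many iterations it managed,
     * so two competing copies can be compared by eye. */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        volatile double x = 1.000001;        /* volatile so the loop isn't optimized away */
        unsigned long long iters = 0;
        time_t last = time(NULL);

        for (;;) {
            x *= 1.000001;                   /* arbitrary math work */
            iters++;
            if ((iters & 0xFFFFF) == 0) {    /* check the clock only occasionally */
                time_t now = time(NULL);
                if (now != last) {
                    printf("%llu iterations since last report\n", iters);
                    iters = 0;
                    last = now;
                }
            }
        }
    }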


An RT kernel is not going to have much impact on UX latency.

I run an RT kernel all the time. Plenty of things are still janky. Improving UX latency has more to do with UI toolkit and application design than the kernel.


Of course. You'd need to start at more important latencies, like UX callback latency.


And properly prioritize the various GUI tasks otherwise they will simply get blocked by higher priority stuff. Having a real time OS and then running interactive tasks at a low priority will defeat the purpose, so there is that extra bit of configuration to keep in mind.
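

On Linux, for example, the knob for that is the POSIX scheduling API - a minimal sketch (the function name and priority value are mine, and it assumes root or CAP_SYS_NICE) of promoting the GUI/event-loop thread so background work can't starve it:

    /* Sketch: move the calling thread (e.g. the UI event loop) into the
     * SCHED_FIFO real-time class so normal-priority background tasks can't
     * block it. Needs CAP_SYS_NICE or root; priority 10 is illustrative. */
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    int make_ui_thread_realtime(void) {
        struct sched_param sp = { .sched_priority = 10 };
        int err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
        if (err != 0) {
            fprintf(stderr, "pthread_setschedparam: %s\n", strerror(err));
            return -1;
        }
        return 0;
    }

And of course the compositor/display server needs the same treatment, or the bottleneck just moves elsewhere - that's the extra configuration I mean.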


If you're looking for an easy-to-install version of Linux that gives you 99% of RT Linux then Ubuntu Studio is worth looking at.


I made (what today would seem) insanely fast UIs... under MS-DOS on 8088.

We've spent the last decade-plus focused mostly on slick brochures you can poke at with your finger. Where the point is not to get stuff done, but to "engage" and manipulate the user.

But not all is lost. There are some high-performance niches in tools for professionals, and in some facets of video games.


This is why you should develop user interfaces on last-gen (or gen-before-that) hardware. Developers optimize things that look slow to them so if you're on the latest firebreathing top end hardware, nothing seems slow. If you're on a 5-year-old i5 then you become very aware of which code is OK and which code is unacceptably slow.

That said, VR is the unsung hero of finally improving the awful jitter that most gaming hardware and software was developing 10 years ago. I don't care if your new graphics card can maintain 200fps on average if it has 100ms latency spikes every second. Smooth is important.


Not just last-gen hardware, but also with a network that deliberately drops maybe 0.5% of packets, and delays some at random!

I hadn't considered the VR aspect, but you're absolutely right - the jarring nature of UI "jank" would be literally nauseating in VR.


Yeah, there's a hard latency threshold beyond which most users very quickly get VR sickness. Abrash has some fascinating writeups on the research he was involved in at Oculus back in the early days.

Like you say, modern devs really need to remember to test against a high latency unreliable network. Or even just against a lack of network at all... it constantly astounds me that companies like Microsoft can let critical, central components like the start menu cease functioning without an internet connection. I recently had the start menu refuse to find any programs on my computer because it was trying to get to Bing and I was on a remote site with only local Ethernet plugged in.


> with a network that deliberately drops maybe 0.5% of packets, and delays some at random!

I've seen Sun's Crossbow stuff in use once, but it made quite an impression - it allowed testing many failure scenarios with relatively little effort.


For the first time recently I started going to a McDonald’s that uses the kiosks. The response lag to touch inputs is surprisingly bad. It’s quite shocking, actually, how awkward it feels to tap something and then wait a while for it to react at all. The urge to tap again is overwhelming at even a few hundred milliseconds of lag.

Another example: using the payment terminal at a 7-11. You have to say whether you want cashback or not by tapping on the screen (you can’t use the PIN pad). Tapping a selection results in a minuscule change in contrast on the button that’s almost impossible to see, in part because the buttons are smaller than a finger. And then it still doesn’t move on to the next screen for a few seconds, leaving you to wonder whether you actually made a selection or if it’s still waiting because it didn’t register your tap.


The worst example of UI lag I've ever seen is on the shiny new treadmills they installed at my local gym a few years ago. While there are a few physical controls, most of them are now touchscreen-based. The speed-up and speed-down controls adjust the target speed, but the only visual feedback shows how fast the treadmill's currently running, and there's no indication of whether it's reached the target speed, which can take several seconds...

So a new user hops on the treadmill, leans on the speed-up button until it feels about right, lets go, then hammers the speed-down button in a panic as it keeps getting faster! Fun to watch, but a poster child for how not to do UX!


The worst example for me is the volume knob on the radio in my car. When I start it in the morning I often find out that I was listening to something too loud last time I drove and I fruitlessly spin the knob for the ten seconds it takes for the radio to finish booting.

I think the digital version of the volume knob is a serious downgrade from the old analog version. It's probably cheaper though. The manufacturer saves $10 and I spend the next 5 years swearing at my car right after starting it.


Honestly, we should never wait a human-countable amount of time between a click and a response. Throw up a splash screen. Make the button appear to depress on click, show me a spinny wheel to indicate that some process is chugging away to bring me my latest dopamine hit.

And on top of that, either completely eliminate anything that bogs down the interactivity or at least give me a range of options between Win 98 and Modern lush.


User interaction is a hard real time problem; unfortunately, no single OS vendor seems to want to acknowledge this fact.


It absolutely is not. At best, maybe a soft real time problem.

Hard real time problems are ones where people die, stuff is destroyed etc. when deadlines are missed.

User interaction is never (or almost never) like this.


Hard real time simply means there are hard upper limits on timing constraints, and if those aren't met then the program is considered to have failed.

It does not imply that people die or that stuff is destroyed. I've used hard real time, soft real time and regular OSs for GUI based interaction and the only ones that really feel like they are satisfactory are the hard real time ones. The soft real time ones are obviously better than the general purpose ones but they still tend to give regular moments of unresponsiveness.


Agree. Game developers know this and put a lot of effort into reducing the key-press-to-display-update loop.


Do you have an example? (From a curious dev)


When I think of vintage computers I think of computers old enough that they have no concept of virtual memory - in particular, the Atari ST, the Amiga and the Acorn Archimedes. On all of those platforms the user interface wasn't merely another task running among dozens; it was the most important thing the computer was doing - and while it's hard to describe, the feeling of immediacy is noticeable (even if the task your mouse click set in motion takes minutes or hours to complete, the widget will give you instant visual feedback when you click it), and its absence on more modern machines is also noticeable. (If a modern machine's thrashing the hard disk - especially on spinning rust - it can take seconds for the same feedback to happen, even if the machine itself is thousands of times faster.)

Yesterday's topic about the memory footprint of desktop environments touched on this - the fact that if a memory-hungry process triggers a swapstorm it can take minutes to regain control of a Linux desktop.

I call that sense of immediacy the feeling of "having the computer's full attention" - and ensuring that nothing required for interactivity gets paged out is the only robust way I can see of creating that feeling - but instead we use the somewhat crude, brute-force approach of throwing more RAM at the computer until the swap-outs stop happening!
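

The closest existing knob I know of is mlockall() on Linux - a sketch (assuming the process has the privilege and enough RLIMIT_MEMLOCK headroom) of what "never page out the interactive parts" looks like at the application level; doing it for a whole desktop environment would need the OS itself to cooperate:

    /* Sketch: pin this process's current and future pages into RAM so the
     * UI code and its working set can never be swapped out.
     * Subject to RLIMIT_MEMLOCK and CAP_IPC_LOCK on Linux. */
    #include <stdio.h>
    #include <sys/mman.h>

    int pin_ui_process_in_ram(void) {
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return -1;
        }
        return 0;
    }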


Thank you for the detailed explanation!

It sounds like programming computers back then would have been a joy.

Casey Muratori's talks[0] on performance and optimization sparked an interest in me to start putting thought into what the CPU actually does (which wasn't taught when I learned web development). Since discovering his talks, I've been on the hunt for more learning materials.

[0] https://www.youtube.com/watch?v=Ge3aKEmZcqY&list=PLEMXAbCVnm...



Thanks!


How to be less distracting is certainly one of those things. Modern computers are constantly begging for one's attention. It's one of the things I really resent about a lot of modern platforms: leave me alone.

> You can sync Newtons and Palm Pilots with modern desktops

Yep and I am seriously thankful JPilot exists, even though it's one of the most homely programs I've ever used. Keeps my Palm Pilot and laptop in sync with each other with absolutely no issues. Without it I'm not actually sure what I'd do.


What is it you're actively using PDAs for?


Exactly what mine was designed for. It keeps my contacts, appointments, and todos in check.

I have major, major issues with distractibility, so having a device that only makes sound when something actually important needs to happen is helpful. I find that even with notifications reduced on my phone, about half the time it makes a noise it's not actually important. As such I turn off my phone for the majority of the day.

Additionally, once I clear a notification on the Palm it's not like I can then fall into web browsing on it.

Whether one believes it's a thing or not, I've been diagnosed with severe inattentive ADHD and I find simple, limited use devices really do help me stay on track.


Honestly? They can handle most of my phone workloads. Notes, texts... is there anything else?


I admit I often wish my phone had the design aesthetic of a PDA from two decades ago. Contoured to the hand, a real d-pad with buttons right on the front, no camera cutouts or rounded corners to block my view, root access right out of the box, headphone jack, multiple memory card slots (damn having an SD Plus card that folded into a thumb drive was convenient!), and a thriving freeware community with no microtransactions, a jog dial, a removable battery, and a case held together by tiny screws instead of glue. You could break a screen and the replacement cost forty bucks and took ten minutes with a screwdriver.

Christ, being able to stick it on a dock and hit one button to mirror your entire device onto your desktop was great.


While I sympathise with some of your points, this would be something like The Homer:

https://simpsonswiki.com/wiki/The_Homer


The only difference between my description and any flagship PDA is a modern SoC.

https://phonedb.net/img/loox720.jpg


I think you can get something like that if you're OK with a Chinese Android-based device


You can text on a PDA? Isn’t it considered a smartphone once it has a mobile radio?


We have lost the art of keeping things simple, both on the hardware and software front. We have made everything more complicated. The gains are dubious.

For example, I saw a tear-down of an old game controller. It used a parallel-to-serial chip in the controller, which is a simple commodity chip. Keyboards and mice had very simple ways to communicate with the computer. In contrast, USB is complex, probably the most complicated protocol out there. And it's only getting more complicated. If you want to communicate via USB, best use a pre-existing stack.

Communications protocols of yore were relatively stable; if they were revised, it was infrequently. Today, protocols don't really seem to be designed to be common to everyone. They're more proprietary.


Yes, the interfaces to external devices were simple, for some definition of simple. Don't forget that we had ports dedicated to keyboards, mice, joysticks, video, printers - all that takes physical space and all those connectors were slightly different and had slightly different orientations. Yeah, simpler.

Compare that to USB-C. One port that I can connect anything to, including power (which I didn't even mention above). To make things even simpler, again for some definition of "simple", it doesn't even matter which way I orient the connector. I can just insert it into the port without thinking about it. This is simpler for the user, but definitely harder to engineer.

Do I want to go back to the old days? No. My main machine when I got started was a C-64 but I also had access to a lab full of Apple II machines, a lab full of TRS-80 machines, and many friends having an Atari 800. They were definitely less complicated than the machines we have today. They were also far less capable, had proprietary interfaces necessitating proprietary hardware, and not at all portable.

We have the equivalent of a supercomputer in our pocket allowing us to communicate with anyone in the world at any time. Even sci-fi never envisioned that! I wouldn't call it a dubious gain.


Almost every element of computing has been harnessed to become a moat to protect someone's income stream.


One thing I just remembered: a couple of weeks ago I went to join a Zoom meeting, only to discover that it wouldn't work until I upgraded it. So I had to faff around trying to sort that out rather than doing what I wanted to do, which was join a meeting.

If I wanted to update my software every five effing minutes I wouldn't have chosen Debian Stable.


What stacks do you find yourself using when programming USB communications?


For microcontrollers, tinyusb seems to be the place to be. It supports a lot of architectures, and the Raspberry Pi Pico (RP2040) SDK includes tinyusb. I've actually got something working, and I was toying with the idea of making the Pico controllable over USB. Once you can peek and poke memory addresses on the Pico, the world's your oyster.

Tinyusb seems quite flexible. You can write anything. It seems to have quite a bit of "canned" functionality, too, so if you wanted to write a keyboard host, for example, it's a case of trying to work through one of the demos and adapting it to your needs.

The problem with USB is that there is a lot of assumed knowledge as to how it all works. I only started playing with it recently, and it feels like being parachuted into enemy territory without a map. I wrote up some of the basics here: https://medium.com/@oblate_24476/introduction-to-usb-from-a-...
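
To give a flavour of it: once tusb_config.h and the descriptors are sorted out (which, to be fair, is where most of that assumed knowledge lives), the device-side loop is small. A rough, untested sketch of a CDC (USB serial) echo on the Pico, assuming the descriptor files from one of the tinyusb CDC examples are part of the build:

    /* Sketch: tinyusb CDC echo device on an RP2040/Pico.
     * Assumes tusb_config.h and usb_descriptors.c are taken from one of
     * the tinyusb CDC examples and compiled into the project. */
    #include <stdint.h>
    #include "bsp/board.h"   /* tinyusb board support: board_init() */
    #include "tusb.h"

    int main(void) {
        board_init();
        tusb_init();                          /* bring up the USB device stack */

        while (1) {
            tud_task();                       /* tinyusb device housekeeping */
            if (tud_cdc_available()) {        /* bytes arrived from the host? */
                uint8_t buf[64];
                uint32_t n = tud_cdc_read(buf, sizeof(buf));
                tud_cdc_write(buf, n);        /* echo them straight back */
                tud_cdc_write_flush();
            }
        }
    }

From there, "peek and poke memory over USB" is mostly a matter of defining a little command protocol on top of that read/write loop.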


>"You can download browsers for long gone operating systems"

Unfortunately, many simply don't work for 99.9% of the modern Web due to it requiring TLS 1.3. For example, on Windows 98 there is a myth of some Opera version that had TLS 1.3 support, but I haven't had any luck finding it; TLS 1.2 is the latest one can get.

It is quite ironic that it is not JavaScript, or the modern dynamic Web features that prove most difficult to back port to old operating systems, but SSL.

Of course, there is the Web proxy that renders the modern Web as an image map accessible even to the oldest computers, but this is "cheating", as you really are using a modern OS to pre-digest content. You need a modern OS constantly running to use your vintage machine, just in case you might need to download some file from the Web.


I bumped into this a few months back when toying with the idea of spinning up a Win2K virtual machine to keep some legacy software alive. I didn't expect browsing the web with a Win2K machine to be a realistic (or safe!) proposition - but I wasn't expecting SSL to be the blocker. (Not least because a few weeks previously I'd installed the latest AmiSSL for AmigaOS, which does support modern TLS! There, of course, Javascript becomes the blocker.)


> most difficult to back port to old operating systems, but SSL

This makes me curious. Is that because of some new feature that compilers able to generate code for those platforms never had?


I looked into this a bit for classic Macs (8MHz 68000), and found that they didn't have the computing power to complete a handshake before the timeout was up. I could be wrong though, and maybe some faster implementation is possible...


It makes sense. Timing is important in crypto as you don't want Eve to have enough time to brute force the secret out of the exchange between Alice and Bob.

We'll need to build new network cards with hardware accelerators.

And we'd be back to the stage where the most powerful computer in the office was a peripheral (back then we had a printer with more memory than all our Macs combined, and a flashy AMD 29K CPU).


Author here. This was a fun one to work on. Kept finding out about new things I wish I could have squeezed in there.


Thanks, great stuff!


I didn't know about Prodigy Reloaded. Certainly going to keep an eye on that. Just wanted to share my version of this.

Over the last couple of months I dove into the vintage Macintosh community. I'm in the process of restoring a Macintosh SE/30 and waiting for a Macintosh Quadra 700 to arrive. I've collected a few somewhat rare upgrades, including a DayStar PowerPro 601 that is somehow new in the box. This is a card that goes into the Quadra 700 to add a PowerPC 601 CPU at 100 MHz.

But the thing I'm most excited about is the new hardware the community makes to work with these old computers. There are devices like ZuluSCSI and MacSD that let you replace the spinning hard drive with a solid-state device that uses an SD card for storage. There is PiSCSI, which attaches to a Raspberry Pi and emulates multiple SCSI devices, including hard drives, CD-ROM, floppy and Ethernet.

FloppyEMU acts as a floppy drive and lets you load up a bunch of floppy images and swap between them without having to make actual floppies.

The ADB-USB Wombat lets you use an ADB keyboard with a modern computer, or a modern USB keyboard with an ADB computer. I'm typing this on an Apple Desktop Bus Keyboard (AKA Apple IIgs keyboard) on my M1 MacBook Pro.

And I'm waiting for delivery of a modern clone of a MicroMac Carrera040 accelerator with an adapter and Ethernet for my Macintosh SE/30. This puts a 40 MHz 68040 CPU in my SE/30, which normally has a 16 MHz 68030. These accelerators are hard to find, so someone on the 68kmla.org forums reverse engineered them and makes small batches of clones. I had to source my own 68040 CPU, as those are getting harder to find, but I have one on the way.

These old Macs also have an unfortunate tendency to self-destruct if not looked after. The motherboard batteries can leak and basically destroy the inside of the computer, and the old capacitors do their own version of this, so both need to be replaced. Because of that, there are also full logic board recreations for several vintage Macintosh computers, especially the compact Macs. Since the original logic boards have some hard-to-find parts and some proprietary Apple ICs, building one requires a donor board, desoldering various bits, and putting them on the new board.

Much of this stuff is happening in plain view either at 68kmla.org or tinkerdifferent.com


The old Mac community is excellent, maybe even one of the better retro hardware communities. It's filled with people who are very smart and making cool things.

There's definitely a higher barrier to entry with old Macs compared to old PCs, but the community has done a good job lowering those barriers over time via devices like the FloppyEMU.

I should have restored an SE/30 like you, but instead I fixed up this[1] Mac; it really is the Ship of Theseus in computer form.

[1] http://muezza.ca/computers.html#classic2

Right now I've got a Plus I'm working on fixing up. Compact Macs are addictive, you've been warned ;)


I almost went for a Classic II. I took my first real typing class on one of those, so the nostalgia is real.


This is excellent. I'm working on Book III of my series (https://www.albertcory.io), which takes place in the '90s, and I really want to have one of the characters dive into the pre-Web online world. Prodigy was definitely a part of that.

If you think this stuff is old: the Computer History Museum (Mountain View) has IBM 1401s that work! https://computerhistory.org/exhibits/ibm1401/

Big Iron, for sure.


I hope the Computer History Museum will consider an interactive exhibit of early online services someday when the Prodigy, AOL, CompuServe, and other service recreations are more complete.


That would indeed be interesting. As would some working Minitels.


> the Computer History Museum has an IBM 1401

And a working PDP-1, which I'd highly recommend visiting. You can listen to some of the earliest computer music and play Spacewar!, the first video game to share or sell more than a single copy (as it were).


This is a good article about vintage computing, but where did the subtitle come from? Almost nothing in here is open source. On the contrary, people are performing miracles of reverse engineering to keep these things alive because they aren’t open source.


I kind of felt like how I imagine my dad felt when he saw cars of the 1960s classed as vintage, compared to the 1930s Rolls-Royce he had (cheap ex-WW2, doctors upgrading; it died in the '60s).

How can a NEWTON be "vintage" ???

I thought the lessons would be "remembering to add delay for flyback on the VDU" or "how to code for disk head effects with rotational delay" or "when to use Duff's device, and when not to, in memory-constrained 16-bit computers".


My 2016 Mac and iPhone are "vintage" according to Apple. ;-(


I realized the other day that I have 13 years of photos on my iPhone.

13. Years.

1977 was pretty much the debut of the Apple II and TRS-80. 13 years later, in 1990, the NeXT Cube was already 2 years old and 486s were cutting-edge PCs. Lisp machines rose, flourished, and died.

I’m on my 4th iPhone, and will get my 5th next week or so. My original iPod touch still works. My word is it slow.


How about "ease of use is more important than looking cool". Half of modern apps are full of icons that all look exactly like each other and none of which I can use without a tooltip.


Reminds me of the Permacomputing site:

https://permacomputing.net/


I'm building content analytics around the premise that less is more. There are certainly tradeoffs, but the Internet has evolved into something unrecognizable.

We should rebuild the basics to reposition the ecosystem.


TIL: I struggle to read long-form content on a dark background.

Even more disappointing, “Show Reader” in iOS Safari doesn't work on this site.


Lucky you're past the green-on-a-black-background phase of computing then!


OLED screens have made green-on-black UIs popular again. With the addition of yellow, orange and red highlights, for that real Hollywood Hack3r effect.


I still have my terminals set up that way. Just too used to it.


Those were not so bad because the font was quite large and you only had 80x24 characters on the screen anyway.

Also, people with taste of course used amber-on-black ;)


Spot the Ericsson fan :)

Amber on black was nice too.



