Hacker News
Why use old computers and operating systems? (ankarstrom.se)
167 points by hutrdvnj on March 20, 2021 | 142 comments

A reason to use old computers that I don't see mentioned here has to do with accessibility. People in the US usually have current hardware, such as the latest Mac laptop, but that is not the case in many other countries. Current hardware is a bit of a luxury that we don't fully appreciate.

I have an open source project with global users, and one person in Mexico contacted me looking for help. He was trying to create 3D visualizations of MRI brain scans and was running it on an old computer that hardly anybody in the US would consider using. Happily, I had tested on an old laptop and done a lot of performance tuning during development, so I was able to help him get his project working. It was still slow, but at least it was usable. It wouldn't have been if my code only worked on current hardware.

A couple of the web sites I maintain have a primary audience of poor, largely immigrant, people with a fifth-grade education and only rudimentary English.

The server logs show most of the connections come from people using what people on HN would consider toy or throwaway convenience store phones. The high-end is people on Windows XP.

(The sites are in the healthcare space, and if one of our clients is really so desperately poor that they can't even afford a smartphone, we'll give them either a laptop and a hotspot, or a smartphone, so they can access the web sites. We pay for their connection.)

I think tech people should pool their funds and uplift everyone on the planet. Today we have modern, speedy hardware that is also cheap... And it would add huge demand for software.

Even if we give everyone a $1000 laptop, we will create the following pitfalls:

-people don't know how to use them (and I mean not even everyone living in the EU knows how to use a desktop/laptop - and I don't mean just the old)

-people that are not educated will start clicking left, right and center; their computers will be infested/compromised in one day, good luck supporting them. If you don't support them, you've just helped create an extra 1bn-node zombie network

-most areas don't have adequate infrastructure, or any infrastructure at all. In many locations in the EU, you 'feel' it when kids begin online classes: suddenly the country's networks get flooded with 1-2-5 million streams. I am not saying to leave areas in the dark forever, but expanding to include all geographies is a slow process; it takes time, and the need creates the work. We cannot force-invest to bring fast internet to remote locations just for the sake of bringing it to them.

-tech people make and spend money. Preference is given to 'make'. Making an investment of $100bn with a potential revenue of $1tn sounds good. But why would (e.g.) Lenovo donate $50bn worth of laptops? How would they recover this amount when their software sales are negligible? Would they track (spy on) everyone to generate revenue? Would (e.g.) Microsoft sponsor those laptops and then 'monetize' (spy) to recover the costs?

So many more points/questions... I will stop here.

Plus, if you give them a $1000 laptop, they'll sell it, buy a cheap phone, and use the remainder to buy something that serves a more pressing need.

That means somebody bought it - presumably someone who doesn't have any other computing device and wasn't awarded one.

Also, $1000 is way too much, I was thinking in terms of $150.

Indeed. But look at the specs of that computer - any Raspberry Pi is way more powerful than this. That's the point: the technology today enables meaningful change. Back then it was only a dream, because the device was too expensive and underpowered.

Related to this: one of the very few good reasons to offer unencrypted HTTP is that in some parts of the world, old devices are in widespread use, and support for modern HTTPS cannot be taken for granted.

And older phones might have certificate stores that can't be upgraded and have already expired.

It's kinda annoying that I can barely use my 2013 iPad Mini because of this kind of issue, even though I absolutely love that thing (I even used it as my primary smartphone via VoIP for a few years!).

Are you sure that's the issue, and not cipher/protocol support? The root CA needed for Let's Encrypt is "DST Root CA X3", which is supported by iOS 7 https://support.apple.com/en-ca/HT203065 (and has a validity start date in 2000, so I imagine it goes back earlier). Now, there are lots of other CAs, but Let's Encrypt is probably the most popular. I would be kind of surprised if the root certificate store were the limiting factor, as opposed to not supporting any GCM ciphers.
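One quick way to see why GCM support matters: you can ask your local OpenSSL which modern cipher suites it offers; an old client that can't negotiate any of these will fail the handshake regardless of what's in its certificate store. A rough sketch (assumes the `openssl` CLI is installed; the hostname in the commented-out probe is a placeholder):

```shell
# List the ECDHE + AES-GCM cipher suites this OpenSSL build knows about.
# A client that supports none of these cannot complete a handshake with a
# server that requires them, even if its root CAs are all fine.
openssl ciphers -v 'ECDHE+AESGCM'

# To probe what a live server actually negotiates (hostname is a placeholder):
# openssl s_client -connect example.com:443 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'
```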

I have the same problem with a BlackBerry Playbook tablet - great form factor but it doesn’t handle websites using modern SSL.

I believe you can work around this using another machine as an SSL proxy - though setting that up is beyond my ability. Perhaps someone else can elaborate?

Indeed, proxies can work around the problem. I made this for Macs, but you could run it on a Mac and connect from a Playbook, or set up Squid yourself on a Raspberry Pi. https://jonathanalland.com/legacy-mac-proxy.html

I’ve used Squid as a filtering proxy in the past. Unfortunately, I don’t have a Mac, but this:


- looks like it might be a useful guide for setting it up as an SSL proxy.
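For reference, the usual approach with Squid is its "SslBump" feature: the proxy terminates modern TLS on the device's behalf and re-serves content signed by a local CA that the old machine trusts. A rough squid.conf sketch, not a drop-in config - the directive names come from Squid's SslBump feature, exact syntax varies by Squid version, and /etc/squid/proxyCA.pem is a placeholder for a CA certificate you would generate and install on the old device:

```
# squid.conf sketch - NOT a drop-in config
http_port 3128 ssl-bump cert=/etc/squid/proxyCA.pem generate-host-certificates=on
ssl_bump bump all                 # decrypt and re-encrypt everything
sslproxy_cert_error allow all     # tolerate upstream certs the old device would reject
```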

I know Google tried to address this by giving Chrome its own independently upgradable certificate store, and I thought Apple would do something similar, especially since they don't have to rely on OEMs to push system updates.

If it's a limited number of root CA certs that are not supported, you can likely install those manually.

Or the server only accepts modern ciphers or TLS.

I took a deep dive into this after I was unable to access my blog on my iOS 6 device. I concluded that I don't really need an SSL Labs 'A'. It is much more likely that someone will try visiting my blog with an older device than that someone will MITM one of the visitors.
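Server-side, trading the SSL Labs 'A' for old-client reach mostly comes down to the protocol and cipher lists. A hypothetical nginx fragment along those lines (the directives are nginx's standard TLS settings; enabling old protocols is exactly the trade-off described above):

```
# Deliberately permissive TLS so older clients can still connect
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_ciphers 'DEFAULT:@SECLEVEL=1';   # let OpenSSL re-enable legacy ciphers
```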


Interesting write up, thanks for sharing it.

How many of those are old enough that they can't download Firefox?

(I realize this sounds snobby. I'm mostly just actually curious how much that is a viable option)

Some years ago I saw a phone that could no longer connect to the Play Store, probably because it lacked support for newer TLS versions. It was a rebranded Chinese phone, with no firmware update available.

I managed to install Firefox and a couple of apps by transferring the APKs from my phone using Bluetooth, but it's a popular brand in my country and I'm sure a lot of people are in the same situation.

I actually dug out my old Nexus One last week because I had an idea for a project and, yeah, can't do anything on that phone anymore. It still connects to my WiFi, but it can't open Google Play any more, and of course there are no updates available to make it work. Most websites don't open in the browser.

Funnily enough, Google Maps still works. I'm impressed that their APIs have remained the same for a very long time now.

And yes, I can probably still install APKs manually or find a custom firmware with a more modern version of Android where Google Play will work. But that requires a certain amount of skill and time, so for most owners this phone is only slightly more functional than a dumbphone.

I’ve heard this complaint before, but couldn’t you just put an HTTPS to HTTP proxy somewhere with good bandwidth to cut out the latency without hurting the security of people with good bandwidth/devices? Sure, a proxy costs money, but it’s not much compared to other infrastructure costs and it could be shared.

Honestly never considered that. Thanks

my personal rule of thumb is that my software must be usable at -O0 with address sanitizers on my desktop - so far that has meant that at -O3 it stays usable on Raspberry Pi 3-level hardware.

A few months ago I tried to make a build which targeted Ivy Bridge-level CPUs; it took no more than one day for a few users to report that it didn't work on their machines. Turns out a lot of people still rock old AMD Phenom or Q6600-era machines.

> my personal rule of thumb is that my software must be usable at -O0 with address sanitizers on my desktop

The trouble with this criterion is that it fundamentally alters the language from the ground-up: it forces you to optimize the source code structure for this too, not just run-time performance. Specifically, one of the core strengths of C++ is that no matter how many (practical) levels of wrapping and forwarding you do, as long as they're simple, they can generally all get flattened and go away with optimizations like inlining. But if you don't enable optimizations, now every indirection in your source code will cost you—even absolutely trivial things, like std::move() or std::forward(), that should be 100% free. This obviously hampers your ability to design good C++ abstractions, and, basically, turns C++ into a different language (like Javascript or Python). It seems rather suboptimal. (Do you not encounter these issues in your particular application?)

What I would probably prefer in your situation is to change the criteria somewhat, by doing things like keeping ASAN, enabling some debug-mode facilities (like ITERATOR_DEBUG_LEVEL=1 for MSVC), but also enabling some optimizations for inlining and such so that you don't fundamentally alter the language like this. And/or you can just slow down your CPU when testing (in Windows you can just set the max CPU speed in Advanced Power Options).

Presumably they still optimize and write for -O3; they just run a far slower version.

Without any manual optimization targeting -O0.

(The main negative is that a performance degradation appearing at -O3 but not at -O0 may be harder to notice.)

> it forces you to optimize the source code structure for this too

I thought that it would, but on my dev machine (a Broadwell 6900K, still pretty good but definitely not top of the line) I actually have to push it a fair bit for this to be an issue (which is why it is important to do it! because low-power computers are really low-power compared to that). So this question definitely does not come up during design (which in my case is generally very template-y and subject to the issues you mention). For reference, the app in question is https://ossia.io

The cases where doing this led to changes in code were more along the lines of "welp, looks like this algorithm I implemented for rendering waveforms is damn inefficient", "gonna have to think about whether I can redraw this widget less", "I should really cache the results of this computation", etc.

Interesting, I guess it depends on your application. :-) You made me go back and double-check this on an actual program I had; here's what it is as a comparison point:

So I have an application in front of me right now that I've already optimized the heck out of (and it's as close to single-pass as can be), and turning off optimizations in release mode makes a basic 0.27-second task take 2.4 seconds... almost an order of magnitude difference.

And when I try to break into the code to see where it stops, it's almost always within traditionally-very-cheap operations like std::vector::emplace_back

  1 std::vector::emplace_back
  2 std::vector::_Emplace_back_with_unused_capacity
  3 std::_Default_allocator_traits::construct
  4 T::T
  5 U::V::w
and std::lower_bound

  1 std::lower_bound
  2 std::lower_bound
  3 std::_Seek_wrapped
  4 std::_Vector_const_iterator::_Seek_to
which have suddenly become incredibly expensive due to lack of optimizations like inlining. And notice this is all in the standard library, not within my own (template-light) code.

Going from 0.27 seconds (near-instantaneous for the user) to 2.4 seconds (a huge lag) is enough to make the program incredibly frustrating. Whether it's still "usable" at that point I guess is a matter of debate (some devs just put up with any amount of lag you throw at them!), but I feel pretty safe in saying the task I'm trying to accomplish simply would not be possible without optimizations.

So I'm guessing your performance targets & constraints are quite different, and that's probably why this isn't such a big deal in your case.

I've still got some SandyBridge-era computers running.

My PC is a dual-core Intel thing with 8 gigabytes of RAM. It's 12 years old. It had 2 gigabytes of RAM when I bought it, and I added an SSD some years ago and upgraded the graphics card. It is still perfectly usable for my job (writing code, word processing, web dev). When I have bigger tasks, I design them on it and move them to an online CPU/GPU if needed.

So it's quite a durable product and I'm proud of it.

Using Linux helps, as it doesn't need 1 more gigabyte of RAM each time I upgrade it. And my Emacs consumes the same amount of RAM as years ago. Very predictable.

Likewise. A dual C2Q Mac Pro, Nehalem and Westmere Xeons, and a Sandy Bridge NAS. Newest non-embedded x86 in the house is probably my 2017 MacBook Air. I did buy an M1 Mac, but why would I replace our perfectly performant desktops that we only need occasionally for e.g. CAD or video editing or whatever when they still work absolutely fine? It's not a lack of money, it's a question of priorities. I have yet to find the killer app that's going to force my hand. It seems likely that hardware failure will get them first.

You just reminded me that I've also got a Core 2 Duo Mac running, as well. That thing can run games better than my Mac that came out a decade later. Might have something to do with the enormous caches on the Core 2 series versus later Intel Core releases.

I also agree with your reasoning. These computers have been serving their purposes for a while, and I see no reason to take the time to replace them.

Yeah, SFF PCs of that era can sometimes be had for RPi-level prices. My grandma has one and it's still more powerful than most low-end laptops people use. I've also got one as a home server; it's plenty powerful for that too. I'd recommend them to anyone who "just wants a PC".

> -O0 with address sanitizers on my desktop

> that at -O3

What does this notation mean?

Optimization levels for C compilers like GCC and Clang.

Specifically the command line flags you would pass to the compiler.

Technically, couldn't he install a very lightweight Linux distribution?

I have a few Raspberry Pi Zeros and I actually enjoy coding within the limitations of said hardware. When you know you only have 500 megs of RAM on the device, you have to solve problems differently.

"Only"... My first Linux system had 3 megabytes of RAM! I was running SLS Linux on a 386SX/20.

The old refrain:

EMACS - Eight Megabytes And Constantly Swapping

:) 16 MB on my first computer, and that had Win95 (then Red Hat Linux)

I like how cushy we have it when 500 megs is considered 'only'. I get that nowadays even Tetris runs on 6 GB, but if the software is written well, 500 MB is a lot of memory to use.

Modern hardware has allowed us to ignore bad/inefficient coding. That, aside from the need to compute larger data sets, is the ONLY reason we keep needing more RAM and CPU with every passing year. If software companies would stop rebuilding everything from scratch as a product model, and instead did it only when absolutely necessary and just worked on improving the EXISTING software, most things could run on an OS with 2 GB of RAM. There is no good reason to need to continually upgrade hardware just to browse the internet. The reason it's needed is that each year developers and designers assume their userbase has better hardware, so they can be wasteful with resources and not optimize.

This would drive down developer costs though wouldn't it?

Depends on the type of developer. With the increase in high-level frameworks, there are a lot of developers who never learn why the code they are using is bad. Development rarely involves looking inside the libraries they use; they just import a blob and make it do stuff. A lot of times, sites and web apps are scraped together from crap from Stack Overflow and just barely work. There have been many cases of core library developers doing something stupid and avoidable that slows all resulting programs down.

My first computer was a Timex 2068.

With 500 MB the world is boundless.

If you want to experiment with constraints get a ESP32.

Why would anyone need more than one 5.25" floppy disk?

It's got 160KB!

Well, compared with 3" microdrive it is a lot! :)

Ha! 3" or Microdrive? They were different things.

The ZX Microdrive was the Sinclair stringy-floppy, released 1983, giving approx. 85-95 kB per cartridge: https://en.wikipedia.org/wiki/ZX_Microdrive

But 3" was a true floppy disk, used in e.g. the Amstrad Spectrum +3, released 1987, where it gave 180 kB per side: https://www.old-computers.com/museum/computer.asp?c=222

I love that people are still making enhancements for these ancient machines. For example, Amstrad promised an add-on disk interface for the cassette-based Spectrum +2, but never shipped one. Now, the nonexistent hardware has been cloned and you can buy a new one!


There's an SD-card based replacement mechanism for the original Microdrive:


I meant the Amstrad Spectrum +3 one.

I was envious of a friend who had one, due to its CP/M support.

OK. The Spectrum +3 had true floppy disk drives -- Hitachi 3" units. They are not Microdrives, are not compatible with Microdrives, and Microdrives cannot be attached to a +3.

Sinclair's system was much older (4 years, a long time then), and had its own external controller, the Interface 1. Microdrives were like tiny 8-track cassette tapes: an endless loop of tape on a single tiny reel, feeding out from the centre and wound back on the outside via a twist. They cannot be rewound or run backwards, only fast-forwarded, so access was slow.

So, no: not even similar. Different size, technology, OS extensions, interface, capacity, speed... different everything.

I had a Microdrive setup in the early 1980s. Like much Sinclair technology, it was radically cheaper than the competition. 90 kB of storage isn't much but it was twice the total RAM capacity of the host computer, and was 10x or more faster than cassette tapes. A microdrive cartridge could hold dozens of BASIC programs or machine-code snippets.

The Sinclair QL semi-16-bit computer also used Microdrives, with 2 built in, but with a different, incompatible format that got slightly more data storage (maybe 100 kB up to 105-110 kB if you were very lucky).

There were multiple officially-licensed derivatives of the QL, mostly running different incompatible OSes, and they mostly used Microdrives too: the Merlin Tonto, ICL One-Per-Desk, Telecom Australia ComputerPhone and more.

3rd party clones such as the CST Thor replaced the microdrives with floppy disk drives -- more expensive, but much faster and much more reliable.

There is something special about the Pi that makes an “oh well, time to reflash and start again” a non-disaster.

They are great and hacking about with them is fun, even when disaster strikes.

My version control on the Pi is different SD cards, I just copy the stable versions over and rotate. It’s fun :)

What's the best way to back up the actual SD card? I plan to store it in the cloud. I tried using Windisk 32 and it didn't work.

I’ve used Pi Baker on the Mac.

It kind of hurts that the image is the same size as the SD card when the card might be pretty much empty, but it does make recovery easier.

Write zeros to the empty space (many methods available - make a big file of zeros or something and delete it) - then your image of the SD card should compress really well.

You can also use fallocate -d if you are on Linux.
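A small demonstration of that trick on a throwaway file (safe to run anywhere; the filename is arbitrary, and the filesystem must support hole punching, as ext4 and tmpfs do):

```shell
# Create a 10 MiB file of zeros, then deallocate the zero-filled blocks in place
dd if=/dev/zero of=demo.img bs=1M count=10 status=none
du -k demo.img         # blocks actually allocated: roughly 10240 KiB
fallocate -d demo.img  # -d / --dig-holes: free ranges that are all zeros
du -k demo.img         # allocation drops to (near) zero
stat -c %s demo.img    # apparent file size is unchanged: 10485760 bytes
```

The same idea applies to a raw SD-card image: zero the free space first, and the image compresses (or sparsifies) down to roughly the data actually in use.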

Great suggestion.

Actually, I think rpi-clone can clone down to a smaller SD card.


You could clone down to, say, a 4 GB SD card, and then back up that SD card.

For some reason I find "only have 500megs of RAM" very amusing. Many/most modern laptops only have 8-16 times more RAM than that. I'm genuinely curious what problems you're working where that "limitation" is your bottleneck and not the processor speed (which at 1GHz is still pretty speedy for many/most tasks other than pure computation (e.g. machine learning training and processing large datasets for statistics)). I'm also assuming you're treating it as a dedicated tool, and not doing tasks while running a DE and web browser at the same time.

I think if you require users to bring their own computer, you can be insulated enough from hardware costs to not really care about memory usage, and that's mostly fine. I have worked on set top boxes at an ISP. We designed and manufactured the hardware; if we could get away with 512MB of RAM instead of 1GB of RAM, that was basically pure profit for us. So some attention was paid to memory use, because it had a real dollar cost associated with it. (I guess I'll point out that the engineering samples had a gig of RAM, and someone got the idea to write the UI in Dart running inside Chrome instead of the very legacy Java that we had on the previous hardware generation... so the production models did not ship with 512MB of RAM.)

To some extent, being careful about memory usage is not the only way to make the business work -- you could, after all, charge more for the service or make people buy the CPE outright. But, being an ISP mostly involves getting enough people to buy the service to make it worth digging up a neighborhood to run fiber; you don't want to sour the deal by costing more than the competition with less able CPE. Doubling the RAM available to software engineers may improve the user experience by more than 100%, but nobody picks their ISP for the software than runs on their TV box, so it's probably wise to be careful.

My point here is that some programmers do have to care about memory usage. If you include a computer as part of your product, you will someday be looking at the BOM cost of the bundled computer in an attempt to turn cost into profit.

I've found that when working on things like this, it's better to make the engineering samples have HALF the RAM instead of double - it encourages minimalism.

Yeah, that's why I mentioned it. It is hard to commit to cost reduction. The hardware engineers don't want to do a bunch of work, just to have their project fail over 512MB of RAM. The software engineers and PMs will always want more features.

I think the devices are still in the field and being issued to new customers 5+ years later, so maybe it was the right decision.

I didn't say that paying attention to memory use wasn't important. I was more just curious about what kind of task 512MB of RAM is limiting for.

Browsing "modern" websites for one

RAM tends to create issues when you're building stuff locally.

I used RAM as an offhand example of something which is limited.

I actually did go out and buy a Raspberry Pi 4 8gb since I want to start processing some machine learning, and the 512 on the Zero won't cut it

A lot of young people use hand-me-down computers, even in developed countries.

I was just playing Unreal Tournament with the homestay family's children, on WinXP. One of their friends asked "Is this like Fortnite?" and I felt like I'm getting old. I was there when UT was new! Fortnite runs on the Unreal Engine!

On that note though, it would be really great to have a new game for Windows XP.

>I think the only solution is to stop expecting every computer to be general-purpose

Why? Computers are general purpose. The software we put on computers may have specific purposes, but computers are general purpose.

As for 'computer powered appliances' plenty of those exist and the general trend does seem to be to abstract the computer away inside some kind of locked down appliance.

I hope general purpose computers never go away. They're one of the most powerful and amazing tools ever created by humans. It's really too bad more people don't seem to understand or appreciate that.

I think a lot of people get turned off from general purpose computers because they are using proprietary operating systems and software that mitigate the "general purpose" aspect.

The most "general purpose" software most people interact with is a browser.

Computers are general purpose in that they are capable of doing anything possible by a Turing machine with limited memory.

Software built on top of that can be whatever we want within those limits. Even most proprietary operating systems are relatively general purpose. On Windows and macOS, you can generally acquire a wide range of software capable of doing many things, and can create your own with relative ease.

Smartphones get a little less general purpose, again above the level of the actual computer though. In the case of smartphones and consoles and such, the extra software thwarting the general-purpose nature of the computer is buried a little deeper, as firmware flashed onto ROM chips.

Then with computer-powered appliance-type devices, the only software is whatever is flashed onto the ROM chip buried inside there, which you can't really touch without some hardware modding.

In the end, computers have never stopped being general purpose, and likely never will. It's just the software separating the user from the computer is getting deeper and deeper into hardware.

I realize there's good security and user friendliness arguments to be made for this kind of thing, but it's a worrying trend. It'll create almost a pseudo class system with the people who have real computers and can use them to make money and do things and the people who have toys that suck money from them and feed them consumer garbage.

I’m having a hard time coming up with anything that a modern OS on a modern CPU can’t do - they’re about as general purpose as can be for any nontrivial but nuanced definition of “general purpose.” The only exception I can think of is real-time IO, which we offload to specialized chips with buffers and queues through PCIe and other busses. However, that’s a physical limitation since these peripherals would be impractical to implement in software until FPGA tech improves and gets significantly cheaper.

There's a lot you can't do, or can only do too slowly to be useful.

This is why scientific and commercial mainframes still exist, and why a lot of computing is offloaded to cloud services, relegating the modern OS on a modern CPU to a client system.

There are also entire classes of fairly conventional applications - A(G)I, true holographic displays, pseudo-real telepresence, real-time photo-realistic rendering (although that's starting to become possible), distributed non-localised filesystems, associative semantic storage of all kinds - that modern hardware is still too slow for.

And more speculative classes too.

In fact contemporary hardware is mostly quite slow and dumb. It's incredibly fast, small, and cheap compared to a mid-80s mainframe, but it's going to look very underpowered and crude fifty years from now.

Run technologically secure code, with the root of trust in the cryptography/security model your software uses.

On modern PC/server/mobile computers it's impossible: your root of trust there is the manufacturer and their microcode/embedded security modules with separate operating systems, etc.

Yeah... even 'general purpose' computers are shipped with hardware-level 'software' that's beyond users' access. Intel and AMD have their management engines; Microsoft's got its 'in' with UEFI. I'm not sure if there even are any modern processors available with the kind of access allowed by 8- and 16-bit CPUs...

It really annoys me how wonderful the computer in your pocket could be, and then how little is permitted due to Apple's restrictions.

I'm not talking about Apple turning down fart apps, I'm talking about the basic ability to write and run your own code without asking Apple's permission.

NB: He wrote ”I think the only solution is to stop expecting every computer to be general-purpose” (my emphasis). He didn’t write ”I think the only solution is to stop expecting computers to be general-purpose”.

>He wrote ”I think the only solution is to stop expecting every computer to be general-purpose”

Which is a bit ironic, as his website doesn't load in my Firefox (I've disabled HTTP-only connections), and after I added an exception, it still looks like crap with Dark Reader [1] because the website forces a white background, so now I have a grey font; with my sight problems, it's just too bright to read. Maybe it's time to stop expecting every website to even display in every browser?


edit: 99%+ of websites work fine with Dark Reader

...he's using the old default WordPress theme from 10+ years ago. Surely the issue can only be with Dark Reader? At the time this theme was made, it would have worked with every browser, and the theme hasn't changed.

On the other hand, I don't need my fridge to be a general purpose computer when people start making them smarter. Embedded software gains a lot of robustness from being separate from general purpose computing software pipelines.

If my toaster starts running node.js and needs internet connectivity I may go find my own shark to jump.

At my institution, our students take a series of courses on programming a simple microcontroller (and were doing so long before IoT/Arduino made that fashionable again). We worked with the HC11 until a few years ago when we moved to the 9s12. They even worked for a while in Assembly Language until quite recently (we now use C exclusively). In this case it wasn’t nostalgia or joy or anything subtle: modern computers are too complex to permit a useful mental model of how they operate. These ‘older’ systems (and their modern simple cousins) are a fantastic way to learn how a computer actually works with sufficient insight that it gives you a much deeper feel for how more complex descendants work. As one example, pointers and indirection are always a topic that students learning programming struggle with. Explaining that topic is much, much easier to a roomful of people who’ve worked directly with address registers and offsets.

My father believed strongly in this. I first learned to program in my early teens (at the time there were precisely two computers in a 300 km range of where we lived, my father was an operator on one of them). The ‘computer’ I learned on was made of cardboard, and I was the CPU: https://en.wikipedia.org/wiki/CARDboard_Illustrative_Aid_to_...

That looks great, I'd love to have a play with one. I wonder what's the easiest way to get one, perhaps a modern replica? It almost looks like it might be possible to implement as a PDF that you just have to print on some card and cut out?

Perfect! Thanks a lot.

Slightly more complicated, but you can also do bare metal programming on a Raspberry Pi, e.g. http://cs107e.github.io

> imagine if spreadsheet programs like Microsoft Excel stopped being developed and eventually just disappeared – that’s the level of significance that HyperCard had.

I often hear similar claims about the significance of HyperCard.

But if HyperCard was so significant to so many people, wouldn’t it have been ported and/or rewritten over the years to still be available today? Even if not by Apple, then by someone else?

That’s happened to Excel and other programs. So why not HyperCard? (Serious question)

HyperCard is like Concorde: it was replaced by less powerful alternatives (e.g. PowerPoint) that took away any hope of mass success and left the remaining audience too small to be viable.

That's why later successors like LiveCode have to aim themselves at niches of the original HyperCard audience, like those who want an easy dev tool. Which is nice, but misses the tool-for-everyone dream of the original.

PowerPoint is only 25% of it.

The other 75% was Netscape Navigator, because:

1) distributing info on the web is so much better than on floppies. If you were not in the loop, it was very difficult to get hold of interesting Hypercard stacks.

2) Hypercard was fixed-screen-size, whereas web used whatever screen size you had.

3) The web was cross-platform and in color. Hypercard was mac only and b/w only.

In some cases it has to do with the operating system cooperating or not. Modern operating systems don't allow certain things to work due to restrictions in what apps are capable of. Consider a Smalltalk OS, where you can dynamically link objects across "apps" in interesting ways (see Alan Kay's "Tribute to Ted Nelson" where he shows a demo of this). You simply cannot do such a thing with mac/windows/linux.

In the case of Hypercard, I cannot say whether or not this is the case. It could be that Hypercard is absolutely possible today. But I wouldn't doubt it if it was somewhat unusable on modern systems due to this "cooperation" issue I mentioned. It may need buy-in from other apps and/or the host OS for it to work fully as intended.

For another example, consider emacs. Emacs effectively gives you the lisp-machine experience, but the problem is it isn't integrated with the rest of the system. You sort of have to live in an emacs bubble. With hypercard, you could surely get it running, but would you be in a bubble? Ideally you could use hypercard to script the rest of your system as well.

What we should want is something like a "card" that could "link" to a specific cell in a spreadsheet, as an example. Or a card that could open a PDF to a specific page. The more the rest of the system "plays along", the more powerful something like hypercard could be.

HyperCard was replaced by the World Wide Web and PowerPoint.

The World Wide Web because it solved distribution and vendor lock-in.

PowerPoint because slide shows were an important use case and Windows desktops were much, much more common in the 1990s and 00s than Macs.

PowerPoint also provided much better integration with word processors and spreadsheets.

One of the downsides of HyperCard was that there were many XCMDs that were buggy, poorly written, and became unmaintained. You could create a stack and send it to people and it would hang their machine. If not immediately, then an OS update came along and bang! Often solving that issue was too much for the average user. It languished as Apple languished, and people moved on.

HyperCard was amazing in its prime, so its place in history is assured. Many people who were not "programmers" were tricked into being programmers on a 30MHz machine! They created amazing products. It also influenced the development of Netscape and the way JavaScript treated events/actions (attaching actions to buttons, for example, as opposed to a button sending events into an event loop).

It's also one of the finest examples of a domain-specific language empowering a normal user to do more. It's powerful but uses human concepts. Something we seem to have forgotten completely with our obsession with integrating JavaScript, Python or Lua into everything and then blaming users for not being empowered.

There is still a lot to (re)learn from these technologies.

This is the best explanation I know:


In brief: in the early 1980s, home computers were designed with the primary purpose of owners programming the machines themselves. They came with BASIC interpreters and how-to-program manuals. (Examples: ZX Spectrum, Oric-1, BBC Micro.)

But in fact, what happened was that most owners just played 3rd party videogames on them, which they bought ready-made on pre-recorded media.

So late-1980s home computers mostly had much better graphics and sound for better games, and some didn't have a BASIC at all, or only limited ones (examples: Amiga, ST) and better BASICs were left to the 3rd-party market (e.g. STOS, AMOS, GFA BASIC, BlitzBASIC.)

The Mac sat on the crux between these generations, with a foot on each side. Fairly poor graphics and sound, but it did have a (limited) BASIC. It focused on delivering a radically better UI, and this briefly included a radically better end-user programming environment, HyperCard.

But that isn't where the market went, and it wasn't where Steve Jobs was focused: his focus was on the UI and improving it, not on user programmability.

Cynical interpretation: making it easier for owners to write their own polished, professional-looking graphical applications would potentially reduce the lucrative aftermarket for additional applications software, so Apple killed off this line of evolution.

It died within Claris/Apple because nobody knew how to describe it and sell it. Was it a programming environment? You didn't need to be a programmer to use it. Was it a database? Claris also sold FileMaker so it couldn't be that. It could have been a "multimedia authoring suite" but it was black-and-white and to get color images or video you needed janky XCMDs.

So it languished in Claris and ended up so out of date it would need a complete overhaul. The best chance it had was when they attempted to integrate it into QuickTime as an interactivity layer, but that was still when Apple was in internal management chaos; someone quit and the project died. It made sense that Apple abandoned it - they had bigger fish to fry.

The question GP asked was why didn't someone else create a clone and sweep the floor? And that's a good question! There was SuperCard, and a bunch of clones on Windows. But despite expanding on HyperCard and fixing its issues none of them caught on. Why?


It wasn't sold, was it? I think it was given away free with every Mac.

There _were_ multiple clones, as you say.

I think in part it made sense in the context of the Mac as the first mass-market GUI computer, with strict HCI guidelines, a small screen with no colour and limited sound... As computers got more multimedia abilities, including later Macs, Hypercard got left behind.

HyperCard 1 was free and included with every Mac but HyperCard 2 came out after Claris was spun off from Apple and was a paid product and Macs only came with a "HyperCard Player" version.

Edit: I just saw this on Wikipedia - apparently Atkinson commented on the death of HyperCard 3 in an interview: "Steve Jobs disliked the software because Atkinson had chosen to stay at Apple to finish it instead of joining Jobs at NeXT, and (according to Atkinson) 'it had Sculley's stink all over it'."[9]

There’s LiveCode (https://en.m.wikipedia.org/wiki/LiveCode)

Which is the modern evolution of HyperCard to my understanding

For others who love old software and hardware I'll share two of my favorite sites, an excellent retro PC emulator, 86Box [0] and a clean and well-maintained software archive, WinWorld [1].

These two sites together have provided me hours of exploration into old hardware, BIOS screens I'd never otherwise see, and plenty of interesting software scenarios.

[0] https://github.com/86Box/86Box

[1] https://winworldpc.com

Interesting. What's the difference between 86Box and DOSBox?

DOSBox is made for emulating a PC running some form of DOS to play games. It emulates enough of the PC hardware to do that, and because of booter games (i.e. games running directly from their own boot disks) it can also boot MS-DOS. However, the development team doesn't care about non-game uses. There are some forks (e.g. DOSBox-X) that do (and also add more features), but in general it is more of a "compatibility" layer than a full emulator - think of it as Rosetta for DOS.

86box (a fork of PCem) is a full PC hardware emulator trying to emulate original real hardware as faithfully as possible, including their performance characteristics, limitations, etc. You can actually install Windows XP inside 86box on an emulated Pentium MMX 233MHz with an S3 Virge and a Voodoo 2 (though for better performance -and compatibility- Windows 95 is better).

Thanks! Very informative reply. I understand the difference now.

For one thing, upon brief investigation I don't think 86Box has Linux builds. Might be mistaken though.

86box is a fork of PCem (though it does get updates with new stuff from it) with, IMO, a nicer UI. If you are on Linux you can use PCem instead.

>On this blog, I write about the various computers I use and about the operating systems I use on them. Apart from Windows 7, which is relatively modern, these include Mac OS 10.6 Snow Leopard, which at this point is quite old

Completely nitpicking here, but both operating systems are the exact same age. I agree that Snow Leopard feels significantly less up-to-date than Windows 7 though, which speaks to how quickly Apple’s operating systems are obsoleted (and this isn't necessarily a bad thing).

All software is bound to keep changing forever unless people stop using it... even after it's past its "perfect" place in terms of usability and benefits it brings to its intended audience (I am not saying perfect in terms of having no bugs - though that may also be the case)... because we can only know that in hindsight and we have no way of measuring this objectively.

Some old Unix tools (ls, cd, tail...) are perhaps the closest we have to that, but in terms of UI I can't think of anything. As the needs of users change, so does what the "perfect software" for those users looks like... however, I would think there's usually a decades-long period in which some software could stay just as it is, without there being any possible improvement one could make to it.

I think it would be really interesting if we could find a good way to tell when that "perfection" is reached and tried to intentionally stop changing what is literally already perfect (though that will never happen in a commercial product, for obvious reasons).

Old unix tools are serviceable, but nowhere near perfect. I find exa better than ls, fd better than find, rg better than grep, etc.

Thanks for reminding me to use exa

At the time of writing the answer appears to be "Error establishing a database connection" which tickled me as, well, accessing my childhood 8 bit computers never involved database errors!

Syntax error!

> Error establishing a database connection

Archived: https://web.archive.org/web/20210319083317/http://john.ankar...

Sometimes I used old computers because they seemed functional enough. Not so long ago I used a computer from the late 2000s and it was a quite normal user experience on Linux (with a lightweight window manager, of course), with the small exception of the web browser. The amount of scripts and data on modern sites caused problems and often made the whole OS hang. With JS turned off, no problem.

Coincidentally i'm writing this from my late 2009 iMac. It is already more than a decade old but i think it is a perfectly fine computer. With the latest version of Firefox every site works.

The main issue it has is that it is a bit sluggish but i think an additional 8GB of RAM (it has 4GB) and perhaps an SSD would make it feel perfectly fine.

Sadly Apple doesn't seem to agree and the last version of macOS to support it is 10.13 - which itself isn't supported anymore as of December 2020 (just ~3 years after it was released, which is kinda mad IMO). Most things seem to work fine so far (most open source applications seem to support even older versions anyway), though Homebrew (which i used to install a couple of command line tools) does warn that they have no official support for it and some stuff may break (fortunately that didn't happen).

I'm using an early 2008 MBP with Debian Buster with 6GB of RAM and SSD. The nvidia-340 driver is still supported so Youtube works nicely. With mbpfan it does not overheat. Touchpad and suspend are working pretty well. The Amazon website with Firefox is where I get warnings about a script taking too long.

At Brown U.'s semiconductor fab cleanroom we had a Windows 3.1 PC controlling our plasma etching machine. One day we opened up its case--not a speck of visible dust despite operating continuously since the early 90's!

Isn't that expected, given it was operating in a cleanroom?

Sure, but still surprising on some level.

I recently inherited a 32-bit laptop that runs Vista, any recommendations of what version of Linux to try?

32-bit isn't a problem; RAM, however, could be. I've run Debian on 32-bit Atom netbooks with 1 gig of RAM without problems. Using light desktop environments such as XFCE, or smaller ones, would allow 512MB RAM or even less. Years ago I successfully ran Debian + LXDE on one of those toy Win-CE Chinese laptops with just 128MB RAM. The CPU was a WM8505 clocked at a whopping 300MHz. And then there's ELKS Linux, which works even on 8086 CPUs and which I ran on an industrial PC many moons ago. https://github.com/jbruchon/elks

Extremely small systems aside, it can run fine on decently equipped laptops or netbooks. Surfing the web with a full-featured browser such as Firefox, or using heavy apps such as LibreOffice, without having the system swap too much would likely require no less than 2 gigs, but if you do network maintenance using command-line tools, even the smallest netbook with half a gig of RAM becomes a useful tool to keep in the bag along with bigger laptops.

Which CPU model do you have, exactly? If it's a Core 2 model, they are actually 64-bit capable (x86 with 64-bit extensions) and can run an x86_64 Linux without issues.

Other than that, I'd recommend Debian or Mint with MATE if you want an easy and stable distro. Otherwise, if you are willing, go for archlinux32 to still have the benefits of the AUR.

I had a great experience with https://q4os.org/ (and its Windows skin).

It feels like a modern Windows XP.

I must admit I have not used it for much work, but the feeling of playing around with it was great.

> I recently inherited a 32-bit laptop that runs Vista, any recommendations of what version of Linux to try?

I'll have to check to be sure that it is 32-bit (the laptop is downstairs and I'm lazy), but I do my personal projects on a 2008 Asus that came with Vista and 2GB of RAM. I literally use it daily using:

1. Emacs
2. Vim + every plugin you can think of for development
3. GCC + all the devtools for C development
4. Standard GUI tools (browser, some solitaire games, Dia for diagrams, etc.)

I am pretty certain I am using this: https://www.linuxmint.com/edition.php?id=255

Once again, I might be wrong (although "pretty certain" covers that), but you can give it a try.

I would load up Slackware 14.2 on that bad boy.

Mint has all sorts of versions that work great.

One thing about old computers is how hackable the hardware is. On old PCs, things like parallel ports, serial ports, joysticks and so on are trivially easy to interface with and have great performance. Even further back, generating and measuring periods in the sub-microsecond range was just a quick bit of assembler on 4MHz Z80s back in the early 80s. Although that sort of hacking is still very possible now, it's the province of things like PICs, Arduinos and Pis, which are all relatively specialised and require far more effort to get started with than sticking a couple of wires in a parallel port and doing an outportb().

When I was an independent consultant in the mountains of northern California, there were companies running devices from Windows 7 and Windows NT computers where the device manufacturer had gone belly up, quit, or just decided not to support their previous products. I had several successes using WINE on Linux. The companies were so grateful that in some cases I got free pastries, coffee, and/or sandwiches. Here in LA some schools are in a bad way, so I put Linux on old hardware and teach the kids the differences between Linux, LibreOffice, GIMP, and other FOSS software. I freely transfer my HATE of M$Soft and MAC to these young minds.

I worked as a recording engineer for several years. The first studio I worked at, the B room had an SSL 6000 G Series and a Pro Tools 888 system hooked up to an Apple machine running OS9.

This was 2008, so already old back then, but with the way it was configured plus the 888 system, it was still valuable.

I have friends who work in water treatment infrastructure, and out of necessity carry laptops with VMs for DOS, windows 3.1, etc.

Even my AD/DA converter at home is no longer supported. I use a 2010 mac mini running OSX 10.11 with it.

As long as people are using older hardware that interfaces with a computer, older OSes and machines will be useful.

I've enjoyed installing Windows 98 on old hardware; unlike Windows XP, the OS installs without network activation and runs all kinds of legacy software. But the couple of old towers I've gotten from friends' closets have lacked driver CDs. Once you reinstall Windows (perhaps because of a bad hard drive sector) it becomes impossible to get the sound card working again, or to get better than 16 colors out of the graphics card. Even vendors like Intel no longer provide 80710E chipset drivers, and the ones for download on shareware sites don't work.

It's actually possible to browse the web on retro PCs by setting up a Raspberry Pi running "Browservice": https://www.youtube.com/watch?v=5MjZdKtv9ak

If you're on a machine with only an RJ11 / 56k dial-up port, you can also set up a Raspberry Pi to handle this too: https://www.youtube.com/watch?v=NFUTInM7gq8

Hope this helps some retrocomputing enthusiasts!

In addition to Browservice, WRP is also pretty good for very old browsers that can't do anything but load images. As long as you can load a webpage with image map support, you can browse any modern website on it (I think Browservice requires some JS support on the client browser).


I have an old Win2K box with a P3-450 with a SB-Live! soundcard which has some nice environmental effects such as reverb that I use for recording. I haven't used it much in the past few years because I need to clone the two drives, which are close to 20 years old. Rebuilding the system from scratch would be a nightmare.

It's fitting that this post is written on a blog running the very old (original?) default theme from WordPress 1.0.

> Quite literally, the only way to use HyperCard is to get a hold of an old Mac – or emulate it, but emulation always falls short of the real deal. That’s why HyperCard alone is a pretty clear reason to use Mac OS 9

Not to mention Beautiful Doreena! Misses Mac OS pre X.

Sounds like nostalgia to me.

Anyone else in the UK thinking what a fine thing it would be to designate the space for a dedicated “HyperCard” machine?

I practically operate a one-in, one-out policy for retro stuff like this.

"Why use old computers and operating systems?" > link returns "Error establishing a database connection." Is this an attempt at ironic humor?

I am not going to clutter my home with old, bulky and single-purpose computers.

> and the computers needed to run them are cheap

Old computers aren't always cheap. Retro PCs get expensive quick.

The website crashed, failing to withstand viewers from HN.

Emulators can give you the best of both worlds.

I agree with the idea that old computers and OSes might still be useful and that they have a lot of ideas we simply don't even get to experience today.

However, the final point about learning to accept that general-purpose computing isn't needed is not well worded, and in its current form I completely disagree with it. Old hardware can be kept and used for specific, non-general purposes. And new hardware could be made that is locked down for security and maintenance reasons (think routers or IoT bridges). But a world where we resign ourselves to machines that are not general computing devices is not one I think we should be moving towards.

Let me give you a more concrete reason to maintain legacy systems.

We run a major set of COBOL applications developed under VAX/VMS, running under ACMS, utilizing TDMS. Please note, I can barely spell some of these things, let alone grasp what they do.

The application software that my predecessors wrote for these systems supports thousands of users, and is a vertical wall of technical debt.

I am far from the decision-maker, but I run the corporate-mandated communication gateways. I just switched my bastions from stunnel-telnet to tinyssh-telnet. At least my keys don't expire now, and the crypto is strict DJB.

We make do with what we have. I do the best I can. I respect the work of those that came before (and it signs my paychecks).
