I ran TAMU Linux on a 486 with 4MB RAM in 1994. It wasn't much fun - you could start X windows and emacs, but if you tried to compile with g++ at the same time it would page.
I spent $250 of my hard-earned money to buy an 8MB upgrade, and ultimately upgraded that machine to 32MB, at which point it "flew" - no paging during development.
That would be about the era when people who weren't so keen on hardware upgrades as a solution to their problem were denouncing Emacs for its memory usage, and encouraging people to "stick to" vi if they wanted a performant development system.
Imagine denouncing Emacs for its runtime size today, relative to the bloat that is VS Code or IntelliJ :)
Even today editor bloat is real. On my netbook VSCode is noticeably laggy sometimes, and it wouldn't take too much more to start paging out the editor! It amazes me that we still can't do instantly responsive editors. Even my dinky notebook still has a gigabyte of RAM and can do a billion integer operations a second. But VSCode can't show text instantly. I guess I'll stick with vi!
Because it's not an editor - it's a browser. I wish that some day all those Electron-based "applications" would go away forever. The worst of all technology "inventions".
It is fast, and I find myself going between the two. But IntelliSense and the language plugins that Microsoft makes are just so much better than what Sublime Text offers.
It’s ironic that I still get lag with my vim setup on the latest MacBook Pro today, yet I ran a 486 back then and could write code with about the same or better responsiveness. The biggest issue seems to be syntax checking and code folding on files over 3000 lines long. Yes, I know long files aren’t ideal, but for the specific project I’m currently working on it’s somewhat better than a thousand 3-line files.
In my experience, it's just some particular syntaxes that vim is slow to process - large XML/HTML files especially. But just ":syntax off" when you want responsiveness more than you want colors :)
For me it’s Python code - Django views and models. Which honestly should be pretty fast because it’s all indent-based, but I’ve tried a number of things and while I wound up with acceptable results, they still aren’t lag-free.
I had already bought into the emacs ecosystem- it ran fine on the micros that I had been using previously.
For most work programming I still use emacs 20+ years later but that's because of muscle memory. For hobby projects I use VS Code. It runs just fine on my 32-core, 64GB RAM desktop :)
Yes, I should have mentioned: I was using the X windows version of emacs (at the time, I thought that was a Really Cool Thing - now, I use text emacs in tmux). I think it has a much larger footprint.
I used a 486 DX2-66 until 2001, when I went to college. FreeBSD was far and away the winner on that thing. It had 20 megs of RAM and a 2.2 gig hard drive. It ran X, WordPerfect, and Netscape under Linux compatibility better than they ran natively in Slackware. Got me through high school with no problems (I had to buy an external modem, as the one it came with was a winmodem).
No clue the current state of things though.
The web loaded and rendered faster because JavaScript was used sparingly, if it was used at all. XMLHttpRequest didn't become a thing until 1999 and didn't catch fire until later, so async requests just didn't happen.
For a short window of time we were using 1px flash objects to act as an async intermediary...
Yeah, I know. I still use Dillo because almost everything is usable - at least fora, news sites, and such.
Except this site, which I find atrocious since you have to use JS in order to search. I use DDG/Google restricted to the domain (site:news.ycombinator.com) and call it done.
But, well, no mp3s via mpg123 unless they're played at a reduced rate (11 kHz, mono). If converted to MP2, the files would be bigger, but playable at the highest quality.
Too crashy (I use v4.0), and slower on rendering than Dillo, by far.
Also, I can't set the User-Agent to anything else in order to fix rendering errors - for example, the PSP one, which is still supported on tons of sites, or the first Opera Mini releases, pre-v5.
They are still alive, although I prefer roguelikes locally now; and, for something close, IF and a Z-machine interpreter.
Netplay? Just quick matches with my SO on an emulator for '90s machines, or an adventure played together, solving puzzles.
It's been discussed here before, but it's an impressive reminder of what an individual can accomplish with modern small-batch PCB manufacturing.
It also goes a long way to make large-pitch BGA devices and high-speed signal design seem less intimidating to newcomers. Definitely worth a read if you're interested in these kinds of devices.
Also, the Lichee Nano is one of the only freely available English-language reference designs for the Allwinner F1C100s processor, which is nice.
One dream project I have is to build a CM-5 like machine from hundreds of cheap units like this. Unfortunately this one seems to lack fast I/O. Could probably do something with the I2C connections but they would be very slow.
(Before anyone jumps in with "why don't you use a desktop machine, it'll be faster", this is for fun, not a practical project.)
I’m not sure what transfer rates you’re looking for, but modern SPI slaves can do > 120 MHz (roughly 15 MB/sec); if you give it two more pins you can do 66 MB/s (bytes, not bits) via quad SPI (cf. the datasheet for this very run-of-the-mill NOR flash that can do SPI, Dual SPI, and Quad SPI [0]). Of course that’s a unidirectional transfer rate - you won’t be able to simultaneously send and receive at that rate, but you can alternate symmetrically between the two.
You won’t be able to bitbang at those speeds, so you’ll definitely have to drop down a layer in your stack/abstractions but since you mentioned I2C I figured you’d probably be OK considering this.
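If you want to sanity-check those numbers, here's a rough back-of-the-envelope sketch (the 133 MHz quad-SPI clock is an assumption for a typical NOR flash part, not a figure from the datasheet above):

    # Rough throughput check: SPI moves one bit per data line per clock cycle,
    # so bytes per second = clock * data_lines / 8.
    def spi_mbytes_per_sec(clock_mhz, data_lines=1):
        return clock_mhz * data_lines / 8

    print(spi_mbytes_per_sec(120))       # ~15 MB/s: plain SPI at 120 MHz
    print(spi_mbytes_per_sec(133, 4))    # ~66 MB/s: quad SPI at an assumed 133 MHz clock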
Communication overhead is the enemy of these highly parallel machines. The Connection Machines had many CPUs, but the unsung hero was the hypercube of connections between them [the clue is in the name]. According to Wikipedia it was a 12-dimensional hypercube, so every node had 12 high-speed point-to-point I/O channels to adjacent nodes, which must have been a nightmare to implement and a nightmare to design software for. The cost of a CM-5 (Wikipedia again says $25 million) must have mostly been for this very specialised network.
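To make the hypercube idea concrete, here's a tiny illustrative sketch (my own, not anything from the CM documentation): each node's neighbours are the addresses that differ in exactly one bit, so a 12-dimensional cube has 2^12 = 4096 nodes and exactly 12 links per node.

    # Hypercube addressing sketch: neighbours differ in exactly one address bit.
    def hypercube_neighbours(node, dims):
        return [node ^ (1 << bit) for bit in range(dims)]

    dims = 12
    print(2 ** dims)                      # 4096 nodes in a 12-cube
    print(hypercube_neighbours(0, dims))  # node 0's 12 neighbours: 1, 2, 4, ..., 2048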
It's hard to imagine this could have been competitive with a $25 million pile of beige PC boxes from the same era, but the PCs would have been starved of I/O (10 Mbps shared thick ethernet anyone?) so only applications which don't need much I/O between the nodes would be possible.
A "modern" CM-5 would ironically look much more like the pile of beige PCs, because it will have much less I/O -- these cheap chips only seem to have at most one or two fast channels (eg. ethernet and SDIO). There's no way to build these into a hypercube. It will be constantly limited by bandwidth and contention addressing other nodes in the cluster.
So I'd only build it for fun, not for practicality :-)
SpiNNaker[0] is a species of that, with multi-dimensional connections in a toroidal surface configuration. It's ARM-based, largely because the chief developer/project head is Steve Furber. Along with interviews concerning the BBC Micro and ARM, Computerphile did a video with Furber concerning SpiNNaker[1].
Yes SpiNNaker looks very cool, also of course the BBC Micro connection as you say. I do wonder what the network architecture is, so now I'm going to have to watch that video you posted :-)
Edit: It's a toroid, which seems an unusual choice (because 2D) for something that's meant to simulate a brain. I wonder if a simple 3D cubic connection network would have been possible by adding more links between physically adjacent boards.
I have the Turing Pi 1 with 7 RPi CM3 nodes, and it's fine. But next I want to move to a really "huge" cluster, say 100+ nodes.
The reason is that with 7 nodes I find I'm still logging into each Raspberry Pi and configuring it by hand. It's a bad habit for sure. With 100 nodes, there's no way I could possibly do that, forcing me to write software to control the nodes automatically.
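By "software to control the nodes" I mean even something as simple as this rough sketch (hosts.txt and key-based ssh auth are assumptions on my part, nothing Turing Pi specific):

    # Rough sketch: run the same command on every node over ssh.
    # Assumes key-based auth and a hosts.txt file with one hostname per line.
    import subprocess

    def run_everywhere(command, hosts_file="hosts.txt"):
        with open(hosts_file) as f:
            hosts = [line.strip() for line in f if line.strip()]
        for host in hosts:
            result = subprocess.run(["ssh", host, command],
                                    capture_output=True, text=True)
            print(f"{host}: {result.stdout.strip() or result.stderr.strip()}")

    run_everywhere("uname -a")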
I currently have BasicLinux (http://distro.ibiblio.org/baslinux/) running on a 386SX with 8MB RAM. Last remaining thing I’m still puzzling over is getting its 10Mbit Ethernet card working.
You knew this was coming ;-) Back in 1993 I had SLS Linux running on a 386SX with 4MB of RAM. And that included X11 and gcc so I could compile my own kernels!
(Almost) same here. I had the DX though, and quickly upgraded to 8MiB (by adding 36 RAM chips in DIP). Still, when compiling the kernel (it took about 20 minutes, IIRC), I left X11. Swapping wasn't that much fun using a single 65MB RLL(!) drive.
What's the problem with the ethernet card? I'm guessing Linux has deprecated/removed the driver for it? Also I'm quite surprised that Linux still works on a 386SX, because support for 386 was removed some time ago (https://lwn.net/Articles/527396/).
Right, this was the first problem: “i386” distros actually pretty much all went 486+ some years back. So that severely limits the ecosystem. While you can always go back in the archive and get a bare kernel, I wanted a full system with init, package management, etc.
The card is (unsurprisingly) an ISA card, an Intel EtherExpress 16. I forget exactly where I was with it when I last poked at this earlier this year, but the promising thing is that I have it working in MS-DOS 6.22 without issue. There are some utilities from Intel that I found that write the EEPROM on the card to set up its IRQ and all of that, which is how I got it working in DOS.
Perhaps I’ll take another swing at it and document more precisely what the issues are and what I’ve tried so far. I wrote up the acquisition, repair, and resuscitation of the machine in a series of posts on my blog under “Project 386”, beginning with https://justinmiller.io/posts/2020/04/26/project-386-part-1/ so I intend to add some more posts.
Just wanted to say that your Project 386 posts are fantastic! Thank you so much for writing them up! I really enjoyed reading them. I was so happy you got it running, and seriously impressed with your troubleshooting and analysis of all system components, and the eventual fix of the motherboard traces.
Oh, thank you! They are so niche, but I really wanted to convey the sense of adventure and persistence that I was wrapped up in while trying to revive the thing. It has since led to a new hobby (I recently acquired a working 1986 Mac Plus) as well as a side business project that I hope to share more about soon.
It's nice to run a common distro on such tiny resources, but things tend to pile up very fast when you add packages, because of long (and sometimes unneeded) dependency chains. That's when more specialized solutions shine (like openwrt or buildroot). Kudos to the author.
5MB and it ran SLS! I even managed to run X11 and emacs (xemacs I think?), although it was very much a matter of either running X11+emacs or compiling but not both.
3 megs RAM and SLS Linux here, around 1993. This was a 386SX laptop with 1 meg onboard and a 2 meg expansion. X11 barely ran and I didn't dare try emacs, so I had to learn vi.
I agree, or even something like openwrt. This was just to see if it could be done, it’s not the most practical OS for the hardware but it runs pretty well!
Yeah, I encountered this. Repurposed my Netgear WNR1000 v2 with OpenWrt to act as a bridge to connect wired devices to my network. While it has served this role flawlessly for 5+ years, it's frozen in time running an old kernel. The write-up as to why they had to drop support is a good read. [1]
It doesn’t; it’s Linux-only for now, unless someone builds U-Boot for BSD. Even then there would be a huge amount of work to do to get it running stably.
Just a few days ago I was trying to cut down the configuration for the current Linux kernel so I could boot it on a circa 2001 PC/104 SBC with 32 MiB of SDRAM. The processor is an AMD Elan SC520, which is a 133 MHz AM5x86 core and about on par with a 75 MHz Pentium. Sadly the memory is not expandable.
Getting a buildroot-based system to work was fairly easy, starting with "make tinyconfig" for the kernel. IDE was a real hangup because there really isn't an IDE controller - it's a legacy IDE interface with no DMA, and disk IO is abysmally slow. I found that libata has experimental support for legacy IDE, so rather than use the old IDE drivers, I've managed to not select any deprecated kernel options.
Booting a current Gentoo stage3 with networking was a major challenge, though. I still haven't managed to get IPv6 and all of the standard netfilter modules in without the kernel hanging mysteriously between finishing its self-extraction and producing any logs.
I also haven't managed to get ZONE_DMA support working for legacy DMA on ISA (for the PC/104 bus) without it using too much of the remaining DMA'able memory to load the Intel 82559 ethernet driver (e100.ko).
It's rather hilarious seeing a 5.10 Linux kernel complaining that it can't find a 256 KiB contiguous region of physical ram.
Buildroot boots in a matter of seconds on it, Gentoo takes a few minutes.
Languages with garbage collection and/or tons of reflection (requiring memory for bookkeeping) and/or JIT (requiring memory for on-the-fly jitted code) would indeed be a problem due to the overhead of those technologies.
Python is not as "bad" as dotnet and go I'd think, as most of python is reference counted garbage collection (with a "full" GC just to break up cycles) while go and dotnet use essentially mark-and-sweep GC strategies which require a lot of object moving and thus scratch space.
Running a dotnet hello-world (on x86_64 Linux, admittedly) gives an RSS of 26MB, most of it mapped libraries and .NET assemblies, some libraries shared between processes of course, like libc/libm/ld.so. But it also maps a ton of memory for the JIT and the GC (including scratch space to move objects into during GC). With some swap, it may survive on a system with 25MB of free memory, but I'd think there'd be plenty of swap thrashing and GC thrashing going on, making that less fun.
Running a python3 hello world comes in at 10MB RSS, but a large part of that is shared libraries like libc/libm/ld.so. With some memory-conserving programming, some real Python programs may run just fine without excess swapping.
For comparison, a hello world in C maps about 1MB RSS, while a Rust one maps 2MB RSS, both times with a big chunk in shared libraries, especially libc.
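If anyone wants to reproduce that kind of figure, one way (not necessarily how the numbers above were measured) is to read VmRSS out of /proc on Linux:

    # Read a process's resident set size (VmRSS, in kB) from /proc on Linux.
    # Run as-is, it reports the RSS of this Python process itself.
    def rss_kb(pid="self"):
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])

    print(f"python hello world RSS: about {rss_kb()} kB")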
For context, I currently have a 16 MHz 68030 machine with 24 MB running a recent NetBSD install, just as a lark.
For simple command line things like poking around the file system over telnet, it feels pretty much like a contemporary system. It can even run the Python REPL fine, though it takes a good number of seconds to start up. Apache is no problem and it can host a website, though I imagine it would crumple with more than a dozen requests a minute or so. Actually, the only real annoyance for that kind of work is that the machine's too slow to authenticate SSL keys in a reasonable time, so logging in takes forever and you can't host an HTTPS site.
Given this ARM machine is probably ~100x faster, if you're willing to add some swap for flexibility, I imagine it'd be usable for a wide variety of low-memory tasks.
> Actually, the only real annoyance for that kind of work is that the machine's too slow to authenticate SSL keys in a reasonable time, so logging in takes forever and you can't host an HTTPS site.
Isn't Dillo (compiled from Mercurial) with mbedtls fast enough? Also, Gopher servers like sdf.org, magical.fish or i-logout.cz would shine on that machine. Fire up lynx and go. Or better, compile sacc with tcc, it will run megafast.
The first time I installed a minimal Gentoo (~2005 or so) I could boot to a shell with 17 MB of RAM used. This was after tuning the kernel to make it as small as possible and removing everything I didn't need, disabling all superfluous init services, etc.
It's quite possible that it did use more than 17 MB temporarily before going back down, but I would expect that it's probably doable in 32 MB + swap.
Configuring a minimal Linux system with Gentoo or Linux From Scratch is a great way to learn how things work (especially learning from mistakes like disabling ATA/IDE in the kernel and having to figure out why it no longer boots :p).
FYI you don't need to run the second stage of debootstrap/dpkg on the device. You can install qemu-user-static, "cp /usr/bin/qemu-arm-static path/to/sd/card/mount/usr/bin/" and then you can chroot there from your x86 machine. It works by registering a handler in /proc/sys/fs/binfmt_misc and then it runs the binaries with qemu.
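If it helps, here's the same recipe as a small sketch (the mount point is a placeholder for wherever your SD card's rootfs is mounted; run it as root):

    # Sketch of the qemu-user-static chroot trick described above; run as root.
    # MOUNT is a placeholder for your SD card's rootfs mount point.
    import shutil, subprocess

    MOUNT = "/mnt/sdcard"

    # Copy the static qemu binary into the target so the binfmt_misc handler
    # registered by qemu-user-static can find it inside the chroot.
    shutil.copy("/usr/bin/qemu-arm-static", MOUNT + "/usr/bin/")

    # ARM binaries inside the chroot now run transparently through qemu.
    subprocess.run(["chroot", MOUNT, "/bin/sh", "-c", "uname -m"], check=True)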
A year ago, I got my ~first computer, a Pentium 2, to run the latest x86 Debian release with minimal effort. Half of what made it possible was that the P2 was the second chip to have 686 instructions, and that's the oldest supported target for Debian. It didn't hurt that the motherboard's 440BX chipset is popular with VMs, and the board had a USB port, so in many ways it looks modern.
Corel Netwinder, single core Intel-made ARM, 2 ethernet ports (10 and 10/100), VGA, serial, and a 2.5" PATA disk; I think I splurged on the 64MB version and paid something like $800 for it.
My nostalgia for it is tempered by knowing that a Raspberry Pi 4B is better in basically every single way and ridiculously cheaper.
My benchmark for cheap single-board computers has been the $9 CHIP [1]. As far as I can tell nothing since has been able to match that kind of value - 512MB RAM / 4GB MMC / wifi+BT. Hell, it even had a power management IC and a battery connector. I regret I didn't grab more of them before the company went defunct. If anyone knows of a spiritual successor, please share.
> As far as I can tell nothing since was able to match that kind of value
I mentioned it upthread and I don't want to look like I'm shilling but ... the Orange Pi zero board has 256M or 512M of ram, 4 cores, ethernet, wifi and all sorts of other stuff, for about $10. You have to provide storage though, and while it does have a graphics capability you need an expansion board (another $2) to use it.
The Orange Pi looks good, but it's a shame it has non-standard PoE, which seems to require weird hacks to make it work (https://parglescouk.wordpress.com/2017/04/14/getting-the-ora...). Having a single wire to each board would be very compelling if it could be more standard.
Are those solder bridges directly connected to the Ethernet Jack?
If so, it would be much easier to just hook the 48V -> 5V step-down converter up there, to avoid having to prepare a special Ethernet cable.
This is cool, but hopefully newbies are aware that for a few dollars more ($9.99) you can get complete ARM boards from FriendlyARM.com; the ZeroPi has 512MB RAM, Ethernet, USB, etc.
You can get a package manager, yes, but you do not get an entire repository of prebuilt packages. I suspect that the author selected Debian for its packages, not its package manager.