This is pretty amazing. I remember the first computer I installed Linux 1.x on had 12 MB of RAM, around 1992 or 1993? The smallest system I built/ran it on even back then was probably a 386 with 4 MB of RAM, and it was so slow. Kind of amazing that after so much work, even torn down to the bare minimum, the modern kernel can run on such minimal hardware!
Edit: probably worth noting that esp32 cores are an order of magnitude higher frequency than those 90s computers.
Moving and resizing wireframe boxes seems to me like an ergonomic and technical win; it's a bit baffling to me why it's not the default everywhere, no matter the compute/gfx capabilities.
Back in the day you had plenty of SVGAlib games, PDF readers and even video players. You didn't need X at all, mp3blaster/mpg123 worked crazy fast on a Pentium and with Screen you had the perfect multitasking environment for the TTY.
Elinks? I think it's links+, which support{s,ed} SVGALIB and Framebuffer.
But back in the day you could browse without images perfectly fine, as the important ones were under galleries with custom links, and downloading them was really slow.
> "You could run a serviceable Linux console on 4MB RAM back then. X11 was also doable [..]"
This was my setup for a considerable amount of time. Even though it is hard to believe in our fast-paced times, it must have been for several years.
The 386DX with 4 MiB was my first Linux machine and my first PC for that matter. 4 MiB was not enough to even run the Slackware installer, so you had to partition and mkswap manually from the command line before you could run it.
Apart from that, it was a pretty workable machine by the standards of the day.
I used it from the console most of the time, but X11 worked, even if it was slow as molasses, as you said. I remember running GNU chess. While you could play from the command line it was one of the apps where the GUI was really helpful.
As long as you don’t expect it to do much. GEOS barely manages to run one application at a time while swapping continuously. It’s an impressive achievement but really not comparable to a general purpose multitasking operating system.
Is there any actual information for building it? I really am having a hard time finding anything that doesn't just link to a SourceForge repo last updated in 2016.
People forget (if they were even there) what was reasonable at the time. My first Linux was a 386sx with 5 MB of RAM in the early 90s. It ran pretty well.
Haha, tell me about it. I think the 4MB upgrade cost around $300 or $400 at the time? Similar to my first 1GB hard disk, which came a few years later and cost $500.
I was a little spoiled since my dad worked for a telco, so we tended to have decently modern computers and modems for that reason. I was REALLY fancy when I got 128k ISDN in 1996.
My 1st HDD (1992) was a 20MB one ... 20 MEGABYTES (about $300), and guess what: I totally screwed it up using DEBUG (mistyped some params) while trying to low-level format it. Luckily I managed to return it.
I worked in a lab once with a five megabyte hard drive from the 80s being used as a door stop, it was huuge, like the size of a large shoebox. They also had a WORM drive from the 80s that I dug up an ad for and it cost $25k for the writer and it could store 4Gb!!! On what looked like a laser disc in a tray. It’s wild the range of available technology at any given time, depending on how much money you have.
I do remember being very cautious when I first set up Linux, because things actually cared about the number of blocks, cylinders, etc., and if you got it wrong things broke in confusing ways.
Absolute luxury! I had 4MB three years later in '95... and in '96 I got the Linux Kernel Internals book which came with a slackware CD, forcing me to purchase my first CDROM drive.
The computer I was using at work in '92 had a whopping 16 MB! It was amazing. It might have been a 20 MHz 386 (before cache was added to those systems, so it was very slow compared to the 25 MHz models that came out soon afterwards).
It seems to be booting a RISC-V VM, which is capable of running Linux. The hardware capabilities of the ESP32 are not very important (except being fast enough).
"Fast" is relative here. According to the youtube link OP posted in CNX comments, it takes 6 minutes to boot to shell. Still, better than two hours @dmitrygr's ARMv5-on-AVR emulator took.
Yeah, a 240 MHz 32-bit CPU does help compared to a 20 MHz 8-bitter :)
Xtensa can execute code out of RAM, though, so a JIT could be used here for a larger speedup. Maybe I'll build a RISC-V-to-Xtensa JIT some day when I'm bored.
> it takes two hours to boot up to a bash prompt, and four more to load up Ubuntu and login. If you want a Megahertz rating, good luck; the effective clock speed is about 6.5 kilohertz. While the worst Linux PC ever won’t win any races, its simple construction puts it within the reach of even the klutziest of hardware builders; the entire device is just a microcontroller, RAM, SD card, a few resistors, and some wire.
I don't know why, but this little kludge greatly pleases my sense of aesthetics.
I remember when I first learned of MySQL’s triggers I immediately thought that it might be fun to use them to implement a microcontroller or even a full computer emulator. A table with registers, a table for RAM, etc. If it was x86 or ARM or some such then I could compile Linux on it and then run MySQL inside of that!
Similar in the sense that both approaches essentially run Linux in a VM, virtualizing the typical hardware required, such as the MMU. In essence, the ESP32 version isn't much different, just faster because the core is faster.
Would be cool to get no-MMU Linux running on it. I've seen that run natively on STM32F7s.
The MMU is not really a requirement in this situation: a processor with a real MMU is emulated in software[1] on the ESP32 (this is the virtual machine referred to in the article and comments).
He said infeasible, not impossible. You can run Linux with no MMU by compiling binaries with static offsets into main memory, but it's not amazingly practical.
Well, the processor needs to be able to do the context switching (swap all registers, i.e. the visible state of the CPU, with those of another thread). You can obviously do this in software (emulate it), but that would be too slow for most practical purposes. Hence the question.
Yeah, that was its own system that had a lot of mismatches with how different OSes wanted to conceive of task switching.
Where Intel ended up was the xsave/xrstor family of instructions, which save and restore processor state according to a bitmask of high-level functional units. And that's really only because of how complex all the extensions to x86 have become over the years; a fresh architecture is probably better off manually writing everything with normal loads and stores until right around the iret equivalent. Dumps between the register file and memory show up on perf traces anyway, so it's generally a very optimized path in the CPU for general use.
The article says that it's not, and according to the reddit post, it seems to be using a JuiceVM port to ESP-IDF, allowing it to run on the Xtensa version.