There was definitely a rad-hardened version of the 8085 (similar to the 8080, and therefore to the Z80), which was used on the Sojourner rover (among various other NASA and ESA spacecraft). Seems like RISC processors were more common for this, though (looks like most relatively-recent NASA spacecraft - including pretty much all of NASA's Mars landers after Sojourner - use(d) rad-hardened POWER CPUs, e.g. the RAD6000 and RAD750).
Though the Sojourner rover used a rad-hardened 80C85.
"But if the collapse magnitude is right, then this project will change the course of our history, which makes it worth trying."
Let's say the jury is still out on that one? :D
Makes me wonder what possibilities become, er, possible if we up the computing power a few orders of magnitude to a Pi Zero W or Pi 4.
From what I understand it's fairly easy to use a Pi as an LTE router for longer ranges and WiFi for shorter ranges. I wonder whether, if the right microSD cards and hardware were stockpiled, one would be able to reconnect several communities in a mesh.
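For the WiFi half, something like hostapd gets you most of the way on a stock Pi. A minimal sketch of a hostapd.conf for one hypothetical node (the SSID and channel are placeholders, and the actual mesh routing between nodes - batman-adv, static routes, whatever - is a separate step not shown here):

    # /etc/hostapd/hostapd.conf - minimal access point on the Pi's radio
    interface=wlan0
    driver=nl80211
    # placeholder network name
    ssid=community-mesh
    # 2.4 GHz band, channel 6
    hw_mode=g
    channel=6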
Smart phones are a lot of things, but general purpose computers are not one of them.
As the author points out, probably useless, but still fascinating.
The primary design requirements for a standalone computer system in a post-* world are simplicity, maintainability, and debuggability. It must be possible for a single user to do _everything_ in situ. There are very few existing systems that meet all three of these criteria across the whole hardware-firmware-software stack, and modern technology companies are actively moving away from this.
At all levels this requires extensive and open documentation and implementations, and ideally a real standard.
The hardware level would probably need a complete rethink, and if you want good peripheral support (e.g. to be able to try to access whatever data device you come across) then you need a solution that doesn't require a subsystem kernel maintainer for everything, or you just give up on that. A potential fourth requirement here is a large supply of parts, since in most scenarios it is extremely unlikely that anyone will be able to get a fab working again for hundreds or thousands of years. Maybe radiation-hardened large-feature-size ICs or something like that. The alternative would be a zillion RPis (with some alternate data storage interface), in the hope that some of them survive and continue to work after hundreds of years, but this seems like a much riskier bet than actually engineering something to survive for a very long time. Above the IC level, the ability for someone to replace parts without special tooling beyond maybe a soldering iron also seems important.
At the software level there are two existing systems that might serve: one of the Smalltalks, or one of the lisps (my bias says Common Lisp, despite the warts). Assembly and C are just not a big enough lever for a single individual, and other things like Java seem to have been intentionally engineered to deprive individual users of power. The objective here is not to be fast; the objective is to retain access to computation at all, so that the knowledge of how to work with such systems is not lost. Also at the software level, the requirements pretty much preclude things like browsers, which are so monstrously complex that there is no hope an individual could ever maintain a legacy artifact (or probably even compile one of the monsters) for interpreting modern web documents.
I do not think that we can expect the current incentive structure around software and hardware to accidentally create something that can meet these requirements. If anything it is going in the other direction as large corporations can employ technology that can _only_ be maintained by large engineering teams. We are putting little computers in everything, but they are useless to anyone in a world without a network.
It is a stack machine; it has something like FORTH.
In which you can implement anything else, if you absolutely have to. Like some have done with another stack-oriented system here:
And then have some cybernetic monks preach the advantages of something like TRON applied to all of the above.
With an 8080 equivalent running a serial character-display terminal based on an oscilloscope CRT (1940s radar tech), you have an input/output device.
This leaves the main job of processing to another cpu, which could be 16-bit for arithmetic speed and efficiency. The late 70s, early 80s 8-bit machines were only underpowered because they were doing all of the video output using the same cpu. Separate computation from video generation and you get a much faster system.
8-bit cpus rarely needed an OS. They were really only capable of running single applications at a time. All an operating system does is separate hostile C code applications from each other. C is probably not the best starting point to reboot society using 8-bit systems.
Forth, or some derivative, might be better. Charles Moore's original 1968 listings for Forth on an IBM 1130 are available from here: https://github.com/ForthHub/discussion/issues/63
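To make the "big enough lever" point concrete: the essence of a Forth - a data stack, a dictionary of words, and a colon-definition compiler - is small enough for one person to hold entirely in their head. A toy sketch in Python (my own illustration of the shape of the thing, not Moore's 1130 design):

    # Toy Forth-ish interpreter: a stack, a dictionary of user words,
    # and a ":" ... ";" compiler. Just enough to show the shape.
    def forth(src, stack=None, words=None):
        stack, words = stack or [], words or {}
        tokens = src.split()
        i = 0
        while i < len(tokens):
            t = tokens[i]
            if t == ":":                        # compile: name body ;
                end = tokens.index(";", i)
                words[tokens[i + 1]] = tokens[i + 2:end]
                i = end
            elif t in words:                    # splice in the definition
                tokens[i:i + 1] = words[t]
                continue
            elif t == "+":   stack.append(stack.pop() + stack.pop())
            elif t == "*":   stack.append(stack.pop() * stack.pop())
            elif t == "dup": stack.append(stack[-1])
            elif t == ".":   print(stack.pop())
            else:            stack.append(int(t))
            i += 1
        return stack

    forth(": square dup * ;  12 square .")      # prints 144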
Remember also that mid-1970s microprocessors generally relied on a minicomputer (built from TTL) for their software and logic design. Go back another 10 years, to the 1965 PDP-8 minicomputer, and these were built from diode-transistor logic (DTL) - discrete diodes, transistors, resistors and capacitors. This sort of technology could possibly be rebooted more easily for a post-apocalypse society.
The original 12-bit PDP-8 contained 10,148 diodes, 1,409 transistors, 5,615 resistors, and 1,674 capacitors. See https://www.pdp8.net/straight8/functional_restore.shtml
Scale these figures by 1.33 (i.e. 16/12, for the wider word) and you have the approximate requirements for a 16-bit architecture: roughly 13,500 diodes, 1,900 transistors, 7,500 resistors and 2,200 capacitors.
Whilst over 50 years old, the PDP-8 could run BASIC at speeds not too dissimilar to the early 8-bit micros that appeared in 1976 - about 10 years later.
It used a modular construction - and if you did find yourself with an excess of diodes and transistors, the best approach might be to build a series of logic modules - loosely based on the 7400 series, but using DTL for simplicity. If you were to standardise on a footprint similar to a 40-pin DIP, you could probably recreate about 8 NAND gates in such a device (8 gates x 3 signal pins = 24 pins, leaving plenty of room for power and ground).
Some years ago I looked at the NAND to Tetris cpu, and worked out a bitslice design based entirely on 2-input NANDs. Each bitslice needed 80 NANDs, so a 16-bit machine would need 1280 gates. Memory would be difficult, but something could be implemented using shift registers. You could of course revert to storing charge on a CRT screen - the Williams tube, which formed the basis of the 32 x 32-bit (1 kilobit) memory on the Manchester Baby machine of 1948.
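For a flavour of what "everything from 2-input NANDs" looks like, here is a sketch (my own toy, not the NAND to Tetris design itself) that builds the classic 9-NAND full adder, chains 16 of them into a ripple-carry adder, and counts the gates along the way:

    GATES = 0

    def NAND(a, b):
        # the only primitive: everything below is built from this
        global GATES
        GATES += 1
        return 1 - (a & b)

    def full_adder(a, b, cin):
        # classic 9-NAND full adder
        t1 = NAND(a, b)
        t2 = NAND(a, t1)
        t3 = NAND(b, t1)
        axb = NAND(t2, t3)       # a XOR b (4 NANDs)
        t4 = NAND(axb, cin)
        t5 = NAND(axb, t4)
        t6 = NAND(cin, t4)
        s = NAND(t5, t6)         # sum = a XOR b XOR cin (4 more)
        cout = NAND(t1, t4)      # carry = (a AND b) OR (axb AND cin)
        return s, cout

    def add16(x, y):
        # 16 full adders in a ripple-carry chain
        carry, result = 0, 0
        for i in range(16):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= s << i
        return result

    print(add16(1234, 4321), "- using", GATES, "NANDs")  # 5555 - using 144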
Finally - never underestimate audio-frequency generation and storing signals as audio tones - something that cpus are good at. Possibly use a rotating magnetic drum for storage.
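This is the old cassette-tape trick: frequency-shift keying, one tone per bit value. A minimal sketch that writes bytes out as Kansas-City-style tones (1200 Hz = 0, 2400 Hz = 1, 300 baud are the classic cassette values; the framing here is my own simplification):

    import math, struct, wave

    RATE = 44100          # samples per second
    BAUD = 300            # bits per second

    def tone(freq, seconds):
        n = int(RATE * seconds)
        return [int(3000 * math.sin(2 * math.pi * freq * i / RATE))
                for i in range(n)]

    def encode_byte(b):
        # async framing: start bit (0), 8 data bits LSB-first, stop bit (1)
        bits = [0] + [(b >> i) & 1 for i in range(8)] + [1]
        samples = []
        for bit in bits:
            samples += tone(2400 if bit else 1200, 1 / BAUD)
        return samples

    samples = []
    for b in b"HELLO":
        samples += encode_byte(b)

    with wave.open("tape.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(RATE)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))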
In the summer of 1984, a friend and I, who both owned Sinclair ZX81s, set up a 1-way data link between one machine and the other across our college dorms - using an FM transmitter bug and an FM radio receiver - over a distance of 300 feet.
I'm thinking old phones, tablets, and portable computers will be more common. I keep several bootable USB drives which have lots of ebooks, audio books, videos, software, and games along with several old laptops/netbooks which were free. I also keep some of those files on microSD cards to make them accessible with tablets.
IMO collapse will be very boring, so lots of books, audio files, video games, and music would be nice to have, if they can be run off small off-grid solar setups.
Can you please follow the site guidelines when commenting here? They include: "Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents."