Hacker News: Collapse OS (collapseos.org)
752 points by spraak 11 days ago | 301 comments





I don't think just developing a Z80 operating system is enough. The whole ecosystem needs to be preserved.

In agriculture, we have the doomsday seed vault [0] for exactly this purpose. If we anticipate a collapse of the current economic system or society, I think we should build a doomsday computer vault that keeps everything we need to rebuild the computing industry in an underground bunker, in a controlled environment: 8080, Z80, and m68k chips, motherboards, logic chips, I/O controllers, ROM/RAM, generic electronic parts, soldering irons, oscilloscopes, logic analyzers, schematics, documentation, textbooks, software. We'd also keep some complete, standalone computer systems such as desktops and laptops, along with all the parts needed to service them. We'd also need to preserve the old semiconductor production lines around the world, although probably not in the same bunker. Even if we fail to build better systems, 8080s are already useful enough!

Meanwhile, in peacetime, we need to form a team of experts to draw up a roadmap for re-bootstrapping computing technology using parts from the bunker: a step-by-step plan that can be easily followed and executed.

[0] https://en.wikipedia.org/wiki/Svalbard_Global_Seed_Vault


> The whole ecosystem needs to be preserved.

Indeed. It takes a civilisation to build an iPhone.

I don't think people appreciate that even within the highest-tech manufacturing industries there is a lot of tacit knowledge. People shake their fists about "technology transfer" to China, and before that Japan; but even so it has taken them decades to reach parity. And that is with running, copyable examples and all the parts of an existing supply chain widely available. Similarly, the process for making a nuclear bomb can be written in a short paper, but few countries have successfully replicated it.

"Post-collapse recovery" and "technology transfer" are the same problem, except that post-collapse recovery is cribbing from a dead example rather than a live one and in much worse circumstances.

Collapse recovery is a fun little competence fantasy to play out in your own head. Like "the rapture" for atheists. But within our lifetimes, we have to put in the work to avoid the collapse.


> It takes a civilisation to build an iPhone.

The point here is that we don't need to build an iPhone; we only need a radio. Building an 8080 is much simpler: the USSR did it, East Germany did it, China did it, all around the same time and without too much difficulty. It would certainly be much harder if the current civilization collapsed, but I think the author doesn't anticipate a total collapse, just a breakdown of the current economic system, so it should be doable.

> Similarly the process for making a nuclear bomb can be written in a short paper, but few countries have successfully replicated it.

My understanding is that the physics of achieving the nuclear explosion itself is relatively straightforward. The real difficulties are producing the weapons-grade material and turning the explosion into a useful weapon, all under external sanctions and even sabotage.

Taiwan had a nuclear weapons program in the 1970s, and significant progress was made early on; if the U.S. hadn't discovered it and dismantled everything, it would have been interesting to see how it turned out.


> breakdown of the current economic system

As the author themselves admits, this requires a very narrow Goldilocks band of catastrophe. Mild enough that you still have electricity, but catastrophic enough that all the plants on this list are destroyed, rendered unusable, or embargoed from you? https://en.wikipedia.org/wiki/List_of_semiconductor_fabricat...

How is that going to work exactly?

No, it's a fantasy disaster, like Day of the Triffids and all the other John Wyndham style "cozy catastrophe" novels.

Real catastrophe is slow and grinding. Think "decline and fall of the Roman Empire": a multi-lifetime process. Or something like Venezuela, where crumbling power infrastructure took out their aluminium smelter. That caused it to freeze solid, which is unrecoverable without rebuilding the crucibles.


> but catastrophic enough that all the plants on this list are destroyed or rendered unusable

All the fabs on your list are fairly modern, and it is easy to see how they would be impossible to maintain in a Tainter-style collapse.

I would be looking into what you can do with ~1 um processes, which are currently accessible to amateurs. E.g. designs like https://news.ycombinator.com/item?id=11720289


Tainter-style collapse over what time period, though? One of his examples seems to be the Roman Empire, which depending on how you measure it took over a hundred years.

> ...East Germany did it

Not sure about the Soviet Union, but East Germany had to "import" complete production lines from West Germany in order to bootstrap its chip production. Of course, the West prohibited exporting such sensitive technology to the East, and there are some wild stories involving the secret service: setting up a proxy chip-manufacturing company in West Germany, shutting it down, and "losing" the production lines, which showed up a few months later in East Germany.

But there will be plenty of Z80 and other 8-bit chips to scavenge after a civilizational collapse, and those chips are simple and slow enough to be used in computers built from breadboards and wires.


Are there any good books/podcasts/documentaries about this? Sounds fascinating.

I don't have any really watertight first-hand info, unfortunately, only bits and pieces I've stumbled over in German internet forums while researching stuff for emulator coding.

Some of the stories of how Western machinery was procured are mentioned from time to time in articles like this one:

https://oiger.de/2011/08/26/die-teure-jagd-auf-den-megabit-c...

It seems to have been less "broad sweep" than I suggested above, though; more like new products required specialized machines which were only produced by a handful of Western manufacturers, and which were under the CoCom embargo. And the "grey-importing" of those machines went through various shadow companies in different countries to cover everything up.



> “A lot of water went into the start of the tunnel and then it froze to ice, so it was like a glacier when you went in,” she told the Guardian. Fortunately, the meltwater did not reach the vault itself, the ice has been hacked out, and the precious seeds remain safe for now at the required storage temperature of -18C.

Also,

"The vault managers are now taking precautions, including major work to waterproof the 100m-long tunnel into the mountain and digging trenches into the mountainside to channel meltwater and rain away. They have also removed electrical equipment from the tunnel that produced some heat and installed pumps in the vault itself in case of a future flood."


Oh, no. I can’t believe it’s gotten this bad. I’m amazed we haven’t done much in the way of climate change control. It’s going to be too late very, very soon.

> It’s going to be too late very, very soon.

Sorry to ruin this for you, but it is already too late. We are already in the process of damage control.


Climate models indicate it's not too late to avoid a 2°C average temperature increase by 2100 if our energy mix and heavy industries (concrete, steel, and fertilizers) move away from fossil fuels. This takes political conviction and government policy, which in turn require a correct understanding of the situation.

"It's too late" is a dangerous statement because it suggests actions no longer matter. And if no action is taken, several climate models project >4°C average warming. The majority of the planet would become uninhabitable at current human density levels, with billions of people having to relocate and/or die.


Due to delayed effects, we aren't even feeling the consequences of current emissions yet, only the CO2 we let out while watching The Terminator.

Isn't our best course of action to achieve technological progress and fix the situation instead of 'preventing' it?

Like building dikes around low-lying areas, rather than trying to coordinate a worldwide slowdown of economic activity, which runs into the worst outcome of the prisoner's dilemma.


How was this not built to be waterproof?!

It's extremely difficult to build waterproof subterranean buildings, especially if they're built into permafrost (which these days seems to be more like tempofrost). This very long but fascinating video discusses it: https://www.youtube.com/watch?v=nphxoUxSvgY

https://www.opensourceecology.org/

This website got posted to HN in the past, definitely something that should be part of the attempt.


This is already being done in some places, e.g. libraries and museums; it's called Digital Preservation, although it's for normal curatorial purposes rather than doomsday scenarios.

While the focus is on the data, it also necessarily involves access to original hardware at times, which is additionally also often stored for curatorial purposes in its own right.

https://en.m.wikipedia.org/wiki/Digital_preservation


I'm pretty sure the internet, PDFs, etc. fail heavily next to the printed form in the case of a complete collapse. As long as people can read, books should be good for this purpose.

The thing is, in the case of a complete collapse, "the printed form" will likely be the last thing we'll be worrying about.

I'd like to see a log being kept in orbit. Like every 10 years it's rotated (FIFO, oldest comes down, new one is launched) with a large delta so that it might survive our current civilization in case of global / climate craziness.

Most likely the source of the end of the world is going to be the Sun throwing an X-class solar flare at us and just frying everything in its path.

What you need is a super-deep immortal vault that isn't anywhere near any known fault lines, whose ingress point is sufficiently above ground, can be accessed safely if under water, and supplies its own electricity for thousands of years.

Everything outside of the facility will have to be electrically neutral up to extreme voltages (due to the possibility of plasma storms that make the worst thunderstorms on Earth look like a nice day outside).


Little do we know, there are five of them in existence already, by several ancient civilizations :)

I kid, but the more secure a vault is, the more irretrievable it is, which only makes it useful to aliens, not us.


I remember reading a worldbuilding stack exchange question about what would be left of mankind in millions of years, were we to disappear today.

The unanimous response was "not much". Mostly anomalously concentrated resources. So... if they are right, even though we have no reason to believe there was a developed civilization before us, it's not that crazy to merely entertain the possibility.


What about burying something on the moon, with monoliths and radio sources marking its location?

Maybe we already did that in a previous civilization, but lost the ability to recognize the beacons.

The Sentinel anyone?

Isn't the moon even less protected from solar flares? How deep would you need to build something on the moon for it to be protected, compared to on Earth?

Gravity is a pretty reliable signal; it doesn't decay much on a million-year scale, and it doesn't get obscured much either.

Not to imply those were from unnatural processes, more to indicate that basic spacefaring technology implies the ability to detect such things: https://en.wikipedia.org/wiki/Mass_concentration_(astronomy)


How would we get at the information if we don't have the technology to launch a rocket/spaceship to retrieve it?

Apparently they would fall, and as long as we still have the required technology and resources, we would send updated versions regularly.

The satellites' orbits would naturally decay to the point of reentry. I'm not sure of the engineering, but it might be possible to do a controlled descent to a specific place for survivors to rendezvous with.

I could see that causing a huge poo-storm. If our supply chains collapse and all this knowledge is lost... but, we know when and where the knowledge will be returned to the Earth.... then that location is going to be ground zero for a lot of turmoil. Everyone will know that whoever gets that returned knowledge will have dominion.

If it’s just data on them, why not VERY MANY of them, so that they remain in living legend? With slightly different orbits so they deorbit all over the globe, a number per year for decades or centuries.

Controlled descent is not necessary if they are built to last.


Thrusters with some sort of controlled descent if it doesn't receive a signal within some period of time?

Ham Radio.

like graveyard orbit

Yes I agree, a Z80 is not enough.

Especially since the people who restart from a Z80 will hit technological walls as their skills and needs grow. We faced the same in the '80s: going from 8 bits to 16, then 32; having memory-addressing problems; being pushed to new and barely compatible architectures; having programs to rewrite...

Then why not leave behind a technology, still simple, that would spare them most of these issues?

I feel an 8-bit data bus and a 32-bit address bus would be a good way to get long-lived programs, with no hard edges when extending memory, while keeping processors and main boards not too complex. The address bus does not need to be fully wired in the first processor versions, so it can scale over time to more complex processors as skills and needs grow.
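A toy sketch of that scaling idea (my own illustration, not from the thread): the CPU always computes full 32-bit addresses, but a given board only wires up the lower N address lines, so the same program runs unchanged on boards with more lines wired, as long as it fits in the wired range.

```python
# Sketch: a memory bus where only the lower N address lines are wired.
# Addresses above the wired range simply wrap (alias), as they would
# on real hardware with unconnected address pins.
class Memory:
    def __init__(self, wired_address_lines):
        self.mask = (1 << wired_address_lines) - 1
        self.cells = {}

    def write(self, addr, byte):
        self.cells[addr & self.mask] = byte & 0xFF  # 8-bit data bus

    def read(self, addr):
        return self.cells.get(addr & self.mask, 0)

small = Memory(wired_address_lines=20)  # early board, 1 MB reachable
big = Memory(wired_address_lines=32)    # later board, 4 GB reachable

for mem in (small, big):
    mem.write(0x00042000, 0x5A)         # same 32-bit program address
    assert mem.read(0x00042000) == 0x5A
```

The point of the sketch is that the program's address space never changes; only the physically reachable subset grows with each processor generation.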

Besides, it would be smart to leave in a vault a kind of FPGA technology, with the sources needed to flash the FPGA components. Then there's no need to create a production line for many different integrated components: only one output, and the components are specialized by flashing them.

Indeed even microprocessors can be flashed on an FPGA.

Well, just ideas...

FPGA: https://en.wikipedia.org/wiki/Field-programmable_gate_array

Low-tech CPU to support 32-bit code: https://en.wikipedia.org/wiki/Motorola_68008 (8-bit data bus, 20 address lines, 32-bit programs, ~70,000 transistors)

Microprocessor on an FPGA: https://en.wikipedia.org/wiki/Soft_microprocessor


I like the idea. But I ask (not being annoying), do we really need it? Seems like having all of that more distributed in the hands of users would be a better protection against unavailability.

Usually the idea behind a doomsday vault is that you're keeping everything securely in one place in case there is widespread disaster that sufficiently wipes out the distributed knowledge. They're not mutually exclusive.

And to expand on this: ideally we wouldn't have just one repository of hardware and knowledge of how to use/recreate that hardware, just like how ideally we wouldn't have just one repository of seeds and knowledge of how to plant/cultivate those seeds. We should be aiming to have as many redundant Svalbard-style doomsday vaults as possible, for all sorts of things (seeds, computers, medicine, you name it).

Gotcha. Thank you for clarifying.

I think about this all the time. A technology vault to take people from the water wheel to the Z80 would be smart. Expensive and I hope never needed, but there is a lot at risk.

And generators (or maybe solar cells). For that vault to be useful, you have to be able to generate electricity.

The embryo of this idea is in Software Heritage [0]. It's a great project, well worth supporting. Of course this doesn't include the requirement to store and maintain hardware, and even though we can store well-written documents detailing how to build these sorts of machines from scratch, it might not be feasible for post-disaster societies to do this.

[0] https://www.softwareheritage.org/


If you like Unix, Cromix was built on the Z80.

I recommend also checking out Fuzix, a modern small-platform UNIX-like which targets Z80 among others

> The whole ecosystem needs to be preserved.

I disagree. If humanity collapses, and a new fledgling civilisation grows out of the ashes, then maybe we should let them follow their own path, and make their own discoveries and mistakes.

It's massive hubris to assume that future generations can only survive if they have access to our knowledge.

Many civilisations have fallen only to be replaced by newer betters ones, even when they've had no previous records to refer to.

We're just a stepping stone along the path of human evolution, we are no greater or lesser than the stones before us, or the stones that come after us.


This is one of my concerns. If there is a large-scale collapse, that is a very concrete statement of "what you tried did not work." Preserving as much of that past as possible would be a mistake; any large collapse should be an opportunity for rebirth. We already drag old mistakes along in the technology industry: repeated attempts at a fresh start keep losing to a competitor offering to hack in backwards compatibility, making initial deployment easier.

A collapse would be the perfect and likely sole opportunity to ditch the mistakes of the past and forge something completely new with the benefit of a "foresight" forged from hindsight.


Technology evolves so fast that a tech vault would become obsolete very quickly. People in the future may not know how to deal with such obsolete technology. Technology also degrades over time; unlike seeds, you can't "freeze" technology to keep it from degrading. At least I don't know how.

Sorry if this is a naive question, but if the assumption is "doomsday", how would one know how to access vault and that it even exists?

The vault will have a fingerprint sensor, which will be stuck on an incomplete online firmware update.

And people are baffled that the 2FA SMS to reset it cannot be delivered.

And what about the opposite issue: too many people knowing, and entering into a greedy and bloody competition?

It reminds me of an '80s cartoon where Spaniards, aliens, and three children chase a golden city and destroy it out of greed when they find it.


Don't we have many programs on archive.org?

I think some sort of retro low-performance OS should start with NeXTSTEP.

Ryzen processors have 80MB of L3 cache! IIRC that may have been twice what was needed to run the colorized versions of NeXT.

A lot of "man the old days were all we needed" recollections of Windows and DOS forget how crappy those OSes were...

But NeXT? That was basically a modern OS, with a bit less anti-aliasing. X Window systems STILL look horrible compared to it.

It fits in L3 cache. 5-10 NeXT hard disks fit in RAM!

It had a Web 1.0 browser, TCP/IP networking, all the productivity apps. It had DOOM.

Sure, Mach the microkernel was buggy and leaked memory, but kernels... there are lots of kernels now.

I think it would be a great project to start with NeXTSTEP and carefully build it back up with good security and multitasking.

It would be fun to run NeXTSTEP on bare metal with RAMdisks (spare a processor to do the writes to SSD), and compare with other systems, see if it feels ludicrously fast or not.


NeXTSTEP basically became Mac OS X when Apple merged the classic Mac OS with it.

AROS is the next best OS to use, as it has a low memory footprint and can run on x86 systems: http://aros.sourceforge.net/

If you want Windows try ReactOS: https://reactos.org/

OS/2 try OSFree: http://osfree.org/

BeOS try HaikuOS: https://www.haiku-os.org/

All are low memory OSes.


Couldn't get ReactOS to boot on any hardware that I own. HaikuOS boots with GFX disabled, but is not anywhere near stable.

Will try the other two, I guess. Not getting my hopes up.


> HaikuOS boots with GFX disabled, but is not anywhere near stable.

Can you elaborate? It's been fairly stable for me.

P.S. It's just Haiku, not HaikuOS


I get frequent freezes, followed by a kernel panic. Sound, network, memory management; every time it's different.

Try running it in a virtual machine.

OSX is horribly bloated already. I want to add in things without the bloat.

Never heard of AROS...

ReactOS is bloated by definition of replicating Windows, isn't it? It probably uses less than mainline Windows, but it's just my opinion that the 1990-1997 period preceded some of the real bloat added after the massive Moore's-law gains of the GHz races.

OS/2... seems obscure to me, but I never used it. NeXT and BeOS had revolutionary capabilities; I think OS/2 was basically just preemptive multitasking for Windows. Is that right?

BeOS, which I've never used, is probably also a good starting spot. HaikuOS probably has more oomph behind it community-wise.


> ReactOS is bloated by definition of replicating Windows

That is a very wrong assumption!


Practically speaking, you'd be better off with a stripped-down Linux or BSD. Linux from the NeXTSTEP era (early '90s) could also run in a few megs of RAM; my first Linux box had 4 megs. I worked at a few early ISPs that tried setting up operations on NeXT boxes. They all moved to Solaris, FreeBSD, or Linux, because though NeXT looked pretty, many of the APIs were a bit off and you'd have much better luck building open-source code in other environments.

Or System 7 Unix: runs fine on a 16-bit processor, has a proven history of being easy to port, is a fully functional operating system, and would be pretty easy to retrofit with some sort of networking.

There are millions upon millions of x86 computers, and free software to run on them from any era. That should suffice for a collapsian civilization, unless it's full-on Mad Max post-apocalyptic we're talking about.

Including Unix System 7; it's been ported.

Though I suspect you're right, PC DOS is very powerful and allows you to do a bunch of stuff.


You're probably talking about Unix version 7: https://en.wikipedia.org/wiki/Version_7_Unix

You really want to go back 40 years to the dumb terminal era?


If I'm running it on a 16-bit system, yes. I could build a teletype in a capable machinist's shop.

But the NeXT UI was (IMO) the best UI of that stage of processing power.

> Ryzen processors have 80MB of L3 cache! IIRC that may have been twice what was needed to run the colorized versions of NeXT.

How long will 7nm chips work before electromigration destroys them?

https://semiengineering.com/chip-aging-becomes-design-proble...


> Ryzen processors have 80MB of L3 cache

The new Ryzen "only" has 64 MB of L3 cache, and it's split into 16 MB chunks per 4 cores. You can add in the L2 cache, since it's exclusive (at the cost of more access fragmentation), to get 18 MB/72 MB depending on how you want to count it.

The new Epyc is the same design, just more cores so you get 256 MB. Still "only" 16 MB accessible per group of 4 cores though.
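Back-of-envelope arithmetic behind those figures (taking the per-CCX numbers from the comment above as given, not verified against a datasheet):

```python
# Cache totals for the split-L3 layout described above.
CCX_L3_MB = 16     # L3 slice shared by one 4-core CCX
L2_PER_CCX_MB = 2  # 4 cores x 512 KB exclusive L2 per CCX (assumed)

ryzen_ccx, epyc_ccx = 4, 16

print(CCX_L3_MB * ryzen_ccx)      # total L3 on the Ryzen part: 64 MB
print(CCX_L3_MB * epyc_ccx)       # total L3 on the Epyc part: 256 MB

# Cache one core can reach without leaving its CCX, counting L2:
print(CCX_L3_MB + L2_PER_CCX_MB)  # 18 MB

# Counting every L2 + L3 slice on the Ryzen die:
print((CCX_L3_MB + L2_PER_CCX_MB) * ryzen_ccx)  # 72 MB
```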


Having been there when NeXTSTEP was new, and having done my graduation project porting a particle-visualisation engine from NeXTSTEP to Windows, seeing it called a retro low-performance OS feels hilarious.

Interesting idea. But okay, how? Could you scribble down a proof-of-concept QEMU configuration to achieve it? Also, where would one get NeXTSTEP compiled for AMD64?

Decompile the old kernel? Yeah, definitely above my modest pay grade; and OpenStep was created to be more portable (but probably not open in the OSS sense). Hmm, there is GNUstep.

You are correct that one can't snap their fingers and create the community that even FreeBSD has. The last big corporate-sponsor opportunity for this was the infant-smartphone era of the Palm Pre (they owned the BeOS IP at that point, I think) and early Android.

Sigh, that reminds me that BeOS should have been the foundation of OS X, if not for the exorbitant buyout cost they insisted on.




Isn't that what Haiku OS is doing?

Haiku targets BeOS compatibility.

A neat idea!

I think we could also get started now. Not necessarily de-escalating tech, but realizing that the fundamental supply of newer, more powerful chips might not last, even with a shift to more plentiful supplies of rare-earth metals, given our need to get off fossil fuels fast. I think it might be useful in the more immediate term to lock in the minimum set of features that make the web and Internet useful, then distribute that as widely as possible on low-power, commodity platforms with resilient networks that could survive superstorms knocking out huge swaths of devices in one fell swoop.

Low-power p2p protocols, mesh networking, recycling devices, component-platforms that allow scavenging and make-shift repairs, etc.

Until we can solve the green energy problem it might be nice to know that even if your community gets hit with a storm or flood, it's still possible to restore and maintain network services rapidly in the aftermath. Simply being able to send a message to someone would be a big deal.


" Simply being able to send a message to someone would be a big deal. "

The simple way would be radio and Morse code.
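Morse really is about the lowest-tech messaging layer imaginable; the whole code fits in a small lookup table. A minimal encoder sketch (my own illustration; conventions like "/" between words are a common amateur-radio transcription style, not a standard API):

```python
# International Morse code table (letters and digits only).
MORSE = {
    "A": ".-",   "B": "-...", "C": "-.-.", "D": "-..",  "E": ".",
    "F": "..-.", "G": "--.",  "H": "....", "I": "..",   "J": ".---",
    "K": "-.-",  "L": ".-..", "M": "--",   "N": "-.",   "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.",  "S": "...",  "T": "-",
    "U": "..-",  "V": "...-", "W": ".--",  "X": "-..-", "Y": "-.--",
    "Z": "--..", "0": "-----", "1": ".----", "2": "..---", "3": "...--",
    "4": "....-", "5": ".....", "6": "-....", "7": "--...", "8": "---..",
    "9": "----.",
}

def to_morse(text):
    """Encode a message: spaces separate letters, ' / ' separates words."""
    words = text.upper().split()
    return " / ".join(
        " ".join(MORSE[c] for c in word if c in MORSE) for word in words
    )

print(to_morse("SOS"))  # ... --- ...
```

A hand key, a simple transmitter, and this table on paper would be enough to move messages between communities.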



I think there is a business model for different reasons.

As web client tech stabilizes and telecom regulatory rollback continues, there may be an opportunity to land localized solutions for all sorts of different purposes.


I don’t see large opportunities in localized “web tech” (which I’ll read as HTTP-based) businesses.

The whole reason the web works for business is that you can give something away to millions of people for ad revenue or sell something to very large groups for small amounts of money.

Localization reduces your customer base. The price a business has to charge would be higher. This is on top of the adoption problem.


That’s one aspect of the business, but you also have companies like NetSuite or Intuit selling general ledgers and similar solutions, and those lines of business don’t really benefit from that scale.

There might also be a business, although an altruistic one, in helping people set up low-fi data-centers and edge-network nodes on commodity, low-power hardware.

I've been sort of half-assedly working on a novel about a post-tech future (100+ years after apocalypse), where a handful of people retained some technology by virtue of having stored information about it in a manner that survived, along with a tradition of scavenging and trading and a sort of religion based on maintaining it in a working state. So this is a fun read and thought experiment, even if an 8-bit computer is probably not my highest priority when thinking, "when the shit hits the fan, as seems more likely today than it did five years ago, what do I want to have in my survival kit?"

One of the questions I keep coming back to for such a scenario, and still haven't come up with a great answer for, is: how does someone living in a world without the ability to manufacture a computer still have computers that work 100+ years after the last one was made? Even manufacturing transistors without modern methods is non-trivial. Will a Z80 last 100+ years? Maybe, if it's kept dry and not exposed to anything corrosive. I've got a Commodore 64 that's ~40 years old and still works, so 100 years seems reachable, but there have to be special circumstances to get to that "post-tech" world (though I guess in a post-apocalyptic world, the value of computers would be seen as minimal for a few years while survival is the only concern, so simply forgetting could be enough).


My novel idea (feel free to use it!) is about someone taking a 1980s computer back to the Second World War. It's used by the Allies to decrypt Ultra intelligence and is treated like a kind of holy relic - only a tiny set of "high priests" are allowed near it, fewer still can touch it, and because of its importance they go to extraordinary lengths to ensure it can never be damaged by anything from bombs to power glitches. Think a Commodore 64 in a ridiculous white room.

But the book would be more about the consequences of this - do they eventually take the thing apart and jump-start a silicon chip revolution in the 1950s, or (more likely I think) does the government destroy the machine as the UK government did to the Bletchley machines after WWII, and because there's no ground-up computer theory does it set back computing for decades?


Yeah, no. I've handled parts of Whirlwind[1], a vacuum tube machine from just post-WWII, and the gap from that to a C-64 or any other circa-1980 machine is just too great. They were using discrete wiring, resistors and wires soldered to the bases of the vacuum tubes. The Whirlwind was the first machine to use core memory, and the 4K core memory unit is a box about the size of a phone booth. I don't know if PCBs existed before 1950 but if they did, they were certainly single-sided.

So now ask somebody really smart in that technology, like, say, Jay Forrester[2], who had just finished inventing core memory, to analyze this magic beige plastic box. He could probably recognize that the PCB provided connectivity between parts, but what are the parts, these little flat plastic tiles? I don't think it would be possible to work out from first principles what the functional contents of a DRAM chip are, let alone the CPU. Even if they X-rayed it, supposing they had X-ray tech with enough resolution to resolve a chip, how could they figure out that those little blobs are transistors? Transistors hadn't been invented yet!

I think they'd have to concede this is "sufficiently advanced" tech, in Arthur Clarke's phrase, to be indistinguishable from magic.

[1] https://en.wikipedia.org/wiki/Whirlwind_I

[2] https://en.wikipedia.org/wiki/Jay_Wright_Forrester


I don't buy it; a C64 is simple enough that you could work out the functionality of most components by trial and error and then work back to first principles. You'd measure 5V TTL, and under a scope you'd see the 1 MHz binary signal. From there, 74-series chips on the board would probably be the first to be identified, simply based on inputs and outputs. And once you did that, and knew that this was a NOR gate or whatever, you'd pop the top off, look at it under a microscope, and start to work back from your knowledge that this had digital logic; you'd figure out what the _function_ of a transistor was even if you didn't know what it was.

The RAMs and ROMs would be fairly trivial to figure out, as well.

You might not learn the manufacturing process -- that really did take a couple decades of material science and physics advances. But the principles of the machine would be clear. And then you could take the knowledge of that and scale it to the electronics components available in the era. You'd definitely have a head start simply knowing that these things were _possible_, and getting a boost on knowing how a computer could be structured.
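The "identify chips by inputs and outputs" step above is just black-box truth-table matching, which can be sketched in a few lines (my own toy illustration; the gate list and the `mystery` stand-in are invented for the example):

```python
# Identify an unknown 2-input logic "black box" by probing every
# input combination and matching the observed truth table against
# known gate functions.
from itertools import product

KNOWN_GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
    "XOR":  lambda a, b: a ^ b,
}

def truth_table(fn):
    """Probe fn with all four input combinations (00, 01, 10, 11)."""
    return tuple(fn(a, b) for a, b in product((0, 1), repeat=2))

def identify(black_box):
    """Return the names of known gates matching the observed behavior."""
    observed = truth_table(black_box)
    return [name for name, fn in KNOWN_GATES.items()
            if truth_table(fn) == observed]

mystery = lambda a, b: 1 - (a | b)  # stand-in for a probed 74xx pin pair
print(identify(mystery))            # ['NOR']
```

A 1940s engineer with a signal source and a meter could do exactly this by hand; the combinatorics only get hard for wide buses and stateful parts, which is where the CPU and RAM resist this approach.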


They can probably figure out that the little plastic things either store data or perform computation. By process of elimination since wires, resistors, capacitors, PCBs can all be analyzed for their properties.

Given that knowledge they can try breaking those pieces of plastic apart to see that it's a housing over some sort of ceramic core. Using spectroscopy and chemistry you can figure out what that core is made out of. Now you know what mix of chemicals allows for really high density data storage and computation.

Using x-rays and microscopes they can figure out that the little ceramic die has some sort of structure etched on it. Maybe remove tiny pieces to see what different parts/layers are chemically composed of.

Now they know that there's something interesting about certain elements deposited on top of silicon using some sort of etching approach. Early transistor research was already well along (and had been patented already in the 20s) so it's likely they would have made the connection. Given all that you can start brute forcing industries and ideas around those materials.


They would see the "© 1982" on a chip and although it would be incredibly futuristic (35+ years in the future!), would at least know it was likely to be created by humans. Whether they could work out how on earth you place such incredibly tiny components onto a sliver of silicon is interesting. If the person taking the computer back in time mentioned the word "photolithography" I suspect they would have been able to make a pretty good guess.

I don't think there would be many copyright dates on the chips. They might think that Texas ruled the world from the TI logo being on everything, though.

Here's a high res picture of the C64 PCB, where you can see the markings on the chips: https://myoldcomputer.nl/wp-content/uploads/2015/11/board-32...

You can see both copyright dates, and plenty of other English text. While in 1940 this would have represented incredible futuristic technology, it's pretty obviously made by humans and not a piece of alien magic. It also has components like resistors and capacitors with markings which would have been immediately obvious to 1940s electronics experts.


I'm pretty sure you're wrong. People are very good at pattern recognition. You don't need to understand the physics to check lots of combinations of inputs and deduce what this black box does.
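That brute-force approach is easy to sketch: feed a mystery two-input gate every input combination and match the observed truth table against known gates. A toy illustration in Python (the lambda standing in for probing a real chip's pins):

```python
# Identify an unknown 2-input logic gate by exhaustively probing it --
# the "check lots of combinations of inputs" approach.

KNOWN_GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
    "XOR":  lambda a, b: a ^ b,
}

def truth_table(gate):
    """Probe the gate with all four input combinations."""
    return tuple(gate(a, b) for a in (0, 1) for b in (0, 1))

def identify(mystery):
    """Match the mystery gate's truth table against the known ones."""
    observed = truth_table(mystery)
    for name, gate in KNOWN_GATES.items():
        if truth_table(gate) == observed:
            return name
    return "unknown"

# A 7402-style NOR gate, pretending we don't know what it is:
print(identify(lambda a, b: 0 if (a or b) else 1))  # NOR
```

With four probes per gate the five common gates all have distinct truth tables, so one pass is enough to tell them apart.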

I would read this book. Fictional [alternative] history is always fun to read for me.

If I may recommend the book "Stars' Reach: A Novel of the Deindustrial Future" by one John Michael Greer, maybe we can see this idea from a different perspective.

From Greer's point of view, the factors that make today's hardware brittle are not technical, but economic. Corporations have to make electronics at a profit, and at a price point that is accessible to the average working-class citizen. This business model would not be sustainable in either a fast-collapse or slow-collapse scenario.

Instead, in the novel, governments take over the tech industry sometime in the second half of the 21st century, and treat it as a strategic resource in their struggle not to be left out in the global musical-chairs game of climate change + resource depletion. They run it at a loss, and put the best minds they can spare to the task of making a computing infrastructure that is built to last.

By the 25th century, which is when the novel's events take place, humanity has lost the ability to manufacture electronics, but computers built 350 years ago are kept in working order by a cadre of highly trained specialists (most of whom have the skills of a Geek Squad employee, but still). Common people have maybe heard some wildly inaccurate legend about robots or computers. Wealthy individuals are probably better informed but still cannot own one of those at any price. The only computers depicted or spoken about are US government property operated at US military facilities (or maybe there was one at the Library of Congress; I don't really recall).

There's one post-collapse hacker in the novel, a secondary character that is part of the protagonist's crew. The author is not an engineering type and dances around the actual skills of this guy, but I'd say he seems able to use a debugger/hex editor and read binaries. His greatest achievement, though, is to fix and set up an ancient printer and recover documents from a disk that was "erased" but not wiped clean.


Why is this dead? I've read the blog of the author in the past and found it to be inspiring, or at least interesting, because his short stories did the "what could be different?" thing very well.

(edit: wording)


IMHO you could be using the same chips for a surprisingly long time - you're likely to need some electronics maintained e.g. replacing capacitors, but if you really need to get some computing power for a 2100 post-apocalyptic scenario where there's just a few million people, then scavenging working chips from random gadgets has a lot of potential. E.g. a hard drive controller or a kids toy may be running on something that was good enough for a CPU twenty years ago.

Yeah, part of keeping old computers (and synthesizers, another interest of mine) running is re-capping. Modern caps last longer than the stuff from the 70s and early 80s, but may still need consideration. I don't actually know much about the longevity of modern caps...worth some research.

A Raspberry Pi 4 runs at 1.5 gigahertz. Madness! You're in a postapocalyptic world and you find someone's drawer of Raspberry Pi side projects: a solar panel, a low-power screen...

... you find a controller ... you find their emulation raspberry pi. ... all of a sudden, the world isn't as desolate.


... only to find it won't boot, as the OS was on some kind of SSD storage that died after ~2-3 years without being powered on. :(

Ben Eater's YouTube channel (https://www.youtube.com/channel/UCS0N5baNlQWJCUrhCEo8WlA) shows the step-by-step process of building an 8-bit computer, running a display and sending data over a network from fundamental components. He does use some manufactured stuff like breadboards, timing crystals and ICs, but it's still pretty cool stuff. Building computers from raw minerals would be pretty tough.

For people interested in this sort of thing, I can recommend the Nand2Tetris Courses[1] (also on Coursera).

They basically walk you through assembling and programming a full CPU from nothing but NAND-gates in a hardware description language, and in the second part even adding a compiler and higher-level language to the stack.

1. https://www.nand2tetris.org/


You might enjoy some of the Warhammer 40k lore; enough time has passed that technology is often literally indistinguishable from magic. A cult of "tech priests" are the only people who know how to speak to the souls that are inside the machines. It's heavily implied that they're not actually souls, of course, but that's how it's described to most people, as it's easier to understand.

Computers could give you better chances of survival. In a shelter/bunker, say, you can have a control center where you can monitor sensors and manage databases. These tasks don't require a computer, but one saves you massive amounts of time to do something else.

The core premise of the story is that the people who have tech have access to a copy of WikiPedia, or some portion of it, which is like a super power in a post-tech world. Even if it is only used periodically (to prevent wear and tear on the computer), it would still be incredibly valuable to have an oracle that can answer questions like "how to nixtamalize corn?" or "how to preserve food without refrigeration" in a world without grocery stores.

This, of course, presumes libraries are also mostly gone, since you don't need WikiPedia if you have a library.


Well, honestly, I think for that type of knowledge, Wikipedia is actually a very bad starting point. It could have become something like that, but ... I blame Deletionism, and more importantly, the general aversion to "know how" pages. Yes, you could relearn math from Wikipedia, but I'm not sure there's enough stuff in there that somebody who didn't already know how would be able to recreate half-way modern antibiotics, or an old CRT TV tube, let alone an SEM etc.

While I agree that WikiPedia is unnecessarily restrictive in their coverage (particularly with regard to how to do things), I think there's a lot of value there, too...especially in a world that has lost access to anything more in-depth. I mean, I can go to WikiPedia and figure out how ancient peoples cooled and heated their homes, handled irrigation, preserved foods without refrigeration, what plants are edible, what plants grow in what regions (though this would be thrown off by a climate catastrophe), how to render fat for storage, how to make rubber, how to smelt iron, how to build a boat from naturally available supplies, how to make cloth water resistant, natural remedies, etc. While it's not a "how to" guide for any of these things, if you can read, look at the diagrams, and follow the references within WikiPedia, you can figure it out with some trial and error.

The premise isn't that the folks with WikiPedia can rebuild modern society. It's that they literally can't (even if they had better knowledge resources), but would still have a survival advantage from having a little bit of the old knowledge. The fact is that if we lose our modern society, we'll never be able to build it up again. We've dug up all of the easily accessible resources, already. Scavenging from the good old days is the best any post-apocalyptic society can hope for, as bleak as that sounds.


Now I'm suddenly tempted to write up some kind of program that'll automatically archive not just the Wikipedia pages themselves ¹, but also their citations and external links. Or maybe (probably) someone else has already written such a program.

¹ I vaguely recall Wikipedia already provides some way to download all pages in bulk, but I can't seem to find it (if it even exists anymore, or if it ever actually existed instead of me just hallucinating it)
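A minimal sketch of the link-harvesting half of that idea: given a saved Wikipedia page's HTML, collect every external URL so those citations can be fetched and archived alongside the article (the actual fetching and storage are left out, and the filtering rule is a simplifying assumption):

```python
# Extract external links from saved Wikipedia HTML so the citations
# can be archived too. Only the extraction step is shown; downloading
# each link would come next.
from html.parser import HTMLParser

class ExternalLinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            # Internal wiki links are relative; citations and external
            # references use absolute URLs.
            if href.startswith(("http://", "https://")):
                self.links.append(href)

def external_links(html):
    collector = ExternalLinkCollector()
    collector.feed(html)
    return collector.links

sample = '<a href="/wiki/Z80">internal</a> <a href="https://example.org/ref1">[1]</a>'
print(external_links(sample))  # ['https://example.org/ref1']
```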



Nice, thanks!

There might be a place for a WikiKnowHow...

"since you don't need WikiPedia if you have a library."

I have libraries and WikiPedia is still pretty useful. Searchability and portability / mobility would be pretty valuable attributes in this type of scenario.


Or you manage to repair an advanced drone that can do scavenging in dangerously irradiated areas, for example (or just a common drone that can scout for you). Solar panels will be valuable... but wind is easy to build and generate from as well.

In general, the scenario is that the whole world is broken down, but full of tech. So many machines to get back to working order. Machines beat muscle at scale.


> that can do scavenging in dangerously irradiated areas, for example

Most robots don't work in areas of heavy radiation, because radiation damages electronics.


You wouldn't want to scavenge something from an area which is so irradiated that it fries electronics.

Could a Z80 do these things effectively? Honest question.

Yes, sensor monitoring (movement, heat, wind) is just simple IO handling, and I'm sure managing a SQLite database would be within its capabilities. If a C64 is capable of all of this then it can be done.

Even chat is possible for example between two buildings where radio might not penetrate the walls. But sure, at that point if you can lay down cables, then it's simpler to just build a telephone.
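The sensor-monitoring part really is just a polling loop. A minimal sketch (on a Z80 the reads would be IN instructions against I/O ports; here `read_sensor` and the threshold values are stand-ins):

```python
# Minimal sensor-monitoring loop: poll each sensor, compare against a
# threshold, and collect alerts. The thresholds and the read_sensor
# callable are illustrative placeholders for real hardware reads.

THRESHOLDS = {"heat": 45, "wind": 80, "movement": 1}

def check_sensors(read_sensor):
    """Poll every sensor once; return the names of any that trip a threshold."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        if read_sensor(name) >= limit:
            alerts.append(name)
    return alerts

# Fake readings standing in for hardware:
readings = {"heat": 22, "wind": 95, "movement": 0}
print(check_sensors(readings.get))  # ['wind']
```

Run that once a second and append each alert to a log, and you have the control-center scenario described above; nothing about it is beyond an 8-bit machine.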


SQLite assumes quite a bit more resources than a Z80 has.

There were databases that ran on 8 bit computers. They were not as powerful as SQLite, but they could do the job.

Since the transistors are used as switches, one can use electromechanical ones, a.k.a. relays.

There's a nice project page of one here[1], including an in-depth video about it here[2]. There's a collection of other relay computers here[3].

[1]: http://web.cecs.pdx.edu/~harry/Relay/

[2]: https://www.youtube.com/watch?v=tsp2JntuZ3c

[3]: https://github.com/Dovgalyuk/Relay/wiki


This reminds me a bit of A Canticle for Liebowitz, which in turn reminds me to re-read it. Thank you.

Yes, definitely. It's just another spin on the same idea 60 years later...a few things have changed since then, so I think there's room for another take. (And, I'm not the first to mine the territory since then.)

Similar to this scenario is Neal Stephenson's Seveneves. Without giving too much of the story away, it's a great sci-fi novel that covers some post-apocalyptic topics, one of which includes a huge regression in PC performance/transistor count due to the major disruption of this fragile supply chain.

You can look at the proliferation of antiques for examples of this. Arguably many antiques are more fragile than a computer, and exist over a hundred years later in excellent condition either through careful planning or neglect and luck.

So basically Waterworld but for computers instead of water? I like it.

Please continue working on this novel; I would read it. You could even start publishing today with services like leanpub and get some early backers and feedback.

Jacquard Looms. No electronics required.

I was pontificating about my 401k fund and what I should do if I find it no longer exists when I need to start using it.

The older gentleman who was polite enough to listen to me said, "It's ok guy, if that 401k doesn't exist, then neither will you".

And so I think I will not stockpile any computers for later. I do like the engineering spirit of this however.


"It's ok guy, if that 401k doesn't exist, then neither will you".

There are a number of ways that can not be true without a large scale societal collapse. Fraud involving pension funds has happened many times in the past (Bernie Madoff, Robert Maxwell being two high profile examples). The last financial crisis brought a bit more attention to the topic of counterparty risk - the idea that your "safe" investment is only as safe as the institutions that are backing it in many cases. It's not necessarily a high priority concern but I think it's worth at least considering splitting your retirement savings across more than one account with different institutions.

There are also lots of conceivable larger scale crises with historical precedent (many in the 20th Century) that would render your retirement savings largely worthless without leaving you dead. In many of those you would have more pressing concerns than your 401K but it still seems like not a bad idea to have some physical things of value that you keep somewhere secure but accessible (cash, perhaps gold and/or silver).


401(k) plans have individual named customer accounts and strict auditing standards. A Bernie Madoff type scam isn't really possible. Even if Vanguard or Fidelity goes bankrupt the customer accounts will still exist. There's no counterparty risk (except for cash actively being transferred in or out).

Even if you have full confidence in auditors and regulators (Arthur Andersen?), that only guarantees that you own the contents of the account. What about what's in those accounts? How familiar are most people with the details of the various mutual funds and/or ETFs that might be held in their accounts? How complex are the webs of obligations inside those funds? If one or more large financial institutions were to have a Lehman Brothers situation, how long might the assets in some of those funds be tied up in litigation, even if they were actually still there?

These concerns are all likely fairly low probability but there's certainly a whole range of possible scenarios between "my retirement funds are completely secure" and "I'm dead in a global thermonuclear apocalypse".


There are enough people who depend on those for their income that Congress would do something. I don't know what, but the litigation would be the top concern of Congress, and that pressure would ensure that things got wrapped up quickly somehow.

Civilization can withstand a surprisingly extreme level of collapse without most people dying. In the most horribly war-torn country I can think of, Syria, most of the pre-war population is still alive today.

If someday the US looks like Syria, your 401k will be worthless but you’ll probably survive.


I skimmed the announcement and it took a while to grok this comment of yours. Now I do, it all seems so much more awesome. An OS for a hobby project CPU, made with 9000 transistors, but for a future where society has collapsed.

The first link on CollapseOS’s announcement called “Winter is Coming” has it all:

https://collapseos.org/why.html


He mentions that it's only applicable in a certain range of collapse severity; I'm wondering what kind of scenario he foresees that would lead to that type of collapse?

There is a range of collapses where the economic disparity would be great enough to cause a disruption in the current electronics supply chain, but not so great as to destroy everything not in a bunker.

Think about a supervolcano erupting, large meteor strikes, a Black Plague-style microorganism outbreak, a small or medium-sized nuclear war, climate-change-led global food shortages, late-stage trade wars, and many more.


So, let me see if I understand this correctly: this is supposed to run on z80-based computers after an armageddon comes and all the other "modern" computers are out of business, so people start building them by scavenging parts. Ok.

So, first of all, how are you supposed to download this thing onto your homebrew computer, given that internet will most likely be down?

"But if the collapse magnitude is right, then this project will change the course of our history, which makes it worth trying."

Mmmh, I think the author is a bit on the hyperbolic side here. I'm quite sure that anyone that can design & assemble a z80 computer can quite comfortably code some basic utilities by himself just fine. All the others won't care a bit about your OS. Sorry if I sounded harsh, but I actually was.


You're supposed to download it now and keep around. Just like preppers do with MREs and ammunition.

I said this further up the thread, but I'd rather have a raspberry pi, some screen cribbed from a smartphone, and some flash sticks.

Why plan for less than the raspberry pi level?


Because the Pi might be needed for more demanding workloads (networks, routing, SDR radio systems). With this solution you can build low-tech control systems and programmable switches with a bit of logic where a Raspberry Pi would be overkill.

But the doomsday scenarios aside, this is super useful as an educational device. It can teach people what computers actually are and how they operate on the lowest possible level.


This is what excites me about this project. Another commenter posted the NandToTetris course, which is along the same lines. Computers/software are so complicated now, so much understanding gets lost in the upper levels of abstraction. Everything just seems like black box magic.

Or you know, a laptop. I've got like 6 of them piling up dust and all they need is 19V. The oldest from 2008, the batteries on 3 have sadly failed being unused, everything else still works.

I've also got 2 desktops from 2003ish (Athlon64 and Pentium III), they probably work, too, although they're stored in a garage (along with some other stuff like VCRs and CRT displays).

Not to mention routers old and new, all running Linux.

Yeah, why plan for less? All I need to do is scavenge around my property :D

Though a garden, livestock and a greenhouse will be a much higher priority. No one needs any sort of computing when they can't eat.


> Or you know, a laptop. I've got like 6 of them piling up dust and all they need is 19V

Mechanical parts fail first. The keyboards on those laptops will be gone after a few years of use. The USB ports won't last much longer. How long will the thermal paste and internal fans last?

The point of the z80 is that they're cheap and relatively easy to build from scavenged parts.


Not sure where you scavenge parts for a z80, I think it's much easier to scavenge parts for PCs. Plus, laptops are way more fixable than you seem to think, with simple soldering. Fans can be kept going for years with grease. External keyboards are ubiquitous. It's strange how people here bang on about RPi when a simple laptop is even more flexible.

And I'm also thinking more of a "repository of knowledge" use, not just simple controllers.


> Not sure where you scavenge parts for a z80,

TI calculators, microcontrollers, and more:

https://en.wikipedia.org/wiki/Zilog_Z80#Embedded_systems_and...

> I think it's much easier to scavenge parts for PCs. Plus, laptops are way more fixable than you seem to think, with simple soldering. Fans can be kept going for years with grease.

Collapse OS is talking about timelines of a century or more. Non-mechanical computers are the only ones that will last that long without the supporting infrastructure.


It's just a few thousand Z80 instructions in assembly notation. You can print it on paper, store the listings somewhere safe and type the code in by hand just like in the old days.

Why is that possible in this fictional reality but stockpiling an RPi isn't?

Because it doesn't fit the given narrative: it's all about scavenging/recycling.

Your reality does make better sense, but doesn't make a good story.


There are vastly more Z80s in circulation than Raspberry Pis.

Also, in a worst-case scenario, it's much easier to build a Z80 by hand from individual transistors and/or logic gates¹ (which would still be absurdly difficult, but not impossible) than any ARM CPU (including the ones in any generation of RPi). In a slightly-less-than-worst-case scenario, it's much easier to build a Z80-based computer by hand (i.e. with a Z80 you pulled off some other piece of hardware) than your average ARM-based computer (including any RPi).

¹ "logic gates" not necessarily being transistor-based, either; one could take a cue from the guy building a 32-bit RISC-V machine with vacuum tubes: https://www.ludd.ltu.se/~ragge/vtc/


A Z80 has over 8500 transistors, so even though it's much simpler than a Raspberry Pi, I don't think you're going to solder together an 8000-transistor processor in a post-apocalyptic world. Even the old 4004 has around 2300.

I'd guess that those 8500 transistors would be better used to build thousands of much simpler logic controllers to help automate infrastructure that's lost its computer control systems.

https://en.wikipedia.org/wiki/Transistor_count

There are simpler "transistor computers" that might be more feasible to build from discrete components:

https://en.wikipedia.org/wiki/Transistor_count#Transistor_co...


There are examples elsewhere in these comments (linked by myself and others) of CPUs with similar (if not greater) transistor counts soldered together by hand out of TTL chips or even out of individual discrete transistors.

There are multiple ways to skin a cat, though (perhaps literally; this would be the post-apocalypse, after all!), and you're right that there are numerous ways to put transistors to use besides building full-blown CPUs. One of the key advantages of a general-purpose CPU is that it's general purpose and can be made to do all sorts of different things, but there are certainly plenty of cases where that ain't necessary and you'd need a fraction of that capability at most.

Still, they'll probably go hand-in-hand. "Chips" are just discrete components, whether crafted from a single chunk of silicon or itself built out of discrete components and treated as a single discrete unit. Building a whole general-purpose CPU from individual transistors is much easier when those transistors are already arranged on a little board you can plug into your bigger board. Chances are that no matter if someone's building a whole CPU or something more special-purpose and limited, that someone will be doing so in terms of already-assembled-and-composable gates rather than transistors directly, if only for the sake of one's own sanity.


Uhm....

[1] https://en.wikipedia.org/wiki/ARM_architecture#Acorn_RISC_Ma...

says about 30,000 gates, which is 10,000 fewer than the Motorola 68000, and arguably faster.

[2] https://en.wikipedia.org/wiki/Zarch

at least made us Amiga and AtariST geeks envious.

Furthermore there is 'Microsequencer' like described there:

[3] https://en.wikipedia.org/wiki/Microsequencer

and following up from there to for example

[4] http://www.microcorelabs.com/mcl65.html

which is a 6502 softcore with microsequencing applied

Alas... There are many options to choose from according to the available technology, tools & knowledge. One does not have to make an exact copy of something which made sense for arbitrary reasons, which don't necessarily apply when doing it from scratch under different circumstances.


> says about 30,000 gates, which is 10,000 fewer than the Motorola 68000, and arguably faster.

The Z80's at 9,000 transistors (not sure how many gates, but almost certainly a fraction of that transistor count), so even the Acorn would be heavy in comparison. Still doable, though; just takes more time.

In terms of speed, it has less to do with transistor count and more about how close together you can get the transistors. Big, hand-wired CPUs tend to be slower than small single-chip ones just from the sheer latency differences between components.

> One does not have to make an exact copy of something which made sense for arbitrary reasons, which don't necessarily apply when doing it from scratch under different circumstances.

True, and I ain't saying one does. If we're at the point where we have to hand-wire replacements, though, it helps to have at least some degree of compatibility with the thing we're replacing. There are at least some schematics out there for building 8-bit CPUs from TTL chips¹², and I'd imagine those would all be viable candidates if we have to re-bootstrap our computational power and run out of other CPUs to tide us over in the meantime.

Ideally we should be working on CollapseOS equivalents/ports for as many CPUs as possible, so that we know that no matter what we're stuck with, there's always a way to repurpose it. Just as importantly, though, we should be hoarding copies of pinout/wiring diagrams, hardware manuals, etc. to make sure we have the knowhow on the hardware side, too.

¹ http://cpuville.com/Kits/8-bit-processor-kit.html - happens to be bus-compatible with the Z80, though not ISA-compatible as far as I can tell.

² http://mycpu.thtec.org/www-mycpu-eu/index1.htm - more "modern" features like Ethernet and VGA out, so a more likely candidate for general purpose computing if we really do run out of Z80s to scavenge


The flash in the Pi will eventually die. Other high density ICs may also die from the laws of thermodynamics.

I don’t think it’s any coincidence that this collapse brings computing down to an era which is many people’s personal computing heyday, and not a decade before or after.

I’d hazard a guess that 8-bit machines played a part in the author’s young life - first computer, first job, happiest childhood summer, last computer they felt in control of before they got annoyingly complex - something like that. And therefore a collapse ending right when the author would have useful skills but things wouldn’t be too hard, is the most fun one to imagine.

Computing was around 40 years before the 1980s and electricity for a hundred years, but who wants to try and rebuild room sized punched card machines for ballistic trajectory calculations, get greasy fingers on mechanical parts, or deal with HT electrical power supplies safely, yawn, no fun there. rPi the same - by then everyone can do it and author isn’t special, so whatever. It’s not different enough from right now.


There are a very large number of Z80s around today and in-use. They are dirt-cheap and just right for many embedded applications. More importantly, they're simple enough to grok and to wire by hand; so yeah, were I stripping components for a make-shift computer in a post-apocalyptic world, I might just go for a Z80.

He specifically speaks of a "distribution" phase as imminent collapse seems apparent, before "blackout". So everyone who knows about it would hypothetically get a copy, and perhaps begin stockpiling some relevant hardware.

(SD cards seem like a good commodity to stockpile here, as he supports them, but they're likely incredibly hard to manufacture post-collapse.)


It may take a while but you could go Altair-style, by making a computer that is programmed through physical switches and then go and input every byte one-by-one. Though some automation would help here (AFAIK some people used a dotted paper tape reader and only entered the tape loader code by hand and the rest of their programs were loaded via the tape).

It's just a high effort apocalypse larp.

For anyone interested in the idea of keeping old computers running, I highly highly recommend the Living Computer Museum in Seattle [0]. It was started by Paul Allen and has some of the coolest stuff I've ever seen. Their goal is to restore old computers to working condition and have them look/feel/smell/work the same as they did when new. I got to see a ton of old computers that I had as a kid. I even got to write a program in BASIC on the original IBM PC like I did when I was a kid! [1]

[0] https://www.livingcomputers.org

[1] https://twitter.com/TeriRadichel/status/1164369796307116033


Why can it be only useful after a collapse?

If it can be useful, then it can also be useful today to all the (poor) tinkering people around the world. There are lots of alternative eco-villages etc. trying to be self-sufficient, who do all kinds of recycling and improvised technology. If this gets adopted by those people, then it might be useful.

But if they cannot use this today, then I don't see how a broken-down survivor group could use it.


Forth is the ideal language for bootstrapping a cobbled-together computer from whatever scraps you can find. Forth gives you a shell, an assembler, a disassembler, and a rich, extensible programming language in a few kilobytes. You can peek or poke hardware interactively, or use the REPL as a calculator. Forth-style assemblers also make cross-compilation very practical.

If I was tasked with bootstrapping a post-apocalyptic computer from junk, a hard copy of a well-commented Forth implementation would be a welcome assistance.
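For anyone who hasn't touched Forth, the whole evaluation model fits in a few lines: words consume and produce values on one shared data stack. A toy evaluator in Python to illustrate the idea (nothing like a real Forth, which adds a dictionary, defining words, memory access, and more in a few KB):

```python
# Toy Forth-style evaluator: whitespace-separated words and one shared
# data stack. Number literals push themselves; other words pop their
# arguments off the stack and push their results back.

def forth_eval(source, stack=None):
    stack = [] if stack is None else stack
    words = {
        "+":    lambda s: s.append(s.pop() + s.pop()),
        "*":    lambda s: s.append(s.pop() * s.pop()),
        "dup":  lambda s: s.append(s[-1]),
        "swap": lambda s: s.extend([s.pop(), s.pop()]),
    }
    for word in source.split():
        if word in words:
            words[word](stack)
        else:
            stack.append(int(word))  # anything else is a number literal
    return stack

# "2 3 + dup *" computes (2 + 3) squared:
print(forth_eval("2 3 + dup *"))  # [25]
```

Because every word reads and writes the same stack, chaining words together behaves like the CLI pipelines mentioned below, and the interpreter core stays tiny, which is exactly why it suits a cobbled-together machine.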


Here's a very nice forth implementation for the C64 (not z80-based, it's 6502, but close): https://github.com/jkotlinski/durexforth

Sounds great! Can we get all that with a less confusing syntax? That’s my main difficulty with forth.

The syntax is unfamiliar to most, but it's very consistent and learnable. The syntax is one of many aspects of the design that keeps the whole thing simple. The use of a stack to pass arguments between words makes the kind of "pipelines" which are common in CLI shells very natural. I can think of few, if any, other languages I would want to use interactively as a replacement shell without complex line-editing assistance.

For those curious as to what a modern machine using Forth on bare metal as an operating system might feel like, check out Open Firmware: https://www.openfirmware.info/Open_Firmware

(If you have an OLPC sitting around in a closet somewhere from the Give-One-Get-One program years ago, you already have a serviceable and physically robust Forth machine ready to roll! Same deal for some older Powerbooks and Sun workstations.)


I wish openfirmware was the norm. PC BIOS and UEFI are horrible in comparison.

I like the concept, but I'm not sure that the Z-80 is the best implementation substrate -- it's got a lot of oddball properties and special case instruction encodings (due in part to the way things were squeezed in around the base 8080 instruction set).

A PDP-8 can be implemented in fewer transistors (original DEC wiring diagrams are on bitsavers, and github has source for several clones in Verilog), and DEC already shipped a moderately full software suite for it.


I think the point is in that there are so many z80 chips out there for scavenging and that there are plenty of consumer devices that can be used (like TI calculators). Even if they stop making them today, the supply will last long.

They are also the cockroach equivalent of a processor: tough as hell, functional on a ropey power supply, and usable to make a simple 8-bit machine with very few ancillary chips - the reason they ended up in the Sinclair ZXs.

Similar reasons are likely why they were in the TRS-80.

The TRS-80 was... not built for ruggedness and durability. At least not if you had a system with any peripherals; the cables were notoriously flaky, to the point that the problems are on record in the Wikipedia page, as an explanation for the "Trash-80" sobriquet. (I recall at the time seeing aftermarket recommendations for expensive cables and doing things like wrapping components in tinfoil to try to get some extra RF shielding.)

The system wasn't built for ruggedness. The chip had to be, because the system provided that ropey power supply previously mentioned and very few ancillary chips. It was a relatively inexpensive chip that one could build an even less expensive home computer around.

Precisely this. The ZX80 was a masterpiece of "just how close to the wind can we sail and still have a mostly functional, for most people, legit computer".

I mean, it was built out of off-the-shelf, (relatively) cheaply available TTL components with no real RF shielding (something variants in the US had to fix to comply with the FCC rules of the time).

It's astounding it was a commercial success, but it cost 80 quid (as a kit; 100 pre-built) at a time when others were 3-4 to 10 times as expensive (the average wage back then was around 110 per week).

In a very real sense it democratised computers to something almost anyone working could afford if they wanted it.

I know that if it hadn't been for the ZX-81/ZX Spectrum I wouldn't have had a career in software engineering, nor a lifelong love of computers. I was born in '80 to working-class parents in the north of England; even in 1987, having a computer was considered exotic among my cohort. I didn't see another one outside my family til 1990 (a C64 I lusted after).


I like the Collapse OS concept. I once typed Fig Forth from a booklet I ordered from America into a Sharp Z80 computer. Society was, in the main, computer-free at the time. It was like the post-collapse era. To restart the software industry from scratch, I recommend etching a modern version of that Fig Forth booklet onto stone tablets, and perhaps providing a Scheme interpreter written in Forth as well. They will never need anything else.

I was also thinking that if I had to start from scratch, Forth would be a great way to do it. I'm not familiar with Fig Forth, but I just ran across these assembly sources for various early CPU architectures: http://www.forth.org/fig-forth/contents.html but they look really long. I wonder if that booklet you had was simpler than these. (Any idea if it's available online?) I was thinking something like jones forth https://news.ycombinator.com/item?id=10187248 which has a minimal assembly part (which is mostly comments explaining how it all works) and then quickly moves to implementing Forth in Forth itself.

Why does the author think the global supply chain will collapse in the next ten years? What scenario do they envision?

Climate change? It will cost trillions of dollars and billions of lives, but it will likely play out over the course of several decades. We will be stressing out about it, but it's not going to be an electronics-ending apocalypse.

Nuclear war? Please. The countries that have the capability are also level-headed enough to use them to play brinksmanship, despite what the news is telling us. These countries want deterrence, not to blow stuff up.

Disease? We're too widely distributed, and the most successful viruses are ones that infect but do not kill. Ebola is scary, but it's too destructive for its own good, which makes it easy to contain. The most successful virus is the common cold, and possibly HIV, which is certainly a serious problem, but nobody's out there building shelters because of that.

Water/food supply? Fresh water is a function of energy, and if anything is a plus about climate change, it's that we're gonna have a lot of fresh water raining down on us from the Earth trying to compensate for higher temps.

Second-order effects from climate change will likely affect arable land, which is worrisome, but it may also open up new areas for growth and will likely play out over time, so I'm considering this more of a political problem.

The only things I can think of are either:

1) A sudden disappearance of the rare earth metals needed to make electronics, which would be massively inconvenient, but we'd figure out a way around that, either by it suddenly becoming more valuable to recycle old electronics or by not needing them in the first place. Besides, if this happens we'd just get extra motivated to start mining asteroids.

2) Celestial events like an asteroid strike or a coronal mass ejection hitting Earth in the wrong way. The first problem is mitigated with asteroid tracking, and we're getting better at that; the second one would make for an interesting 6 months, but I'm pretty sure we'd get back on track pretty quick.

I am all for technology that does not depend on a complex global supply chain - we will need to manufacture simple but sophisticated tech in space and on Mars in the future - but this prepper BS is just fantasy driven by a hyper-apocalyptic news cycle schlepping around clickbait.

What am I not worried about that I should be? What massively apocalyptic event is going to happen in 10 years to turn us back to the middle ages? Seriously.


> this prepper BS is just fantasy driven by a hyper-apocalyptic news cycle shlepping around clickbait

Au contraire, it’s the belief that our system can continue like it’s doing that is the real hyperbole. Collapse is just baseline reality of civilizations.

- HISTORY: Collapse is a property of every civilization we’ve studied. These people were as smart as us, if not smarter, working with societies smaller and simpler than ours.

- ECONOMY: The way money is created and managed today is an ongoing experiment that almost ended in 2008, and we are still on uncharted ground. We can only continue paying for debt by increasing consumption in the following year, yet our debt keeps increasing through the ever-devaluation of our currency, requiring more production and consumption. No one is planning for an end to this model of growth.

- TECH: Most of our infrastructure is built under the incentive of increased efficiency and profit, not long-term robustness since profit has to be sacrificed to plan for contingencies like price fluctuations in supply. Short term tech outcompetes the long term, easy. Strong but fragile. And then there’s the incentivized inefficiencies from economies of scale: one calorie of food now requires ten calories of energy from our system to produce.

- COMPLEXITY: “More is different.” As everything becomes interconnected, things become entrenched into dynamics that become increasingly difficult to control and even reason about. Rational decision-making must always be filtered by the interests of the current system, thus there is a loss in agency in what we can do (read: incentives), and we are stuck trying to find creative solutions that must accept the framework of what may be a harmful system, often just making that system more effectively harmful.

- ENVIRONMENT: Some call it the sixth mass extinction. Whatever it is, the biosphere is changing dramatically. Soil is in a weird zombie state kept alive by oil. The basic line is that the value of life is diminished through the lens of our economy, as dead resources. So our model will continue bringing the real world into consistency with that deadness.

- MYTHS: When we live in a civilization that sanctifies all forms of advancement and improvement and growth, there is no fertile soil for the acceptance of limitation. We only have the vocabulary to label it pessimist. Thus, optimism becomes co-opted for the aspirations of a mythical techno-utopia beyond all conceivable boundary.


>Collapse is a property of every civilization we’ve studied

How would you define "civilization?" Because sure, every civilization has an expiration date, but for current computing technology to be lost requires a worldwide civilizational collapse. Current global civilization is a decentralized collection of many civilizations which have all shared and replicated the knowledge of computing.

>our debt keeps increasing

Public and private debt are separate things. Public debt has generally seen a continuous march upwards. Private debt has been peaky, with no upward trend. Debts are fine when the debt is incurred for a purpose that has a sufficient return on investment. Public debts of sovereign currency issuers can always be repaid, and the yields on those bonds are whatever the currency issuer decides. And further debts shouldn't be judged as nonviable just because of the quantity of existing debt. Rather, the question at each point should be whether the investment is a good one.

> Soil is in a weird zombie state kept alive by oil

Soil is renewable, and can be made even with simple techniques. The terra preta soil of the Amazon rainforest was largely human-made, and thus the Amazon itself is largely a human construct. Creating it didn't require any oil.

>there is no fertile soil for the acceptance of limitation

Malthusian thinking has often been the default, and one of the most popular modes of thinking since the Enlightenment. The mid 20th century was full of best-selling Malthusian books by the Club of Rome, Paul Ehrlich, M. King Hubbert, and EF Schumacher. The entire fields of biology and ecology have been predicated on Malthusianism. Darwin was explicitly inspired by Malthus.

It has been to the great surprise of the intelligentsia of each successive generation that there hasn't been mass starvation. We've been able to do more and more, with less and less. Any serious type of collapse hypothesis needs to factor in the history of losing bets on that side of the argument, and internalize why their predictions were wrong. It wasn't just luck every time.


Definitely; the Green Revolution et al. is a solid basis for optimism, especially with Ehrlich losing his wager on resource scarcity. And I do like the Malthusian lineage you described.

This empirical optimism is also paradoxically irreverent toward the immutable attrition of complexity. Our creativity has limits, whatever they are just pick something. At the risk of sounding flippant, 200 years of “creative patching” is historically too small a window to say we can continue subverting this “law” with eternal vigilance (I’ve heard this described as “we are running out of tricks”). Maybe I’m oversimplifying when I say we would have to approach the limit of absolute foresight to achieve this, but I think there’s some truth to it. For example, I like these explanations of our rational limits, with regard to managing a complex society:

- CHOMSKY[1]: We have in our heads a certain set of possible intellectual structures. In the lucky event that some aspect of reality happens to have the character of one of these structures in our mind, then we have a science. And that doesn’t mean everything is ultimately going to fall within the domain of science. Quite the contrary… personally I believe that the nature of a decent society might fall outside scope of possible human science.

- ZIZEK[2]: Hegel says, the owl of Minerva only flies out in the dusk. [owl being the icon of wisdom] So philosophy can only grasp a social order when it’s already in its decay.

Particularly unsettling is our reaction to the blurriness of our creative boundaries—that we insist on walking blindly toward cliffs to find where they are. Optimism in uncertainty is great, but some projections cannot be certain until too late.

A final quote that might address your first points:

- OPHULS[3]: Because our own civilization is global, its collapse will also be global, as well as uniquely devastating owing to the immensity of its population, complexity, and consumption. To avoid the common fate of all past civilizations will require a radical change in our ethos—to wit, the deliberate renunciation of greatness...

Anyway, this debate is covered in the book The Wizard and The Prophet[4]. I think we can tell which schools we belong to.

[1]: https://youtu.be/3wfNl2L0Gf8?t=1748

[2]: https://youtu.be/lsWndfzuOc4?t=6703

[3]: https://www.amazon.com/dp/1479243140

[4]: https://www.penguinrandomhouse.com/books/220698/the-wizard-a...


We're doing things all wrong now but we largely know what needs to change to achieve sustainability (hint: it's not more computers). Should a collapse come the transition will come quickly as well, depending on the style of collapse. Long-term effects of climate change are a different beast.

Regarding "A sudden disappearance of rare earth metals needed to make electronics", you (mostly) don't need to worry about that either. Fluorescent tubes and white LEDs use rare earth elements to convert blue/UV light to broad spectrum visible light -- these are how non-OLED displays generate back light. High strength permanent magnets contain the rare earth elements samarium or neodymium (with optional lesser quantities of dysprosium, praseodymium, gadolinium). Those are the only rare earth element applications worth mentioning as far as computer system components go. Strong rare earth magnets are still used in spinning-platter hard drives, but not in SSDs.

You could buy a new EPYC server with solid state drives, grind it up and homogenize the whole thing in acid, and the resulting solution would have a smaller percentage of rare earth elements in it than the same mass of ordinary crustal rocks treated the same way.

Computers don't need rare earth elements. Nor do solar panels, nor do most wind turbines.

See for example the "Consumption" section in the USGS 2015 Minerals Yearbook Rare Earths report:

https://s3-us-west-2.amazonaws.com/prd-wret/assets/palladium...

In descending order of global consumption volume, rare earth elements are consumed by the manufacture of catalysts, magnets, glass polishing media, and metal alloys. Everything else is just miscellaneous.


Big solar flare. Grid goes down. Could take years to get it back up.

https://www.cnet.com/news/we-arent-ready-for-a-solar-storm-s...

Near miss in July 2012: https://science.nasa.gov/science-news/science-at-nasa/2014/2...


We're near a new Maunder minimum, so a big flare is becoming less and less likely.

Also, while I think a big solar flare would break a lot of stuff, I think we're better prepared for it than many give us credit for. Tens of millions of people might initially be without power; some fraction of them may need to wait a long time (months or even years) to get it back; and various bits of transport and production may get disrupted. Enough to require rationing and a major hit to quality of life, but not enough for any kind of catastrophic chain reaction, IMO.


Interesting, thanks

I believe there to be a different reason than the ones you mentioned. I believe what might happen is the end of "the Internet" as we know it. Russia is ready to detach its networks from the rest of the world. China has the GFW. Hackers from around the world are targeting each other for their governments and getting a get-out-of-jail-free card (scratch that: carte blanche). I like FOSS, but it is also more easily pirated by countries that don't adhere to the licenses. There are countries that pirate en masse and steal trade secrets - China chief among them. Worse, the USA does it as well, via the NSA etc.

Either way, I don't think such a premise is a good basis for starting this/an OS. There are much better arguments to be made for preferring a lightweight OS. Intellectual curiosity, for one.


Obviously, you spend way too much time consuming reasonable, balanced, non-hyperbolic media.

The only reason the supply chain is at risk is because we dispose of a lot of electronics simply because there's a new version.

When the product lifecycle changes from 1 year to 10+ years, you'll find that people will just keep their stuff around longer and the demand on the supply chain goes way down.

Plus, there will be a shitload of data centers with capacity that will no longer be necessary (because of reduced devices making requests, segregated internet, less connectivity) in apocalyptic scenarios. Those can probably be re-purposed.

We haven't had to get clever about computer conservation because there's been so much supply.


Peak oil.

Also, the "middle ages" are going to take a good century at least. Think instead of the collapse of the Soviet Union (with some places playing the part of the Balkans / Caucasus), but worse...

Btw, rare earths are not so rare - it's just that the US got rid of this industry.


Imagine it's 1910. The same argument you make against nuclear war can be made happily against a large scale war then.

Nothing of the sort the OP describes happened as a result of WW1. Yes, some empires "collapsed", in the sense that the ruling parties changed, their governments got completely ripped out and replaced, or countries split up (e.g. Austria-Hungary, Russia, Germany). Maybe people went through a few decades of economic hardship. Either way, these collapses were contained to local regions (with cascading global consequences) but resembled nothing of an apocalyptic scenario.

No large scale technology was lost. If anything, human civilization became more technologically advanced.


In the event of a sufficiently large collapse, people will be so far down on Maslow's hierarchy of needs that an OS will be about the last thing on their minds.

In the event of a sufficiently large collapse, I don't want to survive it. That is NOT to be misinterpreted to mean that I have any intention to do myself or anyone else any harm. I am quite happy with my life now, but in the event of a major collapse, I would rather not live to see it. My spouse feels similarly.

I don't know, I'd say something like hyperinflation in Germany[0] was probably a collapse that, er, "adjusted for inflation" was as big (for the Germans anyway) as "no more chips" might be to us today. And people still adapted to it and tried to get on with their lives in altered circumstances.[1]

[0]: https://en.wikipedia.org/wiki/Hyperinflation_in_the_Weimar_R...

[1]: a charming novel about this: https://en.wikipedia.org/wiki/The_Black_Obelisk


The problem is that modern society - and the current size of the world's population - is dependent on a lot of programmable devices. For example: agriculture, where tractors and other farm machines nowadays have ECUs/ECMs (let alone even more programmable bits and pieces). Same for the vehicles used to actually transport food from farms to the rest of the world. There are plenty of other examples, too, like medical devices and water extraction and heating/cooling and other things that are nowadays the difference between life and death for a lot of people.

Sure, we were able to make do a century or so ago, but not with 8 billion people and counting. People will die without some way to keep the various microcontroller-driven systems up and running. It's a long shot that we'd be able to adequately replace a microcontroller in a tractor ECM or a pacemaker or an air conditioning system or a water pump, but a slim chance is better than no chance at all, and the latter is exactly what we'll have unless we're thinking about and testing out solutions now, while we still have the resources to easily do so.


> the current size of the world's population - is dependent on a lot of programmable devices.

Not to mention the energy supply chain. If the supply chain required to make electronics collapses, that probably also means the energy supply chain has collapsed, or has at least been severely disrupted. That seems likely to be far more damaging, far more quickly, than a lack of ability to keep a microcontroller running. If I don't have gas for my car, it doesn't really matter whether I can fix it when it breaks down. (And I run out of gas in a few hundred miles, but repairs are required on the order of tens of thousands of miles.)

This is really what I was trying to get at with my first comment. The problems presented by a lack of ability to make new technology are the sorts of problems that take months or years to become critical, but in a true collapse setting, the issues that matter most would unfold in days or weeks.

(I feel like I should point out that I don't think any of this is particularly likely.)


Electricity generation does not require a global supply chain. Modern computer manufacturing does.

True as far as it goes, but a couple comments:

* I was referring to the energy supply chain, not just electricity. Energy as a whole is very much a global supply chain. (And even more than that, it's very globally interconnected in terms of pricing, etc.)

* As a thought experiment, consider completely shutting down the computer manufacturing supply for two weeks. Then consider the same for the energy supply chain. Which of those has more immediate and profound impact?

Keep in mind that I'm not saying that either of these domains is unimportant. Just that society would and has felt the importance of one a lot more acutely and a lot more suddenly.


I think the point of GP's comment, though, is that it's arguably straightforward to bootstrap some degree of electricity generation without there necessarily being a working energy supply chain (e.g. building one's own dynamo with a magnet and some wire and hooking that dynamo to a windmill or watermill or steam engine or other turbine, or salvaging bits and pieces of broken solar cells to build a new one from almost-scratch; then it's just a matter of building capacitors or batteries or flywheels or elevated weights or whatever to store that electricity). Yes, it'll be absolutely painful (and will offer nowhere near the energy production/distribution capability to which we're accustomed as a society), but it's survivable.

It's also possible to bootstrap some degree of computing power without an electronics supply chain, but it's also much easier to cannibalize from existing devices (whereas for the current energy supply chain there are fewer things to be cannibalized, besides perhaps electric motors to turn into impromptu dynamos).

Realistically, both will probably go hand-in-hand: we'll use primitive, cobbled-together generators to power primitive, cobbled-together computers; which we'll use to control more sophisticated generators to power more sophisticated computers (and the more sophisticated processes for repairing/building those computers); and so on until we're eventually back to where we started.


The energy supply chain is also in turn dependent on a bunch of microcontrollers (and also macrocontrollers, if there's such a word), all the way from power plants to distribution networks. So if we want to keep those running, we'll need to make sure we have the ability to repair/replace that hardware, too.

> (and also macrocontrollers, if there's such a word),

Not really, but the idea is sound. There's a hierarchy of control in electricity generation.

* At the bottom level you have microcontroller driven control loops sitting within the plants themselves. These operate on a sub-second timescale and do things like balance air/fuel/etc. flow through the plant to keep it safely running and stable.

* The lowest level loops take their setpoints and controls from a higher level set of controls that work at the level of the generating unit. Those work along the lines of 'generator 1 produce 200MW and ramp to 300MW over the next 3 hours.'

* Above that are control loops run by the grid operator that dispatch plants to match the amount of generation. (And do so in a safe and economic way).

* Above that are (can be) a series of nested power markets ranging in duration from real time, daily, monthly, etc.

* Above that are (can be) long term capacity markets that help ensure there's enough capacity within a grid to serve future load needs.

(So there are a lot of things that might qualify as 'macrocontrollers'. :-) )
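The two lowest layers of that hierarchy can be sketched in a few lines. This is a deliberately toy simulation under my own assumptions (linear ramp, a simple proportional inner loop, hypothetical function names and gains), not real grid-control code:

```python
# Toy sketch of the bottom two layers of the control hierarchy described
# above. All names, gains, and numbers are hypothetical illustrations.

def unit_setpoint(start_mw, end_mw, hours_total, t_hours):
    """Unit-level control: ramp the target linearly from start to end."""
    frac = min(t_hours / hours_total, 1.0)
    return start_mw + (end_mw - start_mw) * frac

def inner_loop(current_mw, setpoint_mw, gain=0.5):
    """Bottom-level loop: nudge actual output toward the setpoint."""
    return current_mw + gain * (setpoint_mw - current_mw)

# 'Generator 1: produce 200MW and ramp to 300MW over the next 3 hours.'
output = 200.0
for hour in range(1, 4):
    target = unit_setpoint(200, 300, 3, hour)
    output = inner_loop(output, target)

print(round(target))  # 300
```

The real versions of these loops run on sub-second timescales with far more sophisticated dynamics, but the nesting (a slow outer loop handing setpoints to a fast inner loop) is the structural point.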


Give me a good slide rule and a manual of practical mathematics, eh?

As for scavenged parts, you're going to need a warehouse of manuals and datasheets, eh?

Depending on the details of your post-apocalyptic scenario planning, simple automation driven by relays or clockwork logic will be more likely than e.g. scavenged microcontrollers.

I applaud the spirit of the project though: I don't want to live on Gilligan's Island making everything out of coconuts and vines.


> As for scavenged parts, you're going to need a warehouse of manuals and datasheets, eh?

You're right! As a thought experiment, let's say I download CollapseOS and then switch off my internet.

I have in my house a normal complement of electronic devices. I have a soldering iron, some wire etc. I assume if I start taking things apart I'll find some Z80s. Those Z80s will be living on boards with clock chips and memory etc. Where do I even start?


Reminds me of Global Village Construction Set

https://www.opensourceecology.org/gvcs/

The Global Village Construction Set (GVCS) is a modular, DIY, low-cost, high-performance platform that allows for the easy fabrication of the 50 different Industrial Machines that it takes to build a small, sustainable civilization


Long Tien Nguyen and Alan Kay's Cuneiform Tablets seem relevant: https://archive.org/details/tr2015004_cuneiform

But if someone has a hint about useful prior art, please let me know.

http://fuzix.org/ - lots of 8-bit targets, z80 included

http://cowlark.com/cpmish/index.html - has a vi-like editor, assembler, and is cp/m compatible so it can run lots of old cp/m software like various compilers


If I were to pick an 8-bit processor for a post-apocalyptic future, it'd be the single chip version of the Fairchild F8, not a Z80.

It was designed to be extremely simple and reduced in scope to the minimum of what a processor needed. It went into space. Radiation hardened versions were made.

The original version had its functionality broken up into multiple chips. That could allow for easier repairs.

I don't know how many transistors were in it, but I doubt it's more than the Z80 or 6502.

The RCA 1802 is another one I'd consider. In fact, it will likely outlive the human race entirely, as it's in the Voyager spacecraft.


> It was designed to be extremely simple and reduced in scope to the minimum of what a processor needed. It went into space. Radiation hardened versions were made.

But you won't find them in calculators just lying around that you can scavenge. Remember, the narrative driving this is post-economic/supply chain apocalypse.


Relevant to this goal is Stage0 [0], which is an attempt to bootstrap a compiler toolchain. It is still a work in progress, but it's the most promising attempt I have seen.

[0] https://github.com/oriansj/stage0


I love projects like these. One that has me fascinated is the idea of building a computer that can last centuries. Can it be done?

- Will the ICs last that long, can they?

- How will it get electricity if the sockets and voltage standards change?

- How do you make it durable to dropping, water, dust, etc?

- What sort of writable storage can last that long without degrading?

- How do you edit fonts as language changes over time?

- What sort of libraries and documentation do you include?

- Should you include some sort of Rosetta Stone for new users?


Hey mate. Dunno if you recall, but some time ago you recommended How to Read a Book: https://news.ycombinator.com/item?id=20847508

Would you like us to share notes on the book itself? I've practiced its teachings for the last 3 years or so, and I'd like to chat with someone about it.

Sorry if this comment is a bit irrelevant, but HN doesn't really have a DM system. /shrug


I have some answers

1. Yes; a 10°C reduction in temperature means a doubling of life. I've known Pentiums to last 10 years at 60°C+; just running processors at 30°C instead gives 80 years minimum. The main thing is to use leaded solder so you don't get electromigration problems.

2. Solar panels and batteries. Battery voltage is chemical and fixed by physics; nickel-iron batteries can be rebuilt and last forever. Solar panels can be oversized to provide enough energy even when they degrade over time and/or the computer can just be used at a lower duty cycle.

3. Make it big and hard to move in a sturdy box.

4. Flash can last that long if it is periodically rewritten, kept cool, has redundancy, and isn't updated often.
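The rule of thumb in point 1 (an Arrhenius-style approximation: every 10°C cooler roughly doubles component life) works out like this. The function name and the exact figures are the commenter's illustration, not a datasheet value:

```python
# Rough Arrhenius-style lifetime scaling: every 10 degrees C cooler
# roughly doubles component life. A heuristic, not a datasheet formula.
def scaled_life(base_years, base_temp_c, new_temp_c):
    """Estimate lifetime at new_temp_c from an observed base lifetime."""
    return base_years * 2 ** ((base_temp_c - new_temp_c) / 10.0)

# Pentiums observed lasting ~10 years at 60 C; run them at 30 C instead:
print(scaled_life(10, 60, 30))  # 80.0 (years)
```

Dropping 30°C gives three doublings, hence the "80 years minimum" figure above.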


5. Default to a ROM which is made in such a way that it is basically immortal.

Do you know, Mr Virgil Dupras, that the situation you described could be possible (more than you might think) in 2022? You were thinking about informatics and not about events on the planet. Around that time (2022), the whole world could face a negative situation where a solar storm arrives on the planet and destroys much electrical/electronic equipment... It is called a Carrington event and happens every 150 years or so... Ask NASA; they know it well. The US Government already started research into stopping it back in 2004...

Neat idea, but I'm not seeing the window of usefulness for this.

If society collapses and recovers relatively quickly, we likely can coast for 10-20 years on the computers that have already been built. This would be what I'd expect to happen with a point-in-time catastrophe that disrupts everything but then ends and we can all set to work to rebuilding everything. (Like a massive economic collapse, huge meteor strike, nuclear winter, etc.) Even if 95% of computers become inoperable, there's a lot you can do with the remaining 5%. Probably more than what you can do with new stuff you build.

Another scenario is that we recover really slowly. This would be due to some kind of enduring factor that holds back humanity, like a really long-term famine or global political instability that we somehow cannot reset. In that case, what's the hurry to develop software that's ready to go? Maximizing compute capability doesn't seem like it would be the thing that tips the scales and allows society to get rolling again. For that you need to solve whatever the root problem is.

TLDR, if we fall, maybe there is nothing holding us down, and we can bounce back up relatively quickly, in which case we don't need this. Or there is something holding us down, then it seems unlikely that computing is what we need to solve that.

Maybe there are other scenarios that I haven't thought of, though. Or ways that computing would help in the above scenarios.


Anyone wondering why the Z80 chip, just read the FAQ:

https://collapseos.org/why.html

"The z80 has 9000 transistors. 9000! Compared to the millions we have in any modern CPU, that's nothing!"


No, it's 9000. That's not nothing. You may be able to scavenge old ones for a long time, but building new ones will be non-trivial.

Non-trivial is still better than outright impossible.

Well, the first transistor was in 1947. The Z80 was first manufactured in 1976. That's 29 years of improvements in manufacturing technology before the Z80 was manufacturable. So if we have to start over, the Z80 is sure better than the Pentium (or at least, we can make it first), but it's still outright impossible for quite a while.

It's possible to build CPUs with many times the Z80's transistor count by hand¹, albeit with great effort and taking up a heck of a lot of space (and probably nowhere near the speeds of a single-chip Z80).

¹ http://megaprocessor.com/index.html


Why z80 and not x86 or AArch, which are both more readily accessible today? This whole idea reeks of someone trying to reconcile their love of old computers with their poorly considered death-cult Malthusianism.

Z80 chips are way more common than even x86, by merit of them having proliferated in embedded microcontrollers.

By accessibility I didn't really mean "commonality in terms of numbers"; I mean that they are generally easier to configure and work with (on top of being readily available). x86 PCs and ARM-powered mobile devices are very plentiful, are fairly modular without requiring more sophisticated tools, and avoid a high barrier of entry (i.e. deeper EE experience).

The author thinks that when their imagined Mad Max society comes to be, they're going to be picking up a soldering iron against old Segas and TI-84s. If for some reason you need to use computers in a developmental capacity (since the author's OS has an assembler and an `ed` clone) in a "post-collapse society," I don't think it would be that hard to find some discarded HP desktop or laptop to work on.


> I don't think it would be that hard to find some discarded HP desktop or laptop to work on.

In the short term, you're probably right. Most modern desktops and laptops will hopefully last a decade or two (maybe three).

In the medium term, even these will start to break down. One of the key points of failure will be thermal paste; these modern CPUs run quite a bit hotter than a Z80 or 8086 or what have you, and the thermal paste has a finite lifetime (especially the cheaper stuff used in most mass-produced desktops and laptops). Unless you've got a whole bunch of the stuff stocked up, or you're able to set up an immersion cooling rig (with a coolant that's non-conductive and non-corrosive), these PCs will eventually overheat and die. Flash memory and hard drives both have similarly-finite lifetimes, too, so there goes the vast majority of mass-produced storage media (thankfully flash memory wear is driven by use, so it should be possible to stockpile flash media).

Older chips like the Z80 or 8080/8086 or 6502 tend to avoid the thermal paste problem entirely (by not requiring any sort of heatsink at all), and have simpler memory interfaces (which makes it easier to wire them up to replacement memory, including potentially hand-wound core memory or hand-wired SRAM in a worst-case scenario).

In the long term, even these scavenged Z80s will probably eventually wear out. Hopefully by this time at least some degree of chip fabrication will have been bootstrapped back into existence, in which case replacement Z80s and 8080s/8086s will most likely be possible much sooner than replacement 386s and ARMs.

----

EDIT:

> x86 PCs and ARM-powered mobile devices are very plentiful, are fairly modular without requiring more sophisticated tools, and avoid a high barrier of entry (i.e. deeper EE experience)

Possibly, from a certain point of view. Apples-to-apples, though, this is very unlikely to be true. Z80-based computers tend to be electrically simpler (by a pretty wide margin) than x86-based or ARM-based computers. There's a lot more supporting circuitry between the CPU and memory/peripherals/etc., which means more components that can fail (and be difficult to replace, especially given the tighter electrical and latency tolerances of the average x86 or ARM motherboard).


One way to gain traction (pre-collapse) might be to hold competitions about getting it to run on challenging "salvaged" systems and demonstrating impressive ways to copy it from one system to another.

I think a useful adjunct to this sort of project would be a project that describes a really useful general-purpose CPU that can actually run a lot of advanced software but that's still as simple as possible --- and work out a realistic path for bootstrapping its manufacturing. A stripped-down 32-bit RISC-V, for example, only needs tens of thousands of gates but could run most modern software.

In conjunction with that, it would be good to have an archive of useful software and data in a durable format where access to that data can also be bootstrapped. I'm not sure what that format would be...


I've wondered about this a few times and always started by asking myself what would be left after armageddon.

Android phones. Tens of millions of them. It must be the most ubiquitous computing platform by now...


They have much too rosy an idea of the future. Nuclear war is actually the only thing that will save them! By that time countries will be their own separate cyber-Fascist states, actually cooperating with each other to keep their citizens, by that time implanted with chips in their brains and connected to the central Net, in line. Oh, by the way, they won't be forced to have the chips implanted, they'll willingly line up for it! So they can get their Dominos Pizza, and Amazon deliveries, and Google Map directions all with a thought! Or so they'll be told. Besides, the asymmetry in computing power between the rulers and the enslaved will be laughable. The government will have even (more) powerful quantum based machines. Cobbled together 48K Z80 machines will be insignificant. If they're even tolerated. Which they won't be...

Everyone is comparing it with the Seed Vault[0]. It would be better to have a complete ecosystem for computers, but this is a nice step forward: saving the technology in a small computer gives the basic idea of how to proceed further. If you start storing everything, you need lots of space underground, which doesn't seem feasible.

In my opinion, there should be a system in which all the blueprints for the technology are saved, and that machine should be self-sufficient, running on its own power and memory, and capable enough to educate or at least give a basic idea of the structure. After the collapse, anyone lucky enough to get this technology can improve it and build a new system.

I like the idea of Collapse OS; in a similar manner, we could create a machine which can run any software/OS, or at least supports the most basic and commonly used operations.

Same goes for the books as well.

~Nauman


I had been thinking about similar projects myself. I figure that experience with the z80, 68000, and the 6502 would give someone a platform for hacking for at least the next century. There are some dozens of 68000-like chips in a single car. I/O is as simple as LEDs and toggle switches for the bare necessities, such as bootstrapping other I/O options. Worked for the Altair 8800. From there one could implement morse-like momentary switch input. In these (possibly far-fetched) scenarios, going back to things like ticker tape and printers would make a decent amount of sense. Perhaps spools of wire could be used as "tape" for programs and data, as wire recorders existed before plastic tapes were available. I love seeing how home fabrication is developing, with people making simple silicon in their garage, but there is value to a basic tool chain that doesn't require as much sophistication and supply chains. I truly hope we don't live to see such a world, as the suffering would be immense. That said, I have no idea how complex supply chains can be expected to persist without fossil fuels.

This is a fun idea.

I often think about hoarding a collection of software and media for an end of the world scenario. Then another year goes by and the world is still here.


No law against updating the software collection and the media ...that's what I do.

A bit of a tangent but in the novel "The Windup Girl" they live in a post oil world which essentially ended up more of a total societal restructuring that pretty much resembled what we'd consider a collapse. Nations fell and in some places companies took over. Cities collapsed as the population shrank and technology shifted to focus more on bio engineering to make up for the loss of all the mechanical/electrical technology that ran our world since powering it all got a lot more expensive after the oil was gone.

In one part a high security government installation was described with "ancient" PC's. They couldn't make new ones so they kept whatever they could running and the narrators mind was blown thinking about how much energy they wasted.

I think one of the top priorities for a project like this should be making it easy to implement, considering that practically everything you'd use nowadays to get help getting it working won't exist. No websites or forums or anything like that.


Why not a Rad Hard 8086 https://www.renesas.com/in/en/products/space-harsh-environme... or maybe a Rad Hard RCA CDP1802 https://www.renesas.com/in/en/products/space-harsh-environme... ? Those might survive a hydrogen bomb if not too close.

I've thought about this a little, and I think rebooting vacuum tube technology from scratch is possible more easily. Not trivial, but possible. Once you get reliable triodes, you're on your way.


Maybe a system based on ARM or MIPS @ 1GHz, very common in modem-routers? A Z80 @ 8MHz cannot display even a 640x480 image. It would be wonderful to recycle modems as stand-alone computers.

Why Z80? Is it like the 2nd most common processor type or something? Where would one find a Z80 when scavenging?

It's extremely common, even today, though probably not the processor I would pick (I think 68000 family, perhaps?), but it's probably a reasonable choice. You're going to be able to find it embedded in literally millions of devices and it's simple enough for one person to build a computer around it.

I suspect the argument against modern Intel chips is just their complexity. They need an incredibly complicated and somewhat fragile support infrastructure...you can't build a modern PC motherboard in your garage and you don't expect modern PCs to last decades. They're very common, though, and I suspect there will be plenty of PCs to scavenge, at least through our lifetimes. But, the next generation will probably have trouble keeping them going...I've got a 40 year old C64 still running with nearly all original parts, but I am nearly 100% certain my modern laptop will not last even a decade without repairs using parts that can't be manufactured without modern infrastructure.


> I suspect the argument against modern Intel chips is just their complexity.

Well that, and the fact that we already have plenty of OSes to run on x86(-64).

Looking at arch/ in linux's source:

  alpha  avr32     frv      Kconfig  microblaze  openrisc  score  um
  arc    blackfin  h8300    m32r     mips        parisc    sh     unicore32
  arm    c6x       hexagon  m68k     mn10300     powerpc   sparc  x86
  arm64  cris      ia64     metag    nios2       s390      tile   xtensa
I'm surprised that it doesn't have support for Z80 if it's so common.

I'm also surprised that I can't see mention of Z80 in GCC's documentation.


The z80 is an 8-bit processor released in 1976. It's not an appropriate target for Linux for many reasons, including: no processor support for process isolation, massively divergent hardware setup in different applications, lack of processing power, lack of a modern C compiler, etc. I don't think any of the mainstream Linux ports run on anything less than a 32-bit processor, although there are some fringe ports to 16-bit systems.

I'm somewhat surprised there's no Z80 support for GCC, I recall running a gcc for Motorola's 68HC11 which is a similar processor. That said, most general purpose C compilers are a bad fit for 8-bit processors; you really want to write assembly for these small systems to ensure your code is compact and fast; it's much too easy to write code in C that will be significantly slower than if well written in assembly because of limited memory indexing modes or lack of registers. It's probably possible to carefully constrain your C usage to encourage reasonable assembly output, but then you're not going to be able to use existing C code. You won't have that much code that fits in your roms anyway, so you may as well dig in and write assembly.


No 6502, either. There aren't many 8 bit CPUs that are considered a reasonable platform for any UNIX. Which is why I think I'd prefer 68000 family. It's "modern enough" to use with modern compilers and operating systems, but simple enough for a one man garage shop to build/maintain a computer that uses it. I guess earlier 80x86 chips could fit that bill, too, but 68000 has a better assembly language.

The 68000 still did not have an onboard memory-management unit, though.

That was only introduced with the 68030, which is roughly analogous to and contemporaneous with the Intel 80386.

The 68040 added an onboard FPU, like the 80486. (And like the 486SX, the 68EC040 had the onboard FPU removed again.)


>I'm surprised that it doesn't have support for Z80 if it's so common.

That's because it's an 8 bit computer, my dude. Back in the old days, when "the internet" was still a military project, and you'd phone up your local BBS at 2400 baud on a POTS with suction cups; that's all the little people had access to as recently as the early 80s. And as other people said, there are apparently many of them around, and they run on shit power supplies.

It's a cool idea, but obviously it requires both cheap and dirty hardware implementations and a paper manual. Pretty sure "hacking" will be low on the hierarchy of needs in the event of apocalypse. Also pretty sure something like CP/M would be more useful. I know where there are CAMAC crates with Z80/CPM punched card/paper tape readers that would probably do great in a post apocalyptic environment.

http://www.cpm.z80.de/


The Z80 can't run Linux due to the lack of an MMU. (Though it's possible to get around that through emulation: see http://dmitry.gr/?r=05.Projects&proj=07.%20Linux%20on%208bit for example.)

There is Fuzix.

[1] http://www.fuzix.org/

And Symbos.

[2] http://www.symbos.de/

I wonder if he knows about them?


TI calculator. Old school game boy.

Dunno if this counts because I guess it's the "eZ80" but it looks like TI still makes one:

https://education.ti.com/en/products/calculators/graphing-ca...


The TI-83 and TI-86 series calculators were Z80 based.

Apparently they discontinued those in 2004. Huh.

Maybe the m68k would be a better target? Actually, if the OP is serious about this project, I would write base kernels for Z80, m68k, ARMv7, and RV32I. The last one isn't widely available, but it has the advantage of being both a modern architecture and one with an open spec for how to construct it from scratch.


eZ80 in the TI-84 Plus CE (still manufactured) is backwards compatible with Z80 code.

https://en.wikipedia.org/wiki/Zilog_eZ80

