
I've been working, in a sort of half-assed way, on a novel about a post-tech future (100+ years after an apocalypse), where a handful of people retained some technology by virtue of having stored information about it in a manner that survived, along with a tradition of scavenging and trading and a sort of religion built around maintaining it in a working state. So this is a fun read and thought experiment, even if an 8-bit computer is probably not my highest priority when thinking, "when the shit hits the fan, as seems more likely today than it did five years ago, what do I want to have in my survival kit?"

One of the questions I keep coming back to for such a scenario, and still haven't come up with a great answer for, is: how does someone living in a world without the ability to manufacture a computer still have computers that work 100+ years after the last one was made? Even manufacturing transistors without modern methods is non-trivial. Will a Z80 last 100+ years? I mean, maybe, if it's kept dry and not exposed to anything corrosive. I've got a Commodore 64 that's ~40 years old and still works...so, 100 years seems reachable, but there have to be special circumstances to get to that "post-tech" world (though I guess in a post-apocalyptic world, the value of computers would be seen as minimal for a few years while survival is the only concern, so just forgetting could be enough).




My novel idea (feel free to use it!) is about someone taking a 1980s computer back to the Second World War. It's used by the Allies to decrypt Ultra intelligence and is treated like a kind of holy relic - only a tiny set of "high priests" are allowed near it, fewer still can touch it, and because of its importance they go to extraordinary lengths to ensure it can never be damaged by anything from bombs to power glitches. Think a Commodore 64 in a ridiculous white room.

But the book would be more about the consequences of this - do they eventually take the thing apart and jump-start a silicon chip revolution in the 1950s, or (more likely I think) does the government destroy the machine as the UK government did to the Bletchley machines after WWII, and because there's no ground-up computer theory does it set back computing for decades?


Yeah, no. I've handled parts of Whirlwind[1], a vacuum tube machine from just post-WWII, and the gap from that to a C-64 or any other circa-1980 machine is just too great. They were using discrete point-to-point wiring: resistors and wires soldered to the bases of the vacuum tubes. The Whirlwind was the first machine to use core memory, and its 4K core memory unit is a box about the size of a phone booth. I don't know if PCBs existed before 1950, but if they did, they were certainly single-sided.

So now ask somebody really smart in that technology, like say Jay Forrester[2], who had just finished inventing core memory, to analyze this magic beige plastic box. He could probably recognize that the PCB provided connectivity between parts, but what are the parts, these little flat plastic tiles? I don't think it would be possible to work out from first principles what the functional contents of a DRAM chip are, let alone the CPU. Even if they x-rayed it, supposing they had x-ray tech with enough resolution to resolve a chip, how could they figure out that those little blobs are transistors? Transistors hadn't been invented!

I think they'd have to concede this is "sufficiently advanced" tech, in Arthur C. Clarke's phrase, to be indistinguishable from magic.

[1] https://en.wikipedia.org/wiki/Whirlwind_I

[2] https://en.wikipedia.org/wiki/Jay_Wright_Forrester


I don't buy it; a C64 is simple enough that you could work out the functionality of most components by trial and error and then work back from first principles. You'd measure 5V TTL, and under a scope you'd see the 1 MHz binary signal. From there, the 74-series chips on the board would probably be the first to be identified, simply based on inputs and outputs. And once you did that, and knew that this was a NOR gate or whatever, you'd pop the top off, look at it under a microscope, and start to work back from your knowledge that this had digital logic, and you'd figure out what the _function_ of a transistor was even if you didn't know what it was.
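To make that concrete: identifying a mystery gate is just exhaustively probing its inputs and matching the resulting truth table. A toy sketch in Python (the lambda here stands in for the probed chip; a real probe would toggle pins):

    from itertools import product

    # Known two-input gates, keyed by their outputs for inputs 00, 01, 10, 11.
    KNOWN_GATES = {
        (0, 0, 0, 1): "AND",
        (0, 1, 1, 1): "OR",
        (1, 1, 1, 0): "NAND",
        (1, 0, 0, 0): "NOR",
        (0, 1, 1, 0): "XOR",
    }

    def identify(black_box):
        # Probe every input combination and match the truth table.
        table = tuple(black_box(a, b) for a, b in product((0, 1), repeat=2))
        return KNOWN_GATES.get(table, "unknown")

    # A stand-in for the chip under test.
    print(identify(lambda a, b: int(not (a or b))))  # -> NOR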

The RAMs and ROMs would be fairly trivial to figure out, as well.

You might not learn the manufacturing process -- that really did take a couple decades of material science and physics advances. But the principles of the machine would be clear. And then you could take the knowledge of that and scale it to the electronics components available in the era. You'd definitely have a head start simply knowing that these things were _possible_, and getting a boost on knowing how a computer could be structured.


They can probably figure out that the little plastic things either store data or perform computation, by process of elimination, since wires, resistors, capacitors, and PCBs can all be analyzed for their properties.

Given that knowledge, they can try breaking those pieces of plastic apart to see that it's a housing over some sort of ceramic core. Using spectroscopy and chemistry, you can figure out what that core is made of. Now you know what mix of chemicals allows for really high-density data storage and computation.

Using x-rays and microscopes they can figure out that the little ceramic die has some sort of structure etched on it. Maybe remove tiny pieces to see what different parts/layers are chemically composed of.

Now they know that there's something interesting about certain elements deposited on top of silicon using some sort of etching approach. Early transistor research was already well along (field-effect devices had been patented back in the 1920s), so it's likely they would have made the connection. Given all that, you can start brute-forcing industries and ideas around those materials.


They would see the "© 1982" on a chip and although it would be incredibly futuristic (35+ years in the future!), would at least know it was likely to be created by humans. Whether they could work out how on earth you place such incredibly tiny components onto a sliver of silicon is interesting. If the person taking the computer back in time mentioned the word "photolithography" I suspect they would have been able to make a pretty good guess.


I don't think there would be many copyright dates on the chips. They might think that Texas ruled the world from the TI logo being on everything, though.


Here's a high res picture of the C64 PCB, where you can see the markings on the chips: https://myoldcomputer.nl/wp-content/uploads/2015/11/board-32...

You can see both copyright dates, and plenty of other English text. While in 1940 this would have represented incredible futuristic technology, it's pretty obviously made by humans and not a piece of alien magic. It also has components like resistors and capacitors with markings which would have been immediately obvious to 1940s electronics experts.


I'm pretty sure you're wrong. People are very good at pattern recognition. You don't need to understand the physics to check lots of combinations of inputs and deduce what this black box does.


I would read this book. Fictional [alternative] history is always fun for me to read.


If I may recommend the book "Stars' Reach: A Novel of the Deindustrial Future" by one John Michael Greer, maybe we can see this idea from a different perspective.

From Greer's point of view, the factors that make today's hardware brittle are not technical, but economic. Corporations have to make electronics at a profit, and at a price point that is accessible to the average working-class citizen. This business model would not be sustainable in either a fast-collapse or a slow-collapse scenario.

Instead, in the novel, governments take over the tech industry sometime in the second half of the 21st century and treat it as a strategic resource in their struggle not to be left out in the global musical-chairs game of climate change plus resource depletion. They run it at a loss, and put the best minds they can spare to the task of making a computing infrastructure that is built to last.

By the 25th century, which is when the novel's events take place, humanity has lost the ability to manufacture electronics, but computers built 350 years ago are kept in working order by a cadre of highly trained specialists (most of whom have the skills of a Geek Squad employee, but still). Common people have maybe heard some wildly inaccurate legend about robots or computers. Wealthy individuals are probably better informed but still cannot own one of those at any price. The only computers depicted or spoken about are US government property operated at US military facilities (or maybe there was one at the Library of Congress; I don't really recall).

There's one post-collapse hacker in the novel, a secondary character who is part of the protagonist's crew. The author is not an engineering type and dances around the actual skills of this guy, but I'd say he seems able to use a debugger/hex editor and read binaries. His greatest achievement, though, is to fix and set up an ancient printer and recover documents from a disk that was "erased" but not wiped clean.
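For the curious, the "erased but not wiped" trick is real: deleting a file typically just frees the directory entry and leaves the data sectors intact. A minimal sketch in Python, assuming you have a raw disk image to scan (essentially what the Unix `strings` tool does):

    import re
    import sys

    def carve_text(image_path, min_len=16):
        # Scan a raw disk image for runs of printable ASCII;
        # "deleted" documents usually survive as exactly such runs.
        raw = open(image_path, "rb").read()
        for m in re.finditer(rb"[\x20-\x7e\r\n\t]{%d,}" % min_len, raw):
            yield m.start(), m.group().decode("ascii")

    if __name__ == "__main__":
        for offset, text in carve_text(sys.argv[1]):
            print(f"@{offset:08x}: {text[:72]!r}")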


Why is this dead? I've read the blog of the author in the past and found it to be inspiring, or at least interesting, because his short stories did the "what could be different?" thing very well.

(edit: wording)


IMHO you could be using the same chips for a surprisingly long time. You're likely to need some electronics maintenance, e.g. replacing capacitors, but if you really need to get some computing power in a 2100 post-apocalyptic scenario where there are just a few million people, then scavenging working chips from random gadgets has a lot of potential. E.g. a hard-drive controller or a kid's toy may be running on something that was good enough for a CPU twenty years ago.


Yeah, part of keeping old computers (and synthesizers, another interest of mine) running is re-capping. Modern caps last longer than the stuff from the 70s and early 80s, but longevity still merits consideration. I don't actually know much about the lifespan of modern caps...worth some research.


A Raspberry Pi 4 runs at 1.5 gigahertz. Madness! You're in a post-apocalyptic world and you find someone's drawer of Raspberry Pi side projects: a solar panel, a low-power screen...

... you find a controller ... you find their emulation Raspberry Pi. ... all of a sudden, the world isn't as desolate.


... only to find it won't boot, as the OS was on some kind of SSD storage that died after ~2-3 years without being powered on. :(


Ben Eater's YouTube channel (https://www.youtube.com/channel/UCS0N5baNlQWJCUrhCEo8WlA) shows the step-by-step process of building an 8-bit computer, driving a display, and sending data over a network, all from fundamental components. He does use some manufactured parts like breadboards, timing crystals, and ICs, but it's still pretty cool stuff. Building computers from raw minerals would be pretty tough.


For people interested in this sort of thing, I can recommend the Nand2Tetris Courses[1] (also on Coursera).

They basically walk you through assembling and programming a full CPU from nothing but NAND gates in a hardware description language, and in the second part even adding a compiler and higher-level language to the stack.

1. https://www.nand2tetris.org/
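If you want a taste of the idea before committing to the course: every other gate really does fall out of NAND alone. A quick Python sketch (the course itself uses its own HDL, not Python):

    def nand(a, b):
        return int(not (a and b))

    # Everything below is built from nand() and nothing else.
    def inv(a):
        return nand(a, a)

    def and_(a, b):
        return inv(nand(a, b))

    def or_(a, b):
        return nand(inv(a), inv(b))

    def xor(a, b):
        c = nand(a, b)
        return nand(nand(a, c), nand(b, c))

    # Sanity check: print the full truth table for XOR.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor(a, b))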


You might enjoy some of the Warhammer 40k lore; enough time has passed that technology is often literally indistinguishable from magic. A cult of "tech priests" are the only people who know how to speak to the souls that are inside the machines. It's heavily implied that they're not actually souls, of course, but that's how it's described to most people, as it's easier to understand.


Computers could give you better chances of survival. In a shelter or bunker, say, you can have a control center where you monitor sensors and manage databases. These tasks don't require a computer, but having one saves you massive amounts of time for something else.


The core premise of the story is that the people who have tech have access to a copy of Wikipedia, or some portion of it, which is like a superpower in a post-tech world. Even if it is only used periodically (to prevent wear and tear on the computer), it would still be incredibly valuable to have an oracle that can answer questions like "how do I nixtamalize corn?" or "how do I preserve food without refrigeration?" in a world without grocery stores.

This, of course, presumes libraries are also mostly gone, since you don't need Wikipedia if you have a library.


Well, honestly, I think for that type of knowledge, Wikipedia is actually a very bad starting point. It could have become something like that, but ... I blame Deletionism, and more importantly, the general aversion to "know-how" pages. Yes, you could relearn math from Wikipedia, but I'm not sure there's enough in there for somebody who didn't already know how to be able to recreate halfway-modern antibiotics, or an old CRT TV tube, let alone an SEM, etc.


While I agree that Wikipedia is unnecessarily restrictive in its coverage (particularly with regard to how to do things), I think there's a lot of value there, too...especially in a world that has lost access to anything more in-depth. I mean, I can go to Wikipedia and figure out how ancient peoples cooled and heated their homes, handled irrigation, preserved foods without refrigeration, what plants are edible, what plants grow in what regions (though this would be thrown off by a climate catastrophe), how to render fat for storage, how to make rubber, how to smelt iron, how to build a boat from naturally available supplies, how to make cloth water-resistant, natural remedies, etc. While it's not a "how to" guide for any of these things, if you can read, look at the diagrams, and follow the references within Wikipedia, you can figure it out with some trial and error.

The premise isn't that the folks with Wikipedia can rebuild modern society. It's that they literally can't (even if they had better knowledge resources), but they would still have a survival advantage from having a little bit of the old knowledge. The fact is that if we lose our modern society, we'll never be able to build it up again. We've dug up all of the easily accessible resources already. Scavenging from the good old days is the best any post-apocalyptic society can hope for, as bleak as that sounds.


Now I'm suddenly tempted to write up some kind of program that'll automatically archive not just the Wikipedia pages themselves ¹, but also their citations and external links. Or maybe (probably) someone else has already written such a program.

¹ I vaguely recall Wikipedia already provides some way to download all pages in bulk, but I can't seem to find it (if it even exists anymore, or if it ever actually existed instead of me just hallucinating it)
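A minimal sketch of what such a program could look like, using the MediaWiki `action=parse` API (which really does expose an article's rendered HTML and external links); the output layout, filenames, and User-Agent string here are just placeholders I made up:

    import json
    import pathlib
    import urllib.parse
    import urllib.request

    API = "https://en.wikipedia.org/w/api.php"
    UA = {"User-Agent": "wiki-archiver/0.1 (personal archival script)"}

    def api_get(params):
        # Query the MediaWiki API and decode the JSON response.
        qs = urllib.parse.urlencode({**params, "format": "json"})
        req = urllib.request.Request(f"{API}?{qs}", headers=UA)
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def archive(title, outdir="archive"):
        out = pathlib.Path(outdir, title.replace("/", "_"))
        out.mkdir(parents=True, exist_ok=True)

        # Save the rendered article HTML.
        page = api_get({"action": "parse", "page": title, "prop": "text"})
        (out / "article.html").write_text(page["parse"]["text"]["*"],
                                          encoding="utf-8")

        # The same API lists the article's external links (citation URLs
        # included); fetch each one, logging rather than dying on failures.
        links = api_get({"action": "parse", "page": title,
                         "prop": "externallinks"})
        for i, url in enumerate(links["parse"]["externallinks"]):
            try:
                req = urllib.request.Request(url, headers=UA)
                with urllib.request.urlopen(req, timeout=30) as resp:
                    (out / f"ref-{i:04d}").write_bytes(resp.read())
            except Exception as exc:
                print(f"skipped {url}: {exc}")

    archive("Nixtamalization")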



Nice, thanks!


There might be a place for a WikiKnowHow...


"since you don't need WikiPedia if you have a library."

I have libraries and Wikipedia is still pretty useful. Searchability and portability/mobility would be pretty valuable attributes in this type of scenario.


Or you manage to repair an advanced drone... that can do scavenging in dangerously irradiated areas, for example (or just a common drone that can scout for you). Solar panels will be valuable... but wind is easy to build and generate with as well.

In general, the scenario is that the whole world has broken down but is full of tech. So many machines to get back into working order. Machines beat muscle at scale.


> that can do scavenging in dangerously irradiated areas, for example

Most robots don't work in areas of heavy radiation, because radiation damages electronics.


You wouldn't want to scavenge something from an area which is so irradiated that it fries electronics.


Could a Z80 do these things effectively? Honest question.


Yes, sensor monitoring (movement, heat, wind) is just simple IO handling, and I'm sure managing a SQLite database would be within its capabilities. If a C64 is capable of all of this, then it can be done.
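For what it's worth, the sensor-monitoring half is little more than a polling loop. A sketch in Python rather than anything Z80-flavored, with hypothetical read_* stubs standing in for whatever ports or ADCs the machine actually exposes:

    import time

    # Hypothetical probes; on real hardware these would read an
    # input port or an ADC channel.
    def read_motion():      return 0      # e.g. a tripwire input bit
    def read_temperature(): return 21.5   # e.g. a thermistor reading

    THRESHOLDS = {"motion": 1, "temperature": 40.0}

    def poll_once(log):
        # Sample each sensor, append to a plain-text log, flag alarms.
        motion, temp = read_motion(), read_temperature()
        line = f"{time.time():.0f} motion={motion} temp={temp:.1f}"
        log.write(line + "\n")
        if (motion >= THRESHOLDS["motion"]
                or temp >= THRESHOLDS["temperature"]):
            print("ALERT:", line)

    with open("sensors.log", "a") as log:
        while True:
            poll_once(log)
            time.sleep(5)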

Even chat is possible, for example between two buildings where radio might not penetrate the walls. But sure, at that point, if you can lay down cables, it's simpler to just build a telephone.


SQLite assumes quite a bit more resources than a Z80 has.


There were databases that ran on 8-bit computers. They were not as powerful as SQLite, but they could do the job.


Since the transistors are used as switches, one can use electromechanical ones, aka relays.

There's a nice project page for one here[1], including an in-depth video about it[2]. There's a collection of other relay computers here[3].

[1]: http://web.cecs.pdx.edu/~harry/Relay/

[2]: https://www.youtube.com/watch?v=tsp2JntuZ3c

[3]: https://github.com/Dovgalyuk/Relay/wiki


This reminds me a bit of A Canticle for Leibowitz, which in turn reminds me to re-read it. Thank you.


Yes, definitely. It's just another spin on the same idea 60 years later...a few things have changed since then, so I think there's room for another take. (And, I'm not the first to mine the territory since then.)


Similar to this scenario is Neal Stephenson's Seveneves. Without giving too much of the story away, it's a great sci-fi novel that covers some post-apocalyptic topics, one of which includes a huge regression in PC performance/transistor count due to the major disruption of this fragile supply chain.


You can look at surviving antiques for examples of this. Arguably many antiques are more fragile than a computer, yet they exist over a hundred years later in excellent condition, either through careful planning or through neglect and luck.


So basically Waterworld but for computers instead of water? I like it.


Please continue working on this novel; I would read it. You could even start publishing today with services like Leanpub and get some early backers and feedback.


Jacquard Looms. No electronics required.




