As I've been playing with Raspis and Beaglebones and stuff lately, it's been driving me nuts that EVERYTHING I do needs to be apt-gotten off the internet; the base image doesn't even include basics like screen/tmux.
If those repos are inaccessible for any reason, I have a bunch of hardware that's very hard to do anything useful with.
I know there are such things as apt-caches and squid caches and stuff, but I could really use a thing that goes through every apt-get I've ever done and the top 50,000 packages on github and stuffs 'em all onto an SD card and shows me how to use them from my command line.
OP mentions this as a future direction for the project, but I think it's one of the most important.
squid-deb-proxy can do most of this - it fetches and locally caches lists and packages. Clients install squid-deb-proxy-client which uses multicast-DNS to discover the local proxy.
Packages are fetched and cached by the proxy first time they're requested and thereafter served locally (subject to lifetime/space etc. squid configuration).
There are ways to pre-populate the cache but you're always going to have situations where package updates have to pull from remote archives.
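If you'd rather not rely on the mDNS client package, you can also just point apt at the proxy by hand. A rough sketch (the address is made up, and 8000 is squid-deb-proxy's usual listen port if memory serves):

  # one-line apt config pointing at the LAN proxy (IP is hypothetical)
  echo 'Acquire::http::Proxy "http://192.168.1.10:8000";' \
    | sudo tee /etc/apt/apt.conf.d/30squid-deb-proxy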
apt-mirror is designed for a similar purpose. It lets you create a subset of, or an entire mirror of, one or more upstream archives.
Clients can then be pointed at the local mirror which could be on the same host or LAN.
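Roughly what that looks like for a partial Raspbian mirror (suite and components here are just examples; adjust to whatever your boxes actually track):

  # write a minimal /etc/apt/mirror.list, then run apt-mirror
  printf '%s\n' \
    'set base_path /var/spool/apt-mirror' \
    'deb http://raspbian.raspberrypi.org/raspbian/ buster main contrib non-free rpi' \
    'clean http://raspbian.raspberrypi.org/raspbian/' \
    | sudo tee /etc/apt/mirror.list
  sudo apt-mirror   # pulls the packages into base_path
  # serve base_path over HTTP and point clients' sources.list at it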
Generally, 'packages' on github will be source-code only so you'd need a quite complex build server, or at least per-package specific links in your local cache rules for pre-built binaries.
> it's been driving me nuts that EVERYTHING I do needs to be apt-gotten off the internet,
One solution is to do it once only: grab the base image, install all the packages you need, and then make an image of it so you don't start from scratch again next time.
But next time I'll be doing something different. That's like saying "the only internet you need is that which is already in your browser history".
No.
What I want is something closer to Kiwix already on an SD card so even if everything is down, I can read about something I've never been interested in before. But for software.
This is my plan (in action) for an offline GPS nixie tube clock. I'm not sure what the right choice of options is. Many are nice-to-have features for people like me, who would be the most likely to want a clock like this (SNMP to the host for drift, jitter, and temperature MRTG logs), but they come at a mild cost in security.
Will those who receive this bother to connect it to networks? Should I even enable network/GUI for those who want to tinker easily? How should the password be handled?
I'm considering two extremes:
1. A raspbian lite headless system with all unnecessary peripherals disabled and no network features.
2. A full installation, GUI and all with HDMI and network enabled so it's easy for people to play with it if they want.
I'm leaning towards 2 because I think it would be nice to have a clock that auto-updates leap seconds and the tzdata db. If I go for 1, that opens up the possibility of using a Raspi Zero (non-W) for real BOM and power savings.
As for the password: I'm thinking of having something short and simple other than "raspberry" that the user can/should change. This seems like standard practice with many enterprise systems.
>the base image doesn't even include basics like screen/tmux.
Easily arguable that the RPi isn't made with such use cases in mind. I'm all in favor of removing unnecessary bloat if only a small percentage of the user base is ever going to use it, especially when the software is readily available.
>If those repos are inaccessible for any reason, I have a bunch of hardware that's very hard to do anything useful with.
How so? They are just Linux boxes, you can just download the source code and compile the binaries you need. Pre-built packages are not necessary for functional OS.
> How so? They are just Linux boxes, you can just download the source code and compile the binaries you need. Pre-built packages are not necessary for functional OS.
What about dependencies? If you have the internet access required to download the source code I'd say you'd be better off just using the repos.
> This is precisely what every linux distro does. Nix isn't providing novel functionality for bootable OS images.
None of them provide native, single-source-of-truth declarative configuration that is easy to reason about, pure, and guaranteed to deliver sane results every time (vs. something managed via a classic CM system). Oh, also one that is symmetrical to the way the distribution itself is built and managed.
> But a bigger question for the ergonomics of NixOS: Are NixOS and Nix prebuilding for ARM now?
> None of them provide native, single-source-of-truth declarative configuration that is easy to reason about, pure, and guaranteed to deliver sane results every time (vs. something managed via a classic CM system).
Firstly: the Nix language isn't pure. And neither are some base Nix library functions.
Secondly: You may find it easy to reason about. Many of us have not had that experience. Trying to do work as a developer, I felt it was absolutely miserable the instant I needed to depend on a new package or a new runtime. Every language had slightly different conventions and rules. You had to relearn how any specific package worked to integrate it with another, because there often wasn't sanity. And if you DID need to somehow interface with something outside of Nix (say, a vendor binary not in Nix) you had to use an unreliable environment hack.
And of course, the tutorials and docs didn't actually cover most of the concerns folks will inevitably have about how to add new stuff, beyond a trivial C executable.
In one case, after spending a week working out how to add a package to enable a Haskell binding to said package correctly, I submitted package updates that took MONTHS to propagate into the main repo, so I had to start pushing my fork of the nix repo from machine to machine via github on my own to manage multiple machines. It was pretty ridiculous and I regretted my choices.
I like the Nix philosophy. I respect a lot of the people on the project. But I am not a fan of the "it is all fine and well-baked and I'm sure you can use it too" approach a lot of Nix proponents decide to take.
You could absolutely arrive at a solid installable image for ANY major Linux distro.
I really like the idea of NixOS and Guix and tried NixOS for half a year on my laptop. Then I changed back to ordinary package based distributions. (Arch, Debian and Gentoo)
The issue with NixOS for me was probably quality control. When I did an update and, instead of fetching files from the binary cache, it started building stuff on its own, I could be reasonably sure that some build error would happen. Coupled with sparse documentation, I was very often at a loss as to how to fix it and just had to remove the package for a while and try again a couple of days later.
Another point is NixOS's path mangling... Do we really need that? Can we not use namespaces and OverlayFS etc. to let each process assume it has a normal FHS file hierarchy, while in reality its 'root' is cobbled together from multiple package installation directories? Instead of patching the paths of every package, letting the kernel do the path resolution seems less intrusive.
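For what it's worth, a read-only union like that is already expressible with plain overlayfs. A sketch, assuming hypothetical per-package trees under /pkgs/* each laid out as usr/bin, usr/lib and so on:

  # union-mount several per-package trees into one FHS-looking root (sketch)
  sudo mkdir -p /mnt/fhs-root
  sudo unshare -m sh -c '
    mount -t overlay overlay \
      -o lowerdir=/pkgs/tmux:/pkgs/ncurses:/pkgs/glibc /mnt/fhs-root
    chroot /mnt/fhs-root /usr/bin/tmux
  '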
You're wrong about builds in NixOS: the pure functional approach makes it easy to guarantee that a build that passed in CI will also pass on your machine.
In theory you're right, but somehow changes made it into their channels that weren't cleared by the CI, and therefore the binary caches didn't have the artifacts in them and I had to build them myself, which failed.
So in practice updating a Gentoo system is more reliable than updating NixOS.
Yes, software accessibility is an issue in the most pronounced disaster scenarios. But anybody who looks at a device of such exquisite craftsmanship, which is evidently already set up to function, and frets about downloading software after some apocalyptic scenario has missed at least half the point.
Yep, big problem with the repo model that Linux insists on using for everything. Offline is a 5th class citizen because of course everyone lives in SV or a university dorm.
I see the repo model as the closest we've come to a solution. Not only are there mirrors in case the main repo drops; because all software is in one repo, it's extremely easy to cache/sync/download. Offline mirrors are even an officially supported use case with many package managers. Have enough space? Mirror the whole thing. Not? Select only the interesting package groups. Missed one? You can probably get it from another machine it's installed on.
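Even without a full mirror, grabbing one package plus its dependency closure for an offline box is pretty painless. A sketch (apt-rdepends is its own package, and its output may include virtual packages you'll have to filter out):

  # on a machine with internet: download a package and its recursive deps as .debs
  apt-get download $(apt-rdepends tmux | grep -v '^ ')
  # on the offline box, from the directory of .debs:
  sudo dpkg -i ./*.deb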
Want to have an offline copy of software on Windows? Go download all of the installers from their respective sites. Want it to constantly update? That's gonna be about 100 lines of Python code. And better make sure they're the "offline" installers, not those idiotic stubs that just download the latest version from a server. Also don't forget about all of the 20 different VC++ runtimes that are sometimes not packaged in the installer.
I have a feeling it would be even harder on macOS.
(all of this is of course ignoring the legal issues of distributing those files - which is generally not an issue on Linux)
All software is most definitely not in one repo. If it were, people wouldn't have to use PPAs or compile from source and deal with all the problems those things cause.
> better make sure they're the "offline" installers, not those idiotic stubs that just dl the latest version for a server
Oh, you mean the ones that aren't just acting like apt.
> I have a feeling it would be even harder on macOS.
Depends, but MacOS software that uses Application Bundles should be fine. Even a lot of Windows software works fine if you just copy the installer contents to a directory. Self-contained application directories (or single files) are an old as dirt concept that Linux communities never got behind, preferring instead to use overly complicated schemes like package management that come with a bunch of their own problems. So many, in fact, that people are now often distributing applications in Docker images.
I had a feeling PPAs would come up, but I've never had more than maybe 7 PPAs on any of my Ubuntu machines, which is still better than each program coming from a different site and with a different installer. Including PPAs in caches/backups is trivial.
The way I see it there are 2 different use cases here:
1. You want a few individual apps kept safe on a USB stick in case you need them offline - installers on Win, AppImage on Linux
2. You want a constantly updated cache of the programs you use in case of the apocalypse - pain in the ass on Win, trivial on most Linux
Windows only seems superior here because no. 2 is barely possible, so everyone has gotten good at no. 1.
Oh, if only there were more than a handful of applications on Linux distributed as AppImage, and file browsers gave them any support at all, life would be so much simpler. The Linux community hates the desktop so much that they completely ignored an embedded ELF icon standard and have consistently poo-pooed every non-repo way of dealing with applications ever.
> You want a constantly updated cache of the programs you use in case of the apocalypse
Why would I want that? So I could discover that something I rely on was broken by a recent update only after I have no ability to do anything about it? Well, let's assume so. On Windows this is relatively simple: just keep a copy of Program Files and most applications will still work fine. It's not ideal, but I'll take it over package managers not even being able to install an application to a different disk.
Applications rarely copy dlls to the system directory these days, as that led to horrifying DLL hell just like Linux has, where conflicts abound unless some central authority carefully manages everything. I routinely run about a hundred different applications portably from a thumb drive. Try that shit with Linux (it only works with AppImage!).
User profile and registry are for settings. You can easily keep backups of those as well, but it isn't really necessary unless you have highly tweaked configurations or something.
FWIW, provided you have the kernel and firmware, basically everything can be built from source.
I've built most of the base repository of Arch Linux on my RPi4 without cross compiling. I'm buying a few more so that I can go 'full Gentoo' and pretty much rebuild the world.
It might feel like the ARM packages are somehow special, but really only the blobby bits like the RPi firmware are. Everything else is just your bog-standard armv7/aarch64 ELF.
So yeah, back up a GCC binary or bootstrap, I guess? I can probably email you a tmux binary in a pinch, plus I have a full local mirror of the repo for the apocalypse? :P
Debian provides a physical artefact containing all the packages that could be used without internet access. I don't know whether this also works for Raspbian: https://www.debian.org/CD/
This is why projects like yocto and buildroot exist.
They help you maintain a stable software stack over time.
And generate a fully contained software image that can be flashed without needing the internet afterwards. (All downloads happen during compilation)
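Roughly, the Buildroot flow for a Pi looks like this (defconfig names vary a bit between releases, so treat this as a sketch):

  # grab Buildroot from buildroot.org, then:
  make raspberrypi4_defconfig
  make menuconfig   # bake in whatever extra packages you want on the image
  make              # all downloads happen here; the result lands in output/images/sdcard.img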
Thank you for filling in a gap in my understanding!
I'm pretty severely on the noob end of the scale when it comes to software, so I don't know if maintaining a new distro for myself would make sense, since all the tutorials I'm following assume I have Raspbian and all its built-ins available to me.
But maybe in the post-apocalypse, some hero with a prebuilt Rasp-yocto will rescue all our useless boards.
How often are the repos inaccessible? I've come around to the opinion that, unless you have a very specific use case, networking is an essential element of any (Linux) computer system. Once you have that, you can benefit from a world of free software, with updates all handled automatically.
Personally I find it almost magical that I can install and update almost any software I need with a short command.
I don't think I've ever seen the rpi apt repository offline, so I'm not sure why you're so worried. Embedded computers tend towards lean rather than heavy and that is the mindset you're going to encounter.
This site cannot be viewed without third-party JavaScript enabled (it shows a blank page on both Firefox and Chrome), and this is happening more and more on HN links. I think it's pretty sad that so many websites are adding a client-side dependency on third-party code just to view the website. In this case, this doesn't seem intentional, as the code is full of <noscript> tags.
I also cannot begin to understand how a <body> tag's class attribute can take 4400 bytes. In what kind of situation do we need to apply 146 CSS classes to the <body> tag?
It's a squarespace site, and this particular behavior is pretty universal to squarespace sites, near as I can tell.
Since it's due to the squarespace, I wouldn't really characterize this as "third-party" javascript per se, but I agree it's annoying the page needs javascript to even render. Boo on squarespace.
If you don't want to enable all third-party scripts, try uMatrix (https://chrome.google.com/webstore/detail/umatrix/ogfcmafjal...). It has very fine-grained control over what assets you allow from where (it's why I knew offhand this was squarespace). A warning though: it's got a bit of a learning curve, and depending on how restrictive you want things, you will probably end up spending a fair amount of time un-breaking the internet.
Needing third-party scripts isn't necessarily evil in my mind though--aside from squarespace-like cases where a page loads scripts from the underlying platform (squarespace, or custom domains on top of medium), the other common case I see is loading scripts straight from cdnjs or similar. Is it really evil or insecure to load jquery from cdnjs?
Strange, it works for me on an ancient version of Safari (9.1.3) that's kitted out with ghostery and JS blocker. Most modern sites don't work on it, but this one did.
That's probably because that version of Safari ignores this little bugger:
.site.page-loading { opacity: 0 }
I used the developer tools in Firefox to disable that one and the page was instantly viewable without JavaScript.
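If you hit this kind of site a lot, a one-line user style (Stylus, userContent.css, or similar) flips it back permanently, assuming the class names stay the same:

  .site.page-loading { opacity: 1 !important; }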
That's right, the site uses CSS to hide all the content and then presumably re-enables it somewhere in the gobs of JS, maybe after it's loaded whatever other analytics/tracking crap there is. Absolutely vile.
People are rightly complaining about the accessibility (or lack thereof) of the content --- it needs to be nothing more than a static page, and in fact would be perfectly readable as such, if it weren't for one line of CSS that hid the content unless JS was run.
My main gripe is that JS is required to view the website. It doesn't work on my mobile phone (which is rather old, I admit), and it takes a huge amount of CPU time to load on my X60.
I really like that I can still use HN on this phone, but it's more and more frequent that I cannot open the links themselves.
This is why Hacker News, despite all its problems, is such a joy. There are people doing some amazing stuff that I can learn so much from (while trying not to think too hard about how lazy I am).
It's not only a cool project, but it's really well photographed and documented. Thanks for posting.
Projects like this remind me of why we need to use public money for space exploration. The extreme constraints on space exploration drive new innovations and ideas that can flow down into our everyday lives.
We used public money to go to the moon and our government recorded over all of the video footage. Can you name a single piece of video footage that is more valuable than that which was recorded on the Apollo missions? I can't. Governments are either incompetent or corrupt - usually both.
The video footage may be inspiring and I agree that it's a huge shame that it was erased but it really is of very little scientific value compared to the other research and data gathered on these missions.
I genuinely envy the creator, as in "envy is the sincerest form of flattery". I love everything about this: that the maker perceived an entirely legitimate use-case, that he had the skills necessary to assemble off-the-shelf components into a device with such a professional finish, and that he shared so much detail with us. It's the kind of project I'd support on Kickstarter for the twin delights of helping somebody create something truly niche but exquisite and the thrill of having such a unit myself.
Thank you! I'm new here but have more stuff in the works. Right now I'm mid-project on some 3D printer enclosures but I've got some more stuff planned in the months ahead.
I really like this project! But I speak from some authority here, having fully integrated a wireless and wired network and charging station into a carry bag for work use:
That Netgear switch will cut through your onboard battery like a bullet. You'll absolutely need a larger battery if you flip that switch on.
You can find crummy little 100mb switches on Aliexpress that actually power from micro USB. They're almost but not quite as bad as you think they'll be but for only $8...
OP could use something like a LAN9354 Ethernet switch and build their own switch. Probably needs a third PHY chip to interface with the Raspberry Pi as the Pi doesn't expose raw MII to attach to the LAN9354.
I once worked on a project trying to isolate WiFi signals. We ended up purchasing a metal box with a conductive gasket, and were able to detect wireless signals through the box until we screwed the lid down to spec. Just a point of reference, I've never done EMP work... But I'm not sure how much the copper foil will buy you. Regardless, beautiful build!
If it's grounded you need very little material. The most important factor is the largest gap in the cage, which dictates the longest wavelength that is passed. Look at microwave oven meshes to see a size of gap that blocks slightly above 2.45 GHz.
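For scale: 2.45 GHz works out to a wavelength of c/f ≈ 3×10^8 / 2.45×10^9 ≈ 12 cm, so the few-millimetre holes in an oven's door mesh are a tiny fraction of a wavelength and attenuate it heavily.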
>But I'm not sure how much the copper foil will buy you.
It'll do very little. There is a paper authored by the US Army discussing various EMP protection measures. The minimum reliable protection there is a 12 mm thick (half inch) mild steel container with a lead seal.
This is why if there is ever a nuclear war the vacuum tube equipment will be the only stuff that survives.
A bit OT, but my brother lives by the sea, literally at the top of a cliff. He's a biker, and has owned 5 motorbikes in 10 years - every single one has ended up badly corroded because of the salty sea air, resulting in hefty bills.
After the first bike, he started putting a cover over them, but it didn't help much.
Any ideas on what might help reduce corrosion? (he doesn't have a garage, and building one isn't an option).
For some reason cars seem to be mostly OK there; it's just motorbikes that are badly affected. Obviously the parts that can be painted (the parts that aren't stainless steel) are painted.
Galvanic rust protection requires the entire structure be immersed in an electrically conductive fluid. Works great for boats and hot water heaters, less well for cars and motorcycles.
On the Pi at least, once you've got a card set up you can `dd` it onto another card. Saves a lot of time when rebuilding. I tape them to their respective cases so they're easy to find when needed.
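For anyone who hasn't done it, it's roughly this (double-check the device name with lsblk first; it may be /dev/sdX if you're using a USB card reader):

  # back the card up to an image file
  sudo dd if=/dev/mmcblk0 of=pi-backup.img bs=4M status=progress
  # write it back to a fresh card
  sudo dd if=pi-backup.img of=/dev/mmcblk0 bs=4M status=progress conv=fsync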
I was thinking a lower power USB3 hub might be more useful, could do USB ethernet dongles if really needed, or other options... may not be as high a throughput, but that probably isn't the goal.
Really cool! I especially liked the external EMP-shielding-box. I wonder if it's tested and how would one test those? Put it in a microwave and see if anything breaks?
A simple test would be to stick a few wireless devices in the box and check that they all lose signal once the box is closed. If you want to get fancy, any place that does EM interference tests of consumer goods should have the room and equipment to test the box across a broad spectrum.
My intuition would be that the lid doesn't seal that well as it is. For long-term storage that's easily fixed by putting the shielding on the outside and gluing it shut with copper tape. Maybe you could instead add some copper-lined magnetic flaps that close the gaps.
Put a glass of water in the microwave as dummy-load to prevent high VSWR from reflecting back into the magnetron. Standard procedure any time you're doing dumb shit with a kitchen microwave.
Good point, but it's probably the most accessible test device for high levels of RF flux. Based on a quick Wikipedia tour, though, testing only on the ~2.4 GHz band of a microwave oven would probably not be enough to verify the shielding.
What is this? No offense but this site looks like a concept mock-up. There's no actual information besides general ideas and stock photos, and there's no further links or reading, just an email address.
I can assure you, it's real! I got mine in the mail on Monday!
Caveat: I have not attained reception yet, because of obstacles.
The North America kit comes with:
- Low Noise Block Downconverter (LNB) antenna, capable of Ku-band reception of the SES-2 satellite. Must be pointed very accurately: elevation, azimuth, AND rotated for polarization
- Crappy little tripod, good enough to get started
- An acrylic laser-cut LNB collar to hold the antenna on the tripod, which I immediately snapped.
- A 1 GHz ARM-based board with an embedded software-defined radio.
Basically, the board receives compressed tar files in the data stream and caches them on a secondary microSD card. You access that content with various Linux desktop apps served over a web desktop.
I've got one installed back home in the outskirts of Milan (near Lake Como). It's the sort of thing you don't expect to ever need but that might be useful in the one-in-a-billion chance.
It also receives APRS signals, which can be really handy in situations of truly catastrophic disaster.
I was thinking it could be pre-loaded with tons of relevant media and reference materials. Plus being a general purpose tool. I could see distance sailboat cruisers packing one away, similar to how they would pack some redundant spares of key electronics, or an industrial sewing machine like a Sailrite, for enhanced self-reliance in challenging environments. Pack a 3d printer in there too!
He has to open the case to use it, no? Quoting from the article:
> "I also added cooling vents- the internal Pi 4 has a fan on it, but it needed vents too- so if you look close you can see vents above the connector panel and above the display."
In all the ways that having a known-good / not-hacked working Linux computer with all your diagnostic and recovery tools installed is useful.
Plus it's not reliant on external power or peripherals, and it's very water resistant and environmentally sealed (the main / only benefit over just a laptop).
A raspberry pi doesn't make a good candidate for that, as it boots into a proprietary, closed-source system before loading Linux. It's hard to beat a toughbook, or even a ruggedized tablet, for any such scenario, especially as they are still sealed when in use. The BOM cost for this project is also relatively expensive - I estimate it around $600.
For the keyboard, a random question... does anyone know a real laptop keyboard that's available for general purchase and includes a datasheet on how to interface it? I don't really want to go to the length OP did and build a keyboard from individual switches.
This is really neat, and timely -- I was just watching some of those "buying and opening a Titan II missile launch facility in Arkansas" videos on YT. Seems like a useful addition to one's underground lair.
Isn't the whole point of C that you can compile it for whatever architecture you wish? What programs do you know of that cannot be compiled to run on ARM?
I wonder about the keyboard though -- why ortholinear? I guess if you prefer it...? I find it very unwieldy. Or maybe all the staggered 40% kits out there wouldn't fit?
Is this the new thing now, where everything is going to be a cyber-thing built for a post-apocalyptic world, with bulletproof shielding, waterproofing, and EMP-proofing?
I actually would really like that as the theme for 2020. The perception of an increasingly hostile world pushes designs to reflect rugged and reliable qualities. It would be a breath of fresh air compared to the crappy planned obsolescence of the 2010s. Can’t wait for Apple to create a phone with sharp square corners!
I call this a fake. Look at the photos and focus on the keyboard. What is it missing? The space bar. How the hell do you use a keyboard without a space bar?
Bottom row. On mine I have space left of centre and enter right of centre.
It's a full keyboard; you use the next two keys outward on the bottom row to shift layers for access to all the punctuation. They're actually very ergonomic. See olkb.com for more on the original.
This style of keyboard is called "40%" and, just as you would press shift+1 to input an exclamation mark, they're invariably set up with additional modifier keys (or chorded sequences) to input the values of all the missing keys.
Oh I dunno, it'd take a fair bit of time to vet for personal data .. I think I'll just make it more of a family/friends heirloom installation, and for those 'in the know' about how to get up to the hut. There's a ton of great stuff in the archive, but I don't think it's something of public interest as much as just my personal tastes in interesting shit ..