The timing of Google getting rid of the Google Cache couldn't be worse, with the ongoing DDoS attacks on (and necessary hardening of) the Internet Archive. Wonder what kind of twisty narrative one could posit about why this is happening?
There have been collaborative computing projects like SETI@home [1] and Folding@Home [2] where unused computing power could be used for productive purposes. Could there be something similar for storage? Software that provides unused storage for Internet archiving? In the best case scenario, we could have redundant backups of the Internet Archive distributed around the world.
archive.org does use torrents, and I have one such torrent lying around in my client, which occasionally connects to peers even though the trackers are currently offline. I suppose a new client would find me and other peers through the DHT. I'd share a magnet link for someone to try, but it's a copyright-ignoring ROM dump archive, so it may not be the best idea to post it here.
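For anyone who'd rather try with a less problematic item: every public item appears to expose its .torrent under a predictable URL, so seeding can start from something as small as this rough sketch (the identifier below is just a placeholder, and the URL pattern is my reading of how archive.org lays things out):

```python
# Rough sketch: fetch the .torrent that archive.org publishes for an item,
# then open it in any ordinary BitTorrent client to help seed.
import urllib.request

identifier = "example_item"  # placeholder item name, not a real identifier
url = f"https://archive.org/download/{identifier}/{identifier}_archive.torrent"

with urllib.request.urlopen(url) as resp:
    data = resp.read()

with open(f"{identifier}_archive.torrent", "wb") as f:
    f.write(data)

print(f"saved {len(data)} bytes -- add the .torrent to a client to start seeding")
```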
It's interesting that torrents may not be the first thing that comes to mind. They have the "PR issue" of being the now seemingly mundane way in which we've been downloading DVD rips for the last 20-odd years. Newer technology like IPFS does a better job of making the cool core of this technology actually sound cool.
I didn’t know that archive.org already has torrents. I guess what we would need, on top of that, is a system for assigning those torrents to new peers.
That already exists. Peers find each other through a distributed hash table which can be bootstrapped from a variety of sources.
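To give a sense of how small that bootstrap step is, here's a rough sketch of a single KRPC ping (BEP 5) sent to one of the commonly used bootstrap nodes; everything beyond the well-known host and port is illustrative:

```python
# Sketch: ping a well-known DHT bootstrap node and print whatever
# bencoded reply comes back. No library needed; the ping message is
# hand-bencoded from {"t": "aa", "y": "q", "q": "ping", "a": {"id": ...}}.
import os
import socket

node_id = os.urandom(20)  # our ephemeral 20-byte DHT node id
ping = b"d1:ad2:id20:" + node_id + b"e1:q4:ping1:t2:aa1:y1:qe"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5)
sock.sendto(ping, ("router.bittorrent.com", 6881))

try:
    reply, addr = sock.recvfrom(4096)
    print(f"got {len(reply)} bytes from {addr}: {reply[:60]!r}")
except socket.timeout:
    print("no reply -- node busy or UDP blocked")
```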
I would say the problem is discoverability and actual deployment.
For suddenly popular files it can mostly be a way to donate bandwidth, because then there might suddenly be a lot of peers. For the vast majority of files, however, there won't be any other peers, and they have to be web-seeded by Archive.org either way.
Then there's the discoverability problem. Ultimately you need something like a magnet link to connect to a swarm.
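In rough form, that piece of glue is just a URI assembled from the info-hash; the hash, name, tracker, and web-seed URL below are all placeholders, not a real item:

```python
# Sketch: build a magnet link from an info-hash, with an optional web seed
# so a lone peer can still pull data over plain HTTP.
from urllib.parse import quote, urlencode

info_hash = "0123456789abcdef0123456789abcdef01234567"  # placeholder v1 info-hash (40 hex chars)
params = {
    "dn": "example_item",                                # display name (placeholder)
    "ws": "https://archive.org/download/example_item/",  # web seed (placeholder URL)
    "tr": "udp://tracker.opentrackr.org:1337/announce",  # optional tracker; the DHT works without it
}
magnet = f"magnet:?xt=urn:btih:{info_hash}&" + urlencode(params, quote_via=quote)
print(magnet)
```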
IPFS sounds pretty awesome on the tin, but when I attempted to dig into it for an hour, I still had no idea how to actually do anything with it. Their usability needs to go a long way before I give it another go. In my experience it's definitely not a two-step process where you download a client and click on a link to start load-sharing an archive.
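From what I could tell, the "minimal" flow is roughly: run the kubo daemon, then ask its local HTTP RPC API to pin a CID, which is still a long way from clicking a link. A sketch, assuming the default API address and a placeholder CID:

```python
# Sketch: ask a locally running kubo daemon to pin a CID.
# Pinning makes the node fetch the data and keep serving it to others.
import urllib.request

cid = "EXAMPLE_CID_GOES_HERE"  # placeholder content id
url = f"http://127.0.0.1:5001/api/v0/pin/add?arg={cid}"

req = urllib.request.Request(url, method="POST")  # the RPC API expects POST
with urllib.request.urlopen(req, timeout=60) as resp:
    print(resp.read().decode())  # JSON naming the pinned CID(s)
```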
I really wish the EU would have their own organisation for creating an internet archive that at the very minimum mirrored IA. This is our history and there's only a single place now that has any significant archive of it. It seems like the EU should have a significant interest in preserving it for generations to come.
Demonising it is more fashionable right now. Everyone I know who contributes to the Internet Archive is right of center enough to be considered a horrible person.
> The INTERNETARCHIVE.BAK experiment has come to a close a number of years ago.
> Much was learned in the process, and many thanks are given to the dozens of people who donated time, space and coding efforts to make the system work as long as it did. A number of useful facts and observations came from the project.
> The Internet Archive continues to explore methods and code to decentralize the collection, to have a mirror running in various ways - these include IPFS, FileCoin, and others. The INTERNETARCHIVE.BAK project also added general mirroring and tracking code to a number of projects that are still in use.
IA called this their Postmortem, but it sounds... intentionally opaque. Also, I'm not sure if this website is affiliated with archive.org, since they say at the bottom of their homepage:
> Archive Team is in no way affiliated with the fine folks at ARCHIVE.ORG. Archive Team can always be reached by e-mail at archiveteam@archiveteam.org or by IRC at the channel #archiveteam (on hackint).
Yeah, it’s a completely [1] separate team (they do run a bunch of archiving projects that end up in the IA / Wayback Machine though). Just wanted to share – it’s sad there isn’t much more info though apart from some code; maybe worth looking into IRC logs?
[1]: On paper, at least; the founder, Jason Scott, seems pretty involved with the IA as well, and I’m not really sure how much the teams intersect.
What we really need right now is a "black hole" for information: a place where you can push stuff in, but retrieving it is impossible until ironclad legitimation can be automated.
Insane that the only examples of me ever doing anything are in printed copies from the 1970s, in the National Archives, where some aunties still believe that the Internet is just a passing fad.
The problem that multi generational projects like this always have is tech debt. Any library/dependency chosen by the previous generation might be unmaintained for decades until it falls through the cracks and someone notices it.
Heritrix, for example, was written in a very old "Java way" of doing things. They also have lots of services that were built in the PHP4 age, with globals by default and stuff like that.
Always keep in mind that whatever you choose is essentially a bet. Over time you'll realize that different language ecosystems have goals that are aligned or misaligned with your project's. Don't choose libraries because of hype; choose them because of maintainability.
I dunno about the state actor hypothesis, but if there is one, it all sounds like Charles Stross's description of a future cold war in Halting State:
> "And that's the twentieth-century model, what they used to call an electronic Pearl Habour. Things have moved on since then. Footnotes inserted in government reports feeding into World Trade Organization negotiating positions. Nothing we'd notice at first, nothing that would be obvious for a couple of years. You don't want to halt the state in its tracks, you simply want to divert it into a sliding of your choice."
Who knows what will appear after the archives are restored?
Heh, yeah. It was probably "the Entity". An AI that subtly nudged human affairs as the opening salvo of its Big Plan. Masquerading as hapless hackers of course hahaha! :)
Hah! As if they need ideas. But that's not the point: how possible is it?
Re your comprehensive edit: I'm totally on board with that tech-choice idea. It's a bet; avoid the fads, pick stuff that's robust (or at least a fit for your possible futures).
I'd say we have to differentiate between human error as an attack surface and software bugs / vulnerabilities as an attack surface here.
Software-wise I wouldn't know where to start, honestly, because the internet archive as a project is so vast [1] that it's hard to get an architectural overview of how the pieces are glued together. Unifying the tech stack seems to have been no concern at all in its development...
But from a pentesting perspective I'd try to find vulnerabilities in the Perl-based services first, then Java, then PHP, then NPM and so on... because older projects tend to have a higher likelihood of being unmaintained or using outdated libraries.