I used to depend solely on the Wayback Machine to automate archiving pages. Now I archive webpages with the Selenium Python package on https://archive.ph/ and https://ghostarchive.org/.
This taught me not to depend on third-party services. I might self-host https://archivebox.io/.
I was daydreaming earlier about what a distributed WARC (or similar) solution would look like, with peering and archiving done by users or by a distributed set of servers: either submitting captures via a browser plugin, or passively sending URLs to servers that do the fetching and archiving themselves (which removes some of the privacy issues).
I think it's everyone's responsibility to make sure the web gets cached, not one org's... especially now that Google has canned the Google cache.
There should be more internet archives, for various reasons, but it doesn't seem like anyone is willing to put in the effort and money involved, let alone take on the legal headaches.
I agree. And I am dismayed that government and academic institutions prefer to dance around the legal issues of archiving (outsourcing the legal risk to the Internet Archive) instead of pushing for legal protections/exemptions for the act of archiving itself.
There's no grownup in the room who will fix this. "You're It". I recommend that you take this opportunity to download 1 favourite old website/article/piece of software from the Archive and rehost it on your own site. Reach out to me if you'd like help with getting started.