
I used to depend solely on the Wayback Machine to automate archiving pages. Now I archive webpages with the Selenium Python package on https://archive.ph/ and https://ghostarchive.org/.
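
The script itself was never shared (the Reddit link downthread is dead), but a minimal sketch of this approach might look like the following. It assumes archive.ph's submit form exposes a text input named "url" (inspect the live page to confirm), and it ignores the CAPTCHAs and rate limits the real sites impose:

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    def archive_page(target_url: str) -> None:
        driver = webdriver.Firefox()  # any Selenium-supported browser works
        try:
            driver.get("https://archive.ph/")
            # Locate the submission box ("url" is an assumed field name).
            box = WebDriverWait(driver, 15).until(
                EC.presence_of_element_located((By.NAME, "url"))
            )
            box.send_keys(target_url)
            box.submit()
            # Archiving can take a while; wait for the snapshot page to load.
            WebDriverWait(driver, 120).until(EC.url_contains("archive."))
        finally:
            driver.quit()

    archive_page("https://example.com/")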

This taught me not to depend on third-party services. I might self-host https://archivebox.io/.
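
A self-hosted ArchiveBox can be driven from Python too. A minimal sketch, assuming ArchiveBox is installed per its README (pip install archivebox) and using its documented init/add CLI commands (the data directory path is illustrative):

    import subprocess

    DATA_DIR = "/path/to/archive"  # illustrative collection directory

    # One-time setup of the ArchiveBox collection.
    subprocess.run(["archivebox", "init"], cwd=DATA_DIR, check=True)

    # Queue a page; ArchiveBox saves HTML, WARC, screenshot, etc.
    subprocess.run(["archivebox", "add", "https://example.com/"],
                   cwd=DATA_DIR, check=True)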

I was just daydreaming earlier about what a distributed WARC or similar solution would look like, with peering and archiving done by users or by distributed servers: either via browser-plugin submission, or by passively sending the URLs to servers that do the fetching and archiving (which removes some of the privacy issues).
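
As a toy illustration of the second variant (clients submit only URLs; a server fetches and stores content-addressed copies), something like this hypothetical endpoint; every name here is illustrative, and a real peer would write WARC records and replicate blobs by hash rather than keep loose files:

    import hashlib
    import os
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    STORE = "store"  # content-addressed blobs live here
    os.makedirs(STORE, exist_ok=True)

    class SubmitHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # The request body is just the URL to archive; the server does
            # the fetch, so the submitter's cookies and session never leave
            # their machine (the privacy point above).
            length = int(self.headers.get("Content-Length", 0))
            url = self.rfile.read(length).decode()
            body = urllib.request.urlopen(url, timeout=30).read()
            digest = hashlib.sha256(body).hexdigest()  # content-addressed key
            with open(os.path.join(STORE, digest), "wb") as f:
                f.write(body)
            self.send_response(200)
            self.end_headers()
            self.wfile.write(digest.encode())

    HTTPServer(("0.0.0.0", 8080), SubmitHandler).serve_forever()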

I think it's everyone's responsibility to make sure the web gets cached, not one org's... especially now that Google has canned the Google cache.


ArchiveBox v0.8 is adding the beginnings of a content-addressable store for P2P sharing! Stay tuned :)


There should be more internet archives, for various reasons, but it doesn't seem like anyone is willing to put in the effort and money involved, let alone shoulder the legal headaches.

I agree. And I am dismayed that government and academic institutions like to dance around the legal issues of archiving (outsourcing the legal risk to the Internet Archive) instead of pushing for legal protections/exemptions for the act of archiving.

The UK and Portugal are both doing it for domestically published websites.

Would you mind sharing your script?


Thank you. It says: "Sorry, this post was removed by Reddit’s filters."


