
Ask HN: How to preserve webpages locally for later reading? - mark_edward
I wanted to write a little script to preserve webpages at URLs for later reading, and I was wondering about the most high-fidelity and space-saving way to do this. Sometimes I save pages as PDFs to get that kind of fidelity; other times I save them as HTML. The problem is that when I save pages as HTML, I sometimes get the actual page plus a folder structure that I don't really understand.

Is the save-as-HTML option higher fidelity? And if so, where can I go to understand the structure of what I download?

EDIT: Thanks for the pointers, they're very helpful. I'll use HTTrack for anything wget breaks on.
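(For context: the folder you get from "Save as HTML, complete" is the page's assets, such as images, CSS, and scripts, saved as sidecar files next to the HTML, which is rewritten to point at them. Below is a minimal, illustrative Python sketch of how such sidecar files could be folded back into one self-contained file by converting local `<img>` references to base64 `data:` URIs. The file names and the regex-based approach are assumptions for illustration; it only handles double-quoted, relative `src` attributes, not a full archiver's job.)

```python
import base64
import mimetypes
import re
from pathlib import Path

def inline_assets(html_path):
    """Rewrite <img src="..."> references that point at local sidecar
    files (e.g. the "page_files/" folder that "Save as HTML, complete"
    creates) as base64 data: URIs, yielding one self-contained file.

    Sketch only: handles <img> tags with double-quoted relative src
    attributes; absolute URLs and missing files are left untouched.
    """
    html_path = Path(html_path)
    html = html_path.read_text(encoding="utf-8")

    def to_data_uri(match):
        src = match.group(2)
        asset = html_path.parent / src
        if not asset.is_file():
            return match.group(0)  # leave absolute/missing refs as-is
        mime = mimetypes.guess_type(asset.name)[0] or "application/octet-stream"
        payload = base64.b64encode(asset.read_bytes()).decode("ascii")
        return f'{match.group(1)}data:{mime};base64,{payload}"'

    # [^":]+ deliberately excludes ":" so http(s)://... links are skipped.
    return re.sub(r'(<img\b[^>]*\bsrc=")([^":]+)"', to_data_uri, html)
```

Running it on a saved `page.html` returns the HTML with its local images embedded, so the sidecar folder can be discarded; in practice you'd extend the same idea to CSS and scripts.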
======
aaronhoffman
I've used Evernote web clipper.

Also see this: [http://superuser.com/questions/14403/how-can-i-download-an-entire-website](http://superuser.com/questions/14403/how-can-i-download-an-entire-website)

------
apancyborg
You can use [https://documentcyborg.com/](https://documentcyborg.com/); it does a good job at it.

