
Chrome has the option to export all bookmarks to an html file, which I think can be given as an input to wget.
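
Something like this ought to do it (untested, and "bookmarks.html" is just whatever name you gave the export):

    # parse the export as HTML and recursively fetch every link in it
    wget --force-html -i bookmarks.html -r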

Now if I understand that correctly, won't it recursively download every web page and all its links 5 levels deep? That could get quite enormous if even a few of the bookmarked pages have lots of links...




You will get some duplicates of Kevin Bacon's homepage, but it should be fine otherwise.


Also around 500 papers about number theory


It won't traverse across domains unless you pass an explicit argument (-H/--span-hosts), so it won't go too crazy. With --no-parent (-np) it also won't backtrack up the directory tree, so you won't end up with a complete copy of a blog if you've only bookmarked a single article. You can, of course, reduce the max depth (-l) to 1 pretty safely, if you don't think you'll need more than the specific bookmarked page.
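
So something like this keeps things contained (flags from memory, so double-check against the man page):

    # -l 1:  per the above, just the bookmarked pages themselves
    # -np:   never ascend to parent directories
    # -p -k: also grab CSS/images and rewrite links for local viewing
    # no -H, so wget stays on each bookmark's own domain
    wget --force-html -i bookmarks.html -r -l 1 -np -p -k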

I use that for programming language documentation - for example I'll hit the root page of Python's standard library documentation, and get everything I need locally.
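
For the docs case that's roughly (again from memory; depth is left at wget's default, which has been enough for me):

    # mirror the stdlib docs for offline reading; -np keeps it under /3/library/
    wget -r -np -p -k https://docs.python.org/3/library/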



