May I suggest crawling the links and hosting a mirror? Maybe IPFS? We never know when a link will break in today's web.
Now if something is on the internet and I think I might want it later, I'll save a copy. I just need to find a better way to organize the information than nested folders of HTML and text files.
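For what it's worth, here's a minimal sketch of what I have in mind instead of nested folders: save the page locally and keep a flat JSON index alongside it. The paths and index layout are just placeholders I made up, not any real tool:

```python
# Sketch: save a copy of a page and record it in a flat JSON index,
# rather than relying on nested folders of HTML/text files.
# Paths and index layout are illustrative only.
import json, hashlib, datetime, pathlib, urllib.request

ARCHIVE = pathlib.Path("archive")
INDEX = ARCHIVE / "index.json"

def save_copy(url: str, note: str = "") -> None:
    ARCHIVE.mkdir(exist_ok=True)
    html = urllib.request.urlopen(url).read()
    name = hashlib.sha1(url.encode()).hexdigest()[:12] + ".html"
    (ARCHIVE / name).write_bytes(html)

    entries = json.loads(INDEX.read_text()) if INDEX.exists() else []
    entries.append({
        "url": url,
        "file": name,
        "note": note,
        "saved": datetime.date.today().isoformat(),
    })
    INDEX.write_text(json.dumps(entries, indent=2))

if __name__ == "__main__":
    save_copy("https://example.com", note="example page")
```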
The subject goes to the link text, and the body to the URL.
A bit hacky, but in case someone is interested: https://github.com/6uhrmittag/bashblog
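To make the subject/body mapping concrete, here's a rough sketch of the idea as I read it: the subject line becomes the link text and the body holds the URL, so each message turns into a one-line markdown bookmark entry. The function and file names are made up for illustration and are not bashblog's actual interface:

```python
# Rough sketch of the subject -> link text, body -> URL mapping.
# Names are invented; this is not how bashblog itself is implemented.
import datetime, pathlib

def bookmark_from_message(subject: str, body: str,
                          outdir: str = "bookmarks") -> pathlib.Path:
    url = body.strip().splitlines()[0]          # first line of the body is the URL
    today = datetime.date.today().isoformat()
    path = pathlib.Path(outdir) / f"{today}-bookmarks.md"
    path.parent.mkdir(exist_ok=True)
    with path.open("a") as f:                   # append a markdown link entry
        f.write(f"- [{subject.strip()}]({url})\n")
    return path

bookmark_from_message("Interesting article", "https://example.com/post")
```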
Also, as an aside, I thought the theme looked familiar - I use the same underlying theme but have customised it a bit, cool!
Here's the repo: