
Google Reader kept the content of RSS feeds cached forever, which made it the last surviving record of a huge number of dead and deleted blogs. The Archive Team has spent the last month or so pulling those blogs out of Reader to serve as a permanent archive. They posted on HN a few days ago asking for some last-minute help, and managed to archive 46.23M feeds.

Check out their efforts here: http://www.archiveteam.org/index.php?title=Google_Reader




My former startup Twingly (http://twingly.com) has hundreds of millions of blog posts stored (everything collected since 2006) across 128 MySQL shards with a unified query interface. The last few months of data are indexed and searchable for free on their website, but the entire archive is kept forever.
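
For the curious, a "unified query interface" over that many shards is essentially a routing function in front of the connections. A minimal sketch of the idea in Python (the hash-on-blog-id scheme, the DSNs, the posts table, and the connect() helper are illustrative assumptions, not our actual design):

    import hashlib

    NUM_SHARDS = 128
    SHARD_DSNS = ["mysql://shard%03d.example/posts" % i for i in range(NUM_SHARDS)]

    def shard_for(blog_id):
        # Hash the blog ID and map it onto one of the 128 shard DSNs.
        digest = hashlib.md5(blog_id.encode("utf-8")).digest()
        return SHARD_DSNS[int.from_bytes(digest[:4], "big") % NUM_SHARDS]

    def fetch_posts(blog_id, connect, since=None):
        # connect() is a caller-supplied helper returning a DB-API connection;
        # the posts table and its columns are illustrative, not a real schema.
        conn = connect(shard_for(blog_id))
        sql = "SELECT url, title, published FROM posts WHERE blog_id = %s"
        params = [blog_id]
        if since is not None:
            sql += " AND published >= %s"
            params.append(since)
        with conn.cursor() as cur:
            cur.execute(sql, params)
            return cur.fetchall()

The point of keying everything on the blog ID is that all posts for one blog land on the same shard, so a typical query only ever touches one machine.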


That's great.

However, Archive Team has uploaded all the data they found (at least 46.23M feeds) to the Internet Archive. That means it's public for everyone to mine and use.

I'm not trying to belittle Twingly here - but their "last few months of data" isn't really comparable to completely free and public data that is kept forever.


Would you donate your data to the Internet Archive?


Perhaps there could be a continuous rollover, with all data older than five years made available through the Internet Archive. I'm no longer affiliated with Twingly, but I know them very well, so I can make a proposal! It would be a great idea, and for Twingly it could mean increased brand recognition.
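
Concretely, the rollover could just be a periodic export job run against each shard. A rough sketch (the five-year cutoff, the schema, and the JSON-lines output format are assumptions for illustration):

    import datetime
    import json

    CUTOFF_YEARS = 5

    def rollover_shard(conn, writer):
        # Stream every post older than the cutoff from one shard to `writer`
        # as JSON lines, ready to hand over to the Internet Archive.
        cutoff = datetime.datetime.utcnow() - datetime.timedelta(days=365 * CUTOFF_YEARS)
        with conn.cursor() as cur:
            cur.execute(
                "SELECT blog_id, url, title, published, body "
                "FROM posts WHERE published < %s",
                (cutoff,),
            )
            for blog_id, url, title, published, body in cur:
                writer.write(json.dumps({
                    "blog_id": blog_id,
                    "url": url,
                    "title": title,
                    "published": published.isoformat(),
                    "body": body,
                }) + "\n")

Run that across all 128 shards on a schedule and you'd have a steadily growing public dump without touching the live, searchable index.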


Or Common Crawl so other people could actually download and use it?


I didn't realise you couldn't download and use the data from the Internet Archive. If that's the case, it was pretty silly to back the feeds up to them, and I'm a bit annoyed to have contributed. I'd like them to be available for everyone to download, analyse, plug into their reader, etc.



You can download from the Internet Archive. The GGP is talking about Twingly, and the discussion is about integrating their data with the Archive Team's.


For anything substantial (like, say, their actual crawl), they'll only do it on a case-by-case basis with a rather restrictive license, and you have to drive up there and plop down the machines to copy it onto.



