

Ask HN: What would it take to scrape & index delicious? - bootload


======
bootload
I'm not the only one asking this question: see @davewiner ~
<http://news.ycombinator.com/item?id=2014341>, this thread by @petercooper
<http://news.ycombinator.com/item?id=2014074>, and this thread by @simonw
<http://news.ycombinator.com/item?id=2014257>. What would it take to capture
delicious? It's an important question on a couple of levels, because:

a) replacement services exist but nobody will pay, e.g.: ~
<http://twitter.com/waxpancake/status/15552773542117376>

b) I remember the effort & fun of reading about how Geocities was
saved <http://news.ycombinator.com/item?id=903567> into
<http://www.reocities.com/> by @jacquesm
<http://news.ycombinator.com/user?id=jacquesm>

c) and this effort, _"Archiveteam! The Geocities Torrent"_ ~
<http://ascii.textfiles.com/archives/2720>: all one terabyte of data
recovered.

~~~
pablohoffman
Someone already wrote a Delicious scraper in Scrapy (Python):
<http://news.ycombinator.com/item?id=2015680>
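For the "index" half of the question, here is a minimal sketch in plain Python (no Scrapy required). It assumes bookmark data in the shape of the historical Delicious v1 XML export (`<posts>` containing `<post href="..." tag="..."/>` elements); the sample data and function name are illustrative, not part of any linked scraper.

```python
# Sketch: build a tag -> URLs index from a Delicious-style bookmark export.
# Assumes the historical v1 XML shape; a real scraper would also handle
# fetching per-user exports with authentication and rate limiting.
import xml.etree.ElementTree as ET
from collections import defaultdict

SAMPLE = """\
<posts user="example">
  <post href="http://example.com/" description="Example" tag="demo web"/>
  <post href="http://python.org/" description="Python" tag="python web"/>
</posts>"""

def index_posts(xml_text):
    """Return a dict mapping each tag to the list of bookmarked URLs."""
    index = defaultdict(list)
    for post in ET.fromstring(xml_text).findall("post"):
        # The v1 format stored tags as a space-separated attribute.
        for tag in post.get("tag", "").split():
            index[tag].append(post.get("href"))
    return dict(index)

index = index_posts(SAMPLE)
```

Once the raw XML is captured, an index like this is enough to rebuild tag pages; full-text search over descriptions would be a separate step.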

