

Google Reader API replacement, powered by Riak - charlieok
http://blog.superfeedr.com/google-reader-api-riak/

======
ivank
There is another ongoing backup of Google Reader's feed cache:
<http://www.archiveteam.org/index.php?title=Google_Reader> and the data is
landing at Internet Archive.

(If anyone has a dedicated server with a high transfer cap, we could _really_
use it for temporary storage and uploading to IA. Email in profile.)

For anyone else doing an independent backup, you can get more than 1000 items
by using ?r=n&n=1000 and following the continuation in the JSON response with
a ?c= URL parameter. And keep in mind that Google doesn't canonicalize feed
URLs for the same content, so you have to grab all of them.
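
In case it helps anyone scripting this, here's a rough sketch of that loop
in Python (the stream/contents path and the "items"/"continuation" field
names are my assumptions from the unofficial API docs, so verify against a
real response):

    from urllib.parse import quote
    import requests

    STREAM = "http://www.google.com/reader/api/0/stream/contents/feed/"

    def fetch_all_items(feed_url):
        """Page through a feed's cache, following continuation tokens."""
        items, cont = [], None
        while True:
            params = {"r": "n", "n": 1000}
            if cont:
                params["c"] = cont  # continuation from the previous page
            resp = requests.get(STREAM + quote(feed_url, safe=""),
                                params=params)
            resp.raise_for_status()
            data = resp.json()
            items.extend(data.get("items", []))
            cont = data.get("continuation")  # absent on the last page
            if not cont:
                return items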

~~~
aviv
Is the list of feed URLs you have collected thus far (without the cached feed
content) publicly accessible?

~~~
ivank
Not yet; I don't have a good way to provide query ability on my Postgres DB.
I also haven't yet imported a lot of files I have lying around.

You can email me for an rsync source that contains the work items we've
generated. Right now this is roughly 68.2M URLs, mostly on the big blog
platforms. This list should grow considerably.

------
bonzoesc
For the listing/deleting problems, have you looked at using LevelDB and
secondary indexes (2i) to make range queries cheaper?
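
Roughly, a 2i range query from our Python client looks like the sketch
below (2i needs the eLevelDB backend rather than Bitcask; treat the method
names as illustrative):

    import time
    import riak

    client = riak.RiakClient()            # defaults to localhost
    bucket = client.bucket("entries")

    # Tag each entry with its timestamp at write time.
    entry = bucket.new("entry-123", data={"title": "hello"})
    entry.add_index("updated_int", int(time.time()))
    entry.store()

    # Range query: keys for every entry updated inside a window.
    keys = bucket.get_index("updated_int", 1370000000, 1371000000)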

Disclosure: I work at Basho, makers of the Riak database.

~~~
jethroalias97
I have been using Riak's secondary indexes for my latest project and have
generally found them a joy to use. However, I do have to question the way
they are architected a bit.

Assuming you are using Riak's default configuration, each range query hits
1/3 of the cluster, which could get pretty hairy on large clusters that have
lots of requests. Also, there is no pagination, so if an index has a million
objects you'll have to be prepared to wait even if you only want the first
part of the query.

You could solve this by putting a sort value in the key and using a range
query, but this wouldn't work if you want the most recent items keyed by
time, because the items could be unevenly spaced back in time. Also, Riak,
like many databases based on Dynamo, thrives on fat data, which one would
think would favor storing lists. LevelDB is also supposedly slower than
Bitcask, the default backend, but I'm not sure whether this is still true.
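
For what it's worth, here is a sketch of the sort-value-in-key idea with
zero-padded reverse timestamps (my own toy encoding; note it still doesn't
bound how many results come back without pagination):

    MAX_TS = 10 ** 10  # comfortably beyond any unix timestamp we'll see

    def entry_key(feed_id, ts, entry_id):
        # Reverse timestamp: newer entries get smaller numbers, so the
        # newest items sort first lexicographically.
        return "%s:%010d:%s" % (feed_id, MAX_TS - ts, entry_id)

    # With eLevelDB the special $key index comes for free, so one feed's
    # entries, newest first, is a single key range scan:
    #   bucket.get_index("$key", "feedA:", "feedA:~")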

I've been trying to think of ways around these problems. A simple thought I
had was to cache the response as pages in Riak, although this introduces new
problems, like knowing how often to reset the cache: too often and I may as
well not have the cache; too infrequently and users get stale data. I would
also have to handle this with worker threads, because I wouldn't want the
odd 100th user to take a big latency hit. The database would also either
have to be continually polled, wasting CPU, or risk not having the data
cached when needed.
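
A sketch of the page-cache idea (the riak client calls are illustrative,
and rebuild_queue stands in for whatever worker queue I'd end up using):

    import time
    import riak

    MAX_AGE = 300  # seconds; the reset-frequency trade-off lives here
    cache = riak.RiakClient().bucket("page_cache")

    def get_page(page_key, rebuild_queue):
        cached = cache.get(page_key)
        if cached.exists:
            if time.time() - cached.data["written_at"] > MAX_AGE:
                rebuild_queue.put(page_key)  # refresh in the background
            return cached.data["page"]       # serve possibly-stale page
        rebuild_queue.put(page_key)          # cold miss: build it
        return None  # caller falls back to querying the index directly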

Another solution I've been considering is to write a secondary index layer
on top of Riak that uses a skiplist or btree to track where to add and
remove data once the index gets very large. This seems like a cool idea, but
it might be tricky to implement and to do conflict resolution on.

My last idea was the most ambitious: implement a separate distributed
database specifically for secondary indexes and range queries, one not bound
by Dynamo. The idea here is to have each node in charge of a segment of the
key space (as in Bigtable) and then have segments split and coalesce based
not only on size but also on the frequency of reads and writes, to handle
the bottleneck problem.

I was initially going to pair this with the Dynamo-style database
(<https://github.com/dbunker/Dynago>) I was experimenting with in Go on top
of LevelDB, but there is no reason it couldn't work with any
eventually-consistent, hyper-reliable key-value database to provide
lightweight secondary indexes. Having it constantly check the core key-value
database would mean it wouldn't have to be super reliable in its own right,
so it could be kept relatively simple.

But again, the simplest solution may ultimately be the way to go; I'm not
sure. All of these seem to have pretty big trade-offs.

~~~
bonzoesc
_Assuming you are using Riak's default configuration, each range query hits
1/3 of the cluster, which could get pretty hairy on large clusters that have
lots of requests. Also, there is no pagination, so if an index has a million
objects you'll have to be prepared to wait even if you only want the first
part of the query.

You could solve this by putting a sort value in the key and using a range
query, but this wouldn't work if you want the most recent items keyed by
time, because the items could be unevenly spaced back in time._

Pagination is coming soon; it's in riak_kv master already, but in buyer-beware
#yolo territory.
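
Over HTTP it looks roughly like this right now (parameter names could still
change before release, so treat this as a sketch):

    import requests

    url = ("http://localhost:8098/buckets/entries"
           "/index/updated_int/0/9999999999")
    params = {"max_results": 100}
    while True:
        body = requests.get(url, params=params).json()
        for key in body["keys"]:
            print(key)
        cont = body.get("continuation")  # absent once results run out
        if not cont:
            break
        params["continuation"] = cont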

 _LevelDB is also supposedly slower than Bitcask, the default backend, but I'm
not sure if this is still true._

Bitcask is faster when all the keys fit in memory: it's designed to load any
value with a single disk seek. LevelDB can't make that guarantee, but neither
can Bitcask with too many keys for available memory.

 _I've been trying to think of ways around these problems. A simple thought
I had was to cache the response as pages in Riak, although this introduces
new problems, like knowing how often to reset the cache: too often and I may
as well not have the cache; too infrequently and users get stale data._

Caching is one of the two hard problems in software engineering (along with
"naming things" and "off-by-one errors"), so good luck :) If you're not
opposed to running a separate service, Memcache is what I'd use.
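
With python-memcached the TTL does the invalidation for you; something like
this sketch, where build_page is a hypothetical callback that queries Riak
and renders the page:

    import memcache

    mc = memcache.Client(["127.0.0.1:11211"])

    def get_page(page_key, build_page):
        page = mc.get(page_key)
        if page is None:                      # expired or never cached
            page = build_page(page_key)       # hit Riak, render the page
            mc.set(page_key, page, time=300)  # expire after 5 minutes
        return page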

~~~
jethroalias97
Any thoughts on when the pagination goes live? I can't find any information
about it online. Memcache would be a good choice, but I am wondering: if I
have a few secondary indexes with over a million entries each, wouldn't
continually recreating this cache seriously bog down the cluster?

~~~
bonzoesc
I believe it's part of Riak 1.4, which is our next release; no date yet.

The number of entries in a 2i isn't going to bog down querying it any more
than lots of objects bog down LevelDB. Make sure your indexes have the right
content with the right cardinalities and it shouldn't be a problem.

If you want to drop in to #riak on freenode tomorrow (I'm in the
America/New_York time zone) I'm brycek in there.

------
JeffJenkins
SuperFeedr is awesome, and as a user of the service I found Julien great to
work with. I wish they had had this feature when I was working on my
multi-medium client (now defunct) a year and a half ago.

The only downside (and the reason I stopped using it) is that the pricing
model is per-item, so if you have frequently updating feeds it can get very
expensive. Although I never tried it, the pricing page does say they'll
match whatever it costs you to run your own feed system, since their cost
should be lower than yours.

~~~
julien
The second part will stay, while we hope to change the first part (per-item
pricing) to another scheme really soon!

~~~
JeffJenkins
Awesome. If I needed access to feeds again I'd definitely use superfeedr.

------
abalone
Interesting. I would have thought that the unidirectional, read-only nature
of the publisher-subscriber relationship would have made this simple for a
traditional SQL database with read replicas and a very basic partitioning
scheme. You assign workers to monitor feeds for updates, they update the DB,
and... done.

It looks like they may have _added_ some complexity with their feed parser
implementation, what they refer to as "supernoders": they don't lock
ownership of feeds during parsing, which allows concurrent supernoders to
get into race conditions while parsing the same feed.

And so it turns into another NoSQL example of employing conflict resolution to
fix things.

I wonder if they could just use a simple locking scheme to prevent more than 1
parser from parsing the same feed at the same time. This sounds simpler than
conflict resolution, to me.
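
Something like this sketch with Postgres advisory locks is what I have in
mind (assuming integer feed IDs; psycopg2):

    import psycopg2

    conn = psycopg2.connect("dbname=feeds")
    conn.autocommit = True

    def parse_if_unlocked(feed_id, parse):
        cur = conn.cursor()
        cur.execute("SELECT pg_try_advisory_lock(%s)", (feed_id,))
        if not cur.fetchone()[0]:
            return False  # another parser already owns this feed
        try:
            parse(feed_id)
        finally:
            cur.execute("SELECT pg_advisory_unlock(%s)", (feed_id,))
        return True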

~~~
bonzoesc
_I wonder if they could just use a simple locking scheme to prevent more than
1 parser from parsing the same feed at the same time. This sounds simpler than
conflict resolution, to me._

You may want to check out Aphyr's "Call Me Maybe" series of posts about
distributed databases: <http://aphyr.com/tags/jepsen>

The short version is that convergent conflict resolution seems intimidating
but works better than locking and synchronization.
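
For the feed case, convergence can be as simple as merging concurrent
writes by entry id; a union is commutative and idempotent, so replicas
agree no matter what order the writes arrive in (a sketch, assuming each
sibling is a dict of entries keyed by id):

    def merge_siblings(siblings):
        # Union the entry sets from every concurrent write; entries are
        # assumed immutable per id, so duplicates are identical.
        entries = {}
        for sibling in siblings:
            entries.update(sibling)
        return entries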

~~~
abalone
Ok, I did. I actually read the whole series, including the postgres 2-phase
commit I/O error case.

I don't see where it draws that conclusion at all.

The postgres post shows that even ACID databases have network error cases
that can leave your client in an indeterminate state. Fair enough. However,
the solution for this is... to restart your client once the network's back
up. All it needs to do is requery the DB to determine the truth.

Compare that to writing conflict resolution logic for all your data because
there is no single source of truth. This is considerably more complicated.

The series actually ends up recommending "the right design for the right
problem space." I am not making a general SQL vs. NoSQL argument here, but I
think in this case they may have taken a more complicated approach than
necessary.

