
Yes, mirroring as I described will fall down when the throughput of writes into the network exceeds what a single machine can write, because you will never get the opportunity to "catch up."

However, that is a lot of data to be writing into your graph, especially with how crazy fast writable media and storage are getting these days. YAGNI.

Now, on the theoretical side of things: "How would you create a linearly scalable graph database across machines?" I don't know how I would keep the same kinds of speeds for interesting graph traversals.

-----
You of course can do it, since all modern web search is graph-based. The real question, and one I don't have an answer to, is at what point the added performance of multiple nodes begins to outweigh the network latency they introduce.

Splitting things is the relatively easy part -- it's building the consistency model for multi-node systems that's tricky.

For read-write partitioning it's pretty simple -- each item is largely independent: it has its columns of data and is reachable through an index, so as long as you're just reading, writing, or updating items, it's no problem to hash from key to index and then use that index to locate the appropriate node.
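
To make that concrete, here is a minimal sketch of hash-based key-to-node routing; the node list, key format, and hash choice are all assumptions for illustration, not details of the system described above:

    import hashlib

    NODES = ["node-0", "node-1", "node-2", "node-3"]  # hypothetical 4-node cluster

    def node_for_key(key: str) -> str:
        """Hash the item key and map it to the node that owns it."""
        digest = hashlib.md5(key.encode("utf-8")).hexdigest()
        return NODES[int(digest, 16) % len(NODES)]

    # Reads, writes, and updates for a given item all route to the same node:
    print(node_for_key("user:42"))
    print(node_for_key("page:Main_Page"))

(Real systems usually prefer consistent hashing over plain modulo so that adding or removing a node doesn't remap every key.)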

The devil is of course in the details. If we see that barrier approaching we'll plan ahead for scaling out this way.

However, just doing some quick calculations: the English Wikipedia gets about 5.4 billion page views per month, which translates to roughly 2,100 per second. On my MacBook I get an average query time on our profile dataset for Wikipedia's graph of 2.5 ms per query -- meaning 400 requests per second. Extrapolating from there, scaling that up to 6x on a hefty server doesn't seem unreasonable, and that ignores the fact that we could cache the results (since most of them would be duplicate requests) to push that number even higher.
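
Spelling out that arithmetic (the 6x server factor is the assumption from the comment above, not a measurement):

    # Back-of-envelope check of the numbers above
    page_views_per_month = 5.4e9
    seconds_per_month = 30 * 24 * 3600
    views_per_second = page_views_per_month / seconds_per_month  # ~2,083/s

    query_time = 2.5e-3               # 2.5 ms per query on a MacBook
    laptop_qps = 1 / query_time       # 400 queries/s

    server_factor = 6                 # assumed speedup on a hefty server
    server_qps = laptop_qps * server_factor  # 2,400/s, just above the ~2,100/s needed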

So, yeah, it's an issue that's in the back of our heads, but not one we're currently dreading.

-----


On my MacBook I get an average query time on our profile dataset for Wikipedia's graph of 2.5 ms per query

Is the dataset changing (being written to) while you make those queries?

-----


No, not in our profile set, but because of our locking system (where readers don't block writers and writers don't block readers) I believe it should hold up well under moderate write load (for recommendation applications we assume that writes are infrequent relative to reads).
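
The comment doesn't say how that locking works, but one common way to get "readers don't block writers and writers don't block readers" is copy-on-write snapshots with an atomic reference swap; here is a toy sketch under that assumption (all names hypothetical):

    import threading

    class SnapshotStore:
        # Readers always see an immutable snapshot and never take a lock.
        # Writers copy the snapshot, modify the copy, and swap the reference,
        # so a write in progress never blocks a read (or vice versa).

        def __init__(self):
            self._snapshot = {}                   # treated as immutable by readers
            self._write_lock = threading.Lock()   # serializes writers with each other only

        def read(self, key):
            return self._snapshot.get(key)        # lock-free read of the current snapshot

        def write(self, key, value):
            with self._write_lock:
                new_snapshot = dict(self._snapshot)
                new_snapshot[key] = value
                self._snapshot = new_snapshot     # atomic reference swap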

Still an untested assumption, but the system is architected to hold up well in those situations and I think it will scale reasonably there.

-----



