
Actually, it's well understood how to scale an RDBMS horizontally (sharding and partitioning). I simply asked whether there is a comparable strategy for this graph db.

Also, most websites probably don't need a graph database. But the few that do will likely also need the scalability - at least I cannot imagine many interesting web applications where your one beefy box could possibly scale to a significant user base (unless you're talking the zSeries category of beefy).

Of course there are still many interesting applications outside the public interweb.




Assuming a simple system, a single box could average around 5k requests per second. That's 18 million pages per hour, and just under half a billion pages per day. That's around 2.5 pages per month to every person on the planet, but say you get 100 million users - that's 150 pages per user per month. Now if you think you are going to get to that point in the next year then it's an issue, but IMO 99% of people are far from that point.
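
For what it's worth, here's the arithmetic behind those figures, using the same rough rounding (half a billion pages/day, ~6 billion people); the 5k requests/second is itself an assumption, not a measurement:

    # Back-of-envelope for the figures above (5k req/s is an assumption).
    reqs_per_sec = 5_000
    per_hour = reqs_per_sec * 3_600              # 18 million pages/hour
    per_day = reqs_per_sec * 86_400              # ~432 million, "just under half a billion"
    per_month = 500_000_000 * 30                 # using the rounded half-billion/day figure
    print(per_month / 6_000_000_000)             # ~2.5 pages/month per person (6B people)
    print(per_month / 100_000_000)               # ~150 pages/month per user at 100M users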

Granted, the real question becomes storing data, not handling that number of requests, but a database that just knows where a bunch of dumb files are scales really well. (If you look into it, this is Facebook's basic approach.)
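
A toy illustration of that "small index over dumb files" pattern - the names and layout here are made up for the sketch, not Facebook's actual design:

    # Sketch: a tiny metadata index maps a key to (node, path); the heavy bytes
    # live in dumb files served directly by storage boxes, so the index itself
    # stays small and fast.
    index = {}  # key -> (storage_node, path); in practice this lives in a real DB

    def put(key, node, path):
        index[key] = (node, path)

    def locate(key):
        node, path = index[key]
        return "http://%s/%s" % (node, path)   # the web tier fetches the blob directly

    put("photo:42", "files-03.internal", "vol7/42.jpg")
    print(locate("photo:42"))                  # http://files-03.internal/vol7/42.jpg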

-----


I wonder where you're getting those figures from. The article stated about 100 queries/sec on a MacBook without any write load (if I interpreted that correctly). If 5k/sec is sustainable on a single box, including writes, then yes, that will probably go a long way.

I'd venture the guess that you'd be talking a quite different budget than a bunch of pizza boxes in a horizontal setup, though. The SAN to handle 5k IOPS alone will set you back an interesting amount (even more so when you consider mirroring, which you'd probably want at that scale). I'd also be worried about the network - Gbit/s is probably not going to cut it at that rate anymore.

So, all in all, this is precisely why I asked about horizontal scalability. A setup of 5 machines that handle 1,000 reqs/sec each is usually cheaper than a single machine that handles all 5,000/sec.

-----


If his MacBook can handle the entire database, I don't see the need for a SAN (yet). I don't know how much benefit there would be to increasing his system's RAM, but if the database fits on a laptop's HDD, then you can probably get most of it into RAM, which would make things insanely faster. My guess is that by upgrading to a $10,000 box with 64GB of RAM and a RAID 1+0 of SSDs he could probably get a 50x increase in speed, which would be ~5k operations per second. Granted, he might develop issues with network bandwidth or some other bottleneck, but even just averaging 1k/second represents huge revenue potential relative to the cost of that system, and a back-of-the-envelope calculation should give him a rough estimate of its value.
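
Spelled out, the sizing guess looks like this (every input is an assumption - the baseline comes from the article's read-only MacBook figure, the 50x is a guess):

    # Rough sizing arithmetic for the hypothetical upgraded box.
    baseline_qps = 100        # MacBook figure from the article (read-only)
    assumed_speedup = 50      # guess for 64GB RAM + RAID 1+0 SSDs
    projected_qps = baseline_qps * assumed_speedup   # ~5,000 ops/second
    conservative_qps = 1_000                         # the "even just 1k/second" fallback
    print(projected_qps, conservative_qps)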

PS: Upgrading to 10Gb Ethernet is not really that expensive nowadays if he is only linking a few web servers to two databases.

EDIT: To give you some idea what flash can do: http://advancedstorage.micronblogs.com/2008/11/iops-like-you... (Granted, it's a stupid video, but that's 150,000 read IOPS, 80,000 write IOPS, and 800MB/s of bandwidth on two PCIe cards in '09/'10, with Fusion-io doing the same type of thing today.)

-----


If his MacBook can handle the entire database, I don't see the need for a SAN (yet).

The SAN comes into play when a single box can't deliver the IOPS anymore - remember, it's not just a matter of adding SSDs. At those rates you start hitting controller and bus limits. Likewise, a saturated 10Gb Ethernet link causes a significant interrupt rate (older cards would bottleneck on a single core) that often exposes interesting corner cases in your OS and hardware of choice.

I'm not saying it's not doable, and I know what SSDs are capable of (we just fitted a server with X25s). I'm just saying that your estimate of $10,000 is very optimistic; add a zero and you'll be closer to home. That's because I still think you'd definitely be talking a Sun Fire X4600-class machine and a SAN.

Anyway, this is all speculation. Wheels made some reasonable statements that they have it on their radar, and I'm definitely looking forward to some real-world benchmarks with a concurrent write load.

-----


So, assuming that by "scaling" in the web space you mean "be able to quickly respond to as many reads and writes as my app generates" and not "handle large volumes of data", then you can do what LinkedIn does, which is to just copy the exact same graph to multiple machines and rely on the fact that eventual consistency is good enough. You make things immediate for the user most concerned with immediate feedback (make them sticky to the graph their write updated), and for everybody else a couple minutes of lag is no biggie. This is even more true in retail and other applications.
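
A minimal sketch of that sticky-read idea, assuming full graph replicas and a simple per-user pinning rule (the replica names and time window are made up; this is not LinkedIn's actual code):

    import random
    import time

    # Reads go to any replica; a user who just wrote is pinned to the replica
    # that took the write, so they see their own change immediately. Everyone
    # else tolerates a little replication lag.
    REPLICAS = ["graph-1", "graph-2", "graph-3"]   # hypothetical replica names
    STICKY_WINDOW = 120                            # seconds of stickiness after a write
    last_write = {}                                # user_id -> (replica, timestamp)

    def route_write(user_id):
        replica = random.choice(REPLICAS)              # write lands on one replica,
        last_write[user_id] = (replica, time.time())   # then replicates to the rest
        return replica

    def route_read(user_id):
        entry = last_write.get(user_id)
        if entry and time.time() - entry[1] < STICKY_WINDOW:
            return entry[0]                        # read-your-own-writes
        return random.choice(REPLICAS)             # lag is no biggie for everyone else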

-----


Scaling in the web space usually means both: many reads/writes and large volumes of data.

Yes, mirroring may work to a point but eventually falls down in write-heavy applications. Ideally you want something that you can just add machines to and have it scale near-linearly. I'm not sure that's entirely achievable for graph search, but that's where my question was heading.

-----


Yes, mirroring as I described will fall down when the throughput of writes into the network exceeds the ability of a single machine to just write, because you will never have the opportunity to "catch up."

However, that is a lot of data to be writing into your graph, especially with how crazy fast writable media and storage is getting these days. YAGNI.

Now, on the theoretical side of things.. "How would you create a linearly scalable graph database across machines?" I don't know how I would make it so that I could maintain the same kinds of speeds for interesting graph traversals.

-----


You can of course do it, since all modern web search is graph-based. The real question, and one I don't have an answer to, is at what point the additional performance of multiple nodes begins to trump the induced network latencies.

Splitting things is the relatively easy part -- it's building the consistency model for multi-node systems that's tricky.

For read-write partitioning it's pretty simple -- each item is largely independent; it has its columns of data and is handled by an index, so as long as you're just reading/writing/updating items, it's no problem to hash from key to index and then use that index to locate the appropriate node.
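
A minimal version of that key-to-node lookup (plain modulo hashing for illustration; a real deployment would more likely use consistent hashing so adding a node doesn't remap most keys):

    import hashlib

    # Read/write partitioning: hash the item key, map the hash onto the node
    # list, and send the read/write/update for that item there.
    NODES = ["db-0", "db-1", "db-2", "db-3"]   # hypothetical shard names

    def node_for(key):
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return NODES[h % len(NODES)]           # modulo placement; consistent hashing
                                               # avoids mass remapping when nodes change

    print(node_for("user:1234"))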

The devil is of course in the details. If we see that barrier approaching we'll plan ahead for scaling out this way.

However, just doing some quick calculations: the English Wikipedia looks like it gets 5.4 billion page views per month, which translates to about 2,100 per second. On my MacBook I get an average query time on our profile dataset for Wikipedia's graph of 2.5 ms per query -- meaning 400 requests per second. Extrapolating from there, scaling that up 6x on a hefty server doesn't seem unreasonable, and that ignores the fact that we could go further by caching results (since most requests would be duplicates) to push that number even higher.
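
The same numbers, spelled out (the 6x server factor and the cache hit rate are assumptions from the comment, not measurements):

    # Arithmetic behind the Wikipedia comparison.
    page_views_per_month = 5_400_000_000
    seconds_per_month = 30 * 24 * 3_600
    views_per_sec = page_views_per_month / seconds_per_month   # ~2,100/s

    macbook_query_ms = 2.5
    macbook_qps = 1_000 / macbook_query_ms                     # 400 queries/s

    server_factor = 6            # assumed headroom of a hefty server over a MacBook
    server_qps = macbook_qps * server_factor                   # ~2,400/s, before any
                                                               # caching of duplicate queries
    print(round(views_per_sec), macbook_qps, server_qps)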

So, yeah, it's an issue that's in the back of our heads, but not one we're currently dreading.

-----


On my MacBook I get an average query time on our profile dataset for Wikipedia's graph of 2.5 ms per query

Is the dataset changing (being written to) while you make those queries?

-----


No, not in our profile set, but because of our locking system (where readers don't block writes and writes don't block reads) I believe it should hold up well under moderate write conditions (in the case of recommendation applications, we assume that writes are infrequent relative to reads).

Still an untested assumption, but the system is architected to hold up well in those situations and I think it will scale reasonably there.
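
For illustration only, one common way to get the "readers don't block writers" property is copy-on-write snapshots; this is not necessarily how their locking system works, just a sketch of the behaviour being described:

    import threading

    # Readers grab an immutable snapshot reference and never take the write lock;
    # a writer builds a new version and swaps the reference in atomically.
    class CowGraph:
        def __init__(self):
            self._edges = {}                  # node -> frozenset of neighbours
            self._write_lock = threading.Lock()

        def snapshot(self):
            return self._edges                # readers: no lock, just a reference

        def add_edge(self, a, b):
            with self._write_lock:            # writers serialize among themselves
                edges = dict(self._edges)     # copy-on-write of the top-level map
                edges[a] = frozenset(edges.get(a, frozenset()) | {b})
                self._edges = edges           # atomic reference swap

    g = CowGraph()
    g.add_edge("alice", "bob")
    print(g.snapshot().get("alice"))          # frozenset({'bob'})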

-----



