    The database operations on the nested data were 
    just taking too much processing power.
Given that this is Rails, and given that there's certainly SQL involved here, I just have to ask (and I know the answer is probably "yes") -- did you try implementing nested sets?

I ask because my experience has been that nested data (the kinds of nested data I've been handed, anyway) isn't a performance problem. Selects and updates of nested data can be as responsive as any range query on flat data, and when it comes to the performance of inserting new nodes, or deleting or moving subtrees, there are a lot of options depending on what you want to optimize for (like spreading the values out across the full supported integer range and periodically re-packing; that way, inserting leaves and any kind of deletion is as fast as on flat data).
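
For concreteness, here's the kind of layout I have in mind -- a minimal sketch with made-up table and column names (the id 42 below is just a placeholder):

    -- Each node stores an interval (lft, rgt); a descendant's interval
    -- nests strictly inside its ancestor's, so selecting a whole subtree
    -- is a single index range scan.
    CREATE TABLE nodes (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        lft  BIGINT NOT NULL,
        rgt  BIGINT NOT NULL
    );
    CREATE INDEX nodes_lft_rgt ON nodes (lft, rgt);

    -- Fetch node 42 and all of its descendants:
    SELECT child.*
      FROM nodes AS parent
      JOIN nodes AS child
        ON child.lft BETWEEN parent.lft AND parent.rgt
     WHERE parent.id = 42;

    -- If lft/rgt are spread across the BIGINT range instead of packed
    -- as 1..2n, inserting a leaf just means picking unused values inside
    -- the parent's gap (no mass renumbering); a periodic job re-packs
    -- the intervals when the gaps get tight.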

I'm curious what your nested data looked like. Sorry to get distracted by a minor point, but I'm intrigued! When I'm developing, my instinct is to only worry once I start seeing data that has to be a graph and can't be represented as a tree -- as long as it's genuinely hierarchical, I assume speed won't be much of a problem. Now I'm wondering if that intuition will bite me.




    did you try implementing nested sets?

The big thing we needed to do was a rolling archive into progressively broader timeframes. As metrics come in, we keep every single datapoint for the first 6 hours. After 6 hours, data gets rolled up into a 5-minute archive: each datapoint in the 5-minute archive contains the avg, min, max, etc. of all the points that fell within that 5-minute span.

The archiving carries on through progressively broader windows as time goes on -- a 10-minute archive, a 1-hour archive, etc. This progressive aggregation is the only sane approach to storing the massive amount of data we get, and it reflects the need for higher resolution on recent events -- it's rare that you need to see what happened at one exact minute 6 months ago.
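
In rough SQL terms, one rollup step looks something like the sketch below (simplified for illustration -- the table and column names aren't our real schema, and it assumes recorded_at is integer epoch seconds):

    -- Roll raw points older than a cutoff (e.g. now minus 6 hours) into
    -- 5-minute buckets; recorded_at / 300 * 300 floors a timestamp to
    -- the start of its bucket.
    INSERT INTO archive_5min
        (metric_id, bucket_start, avg_value, min_value, max_value, num_points)
    SELECT metric_id,
           recorded_at / 300 * 300 AS bucket_start,
           AVG(value), MIN(value), MAX(value), COUNT(*)
      FROM raw_datapoints
     WHERE recorded_at < :cutoff          -- :cutoff is a bind parameter
     GROUP BY metric_id, recorded_at / 300 * 300;

    -- Then drop the raw rows that were just rolled up; this constant
    -- insert/delete cycle is the churn mentioned below.
    DELETE FROM raw_datapoints WHERE recorded_at < :cutoff;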

It was this progressive archiving that bit us -- specifically, DB performance degraded over time under the constant insertions and deletions. Nested sets didn't/wouldn't help with the aggregation costs or with the degradation from churn during the archiving process.

Hope this helps -- I'm going to try to do a more technical post on this in the future.


Huh, that's interesting. I guess I've never worked with really "churny" data like that before.

Sure, sometimes I've had cleanup/integrity/whatever tasks that run every few minutes, but the amount of records affected has always been pretty small.

That's an interesting conundrum. See, this is why we're all messing around with Cassandra et al.; sometimes, in SQL, the answer is "don't do that," because it'd be too hard to tailor the DB's behavior to suit your needs. Although frankly, with a design that deletes and updates a significant percentage of the records in the system on a set schedule, I can see any number of storage solutions having trouble.

That's interesting. I'll be thinking about this at "work" today. ;) I have a bunch of comments, but they're of the half-baked "oh, what about this?" variety.



